I will freely admit that I have digital trust issues. In fact, I will go so far as to say that if you don't have these issues as well, then you are either not in the software engineering field, or you are being willfully ignorant. Allow me to explain my terminology and position.
Many incredible computing advances are being made every day. The latest piece of "that is so freaking cool" news is autonomous cars. Naturally, California is at the forefront of this emerging field, with Google as the star of the show. I'm very excited, as a geek, to see this technology advance to the point of being truly usable and useful. But personally, I'm terrified of these things becoming popular. The main reason is that I know what kind of people wrote the software that runs those cars: software engineers. And I have a hard time trusting their code. The sad truth is that most software engineers out there in the world are bad at their jobs. Some are too lazy; some are unable to solve the logical problems they face every day; some are just not passionate about what they do. That's right, even that last one is a big problem: when a good software engineer finds a problem with code tangential to their current task, they will either fix it or at the very least note and report it. This is how the overall quality of software improves on a day-to-day basis. However, software engineers who are not passionate about what they do will simply ignore the problem they noticed and assume that someone else will find it and take care of it. And that is how bugs creep in.
I suppose my digital trust issues are a reflection of my corporate trust issues: big corporations tend to look for ways to make (and save) as much money as possible in the short term. This is why they tend to outsource development to the lowest bidders and, ultimately, end up regretting those decisions when the software comes back half-baked, deadlines are missed, and their clients and customers are unhappy. Incidentally, this is why I am very picky about my employers - I will not work for such "lowest bidder" shops.
Other companies have their own in-house software development teams. Unfortunately, unless these companies are technically oriented and relatively small, they tend to hire the bad software engineers. An in-depth discussion of the reasons for this trend is beyond the scope of this post. Suffice it to say that HR departments are usually not trained to detect the intricacies of the software engineering mindset and weed out the bad from the good; really good software engineers tend to stay away from jobs they consider boring (even if those jobs are very important); and sometimes the bad software engineers are actually trained to look appealing to unsuspecting companies (résumé keywords and such).
So, here is the logic behind my digital trust issues. There are good software engineers and bad software engineers. Big corporations tend to, in one way or another, use bad software engineers to write their code. This code ends up on production systems, from corporate portals to online banking websites to SCADA systems. When such systems become popular (or critical) enough, they start attracting hackers. The worse the code on these systems, the easier it is for hackers to exploit it. And I don't even want to think about what can happen if an autonomous car is hacked. The possibility of remotely hacking future "connected" or networked cars is scarier yet. You get the idea. And if you don't, watch some Ghost in the Shell. It paints a pretty realistic picture of a future world of connected machines and connected humans - and the scary things that hackers could potentially do in such a world.
Along the same lines, this is also why I've yet to enable auto-pay on any of my bills. Giving multiple companies my banking information to store and use every month to automatically withdraw funds sounds like a recipe for disaster. If even one of those companies has its data compromised, there is suddenly a very real possibility of my bank account being emptied. You can usually escape liability for fraudulent credit card transactions, but it's not nearly as painless with checking and savings accounts. Okay, so another reason I don't enable auto-pay is that it forces me to actually look at my bills and check for discrepancies; otherwise, I just wouldn't bother looking at them at all. But the point stands - the code running all these systems is of unknown quality.
What can be done to improve this situation? Honestly, I don't know. There are automated systems like McAfee Secure that scan for vulnerabilities remotely and then display a trust seal on their clients' websites to let end users know that everything is okay. Of course, such systems can only detect fairly basic issues, and only to a limited extent; poking randomly at the public endpoints of a system can only yield so much information. To be reasonably sure of code quality, you have to actually look at the code itself. But what company is going to let random people look at its source code? I suppose one option would be to bring in a trusted, unbiased third party that specializes in source code analysis. But I don't know of any such entities, and I doubt that many companies would hire them if there were a chance of their code being publicly labeled insecure or otherwise bad.
This is an interesting situation, and I don't have any amazing revolutionary ideas to improve it. But until something radical happens, my digital trust issues will not go away.