This is a thought exercise as much as anything. I do think that software security is massively important and more should be done about it.
What if security in software isn't really an issue?
Yes, a few big names have been in the news recently but they must be the exception or it wouldn't be newsworthy. It's not like everyone is getting hacked all the time. Is it?
I don't worry too much about the security of my car. It doesn't have a fancy alarm and satellite tracking. There are many other cars that are easier to steal or are worth more to someone who wanted to steal a car. Isn't it the same with software?
Even if I, as a business owner, suffered a security breach, there aren't any real consequences.
- There may be some negative press for a short time--but all press is good press, isn't it?
- There may be a (relatively) small financial consequence in terms of fines or legal bills.
- No one of any note who has been hacked previously has suffered any major, long-term negative consequences.
It's said that education is the answer to solving software security issues but where's the motivation?
- If there's no real consequence to security breaches, then why spend time and money educating people to prevent them?
- If security isn't an issue, then we can get more developers into the industry faster as that's one less thing new developers have to be taught.
- It's not just a developer education problem. Even if developers knew how to make more secure software, they won't always be given the time and resources to do so if their superiors don't think it's important, so you need to persuade the whole business of the importance of software security.
Trying to sell a solution to a technical problem (software security) that someone might not have yet, to a non-technical stakeholder (someone higher up in the business than a developer), can be tricky. In trying to persuade them to fix a problem they don't have now, you're selling risk/insurance.
The pitch amounts to: "Let us spend more time now to prevent an issue that we might have at some point in the future."
This may or may not work based on political, financial or other business constraints.
Then there are issues of accountability, liability and due diligence.
If there is a security breach who's responsible? The developer? Or the more senior person(s) in the company who didn't ensure developers had the time, knowledge and resources to do what's best for the company?
There's also no way to be certain you're secure. So how much effort should be put into having more security? When do you stop spending more time and money to increase security, for an uncertain return?
Even the systems we have in place to try and ensure some level of security aren't brilliant. A few years ago (noting that things may have changed in the intervening time) I was working on a website that had to go through a PCI compliance check. I was shocked at how little the check actually covered. Yes, it meant the site wasn't doing certain bad things, but it didn't mean it was doing only good things. The checks left unexamined a lot of what I saw as possible security vulnerabilities--which I ensured were addressed.
Let's just forget about all this though. Software security doesn't really matter as there are no real consequences for the business, and the only people who seem to talk about it are developers pointing out what they think other developers didn't do or did wrong.
But Wait, Could Capitalism Solve This Problem for Us?
Education (of developers) is largely claimed to be the solution here, but is capitalism, not education, the way to get change? If more companies get hacked, then insurance claims--and therefore premiums--will go up, eventually to a level that makes a difference to the company. At that point there will be incentives for being more secure, and even for proving it. If a company could prove it was serious about preventing software security issues, it might then be able to get a discount on the related insurance.
What if a business could get cheaper insurance for software-related security issues by signing up to a service from a security company, which would continuously check for vulnerabilities and breaches?
- The insurers would benefit if they didn't find anything as they'd be less likely to face related claims.
- The insurers would benefit if they did find something, as they could raise the premium, and hopefully the company could implement a fix before it was exploited and so not have to make a claim.
- The company would benefit if no vulnerabilities were found, as they'd pay lower premiums. Plus their user data and business continuity would be protected.
- The company would benefit if something was found, as they'd have the opportunity to fix it before it was exploited.
Those doing the testing would be incentivized to find exploits and disincentivized from missing something that is later exploited by another party.
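The incentive structure above can be sketched as a toy expected-cost calculation. All of the figures here are hypothetical, purely to illustrate why the insurer can come out ahead whether or not testing finds something:

```python
# Toy model of the insurer's incentives under continuous security testing.
# Every number below is made up for illustration; this is not actuarial advice.

def expected_insurer_cost(premium, breach_probability, claim_payout):
    """Expected net cost to the insurer for one policy period:
    expected claim payout minus the premium collected."""
    return breach_probability * claim_payout - premium

# No continuous testing: higher chance of an unnoticed, exploitable flaw.
baseline = expected_insurer_cost(premium=50_000,
                                 breach_probability=0.10,
                                 claim_payout=1_000_000)

# Testing finds nothing: breach risk drops, so a premium discount still
# leaves the insurer better off than the untested baseline.
tested_clean = expected_insurer_cost(premium=40_000,
                                     breach_probability=0.02,
                                     claim_payout=1_000_000)

# Testing finds a vulnerability: the premium goes up until it's fixed,
# and the fix itself lowers the chance of a claim.
tested_found = expected_insurer_cost(premium=60_000,
                                     breach_probability=0.03,
                                     claim_payout=1_000_000)

print(baseline, tested_clean, tested_found)
```

With these (invented) numbers the untested policy carries a positive expected cost for the insurer, while both tested scenarios carry a negative one--which is the whole argument: the insurer benefits from testing regardless of the outcome.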
Could this be done now?
Unfortunately, I think not. It depends on the cost of having security experts work for the insurers, paid for (directly or indirectly) by the companies taking out insurance.
Sadly, I think we'll need more exploits, pushing up insurance premiums further, before this becomes financially viable.
Things look like they will get worse before they get better.