So, recently, I was asked to do a small amount of consulting work for a group that had an application they were just about to release. The application was a straightforward, data-driven web application, essentially providing read access to a fairly large dataset. It was a single-page application, implemented in Angular, with a REST API serving the AJAX calls from the Angular components.
The project manager had arranged to have the application scanned for security vulnerabilities. Certainly a good thing, and more than many managers do today. After the scan, she told me the application had only a couple of small issues, which they'd fixed, and it was ready to be released to production. It was secure.
Scanning is not sufficient. Look, I get it: security may be more on our minds today, but nobody's getting raises for releasing the most secure application. And if it isn't secure, and it's compromised? Let's be honest here: the timeframe between the release of an application and its eventual compromise is usually long enough that the original engineers aren't around anymore anyway. So, from an engineer's or manager's point of view, security is something that, at best, has no real repercussions for them, and at worst, probably doesn't either. So they really don't want to spend time on it.
I get it.
But look, we really need to be responsible for what we engineer and the companies we work for, and we need to do real due diligence with systems we create. Security, today and into the future, is a big part of that. So doing things like claiming that a scan report is sufficient security validation is just, well, either incompetent or unethical. Take your choice.
The scan report didn't flag any of this as problematic, so the manager felt the application was ready to go. It wasn't. It may have been close, but it wasn't ready for release.
Pentesting isn't either. The thing is, penetration testing isn't a reliable barometer of system security either. Sure, it's somewhat in vogue right now, but at the end of the day it just allows a team to examine some parts of the external attack surface of a system (depending on the rules of engagement, of course). And your pentesters aren't going to test everything, either. They're limited by their individual skill sets and biases, just like anyone else. Pentesting, really, should just be a confirmation that you, as a software engineer, did your job.
Necessary, but again, not sufficient.
Application security is holistic. So what should you do? Well, you need to integrate security into your development process from the beginning. This means security input into your architecture, into your code as it's written, into code reviews, and yes, pentesting and fuzzing. The security analysts need to be part of the development team. This lets them find potential security flaws early, when they're easier, or even possible, to fix. It also helps the engineers and security staff build stronger relationships. When I'm asked to review a system just prior to release, I'm not a very popular guy. I'm used to managing that resentment, but it doesn't make my job any easier.
So what should you do? Take security seriously, get analysts involved early, continuously monitor your code for security flaws, and enjoy your smooth release to production.
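To make the "continuously monitor" part concrete, here's a minimal sketch of what that can look like in practice: a CI job that runs a security check on every change and fails the build when it finds a problem. It assumes a GitHub Actions setup and an npm-based project, since the application described above was an Angular SPA; neither tool is specified in the original engagement, so treat this as an illustration of the idea, not a prescription.

```yaml
# Hypothetical CI workflow: run a dependency audit on every push and pull request.
name: security-checks
on: [push, pull_request]

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Fail the build if any dependency has a known vulnerability
      # of high severity or worse.
      - run: npm audit --audit-level=high
```

The specific tools don't matter; what matters is that the check runs on every change, so flaws surface days after they're introduced rather than days before release.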