Developer-Led Code Security: Why False Positives Are Worse than False Negatives
Too many false positives make it too hard to detect real vulnerabilities.
Most SAST tools target security compliance auditors. Their goal is to raise an issue for anything even remotely suspicious. Those tools have no fear of false positives because the auditors will figure it out; after all, it's the auditors' job to sort the wheat from the chaff and the signal from the noise. But the industry should rally around efforts to kill all that noise. Developers have little tolerance for crying wolf. SAST providers should listen to developers and follow a guiding principle: prefer "reasonable" false negatives over raising false positives.
What does that mean in practical terms? Well, let's play with some numbers. Say you have a codebase with 12 vulnerabilities: 12 things that absolutely need fixing. A typical SAST analysis might raise 500 issues in total, and the auditors will then spend X weeks sorting through them to bring you, the developer, the audit results maybe a month or so after you've moved on to other code.
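To make those (hypothetical) numbers concrete, here is a minimal sketch of what they mean in precision terms, assuming all 12 real vulnerabilities are somewhere in the 500 reported issues:

```python
# Hypothetical figures from the scenario above: 500 issues raised,
# only 12 of which are real vulnerabilities.
real_vulnerabilities = 12
reported_issues = 500

# Precision: the fraction of reported issues actually worth a developer's time.
precision = real_vulnerabilities / reported_issues
noise = reported_issues - real_vulnerabilities

print(f"Precision: {precision:.1%}")  # Precision: 2.4%
print(f"Noise: {noise} of {reported_issues} reports")  # Noise: 488 of 500 reports
```

In other words, under these assumptions, roughly 49 out of every 50 reports are noise, which is exactly the workload that gets pushed onto auditors or developers.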
Not an appealing scenario, is it?
So okay, let's eliminate the lag time by pivoting to developer-led code security. Now, instead of taking weeks to sort through the SAST report, the auditors dump it on your desk and expect you, the developer, to find and fix the true vulnerabilities. This scenario is even worse, both for you and for the security of the codebase. Because let's be honest: it won't take many false positives before you throw up your hands and declare the whole thing a waste of time. Then nothing gets fixed.
This is why it's preferable to accept reasonable false negatives. Instead of raising 12 real vulnerabilities that are ultimately lost and ignored in a sea of false positives, it's better to raise only 10 real vulnerabilities that actually get fixed, even if that means missing the other two.
Don't misunderstand. SAST providers shouldn't miss those other two (theoretical!) issues out of sloppiness or laziness. In implementing a rule, you sometimes have to choose between catching every single issue, which also sweeps a few false positives into the net, and tuning the rule's sensitivity down to eliminate false positives, which means missing a few real issues at the same time. Ultimately, it's about striking a delicate balance, and for developers it's generally better to lean toward false negatives.
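The sensitivity trade-off described above can be sketched with a toy example. The rule names, confidence scores, and threshold values here are entirely made up for illustration; real SAST engines tune rules in more sophisticated ways:

```python
# Toy model of rule sensitivity: each candidate finding has a confidence
# score, and the analyzer only reports findings at or above a threshold.
# Raising the threshold removes false positives but can also drop real issues.
findings = [
    # (rule name, confidence score, is it a real vulnerability?)
    ("sql-injection",  0.95, True),
    ("path-traversal", 0.80, True),
    ("weak-hash",      0.55, True),   # a real issue the analyzer is unsure about
    ("taint-guess",    0.60, False),  # a false positive with a higher score
    ("style-nit",      0.30, False),
]

def report(threshold):
    """Return (false positives raised, real issues missed) at this threshold."""
    raised = [(rule, real) for rule, score, real in findings if score >= threshold]
    false_positives = sum(1 for _, real in raised if not real)
    total_real = sum(1 for _, _, real in findings if real)
    false_negatives = total_real - sum(1 for _, real in raised if real)
    return false_positives, false_negatives

# Low threshold: every real issue is caught, but a false positive slips in.
print(report(0.5))  # (1, 0)

# Higher threshold: no false positives, but the weak-hash issue is missed.
print(report(0.7))  # (0, 1)
```

The point of the sketch: there is no threshold here that yields both zero false positives and zero false negatives, so the vendor has to pick which failure mode to accept.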
It's an issue of credibility. As I said earlier, developers have no patience for false positives. SAST providers should make sure that when they raise an issue, there's something to fix. That doesn't mean SAST platforms should never raise a false positive. But their mission should be to give developers an accurate SAST analysis and to kill the noise. That makes all the difference.
Published at DZone with permission of G. Ann Campbell. See the original article here.