WikiLeaks, Vault 7, and Vulnerability Disclosure: Is It Blackmail?
Is WikiLeaks blackmailing the tech industry? Learn how common-sense security techniques can protect your company from such vulnerabilities.
If you're hit by ransomware, you're presented with a difficult choice. Do you pay the ransom and hope the crooks follow through and return your ransomed data and files? Or do you refuse to pay, and say goodbye to that data forever?
Tech companies including Microsoft, Google, Apple, and Samsung are facing a similar dilemma after WikiLeaks published information allegedly showing that the CIA possesses hacking tools that can be used to exploit vulnerabilities in those companies' products. As Motherboard reported, WikiLeaks approached those companies with an offer to share technical details about the CIA's "Vault 7" hacking tools, but only if the companies meet WikiLeaks' demands. That sounds a lot like holding files to ransom.
Now, we don't know the full details of the alleged vulnerabilities. Are these zero-days that can be exploited today? How old are the vulnerabilities? Has the CIA, or anyone else, used this information? WikiLeaks also demanded that the companies patch the vulnerabilities within 90 days. Would WikiLeaks expose the information when the 90 days run out? The lack of answers to these questions makes it difficult to assess the gravity of the situation. But it could be the start of an ugly trend if others follow WikiLeaks' lead and hold back information about vulnerabilities in a vendor's products in order to blackmail them.
The issue of vulnerability disclosures has been a controversial one for some time. Even Google, whose Project Zero research team has discovered scores of vulnerabilities in a wide range of products, has courted controversy with its policy of giving vendors 90 days to respond to a vulnerability before going public.
Just last month, Project Zero released details of a critical remote code execution vulnerability in the Windows Graphics Component GDI library before Microsoft had released a patch. Project Zero gave Microsoft exactly 90 days and then automatically "derestricted" details of the vulnerability on February 14 (Microsoft only issued a patch on March 14, a full month later). Google also exposed another Microsoft vulnerability, in the Edge browser and Internet Explorer, before Microsoft had released a patch. Microsoft has repeatedly stated that it does not agree with Google's disclosure policy and believes in "coordinated" vulnerability disclosure.
It is reasonable to expect the vendor to maintain an open line of communication with bug finders. But only the vendor can evaluate and recommend a remediation timeframe. ISO, the International Organization for Standardization, has published recommendations on vulnerability disclosure (ISO/IEC 29147) that may help companies craft policies for responding to disclosures.
What about bug bounty programs, which reward legitimate researchers who work with the vendor? Unfortunately, you can't count on security researchers to do the right thing and give vendors time to fix problems before releasing information about a vulnerability in their software products. Even companies with a generous bug bounty program may have trouble getting researchers to play by the rules. And although bug bounty programs and responsible disclosure policies can help mitigate some bugs that have made it into production, it's more of a band-aid approach to security that can leave software vendors and their customers dangerously exposed.
As we discovered in our research report on bug bounty programs, the vast majority of IT decision-makers we surveyed (77 percent) said companies rely on bug bounty programs too much. More than 90 percent said they believe "most" flaws discovered through a bug bounty program could have been prevented by security assessments and developer training. And more than half (59 percent) said they believe it's more cost effective to fix bugs identified in testing than through a bug bounty program.
We think those survey respondents are mostly right. According to the National Institute of Standards and Technology, fixing a security vulnerability in post-production is 30 times more expensive than fixing it in the architecture and design stages, and six times more expensive than fixing it in the development stage.
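Those multipliers are easy to put into concrete terms. The sketch below uses the NIST ratios cited above (post-production is 30x the design-stage cost and 6x the development-stage cost, so development works out to 5x design); the per-flaw baseline cost is a hypothetical figure chosen only for illustration.

```python
# Relative remediation-cost multipliers derived from the NIST figures above:
# design stage = 1x baseline; post-production = 30x design and 6x development,
# which implies development = 30 / 6 = 5x design.
DESIGN, DEVELOPMENT, POST_PRODUCTION = 1, 30 // 6, 30

MULTIPLIERS = {
    "design": DESIGN,
    "development": DEVELOPMENT,
    "post_production": POST_PRODUCTION,
}

def remediation_cost(flaws_by_stage, unit_cost=100):
    """Total cost of fixing flaws, given a hypothetical design-stage cost per flaw."""
    return sum(unit_cost * MULTIPLIERS[stage] * count
               for stage, count in flaws_by_stage.items())

# The same 10 flaws, caught early versus caught after release:
early = remediation_cost({"design": 10})           # 10 * 1  * 100 = 1,000
late  = remediation_cost({"post_production": 10})  # 10 * 30 * 100 = 30,000
```

Even with a modest baseline, letting flaws slip into production multiplies the bill thirtyfold, which is the heart of the survey respondents' point.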
There's a saying that an ounce of prevention is worth a pound of cure. In the ransomware analogy mentioned at the top of this post, backing up your files means you won't have to pay a ransom. Prevention is a better policy for securing software, too. Test early and often during development, and fix the flaws you find. You don't want to count on the kindness of strangers.
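"Test early and often" can be as simple as writing security-focused tests alongside feature tests. A minimal sketch, assuming a hypothetical `find_user` lookup function: the test asserts that a classic SQL injection payload matches no rows, which a parameterized query guarantees and a string-spliced query would fail.

```python
import sqlite3
import unittest

def find_user(conn, username):
    # Parameterized query: user input is bound as data, never spliced into SQL text.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

class InjectionTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        self.conn.execute("INSERT INTO users VALUES (1, 'alice')")

    def test_injection_payload_matches_nothing(self):
        # The payload should be treated as a literal (nonexistent) name,
        # not as SQL that dumps the whole table.
        self.assertIsNone(find_user(self.conn, "' OR '1'='1"))

    def test_legitimate_lookup(self):
        self.assertEqual(find_user(self.conn, "alice"), (1, "alice"))

if __name__ == "__main__":
    unittest.main()
```

A test like this costs minutes during development; the same flaw found by an outside researcher after release costs a patch cycle, a disclosure negotiation, and possibly a bounty.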
Published at DZone with permission of John Zorabedian, DZone MVB. See the original article here.