The point of attack on your data systems is now more likely to be at the application level than at the network level.
Organizations take great pains to prevent data breaches by securing their network perimeters, but the vulnerabilities targeted most often by hackers these days are at the application layer rather than the network layer. Programming practices have not kept pace with the fundamental changes in data networks – from isolated, in-house silos to distributed, cloud-based virtual environments with unpredictable physical characteristics. Securing apps requires more automated development environments that incorporate continual self-testing.
Data security concerns have gone mainstream. On one side, you have people who believe Internet communications should be granted the same protected status as private conversations on the street: You can’t surreptitiously listen in without a warrant. On the other, you have the people who believe government authorities must have the ability to access private conversations, with or without a warrant, to combat such serious crimes as terrorism, human trafficking, and child abuse.
The Battleground is End-to-end Encryption: Should We, or Shouldn’t We?
In a July 25, 2015, article, The Atlantic's Conor Friedersdorf cites advocates of encryption back doors as warning of the dangers of an Internet in which all data communications are encrypted. They compare it to a physical space in which authorities have no ability to observe crimes in progress.
Friedersdorf refutes this argument by pointing out that the crimes the authorities are investigating take place for the most part in the physical world. He compares encrypted communications to a group of people whispering to each other on a street corner: without reasonable suspicion and a warrant, the government can't eavesdrop.
Whether end-to-end encryption becomes universal or governments retain the ability to tap into email, text messages, phone calls, and other Internet communications, the fundamental approach to data security is changing from one focused on the network to one focused on applications.
In the two-step end-to-end encryption model, the client and server first exchange keys, and then use the keys to encrypt and decrypt the data being communicated. Source: RiceBox & Security.
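The two-step model described above can be sketched in code. The toy Python example below is illustrative only: the Diffie-Hellman prime is far too small for real security, and the HMAC-based XOR keystream stands in for a vetted cipher such as AES-GCM. It shows the shape of the protocol – exchange public values, derive a shared key, then encrypt and decrypt with it – not a production implementation.

```python
import hashlib
import hmac
import os

# Step 1: key exchange (classic Diffie-Hellman). This prime is a demo
# value only -- real systems use standardized groups or elliptic curves.
P = 4294967291  # largest prime below 2**32; far too small for real use
G = 5

def make_keypair():
    """Generate a random private exponent and the matching public value."""
    private = int.from_bytes(os.urandom(16), "big")
    public = pow(G, private, P)
    return private, public

def shared_secret(my_private, their_public):
    """Both sides compute g^(ab) mod p, then hash it into a fixed-size key."""
    raw = pow(their_public, my_private, P)
    return hashlib.sha256(raw.to_bytes(8, "big")).digest()

# Step 2: use the shared key to encrypt and decrypt. An XOR keystream is
# used purely for illustration -- substitute a real authenticated cipher.
def xor_cipher(key, data):
    stream = b""
    counter = 0
    while len(stream) < len(data):
        block = hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256)
        stream += block.digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Client and server each generate a keypair and swap only the public halves.
client_priv, client_pub = make_keypair()
server_priv, server_pub = make_keypair()
```

Because XOR is its own inverse, the same `xor_cipher` call both encrypts and decrypts once each side has derived the identical shared key.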
Baking Security into Code Makes the Programmer’s Job Easier, But…
MIT data security researcher Jean Yang points out the disparity between the rapid pace of change in data technology and the slow pace of change in the way programs are created. Yang is quoted by TechCrunch's Natasha Lomas in a September 27, 2015, article as stating that legacy code acts as an impediment to adoption of the types of structural programming changes required to protect data in today's distributed, virtualized networks.
The Jeeves programming language Yang has created encapsulates and enforces security and privacy policies “under the covers” so that programmers don’t need to be concerned with enforcing them via library function calls or other methods. However, Yang and her colleagues took pains to accommodate the way programmers actually work to avoid requiring that they change their current favorite methods and tools.
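To give a flavor of what "policies under the covers" means, here is a rough Python sketch inspired by the idea of attaching a privacy policy to a value rather than enforcing it throughout application code. This is a hypothetical illustration, not Jeeves itself or its actual semantics; all names here are invented.

```python
# Hypothetical sketch: a value that carries its own disclosure policy,
# so application code never decides who may see the sensitive version.
class PolicyValue:
    """Pairs a sensitive value with a public fallback and a policy.

    The policy (a callable taking a viewer and returning a bool) is set
    once, where the data is created; display code stays policy-agnostic.
    """
    def __init__(self, secret, public, policy):
        self.secret = secret
        self.public = public
        self.policy = policy

    def reveal(self, viewer):
        """Return the secret view only if the policy admits this viewer."""
        return self.secret if self.policy(viewer) else self.public

# Application code simply stores and displays the value; it contains no
# access-control logic of its own.
gps = PolicyValue(secret=(42.36, -71.09),
                  public="location hidden",
                  policy=lambda viewer: viewer == "alice")
```

Display code can then call `gps.reveal(current_user)` everywhere, and the policy attached at creation time decides whether the real coordinates or the fallback string comes back.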
Bad Code Called a Primary Cause of Data Breaches
Organizations to date have focused on securing their networks from attack, but SAP’s Tim Clark asserts that 84 percent of all data breaches occur at the application layer. (Note that SAP is a leading vendor of application security services). CSO’s Steve Morgan quotes Clark in a September 2, 2015, article. Morgan also cites Cisco Systems’ 2015 Annual Security Report, which found that increased use of cloud apps and open-source content management systems has made sites and SaaS offerings vulnerable at the application level.
Viewing the application layer as a separate stack helps reconcile the different perceptions of security by the dev side and the ops side. Source: F5 DevCentral
Programmers’ tendency to incorporate code written by others in their apps contributes to the prevalence of vulnerabilities in the programs. The borrowed code isn’t properly vetted for vulnerabilities beforehand, so securing programs becomes an after-the-fact operation. Writing secure code requires that security become the starting point for development projects, according to McKinsey & Co. partner James Kaplan, who co-authored the report Beyond Cybersecurity: Protecting Your Digital Business.
Recently, hackers have expanded from attacking vulnerable applications to attacking the tools developers use to create those applications. As CSO's George V. Hulme reports in a September 29, 2015, article, CIA security researchers reportedly created a modified version of Apple's Xcode development tool that let the agency place back doors into the programs generated by the toolkit.
The recently announced breach of thousands of products in Apple's App Store was blamed on Chinese developers who downloaded compromised versions of Xcode, dubbed "XcodeGhost." Apps built with XcodeGhost create a cascading effect that infects all apps developed subsequently, according to Hulme. That's one of the reasons why experts recommend that the systems used in organizations for development work be isolated from the systems used to actually build, distribute, and maintain the apps.
Much of the added security burden for app developers can be mitigated by automated self-test functions integrated in the development process. CIO’s Kacy Zurkus writes in a September 14, 2015, article that automated testing gives organizations more insight into risks present in both their home-brewed apps and their off-the-shelf programs.
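Self-testing of this kind can be as simple as a suite of security assertions that runs on every build. The sketch below uses Python's standard unittest module to probe a hypothetical input validator with injection-style payloads; the validator and all names are invented for illustration, and a real suite would cover far more cases.

```python
import unittest

def sanitize_username(value):
    """Hypothetical input validator: allow only short alphanumeric names."""
    if not (1 <= len(value) <= 32) or not value.isalnum():
        raise ValueError("invalid username")
    return value

class SecuritySelfTest(unittest.TestCase):
    """Runs automatically on every build so regressions surface early."""

    def test_rejects_injection_style_payloads(self):
        for payload in ["' OR 1=1 --",
                        "<script>alert(1)</script>",
                        "admin; DROP TABLE users"]:
            with self.assertRaises(ValueError):
                sanitize_username(payload)

    def test_accepts_legitimate_input(self):
        self.assertEqual(sanitize_username("alice42"), "alice42")

# In CI this would be invoked with `python -m unittest` on every commit.
```

Wiring suites like this into the build pipeline is what turns security review from an after-the-fact audit into a continuous, automated gate.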
Raising awareness among developers about the risks of vulnerable applications, and about their increasingly important role in ensuring the apps they develop are secure from day one, is the first step in improving the security of your organization’s data. The next steps are to automate application security practices, and to make enforcement of security policies more transparent.