Unlike Lasagne, in InfoSec, There Is No Layer Eight
Traditional ways of doing software security engineering are proving more antiquated every year. It's time to move security into the runtime of an application.
In 1984, the International Organization for Standardization (ISO) published a conceptual model to promote interoperability between computing systems. Known as the Open Systems Interconnection (OSI) model, it sets out a construct of standard network protocols divided into seven layers that still govern how internal and external networks communicate and function, including how they are secured.
While each layer is independently developed, the OSI model anticipates that each layer communicates only with the layers directly above and below it as information passes through the stack. Layer One is the Physical Layer, followed by the Data Link Layer (2), Network Layer (3), Transport Layer (4), Session Layer (5), Presentation Layer (6), and, finally, the Application Layer (7).
Take an email, for example. A user writes an email, hits send and propels the message from the Application Layer (7) of their own network, down through the layers until it is physically moved along by wire or wirelessly to the recipient’s network. There the email works its way back up the stack until it reaches Layer Seven and is handed off to the person receiving the message [see How OSI Works].
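The down-and-back-up journey of that email can be sketched in a few lines of code. This is purely illustrative (the layer "headers" are toy strings, not real protocol data units), but it captures the encapsulation idea the OSI model describes:

```python
# Illustrative sketch, not a real network stack: each OSI layer wraps the
# payload from the layer above on the way down, and the receiver unwraps
# the headers in reverse order on the way back up.

LAYERS = [
    "Application (7)", "Presentation (6)", "Session (5)",
    "Transport (4)", "Network (3)", "Data Link (2)", "Physical (1)",
]

def send(message: str) -> str:
    """Encapsulate a message down the stack, layer by layer."""
    frame = message
    for layer in LAYERS:  # top (7) down to bottom (1)
        frame = f"[{layer} header]{frame}"
    return frame

def receive(frame: str) -> str:
    """Strip each layer's header back off on the receiving side."""
    for layer in reversed(LAYERS):  # bottom (1) back up to top (7)
        frame = frame.removeprefix(f"[{layer} header]")
    return frame

wire = send("Hello")          # outermost wrapper is the Physical Layer
assert receive(wire) == "Hello"
```

Note that the Physical Layer's wrapper ends up outermost, which is exactly why every transaction must traverse the full stack in both directions.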
Every transaction, be it small or large, extremely trivial or vitally important, follows the same protocol. Top to bottom; bottom to top.
Since Layer One security protocols were first introduced to physically secure network hardware, we have steadily climbed the layers to add more protections to more functions. Using variations on the same approaches that have been employed for decades, the latest iterations of network security for the application layer—"NextGen" web application firewalls—are even referred to as Layer Seven WAFs.
Even though Layer Seven is known as the application layer, current protections still largely reside outside the applications themselves and rely on the network as envisioned by the OSI model. In traditional application security approaches, network traffic destined for an application is routed to an additional appliance to intercept and evaluate traffic using heuristics before continuing to the application itself for execution.
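The appliance model described above boils down to pattern-matching traffic before the application ever sees it. The sketch below is a deliberately naive, hypothetical version of that heuristic approach; the two signatures are toy examples, not production WAF rules:

```python
import re

# Hypothetical sketch of the network-appliance approach: requests are
# matched against heuristic signatures *before* reaching the application.
# These patterns are toy examples and are trivially evadable.
SIGNATURES = [
    re.compile(r"(?i)\bunion\s+select\b"),  # naive SQL-injection pattern
    re.compile(r"(?i)<script\b"),           # naive cross-site-scripting pattern
]

def inspect(request_body: str) -> bool:
    """Return True if the request may pass through to the application."""
    return not any(sig.search(request_body) for sig in SIGNATURES)

assert inspect("name=alice") is True
assert inspect("id=1 UNION SELECT password FROM users") is False
```

The weakness is built in: the filter must guess, from raw input alone, what the application will eventually do with it, which is the root of both false positives and missed attacks.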
Under the OSI model, Layer Seven puts security closest to the end-user as a transaction begins and ends its journey. While the theory makes sense, the reality is this approach increasingly does not work. If you need proof, look no further than the volume and scale of recent successful cyberattacks.
2016 saw a record-setting number of cyberattacks, resulting in the most records stolen in the seventeen years that breaches have been tracked. Hospital, school, and transit system networks were held for ransom by criminals who exploited known but unpatched software flaws in applications.
Cybersecurity Ventures predicts that the annual cost of cybercrimes will grow globally from the $3 trillion racked up in 2015 to $6 trillion by 2021. That does not include the estimated $1 trillion that will be spent by organizations between now and 2021 defending against cyberattacks.
No matter the source or the timeframe in question, trends are all pointing in the wrong direction unless you are a malicious hacker. There appears to be no improvement or even a slowing of the negative trends in sight.
So, if we’re at the top of the OSI model and traditional cybersecurity approaches are not stopping or slowing the number and severity of attacks, where do we go from here to improve security?
There Is No Layer Eight
The quantum leap that is required to secure applications going forward is obvious: move security into the runtime of the application itself.
Let’s start with the proposition that security professionals have a wealth of tools available to them. At the recent RSA Conference in San Francisco, there were more than 230 endpoint security providers. Code scanners offering SAST, DAST, and IAST were there, too, along with countless WAF and IDS vendors.
Security professionals do not lack information about vulnerabilities. What is lacking is the ability to remediate code flaws in a timely fashion. The find-to-fix ratio in most organizations I talk to is between 5 and 10 to 1. Even a modest quarterly patching program can take months to complete a cycle and millions of dollars a year to operate.
Incidentally, the same is true of security alerts, including the false positives generated by WAFs and other network monitors. The Cisco 2017 Security Capabilities Benchmark Study found that the organizations surveyed investigate only 56% of their security alerts. Of those investigated, only 28% are deemed legitimate, and less than half of the legitimate alerts are remediated. The Ponemon Institute puts the average cost of investigating false alerts at $1.3 million per year.
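Working those percentages through for a hypothetical volume of 10,000 alerts makes the funnel concrete. The 46% remediation rate below is an assumption standing in for the study's "less than half":

```python
# Applying the Cisco figures cited above to a hypothetical 10,000 alerts.
alerts = 10_000
researched = int(alerts * 0.56)      # 56% are actually investigated
legitimate = int(researched * 0.28)  # 28% of those turn out to be real
remediated = int(legitimate * 0.46)  # assumed ~46% ("less than half")

# 10,000 alerts -> 5,600 investigated -> 1,568 legitimate -> ~721 fixed
funnel = (researched, legitimate, remediated)
```

Under these assumptions, fewer than one in ten of the original alerts ever results in a fix, which is the gap runtime protection aims to close.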
Runtime security solutions do exist today. There are two basic approaches: applying the WAF model of instrumentation and filters inside the runtime, or virtualizing the runtime and wrapping the application in a secure container. (Full disclosure: Waratek offers a virtualization-based runtime application security platform.) However, a more robust runtime protection market is needed to address the cybersecurity issues facing the global economy.
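To make the contrast with a network appliance concrete, here is a minimal, hypothetical sketch of the in-runtime idea: the check runs inside the application at the moment a sensitive operation executes. This is a sketch of the general concept only, not how any vendor's product actually works:

```python
from functools import wraps

# Hypothetical illustration of runtime (in-app) protection: instead of
# filtering raw traffic at the network edge, the guard inspects the
# *actual* operation the application is about to perform.

class RuntimeSecurityError(Exception):
    """Raised when a guarded operation looks malicious at execution time."""

def guard_sql(func):
    @wraps(func)
    def wrapper(query: str, *args, **kwargs):
        # At execution time we see the real query the app built, not the
        # raw input a network filter would have to guess about. The check
        # itself is deliberately simplistic for this sketch.
        if ";" in query or "--" in query:
            raise RuntimeSecurityError(f"blocked suspicious query: {query!r}")
        return func(query, *args, **kwargs)
    return wrapper

@guard_sql
def run_query(query: str) -> str:
    return f"executed: {query}"

assert run_query("SELECT name FROM users WHERE id = 42").startswith("executed")
```

Because the guard sits at the point of execution rather than on the wire, it needs no traffic rerouting, no signature updates for every input encoding, and it fails closed only on operations the application actually attempts.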
The primary barrier to securing the runtime is not found in science or technology, but rather education and experience. Most cybersecurity professionals simply do not have the background or training to allow them to work within the runtime of an application without breaking the app.
The career path most cybersecurity engineers take includes a heavy, if not exclusive, dose of academic and practical application in network security. They are rooted in the advantages and limitations of the OSI model. Instruments, filters, patterns, signatures and regular expressions are the tools of the trade for securing networks but are barriers to successfully protecting the runtime of an application.
A separate, but not unrelated, issue is the continued belief that application developers should also be responsible for application security. That is a universal problem, not one limited to moving security into the runtime. Trying to hire rock star developers who are also security experts is at the core of the current cyber talent shortage.
To move into the runtime, we need more cybersecurity professionals whose primary skills are those of compiler engineers. People who know how to deliver tools that are free of the limitations of heuristics and can deliver robust security features that do not break an application or grind its functions to a halt because of heavy instrumentation. Engineers who understand the advantages of compiling code “just in time” to execute an operation.
Shifting to a runtime protection approach will require a bit of retooling or expansion of academic programs as well as the skills development that comes from the experience of doing. It won’t happen overnight, but the end-result will be—finally—slowing the attacks that threaten every organization, every day.
Published at DZone with permission of John Matthew Holt, DZone MVB. See the original article here.