How to Avoid Vulnerabilities in Your Code
Three out of four applications have security vulnerabilities. In this article, we'll cover security in a nutshell: the basics and the tools to avoid code vulnerabilities in your application.
In recent times, we have witnessed several information security breaches worldwide: vulnerabilities, ransomware, man-in-the-middle attacks, among other problems that become headaches not only for the engineering team but for the whole company, and even for your product's customers.
What if we told you that most security problems could be avoided? This article will cover the importance of information security and how to include it throughout the development process: the so-called DevSecOps.
The 3 Myths Around Information Security
There are three myths in the area of software development that we need to talk about to start addressing information security more consistently:
Security Breaches Are Often Seen as an Issue Only for Large Companies
The first of them is the most common: security flaws and their respective impacts and damages are usually associated with large companies, such as Facebook and LinkedIn.
However, this is not a problem unique to large companies, so much so that a study by Codegrip reports that three out of four applications currently in use have some vulnerability.
Vulnerability Does Not Harm Businesses
According to IBM's latest data breach report, this idea is far from true. To give you an idea, lost business accounts for about 38% of the average cost of a breach. In addition, there are some average loss estimates:
An average of $180 for each compromised record of personal information.
$4.62 million was the average cost of a ransomware breach.
$3.61 million was the average cost of a breach in hybrid cloud environments.
In practice, a vulnerability also impacts brand trust and credibility. A recent example is Target, which had to shell out nearly $200 million to restore its credibility.
Security Problems Come From Big Mistakes
Of all the myths, this is the biggest fallacy! In general, the "minor problems" are precisely those that result in the most extensive security disasters, and failures usually come through two gaps:
Through a code vulnerability, introduced intentionally or unintentionally: flaws such as insecure code design, injection, and configuration issues, all within the Top 10 documented by OWASP.
Through an operational vulnerability: among the most common problems are weak or default passwords, or even the lack of a password altogether. A second common failure is the mismanagement of people's permissions to a document or system. Unfortunately, these types of problems are pretty standard; not by chance, 75% of Redis servers have issues of this type.
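To make the first gap concrete, here is a minimal Python sketch of an injection flaw and its fix, using the standard-library sqlite3 module (the table, column, and payload are illustrative, not from any real incident):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # a classic SQL injection payload

# VULNERABLE: string interpolation lets the payload rewrite the query,
# so the WHERE clause becomes always-true and every row leaks.
injectable = f"SELECT name FROM users WHERE name = '{user_input}'"
print(conn.execute(injectable).fetchall())  # [('alice',)]

# SAFE: a parameterized query treats the payload as plain data.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```

Parameterized queries are exactly the kind of fix a SAST tool can nudge you toward by flagging string-built SQL.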
By analogy, we can say that security flaws are like the case of the Titanic. Although it is considered one of the biggest wrecks, most people are unaware that the ship had a "small problem": a missing key that could have opened the compartment holding the binoculars and other devices that would have helped the crew spot the iceberg in time and prevent the collision.
Knowing the Security Triad
We understand the importance of information security and the tremendous impact of its absence. With that, the question remains: how to define if an application is safe?
In short, there is an acronym for a secure application based on ISO 27002: CIA, which stands for three properties:
Confidentiality: The guarantee that a message will be delivered only to the person who is meant to receive that information.
Integrity: Ensuring that information is delivered unchanged along the way.
Availability: The guarantee that data must always be available to the legitimate user.
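As a small illustration of the Integrity property, here is a hedged Python sketch using the standard hashlib module: the receiver recomputes a SHA-256 digest and compares it with the digest sent alongside the message (the message contents are made up for the example).

```python
import hashlib

def digest(message: bytes) -> str:
    """Return the SHA-256 hex digest used to verify integrity."""
    return hashlib.sha256(message).hexdigest()

original = b"transfer $100 to account 42"
sent_digest = digest(original)

# In transit, an attacker (or a transmission error) alters one byte.
tampered = b"transfer $900 to account 42"

print(digest(original) == sent_digest)   # True: message unchanged
print(digest(tampered) == sent_digest)   # False: tampering detected
```

Note that a bare hash only detects accidental or naive changes; against an active attacker who can also replace the digest, you would use an HMAC or a digital signature instead.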
To illustrate, an analogy: if the Titanic were a "failed application," what happened was a problem with property A (Availability), since the key that guaranteed access to the binoculars was not available to the crew at the front of the vessel.
Understanding Information Security as a Methodology: DevSecOps
Once we understand the security principles, we need to understand the "how" to incorporate them into the routine of our applications. After all, how are we going to ensure that each delivery/deployment has no vulnerability issues?
One of the solutions is to increase the integration between teams, avoiding feedback delays and, above all, making information security proactive rather than reactive, as it usually is.
The answer to this is DevSecOps, which focuses on adding a layer of security to the development operation. Thinking about this methodology is another step in the history of integration between teams.
Agile: The concept that comes from the Agile Manifesto of 2001; this methodology focuses on integrating the development team and the product team.
DevOps: In the next integration step, the operations team is added to the already integrated development and product teams.
DevSecOps: finally, the security team is added to the groups that already exist in DevOps.
As with any methodology terminology, it is natural that several definitions exist for the same term. Some of them:
DevSecOps: A leader's guide to producing secure software: DevSecOps is DevOps made securely.
Hands-On Security in DevOps: DevOps offers speed and quality benefits with ongoing development and deployment methods, but it does not guarantee the security of an entire organization.
DevSecOps by RedHat: Whether you call it "DevOps" or "DevSecOps", it's always been ideal for including security as an integral part of the entire application lifecycle.
But regardless of which term is preferred, the significant point is that with DevSecOps, we need to incorporate information security as part of the process. In other words: layered security, thinking from the end user, through the engineering team, down to database and operating-system configurations.
In addition to layered security, another core of the methodology is security automation. When we're in the middle of a development pipeline, we tend to forget the steps we were supposed to remember.
Information security tools can be integrated into CI/CD. That is, if any flaw is found, it can break the build. Within the categories of security tools, we can list some:
SAST (Static Application Security Testing): a tool that scans and analyzes the code statically, before compilation or execution.
DAST (Dynamic Application Security Testing): a tool that performs code validations at runtime.
IAST (Interactive Application Security Testing): runs tests from within a running application. The most significant difference between IAST and DAST is that IAST's tests are performed inside the application itself.
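To give a feel for what a SAST tool does (this is purely illustrative; real tools like Horusec are far more thorough), here is a toy static analyzer in Python: it walks the abstract syntax tree of a source string, without ever executing it, and flags calls to the dangerous eval built-in.

```python
import ast

def find_eval_calls(source: str) -> list:
    """Statically scan Python source and return line numbers that call eval()."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

snippet = """
user_data = input()
result = eval(user_data)  # dangerous: executes arbitrary input
"""
print(find_eval_calls(snippet))  # [3]
```

Because the analysis never runs the code, it can be applied to every file in a repository during CI, which is exactly the SAST niche; DAST and IAST, by contrast, need the application running.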
How Horusec Can Help You Adopt the DevSecOps Methodology in Your Project
In general, these tools help detect vulnerabilities, monitor bugs, and, above all, prevent security problems.
Among the tools on the market, we have Horusec, one of the open-source projects maintained by Zup, which focuses on making your application safer and preventing code vulnerabilities.
One of its differentiators is its ability to orchestrate other security tools; that is, it allows additional tools to be integrated into the analysis. The product's main philosophy is: the more security, the better!
Horusec also has a dashboard, which helps teams understand security issues within their application.
It is possible to integrate Horusec in several ways, for example, with a CI tool such as GitHub Actions, and export the results to the dashboard.
Prevention is the key, which means that if a PR has a vulnerability, the build "breaks", preventing a future headache or vulnerability disaster from being merged into the production environment.
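The "break the build" behavior can be sketched as a small CI gate in Python. The finding format below is hypothetical (Horusec's actual report format differs); the point is the logic: if any finding meets a severity threshold, the gate reports failure, which in a real pipeline would mean a non-zero exit code that blocks the merge.

```python
# Rank severities so they can be compared against a threshold.
SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def should_break_build(findings, threshold="HIGH"):
    """Return True if any finding meets or exceeds the threshold severity."""
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in findings)

# Hypothetical findings, shaped like what a scanner might report:
findings = [
    {"rule": "hardcoded-password", "severity": "CRITICAL"},
    {"rule": "weak-hash", "severity": "MEDIUM"},
]

# In a real CI step, a True result would trigger sys.exit(1),
# failing the job and preventing the PR from being merged.
print(should_break_build(findings))  # True
```

The threshold is a design choice: blocking on HIGH and CRITICAL while only reporting MEDIUM and LOW keeps the gate strict without flooding every PR with failures.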
Currently, Horusec has three components:
CLI, or the classic terminal, allows the analysis and visualization of vulnerabilities directly through the terminal.
The Dashboard, or Horus-Manager, is an optional feature that allows you to view these vulnerabilities more intuitively than the terminal.
An IDE extension, also optional, makes it easier to analyze vulnerabilities from the IDE itself. Today, there is an extension to use Horusec in Visual Studio Code.
For an excellent first experience, install both the CLI and the dashboard; don't worry, you can find all the information in the official project documentation. Then you can analyze this repository, which contains several vulnerabilities in several programming languages.
It's good to rely on tools like Horusec because they allow us, as development professionals, to have greater confidence that we are creating more secure code without vulnerabilities, thus preventing us from being responsible for the next security incident in our project or product.
Opinions expressed by DZone contributors are their own.