Secure Coding Challenges Faced by Every Software Developer in 2021
As thousands of exploits are discovered each year, devs are trying to deliver code that is as secure as possible. Actually doing that is easier said than done.
Software vulnerabilities have become a problem for people in almost every industry, which has led many specialists to suggest that code needs to be made secure from the ground up. Since almost every type of business and governmental organization now uses at least some form of software in its day-to-day operations, even the least connected agencies could end up dealing with cyberthreats. The frustrating part is that changes to the underlying structure of some popular software packages could assuage this problem, at least to some degree.
The overall number of vulnerabilities in an average piece of production software is continuing to grow. According to one report, over 22,000 new potential exploits were discovered in 2019 alone. Approximately a third of these exploits already have a proof of concept published, which helps illustrate just how dramatic the problem has become. This translates into around 60 new vulnerabilities on a daily basis.
If programmers were required to write secure code from the moment a project begins, then they wouldn't have to deal with these problems at a later date. While this might sound self-explanatory, the issue is far more complicated than it might seem initially.
Designing Applications with Security in Mind
Software developers have to consider a variety of issues when they first start to work on a project. The application's design requirements normally constitute the paramount concern of any engineer, but it's also important to think about how to best optimize the algorithm that powers the interface. It's easy to forget about security when juggling all of these concerns.
Doing so, however, is a major mistake since it's easy for insecure practices to creep into every part of a design. Once these practices first start to make it into a program, they become embedded into its code and can be very difficult to get rid of. Computer scientists have identified several areas where coders would need to take a holistic secure software development approach to prevent this kind of creep from occurring.
Systems have to be based around the idea of everyone using the least privilege necessary at any given time. Restricting all access to what's essentially a need-to-know basis will prevent the malicious execution of any sort of arbitrary code. While this can be tricky for firms that outsource software development to another organization, it should be among the easiest steps for those doing in-house design to take.
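The least-privilege idea can be enforced at the application layer as well as the operating-system layer. Here is a minimal Python sketch of a role-based permission check; the role names, permission sets, and function names are hypothetical, and a real system would load its policy from a central store rather than a hard-coded dict.

```python
from functools import wraps

# Hypothetical role-to-permission mapping; in production this would come
# from a policy store, not a hard-coded dictionary.
PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
}

def requires(permission):
    """Reject the call unless the caller's role grants the permission."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(role, *args, **kwargs):
            if permission not in PERMISSIONS.get(role, set()):
                raise PermissionError(f"role {role!r} lacks {permission!r}")
            return fn(role, *args, **kwargs)
        return wrapper
    return decorator

@requires("write")
def update_record(role, record_id, value):
    # Only roles holding the "write" permission ever reach this body.
    return f"record {record_id} set to {value}"
```

Because the check sits in front of every sensitive function, a caller that only needs to read data can never trigger a write path, which is exactly the need-to-know restriction described above.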
Standard procedures like code reviews and pen testing are always a good idea, though any firm that wants to deploy software on a regular basis needs to make sure that it actually sticks to whatever protocol it puts into place. One of the biggest reasons that insecurities creep into an extended codebase is the lack of regular testing. Layering defensive strategies is also a good tactic, especially as code gets compiled and promoted to a production environment. Unfortunately, it can be hard to make runtime locations secure.
There are a few tricks that engineering teams can employ to make sure that their code remains as secure as possible even if the execution environment proves very unpredictable.
Securing an Inherently Insecure Environment
Implementing some form of data input validation is a chore that would ideally be handled by the programming language itself, but this isn't always the case. Java, for instance, uses a strong static typing framework, and code executed inside a JVM instance shouldn't be able to escape its sandbox. Other languages, though, compile to native code that runs directly on the hardware, so there's a risk that a misbehaving application could take control of a machine's system software.
That's especially problematic in the mobile sphere, where users may have absolutely no idea that their equipment has been compromised until it's too late. Data validation checks can help to ensure that all of a program's inputs actually make sense. While this has largely been used to secure web applications from exterior cyberattacks, there's no reason why it can't be used to protect native apps that are written for a platform that lacks type safety verification.
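In practice, a validation check usually means an allow-list or a range test applied before any input is used. The sketch below, in Python, shows the general pattern; the field names and limits are illustrative assumptions, not a prescription.

```python
import re

def validate_username(raw):
    """Allow-list check: 3-20 ASCII letters, digits, or underscores."""
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", raw):
        raise ValueError(f"rejected input: {raw!r}")
    return raw

def validate_port(raw):
    """Range check: TCP ports must fall between 1 and 65535."""
    port = int(raw)  # raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```

The key design choice is to reject anything that fails the allow-list rather than trying to strip out "bad" characters, since deny-lists are notoriously easy to bypass.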
Authentication protocols, like those that use passwords or biometric checks, are probably the single most common type of security verification anyone performs. When developing these routines, it's important to keep in mind that storing credentials securely is just as vital as collecting them. All inputs and outputs have to be sanitized to reduce the risk of someone using a packet sniffer to capture credentials whenever they're entered into the system.
Credentials also need to be properly hashed, which will ensure that bad actors can't simply fabricate bogus information that could serve as a password when they otherwise wouldn't be able to get into a machine. Two-factor authentication and facial recognition technologies have been promoted as stronger alternatives to passwords, but even these are only ever as secure as the hashing algorithms present in the underlying code.
Building an Inherently Secure Cryptographic Practice
Passwords and other credentials should never be stored in plain text, and even encrypting them isn't a complete answer. While encrypted credentials might be safe at rest on a physical RAID array, they have to be decrypted for access, and at that point they'd be exposed to the outside world. As a result, most computer scientists recommend storing salted hashes and comparing those digital fingerprints instead of the passwords themselves.
Unfortunately, coders still select hashing algorithms based solely on speed. While the md5sum command is fast, for instance, the 128-bit hashes it generates can't be considered secure. The ongoing development of quantum computing technologies has exacerbated the problem, in part because these machines could generate a massive number of candidate hash collisions every second.
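The standard library already provides everything needed to do this correctly. Below is a minimal Python sketch of salted password hashing using PBKDF2, with a constant-time comparison on verification; the iteration count is an assumption that should be tuned to your own hardware, and the function names are illustrative.

```python
import hashlib
import hmac
import os

# A deliberately high iteration count slows brute-force attempts; tune it
# so a single verification stays well under a second on your hardware.
ITERATIONS = 600_000

def hash_password(password, salt=None):
    """Return (salt, digest); a fresh random salt defeats rainbow tables."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute and compare in constant time to avoid leaking timing info."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Contrast this with an unsalted MD5 digest: the slow, salted derivation makes each guess expensive, and hmac.compare_digest prevents an attacker from learning anything from how quickly a mismatch is rejected.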
Guidelines published by the Open Web Application Security Project have suggested that all cryptographic modules used be either based around FIPS 140-2 or exceed the standards spelled out in it. Even this isn't enough, however. Data leaks can occur via a simple HTTP GET request or a problem related to TLS connections. Faulty databases filled with obsolete records may even pose a security risk, though these are normally viewed more as a performance problem than a safety one.
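The GET-request leak mentioned above is easy to demonstrate. In this Python sketch (the endpoint URL and token value are made up for illustration), the same credential is sent two ways; even over TLS, anything placed in the query string tends to end up in server logs, proxies, and browser history, while a POST body does not.

```python
from urllib.parse import urlencode
from urllib.request import Request

ENDPOINT = "https://api.example.com/login"  # hypothetical endpoint

# Risky: a token in the query string is recorded by access logs,
# intermediaries, and history even when the connection uses TLS.
leaky = Request(ENDPOINT + "?" + urlencode({"token": "s3cret"}))

# Safer: sensitive fields go in the body of an HTTPS POST, where they
# are covered by TLS and kept out of URL-based logging.
safe = Request(
    ENDPOINT,
    data=urlencode({"token": "s3cret"}).encode(),
    method="POST",
)
```

Neither request is actually sent here; the point is simply that the secret appears in leaky.full_url but never in the URL of the safe version.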
Most of these problems will only ever become obvious at runtime, so it's vital that infrastructures are built as secure as possible to limit the potential attack surface of any application. Fortunately, there's one relatively simple way to do this.
Climbing the Dependency Rabbit Hole
Each time that a programmer includes a library in a project, they're introducing at least one dependency to it. Some libraries are going to bring over dozens of other dependencies as well, which is sometimes known as descending a rabbit hole. Changes to any of these dependencies can leave applications vulnerable, especially if it isn't always clear what kind of changes were made. Self-taught coders have a tendency to include more outside libraries than others, but the problem is equally prevalent in the enterprise.
Code reuse is something that's very useful and it certainly shouldn't be discouraged. However, it's important to reduce the number of dependencies that each project has in order to slash its attack surface and harden it as much as possible. It's not always possible to cut away at the number of libraries needed to compile an application, but most engineers should find that at least some of the features included in their project aren't really necessary.
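To see why trimming even one library matters, it helps to count what a single direct dependency actually pulls in. The Python sketch below walks a toy dependency graph; the package names are invented, and a real audit would derive the graph from lockfiles or packaging metadata rather than a hand-written dict.

```python
# Toy dependency graph: each package maps to its direct dependencies.
# Real projects would build this from a lockfile, not by hand.
DEPENDENCIES = {
    "my_app": ["web_framework", "http_client"],
    "web_framework": ["templating", "http_client"],
    "http_client": ["tls_lib"],
    "templating": [],
    "tls_lib": [],
}

def transitive_deps(package, graph):
    """Walk the graph depth-first and collect every package pulled in."""
    seen = set()
    stack = [package]
    while stack:
        for dep in graph.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen
```

Here two direct dependencies expand into four packages on the attack surface; dropping web_framework would remove templating along with it, which is the kind of pruning the paragraph above describes.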
Though it might lead to a few painful decisions, cutting away unnecessary functions in this way can go a long way toward building more secure applications.
Opinions expressed by DZone contributors are their own.