A major factor in companies, and even entire industries, failing to develop robust security programs is the perceived start-up cost. It can be daunting to look to large organizations for inspiration when considering how to implement security measures in the software development life cycle (SDLC). Many of these mature security programs have security teams wholly integrated within the SDLC, and they often have multi-million-dollar budgets. This budgetary hurdle is why many believe a secure SDLC is out of reach.
Instead, engage security as a companion to the quality assurance process that nearly all software already goes through. This is a relatively streamlined and cost-effective way to implement an effective software security initiative. Let’s examine why that’s the case.
Understanding the Differences Between Quality and Security
While it may seem appealing to file software security into the same category as quality, doing so carries some potential pitfalls.
An important aspect of this consideration is that security starts at the very beginning of the SDLC, before quality is even a factor. The software architecture and the ideas that go into the project documentation must be security aware. Just as finished applications undergo penetration tests to locate bugs, architecture requires the same scrutiny to uncover application design flaws. Generating secure design standards and secure coding standards is a key part of this effort.
Let’s work through an example. If Example Corp. wants to develop their new social cryptocurrency banking application, they must first develop a set of standards. Treat these standards as considerations and requirements for the design and planning of all aspects of this application. Perhaps they strive to have a server-side trust model.
Additionally, perhaps they always utilize output encoding and input sanitization via whitelisting, and perform monthly library updates to ensure they aren’t using vulnerable software. With this set of security requirements in hand, the firm is better able to avoid the common pitfalls that result in flawed architecture.
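To make these requirements concrete, here is a minimal sketch of two of the hypothetical standards above: whitelist-based input sanitization and output encoding. The username rules, pattern, and function names are illustrative assumptions for this example, not part of any real standard.

```python
import html
import re

# Hypothetical whitelist: usernames may only contain letters, digits,
# and underscores, between 3 and 20 characters long.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(value: str) -> str:
    """Reject any input that does not match the whitelist pattern."""
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("invalid username")
    return value

def render_greeting(name: str) -> str:
    """Output-encode user-supplied data before embedding it in HTML."""
    return "<p>Hello, " + html.escape(name) + "!</p>"
```

The key design choice is that the whitelist describes what is allowed rather than trying to enumerate everything that is dangerous, and encoding happens at output time regardless of what validation already occurred.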
To further prepare for the secure development of an application, security as a mindset must be instilled in each developer. After all, sometimes all it takes to compromise a corporate environment is a single injection flaw in a mobile application written by a junior engineer.
At the End of the Day, Who Is Responsible for Security?
Security is everyone’s responsibility. There are ways that every employee can do their part to secure a firm—no matter how large or small. However, security begins with developers.
To prevent an inexperienced developer from writing unreadable or unmaintainable code, most companies implement coding style guidelines. Rules within this document should focus on preventing masses of non-descriptive variable names (e.g., “var_a1”). Other items to avoid within the codebase include inconsistent use of tabs and spaces and frequently changing bracketing styles, among others.
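As a toy illustration of enforcing such a rule automatically, the sketch below uses Python’s ast module to flag non-descriptive variable names. The regular expression and function name are assumptions invented for this example; a real style checker would be far more thorough.

```python
import ast
import re

# Hypothetical pattern for non-descriptive names such as "var_a1" or "tmp_x2".
NONDESCRIPT = re.compile(r"^(?:var|tmp|data)?_?[a-z]\d+$")

def flag_nondescriptive_names(source: str):
    """Return (line, name) pairs for variables matching the pattern."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name) and NONDESCRIPT.match(node.id):
            hits.append((node.lineno, node.id))
    return hits
```

A check like this can run in a pre-commit hook or CI job, the same place a security linter would later slot in.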
Thus, it’s critical for organizations to put secure coding guidelines in place. Within this document, examine common language-specific pitfalls, complete with remediation advice, an exploration of dangerous coding patterns, and basic security testing advice.
Also within the guidelines, include strategic measures such as Synopsys’ preventative SAST solution within the IDE to flag and remediate common vulnerabilities as code is written. Once developers undergo training and understand what tools and resources are at their disposal, they’re well-equipped, with your organization’s guiding documents in hand, to implement quality code. In other words, code without common security vulnerabilities.
Once developers are done writing their code, it moves into QA, where it is tested for unexpected behavior and design issues. Security testing during this step should be woven into those existing processes.
What About Implementing Security Into the CI/CD Lifecycle?
In the age of Agile design, continuous integration, and continuous deployment, existing platforms and test cases can be tweaked to include security measures. In a CI/CD platform such as Jenkins, there may already be a variety of test cases to promote a standard QA baseline. Adding modules and test cases that perform security scans and check for common vulnerabilities is a relatively simple, low-cost way to add security measures to the SDLC.
Inspect how the application responds to boundary and malformed values such as 0, 1, -1, "abc", and NaN. It’s also important to check for common injection payloads like <script>alert('XSS')</script> or '; drop table bobby;--.
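The boundary and injection checks above can be sketched as a small test harness. The parse_quantity handler below is a hypothetical stand-in for an application input endpoint; the desired behavior is that every invalid payload is rejected with a controlled error rather than being accepted or causing a crash.

```python
# Hypothetical input handler standing in for an application endpoint:
# a quantity field that must be an integer between 1 and 999.
def parse_quantity(raw: str) -> int:
    try:
        value = int(raw)
    except ValueError:
        raise ValueError("not a number")
    if not 1 <= value <= 999:
        raise ValueError("out of range")
    return value

# Boundary values and injection payloads drawn from the text above.
BOUNDARY_INPUTS = ["0", "1", "-1", "abc", "NaN", ""]
INJECTION_INPUTS = ["<script>alert('XSS')</script>", "'; drop table bobby;--"]

def run_security_cases():
    """Return payloads that were wrongly accepted or crashed the handler."""
    failures = []
    for payload in BOUNDARY_INPUTS + INJECTION_INPUTS:
        try:
            parse_quantity(payload)
        except ValueError:
            continue  # rejected with a controlled error: the desired outcome
        except Exception as exc:
            failures.append((payload, repr(exc)))  # uncontrolled crash: a bug
        else:
            if payload != "1":  # "1" is the only valid input in the lists
                failures.append((payload, "accepted invalid input"))
    return failures
```

In a CI job, a harness like this runs alongside ordinary QA test cases, so security coverage grows with the existing suite instead of requiring a separate process.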
Perform comprehensive static analysis as a pre-deployment scan. Examine findings of medium severity and above for validity. Jenkins already supports a plugin for Synopsys’ comprehensive static analysis solution. Configure the tool to suit the needs of your organization.
How Do You Secure an App Post-Deployment?
You can still test a deployed application in a variety of ways. From utilizing the Burp Scanner to hiring penetration testers, it’s important to ensure that previously implemented controls are effective within the application. Software composition analysis identifies components used that may have known vulnerabilities or licensing issues. Fuzz testing determines whether APIs and/or protocols are implemented correctly. You can also perform interactive application security testing (IAST) to identify sources of potential data leaks.
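As a minimal sketch of the fuzz-testing idea, the harness below feeds random printable strings to a hypothetical parser and records any uncontrolled crash. The target, seed, and iteration count are assumptions for illustration; real fuzzers that exercise deployed APIs and protocols are far more sophisticated.

```python
import random
import string

def fuzz(target, iterations=1000, seed=1234):
    """Feed random printable strings to `target`; collect uncontrolled crashes."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    crashes = []
    for _ in range(iterations):
        length = rng.randrange(64)
        data = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(data)
        except ValueError:
            pass  # controlled rejection is acceptable behavior
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

# Hypothetical target: a naive key=value parser.
def parse_pair(data: str):
    key, value = data.split("=", 1)  # raises ValueError when "=" is absent
    return key.strip(), value.strip()
```

Here fuzz(parse_pair) comes back empty because the parser only ever fails with a controlled ValueError; any other exception type would be reported as a crash worth investigating.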
Security Is More Than a Checkbox
Security measures are relevant throughout the SDLC—even before quality is a consideration. Design application architecture and requirements with security in mind. By the time the process moves into the development and implementation phases, the flaws that often force an application redesign will already have been squashed.
During development, engage an engineering workforce that understands that security is as much their responsibility as it is the security team’s. This care, combined with the relevant tools and adherence to the secure coding guidelines, makes hardening code against common attack patterns and bugs more effective and efficient.
With a bug- and flaw-resistant SDLC, the verification phase of a CI/CD or traditional QA-driven process leaves extra time to run static code analysis tools on the back end. This fine-tuning pass over the code catches many mistakes and oversights through automation.
What’s the Key Takeaway?
The key here is to treat all phases within the SDLC as opportunities to implement both security and quality.
Reduce the number of vulnerabilities present at the end of the application’s life cycle by taking measures along the way to secure it. Implementing security features alongside quality measures matures application design, lowering costs while raising the application’s security posture.