SSDL at a Glance
The Secure Software Design Lifecycle (SSDL) is the infusion of security with each step of software design, from architecture to end of life. By doing so we protect our customers from threats and reduce our risk and attack surface. There are no specific tools or frameworks mentioned in this article. You should choose the ones that work best for your context, based on compliance needs, cost, and labor.
The Software Lifecycle
The software design lifecycle (SDLC) comprises multiple steps. For the purpose of this article, we'll settle on the following definition of the SDLC:
Design
Implementation and Test
Maintenance
End of Life
Below, we’ll review what each of these steps looks like when you consider security and provide common tools and frameworks to integrate.
1. Design
Security begins with design, whether you are starting a brand new project, creating a new feature or component, or integrating a new vendor. There are two critical documents to produce as part of this step: a data flow diagram and a threat model. In addition to these two documents, you will review third parties (libraries, SDKs, APIs, etc.) and ensure they meet your minimum security standards. This is also where you would begin to involve your security organization (if you have one) to review your design and the information below.
As I’ve mentioned before (rule #2 of The Glue that Binds Us), you should trust no one (not even yourself). This “zero trust” model of security is critical when considering the implementation of third parties of any kind. This should include code reviews where open source software (OSS) is involved, benchmarking, known CVEs, and whether the third party indicates the use of a standard security framework (e.g., alignment with NIST).
Data Flow Diagram
A data flow diagram (DFD) is intended to capture the flow of data both within and external to your architecture. Most importantly, it identifies where data crosses trust boundaries, that is, where data's level of trust changes (e.g., from an untrusted source, such as the internet, to a trusted one, after sanitization within your network).
This diagram helps to identify critical connection points and potential risks on which to focus attention with regard to security.
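The boundary-crossing idea can be sketched in a few lines of code. This is a minimal, hypothetical model: the node names, trust zones, and flows below are illustrative assumptions, not a prescribed architecture.

```python
# Hypothetical sketch: a DFD as a list of edges, where each node is assigned
# a trust zone. Edges whose endpoints sit in different zones cross a trust
# boundary and deserve extra security attention.

TRUST_ZONES = {
    "browser": "untrusted",   # external client on the internet
    "api_gateway": "dmz",     # authentication and sanitization happen here
    "app_server": "trusted",
    "database": "trusted",
}

DATA_FLOWS = [
    ("browser", "api_gateway"),
    ("api_gateway", "app_server"),
    ("app_server", "database"),
]

def boundary_crossings(flows, zones):
    """Return the flows whose source and destination are in different trust zones."""
    return [(src, dst) for src, dst in flows if zones[src] != zones[dst]]

crossings = boundary_crossings(DATA_FLOWS, TRUST_ZONES)
```

Here the first two flows cross boundaries and would be flagged for review, while the app-server-to-database flow stays inside the trusted zone.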
Threat Model
A threat model, as it sounds, is used to enumerate potential threats to your service, feature, etc. As an example, when designing a web application, you should consider sources such as the OWASP Top 10 when reviewing your risk, as well as the data flow diagram mentioned above.
With the threat model in hand, you can account for any gaps or risks in your implementation plan. This should mean either planning to address these gaps or risks, identifying mitigations for them, or, in some cases, documenting the reasons behind not resolving or mitigating them. An example would be if the risk and impact is low or presents an extremely small or unlikely attack surface.
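One lightweight way to make the fix/mitigate/accept decision repeatable is a simple risk score. The likelihood-times-impact scheme and the thresholds below are assumptions chosen for illustration, not a standard.

```python
# Illustrative sketch of triaging threat-model entries. Scores use a 1-3
# likelihood and a 1-3 impact scale; the thresholds are example choices.

def triage(likelihood: int, impact: int) -> str:
    """Map a threat's risk score to a required action."""
    score = likelihood * impact
    if score >= 6:
        return "fix"       # plan work to eliminate the gap
    if score >= 3:
        return "mitigate"  # add a compensating control
    return "accept"        # document why it is left unresolved

threats = [
    {"id": "T1", "desc": "SQL injection via search form", "likelihood": 3, "impact": 3},
    {"id": "T2", "desc": "Log tampering on isolated host", "likelihood": 1, "impact": 2},
]
for t in threats:
    t["action"] = triage(t["likelihood"], t["impact"])
```

Whatever scheme you choose, the key is that "accept" decisions end up written down with their rationale, not made silently.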
2. Implementation and Test
During the implementation and test phase, the rubber hits the road. Ensure you are (manually) updating your DFD and threat model appropriately as you learn more during this phase. Automation during this phase will reduce risk and labor, and is necessary in order to meet security compliance standards (such as SOC 2).
Change Repository
A change repository is where you will go to update your code, configuration, policies, and infrastructure. It may contain Terraform, Ruby, or anything in between, but ultimately the way you enforce, log, and audit these changes should be consistent across all of your assets.
A number of configuration options must be employed to meet minimum security standards in your repository:
2FA: All contributors to your code must have accounts with 2FA enabled.
Commit signing: All commits must be signed.
No one may merge directly into your main/release branch.
Multiple (2+) approvers who know the product/feature/component must approve your change before the merge.
All required checks must pass (see CI below).
No one (even an administrator) is exempt from these rules.
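As a sketch, the checklist above can be encoded as an automated policy audit. The configuration field names here are assumptions for illustration and do not correspond to any particular hosting provider's API.

```python
# Hypothetical sketch: validate a repository's settings against the minimum
# standards listed above, returning human-readable violations.

REQUIRED_APPROVALS = 2

def policy_violations(repo: dict) -> list:
    """Return the branch-protection policy rules this repo configuration breaks."""
    violations = []
    if not repo.get("require_2fa"):
        violations.append("2FA not enforced for contributors")
    if not repo.get("require_signed_commits"):
        violations.append("commit signing not required")
    if repo.get("allow_direct_merge_to_main"):
        violations.append("direct merges to main are allowed")
    if repo.get("required_approvers", 0) < REQUIRED_APPROVALS:
        violations.append("fewer than 2 required approvers")
    if not repo.get("required_checks_must_pass"):
        violations.append("required status checks can be bypassed")
    if repo.get("admins_exempt"):
        violations.append("administrators are exempt from the rules")
    return violations

compliant = {
    "require_2fa": True,
    "require_signed_commits": True,
    "allow_direct_merge_to_main": False,
    "required_approvers": 2,
    "required_checks_must_pass": True,
    "admins_exempt": False,
}
```

Running such an audit on a schedule (or on every settings change) turns the policy from a document into an enforced control.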
Continuous Integration (CI)
Public threats can be identified and eliminated during the build process (a.k.a. continuous integration, or CI). Using out-of-the-box tooling for your language or framework (like npm audit) or vendors to perform dynamic and static application security testing (DAST and SAST, respectively) goes a long way toward reducing your risk. Software composition analysis (SCA) tooling can also identify "bad licenses," something I recommend to protect your business from library misuse, which could open your business to potential litigation or fees. Make these steps required checks that must pass in order to merge code, ensuring that known vulnerabilities and bad coding patterns and practices never enter your main or release branches.
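A required CI check of this kind often reduces to a severity gate over scanner output. The simplified findings format below is an assumption (real tools such as npm audit emit richer JSON); the gating logic is the point.

```python
# Sketch of a CI gate over dependency-scan findings: fail the build when any
# finding is at or above the blocking severity. Severity names follow the
# common low/moderate/high/critical scale.

SEVERITY_RANK = {"low": 0, "moderate": 1, "high": 2, "critical": 3}

def should_block(findings, block_at="high"):
    """Return True if any finding meets or exceeds the blocking severity."""
    threshold = SEVERITY_RANK[block_at]
    return any(SEVERITY_RANK[f["severity"]] >= threshold for f in findings)
```

The `block_at` threshold is a policy decision: stricter for payment or auth services, perhaps looser for internal tools.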
You will also ensure some minimum level of unit test and integration test coverage. The goal of this is to thoroughly test critical portions of code (like authentication/authorization, payments, etc.), so while there is no standard designated number, you should carefully consider the context in which you are developing and set some reasonable limit (e.g., 80% coverage). Unit test coverage will focus on functions and edge cases, whereas integration test coverage will focus on workflows and edge cases.
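A coverage gate along these lines might look like the following sketch, where the 80% default comes from the example above and the stricter per-module limits for critical code are an assumption.

```python
# Minimal sketch of a coverage gate: per-module line-coverage percentages are
# checked against a default minimum, with optional stricter thresholds for
# critical modules (auth, payments, etc.).

def coverage_ok(coverage: dict, default_min: float = 80.0, critical_min: dict = None) -> bool:
    """Return True only if every module meets its minimum coverage threshold."""
    critical_min = critical_min or {}
    for module, pct in coverage.items():
        if pct < critical_min.get(module, default_min):
            return False
    return True
```

Wiring this into CI as a required check makes the limit self-enforcing rather than aspirational.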
At the end of this build process, the CI will create versioned, signed artifacts (binaries, Docker images, etc.) that represent release candidates, and are written to a secure location using a secure service account(s).
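At minimum, a versioned artifact record needs a content fingerprint the CD system can verify later. Real pipelines use cryptographic signing (e.g., GPG or Sigstore); in this sketch a SHA-256 digest stands in, and the record shape is an assumption.

```python
# Sketch: fingerprint a release candidate so deployment can later verify it
# ships exactly what CI built. A digest is not a signature, but the record
# shape is the same idea.
import hashlib

def artifact_record(name: str, version: str, data: bytes) -> dict:
    """Build an immutable entry for the system of record."""
    return {
        "artifact": name,
        "version": version,
        "sha256": hashlib.sha256(data).hexdigest(),
    }

record = artifact_record("myservice", "1.4.2", b"binary-bytes-here")
```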
All of this information is written to a system of record, one that cannot be modified and is visible to the right audience.
Continuous Deployment (CD)
Once you’ve completed development and are ready to move your code to a production environment, an automated and restricted continuous deployment (CD) mechanism will enable a select group of people, with approval, to do so. Access to deploy to production (and pre-production) environments must be restricted to only those who need it, and the deployment itself must be executed by a secure service account(s).
When a deploy is requested, a number of checks are built into CD:
Validate that the release request pertains to a valid release target from a previous CI build.
Validate that tests run against that release target one final time, writing the results to a system of record (as above).
Validate that the release has been approved.
This is (probably) a different set of people from your repository approvers above. In a very mature process, this would be someone in your peer group, followed by someone representing the business, and potentially even someone representing a change management group. This depends on the risk associated with a change, and the urgency. Mature organizations classify risk by project/service and have different paths for regular deployments vs. emergencies.
During deployment, the person performing the deployment should be able to monitor the outcomes of each deployment. Your deploy strategy may vary by context (e.g., a blue/green deploy vs. a rolling deploy). You will use some kind of canary and infra/app/service isolation to minimize the impact of something going wrong. You will also build an easy rollback mechanism where possible that can be executed as a one-click follow-on to a failed deploy.
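The canary-then-promote flow with a rollback path can be sketched as follows. The `deploy`, `healthy`, and `rollback` callables are stand-ins for whatever your platform provides; only the control flow is the point.

```python
# Hypothetical sketch of a canary release with automatic rollback: deploy to
# a small canary environment first, and promote to the full fleet only if the
# canary stays healthy.

def canary_release(version, deploy, healthy, rollback,
                   canary="canary", fleet="production"):
    """Deploy to a canary first; promote only if it passes its health check."""
    deploy(canary, version)
    if not healthy(canary):
        rollback(canary)       # contain the blast radius; one-click recovery
        return "rolled_back"
    deploy(fleet, version)     # promote to the full fleet
    return "promoted"
```

In practice the health check would watch error rates and latency for a soak period, and rollback would redeploy the previous known-good artifact.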
Success, everything went well! Canaries and smoke testing and whatever else came back green and you’ve completed your rollout. Now, you must ensure your observability is up to the task of assessing immediate changes to your performance, resource utilization, etc. If you discover such a change, refer to your rollback strategy.
3. Maintenance
Software is a garden, or a car, or whatever your favorite analogy is. It requires tending and tuneups, and once it is in use you must provide your customers with updates that keep them safe.
Patching and Vulnerability Management
Having a patching plan in place is critical for security. You might discover a new vulnerability through your regular CI builds, through a pen test, or through bug bounty; but however you identify a vulnerability in production, you must have a well-defined process to patch and release an update.
In this vein, having a well-defined priority for reported issues, and well-defined SLAs in which to resolve those issues is part of a mature SSDL. You may have a team dedicated to governance, risk, and compliance (GRC), or maybe it’s just you. Either way, you will commit to resolving all security issues in a timely fashion.
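Severity-based SLAs reduce to a small lookup plus date arithmetic. The day counts below are common industry examples, not a mandated standard; pick numbers that match your compliance context.

```python
# Sketch of severity-based remediation SLAs: given the date an issue was
# reported and its severity, compute the date by which it must be resolved.
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def due_date(reported: date, severity: str) -> date:
    """Return the remediation deadline for a reported security issue."""
    return reported + timedelta(days=SLA_DAYS[severity])
```

Tracking open issues against these deadlines (and escalating breaches) is the kind of evidence auditors expect from a mature GRC function.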
Some compliance standards require you to perform an external pen test and incorporate the results into your project plan. This may mean performing them quarterly, or yearly. Even if you have the luxury of working directly with an internal red team, you will hire a vendor to perform pen-testing at the cadence prescribed by your compliance context.
You will also run a bug bounty program, providing an environment and the requisite setup so that security researchers can identify and report issues in your projects.
4. End of Life (EOL)
At some point, all software goes away (even Internet Explorer 6). Whether you are moving a particular version of your software out of support or killing an entire project, you will create a communication plan, and potentially a migration path that ensures your customers remain protected from threats.
As you create this plan, consider the time and cost associated with a migration and what you can do to ensure your customers do so on the timeline you set.
In this article, you learned about the SSDL, and I hope you’ve considered how it can apply to your context. By infusing security into each step of the software design lifecycle, you eliminate or mitigate a large portion of threats against your business.
Opinions expressed by DZone contributors are their own.