There are several key methods and techniques for incorporating DevSecOps into the SDLC. Less mature DevSecOps programs typically use only a few of these approaches, while more mature programs may use several.
Code Signing for Software Integrity
Code signing is a foundational DevSecOps practice that ensures that software artifacts have not been tampered with and originate from a trusted source. By cryptographically signing executables, containers, scripts, and Infrastructure-as-Code (IaC) templates, organizations can enforce trust, integrity, and provenance across the SDLC.
Code signing helps:
- Ensure software integrity and authenticity
- Build trust across internal and external software supply chains
- Block unsigned or malicious artifacts from being deployed
- Meet compliance standards
- Reduce risk of malware injection through compromised or spoofed releases
Code signing provides irrefutable proof of authorship and content integrity, making unauthorized modifications detectable. It also supports legal non-repudiation by ensuring you can prove, cryptographically, that a specific version of software was (or was not) created and released by your organization.
To be effective at scale, code signing must be integrated into the SDLC and automated through CI/CD pipelines, as manual signing processes are slow and error-prone. Integrated workflows using tools like Sigstore (Cosign) or GitHub Actions enable continuous signing, verification, and artifact validation without interrupting release velocity.
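The sign-then-verify flow that such pipelines automate can be sketched as follows. This is a simplified stand-in, not how Cosign works internally: it uses Python's stdlib HMAC in place of asymmetric signatures, and the key value is a hypothetical placeholder (in practice the key would live in an HSM or KMS, never in code).

```python
import hashlib
import hmac

# Hypothetical placeholder key; real pipelines fetch asymmetric keys from
# an HSM/KMS, and a tool like Cosign handles signing and verification.
SIGNING_KEY = b"demo-key-from-kms"

def sign_artifact(artifact: bytes, key: bytes = SIGNING_KEY) -> str:
    """Produce a hex signature over the artifact's raw bytes."""
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

def verify_artifact(artifact: bytes, signature: str, key: bytes = SIGNING_KEY) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_artifact(artifact, key), signature)

artifact = b"container-image-layer-bytes"
sig = sign_artifact(artifact)
assert verify_artifact(artifact, sig)             # untampered artifact passes
assert not verify_artifact(artifact + b"x", sig)  # any tampering is detected
```

A pipeline gate built on this pattern simply refuses to deploy any artifact whose signature fails verification.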
Private signing keys must be protected using secure, access-controlled environments such as hardware security modules (HSMs) that are FIPS 140-2 Level 2 or EAL4+ certified, or cloud-native services like AWS KMS, Azure Key Vault, or GCP KMS. Never store keys in plaintext files, source code repositories, or local machines.
Implement automated key rotation policies to reduce the window of opportunity for key misuse. Rotate keys regularly, and ensure proper key revocation procedures are in place. It's also crucial to go beyond key rotation:
- Define clear policies for key generation, usage, storage, expiration, renewal, and revocation
- Enforce role-based access controls, separation of duties, and least privilege to minimize the risk of key compromise
Governance policies should guide developers and release engineers on compliant practices, audit readiness, and regulatory requirements.
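As a small illustration of enforcing such a policy in automation, a pipeline step might check a key's age against the rotation period. The 90-day period and function names below are assumptions; set them per your own governance rules.

```python
from datetime import datetime, timedelta, timezone

# Assumed policy: rotate signing keys every 90 days.
ROTATION_PERIOD = timedelta(days=90)

def key_needs_rotation(last_rotated: datetime, now: datetime) -> bool:
    """True if the key's age meets or exceeds the rotation period."""
    return now - last_rotated >= ROTATION_PERIOD

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
fresh = datetime(2024, 5, 1, tzinfo=timezone.utc)  # 31 days old
stale = datetime(2024, 1, 1, tzinfo=timezone.utc)  # 152 days old
assert not key_needs_rotation(fresh, now)
assert key_needs_rotation(stale, now)
```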
Success Metrics
Like any business initiative, a DevSecOps program should have objectives and measurements to determine if those objectives are being met. And every DevSecOps practitioner needs to know how to optimize their unique program using data, metrics, and risk management objectives. For example, a risk management objective for DevSecOps could be to "Reduce the probability of attackers causing critical applications to stop functioning."
A typical metric that organizations use to help measure DevSecOps success is defect density: the number of vulnerabilities divided by lines of code, typically normalized per 1,000 lines of code (KLOC). Additional integrity-focused metrics can include:
- Percentage of signed builds/releases
- Time since last key rotation
- Number of unauthorized artifacts blocked due to signature mismatch
These metrics help track the maturity of secure software delivery practices and supply chain protection.
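Both defect density and the signed-build percentage reduce to simple ratios; a minimal sketch (function names are illustrative):

```python
def defect_density(vulnerabilities: int, lines_of_code: int) -> float:
    """Vulnerabilities per 1,000 lines of code (KLOC)."""
    return vulnerabilities / (lines_of_code / 1000)

def signed_build_pct(signed_builds: int, total_builds: int) -> float:
    """Percentage of builds/releases that carry a valid signature."""
    return 100 * signed_builds / total_builds

# 30 vulnerabilities across 60,000 lines of code -> 0.5 per KLOC
assert defect_density(30, 60_000) == 0.5
# 45 of 50 builds signed -> 90% coverage
assert signed_build_pct(45, 50) == 90.0
```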
A security metric often measures activities to provide decision support to perform functions better in the future. This data can help answer questions that an executive or operator might have about a particular area (e.g., source code review) using evidence-based information instead of opinion or anecdotes. Measuring results and performance is crucial to an organization's effectiveness, and this applies equally to DevSecOps.
Defect Discovery and Testing
Penetration tests (also known as "pen tests") are a type of manual security testing that provides insight into an application's security by systematically reviewing its features and components. This type of exercise improves the security coverage because the test is intended to explore the complete app rather than just focus on one type of vulnerability or particular section. Pen tests follow methodologies related to topics like input validation, authentication, and access controls to identify flaws in the app's implementation.
Pen Testing as a Service (PTaaS) provides on-demand manual penetration testing for web applications, mobile applications, and APIs. Findings are delivered through a platform that integrates with developer tracking systems like JIRA and GitHub. A SaaS platform also facilitates collaboration between pen testers, security team members, and development teams to not only find but also fix issues.
Security scanners can be programmed to automatically identify certain vulnerabilities. DevSecOps scanners come in two flavors:
- Static application security testing (SAST) scanners examine an application's source code, binary, or byte code.
- Dynamic application security testing (DAST) scanners examine the application from the outside when it is running.
These scanners often support the creation of software bills of materials (SBOMs), which list all software components and their versions. SBOMs provide visibility into dependencies — especially third-party and open-source ones — and help assess risk across applications. They are essential for identifying vulnerabilities tied to widely used libraries and enable faster response to CVEs or zero-days. SBOMs also support compliance and are often required during vendor assessments, alongside penetration test results or policy documentation. DevSecOps teams can generate SBOMs using integrated scanner outputs or standalone tools.
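As an illustration of how an SBOM speeds vulnerability response, a CycloneDX-style component list can be cross-checked against an advisory feed in a few lines. The SBOM fragment and the `KNOWN_VULNERABLE` mapping below are simplified assumptions, not a real feed or the full CycloneDX schema.

```python
import json

# Minimal CycloneDX-style SBOM fragment (the real format carries far more metadata).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"name": "log4j-core", "version": "2.14.1", "type": "library"},
    {"name": "requests",   "version": "2.31.0", "type": "library"}
  ]
}
"""

# Hypothetical advisory feed mapping package name -> affected versions.
KNOWN_VULNERABLE = {"log4j-core": {"2.14.1"}}

def affected_components(sbom: dict) -> list:
    """Return names of SBOM components whose versions appear in the advisory feed."""
    return [
        c["name"]
        for c in sbom.get("components", [])
        if c["version"] in KNOWN_VULNERABLE.get(c["name"], set())
    ]

sbom = json.loads(sbom_json)
print(affected_components(sbom))  # ['log4j-core']
```

With an up-to-date SBOM on file, answering "are we exposed to this CVE?" becomes a lookup rather than a rescan of every application.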
Code review is the manual review of one developer's code by another developer. It's intended to find mistakes and improve code quality. Similarly, secure code review is a manual code review by a security expert, which is intended to find coding errors that may introduce security vulnerabilities. Secure code review is a manual process that often leverages SAST technology. Every so often, a security researcher not directly associated with an organization will discover and report a security vulnerability, which is called vulnerability disclosure.
Table 3: AI tools that support code reviews

| Tool | AI Capability | Use Case |
|------|---------------|----------|
| CodeQL: semantic code analysis tool to find vulnerabilities using custom queries | Query logic simulates reasoning (not pure ML/AI) | Secure code review and vulnerability detection |
| DeepSource (Community): static analysis tool for security and performance insights with AI-based heuristics | Uses trained models for pattern recognition | Automated code review for Python, Go, and JavaScript |
| SonarQube (Community): static code analyzer for bugs, code smells, and security issues | Heuristic analysis; limited AI in community edition | Code quality and security scanning in CI/CD |
| Semgrep: lightweight static analysis tool with customizable rules for secure coding | Supports context-aware rule writing; AI extensions available | Fast, customizable secure code scanning |
| Bearer: scans source code for privacy and data security risks, especially around PII | Heuristic-based detection with privacy-focused rules | Data flow and privacy security scanning |
A bug bounty is a type of vulnerability disclosure program that leverages a crowd of globally sourced researchers in competition. In a public bug bounty, anyone in the world can submit a potential security vulnerability to an organization, and the first to find a valid bug will be paid a "bounty."
Teamwork
Once security testing has identified potential issues, the next step is to collaborate across teams to prioritize, address, and prevent them. The development team is a critical stakeholder when it comes to prioritizing fixes, remediating issues, and ideally, preventing the same issues from coming up again.
Known vulnerabilities often have threat rankings already that can be used with your own internal criteria to set severity and time frame for changes. There are also useful resources on how to mitigate or fix issues for known vulnerabilities from the National Vulnerability Database (NVD) and dev communities.
Now that you have inventoried your software and identified security issues, you need to keep track of what has been tested, by what means, and when. Monitor the findings from each security test and prioritize any necessary bug fixes or feature enhancements. In doing so, use business context to better understand which issues matter the most and work with development teams to fix those first.
Make sure you always know which issues are open and which have been addressed and can be closed. Then, report summary information to the relevant stakeholders so that everyone is always aware of the current status.
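One way to sketch this business-context prioritization is to weight each open finding's base severity (e.g., its CVSS score) by the criticality of the affected application. The weights, field names, and scores below are illustrative assumptions; replace them with your organization's own criteria.

```python
# Assumed business-criticality weights; tune to your own risk criteria.
CRITICALITY_WEIGHT = {"crown-jewel": 2.0, "internal": 1.0, "low-impact": 0.5}

findings = [
    {"id": "VULN-1", "cvss": 9.8, "app_tier": "internal",    "status": "open"},
    {"id": "VULN-2", "cvss": 6.5, "app_tier": "crown-jewel", "status": "open"},
    {"id": "VULN-3", "cvss": 9.1, "app_tier": "low-impact",  "status": "closed"},
]

def triage(findings: list) -> list:
    """Rank open findings by criticality-weighted severity, highest first."""
    open_findings = [f for f in findings if f["status"] == "open"]
    return sorted(
        open_findings,
        key=lambda f: f["cvss"] * CRITICALITY_WEIGHT[f["app_tier"]],
        reverse=True,
    )

print([f["id"] for f in triage(findings)])  # ['VULN-2', 'VULN-1']
```

Note how business context reorders the queue: the lower-scoring finding on the crown-jewel application outranks the higher-scoring one on an internal system, and closed findings drop out of the report entirely.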
Proactive Techniques
The best DevSecOps training for developers is based on real security findings, whether those findings surfaced during an actual security incident or through manual penetration testing. The OWASP Top 10 lists the most common application security risks; however, each organization will have its own unique top 10 list. Within your own organization, use this information to prevent entire categories of security vulnerabilities by implementing developer-focused training.
Threat modeling is a type of design-level security assessment that is intended to examine the way an application system works to identify potential flaws. The process involves analyzing assets, security controls, and threat actors within an application system. When flaws are detected using threat modeling before software implementation, some security problems can be avoided.
A few examples of preventive security controls informed by threat modeling:
- Cross-site request forgery tokens prevent cross-site request forgery attacks.
- A content security policy defines an allowlist of assets that the browser is permitted to load and execute, thus minimizing the impact of cross-site scripting exploits.
- HTTP Strict Transport Security (HSTS) instructs browsers to connect only over HTTPS, protecting data in transit by preventing fallback to unencrypted traffic.
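As a minimal sketch of the first control above: a CSRF token must be unpredictable and must be compared in constant time. The helper names and token length here are assumptions; web frameworks typically provide this machinery built in.

```python
import hmac
import secrets

def new_csrf_token() -> str:
    """Generate an unpredictable per-session CSRF token."""
    return secrets.token_urlsafe(32)

def csrf_token_valid(session_token: str, submitted_token: str) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(session_token, submitted_token)

token = new_csrf_token()           # stored server-side and embedded in the form
assert csrf_token_valid(token, token)            # legitimate submission passes
assert not csrf_token_valid(token, new_csrf_token())  # forged request fails
```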
Other security issues can be avoided by securely configuring the software environment, for example, by following the Amazon CIS benchmark to harden AWS accounts and cloud services. Similarly, code signing acts as a preventive control by making unauthorized modification of software detectable.
- Mandate digital signing of executables, containers, and IaC artifacts.
- Use signed commits and verified authorship in version control systems (e.g., Git with GPG or Sigstore).
- Educate teams on the importance of signature trust chains and how to validate them.
There are various tools meant to protect an application by identifying and stopping malicious activity while the application is running:
- Web application firewalls (WAFs) examine web traffic to identify and block suspicious activity, such as comment spam, XSS, and SQL injection attacks.
- Runtime application self-protection (RASP) operates in the runtime environment to monitor, detect, and alert in real time.
- Interactive application security testing (IAST) works inside an application, typically in a QA environment, to analyze code and report vulnerabilities.
Both WAFs and RASP can run in either "detect and alert" or "detect, alert, and block" mode. They are most effective at preventing security issues in blocking mode; however, blocking mode forces the business to accept the risk of blocking legitimate application activity along with the malicious activity.
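At its core, a WAF rule engine matches request payloads against detection rules. The sketch below is deliberately naive and for illustration only; production WAFs add input normalization, protocol parsing, and anomaly scoring far beyond raw regexes, and the rule patterns here are assumptions.

```python
import re

# Deliberately naive rule set; real WAF rules are far more robust.
RULES = {
    "sql_injection": re.compile(r"(?i)\bunion\b.*\bselect\b|' *or *'1' *= *'1"),
    "xss": re.compile(r"(?i)<script\b"),
}

def inspect(request_body: str) -> list:
    """Return the names of rules the request triggers; empty means allow."""
    return [name for name, pattern in RULES.items() if pattern.search(request_body)]

assert inspect("id=1' OR '1'='1") == ["sql_injection"]
assert inspect("<script>alert(1)</script>") == ["xss"]
assert inspect("q=hello world") == []   # benign traffic passes through
```

In "detect and alert" mode, a non-empty result would only be logged; in blocking mode, it would cause the request to be rejected, which is exactly where the false-positive trade-off described above arises.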
The "Three Ways" of Security
For decades, both software and security have struggled with poor-quality results, cost overruns, and processes that require experts. While DevOps has shown promise on the software side, security is still practiced in very traditional ways. DevSecOps is not just shoving traditional security practices and tools into DevOps.
Instead, we must rethink security work, and we will need new practices and technologies to do so. We can give this transformation structure using the "Three Ways" from The Phoenix Project. By framing the problem this way, we can see that we need to:
- Get security work flowing – Most security work is monolithic and attempts to cover all risks in a single task, like a complete security architecture or security scan.
- Ensure instant security feedback – Security is one of the most common causes of technical debt, and the cost of remediation increases dramatically the later it occurs in the SDLC. Reasons include a lack of security knowledge among developers and a shortage of security specialists.
- Create a security culture – Many organizations have a security culture of blind trust, blame, and hiding that prevents developers and operations from working with security.
How to tackle the "Three Ways" of security:
- Get your security work flowing
- Make the work visible
- Work a single security challenge at a time
- Limit work in progress and reduce handoffs
- Automate everything in your CI/CD pipeline
- Ensure instant security feedback
- Increase awareness about the importance of security
- Identify potential problems
- Make problems instantly visible
- Swarm on the problem and seek the cause
- Ensure security "findings" are designed for easy consumption
- Focus on providing a solution rather than exaggerating the problem
- Encourage a culture of security
- Empower everyone to challenge security design and implementation
- Take every opportunity to make security threats, policies, architecture, and vulnerabilities visible
- Allow everyone on the team to participate in security
- Trust that engineering teams want to do the right thing
- Celebrate the knowledge gleaned from security issues rather than blaming those involved
- Dedicate more time and effort to improving practices and preventive measures than to vulnerability remediation and incident response
- Plan trainings and conduct workshops to maintain continuous security throughout all teams
By following these core approaches, teams will see security as a concrete output from the development process. It is a combination of security features and assurance captured in a tangible way. By applying DevOps concepts, we can produce this concrete security continuously and effectively as a part of standard software development.
The "Five Ideals"
Six years after The Phoenix Project was released, The Unicorn Project was published in 2019. The Unicorn Project is not the sequel to The Phoenix Project. In fact, the stories of both novels take place along the same timeline and provide two different perspectives. The Phoenix Project introduces the "Three Ways" of security, whereas The Unicorn Project introduces "The Five Ideals."
Gene Kim, the author of both books, introduces "The Five Ideals" to frame today's modern business and engineering challenges:
- Locality and simplicity relates to the degree to which a development team can make local code changes in a single location without impacting various teams.
- We need to design things so that we have locality in our systems and the organizations that build them. We need simplicity in everything we do.
- The last place we want complexity is internally, whether it's in our code, organization, or processes.
- Focus, flow, and joy is all about how our daily work feels.
- Is our work marked by boredom and waiting for other people to do things for us?
- Do we blindly work on small pieces of the whole, only seeing the outcomes of our work during deployment when everything blows up, leading to firefighting, punishment, and burnout? Or do we work in small batches, ideally single-piece flow, getting fast and continual feedback on our work?
- These are the conditions that allow for focus and flow, challenge, learning, discovery, mastering our domain, and even joy. This is what being a developer means.
- Improving daily work addresses paying down technical debt and improving architecture.
- When technical debt is treated as a priority and paid down, and architecture is continuously improved and modernized, teams can work with flow, delivering better value sooner, safer, and happier.
- The business ultimately wins when developers can meet enterprise performance goals.
- Psychological safety is one of the top predictors of team performance.
- When team members feel safe to talk about problems, problems can not only be fixed but also prevented. Solving problems requires honesty, and honesty requires an absence of fear.
- In knowledge work, psychological safety should be treated with the same importance as physical safety is treated in manufacturing.
- Customer focus relates to the difference between core and context as defined by Geoffrey Moore.
- Core is what customers are willing and able to pay for, the bread and butter of your business, while context is what they don't care about and what it took to get them that product, including all of an organization's back-end systems (e.g., HR, marketing, development).
- It's critical to look at these context systems as essential and fund them appropriately. Context should never kill the core.