Securing a Continuous Delivery Pipeline
An article from DZone's new Guide to Continuous Delivery Volume III, out now!
Security officers have an acute sense of duty. A duty to protect the organization from malicious attacks; to find holes in the organization’s software and infrastructure assets and track their resolution. That often requires saying no to new release deployments, refusing to change network or firewall configurations, or locking down staging environments to avoid last-minute changes creeping in. What happens when you tell them that from now on software delivery is being automated all the way from source control to production, and that deployments will take place on a daily basis? How can they possibly keep up with these new challenges?
Understand the Human Needs of the Security Team
Start by understanding how they get their job done today; go through their workflow to really grasp the limitations and pressures they endure.
Next, explain how a deployment pipeline works and what controls are in place — such as ensuring functional adherence, no regressions, and any quality attributes you might be checking for (performance, reliability, etc). Explain how these controls are visible to everyone and how the pipeline stops when problems are found. Relevant security controls can be incorporated in a similar manner, actually reducing the security officer’s workload and providing early insight on the software’s compliance with security rules.
With a collaborative mindset, security officers, developers, and testers can come together and help each other with their respective gaps in knowledge and experience. Popular techniques that can be adopted for this purpose include pair programming (between a developer and a security person) or the “three amigos” conversation (focused on a feature’s security concerns and testing approach).
Introduce Early Feedback From Security Tests
Moving from manual to automated processes has enabled operations (via infrastructure as code) to fit in the agile lifecycle. Implementations can take multiple forms, but the core benefit is to have everyone involved in delivery tuned to the same flow of delivery, receiving timely and relevant feedback.
Note that this doesn’t imply the deployment pipeline needs to be fully automated, including security controls. In fact, at an early stage it might be advisable to keep manual security controls in the delivery pipeline. Security people get to keep familiar procedures and a sense of control, but there are immediate gains in terms of visibility of their work in the delivery process.
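One lightweight way to keep a manual control visible is to model it as a manually approved stage in the pipeline definition. The fragment below is a rough sketch in GoCD-style XML configuration (the pipeline, stage, job, and script names are all hypothetical, and a complete config would need more elements such as materials):

```xml
<!-- Hypothetical GoCD fragment: a security-review stage that only runs
     when a security officer manually approves it in the pipeline UI. -->
<pipeline name="webapp">
  <stage name="security-review">
    <approval type="manual" />
    <jobs>
      <job name="manual-pentest-signoff">
        <tasks>
          <!-- Illustrative script that records the officer's sign-off -->
          <exec command="./scripts/record-signoff.sh" />
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```

The security officer keeps the final say, but the gate, its status, and its history are now visible to the whole team alongside every other pipeline stage.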
Shared visibility on security issues should be the holy grail for security officers. Security decisions no longer need to be made between a rock (cancel or roll back a release) and a hard place (release with recently found vulnerabilities). Instead, security decisions can be made proactively rather than just piling up security debt release after release.
In this example pipeline, we’re following the recommendations from Jez Humble (co-author of the book Continuous Delivery) to keep deployment pipelines “short and wide.” We can run a subset of indicative security tests as part of the Automated Acceptance Test phase, leaving the bulk of the security tests to an optional phase later on.
We have found with clients in multiple industry sectors that taking a subset of indicative “weathervane” security tests and running these early in the deployment pipeline really helps to build confidence and trust in automated security testing. The weathervane tests show which way the longer suite is likely to go, yet we gain the ability to “stop the line” as soon as one of these early tests fails, leading to faster feedback.
Use Lightweight Security Tools to Enable Greater Focus
In recent years, a number of lightweight, command-line security tools have gained traction. We can distinguish between static analysis tools, security testing (dynamic analysis) tools, and security testing frameworks.
Static analysis tools have long been part of the developer’s toolbelt for checking code quality. The veteran SonarQube and the newer Code Climate are popular examples. SonarQube includes a number of language-specific security rules around protecting code from unwanted usage, hardcoded credentials, etc. Custom checks can be added or, alternatively, you can find language- or framework-specific security tools such as Brakeman for Ruby on Rails or Find Security Bugs for Java. These kinds of tools require little configuration to get started, but they can also generate many false positives initially, until you “groom” the rules.
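To make the idea concrete, here is a toy check in the spirit of a hardcoded-credentials rule, written as a minimal Python sketch. Real analyzers such as SonarQube are far more sophisticated; the regex, function names, and sample code below are purely illustrative.

```python
# Toy static check: flag assignments that look like embedded secrets.
# The pattern is deliberately naive and will produce false positives,
# mirroring the "grooming" phase real rule sets go through.
import re

SECRET_PATTERN = re.compile(
    r'(password|secret|api_key)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE)

def find_hardcoded_secrets(source):
    """Return (line_number, line) pairs that look like hardcoded credentials."""
    return [(i, line.strip())
            for i, line in enumerate(source.splitlines(), start=1)
            if SECRET_PATTERN.search(line)]

code = 'db_user = "app"\ndb_password = "hunter2"\n'
print(find_hardcoded_secrets(code))  # [(2, 'db_password = "hunter2"')]
```

A check like this can run in seconds on every commit, which is exactly why static rules fit so naturally at the front of a deployment pipeline.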
Dynamic analysis tools typically launch a series of mechanical (but configurable) checks with varying degrees of complexity, from ports that shouldn’t be open (for instance using Nmap) to SQL injection exploits (for example with sqlmap). They require a controlled environment where infrastructure, network, and application configuration is known (codified) and repeatable. These kinds of tools require more upfront effort to set up, but the results tend to be more accurate and cover a wide range of (attack) use cases. More importantly, results from both types of analysis can be fed into and visualized in the deployment pipeline fairly easily.
Other security checks and tools that can be incorporated in the pipeline include security scanning (e.g., Zapr or Arachni); inspecting build artifacts for viruses (e.g., ClamAV); or checking external dependencies for known vulnerabilities (e.g., bundler-audit for Ruby, NSP for Node, or SafeNuGet for .NET).
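The dependency-audit style of check can be sketched in a few lines: compare the project's pinned versions against an advisory database. Tools like bundler-audit fetch advisories from a maintained feed; the package names, versions, and advisory data below are entirely made up for illustration.

```python
# Sketch of a dependency-audit gate in the spirit of bundler-audit / NSP.
# ADVISORIES stands in for a real, regularly updated vulnerability feed.
ADVISORIES = {
    # package: versions with known vulnerabilities (illustrative only)
    "examplelib": {"1.2.0", "1.2.1"},
    "webthing":   {"0.9.9"},
}

def vulnerable_pins(pinned):
    """Given {package: version} pins, return those with known advisories."""
    return {pkg: ver for pkg, ver in pinned.items()
            if ver in ADVISORIES.get(pkg, set())}

pins = {"examplelib": "1.2.1", "webthing": "1.0.0"}
print(vulnerable_pins(pins))  # {'examplelib': '1.2.1'}
```

A non-empty result would fail the pipeline stage, forcing the vulnerable pin to be upgraded before the release candidate can progress.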
Finally, security testing frameworks provide a common way to specify and validate security scenarios. They abstract away the security tools being run, allowing higher level security discussions between developers, testers, security officers, and anyone else interested. Instead of debating results a posteriori (often requiring high cognitive effort for developers to remember their changes), a healthy pre-coding discussion takes place, uncovering potential weaknesses/attacks.
Examples of Security Tests in Action
Here we’ll look at an example in action where security tests are run in a pipeline. In this particular case we’re using sqlmap to check for SQL injection vulnerabilities. The pipeline is defined in GoCD, the Continuous Delivery tool from ThoughtWorks Studios: imgur.com/StV5Ecw
In the linear example above, the unit and acceptance tests pass, but the pipeline stops when the security tests fail. If we drill down into the security test job’s execution, we can see why it failed (sqlmap identified injection vulnerabilities): imgur.com/B9pLGNd
At this point we already have a centralized view of how any release candidate ranks in terms of the security controls that were baked into the deployment pipeline. No more compiling results from a number of security tools into a 50-page document that no developer will (voluntarily) read.
Let’s look at another example, now using the gauntlt security framework (another example is BDD-Security) to specify the tests: imgur.com/MaWOJJT
The first scenario uses Nmap to check if a given port is closed as expected. The second scenario uses sqlmap to check for SQL injection vulnerabilities, as in the previous example.
A set of security/attack scenarios is defined, along with a red/yellow/green color association carrying the expected semantics: red means the scenario failed (or, if you will, the attack succeeded); yellow means the result was inconclusive; and green means the scenario passed (the attack failed).
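As a concrete illustration, a gauntlt attack file pairs each scenario with the tool invocation and the expected outcome. The sketch below loosely follows gauntlt's published examples; the hostname, URL, and output patterns are made up, so treat it as a template rather than a working spec.

```gherkin
Feature: Basic attack surface checks for the staging host

  # Hypothetical host; adapt the profile values to your environment.
  Scenario: Verify that an internal-only port is not exposed
    Given "nmap" is installed
    And the following profile:
      | name     | value               |
      | hostname | staging.example.com |
    When I launch an "nmap" attack with:
      """
      nmap -p 8080 <hostname>
      """
    Then the output should match /8080\/tcp\s+closed/

  Scenario: Verify the login form resists simple SQL injection
    Given "sqlmap" is installed
    And the following profile:
      | name       | value                                |
      | target_url | http://staging.example.com/login?u=a |
    When I launch a "sqlmap" attack with:
      """
      python sqlmap.py -u <target_url> --batch
      """
    Then the output should not contain:
      """
      sqlmap identified the following injection points
      """
```

Because the scenarios read as plain language, a security officer can review and extend them without knowing how each underlying tool is invoked.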
Favor Communication and Feedback Over Tool-driven Practices
So now we not only have a centralized view of the security test results, but the results also follow a standardized format that is easy for less technical stakeholders to understand; we can clearly see which scenarios failed without having to understand how the underlying analysis tools work or how they report results.
Are we done once security controls are codified and integrated in the delivery pipeline? No, definitely not. What we have is a basis for trust, a set of validated assumptions about the application’s security.
There are still many grey areas: unknown weaknesses that an attacker will be actively searching for. We still need security officers’ expertise in putting themselves in the role of an attacker, profiling threats, and probing the system for holes specific to the application’s workflow. They too need to be part of the delivery stream, either through manual gates in the deployment pipeline or through activities scheduled at regular intervals, as long as the delivery flow is sustained.
The good news is that we have moved the largely mechanical checks earlier in the pipeline, freeing up more time to explore those grey areas and avoiding wasted time on subpar release candidates.
To recap: automating security checks triggers conversations between developers, testers, and security folks. These conversations get everyone on the same page, promote knowledge sharing, and establish common ground and testable criteria for moving a candidate release through the deployment pipeline.
Throughout this article we’ve presented several practices and tools for bringing together and (re-)building trust between development and security. Each organization should adopt the ones that best fit their needs (and maturity level) to avoid regression to a blame culture. Tools and practices alone don’t lead to more secure systems; we also need frequent and open communication, feedback, and empathy for other people.