
Automating Penetration Testing in a CI/CD Pipeline: Part 3

The final part of a series on using OWASP ZAP to integrate penetration testing into your continuous delivery pipeline using AWS and Jenkins.

By Nick DeClario · Jul. 01, 16 · Tutorial

Introduction

In this third and final post, we’ll cover fully integrating the work we did in the first two posts into a CI/CD pipeline. In the first post, we discussed what OWASP ZAP is, how it’s installed, and how to automate that installation with Ansible. The second post followed up with how to script our penetration tests and manage the results. Strap in for this final article where everything now comes together (cue fanfare).

Wrapping Pen Tests in a CI/CD Pipeline

If you recall the diagram from the first post, we will now take each of those steps and wrap them up into our pipeline. We have the necessary scripts to run the tests and manage the results. What’s left is managing a ZAP server and fetching the information needed to run our penetration tests against the target.

[Figure: ZAP basic CI/CD flow diagram]

ZAP will need to be running on a server or a server must be spun up during the penetration testing phase of your pipeline. If using AWS, an AMI can exist that will make provisioning and destroying a ZAP instance extremely easy. For this we’ll be using Packer. We’ll briefly discuss how to deploy the Packer-based AMI provided in the Stelligent Zap repository, though a detailed look into Packer is a bit beyond the scope of this post.

Start by editing the ‘zap-ami-packer.json’ file to match your environment. This includes configuring the AWS region, EC2 key pair, EC2 instance type, and source AMI image, among other settings. (Please note, Amazon Linux was used in these examples; changing the distribution may require changes to the Ansible playbook.)
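
For reference, the builder section of a Packer template for this purpose looks roughly like the following (the field values are illustrative placeholders; the actual template in the repository may differ):

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "<Amazon_Linux_AMI_ID>",
    "instance_type": "t1.micro",
    "ssh_username": "ec2-user",
    "ami_name": "zap-server-{{timestamp}}"
  }]
}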

Assuming Packer is installed, building the image is as simple as calling the supplied ‘create-image.sh’ script. This requires your AWS credentials to be set in your environment before calling the script.

export AWS_ACCESS_KEY_ID=<your_aws_access_key>
export AWS_SECRET_KEY=<your_aws_secret_key>
./create-image.sh

This will dump an AMI image ID, which your automation scripts can use to launch an EC2 instance.
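
For example, assuming ‘create-image.sh’ prints the ID in Packer’s usual ‘ami-…’ form, it can be captured for the next stage of the pipeline (the exact output format depends on the script):

# Capture the last AMI ID printed by the build script
AMI_ID=$(./create-image.sh | grep -o 'ami-[0-9a-f]*' | tail -1)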

To bring this all together in an effective manner, the penetration testing script needs to be triggered as a step in our CI/CD pipeline. To do this, the script must know where the ZAP server resides and where the target application is, report results in an easily accessible manner, and cause Jenkins to report a correct ‘pass’ or ‘fail’ status for the job itself.

To achieve this, we’ll wrap everything up in a fairly simple Bash script that is called from Jenkins.

To start, we must first determine the ZAP host and the target host/URL of the application to be penetration tested. This will be specific to the environment. If running in AWS, the EC2 or CloudFormation resources could be queried; if the environment is fixed, these values can be passed to the workspace as variables.
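
For example, if the target application is deployed via CloudFormation, its URL could be read from a stack output (the stack name ‘my-app-stack’ and the output key ‘AppUrl’ here are hypothetical):

# Read the target URL from a CloudFormation stack output
TARGET_URL=$(aws cloudformation describe-stacks --stack-name my-app-stack --query 'Stacks[0].Outputs[?OutputKey==`AppUrl`].OutputValue' --output text)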

Whatever your chosen method of discovering these servers, the values need to be passed to ‘pen-test-app.py’ as follows:

python pen-test-app.py --zap-host <ZAP_HOST:PORT> --target <TARGET_URL>

If using the Packer method above to build your AMI, an EC2 instance can be launched to host ZAP:

# Launch a ZAP instance from the Packer-built AMI and capture the instance ID
INSTANCE_ID=$(aws ec2 run-instances --image-id <AMI_ID_from_Packer> --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups my-sg | sed -n 's/.*"InstanceId": "\(.*\)".*/\1/p')
# Look up the instance's private IP to use as the ZAP host
ZAP_HOST=$(aws ec2 describe-instances --instance-ids ${INSTANCE_ID} --query 'Reservations[*].Instances[0].NetworkInterfaces[0].PrivateIpAddresses[0].PrivateIpAddress' --output text)
# Run the penetration test against the target, then tear the instance down
python pen-test-app.py --zap-host ${ZAP_HOST} --target <TARGET_URL>
aws ec2 terminate-instances --instance-ids ${INSTANCE_ID}

After the script has run, it will create ‘results.json’, which will then be read by Behave. Behave will return a failed exit status if any of the tests fail. This will in turn cause the Jenkins job to fail and the pipeline to stop. The developer must then log in to Jenkins, access the job, and view the console output to determine what failed. An alternative is to have some kind of reporting handle the output of Behave, so that the reports can be easily accessed by the developer. To prevent Jenkins from reacting to Behave’s failed exit code and to capture the output, we want to run something like this:

behave_results=$(behave > behave_results.txt; echo "$?")

What this does is write the Behave results to ‘behave_results.txt’ and capture the exit code in the ‘behave_results’ variable. Now we can run commands to manage the output before exiting with the ‘behave_results’ status. In the example below, we simply upload the report to S3:

behave_results=$(behave > behave_results.txt; echo "$?")
aws s3 cp behave_results.txt s3://our-application-pipeline/reports/pen-test/
exit ${behave_results}

This will upload the resulting report to the S3 bucket and then exit with the Behave exit code, causing the Jenkins job to ‘pass’ or ‘fail’ accordingly.

Lastly, in place of a simple S3 upload, a more sophisticated reporting script can be put in place to capture additional data, such as Jenkins’ build information, and format it as JSON or YAML for consumption upstream. To make life easier, Behave can be told to dump its output as JSON. Replace the behave line above with:

behave_results=$(behave --no-summary --format json.pretty > behave_results.json; echo "$?")
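
As a sketch of what such a reporting step might look like, the standard Jenkins environment variables (JOB_NAME, BUILD_NUMBER, BUILD_URL) can be written alongside the Behave results before uploading (the bucket path is carried over from the example above):

# Bundle Jenkins build metadata with the Behave results
cat > build_info.json <<EOF
{
  "job": "${JOB_NAME}",
  "build": "${BUILD_NUMBER}",
  "url": "${BUILD_URL}"
}
EOF
aws s3 cp build_info.json s3://our-application-pipeline/reports/pen-test/
aws s3 cp behave_results.json s3://our-application-pipeline/reports/pen-test/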

It’s also worth noting that an AWS Lambda function could be created to watch for changes in the S3 bucket. The Lambda function could generate a pretty HTML report and push it back up to a website-enabled S3 bucket for viewing.
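
Putting the pieces together, the Jenkins-invoked wrapper might look something like the following minimal sketch. It assumes the AMI ID and target URL are supplied to the job as the hypothetical variables ZAP_AMI_ID and TARGET_URL, and it omits waiting for the instance and ZAP to become available:

#!/bin/bash
set -e

# Launch a ZAP instance from the Packer-built AMI (ZAP_AMI_ID supplied by the job)
INSTANCE_ID=$(aws ec2 run-instances --image-id ${ZAP_AMI_ID} --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups my-sg | sed -n 's/.*"InstanceId": "\(.*\)".*/\1/p')
ZAP_HOST=$(aws ec2 describe-instances --instance-ids ${INSTANCE_ID} --query 'Reservations[0].Instances[0].PrivateIpAddress' --output text)

# Run the penetration test and evaluate the results with Behave
python pen-test-app.py --zap-host ${ZAP_HOST} --target ${TARGET_URL}
behave_results=$(behave --no-summary --format json.pretty > behave_results.json; echo "$?")

# Publish the report, clean up the ZAP instance, and report status to Jenkins
aws s3 cp behave_results.json s3://our-application-pipeline/reports/pen-test/
aws ec2 terminate-instances --instance-ids ${INSTANCE_ID}
exit ${behave_results}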

Summary

This tutorial scratches the surface of what OWASP ZAP is capable of when integrated into a full CI/CD pipeline. As this post attempted to show, it’s not too difficult to implement automated penetration testing in your own CI/CD pipelines. While it is not meant to fully replace manual penetration testing of your software, it does take care of the tedious portions of testing and provides fast results, allowing developers to quickly attend to any issues.


Published at DZone with permission of Nick DeClario, DZone MVB. See the original article here.
