Deployment Pipeline in Amazon Web Services – Bootstrapping
In the introduction to this series on deployment pipelines in Amazon Web Services (AWS), I covered the nine “stages” that we use as a starting point for our Continuous Delivery in AWS implementations at Stelligent. The first “stage” I’ll be describing is the bootstrap stage. It’s not a typical stage in that it isn’t necessarily part of a single path to production for a software system. However, the steps that occur in this bootstrap stage are often a necessary prerequisite to having a functioning deployment pipeline.
You should seek to make all of the automation in bootstrapping executable from a single command. Some dismiss this notion because they assume bootstrapping is something you do once and never modify again. In our experience, this isn’t the case; it’s a carry-over from old-world computing, before services such as AWS, when infrastructure assets were treated as scarce commodities with gatekeepers and minimal modifications – leading to inflexible behavior. That belief is a relic of the days when only certain people were responsible for making changes to certain assets. People often hear this and think “anarchy”. When done well, this couldn’t be further from the truth, particularly when you incorporate automated systems governance into your processes.
The steps I’ll be describing in this article are:
- Configure Local Environments
- Configure Networking
- Deployment Pipeline Bootstrapping
- Configure Support Infrastructure
- Self-Service Deployment
- Establish System Security

Each of these steps is described in more detail below.
Configure Local Environments - This is a scaled-down version of the full environment run in production. It is provisioned and configured with the operating system and the web, app and database servers, and it is the environment to which you deploy your application(s)/service(s) in order to test locally prior to committing code changes. Tools like Vagrant can help define these types of environments. Launching these local environments should be a single-command operation per node (e.g. running on a laptop). Our objective is to make the whole process a single command – downloading, installing and configuring the full local environment.
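As a rough sketch of the single-command idea, the wrapper below assembles the per-node `vagrant up` commands behind one entry point. The node names (`web`, `app`, `db`) are illustrative placeholders, not a prescribed layout.

```python
import subprocess

def bootstrap_local_env(nodes=("web", "app", "db"), dry_run=False):
    """Build (and optionally execute) the per-node commands behind a
    single local-environment bootstrap command. Node names are
    illustrative; a real setup would match your Vagrantfile."""
    commands = [["vagrant", "up", node] for node in nodes]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands
```

A `dry_run=True` call lets you inspect the generated commands without Vagrant installed.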
Configure Networking - We’re defining the network to include both the network and domain configuration. This includes provisioning Virtual Private Clouds (VPCs) – which consist of multiple networking configurations, including those for production/non-production, subnets, the VPN Gateway, security groups and NACLs. Moreover, it includes network patterns such as bastion host and NAT configurations, as well as the configuration of Direct Connect and the Route 53 DNS configuration. We aim to get all of these services provisioned and configured from a single command through tools like CloudFormation. We also run a suite of infrastructure tests to verify that the configuration is working correctly.
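A minimal sketch of what such a CloudFormation template might contain, generated programmatically; the CIDR blocks and resource names are placeholder assumptions, and a production template would add route tables, gateways, NACLs and security groups.

```python
def vpc_template(cidr="10.0.0.0/16", subnet_cidr="10.0.1.0/24"):
    """Return a minimal CloudFormation template (as a dict) that
    provisions a VPC with a single subnet. CIDRs are placeholders."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "Vpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {"CidrBlock": cidr, "EnableDnsSupport": True},
            },
            "PublicSubnet": {
                "Type": "AWS::EC2::Subnet",
                # Ref ties the subnet to the VPC defined above
                "Properties": {"VpcId": {"Ref": "Vpc"}, "CidrBlock": subnet_cidr},
            },
        },
    }
```

Serializing this dict with `json.dumps` yields a template you could hand to CloudFormation as part of the single provisioning command.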
Deployment Pipeline Bootstrapping - In this step you run a single command to launch a fully-coded CI server; the CI server generates the steps/jobs that make up the deployment pipeline along with other CI server configuration such as polling repos and sending emails. Furthermore, the CI server should log all pipeline activity and tag AWS resources for complete traceability. In addition, a series of infrastructure tests is run to verify that the CI server is operating properly. These days, we tend to use Jenkins, so all of these activities are relevant, but when we use SaaS-based tools (such as the – yet to be released – AWS CodePipeline), some of these activities will not be applicable.
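One way the single bootstrap command might look, sketched as a helper that assembles the AWS CLI invocation for launching the coded CI server stack; the stack name, template file and tag values are placeholder assumptions.

```python
def ci_bootstrap_command(stack_name="jenkins-ci", template="ci-server.json"):
    """Assemble the single AWS CLI command that launches the coded CI
    server stack. Stack name, template file and tag are placeholders;
    the tag supports the traceability goal described above."""
    return [
        "aws", "cloudformation", "create-stack",
        "--stack-name", stack_name,
        "--template-body", "file://" + template,
        "--tags", "Key=pipeline,Value=bootstrap",
    ]
```

Passing the result to `subprocess.run` would execute it; returning the list first keeps the command inspectable and testable.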
Configure Support Infrastructure - This includes monitoring (system, app and cost), email, auditing, logging, the version-control system, IAM, governance, security/key management and a binary dependency manager (storing required libraries and application/service distributions). In AWS, tools that get launched or configured as part of the support infrastructure might include CloudWatch, SES, CloudTrail, CloudWatch Logs, CodeCommit, IAM, KMS and S3. Tools outside of AWS’ services might include New Relic, Janitor Monkey, Conformity Monkey, etc.
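As one hedged example of codifying the auditing piece, the template sketch below wires CloudTrail to an S3 bucket. The bucket name is a placeholder, and a real template would also need a bucket policy granting CloudTrail write access.

```python
def audit_template(bucket_name="pipeline-audit-logs"):
    """Minimal CloudFormation sketch wiring CloudTrail to an S3 bucket
    for audit logging. Bucket name is a placeholder; a working template
    also requires a bucket policy allowing CloudTrail to write."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AuditBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            },
            "Trail": {
                "Type": "AWS::CloudTrail::Trail",
                "Properties": {
                    "IsLogging": True,
                    # log to the bucket declared above
                    "S3BucketName": {"Ref": "AuditBucket"},
                },
            },
        },
    }
```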
Self-Service Deployment - Provide the capability for authorized team members to launch their own transient environments (containing the application/service deployment). Each environment contains the scaled-down version of the software system running on the AWS infrastructure. At the beginning of a project, this might not consist of anything useful, but having the capability is crucial.
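Transient environments need to be isolated per team member and easy to find and tear down, which a naming convention can provide. The scheme below (prefix, user, timestamp) is an illustrative assumption, not a prescribed standard.

```python
def transient_stack_name(user, stamp, prefix="sandbox"):
    """Derive a unique, per-user stack name for a transient
    environment, e.g. sandbox-alice-1700000000. The prefix/user/stamp
    scheme is illustrative; stamp is typically int(time.time())."""
    return "-".join([prefix, user, str(stamp)])
```

Feeding such a name into the same CloudFormation launch command used elsewhere gives authorized team members a one-command self-service deployment.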
Establish System Security - Ensure that security practices and policies are being adhered to; these security checks run as part of the single-command support infrastructure.
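One concrete example of an automated policy check, sketched under the assumption that the input mirrors a simplified form of the EC2 security-group API response: flag any group that exposes SSH (port 22) to the world.

```python
def open_ssh_violations(security_groups):
    """Return IDs of security groups exposing SSH (port 22) to
    0.0.0.0/0. Input shape is a simplified form of the EC2
    DescribeSecurityGroups response."""
    bad = []
    for sg in security_groups:
        for rule in sg.get("IpPermissions", []):
            if rule.get("FromPort") == 22 and any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            ):
                bad.append(sg["GroupId"])
    return bad
```

A check like this can run as an infrastructure test after each bootstrap, failing the run if any violation is found.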
These steps can be incorporated into a pipeline of their own so that team members can launch these resources at any point based on recent code changes. They tend not to get rolled into the deployment pipeline that builds and releases the software system to production, but they are a crucial resource for that pipeline’s ongoing operations.
In the next article of this series, I explore the image stage of a deployment pipeline in Amazon Web Services.
Published at DZone with permission of Paul Duvall, DZone MVB. See the original article here.