How to Implement Continuous Delivery for Node.js Using AWS – Screencast
Check out this awesome AWS Continuous Delivery demo screencast by the experts at Stelligent.
See the YouTube screencast and transcript below of a Continuous Deployment pipeline demonstration for a Node.js application using AWS services such as EC2, DynamoDB, Route 53, ENI, and VPC, and tools such as AWS CodePipeline, Jenkins, Chef, and AWS CloudFormation. Open source code is available at https://github.com/stelligent/dromedary.
In this screencast, you’ll see a live demonstration of a system that uses Continuous Deployment of features based on a recent code commit to GitHub.
You can access this live demo right now at demo.stelligent.com. All of the code is available in open source form at https://github.com/stelligent/dromedary.
This is a simple Node.js application in which you can click on any of the colors to “vote” for that color and see the results of your votes and others’ in real time. While it’s a simple application, it uses many of the services and tools you might define in your enterprise systems – such as EC2, Route 53, DynamoDB, VPC, Elastic IP, and ENI – and it’s built, deployed, tested, and released using CloudFormation, Chef, CodePipeline, and Jenkins, among other tools.
So, I want you to imagine there are several engineers on a team with a dashboard like this on a large monitor showing AWS CodePipeline. CodePipeline is a service released by AWS in July 2015. With CodePipeline, you can model your Continuous Delivery and Continuous Deployment workflows – from the point at which someone commits new code to a version-control repository until it gets released to production.
You can see it shows there’s a failure associated with the infrastructure test. So, let’s take a look at the failure in Jenkins.
Many of you may already be familiar with Jenkins as it’s a Continuous Integration server. In the context of this demo, we’re using CodePipeline to orchestrate our Continuous Deployment workflow and Jenkins to perform the execution of all the code that creates the software system.
Now, the reason for this failure is that we’ve written a test to prevent just anyone from SSH’ing into the EC2 instance from a random host. So, I’m going to put my infrastructure developer hat on and look at the test that failed.
This is an RSpec test that checks whether port 22 is open to any host. It runs every time someone commits code to the version-control repository.
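The actual test in the demo is written in RSpec; as a rough illustration of its intent in the application’s own language, a similar check over a security group’s ingress rules might look like this (the function name and rule shapes are assumptions, not code from the repository):

```javascript
// Sketch of the intent behind the failing test: no ingress rule may open
// SSH (port 22) to the entire internet (0.0.0.0/0).
function sshOpenToWorld(ingressRules) {
  return ingressRules.some(
    (rule) =>
      Number(rule.FromPort) <= 22 &&
      Number(rule.ToPort) >= 22 &&
      rule.CidrIp === '0.0.0.0/0'
  );
}

// A rule set like this is what the failing pipeline run detected:
const badRules = [
  { IpProtocol: 'tcp', FromPort: '22', ToPort: '22', CidrIp: '0.0.0.0/0' }
];

// A /32 CIDR restricted to one host passes the check:
const goodRules = [
  { IpProtocol: 'tcp', FromPort: '22', ToPort: '22', CidrIp: '203.0.113.10/32' }
];
```

Running a check like this on every commit is what turns an insecure security-group change into a fast pipeline failure rather than a production exposure.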
Based on this test failure, I’m going to look at the CloudFormation template that provisions the application infrastructure. The app_instance.json CloudFormation template defines the parameters, conditions, mappings, and resources for the application infrastructure. CloudFormation provides over 100 built-in AWS resource types that we can define in code. Here, I’m looking at the resource in this template that defines the security group for the EC2 instance hosting the application.
I’m going to update the CIDR block to a specific host using the /32 notation so that only I can access the instance that hosts the application.
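For illustration, a security group resource with that fix applied might look like the following sketch (the resource name and the /32 address are placeholders, not values from app_instance.json):

```json
{
  "AppSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": {
      "GroupDescription": "Restrict SSH to a single administrative host",
      "VpcId": { "Ref": "VPC" },
      "SecurityGroupIngress": [
        {
          "IpProtocol": "tcp",
          "FromPort": "22",
          "ToPort": "22",
          "CidrIp": "203.0.113.10/32"
        },
        {
          "IpProtocol": "tcp",
          "FromPort": "80",
          "ToPort": "80",
          "CidrIp": "0.0.0.0/0"
        }
      ]
    }
  }
}
```

Port 80 stays open to the world so users can reach the application, while SSH is limited to one host.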
Normally, I might wait for the changes made by the other infrastructure developer to successfully pass through the pipeline, but in this case, I’m just going to make my application code changes as well.
So, putting my application developer hat on, I’m going to make an application code change by adding a new color to the pie chart that gets displayed in our application. So, I’ll add the color orange to this pie chart. I’m also going to update my unit and functional tests so that they pass.
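A sketch of the kind of one-line change involved (the real chart code in the repository will differ; the names here are illustrative):

```javascript
// The chart's slices are driven by a list of color names; votes are keyed
// by these names. Adding a slice means appending to the list and updating
// the unit and functional tests that assert on it.
const COLORS = ['red', 'green', 'blue', 'yellow', 'purple'];

function addColor(colors, name) {
  // Keep the list duplicate-free so the chart never renders two slices
  // for the same color.
  return colors.includes(name) ? colors : colors.concat(name);
}

const updated = addColor(COLORS, 'orange');
```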
Now, you might’ve noticed something in the application: I’m not using SSL. You go to http instead of https, and that’s insecure. So, I’m going to make some code changes to enable SSL in my application.
Ok. So, I’ve committed my changes to Git, and they’re going to run through all the stages and actions in CodePipeline.
So, CodePipeline is polling GitHub, looking for any changes. Since I just committed some new changes, CodePipeline will discover them and begin executing the defined jobs in Jenkins.
Now, you’ll notice that CodePipeline picked up these changes – the security/infrastructure and SSL changes, along with the application code changes, will be built, tested, and deployed as part of this deployment pipeline.
You’ll see that there are a number of different stages here, each consisting of different actions. A stage is simply a logical grouping of actions and is largely dictated by your application or service. The actions themselves call out to other services. There are four types of built-in actions within CodePipeline: source, build, test, and deploy. You can also define custom actions in CodePipeline.
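To make the stage/action structure concrete, here is an abbreviated sketch of what a pipeline definition for this demo might look like (names are illustrative, and most per-action configuration is omitted):

```json
{
  "pipeline": {
    "name": "dromedary-pipeline",
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "GitHubSource",
            "actionTypeId": {
              "category": "Source",
              "owner": "ThirdParty",
              "provider": "GitHub",
              "version": "1"
            },
            "outputArtifacts": [{ "name": "SourceOutput" }]
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            "name": "JenkinsBuild",
            "actionTypeId": {
              "category": "Build",
              "owner": "Custom",
              "provider": "Jenkins",
              "version": "1"
            },
            "inputArtifacts": [{ "name": "SourceOutput" }]
          }
        ]
      }
    ]
  }
}
```

The GitHub source action feeds an artifact to a custom Jenkins build action; further test and deploy stages would follow the same pattern.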
Ok, now CodePipeline has gone through all of its stages successfully.
You can see that I added the new color, orange, and all my unit and infrastructure tests passed.
It spun up new infrastructure and used the Blue/Green Deployment pattern to switch to the new environment using Route 53. With a blue-green deployment, we’ve got a production environment (we’ll call this “blue”) and a pre-production environment that looks exactly like production (we’ll call this “green”). We commit the changes to GitHub, and CodePipeline orchestrates them by spinning up the green environment and then using Route 53 to move all traffic to it. Another way of putting this is that you’re switching between the production and pre-production environments. With this approach, you can continue serving users without them experiencing any downtime, and if anything goes wrong with the deployment, you can roll back so that the blue environment becomes production again.
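The Route 53 cutover amounts to updating the DNS record for the demo to point at the green environment. A hedged sketch of the kind of change batch involved (the record type, TTL, and endpoint value here are hypothetical, not taken from the demo):

```json
{
  "Comment": "Cut production traffic over to the green environment",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "demo.stelligent.com.",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [
          { "Value": "green-env-elb.us-east-1.elb.amazonaws.com" }
        ]
      }
    }
  ]
}
```

A low TTL keeps the switch (and any rollback to blue) fast, since resolvers discard the old answer quickly.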
So, let’s summarize what happened: I made application, infrastructure, and security changes as code and committed them to Git. The changes automatically went through a deployment pipeline – itself defined in code using CodePipeline and Jenkins – which built the application, ran unit and functional tests, stored the distribution, launched an environment from code, and deployed the application using CloudFormation and Chef. As part of this pipeline, it also ran infrastructure tests as code and deployed to production without anyone lifting another finger. This way, you get feedback as soon as an error occurs and only release to production when the changes pass all of these checks. You can do the same thing in your enterprise: from commit to production in minutes or hours instead of days or weeks. Just think about what’s possible when your releases become a non-event like this!
Published at DZone with permission of Paul Duvall, DZone MVB. See the original article here.