
AWS Adventures: Infrastructure as Code and Microservices (Part 1)


Your DevOps infrastructure fits in nicely to AWS. See how Amazon can help you maintain source control while making testing easy as you deploy microservices.


In the old days, you’d write code and let another team, called Operations (or Ops for short), deploy it to various servers for testing and, eventually, production. Quality Assurance teams would then be testing code that was anywhere from a few days to a few weeks old on yet another server.

Developer tooling, infrastructure as a service, and shorter development cycles have changed all that. The Amazon practice of “you build it, you own it” has started to filter out to other companies as an adopted practice. Teams are now expected to build, deploy, and maintain their own software.

Today, I wanted to cover what I’ve learned about automated deployments around AWS. You’ll learn why you don’t need Ansible, Chef, or even Serverless; instead, you can use the AWS APIs directly to do everything you need.

Code

The source code for this article is on GitHub.

Infrastructure As Code

Servers As Pets

In the old days, you treated servers as a part of the family. You named them, constantly monitored their health, and were extremely careful when updating code or applying security patches to them.

As time went on, hardware got cheaper, and blue-green deployments came about. You still had physical servers at your place of work or nearby, but you had two production boxes instead of one and switched between them.

Disposable Infrastructure

The cloud being cheap changed all that. Eventually, servers became so cheap and easy to make, you just destroyed and created new ones at the first sign of trouble.

Infrastructure == Code

The Infrastructure As Code movement, in short, says that all your servers and APIs and builds… are code. Not JSON, not YAML, not even XML (lol, wtf). If you want to update a server, deploy new code, or update a library version, you simply change the code and check it into version control.

As a coder, the reasons I like this are:

  1. I’m a coder, I write code, and that makes it something I can more easily understand.
  2. The source code is put into version control, so our releases are the same thing as our code versioning.
  3. Version control has a record of all changes.
  4. Teams can peer review server/deployment changes, increasing quality.
  5. I can unit and integration test my infrastructure code more easily. Why? Because it’s code.

Infrastructure as an API

You can build and destroy a lot of things using the AWS console, the website you use to access your AWS account. However, everything on AWS is also an API. While Lambda itself only runs JavaScript, Java, C#, and Python, you have many more language choices, both official and unofficial, for managing your AWS infrastructure, such as shell and Haskell. We automate these processes by writing code around the APIs AWS gives us.
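For example, here’s a minimal sketch of listing your Lambda functions from Node, assuming the aws-sdk v2 npm package (`npm install aws-sdk`). The `listLambdaNames` helper name is mine, not an official API; `listFunctions` is the real SDK call.

```javascript
// A hypothetical helper wrapping the SDK's real listFunctions call.
// The client is passed in so the helper stays easy to test.
const listLambdaNames = lambdaClient =>
  lambdaClient.listFunctions({ MaxItems: 50 }).promise()
    .then(data => data.Functions.map(lambdaFunction => lambdaFunction.FunctionName));

// Usage, once your credentials are configured (see Step 0):
// const AWS = require('aws-sdk');
// const lambda = new AWS.Lambda({ region: 'us-east-1' });
// listLambdaNames(lambda).then(names => console.log(names));
```

Passing the client in, rather than requiring the SDK inside the helper, also makes it trivial to stub out AWS in unit tests.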

As a JavaScript Developer, it’s perfect for me because my local Node development environment is in JavaScript, my front-end Angular is written in JavaScript, my REST APIs are written in JavaScript, and my infrastructure deployment code is now written in JavaScript. Some of the Jenkins job creation/deletion/starting is in JavaScript. Daddy has Thor’s hammer Mjölnir, some FullStack gauntlets of power, and everything in software is suddenly an orcish nail.

Immutable Infrastructure

When disposable infrastructure came into fashion, there was this concept of destroy and build. Rather than debug and nurse a particular server back to health, it was faster and more predictable to just destroy it and build a new one. As this process matured, it eventually became known as “immutable infrastructure”.

Just like how nowadays you try to avoid mutable state in your code, you treat your servers and APIs the same way. Since they’re now the result of your code, this is easier to do.

Mutable State Lives On

However, there are some stateful things you can’t really remove. URLs, for one. While API Gateway will give you a brand-new, unique URL each time you create a REST API, the URL your users type to access your website doesn’t change. Mine is jessewarden.com. If I were to destroy my website’s server and code to update it to the latest code on a new server, I’d still want to use the same URL.

In programming, we use constants for these types of things, and that’s no different here. For stateful things that don’t change, or are perhaps updated later, you’ll create variables for these.
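A sketch of what those constants might look like as a Node module. Everything here except my domain is a hypothetical name I made up for illustration:

```javascript
// Constants for the stateful pieces that survive rebuilds.
const DOMAIN = 'jessewarden.com';  // the public URL never changes
const REGION = 'us-east-1';
const STACK_PREFIX = 'my-service'; // hypothetical naming convention

// Derive resource names from the constants so rebuilds stay predictable.
const lambdaName = environment => `${STACK_PREFIX}-${environment}`;

module.exports = { DOMAIN, REGION, STACK_PREFIX, lambdaName };
```

Every destroy-and-create cycle reads from this one module, so the stateful names stay consistent no matter how many times the stack is rebuilt.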

Don’t Update. Instead Destroy, Then Create a New One

Everything else, assume it’s immutable. If you update your Lambda code in GitHub, don’t update it in AWS manually. Destroy it, then create a new one with the latest code, which takes seconds.

Treat your infrastructure like you would your state in Redux:

const newLambda = Object.assign({}, currentLambda, {updated: latestGitCommit});
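In SDK terms, that destroy-then-create cycle can be sketched like this, assuming the aws-sdk v2 package. The role ARN, runtime, and the `redeploy` helper name are placeholders, not values from my actual stack:

```javascript
// Build the params for a fresh Lambda; the role ARN and runtime are placeholders.
const buildCreateParams = (name, zipBuffer) => ({
  FunctionName: name,
  Runtime: 'nodejs6.10',
  Role: 'arn:aws:iam::123456789012:role/lambda-basic-execution',
  Handler: 'index.handler',
  Code: { ZipFile: zipBuffer }
});

// Delete the old Lambda (ignoring "not found"), then create a fresh one.
const redeploy = (lambdaClient, name, zipBuffer) =>
  lambdaClient.deleteFunction({ FunctionName: name }).promise()
    .catch(() => {}) // it's fine if the function didn't exist yet
    .then(() => lambdaClient.createFunction(buildCreateParams(name, zipBuffer)).promise());
```

Notice there is no update path at all: the only verbs are delete and create.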

Destroy, Build, and Test Your Lambda

While larger applications are made up of many APIs, databases, storages, and front ends, we’ll pick a small piece that can be built and deployed independently. Let’s destroy, build, and test our newly built Lambda. Afterwards, we’ll update some code in our Lambda to show you how you update over time.

Before we start, you should know that AWS Lambda has a nice alias and versioning functionality. I don’t use any of it, and I don’t consider it useful. Same with API Gateway stage deployments. Once you start automating things and syncing with your Git feature branch or Gitflow workflow, the Lambdas, API Gateways, and Buckets you create are the result of your code versioning. I’m only three months in, so perhaps I’ll change my mind next year.

Versions are custom names such as “v1.0.0”, while “$LATEST”, which you can’t delete, is basically equivalent to Git’s “master”. They can be treated like Git tags. Aliases are names such as “dev”, “qa”, and “prod”. They don’t have to be environment-based, but it helps because you can say “dev points to $LATEST” and “qa points to git tag 1.2.7”.

Also note, none of this has any rollback functionality. If you screw up, that should be considered OK. The whole point of using code to destroy and build is so you can have confidence that “rolling back” is simply switching to a stable Git tag and rebuilding your stack from scratch which takes seconds to minutes.

Easier said than done. For two weeks, my API Gateway destroy functions were destroying the first API Gateway they found that matched a substring. Whoops. #WithGreatPowerComesGreatResponsibility

Crash Course Into AWS Terminology

If you already know these, skip to Step 0.

AWS: Amazon Web Services. Amazon provides hard drives in the cloud, virtual servers, and all kinds of other tools to build applications in the cloud.

AWS Console: The website you use to access your AWS account and build, destroy, and configure your various AWS services.

Lambda: A function that Amazon hosts and runs for you. Hot serverless action.

Environment: Arbitrary name given to a “different server”. Typically “dev”, “qa”, “staging” and “prod”. Ensures code can be moved between servers and not break.

Region: AWS hosts data centers in physical locations throughout the world. Each has a name such as “us-east-1” for my neck of the woods and “ap-southeast-2” for Sydney, Australia. You can have the same code and files run on those regions to be faster for your users.

API Gateway: Creates URLs that are typically used for REST APIs, but you can also create mocks for simpler integration testing, or simple proxies to existing REST APIs.

ARN: Amazon Resource Name. Everything in AWS has a unique ARN. They’re like URLs for websites. They look intimidating at first, but creating them yourself becomes pretty trivial once you learn the format.
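For example, a Lambda ARN follows a fixed pattern, so building one yourself is just string work. The account ID below is a fake placeholder:

```javascript
// Lambda ARNs follow the pattern:
// arn:aws:lambda:<region>:<accountId>:function:<functionName>
const lambdaArn = (region, accountId, functionName) =>
  `arn:aws:lambda:${region}:${accountId}:function:${functionName}`;

// e.g. lambdaArn('us-east-1', '123456789012', 'my-service-dev')
// → 'arn:aws:lambda:us-east-1:123456789012:function:my-service-dev'
```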

Step 0: Authenticate

The good news about Step 0 is you only have to do this crap once.

Although we’ll be using Node for this tutorial, you’ll need your credentials to authenticate with your AWS account (if you don’t have one, go sign up; the button is on the top right). For security reasons, you should create an IAM user that doesn’t have access to everything. However, to keep this tutorial simple, we’ll use your account’s root credentials.

In the AWS console, click your name and choose Security Credentials. Choose Access Keys, then click the “Create New Access Key” button.

In the popup, click the tiny blue arrow link to “Show Access Key.”

Create a new JSON file called “credentials.json” on your computer and paste in the following code. If you’re using source control, add credentials.json to your .gitignore so it’s never checked in, yet can still remain in your code project.

{
    "accessKeyId": "your key here", 
    "secretAccessKey": "your secret access key here", 
    "region": "us-east-1"
}


With that out of the way, we're going to go ahead and stop here before this gets too long. You know your way around AWS, you've got your AWS credentials and authentication set up, and you're ready to go. Next time, we'll dive into testing, TDD, and getting your Lambda functions in order.



Published at DZone with permission of Jesse Warden, DZone MVB. See the original article here.

