
AWS Adventures: Infrastructure as Code and Microservices (Part 4, Final)


In this series finale, build a command-line tool for your project, then see how to set up and manage multiple environments using AWS.


Today's the day! We're going to finish up our adventure into infrastructure as code and microservices by taking an in-depth look at how to set up AWS for your various environments, including Dev, QA, Prod, and even green/blue deployment. If you're just joining us, check out the setup, the building, and the testing. Now, we're going to build a quick command line tool for easy use, then dive into the environmental work.

Step 8: Build a Command Line Tool

To make our build.js more flexible, we’ll have to create our first command-line program in Node. There are a few command-line parsing libraries out there, but we’ll just create our own predicates. That way you learn how to do it yourself (this’ll be commonplace when creating your own command-line tools), and we won’t have weird test errors.

Open build.js, and delete our buildFunction and createFunction calls at the very bottom. We’re removing that hardcoded behavior; instead of constantly modifying build.js, we’ll make it do what we want when we run it. Next, open up package.json, and let’s make the test script run both of our tests now vs. switching between the index and build tests.

"scripts": {
    "test": "mocha *.test.js",


If you’ve never used globs before, that’s what the star is. It says “anything that has a .test.js suffix in the filename”. We have two tests, so if you run npm test now, you’ll get:

[Screenshot: both tests passing]

Green is gorrrrrrrgeous.

Let’s create a stub, since this’ll be a constant create, add, test-fail, test-succeed workflow. Open build.js, and add:

const getProgram = (argsv)=>
{
};


Then, at the bottom, export him:

module.exports = {
    ...
    getProgram
};


In build.test.js, import our getProgram function, along with the ACTION_NOTHING constant we’re about to create:

const {
    ...
    getProgram,
    ACTION_NOTHING
} = require('./build');


We’re going to give this function our command line parameters and expect him to interpret what we meant.

describe.only('#getProgram', ()=>
{
    it('should not build with no parameters', ()=>
    {
        const program = getProgram();
        expect(program.action).to.equal(ACTION_NOTHING);
    });
});


Now re-run npm test, and she’ll fail properly:

[Screenshot: the failing #getProgram test]

To make it pass, we just need to define that ACTION_NOTHING constant in build.js, add it to the module.exports at the bottom, and return it as a no-op (meaning, no operation, don’t do anything).

const ACTION_NOTHING = 'nothing';

const getProgram = (argsv)=>
{
    if(_.isArray(argsv) === false)
    {
        return {action: ACTION_NOTHING};
    }
};


Re-run tests and it should pass:

[Screenshot: the passing test]

There are three things our program should do: nothing, build, and destroy. We have nothing already; add the other two constants:

const ACTION_BUILD   = 'build';
const ACTION_DESTROY = 'destroy';


And make sure they’re all exported at the bottom of build.js:

module.exports = {
    ...
    ACTION_BUILD,
    ACTION_DESTROY,
    ACTION_NOTHING
};


To know which one, we’ll have to parse process.argv, an array that holds all the arguments passed when starting a Node program.

[Screenshot: process.argv contents]

We don’t care about the first two: the path to the Node binary running the program, and the path of the script being run. Everything AFTER is fair game… or there could be dragons there; command-line programs are easy to fat finger (mis-type), so our Array could have some cray text in it.
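
As a rough sketch (the paths will differ on your machine), running node build --build gives you something like:

[
    '/usr/local/bin/node',
    '/Users/you/code/build.js',
    '--build'
]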

Rather than stripping those first 2 items out, we’ll just create some predicates that search the Array for the flags we care about. The following predicates take an Array, and see if -b or --build is in there.

const hasBigBuild   = (array) => _.includes(array, '--build');
const hasSmallBuild = (array) => _.includes(array, '-b');
const hasBuild      = (array) => hasBigBuild(array) || hasSmallBuild(array);


And these for destroy:

const hasBigDestroy   = (array) => _.includes(array, '--destroy');
const hasSmallDestroy = (array) => _.includes(array, '-d');
const hasDestroy      = (array) => hasBigDestroy(array) || hasSmallDestroy(array);


If you have both, we’ll assume you want to destroy your stack first, then build a fresh new one. This is a pretty common practice, so we’ll bake that assumption in now vs. coding error logic to tell the developer they provided competing instructions. So let’s add that action to avoid ambiguity, and the predicate to test for it.

const ACTION_DESTROY_AND_BUILD = 'destroy and build';
...
const hasDestroyAndBuild = (array) => hasBuild(array) && hasDestroy(array);
...
module.exports = {
    ...
    ACTION_DESTROY_AND_BUILD
};


Let’s create a new test detecting our build instructions. In build.test.js, import the action up top:

const {
    ...
    ACTION_BUILD
} = require('./build');


Then the test:

it('should build if we tell it to do so', ()=>
{
    const parameters = [
        'node is rad',
        'testing bro',
        '--build'
    ];
    const program = getProgram(parameters);
    expect(program.action).to.equal(ACTION_BUILD);
});


Run your test and she should boom:

[Screenshot: the failing build test]

Add this to the bottom of the getProgram function:

if(hasBuild(argsv) === true)
{
    return {action: ACTION_BUILD};
}


Now re-run your tests:

[Screenshot: all tests passing]

Now for the destroy test. In build.test.js, import the destroy action:

const {
    ...
    ACTION_DESTROY
} = require('./build');


And the test:

it('should destroy if we tell it to do so', ()=>
{
    const parameters = [
        'node is rad',
        'testing bro',
        '--destroy'
    ];
    const program = getProgram(parameters);
    expect(program.action).to.equal(ACTION_DESTROY);
});


Running it should show the failure:

[Screenshot: the failing destroy test]

To make it pass, we need to check for a destroy below (or above) the build:

if(hasDestroy(argsv) === true)
{
    return {action: ACTION_DESTROY};
}


And re-running the tests should show passing:

[Screenshot: the passing destroy test]

Finally, let’s test for both. Import the action:

const {
    ...
    ACTION_DESTROY_AND_BUILD
} = require('./build');


And copy paste the test:

it('should destroy and build', ()=>
{
    const parameters = [
        'node is rad',
        'testing bro',
        '--destroy',
        '--build'
    ];
    const program = getProgram(parameters);
    expect(program.action).to.equal(ACTION_DESTROY_AND_BUILD);
});


To make it pass, let’s add the final piece to our build.js:

const getProgram = (argsv)=>
{
    if(_.isArray(argsv) === false)
    {
        return {action: ACTION_NOTHING};
    }
    if(hasDestroyAndBuild(argsv) === true)
    {
        return {action: ACTION_DESTROY_AND_BUILD};
    }
    if(hasBuild(argsv) === true)
    {
        return {action: ACTION_BUILD};
    }
    if(hasDestroy(argsv) === true)
    {
        return {action: ACTION_DESTROY};
    }
    return {action: ACTION_NOTHING};
};


Re-run your tests:

[Screenshot: all #getProgram tests passing]

Now that we can parse command-line parameters, let’s write the code that will perform those actions. No tests for this function; just copy paste this into build.js:

const performActionIfPassed = (callback)=>
{
    const program = getProgram(process.argv);
    if(program.action === ACTION_BUILD)
    {
        createFunction(lambda, fs, callback);
    }
    else if(program.action === ACTION_DESTROY)
    {
        deleteFunction(lambda, callback);
    }
    else if(program.action === ACTION_DESTROY_AND_BUILD)
    {
        deleteFunction(lambda, (err, data)=>
        {
            createFunction(lambda, fs, callback);
        });
    }
};


Then, at the bottom, we’ll use a snippet to identify whether we’re being run directly via node build.js; if so, run our build code, and if not, our build.js is being required, so do nothing.

if(require.main === module)
{
    performActionIfPassed((err, data)=>
    {
        log("Done.");  
    });
}


Let’s take her for a spin. Open up your Terminal, cd to the code directory, and run node build --build --destroy. If you left your logging uncommented, she may just sit there for a bit as she uploads the zip file (remember, she’s big currently). Also, deleteFunction may throw an error if the function doesn’t exist yet, and that’s ok.

[Screenshot: output of node build --build --destroy]

My deleteFunction threw an error, but that’s ok; the create worked. Now that we have a function, let’s try just a destroy via node build --destroy:

[Screenshot: output of node build --destroy]

And looking at our Lambda function list again:

[Screenshot: the Lambda function list, now empty]

Yay! Lastly, let’s just do a single build via node build --build:

[Screenshot: output of node build --build]
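
One aside before we make this a single command: if that deleteFunction error on a missing function bothers you, you could swallow just that case. Here’s a minimal sketch (the wrapper name is hypothetical; ResourceNotFoundException is the error code the AWS SDK reports when the function doesn’t exist):

const deleteFunctionIfExists = (lambda, callback)=>
{
    deleteFunction(lambda, (err, data)=>
    {
        // a missing function is fine for our destroy-then-build workflow
        if(err && err.code === 'ResourceNotFoundException')
        {
            return callback(undefined, {deleted: false});
        }
        callback(err, data);
    });
};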

The last step in this adventure is to make this all one command. Open your package.json, and let’s add a new script:

"scripts": {
    ...
    "deploy": "npm run deletezip && npm run makezip && node build --destroy --build"
},


Now, anytime you change code and want to re-test it, simply run npm run deploy.

Step 9: Multiple Environments and Green/Blue Deployment

Background on Environments

We use multiple environments for a lot of reasons. Here are the 2 most important:

  1. Your code can continue serving customers while you work on it.
  2. Your code often breaks when it moves to a new environment. You create at least three environments so there is no penalty for breaking production when you move your code from Dev to QA. Once you’ve practiced this a couple of times, you can make the move from QA to Prod with more confidence.

Environment is a higher-level term, but it usually means a different server with a different URL or IP address. Even servers with the exact same hardware, OS, versions, and software can still break your code. Some issues you know about; others are part of the discovery process of moving environments.

There is no standard, but the most typical setup is Dev for “developers to play with code,” QA for “quality assurance people to test the most solid code,” Staging as the last environment check before Prod, and Prod for production.

Some teams use more. Some use less.

While there is a strong push towards immutable infrastructure, prod still remains special. In immature software organizations, prod is often the only thing users see. There is a trick to ensure prod keeps working when you push new code to it, called green/blue deployment. You create two virtual servers on prod. They are identical, with one difference: one is named green, the other blue. Users hit a URL that points to the green server. You move code to blue. Once you validate blue is working, you switch the URL to point to blue. This is really fast. Sometimes. This is also really easy to undo if something goes wrong. Sometimes.

Environments for Lambdas?

Given that Lambdas are “always up,” yet you’re only charged when they actually run, do you really care about environments? What about green/blue deployment?

Sadly, yes.

Security

You’ll still potentially have different security in place for each environment. In AWS’ case, this is VPC (virtual private cloud), subnets, and security groups (things that determine what you can do, what ports are open, etc.).

Devs Use Dev

You’ll have users, and in the case of microservices other developers, actively depending upon certain environments being accessible and behaving a certain way.

Devs Use QA for Their QA

Additionally, microservices are often one link in a chain in a larger application, much of which they intentionally don’t know about. As such, other teams may use your QA service in their QA environment to mirror how you’re developing software.

No Downtime

While updating a Lambda is super quick (seconds, if you’re just updating the function code without publishing a new version), you’ll still cause downtime if someone’s hitting that function while the update happens.
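
For reference, here’s a minimal sketch of what that in-place update looks like with the SDK, assuming the lambda client and FUNCTION_NAME from our build.js; the deploy.zip filename is an assumption, so use whatever yours is called:

const updateFunctionCode = (lambda, fs, callback)=>
{
    lambda.updateFunctionCode({
        FunctionName: FUNCTION_NAME,
        ZipFile: fs.readFileSync('./deploy.zip'), // deploy.zip is an assumption
        Publish: false // just overwrite $LATEST; don't publish a new version
    }, callback);
};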

Rollback Window

If you update a Lambda and break the code, you increase that break window until your rollback to the old code is finished. Lambda versions can potentially help here (I don’t use them).

Green/Blue With Lambdas

Finally, the easiest way to update production Lambdas is still green/blue. Users can actively be using them, or they can be in use by an active AWS Step Functions process, and you can ensure you don’t cause them downtime. The muddy ground here is “What is the URL?”

For example, not all Lambdas are triggered by an API Gateway URL. While you can change the Lambda function an API Gateway is aimed at by just changing the ARN, things get a tiny bit more complicated with S3 buckets, SNS, and CloudWatch notifications. For instance, at the time of this writing, S3 buckets only allow one notification configuration per bucket. You can remove that notification and add it back, but if you’re dealing with extremely low-latency systems, you can miss ObjectCreated:Put events to the bucket while your infrastructure code is running. There are remedial steps here, no doubt; just recognize that green/blue is still a useful deployment pattern to minimize downtime, regardless of environment.

For us, we’ll use the name, such as “myLambda-green” and “myLambda-blue”. The S3 bucket notification, or API Gateway URL, or whatever, points to that named Lambda.
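
As a minimal sketch (these helpers are hypothetical):

const LIVE_COLOR  = 'green'; // flip to 'blue' once the new deploy validates
const getLiveName = (name)=> name + '-' + LIVE_COLOR;
const getIdleName = (name)=> name + '-' + (LIVE_COLOR === 'green' ? 'blue' : 'green');
// deploy new code to getIdleName('myLambda'), test it, then repoint your
// trigger (bucket notification, API Gateway ARN, etc.) and flip LIVE_COLOR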

Where Does Environment Go?

Two places. There are 2 people who care about the environment: you, and the Lambda. You put it as part of the Lambda name so you can easily scan the function list in the Lambda console and debug in CloudWatch more easily, and so the Lambda itself can log which environment it is. You’ll end up with many Lambdas in a list that have names like “myLambda-dev-green”, “myLambda-dev-blue”, “myLambda-qa-green”, etc.

The Lambda learns what environment it’s in from environment variables, which can be encrypted if you wish. This ensures your code has no state, and just adopts whatever is there, regardless of its name. Given that many enterprises have separate AWS accounts both for different parts of the organization and for production environments, you can help minimize the pain by making your code stateless and configurable via environment variables.

[Screenshot: Lambda environment variables in the AWS Console]
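
If you’re creating functions through the SDK like we are, createFunction accepts an Environment parameter. Here’s a minimal sketch; the variable names are just examples:

const params = {
    ...
    FunctionName: FUNCTION_NAME,
    Environment: {
        Variables: {
            ENVIRONMENT: 'qa',  // which environment this deploy belongs to
            COLOR: 'green'      // which side of green/blue this is
        }
    }
};

And inside the Lambda itself, you just read it back out:

const ENVIRONMENT = process.env.ENVIRONMENT || 'dev';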

Regions

Regions get a bit more complicated, but the short version is that you just follow the same pattern: name things explicitly, and use environment variables. Many AWS SDK calls require a region parameter, so avoid typing ‘us-east-1’ directly into your code.
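
A minimal sketch of that pattern (inside Lambda, AWS_REGION is set for you at runtime; for local scripts, you’d set it yourself):

// read the region from the environment instead of hardcoding it
AWS.config.update({region: process.env.AWS_REGION || 'us-east-1'});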

So… What Do I Do Again?

Same Lambda, just a different name for it to differentiate between Dev, QA, and Prod, as well as green and blue. Your createFunction just gets a different string. For example, “myLambda-green” and “myLambda-blue”. If you use environments, you can put those in the name too: “myLambda-qa-green” and “myLambda-qa-blue”.

In our code above, we’ve defined dev without a color, but you could simply add more strings:

const DEFAULT_ENVIRONMENT = 'dev';
const COLOR               = 'green';
const FUNCTION_NAME       = NAME_PREFIX + '-' + DEFAULT_ENVIRONMENT + '-' + COLOR;


Then run node build --build for each color and environment.
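
If typing that repeatedly gets old, a hypothetical helper can generate every combination for you (assumes lodash and the NAME_PREFIX from build.js):

const ENVIRONMENTS = ['dev', 'qa', 'prod'];
const COLORS       = ['green', 'blue'];
const getAllFunctionNames = ()=>
    _.flatMap(ENVIRONMENTS, (env)=>
        COLORS.map((color)=> NAME_PREFIX + '-' + env + '-' + color));
// ['myLambda-dev-green', 'myLambda-dev-blue', 'myLambda-qa-green', ...]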

Wait, Why Not Aliases and Versions?

Lambda Aliases (dev, qa, and prod; green, blue; staging, live) and the Versions they point to (1, $LATEST, Git tag v2.3.7) are great. They allow environments and code versions to be managed in a centralized place, potentially more easily, if you use the AWS Console heavily. However, configuration and environment variables are global and can only be modified on the $LATEST version.

Given that I work in a high-security environment where I test multiple configurations, and that, unlike S3 buckets, there is currently no cost to creating multiple Lambdas, multiple named functions are more flexible for the workflow my team has.

Step 10: Integration Test

We unit tested her locally. We manually tested her locally. We manually tested her remotely. In Step 7 we added an echo to more easily test if our Lambda is working without having it do actual work. Let’s automate all that.

Our first integration test invokes the Lambda function. It’s almost exactly the same thing as clicking the Test button, except Node is doing it instead of your finger. Create a new file called index.integrationtest.js and copy pasta this code into it:

const AWS      = require('aws-sdk');
AWS.config.loadFromPath('./credentials.json');
const lambda   = new AWS.Lambda();
const log = console.log;
const expect = require("chai").expect;
const should = require('chai').should();
const {
    handler
} = require('./index');
describe('#index integration', function()
{
    this.timeout(10 * 1000);
    describe('#echo', ()=>
    {
        it('responds to an echo', (done)=>
        {
            var params = {
                FunctionName: "datDJMicrodev", 
                Payload: JSON.stringify({echo: true})
            };
            lambda.invoke(params, (err, data)=>
            {
                log("err:", err);
                log("data:", data);
                done(err);
            });
        });
    });
});


This will send the echo to our Lambda, and she’ll send the pong back. Run it via mocha index.integrationtest.js, and she should report a true:

[Screenshot: the integration test passing]
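
Right now, the test only fails if the invoke itself errors. If you want it to assert on the echo response too, you can parse the Payload; here’s a minimal sketch, assuming the echo handler from Step 7 responds with true:

lambda.invoke(params, (err, data)=>
{
    if(err) return done(err);
    // Payload comes back as a JSON string; our echo should be true
    expect(JSON.parse(data.Payload)).to.be.true;
    done();
});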

Conclusions

Creating a microservice for AWS Lambda is easy and straightforward; we’ve built one here in this article. Typically, though, you build many of them. Even if you’re building a monolith application and your one microservice is pretty large, all the deployment problems still exist. As you’ve hopefully seen, AWS APIs give you powerful tools to put both the deployment and the testing of those deployments in your hands.

As a developer, you can now write the microservice, tooling, testing, and deployment all in the same language, in the same code repository. The code that handles your infrastructure follows the same peer review and unit/integration testing best practices. Your infrastructure is now testable, high-quality code controlled by you, the developer. While we’ve created multiple Lambdas that represent their environment and version, AWS provides powerful tools to handle Lambda Aliases and Versions, both in code and in the AWS console.

Frameworks like Serverless help you manage multiple services, give you a nice way to test locally, and help if you’re not familiar with AWS. YAML isn’t code, however. Writing your service and your infrastructure in the same language helps you adopt the same testing and debugging practices for both.



Published at DZone with permission of Jesse Warden, DZone MVB. See the original article here.

