Serverless Delivery: Architecture (Part 1)
If your application tech stack doesn’t need servers, why should your continuous delivery pipeline? Serverless applications deserve serverless delivery!
The software development discipline of continuous delivery has had a tremendous impact on decreasing the cost and risk of delivering changes while simultaneously increasing code quality by ensuring that software systems are always in a releasable state. However, the existing tools and techniques for this practice do not always align well with serverless application frameworks and platforms. This post is the first in a three-part series that looks at how to implement the same fundamental tenets of continuous delivery while utilizing tools and techniques that complement a serverless architecture in Amazon Web Services (AWS).
Here are the requirements for the serverless delivery pipeline:
- Continuous – the pipeline must be capable of taking any commit on master that passes all test cases through to production
- Quality – the pipeline must include unit testing, static code analysis, and functional testing of the application
- Automated – the provisioning of both the pipeline and the application must be done from a single CloudFormation command
- Reproducible – the CloudFormation template should be able to run in a new AWS account with no additional setup other than the creation of a Route53 hosted zone
- Serverless – all layers of the application must run on platforms that meet the definition of serverless described in the next section
What Is Serverless?
What exactly is a serverless platform? Obviously, there is still hardware at some layer of the stack to host the application. However, what Amazon has provided with Lambda is a platform where developers no longer need to think about the following:
- Operating system – no need to select, secure, configure, administer, or patch the OS
- Servers – no cost risk of over-provisioning and no performance risk of under-provisioning
- Capacity – no need to monitor utilization and scale capacity based on load
- High availability – compute resources are available across multiple Availability Zones (AZs)
In summary, a serverless platform is one on which an application can be deployed without having to provision or administer any of the resources within the platform. Just show up with some code, and the platform handles all the 'ilities'.
The diagram above highlights the components that are used in this serverless application. Let's look at each one individually:
- Lambda – this is where your application logic runs. You deploy your code here without needing to specify the number or size of servers to run it on. You pay only for the number of requests and the amount of time those requests take to execute.
- API Gateway – API Gateway exposes your Lambda function at an HTTP endpoint. It provides capabilities such as authorization, policy enforcement, rate limiting, and data transformation as a service that is entirely managed by Amazon.
- DynamoDB – dynamic data is stored in a DynamoDB table. DynamoDB is a NoSQL datastore whose storage and throughput capacity scale nearly without limit, entirely managed by Amazon.
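Wired together, the request path is API Gateway → Lambda → DynamoDB. The sketch below shows a handler for that path; the table name (`Todos`) and key shape (`{ id }`) are hypothetical, and the DocumentClient is injected as a parameter so the handler can be exercised without AWS credentials. In Lambda you would pass `new AWS.DynamoDB.DocumentClient()` from the `aws-sdk` package.

```javascript
// Sketch of an API Gateway -> Lambda -> DynamoDB read path.
// Table name and key shape are assumptions for illustration only.
function createHandler(docClient, tableName) {
  return function (event, context, callback) {
    // API Gateway proxy integration puts path variables in pathParameters.
    var id = event.pathParameters && event.pathParameters.id;
    if (!id) {
      return callback(null, { statusCode: 400, body: JSON.stringify({ error: 'missing id' }) });
    }
    // DocumentClient.get fetches a single item by primary key.
    docClient.get({ TableName: tableName, Key: { id: id } }, function (err, data) {
      if (err) return callback(err);
      if (!data.Item) {
        return callback(null, { statusCode: 404, body: JSON.stringify({ error: 'not found' }) });
      }
      callback(null, { statusCode: 200, body: JSON.stringify(data.Item) });
    });
  };
}
```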
The diagram below compares the pricing for running a Node.js application with Lambda and API Gateway versus a pair of EC2 instances behind an ELB. Notice that for the m4.large, the break-even point is around two million requests per day. It is important to mention that 98.8% of the cost of the serverless deployment comes from API Gateway; the cost of running the application in Lambda is insignificant relative to the cost of API Gateway.
This cost analysis shows that applications and environments with low transaction volume can realize savings by running on Lambda + API Gateway, but API Gateway becomes cost prohibitive at higher scale.
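The shape of that comparison can be sketched with back-of-the-envelope arithmetic. All prices below are assumptions, roughly the list prices of that era; check the current AWS pricing pages before relying on them, and note the function size and duration are also assumed:

```javascript
// Illustrative prices only (assumed us-east-1 on-demand rates).
var PRICES = {
  apiGatewayPerMillionReqs: 3.50,
  lambdaPerMillionReqs: 0.20,
  lambdaGbSecond: 0.00001667,
  m4LargePerHour: 0.12,
  elbPerHour: 0.025
};

// Serverless side: cost scales linearly with request volume.
// Assumes a 128 MB function averaging 100 ms per request.
function serverlessDailyCost(requestsPerDay) {
  var perReq = (PRICES.apiGatewayPerMillionReqs + PRICES.lambdaPerMillionReqs) / 1e6
             + PRICES.lambdaGbSecond * 0.1 * (128 / 1024);
  return requestsPerDay * perReq;
}

// EC2 side: a pair of m4.large instances behind an ELB,
// billed around the clock whether or not any requests arrive.
function ec2DailyCost() {
  return (2 * PRICES.m4LargePerHour + PRICES.elbPerHour) * 24;
}
```

With these assumed numbers, the EC2 pair costs about $6.36 per day regardless of traffic, and the serverless side crosses that line at roughly 1.6 million requests per day, in the same ballpark as the two-million break-even in the diagram. Note also that $3.50 of the roughly $3.91 per-million-request cost is API Gateway, matching the observation that it dominates the serverless bill.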
What Is Serverless Delivery?
Serverless delivery is simply the application of serverless platforms to achieve continuous delivery fundamentals. Specifically, a serverless delivery pipeline does not include tools such as Jenkins or resources such as EC2 instances, Auto Scaling groups, and ELBs.
The diagram above shows the technology used to accomplish serverless delivery for the sample application. Let's look at what each component provides:
- AWS CodePipeline – orchestrates various Lambda tasks to move code that was checked into GitHub toward production.
- AWS S3 artifact bucket – each action in the pipeline can create a new artifact. The artifact becomes an output from that action, is stored in an S3 bucket, and becomes an input for the next action.
- AWS Lambda – you create Lambda functions to do the work of individual actions in the pipeline. For example, running a gulp task on a repository is handled by a Lambda function.
- npm and gulp – npm is used to resolve all the dependencies of a given repository. gulp is used to define the tasks of the repository, such as running unit tests and packaging artifacts.
- AWS CloudFormation – CloudFormation templates are used to create and update all of these resources in a serverless stack.
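A Lambda function acting as a pipeline action receives a `CodePipeline.job` event describing the work to do. The sketch below parses that event into the pieces an action needs; the event shape follows CodePipeline's invoke payload, while the `UserParameters` content (`"task": "test"`) is a hypothetical example, and the reporting calls are shown only as comments:

```javascript
// Sketch of the plumbing for a Lambda function invoked as a
// CodePipeline action. parseJob is a pure function so it can be
// unit tested without AWS credentials.
function parseJob(event) {
  var job = event['CodePipeline.job'];
  return {
    jobId: job.id,
    // UserParameters is a free-form string configured on the action.
    userParams: JSON.parse(job.data.actionConfiguration.configuration.UserParameters || '{}'),
    inputArtifacts: job.data.inputArtifacts || []
  };
}

// After doing the work (e.g. fetching the input artifact from S3 and
// running a gulp task against it), the function reports back, roughly:
//
//   var codepipeline = new AWS.CodePipeline();
//   codepipeline.putJobSuccessResult({ jobId: job.jobId }, done);
//   // or, on failure:
//   codepipeline.putJobFailureResult({
//     jobId: job.jobId,
//     failureDetails: { type: 'JobFailed', message: err.message }
//   }, done);
```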
Serverless Delivery for Traditional Architectures?
Although this serverless delivery architecture could be applied to more traditional application architectures (e.g., a Java application on EC2 and ELB resources), the challenge might be having pipeline actions complete within the 300-second Lambda execution limit. For example, running Maven phases within a single Lambda invocation (resolving dependencies, compilation, unit testing, and packaging) would likely be difficult. There may be opportunities to split the goals into multiple invocations and persist state to S3, but that is beyond the scope of this series.
The pricing model for Lambda is favorable for applications that have idle time, since cost grows linearly with the number of executions. The diagram below compares the pricing for running the pipeline with Lambda and CodePipeline against a Jenkins server running on an EC2 instance. For best performance, the Jenkins server ought to run on an m4.large instance, but just to highlight the savings, m3.medium and t2.micro instances were evaluated as well. Notice that for the m4.large, the break-even point comes at over 600 builds per day, and even with a t2.micro, break-even doesn't happen until well over 100 builds per day.
In conclusion, running a continuous delivery pipeline with CodePipeline + Lambda is very attractive because of the cost efficiency of utility pricing, the simplicity of a managed-services environment, and the tech-stack parity of using Node.js for both the application and the pipeline.
Next week, we will dive into part two of this series, looking at what changes need to be made for an Express application to run in Lambda and the CloudFormation templates needed to create a serverless delivery pipeline. Finally, we will conclude with part three, going into the details of each stage of the pipeline and the Lambda functions that support them.
Published at DZone with permission of Casey Lee, DZone MVB. See the original article here.