
Building a Start-Up Using Serverless Technologies — Part 1


See how you can use serverless technologies to set up a whole startup, including serverless functions and a DynamoDB table.



In this series, we'll be looking at how to create a start-up using only serverless technologies.

We'll be using AWS Lambda and other AWS technologies, the Serverless framework, and of course, Go!

In this part of the series, we'll start by creating our serverless project and explore some of the basic concepts of building serverless APIs. By the end of this part, we will have a couple of serverless functions and a DynamoDB table.


First, you will need to create an AWS account, generate some security credentials, and ensure those are active on your machine. You can do this with the AWS CLI tool: $ aws configure.

You will also need to install Serverless: $ npm install -g serverless.

What Will We Be Building?

We will be building a start-up which allows self-employed people to keep track of work they've done for each client, and to automatically send each client an invoice on a set date of the month. Pretty simple, and also kind of useful. I wanted us to build something with actual value, that we could get something out of. I'm aware this probably isn't some innovative new business idea (if it were, I wouldn't be using it as a tutorial), but it's useful enough to serve as a real-world example.

Why Serverless?

Whether you're a start-up or a big business, dealing with infrastructure is a huge cognitive and financial overhead for any team. Typically, you'd have DevOps engineers or ops people constantly tweaking and maintaining the health and performance of technologies running on servers. Even in the cloud, servers have to be maintained and updated.

One of the great things about microservices is how your domain models are contextually bounded by each service: each service represents a single entity and its functionality. This makes software easier to reason about, especially in large systems. The downside is that you now have to worry about networking, service discovery, and communication between services.

Serverless treats your services as a group of functions. As in functional programming, functions are created and combined to build something bigger. Part of the power of serverless is this functional basis: your system becomes a mesh of functions which, when combined, create a system you can reason about, and which you can test and update with a greater degree of confidence.

Finally, from a business point of view: with servers, you're paying for the entire time they're running, which in all probability is 24/7. Sure, you can use auto-scaling to cut down some of those costs when traffic is quiet, but you're still paying for a constant resource.

With serverless, that model is flipped on its head. Instead of paying for a constant resource, you only pay when your code is actually being used; in other words, you're not paying for your code while it sits on a server doing nothing. This means you can really reduce your costs.

So what are the downsides? Well, your function can sometimes take a little longer to fire up: when it hasn't been used for a short period, it's powered down, so it has to be started back up again on the next request. This is known as a "cold start" and can typically take around a second. Once the function has fired up, though, it's kept warm in the background for a while in case more requests follow, and you only ever pay per 100ms of actual run time. Cold-start performance is getting better, and I'd say it's still acceptable for most use-cases.
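To make the billing model concrete, here's a small sketch of how a per-invocation duration rounds up to the next billing increment (100ms granularity at the time of writing; `billedMillis` is just an illustrative helper, not part of any AWS SDK):

```go
package main

import "fmt"

// billedMillis rounds an invocation's duration up to the next
// billing increment of the given size.
func billedMillis(durationMs, increment int) int {
	if durationMs <= 0 {
		return increment
	}
	return ((durationMs + increment - 1) / increment) * increment
}

func main() {
	fmt.Println(billedMillis(23, 100))  // a 23ms invocation bills as 100ms
	fmt.Println(billedMillis(101, 100)) // just over one increment bills as 200ms
	fmt.Println(billedMillis(300, 100)) // an exact multiple bills as 300ms
}
```

So a function that finishes in 23ms still bills as a full increment, but crucially, you pay nothing at all between invocations.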

Another case where serverless probably isn't a good idea is if you have a very hot service: something dealing with, for example, hundreds of thousands of requests per second, i.e. a constant, considerable load of traffic. In that case, cold starts would probably be less acceptable, and serverless would likely become far more expensive than standard servers.

Let's Begin!

First of all, let's create our serverless project:

$ serverless create -t aws-go-dep -p invoicely 

This will create a new serverless project, using the AWS + Go + Dep template and do a bunch of set-up for us. You should have a new directory which looks like this:

We'll start by making a few small tweaks. First up, we'll create a new directory called functions; it feels a little nicer not to have all of our functions in the root, especially as we add a user interface, etc.

Then delete the two functions the template created for us, hello and world; we won't be using those. Though feel free to take a quick look at the code first to see how a basic Lambda is set up.

package main

import (
	"github.com/aws/aws-lambda-go/lambda"
)

type Response struct {
	Message string `json:"message"`
}

func Handler() (Response, error) {
	return Response{
		Message: "Go Serverless v1.0! Your function executed successfully!",
	}, nil
}

func main() {
	lambda.Start(Handler)
}
It's incredibly simple: Lambda starts a handler function, which returns a response, just like any old HTTP request. Ours will be a little different, as we'll be using the AWS API Gateway in front of all our Lambdas, but we'll go into that more later.

For now, let's create two new directories within our new functions directory: create-client and get-clients. In each of those, create a main.go file. Then open functions/create-client/main.go; let's create our first function and start storing some data. But first, let's add a test. Another great thing about Lambda functions: they're so, so easy to test. You effectively pass in a spoofed request, then compare what you expect against what the Lambda returns as a response. Easy!

// functions/create-client/main_test.go
package main

import (
	"net/http"
	"testing"

	"github.com/aws/aws-lambda-go/events"
	"github.com/stretchr/testify/assert"

	// Adjust this import to match your own repository path
	"github.com/your-username/invoicely/pkg/model"
)

// A fake repository we dependency inject into
// our handler
type fakeRepo struct{}

func (repo fakeRepo) Store(*model.Client) error {
	return nil
}

func TestCanStoreClient(t *testing.T) {
	request := events.APIGatewayProxyRequest{
		Body: `{ "name": "test client", "description": "some test", "rate": 40 }`,
	}
	h := &handler{fakeRepo{}}
	response, err := h.Handler(request)
	assert.NoError(t, err)
	assert.Equal(t, http.StatusCreated, response.StatusCode)
}
Pretty easy, right? We're just passing in a request, and testing that nothing breaks. Now the function:

// functions/create-client/main.go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"

	// Adjust these imports to match your own repository path
	"github.com/your-username/invoicely/pkg/datastore"
	"github.com/your-username/invoicely/pkg/helpers"
	"github.com/your-username/invoicely/pkg/model"
)

type repository interface {
	Store(*model.Client) error
}

type handler struct {
	repository repository
}

// Handler is our lambda handler
func (h handler) Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	var client *model.Client

	// Unmarshal the request body into our client model
	if err := json.Unmarshal([]byte(request.Body), &client); err != nil {
		return helpers.ErrResponse(err, http.StatusInternalServerError)
	}

	// Call our repository and store our client
	if err := h.repository.Store(client); err != nil {
		return helpers.ErrResponse(err, http.StatusInternalServerError)
	}

	// Return a success response
	return helpers.Response(map[string]bool{
		"success": true,
	}, http.StatusCreated)
}

func main() {
	conn, err := datastore.CreateConnection(os.Getenv("REGION"))
	if err != nil {
		log.Fatalf("could not connect to DynamoDB: %v", err)
	}
	repository := &model.ClientRepository{Conn: conn}
	h := handler{repository}
	lambda.Start(h.Handler)
}
So here we have our first function, fully tested and ready to go! You'll notice a few libraries I've created in a directory named pkg. According to the recommended Go project structure, pkg is a place to house reusable Go code. I've created a library for connecting to the datastore, a helper for building responses (which can be a little tedious without one), and finally a package for our core data models, as multiple functions will use the same models.

Here's our datastore function:

// pkg/datastore/dynamodb.go
package datastore

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// CreateConnection to dynamodb
func CreateConnection(region string) (*dynamodb.DynamoDB, error) {
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String(region),
	})
	if err != nil {
		return nil, err
	}
	return dynamodb.New(sess), nil
}
Pretty straightforward: it takes a region as an argument, creates a connection to AWS DynamoDB, and returns the datastore instance, along with an error if applicable.

Now, our helper functions:

// pkg/helpers/handler.go
package helpers

import (
	"encoding/json"

	"github.com/aws/aws-lambda-go/events"
)

// Response is a wrapper around the api gateway proxy response, which takes
// an interface argument to be marshalled to json and returned, and a status code
func Response(data interface{}, code int) (events.APIGatewayProxyResponse, error) {
	body, _ := json.Marshal(data)
	return events.APIGatewayProxyResponse{
		Body:       string(body),
		StatusCode: code,
	}, nil
}

// ErrResponse returns an error in a specified format
func ErrResponse(err error, code int) (events.APIGatewayProxyResponse, error) {
	data := map[string]string{
		"err": err.Error(),
	}
	body, _ := json.Marshal(data)
	return events.APIGatewayProxyResponse{
		Body:       string(body),
		StatusCode: code,
	}, err
}
These just allow us to pass our response data in, in any format, and have the JSON marshaling, etc., handled consistently every time. This lets our functions focus on wrangling data and actually doing the important stuff!
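To see what the helper actually produces, here's a stripped-down stand-in for Response, with the API Gateway types swapped for a plain struct (`response` and `buildResponse` are local illustrative names, not part of the real package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// response mirrors the shape of an API Gateway proxy response:
// a JSON string body plus a status code.
type response struct {
	Body       string
	StatusCode int
}

// buildResponse performs the same marshalling step as the real
// helpers.Response, just without the AWS types.
func buildResponse(data interface{}, code int) response {
	body, _ := json.Marshal(data)
	return response{Body: string(body), StatusCode: code}
}

func main() {
	res := buildResponse(map[string]bool{"success": true}, 201)
	fmt.Println(res.StatusCode, res.Body) // 201 {"success":true}
}
```

Whatever shape of data you pass in, marshal-able to JSON, comes back as a string body ready for API Gateway.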

Finally, our first model and repository:

// pkg/model/model.go
package model

// Client model
type Client struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
	Rate        int32  `json:"rate"`
}

// pkg/model/repository.go
package model

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
	uuid "github.com/satori/go.uuid"
)

// ClientRepository stores and fetches clients
type ClientRepository struct {
	Conn *dynamodb.DynamoDB
}

// Store a new client
func (repository *ClientRepository) Store(client *Client) error {
	id := uuid.NewV4()
	client.ID = id.String()
	av, err := dynamodbattribute.MarshalMap(client)
	if err != nil {
		return err
	}
	input := &dynamodb.PutItemInput{
		Item:      av,
		TableName: aws.String("Clients"),
	}
	_, err = repository.Conn.PutItem(input)
	return err
}

// Fetch a client
func (repository *ClientRepository) Fetch(key string) (*Client, error) {
	var client *Client
	result, err := repository.Conn.GetItem(&dynamodb.GetItemInput{
		TableName: aws.String("Clients"),
		Key: map[string]*dynamodb.AttributeValue{
			"id": {
				S: aws.String(key),
			},
		},
	})
	if err != nil {
		return nil, err
	}
	if err := dynamodbattribute.UnmarshalMap(result.Item, &client); err != nil {
		return nil, err
	}
	return client, nil
}

As you may have guessed, we're using AWS DynamoDB as our data storage engine. DynamoDB is Amazon's answer to NoSQL databases such as MongoDB. It's very powerful, and it's already scaled for you, so you don't have to worry about maintaining or scaling a datastore cluster. This is the beauty of using AWS technologies: a lot of the hard work is done for you. If you do opt for a traditional database rather than a serverless one, just be wary of how many open connections your database can support. If you suddenly fire up a thousand concurrent Lambda functions, each of which sets up its own connection, this can cause problems. Just remember that Lambdas are highly concurrent.
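One common mitigation is to create the connection once, outside the handler, so that warm invocations of the same container reuse it instead of opening a fresh connection per request. This is exactly what our main function does by building the repository before handing the handler to Lambda; here's the pattern in isolation, with a dummy `fakeConn` standing in for the real DynamoDB client:

```go
package main

import "fmt"

// fakeConn stands in for the real *dynamodb.DynamoDB client.
type fakeConn struct{ id int }

var (
	conn        *fakeConn
	connections int
)

// newConnection simulates an expensive connection set-up; the counter
// lets us see how many times it actually runs.
func newConnection() *fakeConn {
	connections++
	return &fakeConn{id: connections}
}

// handler reuses the package-level connection on every invocation,
// just as our Lambda handler reuses the repository built in main.
func handler() int {
	return conn.id
}

func main() {
	// Runs once per container, not once per request.
	conn = newConnection()
	for i := 0; i < 3; i++ {
		fmt.Println("handled with connection", handler())
	}
	fmt.Println("connections opened:", connections) // connections opened: 1
}
```

Three invocations, one connection; move the set-up inside the handler and you'd open one per request instead.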

Now that we have some code to make our functions nice and neat, and most importantly the function itself, it's time to deploy our function using the Serverless CLI.

First, take a look at the serverless.yml file in the root of our project, and ensure you have something like this:

service: invoicely

# Provider relates to your cloud provider here, so AWS, Google etc
provider:
  name: aws
  runtime: go1.x
  region: eu-west-1
  environment:
    REGION: "eu-west-1"
  # Sets access and permissions for these functions
  # In this case, we're allowing our function to talk
  # to a DynamoDB instance within the same region.
  iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Query
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem
      - dynamodb:UpdateItem
      - dynamodb:DeleteItem
    Resource: "arn:aws:dynamodb:eu-west-1:*:table/*"

package:
  exclude:
    - ./**
  include:
    - ./bin/**

functions:
  create-client:
    handler: bin/create-client
    # Events in our case is http, which will create an API Gateway
    # we pass in a few more options, such as allowing permissive CORS options
    # the method type and the path. The path is the uri.
    # There are many types of events you can use other than http, such as
    # SNS, S3, Alexa, Kinesis, etc. Really cool!
    events:
      - http:
          path: createClient
          method: post
          cors:
            origins:
              - '*'

# These are Cloudformation resources, in our case, we're creating
# a DynamoDB table with a key of `id`, which is just a string.
resources:
  Resources:
    ClientsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Clients
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1

Please read the comments I've left in the above example, as they explain a little about what each section is for. The gist is: we're telling Serverless which provider to use, giving it some information about what access our function should have, creating our function, referencing our function code (which will be a compiled binary), and then providing some additional CloudFormation config, which will create our DynamoDB table.

Now all we need to do is compile our binary and deploy our serverless function!

I've updated our Makefile:

build:
	dep ensure
	env GOOS=linux go build -ldflags="-s -w" -o bin/create-client functions/create-client/main.go

Now run $ make && sls deploy. You should see something like this:

If you take a look in the AWS console under Lambda, you should be able to see your new function. And if you check under DynamoDB, you should see your new Clients table. Great! That was easy!

Take that URL and post some data to it:

Now check your DynamoDB table:

There we have it! Our first serverless function, passing data into a datastore. In the next part of this series, we'll hook up a few more functions and start building out more functionality. We'll also look at authentication using JWT and custom authorizers. Authorizers are like middleware functions through which we can proxy all of our other requests to ensure they carry valid access tokens.
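As a small taste of what's to come, the heart of an authorizer is just a function that inspects the incoming token and allows or denies the request. Here's a bare sketch with no real JWT validation (`validateToken` and `authorize` are illustrative placeholders; the real version, covered next time, uses the API Gateway authorizer event types):

```go
package main

import (
	"fmt"
	"strings"
)

// validateToken is a placeholder for real JWT verification.
func validateToken(token string) bool {
	return token != ""
}

// authorize mimics the shape of a custom authorizer: extract the
// bearer token from the Authorization header, then allow or deny.
func authorize(authHeader string) string {
	token := strings.TrimPrefix(authHeader, "Bearer ")
	if validateToken(token) {
		return "Allow"
	}
	return "Deny"
}

func main() {
	fmt.Println(authorize("Bearer some-token")) // Allow
	fmt.Println(authorize(""))                  // Deny
}
```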

You can check out the repo here.

Sponsor me on Patreon to support more content like this.



Published at DZone with permission of
