
Using Docker in API Gateway and Microservice Development



As you progress in your education as a developer, you will sooner or later understand the benefits that a container system like Docker has to offer: you can specify your environment in code, without all the Slack messages to system engineers or the headaches that go into standing up consistently provisioned servers. Likewise, you have probably understood the appeal of microservices as a solution to the problems of a monolithic application mired in its own debt-ridden smell.

This article offers some insights into how you can leverage Docker in the development flow of your microservices.

Docker to Support an Isolated Microservice

Development of a microservice itself ought to be fairly straightforward; from an environment perspective, developing one should not be all that different from developing a more traditional application. Perhaps your microservice needs to support an API endpoint or two and connect to a couple of data stores like MySQL or Redis; with that in place, you can be off to the races pretty quickly. This is Docker 101 stuff. You can tap into well-supported existing Docker projects like Laradock or NoDock (for PHP and Node.js, respectively), which offer developers an integrated Docker environment that supports an array of common technologies networked together via docker-compose.

A Quick Heads-Up

The first heads-up I'd like to give anyone working with Docker is that its pace of development has been pretty fast: even fairly recent courses may refer to commands or utilities that have already been deprecated (e.g., docker-machine). Be prepared to grit your teeth a bit, scratch your head, and navigate some unfamiliar error messages. Once you get past the bouncers, however, membership in the Docker club is worth it.

Docker Standalone

Of course, before we begin, make sure you've got Docker installed on your computer. See Docker.com to download the client for your host operating system (the Community Edition, CE, is fine for our purposes).

If you need to run a specific technology such as a scripting language or operating system, chances are good that someone has already created a Docker image for it. Docker Hub is your friend when it comes to reusing code that others have so generously shared. Remember: do not reinvent wheels! Note that for some reason, the site is labeled to search for containers, when you are in fact searching for images. Remember: images are templates; multiple container instances can be created from a single image.

In a nutshell, your interactions here should revolve around pulling the image (using the pull command) and then running a container from it (using the run command). For example, this is all you need to do to get a working copy of Postgres:

docker pull postgres
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

If you look at the corresponding Git repository for any image, its Dockerfile contains the steps required to build that image, e.g., for the Postgres image.

Usually the first instruction in a Dockerfile is the FROM instruction: this extends the named base image, so you can see from the get-go that there is a massive incentive for authors to reuse existing images.
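As a quick illustration, a minimal Dockerfile for a hypothetical PHP command-line tool might extend an official image like this (the package choices and file names here are assumptions, not taken from any particular project):

```dockerfile
# Extend the official PHP image rather than building from scratch
FROM php:7.1-cli-alpine

# Add only what this particular image needs on top of the base
RUN apk update && apk add git

# Copy the project into the image and set the working directory
COPY . /app
WORKDIR /app

CMD ["php", "main.php"]
```

Everything the base image already set up (the PHP runtime, its extensions, the Alpine userland) comes along for free; the new Dockerfile only layers its own changes on top.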

Docker Compose

In many scenarios, you will find it useful to connect separate Docker containers together. If your application requires a specific version of PHP and a specific version of Postgres, no problem: find the Docker images for each and reference them in a docker-compose.yml file.

For many use-cases, docker-compose will be the single most important tool for tying your containers together. For each microservice, you will be able to reference new and existing Docker images and define their relationships via your docker-compose.yml file.

For example, here is how we might define an environment to support PHP 7 and Postgres behind an NGINX web server. Let's assume our repository root has a composer.json and a folder for public web files named public/.

# PHP + Postgres microservice docker-compose.yml
version: '2'
services:
  nginx:
    image: nginx:1.13-alpine
    ports:
      - 3000:80
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
  web:
    # This build command is what
    # builds the neighboring Dockerfile
    build: .
    ports:
      - 9000:9000
    volumes:
      - .:/var/www
      - /var/www/vendor
    depends_on:
      - postgres
    environment:
      DATABASE_URL: postgres://todoapp@postgres/todos
  postgres:
    image: postgres:9.6.2-alpine
    environment:
      POSTGRES_USER: todoapp
      POSTGRES_DB: todos

Most of this is port- and volume-mapping. Where is PHP? It is somewhat obscured by the build: . command. Stated more verbosely, that command runs docker build ., so it is expecting there to be a Dockerfile sitting next to this docker-compose.yml file. The PHP Dockerfile might look something like this:

FROM php:7.1-fpm-alpine
RUN apk update && apk add build-base
RUN apk add postgresql postgresql-dev \
    && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
    && docker-php-ext-install pdo pdo_pgsql pgsql
RUN apk add zlib-dev git zip \
    && docker-php-ext-install zip
RUN curl -sS https://getcomposer.org/installer | php \
    && mv composer.phar /usr/local/bin/ \
    && ln -s /usr/local/bin/composer.phar /usr/local/bin/composer
# References the virtual path
COPY . /var/www
WORKDIR /var/www
RUN composer install --prefer-source --no-interaction
ENV PATH="~/.composer/vendor/bin:./vendor/bin:${PATH}"

Depending on your needs, you might be able to get away with having no Dockerfile at all. Instead of a build command, your docker-compose.yml might instead reference an image; but since PHP is the server-side language being used, it is pretty likely that it will need some customization. That happens most easily inside a Dockerfile, so referencing it via the build property is probably the best way to go.

The last thing you need is an NGINX config file. Something like this should suffice:

server {
    listen 80;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public/;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        fastcgi_pass web:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

If you pay attention to the mapping done in docker-compose.yml, where your repository root corresponds to /var/www/ inside the container, you can see that, as written, it expects the nginx.conf file to be at the root of the repository.

It can be easy to get confused by the virtual paths in your nginx.conf file, so compare it with your docker-compose.yml. Specifically, the compose file maps . (the repository root) to /var/www inside the container. NGINX picks up from that point and defines its web root as /var/www/public/, which is the public/ folder from your repository.

To download these images and build containers from them, run docker-compose up. It may take a while to download and build the images, but if all goes well, you should be able to hit your new PHP app at http://localhost:3000.

Seed Data

As you develop your microservice, you will want to write tests. When using a technology like Docker that can so easily and consistently bring up the related services, you should recognize a nice opportunity to create thorough integration and functional tests that are predicated on a curated collection of seed data.

Why rely solely on unit tests and mocked services when you can hit a real database and get real responses? Although more laborious to set up, the advantage of integration tests is that they are more thorough: there are sometimes surprises and nuances that mocks do not cover.

A test run in this scenario will begin with restarting your containers and loading them with your curated seed data. This does take longer than executing simple unit tests, but it shouldn't be as slow as browser-automation or other end-user tests.
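Sketched as a shell sequence, such a test run might look like the following (the artisan and phpunit commands assume the Laravel setup discussed below; treat this as an outline under those assumptions, not a definitive script):

```shell
#!/bin/sh
# Recreate the environment from scratch so every run starts clean.
docker-compose down -v        # stop containers and drop named volumes
docker-compose up -d --build  # rebuild images and start detached

# Load the schema and curated seed data, then run the test suite.
docker-compose exec web php artisan migrate --seed
docker-compose exec web ./vendor/bin/phpunit
```

Dropping the volumes on the way down is what guarantees that each run starts from the same curated seed data rather than whatever state the last run left behind.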

One of the easiest ways to perform the seed action is by using docker-compose's exec function, which executes a command in the named container. For example, if our PHP application is a Laravel application, we can make use of its artisan command-line tool to migrate and seed our database.

docker-compose exec web php artisan migrate
docker-compose exec web php artisan db:seed

No matter which language you are using, there should be a viable way to support your database migrations and seed the database with some viable seed data for your integration tests.


Docker for the API Gateway

When you take a step back and start working on the development of the API Gateway itself, or if you need to work on a more complex service that interacts with multiple sources of data, you can end up pulling your hair out trying to come up with working versions of all the inter-related applications in your ecosystem.

A Docker image for your API Gateway application might not be terribly different from what we have discussed for an individual microservice. It will need some environment to handle requests and responses via server-side code (perhaps Go or Elixir), and it will often be attached to an authentication/authorization service so it can validate requests before proxying them to a microservice.

This might be enough: you can test any permissions logic or error handling inside your API Gateway in pretty much the same way as you would inside any of your microservice applications. If you seed your authentication service, you can test that the proper permissions are enforced for each route. You could also verify that an incoming request was proxied to a particular service, and you could mock a response if needed.
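For instance, a couple of smoke checks along these lines could verify that the gateway enforces authentication (the route, port, and token here are hypothetical placeholders, not values from the examples above):

```shell
# Unauthenticated request: the gateway should reject it
# (typically with a 401 or 403 status).
curl -i http://localhost:8080/api/todos

# Authenticated request using a seeded test token: the gateway
# should proxy it through to the microservice and return its response.
curl -i -H "Authorization: Bearer test-token" http://localhost:8080/api/todos
```

Because the token comes from your seed data, the expected outcome of each request is known in advance, which is what makes assertions like these reliable.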

This does not represent end-to-end testing, however. We will have to do a bit better if we want to be sure that specific requests yield specific responses. And that's where we come back to Docker and Docker Compose.

If we were to use docker-compose.yml as our "document of record" regarding our microservices, you can easily imagine that it could potentially list a large number of services (e.g., one for each microservice). If each service is built as a Docker image, then you can publish these images on Docker Hub as public (or private) repositories so other developers can easily pull them and build the containers needed by your application.
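Publishing works with the standard Docker commands; assuming a hypothetical organization and image name, it might look like this:

```shell
# Build and tag a local image, then push it to Docker Hub
# ("myorg" and "todo-service" are placeholders).
docker build -t myorg/todo-service:1.0 .
docker push myorg/todo-service:1.0

# Other developers can then pull it directly:
docker pull myorg/todo-service:1.0
```

Pinning a version tag (rather than relying on latest) keeps every developer's docker-compose.yml pointing at the same known-good image.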

Dedicated Testing Image

One solution to the problem of seeding data and running integration tests is to create a dedicated Docker image for the task. This Docker image will probably make good use of the depends_on keyword in your docker-compose.yml file. The language you use to write your tests can be the one best-suited for the task at hand: testing. As long as you can easily populate your data models with seed data and write tests that hit the API Gateway with HTTP requests, this will work.
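In the docker-compose.yml, such a service might be declared along these lines (the folder, environment variable, and scripts here are placeholders for whatever test runner you choose):

```yaml
services:
  # ... existing services (nginx, web, postgres) ...

  # Hypothetical dedicated test-runner service
  tests:
    build: ./tests           # assumed folder holding the test image's Dockerfile
    depends_on:
      - web
      - postgres
    environment:
      GATEWAY_URL: http://web:9000
    # Seed the database, then run the integration suite
    command: sh -c "./seed.sh && ./run-tests.sh"
```

Because compose puts every service on the same network, the test runner can reach the gateway by service name, just as the gateway reaches the database.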

It is totally possible to put these tests in the same image and codebase as the API Gateway, but for many use cases, it may make more sense to keep them separate. First and most important, any changes to tests or seed data should not require an upgrade and deployment of the API Gateway itself. Second, the language of the API Gateway may not be ideal for writing tests or populating models with seed data. So having a dedicated image for the task should help isolate changes and provide the best tools for the job.

This arrangement may be more labor-intensive at times, but it does promote test-driven development and good test coverage, not only of the code and services but also of the environments themselves. If we think about each microservice as a somewhat disposable "cell" in the application's "body," then it makes a lot of sense to keep the integration tests and the seed data that go along with them separate from the microservices.

In a way, this provides a strong contract between the gateway and its microservices. If you update a service or replace it entirely, your integration tests will provide you with solid assurance that the changes are compatible.

Conclusion

This only scratches the surface of the complexities that one may encounter when networking, populating, and testing interconnected microservices. The approaches outlined in this article have hinted at some of the shortcomings that may arise in certain scenarios, so you may already have an idea of how other technologies such as Kubernetes may be useful to you. Hopefully, it has given you some ideas on how to tackle some of these problems for your own application environment.



Published at DZone with permission of

