Welcome to part 3 in this series. If you missed parts 1 and 2, go back and check out Part I of the series to learn how to install the environment we are working with, and take a look at Part II while you’re at it to learn about Docker Swarm’s experimental rescheduling feature.
Deploying a NodeJS and MongoDB Microservice With Docker Swarm
In this last part of the series, we’re going to show you how to deploy an application made up of three microservices: two stateless services and one stateful service.
First, a bit on stateless vs. stateful. Statefulness, and persistence in general, can be described as “the continuance of an effect after its cause is removed”: we care about what has happened and want to access it later. Stateless, on the other hand, means that the continuance of an effect no longer matters; a stateless service doesn’t need to worry about retaining that information.
In our example, two of the services do not need to retain state between the requests they handle. The third service is stateful: a database that stores the information that comes in so it can be accessed later. The lifecycle of the stateless containers doesn’t matter, but the lifecycle of the stateful service does, insofar as the data (the “continuance of effect”) must remain accessible.
The stateless services are:
Tweet Streamer: This NodeJS container streams tweets based on a hashtag filter from the Twitter API.
Tweets Web: This NodeJS container requests and serves the most recent filtered tweet from the database.
The stateful service is:
MongoDB: This database stores tweets that are streamed in from the Tweet Streamer service.
The architecture looks like this:
The nice part about microservices is the ability to mix and match your service languages because the interaction points are typically over something like HTTP(S). The stateless services, in this case, are written using pieces of the MEAN stack: NodeJS, Express, and MongoDB (minus the Angular).
The point here is that we could have written the web service and the streaming service and put them in one container, but then we couldn’t scale the services individually if needed, nor could we assign them to different developers or small teams to fix bugs and roll out patches independently. Even though this is a simple example of microservices with Docker Swarm and Flocker, the same reasoning applies to bigger and more complex use cases. To learn more, check out this Microservices article by Martin Fowler, which does a really good job of explaining these concepts and more.
To deploy these services, we can use Docker Compose against Docker Swarm. Part 1 goes through installing and configuring the environment we use for these examples, so if you’re following along, you can go back and read what is installed.
As a quick overview, we have a Swarm Cluster enabled for overlay networking using Flocker for volume management.
One of the nice parts of overlay networking in Docker is the ability to access containers by name over the entire Swarm cluster. To show you how much easier it is to manage your containers, we will show the example with and without overlay networking and explain the differences along the way.
Deploying without Overlay Networking
Here is our Docker Compose file. There are a few items to note.
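Since the original listing isn’t reproduced here, the following is a sketch of what such a Compose file might look like; the image names, port mappings, and Mongo version are illustrative assumptions:

```yaml
# Compose v1-style file: bridge (default) networking, with the Swarm
# scheduling constraint passed as an environment variable.
mongodb:
  image: mongo:3.2
  environment:
    - "constraint:node==ip-10-0-57-22"      # pin the database to one host
  volume_driver: flocker
  volumes:
    - mongo-data:/data/db                   # Flocker-managed external volume
tweet-streamer:
  image: example/tweet-streamer             # placeholder image name
  environment:
    - "MONGODB_SERVICE_SERVICE_HOST=10.0.57.22"   # hard-coded database IP
tweets-web:
  image: example/tweets-web                 # placeholder image name
  ports:
    - "80:3000"
  environment:
    - "MONGODB_SERVICE_SERVICE_HOST=10.0.57.22"
```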
Note: your IP address for your constraint and MONGODB_SERVICE_SERVICE_HOST will be different depending on your environment.
constraint:node==ip-10-0-57-22 - We use a constraint to place our Mongo database on a specific host; since we control where it runs, our stateless services know where to point.
MONGODB_SERVICE_SERVICE_HOST: "10.0.57.22" - We are pointing our services that need the database at a specific IP address for MongoDB which is the same as the constraint node for MongoDB.
We are using an external volume for MongoDB that’s managed by Flocker.
We are using bridge (default) networking for the application.
The volume that we need hasn’t been created, so let’s create it.
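With the Flocker plugin installed, a Flocker-backed volume can be created through the Docker volume API; the volume name and size below are assumptions:

```shell
# Create a Flocker-managed volume that Swarm can attach on any node
docker volume create --driver flocker --name mongo-data -o size=10G
```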
Now we can bring up our services with Docker Compose.
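Assuming DOCKER_HOST points at the Swarm manager, this is the usual Compose workflow:

```shell
docker-compose up -d     # schedule the three services onto the Swarm cluster
docker-compose ps        # verify that all three containers are running
```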
Now that our application is running, what if the host that MongoDB is running on fails? What if it needs to be upgraded? First, we can’t tell Swarm to place the database somewhere else without changing the constraint, and second, if we change the constraint, we also need to change the IP address that the stateless containers are configured to use.
Yes, we could use linking and have them be deployed to the same host and that way they can all move around together and act as a “group” or “pod” but both of these approaches still seem limiting and inflexible in many ways.
This is where Docker overlay networking shines.
Go ahead and delete the app.
Deploying with Overlay Networking
To deploy this application with overlay networking taken into account, we can change our Compose file to the following:
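A sketch of the overlay version, in Compose v2 format; the image names and port mappings remain placeholders:

```yaml
version: '2'
services:
  mongodatabase1:
    image: mongo:3.2
    container_name: "mongodatabase1"   # other services reach it by this name
    volumes:
      - mongo-data:/data/db
    networks:
      - blue-net
  tweet-streamer:
    image: example/tweet-streamer
    environment:
      - "MONGODB_SERVICE_SERVICE_HOST=mongodatabase1"   # a name, not an IP
    networks:
      - blue-net
  tweets-web:
    image: example/tweets-web
    ports:
      - "80:3000"
    environment:
      - "MONGODB_SERVICE_SERVICE_HOST=mongodatabase1"
    networks:
      - blue-net
volumes:
  mongo-data:
    external: true      # created ahead of time through the Flocker plugin
networks:
  blue-net:
    external: true      # created ahead of time as an overlay network
```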
A few things to note.
We gave our MongoDB service a name: container_name: "mongodatabase1".
We replaced the hard-coded IP addresses with the MongoDB name mongodatabase1.
We removed the constraint from MongoDB, so it can now be deployed on any node.
We added the overlay network blue-net.
Next, create the network and storage resources needed for this.
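Something like the following, assuming the volume from earlier no longer exists (the volume name and size are assumptions):

```shell
docker network create --driver overlay blue-net                      # cluster-wide network
docker volume create --driver flocker --name mongo-data -o size=10G  # Flocker volume
```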
Start up the services again with Docker Compose.
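With no constraint in the Compose file, Swarm is now free to place the containers wherever it likes:

```shell
docker-compose up -d
docker ps    # note which hosts Swarm scheduled each container on
```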
Notice that mongodatabase1 is now accessed by its name, no matter which Docker host the other services land on and regardless of the IP address it is assigned.
Again ask yourself, what if the MongoDB container moves? What if the web service gets rescheduled? The answer is that they can without any issues, and as long as they are part of the network blue-net they will be able to access each other by name. This gives us much more flexibility for our environment!
We can double check to see that our MongoDB container is using our Flocker volume.
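One way to check is to inspect the container’s mounts (the volume name mongo-data used in this walkthrough is an assumption):

```shell
# Look for "Driver": "flocker" and the mongo-data volume mounted at /data/db
docker inspect --format '{{ json .Mounts }}' mongodatabase1
```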
We can also log into our mongodatabase1 to view the records.
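For example, with docker exec (the database and collection names here are assumptions):

```shell
docker exec -it mongodatabase1 mongo
# then, inside the mongo shell:
#   use tweets
#   db.tweets.find().sort({ _id: -1 }).limit(1).pretty()
```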
For those wondering, yes, this configuration of the app does filter tweets with the hashtag #DonaldTrump and present them to the browser like so. :)
We’d love to hear your feedback!