Communicating With Couchbase via a Dockerfile Script and Docker
We have a look at what is involved in working with Docker and a Dockerfile script to talk to Couchbase on an AWS EC2 instance.
I've been working on a project with my team that involves creating a selection of microservices. These microservices populate a Couchbase Server bucket with data on an interval of our choosing and run on an AWS EC2 server. The problem is that everyone on my team has a preferred programming language, and installing every server-side language on that one server would quickly get expensive. Instead, we decided to use a Docker container for each microservice. Since we only run them on an interval, we can remove each container after every run, keeping our core server lightweight and uncluttered with software.
We're going to walk through some of the stuff we've been doing to make this solution possible.
Configuring Docker on an EC2 Instance
Let's assume that our Amazon EC2 host is running Ubuntu 14.04, since as of this writing that is the long-term support (LTS) release of Ubuntu. Log into the EC2 host and execute the following as a sudoer:
sudo apt-get update
sudo apt-get install docker.io
The above installs Docker directly on the host rather than inside a virtual machine, as you might experience in a Mac or Windows environment.
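Before moving on, it doesn't hurt to verify the installation. The following should pull and run Docker's hello-world test image; depending on the package version, the binary may be named docker or docker.io:
sudo docker run hello-world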
At this point, Docker can be used. Our intention is to drive it from a build script, but you can use it manually as well. If you haven't already installed Couchbase Server on the EC2 instance, you can do that now. Download the latest Debian package from the Couchbase downloads section and execute the following:
sudo dpkg -i download-file-name.deb
Alternatively, you can choose to spin up an Amazon EC2 instance that is already running Couchbase Server. More information on that subject can be found here.
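Either way, once Couchbase Server is installed, the cluster and a bucket still need to be provisioned, either through the web console on port 8091 or from the command line. Below is a rough sketch using couchbase-cli; the credentials, RAM quotas, and bucket name are placeholders, so adjust them for your own deployment:
# initialize the cluster (credentials and RAM quota are placeholders)
/opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1:8091 \
  --cluster-username=Administrator --cluster-password=password \
  --cluster-ramsize=1024
# create a bucket for the microservices to populate
/opt/couchbase/bin/couchbase-cli bucket-create -c 127.0.0.1:8091 \
  -u Administrator -p password \
  --bucket=example --bucket-type=couchbase --bucket-ramsize=512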
Designing a Build Script
The Dockerfile is our build script for creating Docker images. In it, we define what operating system to use, what packages it will download, and what files it will contain.
I'm more of a Node.js guy, so my microservices on this project are Node.js. Some of the others used Java and Python, although this article isn't specific to any language. The build file for one of my microservices looks like the following:
FROM ubuntu:trusty
MAINTAINER Nic Raboy
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
RUN apt-get update
RUN apt-get install -yq libcurl3 curl
RUN curl -sL https://deb.nodesource.com/setup_4.x | bash -
RUN apt-get update && apt-get install -yq nodejs
COPY package.json app.js config.json mock_data.xml /
COPY models /models
COPY routes /routes
RUN npm install
CMD ["node", "app.js"]
To break the above file down, let's look at all the pieces.
First, we establish that this image will be based on Ubuntu 14.04 (trusty) and that package installation should be non-interactive. This is a base image, so before we can run Node.js projects we need to install the dependencies. We update the operating system's package list and install curl, because Ubuntu doesn't ship with it by default and we'll need it to add the Node.js repository. Once curl is installed, we can add the Node.js 4.x repository and install Node.js itself.
This is where things become a little more specific to my project. My project has the following files and directories:
- models/
- routes/
- package.json
- app.js
- config.json
- mock_data.xml
The Dockerfile we're creating sits within the root of this project structure.
What needs to happen is that we copy everything into our image. This is done using the COPY commands. Once everything is copied over, we can tell the Node Package Manager (NPM) to install all the dependencies found in the package.json file. The CMD line is what will be run when we deploy our built image.
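For reference, the package.json might look something like the sketch below. The dependency list here is only an illustration, not the actual file from my project; the important part is that everything app.js needs, including the Couchbase Node.js SDK, is declared so that RUN npm install can resolve it inside the image:
{
  "name": "my_project",
  "version": "1.0.0",
  "main": "app.js",
  "dependencies": {
    "couchbase": "^2.1.0",
    "express": "^4.13.0",
    "xml2js": "^0.4.0"
  }
}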
Building and Running a Container
As of right now, the Dockerfile should exist in the same directory as your script or application files. In my case those files make up a Node.js project that will be run automatically, but the language itself is unimportant.
Before running our container we must first build an image from the blueprint in our Dockerfile. Execute the following via the Docker Command Line Interface:
docker build -t my_project .
The above will go through each step of the script and tag our image with the name my_project. Once built, at any time you can execute the following from the Docker Command Line Tool to run our image:
docker run --rm --add-host="localhost:10.0.2.2" my_project
Wait a second though. Why are we doing --add-host="localhost:10.0.2.2"? In this particular example, we are assuming Couchbase is running on our local machine at localhost:8091. In the Node.js application, let's assume we try to establish a connection to Couchbase Server via http://localhost:8091. By default, the container runs on its own subnet, so localhost inside the container won't be what you think it is; it refers to the container itself rather than the host. This is why we added a host entry with a mapping.
Under certain scenarios, you may also find yourself using --net=host instead of --add-host="localhost:10.0.2.2". More information can be found in the official Docker documentation.
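If you do go the --net=host route, the container shares the host's network stack, so localhost inside the container really is the EC2 host and the host mapping is no longer necessary:
docker run --rm --net=host my_project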
Because we copied our entire Node.js project and added CMD ["node", "app.js"], that command runs when the container starts. The app.js in theory will contain our logic to interface with Couchbase Server via the Couchbase Node.js SDK; the same can be done for any programming language and Couchbase SDK. After the command finishes, the --rm flag makes sure the container is removed.
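As a rough sketch of what such an app.js could look like with the 2.x Couchbase Node.js SDK, assuming a bucket named example and leaving out the routes and mock data of my real project:
// minimal sketch, assuming Couchbase Node.js SDK 2.x and a bucket named "example"
var couchbase = require("couchbase");

// connect to the host-level Couchbase Server; localhost works because of the host mapping
var cluster = new couchbase.Cluster("couchbase://localhost");
var bucket = cluster.openBucket("example");

// upsert a sample document, then exit so the container can be removed
bucket.upsert("sample::1", { created: new Date().toISOString() }, function(error, result) {
    if (error) {
        console.error(error);
        process.exit(1);
    }
    console.log("Document saved");
    process.exit(0);
});
Since the container cleans itself up, running the microservice on an interval can be as simple as a cron entry on the EC2 host, assuming the cron user can talk to the Docker daemon; the schedule below is just an illustration:
*/30 * * * * docker run --rm --add-host="localhost:10.0.2.2" my_project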
Conclusion
You just saw how to get Docker running on an Ubuntu server and how to run a project in a container via a Dockerfile build script. Although this approach won't be for everyone, it accommodates our need to run a project in any given programming language and have it store data in Couchbase Server running at the host level.