
An Introduction to CoreOS

CoreOS is not just a container management system but an entire (Linux-based) operating system designed to run containers.



If you’re reading this, then you have a rough idea of what containers are and why you want to use them. Docker has made it easy to experiment with containers, and is slowly making it easier to deploy and manage them in production environments. However, there are still a lot of gaps in what Docker offers (for free), and others have stepped up to fill them.

CoreOS is one such option; it’s not just a container management system but an entire (Linux-based) operating system designed to run containers.

“CoreOS isn’t just a container management system. It’s an entire Linux-based OS.” (via @ChrisChinch)

Components of CoreOS

CoreOS consists of three crucial components that perform specific functions.

Configuration and Service Discovery

etcd is a distributed key-value store that allows the nodes in a cluster to exchange configuration with each other and also be aware of what services are available across the cluster. Information can be read from etcd via a command-line utility or via an HTTP endpoint.
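For example, you can write and read a key with the etcdctl utility or over HTTP. This is a sketch that assumes a running CoreOS node with etcd's v2 API on its default port; the key name and value are illustrative:

```shell
# Write a key that other nodes can discover (illustrative key and value):
etcdctl set /services/mongo/host "10.0.0.101:27017"

# Read it back from any node in the cluster:
etcdctl get /services/mongo/host

# The same data is available over the HTTP endpoint (etcd v2 API):
curl http://127.0.0.1:2379/v2/keys/services/mongo/host
```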

Application Management and Scheduling

fleet is a cluster-wide init (the first process that runs all other processes) system that interacts with the systemd init system running on each individual node. This means you can initiate and manage individual processes on each node from a central point.
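In practice, that central point is the fleetctl utility. A quick sketch, assuming you are logged into a node of a running cluster:

```shell
# List the machines fleet knows about across the whole cluster:
fleetctl list-machines

# List the units fleet is currently managing, and which machine runs each:
fleetctl list-units
```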


Container Runtime

There’s no package manager in CoreOS; all applications run inside containers. These can run using Docker or CoreOS’s native container engine, rkt (Rocket).

“There’s no package manager in CoreOS; all applications run inside containers.” (via @ChrisChinch)

Getting Started

Since it’s an entire operating system, getting started with CoreOS means you’ll need to install the OS on a couple of nodes to test it properly. If you’re a Mac or Windows user, you’ll need to use Vagrant or try a preconfigured cluster on a hosting provider such as AWS or DigitalOcean.
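With Vagrant, a common starting point is the coreos-vagrant repository. A minimal sketch, assuming Vagrant and VirtualBox are already installed:

```shell
# Fetch the official Vagrant setup for CoreOS:
git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant
```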

Once you have an installation of CoreOS, you need to define the cluster in a configuration file that conforms to the ‘cloud-config’ format; it offers a lot of configuration options that you can send to the cluster. I’m experimenting with the Vagrant images, for example. Once installed, you’ll find a config.rb.sample file that you can rename (to config.rb) and edit to set the number of instances you would like in the cluster.


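For example, to run a three-node cluster (the variable name comes from the supplied config.rb.sample; the value is illustrative):

```ruby
# config.rb -- number of CoreOS instances Vagrant should start
$num_instances = 3
```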
You also need to uncomment and set the CoreOS update channel:


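For instance, again in config.rb (stable is one of CoreOS’s three release channels, alongside alpha and beta):

```ruby
# config.rb -- which CoreOS release channel to track
$update_channel = 'stable'
```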
During the build process, the Vagrant script reads the default cloud-config from the user-data file, so make a copy of the supplied example file with cp user-data.sample user-data.

Start your cluster with vagrant up and, when the cluster is ready, connect to a node with vagrant ssh core-01 -- -A.

Deploy an Application

The best way to see how CoreOS might be able to help you is to work through a more real-world example. This starts in familiar territory: Docker images, containers, and commands.

docker run --name mongo-db -d mongo

This command will start one instance of a MongoDB container called mongo-db. While this is standard Docker practice, it doesn’t help you embrace the full power and flexibility of CoreOS. What if the Mongo instance crashes, or an instance restarts? This is where fleet and its control of systemd come to the rescue.

“The best way to see how CoreOS could help you is to get started with a real-world example.”

To make use of systemd, you need to create a service representing the application you want to run. This is called a unit file. On one of the machines in your cluster, create mongo.service inside /etc/systemd/system.

[Unit]
Description=MongoService
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill mongo-db
ExecStartPre=-/usr/bin/docker rm mongo-db
ExecStartPre=/usr/bin/docker pull mongo
# Run in the foreground (no -d) so systemd can track the container process.
ExecStart=/usr/bin/docker run --name mongo-db mongo

[Install]
WantedBy=multi-user.target

Enable and then start the service:

sudo systemctl enable /etc/systemd/system/mongo.service
sudo systemctl start mongo.service

And now you can use the time-honored docker ps to see your running containers. This still only applies to a single node in the cluster; to start the service on the cluster without worrying about where exactly it runs, you need to use fleet.

fleetctl start mongo.service

And check that the container started:

fleetctl list-units

Read this document for more advanced ideas for unit files.

Spreading Availability

If you want to ensure that instances of your service run on separate nodes, rename the service file to mongo@.service and add the following lines to the bottom of the file:

...
[X-Fleet]
Conflicts=mongo@*.service

And now you can start multiple instances of the service, with one running on each machine:

fleetctl start mongo@1
fleetctl start mongo@2

And again use fleetctl list-units to check if your instances are running and where.

All machines in the cluster are in regular contact with the node elected as the cluster leader. If one node fails, the systemd units that were running on it are marked for rescheduling as soon as a suitable replacement is available. You can simulate this situation by logging into one node, stopping the fleet process with sudo systemctl stop fleet, waiting a couple of minutes, and restarting it with sudo systemctl start fleet. You can read the fleet logs with sudo journalctl -u fleet to get more insight into what happens.

You can now add the new service to the units: list (under the coreos: key) of your cloud-config file to make sure it starts when systemd starts on a machine.

units:
  - name: mongo.service
    command: start

fleet offers more configuration options for advanced setups, such as running a unit across an entire cluster or scheduling units based on the capacity or location of machines.
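For example, the [X-Fleet] section of a unit file can schedule a unit onto every machine in the cluster, or restrict it to machines with matching fleet metadata. A sketch; the metadata values are illustrative:

```
[X-Fleet]
# Run one instance of this unit on every machine in the cluster:
Global=true
# ...or restrict scheduling to machines whose metadata matches (illustrative):
# MachineMetadata=region=europe
```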

Beyond the Basics

The components outlined above are the basic components of CoreOS, but there are a couple of other tools useful for container-based applications that work very well with CoreOS.


Kubernetes

Kubernetes, the popular container management system created by Google, runs even better on CoreOS and offers a higher (and more visual) level of container management than fleet. Read the CoreOS installation guide for more details.


rkt

Discussing rkt is a full article in itself, but in short, rkt is a Linux-native container runtime. This means it won’t work on macOS or Windows without using a virtual machine. It’s designed to fit more neatly into the Linux ecosystem, leveraging system-level init systems instead of using its own custom methods (as Docker does). It uses the appc standards, so in theory, most of your Docker images should work with rkt too.
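Thanks to that Docker image compatibility, you can try rkt with the same Mongo image used above. A sketch, assuming rkt is installed (for example, on a CoreOS node); the --insecure-options flag skips image signature verification, which Docker Hub images do not provide:

```shell
# Fetch and run a Docker image with rkt:
sudo rkt run --insecure-options=image docker://mongo
```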

Getting to the Core

If you’re experienced with Linux and the concepts of init systems, then you’ll likely find CoreOS a compelling tool for use with your Docker images. The project (and team) is growing, recently opening an office in Europe thanks to new funding sources. I’d love to know how you feel about working with it.



Published at DZone with permission of Chris Ward, DZone MVB. See the original article here.
