An Introduction to CoreOS

CoreOS is not just a container management system but an entire (Linux-based) operating system designed to run containers.

By Chris Ward · Sep. 26, 16 · Opinion

If you’re reading this, then you have a rough idea of what containers are and why you want to use them. Docker has made it easy to experiment with containers and is slowly making it easier to deploy and manage them in production environments. However, there are still a lot of gaps in what Docker offers (for free), and others have stepped up to fill them.

CoreOS is one such option; it’s not just a container management system but an entire (Linux-based) operating system designed to run containers.

Components of CoreOS

CoreOS consists of three crucial components that perform specific functions.

Configuration and Service Discovery

etcd is a distributed key-value store that allows the nodes in a cluster to share configuration with each other and to discover which services are available across the cluster. Information can be read from etcd via a command line utility or via an HTTP endpoint.
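
For example, writing and reading a value looks roughly like this (a minimal sketch; the key /services/web/host, the value, and the default client port 2379 are my own assumptions rather than anything from the article):

# hypothetical key and value
etcdctl set /services/web/host "10.0.0.5"
etcdctl get /services/web/host

# the same data is available over the HTTP API
curl http://127.0.0.1:2379/v2/keys/services/web/host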

Application Management and Scheduling

fleet is a cluster-wide init (the first process that runs all other processes) system that interacts with the systemd init system running on each individual node. This means you can initiate and manage individual processes on each node from a central point.
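
For instance, from any single node you can ask fleet about the whole cluster using fleetctl’s standard listing commands (a quick sketch):

# show every machine fleet knows about
fleetctl list-machines

# show which units are running, and where
fleetctl list-units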

Applications

There’s no package manager in CoreOS; all applications run inside containers. These can run under Docker or CoreOS’s native container engine, rkt (Rocket).

Getting Started

Since it’s an entire operating system, getting started with CoreOS means installing the OS on a couple of nodes to test it properly. If you’re a Mac or Windows user, you’ll need to use Vagrant or try a preconfigured cluster on a hosting provider such as AWS or DigitalOcean.
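
If you go the Vagrant route, the quickest starting point is the coreos-vagrant repository (assumed here as the source of the Vagrantfile and sample configuration used below):

git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant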

Once you have an installation of CoreOS, you need to define the cluster in a configuration file that conforms to the ‘cloud-config’ format, which offers a lot of configuration options that you can send to the cluster. I’m experimenting with the Vagrant images, for example; in that checkout you’ll find a config.rb.sample file that you can rename to config.rb and change to match the number of instances you would like in the cluster. For example:

$num_instances=3

You also need to un-comment and change the CoreOS update channel:

$update_channel='stable'

During the build process, the Vagrant script will write the default cloud-config to the user-data file, so make a copy of the supplied example file with cp user-data.sample user-data.
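
As a rough sketch (not the complete sample file), a cloud-config typically contains an etcd discovery URL and the units to start on boot; the <token> below is a placeholder you generate yourself at https://discovery.etcd.io/new:

#cloud-config
coreos:
  etcd2:
    # generate a fresh discovery token per cluster
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$public_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start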

Start your cluster with vagrant up and, when the cluster is ready, connect to a node with vagrant ssh core-01 -- -A.

Deploy an Application

The best way to see how CoreOS might be able to help you is to get started with a more real-world example. This is familiar territory with Docker images, containers, and commands.

docker run --name mongo-db -d mongo

This command will start one instance of a MongoDB container called mongo-db. While this is standard Docker practice, it doesn’t help you embrace the full power and flexibility of CoreOS. What if the Mongo instance crashes, or the node it runs on restarts? This is where fleet and its control of systemd come to the rescue.

To make use of systemd, you need to create a service representing the application you want to run. This is called a unit file. On one of the machines in your cluster, create mongo.service inside /etc/systemd/system.

[Unit]
Description=MongoService
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill mongo-db
ExecStartPre=-/usr/bin/docker rm mongo-db
ExecStartPre=/usr/bin/docker pull mongo
ExecStart=/usr/bin/docker run --name mongo-db -d mongo

[Install]
WantedBy=multi-user.target

Enable and then start the service:

sudo systemctl enable /etc/systemd/system/mongo.service
sudo systemctl start mongo.service

And now you can use the time-honored docker ps to see your running containers. This still applies only to a single node in the cluster; to start the service somewhere on the cluster without worrying about exactly where it runs, you need to use fleet.

fleetctl start mongo.service

And check the container started:

fleetctl list-units

Read this document for more advanced ideas for unit files.

Spreading Availability

If you want to ensure that instances of your service run on separate nodes, rename the service file to mongo@.service and add the following lines to the bottom of the file:

...
[X-Fleet]
Conflicts=mongo@*.service
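
Putting that together, the complete mongo@.service template might look like the following sketch; using the systemd %i instance specifier in the container name is my own addition, so that each instance gets a unique container, rather than something from the original file:

[Unit]
Description=MongoService
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
# %i expands to the instance name (the part after the @ when you start the unit)
ExecStartPre=-/usr/bin/docker kill mongo-db-%i
ExecStartPre=-/usr/bin/docker rm mongo-db-%i
ExecStartPre=/usr/bin/docker pull mongo
ExecStart=/usr/bin/docker run --name mongo-db-%i -d mongo

[X-Fleet]
Conflicts=mongo@*.service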

And now you can start multiple instances of the service, with one running on each machine:

fleetctl start mongo@1
fleetctl start mongo@2

And again use fleetctl list-units to check if your instances are running and where.

All machines in the cluster are in regular contact with the node elected as the cluster leader. If one node fails, system units that were running on it will be marked for rescheduling as soon as a suitable replacement is available. You can simulate this situation by logging into one node, stopping the fleet process with sudo systemctl stop fleet, waiting a couple of minutes, and restarting it with sudo systemctl start fleet. You can read the fleet logs with sudo journalctl -u fleet to get more insight into what happens.
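
In command form, that simulation looks like this:

# on the node you want to "fail"
sudo systemctl stop fleet

# from another node, watch the unit being rescheduled (allow a couple of minutes)
fleetctl list-units

# bring the node back and inspect what fleet did
sudo systemctl start fleet
sudo journalctl -u fleet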

You can now add the new service to the bottom of your cloud-config file to make sure it starts when systemd starts on a machine.

units:
  - name: mongo.service
    command: start

Fleet offers more configuration options for advanced setups, such as running a unit across an entire cluster or scheduling units based upon the capacity or location of machines.
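
As a sketch, those options also live in the [X-Fleet] section of a unit. Global and MachineMetadata are fleet’s own scheduling options, but the metadata values here are invented for illustration:

[X-Fleet]
# run this unit on every machine in the cluster...
Global=true
# ...or, instead, restrict it to machines whose fleet agent was started
# with matching --metadata values (example values only)
#MachineMetadata=region=europe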

Beyond the Basics

The components outlined above are the basics of CoreOS, but there are a couple of other tools useful for container-based applications that work very well with CoreOS.

Kubernetes

Kubernetes, the popular container management system created by Google, runs even better on CoreOS and offers a higher (and more visual) level of container management than fleet. Read the CoreOS installation guide for more details.

R(oc)k(e)t

Discussing rkt deserves a full article in itself, but in short, rkt is a Linux-native container runtime, which means it won’t work on macOS or Windows without a virtual machine. It’s designed to fit more neatly into the Linux ecosystem, leveraging system-level init systems instead of using its own custom methods (as Docker does). It uses the appc standards, so in theory, most of your Docker images should work with rkt too.
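
As a quick, hedged example (the exact flags vary between rkt releases, so treat this as a sketch rather than gospel), running an existing Docker image with rkt looks roughly like:

# Docker images are not signed in the appc sense, so skip image verification
sudo rkt --insecure-options=image run docker://nginx

# list the pods rkt has started
sudo rkt list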

Getting to the Core

If you’re experienced with Linux and the concepts of init systems, then you’ll likely find CoreOS a compelling tool for use with your Docker images. The project (and team) is growing, recently opening an office in Europe thanks to new funding sources. I’d love to know how you feel about working with it.

Published at DZone with permission of Chris Ward, DZone MVB.

Opinions expressed by DZone contributors are their own.
