Cross-Platform Hybrid Cloud with Docker

Find out how to build a cross-platform hybrid cloud on top of Docker by using a virtualized Docker engine, a bare-metal cluster, and a scalable architecture.

By Chanwit Kaewkasi · May. 11, 15 · Tutorial · 35.27K Views


TL;DR

What we're talking about in this article is a cross-platform hybrid cloud built atop Docker. With our modifications to the Docker tool set, here's the list of surprises:

  • We successfully mixed our 50 Linux/ARM nodes and another 50 public Linux/x86-64 DigitalOcean nodes into a single virtual Docker Engine using Docker Swarm, with modifications.
  • We ran a bare-metal Linux/ARM (not Linux/x86-64) cluster, and we managed to provision the cluster using Docker Machine with our Machine driver.
  • Regardless of the hardware architecture behind it (a mix of ARM and x86-64 nodes), we successfully used Docker Compose to scale Nginx to 100 containers over this 100-node cluster.

Hybrid Cloud and Docker

With the Docker tool set, setting up a hybrid cloud has already become easy. Docker Machine helps us create and manage container-centric machines. It comes with a large set of drivers, covering everything from local VirtualBox to IaaS providers such as Amazon EC2 or DigitalOcean. Docker Machine simply makes these machine instances Docker-ready.

Docker Swarm helps us form a cluster of Docker instances provisioned by Docker Machine. It doesn't matter where the Docker Engines are running: if their IP addresses are reachable, Swarm can form them into a single virtual Docker host and then let any Docker client control the whole cluster. With the stock Machine and Swarm, we already have an easy way to build a Docker-based hybrid cloud.
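The workflow above can be sketched concretely. The helper below is our own illustration (hedged: it assumes the 2015-era machine and swarm CLIs used in this article, and the token:// discovery backend, which is only one of several discovery options); it builds the --swarm flags that every machine create call in a cluster shares:

```shell
# Build the Swarm-related flags shared by every node in the cluster.
# Pass "master" as the second argument for the node that runs the Swarm master.
swarm_flags() {
  flags="--swarm --swarm-discovery token://$1"
  [ "${2:-}" = "master" ] && flags="$flags --swarm-master"
  echo "$flags"
}

# Usage (commented out: these need real infrastructure and a discovery token):
# TOKEN=$(docker run swarm create)
# machine create -d virtualbox $(swarm_flags "$TOKEN" master) master
# machine create -d digitalocean ... $(swarm_flags "$TOKEN") ocean-1

swarm_flags abc123 master   # -> --swarm --swarm-discovery token://abc123 --swarm-master
```

Every node joins the same discovery token, so the only per-node differences are the driver and the master flag.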

Despite running anywhere, Docker currently has a limitation: it officially supports only the Linux/x86-64 (aka amd64) architecture. This is understandable, as Linux/x86-64 is the largest and most prominent platform. As previously mentioned, we can run a Linux/x86-64 hybrid cloud with Docker.

But what if we'd like to bring more than one hardware architecture together behind a single endpoint?

ARM and Low-Power Clouds

Fact: the ARM architecture has been emerging into the server-class market.

ARM-based servers and clouds have been gradually arriving. AppliedMicro delivers X-Gene, the world's first ARMv8 64-bit system-on-chip server. HP Moonshot, based on the X-Gene SoC, has already brought this class of server to data centers. A Manchester-based firm, DataCentred, built an OpenStack cloud platform atop Moonshot servers. In addition to the 64-bit ARM class, Scaleway, a French startup, has provided a 32-bit ARM-based IaaS since last year.

Our Story

In April last year, we wrote about the Aiyara cluster, a Spark/Hadoop cluster made with ARM boards. Its technical description was kindly published in DZone's Big Data guide.

Since then, we have encountered a new problem.

Although we've been using Ansible to manage our Aiyara cluster with good success, managing software applications for a cluster is hard.

We concluded that we need a virtualization layer, even at this small scale. However, the hypervisor approach is not an option for us, because we use ARM processors for the cluster.

Is there a better way?

Fortunately, while we were working on Aiyara, Docker had been shining. We took a look at Docker and put effort into getting it to run on our ARM-based cluster. At least, it was up and running. Next, how should we manage Docker in clustering mode?

In December last year, Docker announced a new set of tools to support its ecosystem, namely Swarm, Machine, and Compose. We planned to adopt Swarm for managing cluster-wide virtualization, so that was our starting point to contribute to Swarm. Later, in February, we found that Docker Machine would be really useful to help provision the nodes of the cluster. But Machine had no driver to support provisioning bare-metal hardware yet, so we decided to implement one: the Machine Aiyara driver.

The result is fantastic. It enables us to control both our cluster and public clouds at the same time, using a simple workflow similar to what is suggested in Docker Orchestration. Here's what it looks like:

$ machine ls
NAME             ACTIVE   DRIVER   STATE   URL                       SWARM
rack-1-node-11            aiyara           tcp://192.168.1.11:2376
rack-1-node-12            aiyara           tcp://192.168.1.12:2376
rack-1-node-13   *        aiyara           tcp://192.168.1.13:2376
rack-1-node-14            aiyara           tcp://192.168.1.14:2376
rack-1-node-15            aiyara           tcp://192.168.1.15:2376

The only remaining, but biggest, problem was that Docker images are only available in platform-specific binary formats. If you build an image for the Linux/x86-64 (amd64) architecture, it will not natively run on a Linux/ARM machine. Although we could emulate it via QEMU, that's not a good enough choice.

We were lucky enough to come up with a good solution: our versions of Swarm and Machine work great together to make the cluster run both x86-64 and ARM images transparently.
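The heart of that solution can be pictured as a simple tag-rewriting rule. The sketch below is an assumption-level simplification of what our modified Swarm does internally (the aiyara/<image>.arm naming convention is taken from the ps -a output shown later; the function name is ours):

```shell
# Toy model of the image-name rewrite in the modified Swarm: ARM nodes get the
# "aiyara/<image>.arm" variant, while x86-64 nodes use the image name as-is.
image_for_arch() {
  image="$1"; arch="$2"
  case "$arch" in
    arm) echo "aiyara/${image}.arm" ;;
    *)   echo "$image" ;;
  esac
}

image_for_arch debian:latest arm      # -> aiyara/debian:latest.arm
image_for_arch debian:latest x86_64   # -> debian:latest
```

The client keeps asking for debian; the scheduler substitutes the platform-specific image based on the target node's architecture label.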

Starting Small: A 2-Node Hybrid Cloud

Here's the listing of our smallest hybrid cloud. We have one node running on an ARM board, and another is a DigitalOcean 512MB machine. An extra local master node acts as the Swarm master.

$ machine ls
NAME             ACTIVE   DRIVER         STATE     URL                         SWARM
master           *        none                     tcp://127.0.0.1:3376
ocean-1                   digitalocean   running   tcp://128.199.108.67:2376
rack-1-node-4             aiyara                   tcp://192.168.1.4:2376

Docker Machine makes orchestration easy. We can just use machine config to provide all the configuration needed for the Docker client. See Docker Orchestration for more information.

$ docker $(machine config master --swarm) info
Containers: 2
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 2
 ocean-1: 128.199.108.67:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 514.5 MiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.13.0-43-generic, operatingsystem=Ubuntu 14.04.1 LTS, provider=digitalocean, storagedriver=aufs
 rack-1-node-4: 192.168.1.4:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 2.069 GiB
  └ Labels: architecture=arm, executiondriver=native-0.2, kernelversion=3.19.4, operatingsystem=Debian GNU/Linux 7 (wheezy), provider=aiyara, storagedriver=aufs

The docker info command above ran through the Swarm master. It shows that we have a cluster of 2 nodes: the first one was created with Docker Machine's DigitalOcean driver, and the second one is our Aiyara node. Both use AUFS as their storage driver.

Next, we tried to pull the Debian image to each node before running it.

$ docker $(machine config master --swarm) pull debian
rack-1-node-4: Pulling debian:latest...
ocean-1: Pulling debian:latest...
ocean-1: Pulling debian:latest... : downloaded
rack-1-node-4: Pulling debian:latest... : downloaded

Well, ocean-1 clearly pulls the image faster. :)

Next, we tested running a simple command, uname -a, twice through Swarm, using the debian image. Swarm would choose one node for the first run, then another node for the second, because its default scheduling strategy is spread, an algorithm that places containers as spread out as possible.

$ docker $(machine config master --swarm) run debian uname -a
Linux e75d5877493e 3.19.4 #2 SMP Mon Apr 20 02:39:39 ICT 2015 armv7l GNU/Linux

$ docker $(machine config master --swarm) run debian uname -a
Linux 6d6b9d406f88 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 GNU/Linux

Let's see what's behind this. We ran ps -a to show all containers inside the cluster. You may find it interesting that our ARM node actually ran a container with a different image from the one used by the DigitalOcean node. The aiyara/debian:latest.arm image is our Debian ARM image. The Aiyara version of Swarm is clever enough to know that we're going to place a new container on an ARM machine, so it chose the correct platform-specific image for us.

$ docker $(machine config master --swarm) ps -a
CONTAINER ID        IMAGE                      COMMAND             CREATED             STATUS                      PORTS               NAMES
6d6b9d406f88        debian:latest              "uname -a"          14 minutes ago      Exited (0) 13 minutes ago                       ocean-1/focused_leakey
e75d5877493e        aiyara/debian:latest.arm   "uname -a"          14 minutes ago      Exited (0) 14 minutes ago                       rack-1-node-4/modest_ritchie
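The spread placement demonstrated above can be modeled in a few lines of shell: among the candidate nodes, pick the one running the fewest containers. This is a toy illustration of the idea, not Swarm's actual scheduler:

```shell
# Pick the least-loaded node from stdin lines of "<node> <container-count>".
spread_pick() {
  sort -k2 -n | head -n1 | cut -d' ' -f1
}

printf 'ocean-1 1\nrack-1-node-4 0\n' | spread_pick   # -> rack-1-node-4
```

With ocean-1 already running one container, the next container lands on rack-1-node-4, matching the alternating placement seen in the two uname -a runs.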

A 100-Node Cluster

Well, a 2-node cluster is too simple. Here's a 100-node cloud in action.

As we already had 50 ARM nodes in our cluster, we then created another 50 nodes on DigitalOcean. Here's the command for creating a DigitalOcean node:

$ machine create -d digitalocean --digitalocean-access-token=token --digitalocean-region=sgp1 ocean-1

We ran the above command for nodes ocean-1 to ocean-50. We used mostly the default parameters, so we got 512MB VMs from the $5/mo plan.
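Running that command 50 times by hand gets tedious, so a small loop helps. The helper below only prints the machine create commands (pipe its output through sh after exporting a real DO_TOKEN; the function name and the DO_TOKEN variable are ours, not part of the Docker tool set):

```shell
# Print the provisioning commands for ocean-1 … ocean-N (default 50).
provision_cmds() {
  n="${1:-50}"
  i=1
  while [ "$i" -le "$n" ]; do
    echo "machine create -d digitalocean" \
         "--digitalocean-access-token=\$DO_TOKEN" \
         "--digitalocean-region=sgp1 ocean-$i"
    i=$((i + 1))
  done
}

provision_cmds 2
# machine create -d digitalocean --digitalocean-access-token=$DO_TOKEN --digitalocean-region=sgp1 ocean-1
# machine create -d digitalocean --digitalocean-access-token=$DO_TOKEN --digitalocean-region=sgp1 ocean-2
```

Printing rather than executing makes it easy to review the batch before spending droplet quota.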

It would be harsh to scale the number of nodes manually, but Docker Compose can handle it for us easily. We created a directory named aiyara_cloud, then placed docker-compose.yml, a description file for our deployment unit, there. We would start a web Nginx server on each node, and we'd like to bind the host's port 80 to the container's exposed port. Here's our YAML description:

$ cat docker-compose.yml
web:
  image: nginx
  ports:
    - "80:80"

We started the first container with the following command:

$ docker-compose up -d

To scale the web server to 100 nodes, we used:

$ docker-compose scale web=100

Then you will see the other 99 containers being created from each platform-specific image and placed correctly onto their hardware, one by one. Here's the list of all running containers over the hybrid ARM/x86-64 Docker-based cloud.

We thank DigitalOcean for letting us run 50 droplets at a time, so that we could create a large 100-node hybrid cloud mixing the Aiyara cluster and DigitalOcean together.

Conclusion

Today, we have successfully built the first known cross-platform hybrid cloud based on Docker, with only a little modification to Machine and Swarm. These changes allow us to use Docker transparently over heterogeneous hardware.

We envision that this kind of hybrid cloud is important. This architecture will help us:

  • Balance performance and power consumption of the cloud
  • Gradually migrate images to your preferred platforms
  • Mix and match your hybrid cloud to utilize the available resources

Microsoft is going to support Docker in the next version of Windows Server, so a kind of windows/amd64 Docker image will become available after that. Also, linux/aarch64 is already coming to the market. Fortunately, our cross-platform hybrid cloud architecture is ready for them today. If you're already using Docker, this cross-platform hybrid cloud architecture will minimize the changes to your DevOps workflow.

This concept of a cross-platform hybrid cloud would never have happened without the rock-solid foundation of the Docker ecosystem. We thank the Docker, Swarm, Machine, and Compose teams for all their great work! Our team just made a small contribution on top of it.
