
Docker Orchestration... What It Means and Why You Need It

By Sharone Zitzman · Dec. 02, 14


[This article was written by Yaron Parasol.]
Docker containers were created to enable fast, reliable deployment of application components or tiers: a container holds a self-contained, ready-to-deploy part of an application, with the middleware and the business logic needed to run it successfully. An example would be a Spring application within a Tomcat container. By design, a Docker container is a purposely isolated, self-contained part of the application, typically one tier or even one node within a tier.

However, an application is typically multi-tier in its architecture, which means you have tiers with dependencies between them, where the nature of those dependencies can be anything from network connections and remote API invocations to messages exchanged between tiers. An app is therefore a set of different containers with specific configurations, and this is why you need a way to glue the pieces of your app together.

While Docker has a basic solution for connecting containers using a Docker bridge, it is not always the preferred one, especially when deploying containers across different hosts, where you need to take care of real network settings.


So, what role does the orchestrator play?

The orchestrator will take care of two things:

  1. The timing of container creation, as containers need to be created in order of their dependencies, and

  2. Container configuration that allows containers to communicate with one another; for that, the orchestrator needs to pass runtime properties between containers.

As a side note: Docker needs a special tweak here, because you typically don't touch config files inside a container; you keep the container intact. So there is an interesting workaround for cases where this is required.
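The first responsibility, creation order, amounts to a topological sort of the container dependency graph. Here is a minimal sketch in Python (the container names and dependencies are hypothetical, not part of Cloudify's API):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical app: the app server container depends on the database
# container, and a load balancer depends on the app server.
dependencies = {
    "appserver": {"mongod"},        # appserver needs mongod's IP/port first
    "loadbalancer": {"appserver"},
    "mongod": set(),
}

# static_order() yields each node only after all of its dependencies,
# which is exactly the order an orchestrator must create containers in.
creation_order = list(TopologicalSorter(dependencies).static_order())
print(creation_order)  # ['mongod', 'appserver', 'loadbalancer']
```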

One method to do this is by using a YAML-based orchestration plan to orchestrate the deployment of apps and post-deployment automation processes, which is the approach Cloudify employs. Based on TOSCA (the Topology and Orchestration Specification for Cloud Applications), this orchestration plan describes the components, their lifecycles, and the relationships between components, which matters especially for complex topologies. This includes what's connected to what, what's hosted on what, and other such considerations. TOSCA is able to describe the infrastructure, as well as the middleware tier and the app layers on top of them. Cloudify takes this TOSCA orchestration plan (dubbed a blueprint in Cloudify speak) and materializes it using workflows that traverse the graph of components and issue commands to agents. The agents then create the app components and glue them together.

The agents use extensions called plugins that are adaptors between the Cloudify configuration and the various infrastructure as a service (IaaS) and automation tools’ APIs.

In our case, we created a plugin to interface with the Docker API.

Introducing the Docker Cloudify Plugin

The Cloudify-Docker plugin is quite straightforward: it installs the Docker API endpoint/server on the machine and then uses the Docker-Py binding to create, configure, and remove containers. The TOSCA lifecycle events are:

  • Create - installation of the app components

  • Configure - configuration of the component

  • Start - startup/running the component

  • Stop and delete - shutdown and removal of the component

We started by using create to create the container and start to run the application; we did not implement configure at the beginning. But then we realized that for containers with dependencies, we need runtime properties, such as the IP and port of the counterpart container, in order to create the container. For example, when we create an app server container, we need the port and IP of the database container. So we pushed the creation of the container to the configure event, and used a TOSCA relationship pre-configure hook to get the dependent container's info at runtime.
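The shape of that lifecycle mapping can be sketched in Python with the Docker client stubbed out. The class and method names here are illustrative, not the plugin's actual code; they only show how configure creates the container and start runs it:

```python
class DockerClientStub:
    """Stands in for the real Docker-Py client; records calls instead."""
    def __init__(self):
        self.calls = []

    def create_container(self, image, command):
        self.calls.append(("create", image, command))
        return {"Id": "container-1"}

    def start(self, container, port_bindings=None):
        self.calls.append(("start", container["Id"], port_bindings))


class DockerPluginSketch:
    """Illustrative mapping of TOSCA lifecycle events onto Docker calls."""
    def __init__(self, client):
        self.client = client
        self.container = None

    def configure(self, container_config):
        # Creation happens in 'configure' so runtime properties of
        # dependent containers can be injected before the container exists.
        self.container = self.client.create_container(
            image=container_config["image"],
            command=container_config["command"],
        )

    def start(self, container_start):
        self.client.start(self.container,
                          port_bindings=container_start["port_bindings"])


plugin = DockerPluginSketch(DockerClientStub())
plugin.configure({"image": "dockerfile/mongodb",
                  "command": "mongod --rest --httpinterface --smallfiles"})
plugin.start({"port_bindings": {27017: 27017, 28017: 28017}})
```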

The way to expose the runtime info to the container that depends on it is by setting it as environment variables.

interfaces:
  cloudify.interfaces.lifecycle:
    configure:
      implementation: docker.docker_plugin.tasks.configure
      inputs:
        container_config:
          command: mongod --rest --httpinterface --smallfiles
          image: dockerfile/mongodb
    start:
      implementation: docker.docker_plugin.tasks.run
      inputs:
        container_start:
          port_bindings:
            27017: 27017
            28017: 28017
Nodecellar Example 

I’d like to explain how this works by using our Nodecellar app as an example. The Nodecellar app is composed of two hosts that, in this case, Cloudify didn’t create but just SSHed into and then installed agents on. On one we have the MongoDB container, with a mongod process. On the other we have the Nodecellar container with Node.js and the Nodecellar app within it. The Nodecellar container needs a connection to the MongoDB container to run the app queries when the app starts.
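Inside the Nodecellar container, the app would then read the injected environment variables to reach MongoDB. A small sketch, assuming hypothetical variable names (set here only so the demo is self-contained):

```python
import os

# In a real deployment the orchestrator would have set these; we seed
# defaults here only so the sketch runs standalone.
os.environ.setdefault("MONGO_IP", "10.0.0.5")
os.environ.setdefault("MONGO_PORT", "27017")

# Build the connection string from the injected coordinates.
mongo_url = "mongodb://{}:{}/nodecellar".format(
    os.environ["MONGO_IP"], os.environ["MONGO_PORT"])
print(mongo_url)
```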

Ultimately, an orchestrator should not be limited to software deployment. The whole idea behind Docker is to allow for agility, so we’d also like to use Docker in auto-scaling, auto-healing, and continuous delivery scenarios. In our next post we’ll show exactly that: how Cloudify can be used with Docker for post-deployment scenarios.


Published at DZone with permission of Sharone Zitzman, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
