
Using Docker to Build a Cross-Platform CLI


Docker has been a lifesaver for Eclipse Che. We originally made the choice to use Docker as the engine for workspaces starting in 2013, but its power has extended beyond offering a convenient runtime for developers.

We’ve applied Docker to our installation and product using a variety of container patterns. The one we get the most benefit from is Docker’s cross-platform portability. Earlier this year, we rewrote our Che launch and management capabilities using Docker. Docker provided a common syntax that ran (mostly) the same on every operating system Che needed to run on, and we wondered whether giving users a universal syntax could solve the historical complexity problems we had with installing and operating Che.

The History

Eclipse Che has gone through a few different incarnations. It’s currently in its fourth generation and we are working diligently on the fifth generation due this Fall.

At its core, Che is a server that runs in Tomcat or an equivalent engine. Tomcat is well-tested, and there are thousands of developers around the world comfortable with the intricacies of starting, stopping, and configuring it. As such, our first release of Che provided utilities that were Tomcat wrappers. Developers were still obligated to set certain OS-specific or Tomcat-specific environment properties. We figured this would be well understood, given the breadth of material available on Stack Overflow about configuring Tomcat.

While Tomcat is fast to boot and easy to configure with different logging levels for debugging, we ran into a number of community challenges:

  1. Tomcat does not install using the same syntax on different operating systems. Providing a consistent installation sequence for new users required customized documentation and onboarding flows tied to each operating system.
  2. Tomcat configuration is different on different operating systems. While Tomcat does provide .sh and .bat files for the various flavors of operating systems, many of the configuration properties are directories that have to be formatted differently on different operating systems.
  3. Tomcat’s assembly does not provide a good user experience. Showing users who have just installed Che the base folders of a Tomcat assembly is misleading and confusing. Users of a new product benefit from having the installation directory reflect the nature of the product they are installing.
  4. Tomcat requires Java to be installed. In spite of Java’s 20-year history, corporate customers have challenges in getting a stable version of Java installed and then configured with a proper JAVA_HOME. It is not pleasant nor is it easy to communicate to a non-Java installer the nuances of getting Java installed and configured properly.
  5. On certain flavors of Linux, especially in some cloud providers, Tomcat has to be configured differently based upon the user and group that Tomcat will be run under.

These issues, all minor individually, add up to force new users to pay attention to details that distract from the product’s core value proposition. We recognized that we needed to minimize the time between user awareness and product usage.

Che as a Docker Container

In 2014, we advanced further by packaging Che as a Docker container. The Docker container would package Tomcat and Java, configure necessary SSH keys, add the appropriate users, set the internal configuration, and provide utilities to start and stop Tomcat in accordance with our best practices.

FROM alpine:3.4

ENV JAVA_HOME=/usr/lib/jvm/default-jvm/jre \
    PATH=${PATH}:${JAVA_HOME}/bin

RUN echo "http://dl-4.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
    apk upgrade --update && \
    apk add --update ca-certificates curl openssl openjdk8 sudo bash && \
    addgroup -S user -g 1000 && \
    adduser -S user -h /home/user -s /bin/bash -G root -u 1000 -D && \
    adduser user user && \
    adduser user users && \
    echo "%root ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
    rm -rf /tmp/* /var/cache/apk/*

EXPOSE 8000 8080
USER user
ADD assembly/assembly-main/target/eclipse-che-*/eclipse-che-* /home/user/che/
ENV CHE_HOME /home/user/che
ENTRYPOINT [ "/home/user/che/bin/che.sh", "-c" ]
CMD [ "run" ]

At the end of the Dockerfile, we add a Che assembly, which includes an embedded Tomcat, into the image, and then the entrypoint executes the Che startup script, which starts Tomcat.
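Producing the image from that Dockerfile is an ordinary docker build run from the repository root. A minimal sketch; the `eclipse/che-server:local` tag is our placeholder, not an official tag:

```shell
#!/bin/sh
# Minimal sketch: build the Che server image from the Dockerfile above.
# The tag "eclipse/che-server:local" is a placeholder, not an official tag.
build_che_image() {
  docker build -t eclipse/che-server:local . || return 1
  echo "built eclipse/che-server:local"
}
```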

This led to a common way to start and stop Tomcat, right? The only thing a user would need to have installed is Docker, so the underlying dependencies would be more consistently applied. We ended up with a new type of syntax:

# Run the latest released version of Che
docker run --net=host \
           --name che \
           -v /var/run/docker.sock:/var/run/docker.sock:Z \
           -v /home/user/che/lib:/home/user/che/lib-copy:Z \
           -v /home/user/che/workspaces:/home/user/che/workspaces:Z \
           -v /home/user/che/storage:/home/user/che/storage:Z \
           codenvy/che-server

The syntax is straightforward, but there are four volume mounts, which can be daunting. The volume mounts are not necessarily symmetrical: notice the difference between /lib and /lib-copy, a special mount we use to let Che send files from inside the image to reside on the underlying host. And there is the :Z suffix necessary for SELinux. While the syntax is universal, it posed some new and interesting problems:

  1. Most people are still new to Docker syntax, so it would be easy for them to make mistakes, especially with a command this long that requires so many volume mounts.
  2. Docker poses challenges with volume mounts on different operating systems. If you are on Windows, you cannot mount your local drive using “c:\” syntax, as it will fail. There are also significant limitations between boot2docker and Docker for Windows on which directories are available for volume mounting and which ones are not.
  3. If volume mounts are typed incorrectly or do not map to an actual host directory, the errors returned by Docker or Che are not friendly.
  4. Most users do not need to set a custom location for Che storage, workspaces, or other customizable properties, so it’s unnecessary to have them set those volume mounts just to start the server.
  5. The Docker CLI was changing in syntax and structure frequently between different versions. The new dependency was for users to get Docker installed and then run Che. But there were users installing Docker versions from 1.6 through 1.12 across a variety of different operating systems and virtualization types. The Docker CLI does not behave similarly across all of these versions, and so many tickets appeared relating to different subtleties.
  6. Since the che-server container needs to launch containers of its own, which will be used for developer workspaces, we have to pass access of the Docker daemon to the che-server container using -v /var/run/docker.sock:/var/run/docker.sock. To a professional Docker user, this syntax is obvious, but to a new Che user, it’s mysterious and seemingly pointless.
  7. Some properties, such as changing a port, require customizing the Docker run syntax, but other properties require customizing an environment variable (such as telling Che where Docker is located with DOCKER_MACHINE_HOST), and still others are configured with a che.properties file that must be passed from the host into the Docker container. This leads to the confusing syntax:
# Run the Che container with:
docker run --net=host \
           --name che \
           --restart always \
           -p 9001:8080 \
           -v /var/run/docker.sock:/var/run/docker.sock \
           -v /home/user/che/lib:/home/user/che/lib-copy \
           -v /home/user/che/workspaces:/home/user/che/workspaces \
           -v /home/user/che/storage:/home/user/che/storage \
           -v /local:/container \
           -v /home/my_assembly:/home/user/che \
           -e CHE_LOCAL_CONF_DIR=/container \
           -e DOCKER_MACHINE_HOST= \
           codenvy/che-server

  8. Since users are now using Docker, if they are not careful and omit --rm, they can end up with a lot of leftover containers cluttering the system, and they are not sure which ones to keep or remove after they have stopped running Che.
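The cleanup burden in that last point looks roughly like this sketch: find stopped containers and remove them. The `name=che` filter is our illustrative assumption about how the containers are named, not an official Che command:

```shell
#!/bin/sh
# Illustrative cleanup sketch: list stopped containers whose name matches
# "che" and remove each one. docker rm prints the removed container id.
cleanup_stopped_che_containers() {
  for id in $(docker ps -qa -f "status=exited" -f "name=che"); do
    docker rm "$id"
  done
}
```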

This syntax was not sustainable.

Universal Docker

Earlier this year, we decided to rewrite the Che CLI to solve these basic problems: incompatibilities between operating systems, different versions of Docker, and different tactics for volume host mounting. It was also an opportunity to convert Che into a 12-factor application, with a single, consistent way of configuring all properties, whether system or Che properties.

Docker to the rescue! Why not create a specialized terminating Docker container dedicated to invoking and managing the Che server Docker container with all of its nuances? We could create a container that provides its own Docker CLI, accepts only environment variables as properties to be configured, and then has the internal logic to understand the underlying host operating system, the location of the Docker daemon, and how to launch the Che Docker container, which would be non-terminating.

This basic architecture allowed us to simplify the launching capabilities for any operating system.

docker run --rm -t -v /var/run/docker.sock:/var/run/docker.sock eclipse/che [COMMAND]

  start     # Starts Che server
  stop      # Stops Che server
  restart   # Restarts Che server
  update    # Pulls latest version of Che image (upgrade)
  info      # Print debugging information

And all system configuration would be done with environment variables that a user can pass into the system such as:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
           -e CHE_LOCAL_BINARY=/home/assembly \
           eclipse/che start
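Internally, the launcher container is essentially a small command dispatcher around the che-server lifecycle. A minimal sketch of that shape, with our own illustrative messages rather than Che's actual implementation:

```shell
#!/bin/sh
# Sketch of a launcher-style dispatcher: a short-lived container receives
# one COMMAND and manages the long-running che-server container.
launcher_dispatch() {
  case "$1" in
    start)   echo "starting che-server container" ;;  # would run: docker run -d ...
    stop)    echo "stopping che-server container" ;;  # would run: docker stop ...
    restart) launcher_dispatch stop && launcher_dispatch start ;;
    *)       echo "usage: start|stop|restart|update|info" >&2; return 1 ;;
  esac
}
```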

How did we do this?


  1. We created a minimal Docker container that includes a lowest common denominator Docker CLI, universally supported and guaranteed to work on a host Docker daemon of any version, including Docker 1.12. This was the Docker 1.6 client.
  2. From inside the che-launcher container, we use Docker's --net=host option to discover information about the underlying host. For example, ETH0_ADDRESS=$(docker run --rm --net=host alpine /bin/sh -c "ifconfig eth0 2> /dev/null" | grep "inet addr:" | cut -d: -f2 | cut -d" " -f1) is an inline tactic we use to launch a utility Docker container to find out the IP address of the eth0 interface of the underlying host.
  3. We use the Docker CLI for the launcher to discover information about itself, including its container ID, its image, and version. For example, LAUNCHER_IMAGE_NAME=$(docker inspect --format='{{.Config.Image}}' …).
  4. We normalize any input folders that come from various invokers into a single mount syntax that works for Linux. This includes converting and testing any Windows-formatted volume mounts.
  5. Since the che-launcher is starting, stopping, and managing a non-terminating container, we use the Docker CLI to query the host daemon to find out information about the che-server container. For example, we can use $(docker ps -qa -f "status=running" -f "id=${1}" | wc -l) as a tactic to determine if the che-server container is currently running. This is useful if a user has asked us to stop a Che server that doesn't have a running container.
  6. When starting a Che server in a container, we can use Docker and curl to determine if the Che server has successfully booted. We can use docker inspect to first determine the IP address of the just-launched che-server container, and then use curl to interrogate the server's REST API for information. For example, HTTP_STATUS_CODE=$(curl -I http://$(docker inspect -f '{{.NetworkSettings.IPAddress}}' "${1}"):8080/api/ -s -o /dev/null --write-out "%{http_code}") will set HTTP_STATUS_CODE to 200 once the Che server's API starts returning successfully.
  7. All commands are self-cleaning, and we can ensure that commands to start the che-server have a --rm flag associated with them to avoid lingering but stopped containers. This leads to a run syntax that looks like:
docker run -d --name "${CHE_SERVER_CONTAINER_NAME}" \
           -v /var/run/docker.sock:/var/run/docker.sock:Z \
           -v /home/user/che/lib:/home/user/che/lib-copy:Z \
           -p "${CHE_PORT}":8080 \
           -v "$CHE_STORAGE_LOCATION" "$@"
rm -rf "$ENV_FILE" > /dev/null

In this case, the $@ is the remaining parameters passed to the docker run command; these come from a chained set of method calls, each of which optionally appends additional parameters to the docker run syntax based on the values provided by the user. You can inspect how we do this by reading the chaining here.
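The chaining works roughly like this: each function in the chain inspects one optional environment variable and appends a docker run flag when the user set it. A hedged sketch using only two of the variables shown earlier; the function names and the PARAMS accumulator are our illustration, not Che's actual code:

```shell
#!/bin/sh
# Each link in the chain checks one optional setting and appends a
# docker run flag; the accumulated flags become the trailing "$@"-style args.
add_port_param() {
  # CHE_PORT overrides the published port; default to 8080 when unset
  PARAMS="$PARAMS -p ${CHE_PORT:-8080}:8080"
}
add_conf_param() {
  # Only mount a config directory if the user asked for one
  [ -n "$CHE_LOCAL_CONF_DIR" ] && PARAMS="$PARAMS -v $CHE_LOCAL_CONF_DIR:/container -e CHE_LOCAL_CONF_DIR=/container"
  return 0
}
build_run_params() {
  PARAMS=""
  add_port_param
  add_conf_param
  echo "$PARAMS"
}
```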

Universal CLI

Now that we had the basic ability to manage the Che server packaged in a specialized che-launcher, we wanted to further simplify how users can gain access to the launcher with a CLI.

Also, we were going to package different Che functionality into different Docker containers. So, in addition to the che-launcher container, we would have specialized containers for performing remote workspace synchronization, compiling Che source code for developers, providing synchronization and SSH access between local IDEs and remote Che servers, and performing other system tests such as networking tests or debugging. In essence, we were building an entire library of specialized Docker containers that perform cross-platform functions for users and project maintainers!

We needed a CLI that would consistently run on any OS that could further simplify the access to Docker and provide a consistent experience across the variety of utility containers that we provided.

Bash and Docker to the rescue. Since we required all of our users to have Docker installed for Che to operate, that installation came with requirements that would essentially guarantee that bash would also be installed on all of those systems, even Microsoft Windows.

So we built the Che CLI using a layered bash script, which would operate consistently on different operating systems and be self-updating. The bash script performs basic health checks for Docker, detects the host operating system, loads pre-set environment variables or profiles defined by the user, and then passes along the proper values into the che-launcher (or other utility container) for execution.
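A sketch of that layered flow (Docker health check, OS detection, profile loading, then delegation), with our own illustrative messages; the real che.sh is considerably more involved:

```shell
#!/bin/sh
# Illustrative pre-flight sequence for a bash CLI wrapper:
# verify Docker, detect the host OS, load an optional user profile,
# then delegate the command to the launcher container.
che_cli() {
  command -v docker >/dev/null 2>&1 || { echo "error: docker not installed" >&2; return 1; }
  case "$(uname)" in
    Linux)  HOST_OS=linux ;;
    Darwin) HOST_OS=mac ;;
    *)      HOST_OS=windows ;;
  esac
  # Load pre-set environment variables from an optional profile file
  [ -f "$HOME/.che/profile" ] && . "$HOME/.che/profile"
  echo "host=$HOST_OS command=$1"
  # a real CLI would now run:
  # docker run --rm -v /var/run/docker.sock:/var/run/docker.sock eclipse/che "$1"
}
```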

We placed the CLI into the root of our source code repository, and then enabled people to curl the scripts to their disk.

REM On Windows, you need both che.bat and che.sh
curl -sL https://raw.githubusercontent.com/eclipse/che/master/che.sh > che.sh
curl -sL https://raw.githubusercontent.com/eclipse/che/master/che.bat > che.bat

REM Add the files to your PATH
set PATH=<path-to-cli>;%PATH%

# On Linux or Mac
curl -sL https://raw.githubusercontent.com/eclipse/che/master/che.sh > /usr/local/bin/che
chmod +x /usr/local/bin/che

Not all people have curl on their computer, so we also had another version of this that used Docker to install the CLI. You can use Docker to send files that live inside a container down to the host operating system. Unfortunately, the syntax for this is a bit tricky and made the experience less friendly for users. You do it by mounting two folders inside the delivery container: one folder holds the files you want to deliver, and the other is the host destination where they will land. When the container starts, it must copy the files from the first volume into the destination mount so that they appear on the host system; the copy step is required to avoid overwriting files that are already on the host. Because this requires the user to perform a volume mount that doesn't make much sense on its own, we instead offer a straight CLI download using curl.
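The two-mount delivery pattern can be sketched generically; the paths, function name, and image name below are illustrative, not Che's actual installer:

```shell
#!/bin/sh
# Sketch of delivering files from inside an image to the host via two mounts:
#   $1 - the payload directory baked into the image (e.g. /files)
#   $2 - the host-mounted destination directory (e.g. /copy-to)
# Copying at startup avoids shadowing or overwriting existing host files.
deliver_files() {
  src="$1"; dest="$2"
  for f in "$src"/*; do
    cp "$f" "$dest/"   # copy payload into the host-mounted destination
  done
}
```

On the host, the user would run something like `docker run -v "$(pwd)":/copy-to some/delivery-image` (image name hypothetical), with the container's entrypoint invoking this copy.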

Even if users do not have curl on their machine, there is an easy way around that — Docker, of course. You can just have users type docker run --net=host appropriate/curl <curl-command>.

There is some amazing stuff in the CLI — it’s great what bash and Docker offer to users. You can inspect the CLI and how it sets values, grabs the current profile, and provides self-updating syntax by taking a look at two files in our GitHub repository.


Since we placed the new che-launcher and CLI into production three months ago, the number of tickets related to configuring how Che runs has dropped by 80%. The only dependency is getting Docker installed, and with the advent of Docker for Windows and Docker for Mac, users are increasingly able to get Docker installed quickly and painlessly, giving us a robust cross-OS platform for running and building software.

I’d like to thank Mario Loriedo, an engineer at Red Hat, for his contributions and advancement in the thinking that has allowed for this. He told me:

Docker container patterns are fascinating. They go way beyond the limits of microservices and open up a completely different world of application architecture.



Published at DZone with permission of Tyler Jewell, DZone MVB. See the original article here.
