Docker Container Delivery vs Traditional App Delivery
Docker and container technology can make deployments easier, but several challenges remain.
In the past, application deployment meant moving lots of components, provided by developers, to lots of servers, databases, etc., managed by operations. With Docker and containers, we often hear statements like: “That all goes away now – developers simply have to deliver a ready-to-go Docker image, and we’re done! No more need for app deployment tools like XL Deploy!”
Having worked with many users moving towards container-based deployments, it turns out that this statement simply isn’t true: while Docker and containers can certainly make some aspects of packaging and deployment easier, many challenges remain that tools like XL Deploy can help with.
The packaging of a traditional application depends on how it has been written: the operating system (Linux, Windows), the language (Java, C#, Ruby, Python, JS, …), its behavior (frontend, backend, web, processing), whether or not it uses a database (SQL, NoSQL), … The same is true of the associated environment and its infrastructure: the host (virtual, cloud), the operating system (Linux, Windows), the middleware (Tomcat, Apache HTTPd, IIS, IBM WebSphere, …), and the data (MySQL, MongoDB).
In XL Deploy, we would have the following classic deployment, where:
- the PetPortal version 2.0-89 package contains: petclinic (jee.war), petclinic-backend (jee.war), petdatasource (jee.datasource), sql (sql.sqlfolder), and logger (a custom type to configure log4j.properties);
- the tomcat-dev environment contains: tomcat-dev (tomcat.server), tomcat.vh (tomcat.virtualhost), sql-dev (sql.mysqlclient), and smoke-test (smoketest.runner).
Traditional application deployment
Docker’s promise is the following: “one single kind of item in the package (docker.image) will be deployed on a single kind of target (docker.machine) to become a running Docker container.”
What changes if I package and deploy my PetPortal application using Docker?
- In the package, instead of having two jee.war files (petclinic, petclinic-backend), I would have two docker.image artifacts, based on a tomcat:8.0 image from the Docker Hub, that contain the WAR file and its configuration; but I would keep my ‘smoke test’ and my ‘sql’ folder. Moreover, I would need to package the property file externally to my image (an image is like a commit – do not modify it!).
- In the environment, I would replace the tomcat.server and the tomcat.virtualhost with a single docker-machine, and I would keep my sql-dev MySQL client and test-runner-1 (smoketest.runner).
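For illustration, such a Docker image could be built from a minimal Dockerfile. This is a sketch: the WAR filename and paths are assumptions, not taken from the actual PetPortal build.

```dockerfile
# Hypothetical Dockerfile for the petclinic image (filenames are illustrative).
# Based on the official tomcat:8.0 image from the Docker Hub, as described above.
FROM tomcat:8.0

# Bake the application WAR into the image.
COPY petclinic.war /usr/local/tomcat/webapps/petclinic.war

# Note: no environment-specific property files are copied here. An image is
# like a commit and should not be modified, so environment configuration is
# mounted at runtime instead.
```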
Finally, few things have changed, but the deployment task generated by XL Deploy is a bit different, since it uses the docker commands:
- no more ‘copy WAR file’ steps, but ‘docker pull’ steps
- no more ‘start Tomcat process’ steps, but ‘docker run’ steps
Dockerized application with XL Deploy
If we look in detail at the two ‘docker run’ commands generated by XL Deploy and its xld-docker-plugin:
For the ‘petclinic-backend’ container, the command is as simple as the ones found in any Docker blog post or documentation:
docker run -d --name petclinic-backend petportal/petclinic-backend:1.1-20150909055821
For the ‘petclinic’ container, the command is a bit more complex because we need to configure it – according to the documentation, the ‘docker run’ command accepts a large number of options (more than 50), many of which can be added several times. In our case, we need:
- to link it with the petclinic-backend container,
- to manage the exposed ports,
- to mount volumes to apply the configuration,
- to set environment variables.
So the generated command is:
docker run -d -p 8888:8080 --link=petclinic-backend:petclinic-backend -v /home/docker/volumes/petportal:/application/properties -e "loglevel=debug" --name petclinic petportal/petclinic:3.1-20150909055821
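Conceptually, a deployment tool assembles such a command from a declarative container specification. Here is a minimal sketch in Python of how that assembly could work; the function and parameter names are illustrative, not XL Deploy’s actual API.

```python
def build_docker_run(name, image, ports=(), links=(), volumes=(), env=()):
    """Assemble a 'docker run -d' command line from a declarative container spec."""
    parts = ["docker", "run", "-d"]
    for host_port, container_port in ports:
        parts.append("-p %d:%d" % (host_port, container_port))
    for target in links:
        parts.append("--link=%s:%s" % (target, target))
    for host_path, container_path in volumes:
        parts.append("-v %s:%s" % (host_path, container_path))
    for key, value in env:
        parts.append('-e "%s=%s"' % (key, value))
    parts.append("--name %s" % name)
    parts.append(image)
    return " ".join(parts)

# Reproduces the 'petclinic' command shown above.
cmd = build_docker_run(
    "petclinic",
    "petportal/petclinic:3.1-20150909055821",
    ports=[(8888, 8080)],
    links=["petclinic-backend"],
    volumes=[("/home/docker/volumes/petportal", "/application/properties")],
    env=[("loglevel", "debug")],
)
print(cmd)
```

With no optional arguments, the same function produces the simple ‘petclinic-backend’ command as well.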
Since the ‘petclinic’ container needs to be linked to the ‘petclinic-backend’ container, the xld-docker-plugin takes care of generating the steps in the right order: the linked container is run first, then the container that depends on it.
As with other middleware, the main pain point is not calling remote commands, but generating the right commands, with the right configuration, in the right order.
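The ordering logic can be sketched as a simple dependency traversal over container links. This is a simplification of what a real plugin does, with illustrative names:

```python
def run_order(containers):
    """Return container names ordered so each container's linked dependencies come first.

    'containers' maps a container name to the list of containers it links to.
    """
    ordered, visited = [], set()

    def visit(name):
        if name in visited:
            return
        visited.add(name)
        for dep in containers.get(name, []):
            visit(dep)  # linked containers must be started first
        ordered.append(name)

    for name in containers:
        visit(name)
    return ordered

# 'petclinic' links to 'petclinic-backend', so the backend is started first.
print(run_order({"petclinic": ["petclinic-backend"], "petclinic-backend": []}))
# → ['petclinic-backend', 'petclinic']
```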
Two main areas of difficulty come up immediately:
- Container configuration, with its more than 50 parameters covering network, OS, and security settings: TCP port mapping, links with other containers, memory, privileges, … (cf. the docker run documentation).
- Volume management. For example, container configuration is often done by providing a set of files that must first be uploaded to the docker-machine before the volume is mounted in the container; the same story applies when running a container that manages data (e.g. a SQL database).
In the following scenario, the configuration is managed by the ‘config’ docker.folder, which contains placeholders in .properties files. When a value is modified, the folder has to be uploaded again and the container restarted.
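A placeholder substitution pass over such .properties files might look like the sketch below. The `{{KEY}}` placeholder syntax is an assumption for illustration; XL Deploy has its own placeholder and dictionary mechanism.

```python
import re

def resolve_placeholders(text, values):
    """Replace {{KEY}} placeholders in a .properties file body with concrete values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], text)

# Illustrative properties file with two placeholders.
props = "loglevel={{LOG_LEVEL}}\nbackend.url={{BACKEND_URL}}\n"
print(resolve_placeholders(props, {"LOG_LEVEL": "debug",
                                   "BACKEND_URL": "http://petclinic-backend:8080"}))
```

After substitution, the resolved folder is uploaded to the docker-machine and mounted as a volume, as described above.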
Dependency management with XL Deploy
So managing the configuration for a given environment can be difficult, and the application still needs to be deployed to several environments; XL Deploy’s dictionaries help you there. Moreover, moving to Docker usually implies moving to a microservices architecture, which in turn implies more deployments. See the XL Deploy Docker microservice sample application.
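The dictionary idea can be illustrated as one set of placeholder values per target environment: the same package is deployed everywhere, and only the resolved values differ. The environment names and keys below are made up for the example; this is a simplified view, not XL Deploy’s actual data model.

```python
# One dictionary per target environment (illustrative values).
DICTIONARIES = {
    "dev":  {"LOG_LEVEL": "debug", "HTTP_PORT": "8888"},
    "prod": {"LOG_LEVEL": "warn",  "HTTP_PORT": "80"},
}

def value_for(environment, key):
    """Look up the value of a placeholder key for a given environment."""
    return DICTIONARIES[environment][key]

print(value_for("dev", "LOG_LEVEL"))   # debug
print(value_for("prod", "HTTP_PORT"))  # 80
```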
This article originally appeared on the XebiaLabs blog, by Benoit Moussaud.
Published at DZone with permission of Benoit Moussaud. See the original article here.
Opinions expressed by DZone contributors are their own.