Practical Docker Integration - Part 1
Docker has seen widespread adoption in recent years. Thinking about Docker adoption yourself? You have a lot to consider, so here is some insight into the Docker adoption path. We'll explore the reasons behind container adoption, immutable deployments, and plans for your existing deployment tools.
Considering Docker adoption in your organization? There are many aspects to consider. As we are going through this evaluation process at LivePerson, we thought it would be beneficial to share some of our insights. However you look at it, such an infrastructure change has a huge impact on developers, CI, CD, configuration, monitoring, packaging, security, and almost every other aspect of software development and delivery. You should work through each step, find your best practices, and form the skeleton projects and templates that will help you reuse solutions when converting new projects. The challenge of integrating containers into existing services with existing deployment tools is not something to be taken lightly. So, if you are about to start that process, or even in the midst of it, I hope you find our insights helpful.
Today, Docker and Kubernetes are the mainstream technologies for containerized deployment, so these are the technologies we decided to evaluate. The insights, however, apply to any container solution. Let's dig into the details of the process:
Review your reasons for adopting containers. It's always best to start by getting the motivation right. Assuming you already use a configuration tool for your deployments, and you are very happy with it, ask yourself: why should I replace my current methodology with container-based deployments? A good answer would be that you are a developer who appreciates the concepts of immutability and functional programming; containers let you carry those concepts into your deployments. Containerized deployments make immutable deployments much easier (and more fun). By making your containers read-only, you can even approach the idea of "pure" deployments, where upgrading means replacing a container rather than modifying it.
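To make this concrete, here is a minimal sketch of the replace-don't-modify pattern using Docker's read-only root filesystem (the image and container names are illustrative, not from the original article):

```shell
# Run a container whose root filesystem is read-only; only explicit
# tmpfs mounts are writable, so the running container cannot be
# mutated in place.
docker run -d \
  --name myapp \
  --read-only \
  --tmpfs /tmp \
  myorg/myapp:1.0.0

# To "upgrade", you do not modify the container -- you replace it:
docker rm -f myapp
docker run -d --name myapp --read-only --tmpfs /tmp myorg/myapp:1.0.1
```

The design point is that any state the app needs to write goes to explicit mounts, so the deployment unit itself stays immutable.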
Another good answer is that adopting container-based deployments can bring you closer to implementing continuous deployment. The excellent book Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation was written before containers became popular, yet many of the components its authors describe as essential to successful continuous deployment exist in container-based deployments, among them: automating all facets of building, a deployment pipeline, collaboration between developers, testers, and operations, and managing infrastructure, dependencies, and auditing. Many of these turn out to be part of the Docker infrastructure itself, part of peripheral projects, or side effects of using it.
A bad reason for adopting containers would be the belief that they will improve the performance of a particular server. A container merely "wraps" your process; if it is underperforming, it will continue to underperform. On the rare occasion when the underperformance is due to many VMs consuming most of a physical machine's memory, more lightweight processes will share resources better and reduce overall resource utilization.
Are you ready for immutable deployments? Above, we dealt with the motivation to adopt containerized deployments. Now let's check how ready you are to move to containers. Here are some questions to assess your readiness:

- Do you install your apps from scratch? Do you install them from scratch in production?
- If not, are you at least already deploying your apps into VMs, or, even better, into cloud VMs?
- Do deployment/production teams add new instances of your app without contacting the relevant developer team?
- If someone were to restart your apps right now, would they all behave as usual (that is, without problems)?

If you answered yes to at least some of the above, then moving to containerized deployments is achievable with much less frustration. If not, you should first review and update your deployments, or be ready for much more work when adopting container-based deployments. (Note: if you plan to mutate your environments with containerized deployments, you are heading the wrong way.)
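The first question above can be turned into a quick drill, even before any containers are involved. A sketch, assuming hypothetical names (the service name, install script, and health endpoint are illustrative):

```shell
# Readiness drill: can the app be wiped, reinstalled from scratch,
# and restarted with zero manual intervention?
systemctl stop myapp
rm -rf /opt/myapp              # remove the current installation entirely
./install-myapp.sh             # your automated, from-scratch installer
systemctl start myapp
curl -fsS http://localhost:8080/health   # should succeed with no hand-fixes
```

If any step needs a human to patch a config file or recreate state by hand, that gap will resurface (painfully) when you move to immutable container deployments.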
What are your plans for your current deployment tools? In most cases you already use deployment tools such as Puppet, rpm, and the like. On one hand, using these tools is great, because it means your deployments are very close to fully automated (or already fully automated). On the other hand, it means you have a choice to make: either drop them and use a single container package builder (the Dockerfile), or continue using your rpm scripts alongside Docker scripts, and likewise for Puppet scripts. Personally, I recommend keeping things simple: if you can get by with a single scripting approach, go for it; if you already have too much infrastructure around rpm, consider keeping it. Note that if the bulk of your packaging stays in rpm, you should aspire to be able to install your packages even without Docker. As for Puppet, you should seriously consider replacing it altogether with Docker scripting and its peripheral tools (e.g., Kubernetes).
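If you do keep rpm as the packaging source of truth, the Dockerfile can simply consume the rpm, so the same package remains installable with or without Docker. A minimal sketch, where the base image, package name, and binary path are all illustrative assumptions:

```dockerfile
# Reuse the existing rpm as the single source of packaging truth.
FROM centos:7
COPY myapp-1.0.0.rpm /tmp/myapp.rpm
RUN yum install -y /tmp/myapp.rpm && \
    yum clean all && \
    rm /tmp/myapp.rpm
USER nobody
CMD ["/usr/bin/myapp"]
```

This keeps one packaging pipeline: bare-metal and VM hosts install the rpm directly, while container images are just a thin layer over the same artifact.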
Published at DZone with permission of Tomer Ben David, DZone MVB. See the original article here.