DevOps Tool Tyranny
See what the 2017 State of DevOps report has to say about choosing DevOps and CI/CD tools.
In the software development world, we hear the adage "use the right tool for the job" all the time. Its use goes back decades, and we've all been told, "you don't hammer a nail with..." For me, choosing the tool is often the most important step in the process (as significant as how you use it), because the implications are long-term and the wrong choice can be expensive to undo.
When it comes to programming languages, some are clearly better suited to specific use cases than others. In other instances, the decision is less clear-cut. For example, if I were to develop a multi-threaded application today, I would select Go, or perhaps even Node.js in a Kubernetes cluster; I would not choose Java for such a project. No doubt some readers will disagree with my example, and that illustrates my point: it can be difficult to determine which language is best for a particular project, because many factors must be weighed and considered.
Early in my career, I learned the benefits of putting the greater good of the whole project above the merits of any specific language. I asked questions like: "Does anyone on my team know the language?" "Is it a language with staying power, or is it simply in vogue and destined to go out of fashion?" "Is learning this language a pet project for the recommending developer, and will they leave once they are done and bored?" "Do I have the talent to maintain this project, or will I be caught in a continual rewrite cycle?"
While I learned that standardizing on languages provides stability and, perhaps ironically, nimbleness, the 2017 State of DevOps report suggests the opposite when it comes to CI/CD and DevOps tools. It found that teams allowed to choose their own DevOps tools deliver more productively than teams whose tool choice is dictated by a central group. And when has a distant central group ever made effective decisions for individual teams? Centralized decision-making is ludicrous, but so are pet-project or myopic, short-term decisions.
Sadly, the most recent study did not revisit this topic. When it comes to scaling continuous delivery across the enterprise, I think you have to strike a balance. In my opinion, each continuous delivery team must be allowed to choose the tooling that best matches what it is delivering. However, I also believe that in order to scale continuous delivery across multiple teams, releases, and a dual-cloud strategy, it is vital that a large organization standardize on some pervasive, ubiquitous automation mechanics that can act as a digital conveyor belt.
This way, each team uses its tool of choice up to a certain point, after which the standardized conveyor belt carries the release forward.
Optimizing Your Pipeline
You need a solution that facilitates a fully automated, optimized pipeline and ensures fast, consistent, repeatable deployments across all environments, including production.
Such a solution also needs to be open, because integrations are key to managing the delivery flow. In other words, fitting different tools smoothly and seamlessly into the value stream is crucial to the productivity of the overall team. Openness also encourages innovation and experimental techniques such as canary deployments, while eliminating time wasted on manual activity.
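To make the canary idea concrete, here is a minimal sketch (not tied to any particular tool; the function name, weights, and staged ramp are illustrative assumptions) of a canary rollout as a weighted traffic split that gradually shifts requests from the stable version to the new one:

```python
import random

def route_request(canary_weight: float) -> str:
    """Send a request to the canary with probability canary_weight,
    otherwise to the stable version."""
    return "canary" if random.random() < canary_weight else "stable"

# Ramp the canary's share of traffic in stages; in a real pipeline,
# each step would be gated on health checks before proceeding.
for weight in (0.05, 0.25, 0.50, 1.00):
    sample = [route_request(weight) for _ in range(10_000)]
    observed = sample.count("canary") / len(sample)
    print(f"target share {weight:.2f} -> observed {observed:.2f}")
```

At a weight of 0.0 every request goes to the stable version, and at 1.0 everything goes to the canary; an automated pipeline would advance the weight, or roll it back, based on error rates and latency rather than a fixed schedule.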
Ultimately, an open toolchain brings teams together, harnesses technology and encourages innovation. So, why not see how such a solution could help you? Try out the CA Continuous Delivery Automation trial.
Published at DZone with permission of , DZone MVB. See the original article here.