Avoiding the Scripting “Scaries”: Best Practices for Preventing the Script-ocalypse
So many scripts, so little automation. Without these best practices, your scripting efforts will create more backlog than they prevent.
Scripts have become a key tool in the average developer’s bag of tricks, and they have altered the development pipeline in significant ways. The proliferation of scripts has even led some organizations to dedicate entire teams to managing the glut of scripts that has accumulated over time. To help companies move beyond this management nightmare, here are some best practices for avoiding the script-ocalypse altogether.
Improve Process Robustness and Scalability
Of course, everyone wants to improve process robustness and scalability – the question is how. One of the most effective ways is to use a tool that orchestrates your releases and provides reusable objects. This reduces risk because you no longer have to write a script for every single application, and the effect cascades: you write fewer scripts, and you have fewer scripts to maintain. As a result, you assume much less risk across many levels of your technology and pipeline. As a bonus, system-level visibility across all of your processes increases – which benefits your entire development lifecycle.
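To make the "reusable objects" idea concrete, here is a minimal sketch in Python. The `DeployStep` class and its fields are illustrative assumptions, not any particular orchestration tool's API – the point is that one parameterized definition replaces a copied script per application.

```python
from dataclasses import dataclass

@dataclass
class DeployStep:
    """One reusable deployment step, configured per application."""
    app: str
    artifact: str
    target_env: str

    def run(self) -> str:
        # A real orchestrator would invoke your deploy tooling here;
        # this sketch just reports what would happen.
        return f"deploy {self.artifact} for {self.app} to {self.target_env}"

# The same object serves every application -- no copied scripts.
steps = [DeployStep("billing", "billing-1.4.2.jar", "staging"),
         DeployStep("checkout", "checkout-2.0.0.jar", "staging")]
results = [s.run() for s in steps]
```

Because every application shares one definition, a fix or improvement to `DeployStep` benefits all of them at once – that is where the risk reduction comes from.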
Don’t Just Automate, Orchestrate
Many developers make the mistake of thinking that having scripts means being automated. To be truly automated, every action needs to be automated and driven by your pipeline – your approvals, your handoffs, everything. Too often during a release, team members have to stop in the middle of a task to ask for approval, interrupting the entire pipeline. But what if you could automate all of that?
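As a sketch of what an automated approval might look like, consider the hypothetical gate below. The policy rules (tests passed, low risk, non-production target) are invented for illustration – in practice they would be whatever criteria your team has pre-agreed on.

```python
# Instead of interrupting a person mid-task, the pipeline checks policy
# and records the decision itself; only risky changes escalate.
def auto_approve(change: dict) -> bool:
    """Approve automatically when the change meets pre-agreed criteria."""
    return (change["tests_passed"]
            and change["risk"] == "low"
            and change["target"] != "production")

pipeline_events = []

def handoff(change: dict) -> None:
    if auto_approve(change):
        pipeline_events.append(("approved", change["id"]))
    else:
        # Only genuinely risky changes wait on a human.
        pipeline_events.append(("needs-human", change["id"]))

handoff({"id": "CHG-1", "tests_passed": True, "risk": "low", "target": "staging"})
handoff({"id": "CHG-2", "tests_passed": True, "risk": "high", "target": "production"})
```

The routine change sails through; the human is reserved for the one decision that actually needs judgment.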
A great quote from the 2018 Accelerate State of DevOps Report speaks to this: "High performers automate significantly more of their configuration management, testing, deployments and change processes than other teams." Looked at purely from a team perspective, the high performers are the ones that have fully automated their processes. While that can sound daunting, it’s best approached with an agile mindset – this is an iterative process. You have to start somewhere, while keeping the goal of 100% automation in view.
Not only should you automate, it is also critical to orchestrate your pipeline. By automating and creating reusable objects, you can get to the point of using your pipeline as a service. Imagine reusing one pipeline for every application in your process: hundreds, even thousands of applications, all running through the same pipeline and the same reusable objects underneath. It’s a pretty picture, isn’t it?
Create a Value Stream Map
When you create a value stream map for a pipeline, you want to be very inclusive about who you invite into the room. Include representatives from release management, development, deployment engineering, infrastructure engineering, team leads, and executives. Basically, anyone who has “skin in the game” for your release should be present.
Once you feel you have solid representation of everyone impacted by your releases, sit down and draw out the entire process, including what each script does. That may be a big task depending on how your pipeline is currently structured, but there’s a lot of opportunity in it. For example, you might find an application that is deployed by a single script – and that script has been duplicated, with small variations, for another application, and another. Imagine doing that 100 times: you end up with 100 variations of one script for your 100 applications. That’s an obvious and beneficial place to deploy a bit of automation.
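The fix for those 100 near-identical copies is usually one parameterized script plus a table of per-application settings. A minimal sketch, with invented repository URLs and ports standing in for whatever actually varies between your copies:

```python
# One deploy function; the per-app differences live in data, not in
# 100 diverging copies of the script.
APPS = {
    "billing":  {"repo": "git@example.com:billing.git",  "port": 8080},
    "checkout": {"repo": "git@example.com:checkout.git", "port": 8081},
}

def deploy(app: str) -> str:
    cfg = APPS[app]
    # A real script would clone cfg["repo"], build, and start on cfg["port"].
    return f"{app}: deploy from {cfg['repo']} on port {cfg['port']}"

commands = [deploy(app) for app in APPS]
```

Onboarding application 101 then means adding one row to the table, not copying and hand-editing a script.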
This is also a great time to understand which tools you are using for each step. When you’re jumping from toolset to toolset, it’s difficult to get a holistic view of how this affects your overall performance without a complete picture of the value stream. Once you have that picture, you can start asking questions like, “Which of the tools we interact with can we integrate into the pipeline? How are we handling approvals?” By including detailed sequencing timelines and identifying bottlenecks, you can realize some impactful efficiencies.
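The "detailed sequencing timeline" can be as simple as recording each step's duration and surfacing the longest one. The steps and durations below are hard-coded examples rather than measured values, but the pattern is the point:

```python
# A value-stream timeline: (step, duration in minutes).
steps = [
    ("checkout code",   2),
    ("build",           12),
    ("manual approval", 240),  # waiting on a person
    ("deploy",          8),
]

total = sum(minutes for _, minutes in steps)
bottleneck = max(steps, key=lambda s: s[1])
```

In this example the pipeline spends 262 minutes end to end, 240 of them waiting on a manual approval – exactly the kind of bottleneck the value stream map is meant to expose.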
Finally, creating a value stream map gives you an opportunity to organize your scripts. Laying out the entire value stream gives you a picture of the variety and quantity of scripts you’re running – whether they are CI scripts, deployment scripts, or test automation scripts – and lets you bucket them in a logical way.
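If your scripts follow any naming convention at all, even the bucketing can be scripted. The `ci_`/`deploy_`/`test_` prefixes below are an assumed convention for illustration:

```python
# Bucket scripts by role using filename conventions.
scripts = ["ci_build.sh", "deploy_billing.sh", "deploy_checkout.sh",
           "test_smoke.sh", "ci_lint.sh"]

buckets = {"ci": [], "deployment": [], "test": [], "other": []}
for name in scripts:
    if name.startswith("ci_"):
        buckets["ci"].append(name)
    elif name.startswith("deploy_"):
        buckets["deployment"].append(name)
    elif name.startswith("test_"):
        buckets["test"].append(name)
    else:
        buckets["other"].append(name)
```

Anything landing in the "other" bucket is a script nobody has categorized – often the first place to look for dead weight.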
Identify Redundancies, Dependencies, and Bottlenecks
Of course, identifying (and hopefully rectifying) dependencies and bottlenecks is a good practice for any part of your development pipeline, but it becomes even more important when scripts are involved. If one application depends on another, it’s almost impossible to script around. For example, imagine you’re testing two dependent apps: if you don’t have the right versions of both, you can’t test one because the other hasn’t been updated in that environment. Something even that simple can be really disruptive and make scripting very difficult. By understanding where your dependencies and redundancies exist, you can take the first step toward solving the issues they present.
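One way a pipeline can catch that situation early is a pre-test dependency check: before testing app A against app B, verify both environments run compatible versions. The version table and the minimum-version rule below are invented for illustration:

```python
# Deployed versions per app, and each app's minimum dependency versions.
deployed = {"orders": "2.3.0", "inventory": "1.9.0"}
required = {"orders": {"inventory": "2.0.0"}}  # orders needs inventory >= 2.0.0

def version_tuple(v: str):
    return tuple(int(x) for x in v.split("."))

def can_test(app: str) -> bool:
    """True only if every dependency meets its minimum deployed version."""
    needs = required.get(app, {})
    return all(version_tuple(deployed[dep]) >= version_tuple(minimum)
               for dep, minimum in needs.items())

ok = can_test("orders")
```

Here `can_test("orders")` comes back `False` because inventory is still on 1.9.0, so the pipeline can refuse to start the test run instead of failing halfway through it.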
Scripts are one of the most effective ways of streamlining your pipeline, but they need to be used in a way that keeps your organization’s applications scalable and sustainable. By following the best practices outlined above, your organization can use scripts like pros – and avoid the dreaded script-ocalypse.
Opinions expressed by DZone contributors are their own.