Workflow Means Different Things to Different People
To answer important questions about workflows, it makes sense to look at how software moves through a Continuous Delivery pipeline.
Wikipedia defines the term workflow as “an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes” — processes that make things or that just generally get work done. Manufacturers can thank workflows for revolutionizing the production of everything from cars to chocolate bars. Management wonks have built careers on applying workflow improvement theories like Lean and TQM to their business processes.
What does workflow mean to the people who create software? Years ago, probably not much. While this is a field with plenty of complicated work to move along a conceptual assembly line, the actual process of building software has historically included so many zigs and zags that the prototypical pathway from A to Z was less a straight line than a sideways fever chart.
Today, workflow as a concept is gaining traction in software circles, with the universal push to increase businesses’ speed, agility, and focus on the customer. It’s emerging as a key component in an advanced discipline called Continuous Delivery that enables organizations to conduct frequent, small updates to apps so companies can respond to changing business needs.
So, how does workflow actually work in Continuous Delivery environments? How do companies make it happen? What kinds of pains have they experienced that have pushed them to adopt workflow techniques? What kinds of benefits are they getting?
To answer these questions, it makes sense to look at how software moves through a Continuous Delivery pipeline. It goes through a series of stages to ensure that it’s being built, tested, and deployed properly. While organizations set up their pipelines according to their own individual needs, a typical pipeline might involve a string of performance tests, Selenium tests for multiple browsers, Sonar analysis, user acceptance tests, and deployments to staging and production. To tie the process together, an organization would probably use a set of orchestration tools such as the ones available in Jenkins.
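As an illustration, the chain of stages described above could be expressed as a Jenkins scripted Pipeline. This is only a sketch: the stage names and shell scripts are hypothetical placeholders, not a prescribed setup.

```groovy
// Hypothetical Jenkinsfile sketch of the pipeline described above.
// The scripts it calls (run-perf-tests.sh, deploy.sh, etc.) are
// placeholders for an organization's own tooling.
node {
    stage('Build') {
        sh 'mvn -B clean package'
    }
    stage('Performance Tests') {
        sh './run-perf-tests.sh'
    }
    stage('Selenium Tests') {
        // Run the browser tests for multiple browsers in parallel
        parallel(
            chrome:  { sh './run-selenium-tests.sh chrome' },
            firefox: { sh './run-selenium-tests.sh firefox' }
        )
    }
    stage('Sonar Analysis') {
        sh 'mvn sonar:sonar'
    }
    stage('Deploy to Staging') {
        sh './deploy.sh staging'
    }
    stage('User Acceptance Tests') {
        sh './run-uat.sh staging'
    }
    stage('Deploy to Production') {
        sh './deploy.sh production'
    }
}
```

Each stage runs only if the previous one succeeds, which is the stage-to-stage orchestration described here.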
Assessing Your Processes
Some software processes are simpler than others. If the series of steps in a pipeline is simple and predictable enough, it can be relatively easy to define a pipeline that repeats flawlessly, like a factory running at full capacity.
But this is rare, especially in large organizations. Most software delivery environments are much more complicated, requiring steps that need to be defined, executed, revised, run in parallel, shelved, restarted, saved, fixed, tested, retested, and reworked countless times.
Continuous Delivery itself smooths out these uneven processes to a great extent, but it doesn’t eliminate complexity all by itself. Even in the most well-defined pipelines, steps are built in to sometimes stop, veer left, or double back over some of the same ground. Things can change – abruptly, sometimes painfully – and pipelines need to account for that.
The more complicated a pipeline gets, the more time and cost get piled onto a job. The solution: automate the pipeline. Create a workflow that moves the build from stage to stage, automatically, based on the successful completion of a process – accounting for any and all tricky hand-offs embedded within the pipeline design.
Again, for simple pipelines, this may not be a hard task — but for complicated pipelines, there are a lot of issues to plan for. Here are a few:
- Multiple stages. In large organizations, you may have a long list of stages to accommodate, with some of them occurring in different locations, involving different teams.
- Forks and loops. Pipelines aren’t always linear. Sometimes, you’ll want to build in a retest or a rework, assuming some flaws will creep in at a certain stage.
- Outages. They happen. If you have a long pipeline, you want to have a workflow engine ensure that jobs get saved in the event of an outage.
- Human interaction. For some steps, you want a human to check the build. Workflows should accommodate the planned — and unplanned — intervention of human hands.
- Errors. They also happen. When errors crop up, you want an automated process to let you restart where you left off.
- Reusable builds. In the case of transient errors, the automation engine should allow builds to be used and re-used to ensure that processes move forward.
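Several of these concerns map directly onto built-in Pipeline steps: `retry` handles transient errors, `input` pauses for a human decision, `parallel` handles forking, and because a Pipeline run's state is persisted, it can survive an outage rather than starting over. A minimal sketch, with invented script names and messages:

```groovy
// Illustrative sketch only; the scripts it calls are placeholders.
node {
    stage('Integration Tests') {
        // Transient errors: retry a flaky step a few times before failing
        retry(3) {
            sh './run-integration-tests.sh'
        }
    }
    stage('Approval') {
        // Human interaction: block until a person approves the promotion
        input message: 'Deploy this build to production?'
    }
    stage('Deploy to Production') {
        sh './deploy.sh production'
    }
}
```

In practice, a long-running `input` wait is usually placed outside a `node` block so that an executor isn't tied up while the pipeline waits for a human.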
In the past, software teams have automated parts of the pipeline process using a variety of tools and plugins. They have combined the resources in different ways, sometimes varying from job to job. Pipelines would get defined, and builds would move from stage to stage in a chain of jobs — sometimes automatically, sometimes with human guidance, with varying degrees of success.
As the pipeline automation concept has advanced, new tools are emerging that account for many of the variables that have thrown wrenches into complex pipelines over the years. Some of these tools are delivered by vendors with big stakes in the Continuous Delivery process – known names like Chef, Puppet, Serena, and Pivotal. Other popular Continuous Delivery tools, such as Jenkins, have their roots in open source.
Speaking of Jenkins, the community recently introduced functionality specifically designed to help automate workflows. Jenkins Pipeline (formerly known as Workflow) gives a software team the ability to automate the whole application lifecycle – simple and complex workflows, automated processes, and manual steps. Teams can now orchestrate the entire software delivery process with Jenkins, automatically moving code from stage to stage and measuring the performance of an activity at any stage of the process.
Over the last 10 years, Continuous Integration brought tangible improvements to the software delivery lifecycle – improvements that enabled the adoption of agile delivery practices. The industry continues to evolve. Continuous Delivery has given teams the ability to extend beyond integration to a fully formed, tightly integrated delivery process drawing on tools and technologies that work in concert.
Pipeline brings Continuous Delivery forward another step, helping teams link together complex pipelines and automate tasks every step of the way. For those who care about software, workflow means business.
Published at DZone with permission of Sacha Labourey. See the original article here.