Software Delivery: Shaking Loose of a Local Optimum
Let's talk about the true potential of Kubernetes and the automation it makes possible for software delivery.
The path from code to production is crucial because it determines the length of our feedback loop, and therefore our rate of improvement and that all-important lead time.
Our current tools think of delivery as generic; they take control, and we fill in the blanks. They let us build, test, deploy. But these words mean something different for every application, on every team, and within every business. And there’s a lot more that goes on, or can go on, between the writing and the running of code. There's often linting, approving, and coordinating. There’s data to think about, and security policies, business team structure, auditing. There are other steps we don't bother with, because there's no slot for them: updating libraries, fixing the code to meet standards, taking a different set of steps for a documentation-only change.
What else have we not thought of, because it doesn’t fit inside the current set of tools? I want to think about delivery from the top. In doing so, I can gain a greater understanding of my work.
We’ve seen major shifts in Operations technology: containers, log aggregation, observability, programmable infrastructure, and Kubernetes. In my opinion, the killer feature of Kubernetes is its API: it is controllable by automation. It accepts instructions and sends out events.
The potential here is even bigger than we have realized, for operations and for delivery.
Modern options for operations have achieved new levels of abstraction and potential. Yet progress in delivery has only been incremental.
Our tools are very configurable, with plugins and scriptable bits, and always we can fall back to running shell scripts of unlimited length. But are we stuck in a local optimum? Are we thinking big enough?
What We Have
Today's popular software delivery tools are fill-in-the-blank. This is great for getting started, when you don't know what you're doing. What about when your business grows, when your delivery is no longer cookie-cutter? Let's look at two examples.
Travis CI is a favorite of mine, because it's so easy to get going. It will happily build any public GitHub repository, on its infrastructure, for free. If my repository fits a few simple profiles, like a Rails app that deploys to Heroku, poof! All it takes is some very simple YAML in the repository:
```yaml
language: ruby
rvm:
  - 2.2
  - jruby
deploy:
  provider: heroku
  api_key:
    secure: "YOUR ENCRYPTED API KEY"
```
I like that YAML. It is declarative. It contains data. It fills in the blanks that Travis provides. But the real measure of a tool comes when you get off the happy path. You can do just about anything in Travis, just call out to a shell script. Here's a small piece of one of my .travis.yml files:
```yaml
install:
  - pip install -r requirements.txt
  - bundle install
script:
  - bash travis-build.bash
...
deploy:
  - provider: releases
    api_key: "$GITHUB_TOKEN"
    on:
      condition: "$TRAVIS_TAG =~ ^[0-9]+\\.[0-9]+\\.[0-9]+$"
```
It has shell commands. Shell syntax for an if-condition. And that infinity of flexibility, a call to a shell script. That travis-build.bash script happens to contain four functions, eight conditionals, and (if I'm reading it correctly) up to seven program calls.
This is programming, OK? As soon as your YAML calls out to a shell, it is not declarative. It is imperative. Or a mix.
This is programming, but my YAML and Bash do not qualify as software. Bash is a scripting language. It is designed to glue programs together, not to construct programs: it lacks composable modules with encapsulation. I can live without types, but in Bash you don’t even declare function parameters! And it’s full of pitfalls. Use Bash for what it’s good at: calling other programs. As a fallback for flexibility that we lack at a higher level, Bash does not shine.
YAML is not a programming language, although it does support variables and has its share of pitfalls. Until the YAML contains commands that get executed -- then it becomes a program, with even fewer options for error handling or clear naming than in Bash. Please use YAML for data, not for code.
There is a gulf between programming and software. We have standards for software: It has design -- domain-driven design that we've worked through with experts. It has automated tests. It has modularity. It uses libraries and abstractions. I want to use a language designed for all this.
Now skip ahead to Jenkins. It is more widely used than Travis for more extensive delivery flows. It can run on-prem. There are hundreds of plugins available. So many blanks to fill! So many options! And now, there's Pipeline as Code. For one thing, you can write a piece of Groovy to define which repositories and branches get a pipeline. That makes Jenkins more responsive to new repositories and branches. It's a step in the direction of responding to new projects.
The people building Jenkins recognize that delivery needs to be controlled by the people who control the code. Instead of constructing each repository's build in a GUI (hello, TeamCity), we put a Jenkinsfile in each repository. That file is in Groovy. Groovy is a programming language with all the modularity and libraries of the JVM behind it. Except, you don't get all that in your Jenkinsfile. You're limited to what is available when Groovy runs within Jenkins. If you want consistency from one repository to another, you can install custom shared libraries in Jenkins and call them in each Jenkinsfile. Rolling out change is challenging; any of those disparate Jenkinsfiles can break after the shared library changes. It's pretty fragile and still limited. It's progress.
This is a step up as programming, but I still wouldn't call it software. When I write an application, I work in more than a programming language: I work in a language system. A language system includes dependency management, frameworks, linters, build systems, editors, community, and the whole ecosystem of libraries available in the language. Great language systems exist around the JVM, Node, and .NET, for instance. Groovy that you run yourself can participate in the JVM language system, but Groovy running within Jenkins does not have this access. The Jenkinsfile DSL is still fill-in-the-blank.
What We Need
Conceiving of delivery as software invites development team members to sit down with the local experts on our deployment environment, security, and other concerns. It invites us to have a conversation: what does delivery mean for this application, on this team, in this organization? Where do we want consistency, and where variability? Then we can create and encode a model: a model of the domain of software delivery, within the context of our business and our environment. We can brainstorm events, like a push or a new repository, and what we want to always happen. We can divide the parts into team-specific, organization-specific, and community-wide (open source).
Instead of "How do we run the build? How do we run tests?" we can ask, "What needs to happen after a code push?" and think carefully about how we work, how our software works, and how our business works with it. We are the domain experts in our own delivery.
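As a sketch of what answering that question in software might look like, here is a hypothetical model (every name and type below is invented for illustration, not taken from any real framework) of "what needs to happen after a code push," written as an ordinary, testable function rather than fill-in-the-blank YAML:

```typescript
// Hypothetical delivery domain model: these types and names are
// invented for illustration only.
interface Push {
  repo: string;
  branch: string;
  changedFiles: string[];
}

type Goal = "lint" | "build" | "test" | "deploy" | "publishDocs";

// A pure, testable function: one team's answer to
// "what needs to happen after a code push?"
function goalsAfterPush(push: Push): Goal[] {
  // A documentation-only change takes a different path entirely.
  const docsOnly =
    push.changedFiles.length > 0 &&
    push.changedFiles.every((f) => f.startsWith("docs/"));
  if (docsOnly) {
    return ["publishDocs"];
  }
  const goals: Goal[] = ["lint", "build", "test"];
  if (push.branch === "main") {
    goals.push("deploy");
  }
  return goals;
}
```

Because this is plain code, it can be unit-tested, published as a library, and evolved with the same rigor as the application it delivers.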
This is the top reason I want delivery as software from the top down. By thinking about delivery as software, we think rigorously about our work as software developers. And we get better at it.
I want more than delivery-with-some-code. I want delivery as software. Controlled top-down by my team, with a rich ecosystem of libraries, with frameworks and services for the parts that are common in the community. I want delivery that's event-driven, that's testable, that communicates with me when it's time for a decision.
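One shape such an event-driven, communicative delivery system could take (again, all of the event names, handler signatures, and the notify channel here are hypothetical) is plain handlers registered against delivery events, with a channel back to the team when a human decision is needed:

```typescript
// Hypothetical event-driven delivery sketch; nothing here is a real
// framework API, only an illustration of the shape.
type DeliveryEvent =
  | { kind: "push"; repo: string; branch: string }
  | { kind: "newRepo"; repo: string }
  | { kind: "tag"; repo: string; version: string };

type Notify = (message: string) => void;
type Handler = (event: DeliveryEvent, notify: Notify) => string[];

const handlers: Record<DeliveryEvent["kind"], Handler> = {
  // A push triggers the everyday goals.
  push: (_e, _notify) => ["lint", "build", "test"],
  // A brand-new repository gets the team's standard scaffolding.
  newRepo: (_e, _notify) => ["applyTeamDefaults"],
  // A tag means a release: run the pipeline, then pause for a human.
  tag: (e, notify) => {
    if (e.kind === "tag") {
      notify(`Release ${e.version} of ${e.repo} is ready; approve to deploy.`);
    }
    return ["build", "test", "awaitApproval"];
  },
};

function dispatch(event: DeliveryEvent, notify: Notify): string[] {
  return handlers[event.kind](event, notify);
}
```

Each handler is an ordinary function, so the "tell me when it's time for a decision" behavior can be tested by passing in a fake notify channel.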
We've come a long way since the days before automated deployment. Our existing continuous integration/continuous deployment solutions have served us as far as they can. Incrementally better fill-in-the-blank won't take us to the next level of software delivery, and it won't help us reach the next level as software developers. It's time to break out of this local optimum.
To this end, we composed the Software Defined Delivery Manifesto. Say what you want about manifestos; they have a certain flair, a certain impact, a certain dramatic longing for a better world. That is what we are expressing here. Yes, it's a better world only for development teams, and only for development teams in complex organizations. But we matter! Software has a huge impact on the world, and to make that software better, we need to get better at delivering it. Let's push past what we're used to. Accept that our job is not writing code, but running useful software in production. Let's get better at delivery, by modeling it in software.
If you're in, join the movement. Sign the manifesto.
Published at DZone with permission of Jessica Kerr, DZone MVB. See the original article here.