Shipping to Kubernetes With CloudBees Codeship
Learn why you should ship apps to a Kubernetes cluster from your CI/CD pipeline, and how you can accomplish this with the CloudBees Codeship SaaS offering.
In this post, I wanted to cover the why and some of the how of shipping apps to a Kubernetes cluster from the CloudBees Codeship SaaS offering.
I wrote previously about how developers, or people in general (developers are people), may wish to think of Kubernetes as an app platform to deploy apps on, embracing its power. This isn't for everyone, and there are options if you don't want to think about the details of Kubernetes (but still want to make use of it more like an operating system underneath).
So, let's say you have a Kubernetes cluster and you want to make good use of it. Why CloudBees Codeship?
Well, obviously, the number one reason is to stay within the realms of nautical themes and puns. That is clearly an important factor in choosing a technology.
There are other reasons, however. The CloudBees Codeship Pro platform is a flexible, Docker-native hosted service. This means you can offload your CI/CD pipeline work (build and test) and your Docker image building and publishing to an elastic, on-demand service, freeing up your Kubernetes cluster to cope with the production workload. If you are using Kubernetes as your application platform, then the unit of deployment, the artifact, is a published Docker image, which is a great fit for CloudBees Codeship.
Pathological Workloads and Reliable Builds
I like to describe CI/CD workloads (particularly the building and executing of tests) as "pathological." You are checking out, building, and then testing essentially arbitrary code (if it is a compiled language, compilers can be very CPU- and IO-heavy, and dependency management is IO- and network-heavy even for non-compiled languages). You have caches to manage and tests that leave garbage around: files and processes everywhere. Basically, the result after any build is like the day after a party. It makes sense to containerize or virtualize that stuff away as much as possible. CloudBees Codeship Pro actually uses disposable VMs for each build run, so there is no state shared between build runs.
You get a clean slate each time, which means reliable, predictable builds (with built-in caching so you don't have to pay the full cost of a blank slate on every run).
Working With Docker and Kubernetes
Both Docker and Kubernetes are changing often (in good ways), and anything you can do to make coping with that change "someone else's problem" is great in my book (laziness is a virtue here). This is where CloudBees Codeship can shine.
In a given Kubernetes setup, there can be challenges in accessing the lower layers to build and publish Docker images, depending on permissions. Things become a lot simpler if you deploy pre-built images; in this case, the "pre-built" images come from CloudBees Codeship.
Building Docker images on a Kubernetes cluster is a maturing story. Even with the Kaniko open-source tool from Google, which requires fewer permissions, there are still challenges in doing this in a way that applies widely to all configurations. The multi-tenancy model in Kubernetes is still an evolving story, and often the best bet is to run the building and image preparation with VM-grade isolation, which is exactly what you get with CloudBees Codeship (in a pay-for-what-you-use model).
On top of permissions and security, it may make sense to keep the build/test workload outside of the cluster so there is no contention at busy times. In an ideal world (and certainly where things are heading), the Kubernetes cluster would be hyper-elastic (see AWS Fargate for some future direction here).
However, when things heat up, what would you prioritize: building changes and fixes, or the production workloads? This is a tough question, as CI/CD workloads increasingly are production workloads themselves, but in many cases people would prioritize user-facing production and make the pipelines wait. If you can offload your image building and testing to CloudBees Codeship, then you can have the best of both worlds.
Driving Deployments From CloudBees Codeship
Kubernetes has the concept of "Deployments," which simplify updating a cluster. It can be as simple as building your app, publishing it as a new Docker image with a new, unique tag (Docker images can be tagged), and then telling Kubernetes, "hey, can you apply things to make it look like this":
kubectl apply -f my-deployment.yaml
I am grossly oversimplifying (there will be links later for the real blow-by-blow instructions), but at its heart, changing the state of a cluster to pick up a new version of your app (Docker image) can be something like that, which is very automation-friendly. It works because Kubernetes Deployments are declarative: Kubernetes is responsible for taking the actions needed to update the cluster. Generally, this helps with zero-downtime upgrades, as the changes are rolled out gradually and the new instances don't take load until they are ready.
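To make that concrete, here is a minimal sketch of what a my-deployment.yaml could contain. This is illustrative only (the original post doesn't include one), and the app-name/project-name values are placeholders; bumping the image tag and re-running kubectl apply is what triggers the rolling update:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-name
  template:
    metadata:
      labels:
        app: app-name
    spec:
      containers:
        - name: app-name
          # The unique tag (a build timestamp here) is the part that changes on each deploy
          image: gcr.io/project-name/app-name:20180527103000
          ports:
            - containerPort: 8080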
CloudBees Codeship Pro has a bunch of premade images that act as tools (it is extensible via images), for example, "codeship/google-cloud-deployment" and "codeship/aws-deployment," which come with the needed tools ready to go.
In CloudBees Codeship Pro, all config is stored "as code" in two files:
- codeship-services.yml: defines what is needed to build/deploy (in terms of Docker images, dependencies, and so on).
- codeship-steps.yml: the actual pipeline, specifying what steps to take and which branches certain steps run on.
There are other files, of course; for example, you would have encrypted secrets that give the running pipeline access to your Kubernetes environment (CloudBees Codeship will encrypt them for you), and you put them all alongside your code.
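As a taste of the services side, here is a hedged sketch of a codeship-services.yml along the lines of the CloudBees Codeship Kubernetes demo. The encrypted env file names are illustrative assumptions, and the service names line up with the steps example just below, so check the demo for the exact layout:

app:
  # Builds your application's Docker image from the Dockerfile in the repo
  build:
    image: gcr.io/project-name/app-name
gcr_dockercfg:
  # Helper service that generates registry credentials for pushing to GCR
  image: codeship/gcr-dockercfg-generator
  add_docker: true
  encrypted_env_file: gcr_dockercfg.env.encrypted
google_cloud_deployment:
  # Tooling image with gcloud and kubectl for the deploy step
  image: codeship/google-cloud-deployment
  encrypted_env_file: deployment.env.encrypted
  volumes:
    - ./deploy:/deploy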
So let's take a look at a simple codeship-steps.yml example (adapted from this CloudBees Codeship Kubernetes demo):
- service: app
  type: push
  image_name: gcr.io/project-name/app-name
  image_tag: "{{ .Timestamp }}"
  registry: https://gcr.io
  dockercfg_service: gcr_dockercfg
- service: google_cloud_deployment
  tag: master
  command: /deploy/deploy.sh
This specific example is for Google's Kubernetes service (it is very similar for Amazon and other providers). CloudBees Codeship knows about Docker registries (and how to publish to each of them), so the first step is about publishing: it will happen on every change, and it uses the build timestamp to give the image a unique tag. The second step does the deployment, but only when the change is merged to the master branch. The details of the deployment are encapsulated in the deploy.sh script.
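The contents of deploy.sh depend on your cluster, but a minimal sketch for the Google case might look like the following. The cluster name and zone are placeholders, and the codeship_google authenticate helper and the CI_TIMESTAMP variable are assumptions based on the codeship/google-cloud-deployment image and Codeship Pro docs, so verify them against the current documentation:

#!/bin/bash
set -e

# Authenticate to Google Cloud using the encrypted service account credentials
codeship_google authenticate

# Point kubectl at the target cluster (name and zone are placeholders)
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Roll the Deployment to the image that was just pushed;
# CI_TIMESTAMP should match the {{ .Timestamp }} tag used in the push step
kubectl set image deployment/app-name app-name=gcr.io/project-name/app-name:${CI_TIMESTAMP}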
The example above was adapted from a free mini-book you can download; it is Google Kubernetes Engine-specific but covers things in more detail than here. You can read a lot more technical documentation here on driving Kubernetes and kubectl from CloudBees Codeship, with instructions specific to Azure, AWS, Google, and IBM Bluemix.
You focus on making changes to your app, testing them locally, and then testing them via steps in codeship-steps.yml; once you're happy, a merge to the master branch will trigger a deployment and an update of the cluster.
Steps in CloudBees Codeship can run unit tests, integration tests, and make use of as many containers as you need to test your application. One of my favorite features of using containers like this in CloudBees Codeship Pro is that you can run and debug the same commands locally. Once you download the jet CLI, you can use the steps command:

jet steps
Run this command in the directory where your code and config live, and it runs the same steps that would run on the server. This lets you iterate fast when debugging. You can even perform deployments this way if you have to (obviously, it is better to let CloudBees Codeship do this based on triggers from the SCM, so you have an audit trail of changes), but it can be done if needed.
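As a quick usage example, and assuming the --tag flag behaves as documented for the jet CLI, you can also simulate a build on a particular branch so that tag-limited steps (like the deployment above) run locally:

# Run the full pipeline locally, as defined in codeship-steps.yml
jet steps

# Simulate a build on the master branch, so tag-limited steps run too
jet steps --tag master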
Editor's note: This post was originally published on the Codeship website and has been updated.