Kubecon.io delivered unparalleled insight into the Kubernetes ecosystem. The presenters bridged the divide between Dev and Ops, describing container application platforms that streamline continuous integration, accelerate continuous delivery, optimize machine resources, and operate at scale. Each presenter acknowledged current Kubernetes challenges while describing best practices and project roadmaps that will eliminate barriers. One key focus area on Day One was continuous container integration and delivery.
Development agility relies on continuous integration and continuous delivery (CI/CD). Performing continuous integration across multiple projects at high scale requires spawning and tracking the status of scores or even hundreds of containers. Evan Brown, Cloud Solution Architect at Google (@evandbrown), described the container discovery, operations, efficiency, and hygiene best practices he employed while helping create the build system for the Go project.
Evan mentioned how the build system performs container discovery using the Go google.defaultclient and cluster.masterauth libraries, which help find specific container nodes and establish the OAuth handshake. Working code using these libraries can be viewed in the coordinator/kube.go source file. From an operations perspective, kubectl proxy provides an easy way to interact with the Kubernetes REST API from your favorite HTTP client or browser.
When spinning up multiple containers and orchestrating workflows across them, understanding container status (e.g., Pending, Running, Failed) is extremely important. Evan described how to watch pod status using a long poll built on the container object and the context and ctxhttp packages. Go's context carries deadlines, cancellation signals, and request-scoped values across API and process boundaries. Based on the streamed status, the Go build system will either delete pods and move on, or open channels to stream status information across builder components.
Clayton Coleman, Architect for Red Hat Atomic and OpenShift, connected Kubernetes clusters and containers with application development on a cloud platform. OpenShift has been entirely rebuilt on a Kubernetes foundation, and the OpenShift vision is to increase development team velocity by enabling best-practice application development workflows. Clayton described three complementary development pipelines: source code builds, release branching, and library consumption.
Red Hat OpenShift goes beyond providing generic Kubernetes container clusters to integrate developer workflow processes and tools. For example, team blogs describe how to integrate Jenkins or perform blue-green and A/B deployments.
A third presentation on Day One at Kubecon.io echoed the Platform as a Service shift toward Kubernetes and a simplified DevOps workflow. Matt Butcher, Deis Core Contributor at Engine Yard (@technosophos), shared challenges and lessons learned while developing Kubernetes-based solutions. Kubernetes manifests describe the complete solution composed of containers. He noted that managing manifests is difficult, and that few real-world manifest examples exist today outside the Kubernetes codebase. Lacking manifest examples, curation best practices, and a standard manifest repository, newcomers to Kubernetes face a steep learning curve when building real-world solutions with containers.
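For readers new to the term, a Kubernetes manifest is a small YAML (or JSON) document describing a desired resource. A minimal pod manifest looks like the sketch below; the name, label, and image are illustrative, not drawn from any of the talks.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-builder   # illustrative name
  labels:
    role: builder
spec:
  containers:
    - name: builder
      image: golang:1.5        # image chosen for illustration
      command: ["go", "version"]
```

A real-world solution typically needs many such files (controllers, services, secrets), which is exactly the management burden Matt described.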
Matt described how the Deis team augments the Kubernetes core with Helm. Helm is a package manager that provides the best way to find, build, and share software built for Kubernetes. What Docker Hub did for container sharing, Helm will provide for Kubernetes application sharing. The Helm project will provide a GitHub repository to store and collaborate on application definitions. DevOps teams will fetch information about available packages from this archive.
Helm introduces the concept of a chart. Helm charts are pre-packaged release definitions that can be easily shared across DevOps team members. A chart contains one or more Kubernetes manifest files and descriptive metadata. Charts in the archive will be usable as-is, but will also serve as a basis for customization.
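Structurally, a chart at the time of the talk was simply a directory bundling the metadata file with the manifests it delivers. A sketch of the layout (the chart and file names are illustrative):

```
example-chart/
├── Chart.yaml        # name, version, description, maintainers
├── README.md
└── manifests/        # one or more Kubernetes manifest files
    ├── example-rc.yaml
    └── example-service.yaml
```

Because a chart is just files in a directory, it can live in the shared GitHub archive as-is and be forked or edited when a team needs to customize it.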
By raising the focus above workloads and infrastructure containers, Deis, Red Hat, and Google team members are delivering DevOps pipeline constructs that will rebuild team collaboration practices and accelerate software delivery.