Kubernetes, the Easy Way

Learn about the variety of tools that make it easy to deploy applications to Kubernetes.

By David Dooling · Oct. 29, 2018 · Opinion

Kubernetes has come a long way since its release a few short years ago, and so has the ecosystem of tools surrounding it. What began as a single command-line tool, kubectl, for interacting with the Kubernetes API has grown into a rich set of tools for developing, deploying, and reporting on applications in Kubernetes.

We were early adopters of Kubernetes at Atomist, having used it to host our services for over two years. We love Kubernetes’ simplicity and reliability and appreciate its rapidly growing feature set. We’ve also watched with keen interest the growing ecosystem of tools around Kubernetes: Helm, ksonnet, Jenkins X, Skaffold, and Draft, all aimed at smoothing the development and deployment workflow. While these tools are great, as we began running more services on more Kubernetes clusters across a variety of environments, we wanted something that better fit our workflow: a solution that was simple yet flexible, and that allowed us, as developers, to use our tools and expertise to deliver our software with a minimum of fuss and ceremony. Naturally, we turned to our Software Delivery Machine (SDM) for a better solution.

Using our knowledge of Kubernetes, we knew we could provide a seamless on-ramp to Kubernetes for new and existing services alike. Using the SDM, we knew we could provide a powerfully customizable experience for services that don’t quite fit the generic build-test-deploy mold. At Atomist, we think how you deliver software is as important as how you develop software. After all, software that just sits on your laptop isn’t helping anyone. We created the idea of an SDM to encapsulate this belief: delivery as development. Delivering software is not distinct from developing software, so we should be able to use our entire developer toolkit to engineer our delivery rather than be limited to Bash and YAML. Plus, an SDM integrates with Atomist’s other features: actionable development lifecycle notifications and messages that provide a rich ChatOps interface.
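To make “delivery as development” concrete, here is a minimal, self-contained sketch of a delivery decision written as ordinary TypeScript. The types and names are hypothetical stand-ins, not the actual Atomist SDM API; the point is simply that the logic can be reviewed, type-checked, and unit-tested like any other code.

```typescript
// Hypothetical types for illustration; this is not the actual Atomist SDM API.
import * as assert from "assert";

interface DeployTarget {
  cluster: string;
  namespace: string;
}

// A delivery decision expressed as ordinary TypeScript: reviewable and type-checked.
export function targetFor(branch: string): DeployTarget | undefined {
  if (branch === "master") {
    // the default branch is deployed to the testing namespace first
    return { cluster: "demo-cluster", namespace: "testing" };
  }
  return undefined; // feature branches build and test, but do not deploy
}

// ...and unit-tested like any other code, instead of debugged inside a CI pipeline.
assert.deepStrictEqual(targetFor("master"), { cluster: "demo-cluster", namespace: "testing" });
assert.strictEqual(targetFor("feature/login"), undefined);
```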

Let’s take a look at how you can deploy applications to Kubernetes using Atomist.

Deploying to Kubernetes

At Atomist, we think of delivery more holistically than adding some Bash scripts to your CI pipeline. In the beginning, there’s an idea for a new service and a need to create a new project. The SDM fulfills this need via generators, which, eschewing templates and copy-and-paste, transform any existing real, compilable project into a new, properly restructured and refactored project for your service.
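As a rough illustration of what such a generator transform does, here is a self-contained sketch, assuming a Node.js runtime. The function and directory names are made up for this example and are not the Atomist generator API.

```typescript
import { promises as fs } from "fs";
import * as path from "path";

// Illustrative sketch only: copy a real, working seed project and rewrite its
// identifiers for the new service. Not the Atomist generator API; it also
// ignores binary files and skips .git for brevity.
async function generate(seedDir: string, targetDir: string, seedName: string, newName: string): Promise<void> {
  await fs.mkdir(targetDir, { recursive: true });
  for (const entry of await fs.readdir(seedDir, { withFileTypes: true })) {
    if (entry.name === ".git") {
      continue; // the new project gets its own history
    }
    const src = path.join(seedDir, entry.name);
    const dest = path.join(targetDir, entry.name.split(seedName).join(newName));
    if (entry.isDirectory()) {
      await generate(src, dest, seedName, newName); // recurse into subdirectories
    } else {
      const content = await fs.readFile(src, "utf8");
      await fs.writeFile(dest, content.split(seedName).join(newName)); // rename identifiers
    }
  }
}

// e.g. generate("spring-seed", "payment-service", "spring-seed", "payment-service");
```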

Once the project is created, the SDM inspects it and, since Atomist understands code, builds and tests it and can offer to add a technology-appropriate Dockerfile.

The SDM checks out the project, checks and builds it, and offers to add a Dockerfile.
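The detection step boils down to looking at the project’s build files and proposing a Dockerfile that matches. The sketch below is illustrative only; the detection rules and Dockerfile contents are examples, not what Atomist actually generates.

```typescript
import { existsSync } from "fs";
import * as path from "path";

// Illustrative only: guess the build technology from well-known files and
// return a matching Dockerfile, or nothing if the technology is unknown.
function dockerfileFor(projectDir: string): string | undefined {
  if (existsSync(path.join(projectDir, "pom.xml"))) {
    // Maven project, e.g. Spring Boot
    return [
      "FROM openjdk:8-jre-alpine",
      "WORKDIR /app",
      "COPY target/*.jar app.jar",
      'ENTRYPOINT ["java", "-jar", "app.jar"]',
    ].join("\n");
  }
  if (existsSync(path.join(projectDir, "package.json"))) {
    // Node.js project
    return [
      "FROM node:10-alpine",
      "WORKDIR /app",
      "COPY . .",
      "RUN npm ci --production",
      'CMD ["node", "index.js"]',
    ].join("\n");
  }
  return undefined; // unknown technology: do not offer a Dockerfile
}
```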

When you click the “Add Dockerfile” button, the SDM checks out the project, creates a branch, makes the change, commits and pushes it, and then opens a pull request (PR) for you to review. Atomist development lifecycle messages keep you up to date on the SDM’s activity:

Dockerfile branch build and PR. Note the additional Docker build goal.

Notice how the list of goals has changed from the previous commit to this one. The SDM has recognized that this branch has a Dockerfile, and it knows how our team builds and pushes Docker images, so it does that for us without requiring any change to a build script or CI configuration. After the build passes, we have the option to merge the PR right in chat.
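In spirit, the rule the SDM applied looks something like the sketch below. The goal names and the Push shape are hypothetical stand-ins chosen for this example, not the Atomist push-rule API.

```typescript
// Hypothetical stand-ins for an SDM push rule; names are illustrative only.
interface Push {
  branch: string;
  files: string[];
}

const baseGoals = ["autofix", "build", "test"];

// When a push contains a Dockerfile, schedule a Docker build and push as well.
// No build script or CI configuration in the project itself needs to change.
function goalsFor(push: Push): string[] {
  return push.files.includes("Dockerfile")
    ? [...baseGoals, "docker build", "docker push"]
    : baseGoals;
}

console.log(goalsFor({ branch: "add-dockerfile", files: ["Dockerfile", "pom.xml"] }));
// -> [ 'autofix', 'build', 'test', 'docker build', 'docker push' ]
```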

After we merge the PR, the SDM builds and deploys the new commit in master.

When we click the “Merge” button, Atomist merges the commit into master; the SDM sees the new commit and responds with a further expanded set of goals. Now that we have a Dockerfile on the default branch, the SDM, which knows how to deploy our services to Kubernetes, deploys the new Docker container to the testing namespace of our Kubernetes cluster.

At this point we could have the SDM run integration tests in our testing environment or, as we do here, have it provide a button to allow a qualified user to approve the testing deployment, effectively promoting it to production.
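Under the hood, a deployment like this amounts to creating or updating a Deployment in the target namespace. Below is a hedged sketch of that step, assuming the @kubernetes/client-node package with its older positional method signatures; the service name, image, and namespaces are placeholders, and the real SDM also takes care of the services, ingresses, and secrets mentioned later.

```typescript
import * as k8s from "@kubernetes/client-node";

// Sketch only: create or update a Deployment for the service in the given
// namespace ("testing" first, then "production" once a qualified user approves).
// Assumes @kubernetes/client-node with pre-1.x positional signatures.
async function deploy(image: string, namespace: "testing" | "production"): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // in-cluster config or local kubeconfig
  const apps = kc.makeApiClient(k8s.AppsV1Api);

  const deployment: k8s.V1Deployment = {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: { name: "my-service", namespace, labels: { app: "my-service" } },
    spec: {
      replicas: 1,
      selector: { matchLabels: { app: "my-service" } },
      template: {
        metadata: { labels: { app: "my-service" } },
        spec: { containers: [{ name: "my-service", image }] },
      },
    },
  };

  try {
    await apps.createNamespacedDeployment(namespace, deployment);
  } catch {
    // Deployment already exists: replace it with the new image.
    await apps.replaceNamespacedDeployment("my-service", namespace, deployment);
  }
}

// e.g. deploy("registry.example.com/my-service:1.0.1", "testing");
```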

In a matter of minutes, our new service is running in production.

After we push the button, the SDM promotes the service to production and, for good measure, creates a tag for the released semantic version and promotes that tag to a GitHub release.
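That last step maps onto two calls to the GitHub REST API: create a tag reference for the released commit, then create a release from it. Here is a sketch, assuming a runtime with a global fetch (or a polyfill); the owner, repo, and token handling are placeholders.

```typescript
// Sketch of tagging a released version and promoting the tag to a GitHub release
// via the GitHub REST API. Owner, repo, and token handling are placeholders.
async function releaseVersion(owner: string, repo: string, sha: string, version: string, token: string): Promise<void> {
  const headers = {
    Authorization: `token ${token}`,
    Accept: "application/vnd.github+json",
    "Content-Type": "application/json",
  };
  // Create a lightweight tag reference pointing at the released commit.
  await fetch(`https://api.github.com/repos/${owner}/${repo}/git/refs`, {
    method: "POST",
    headers,
    body: JSON.stringify({ ref: `refs/tags/${version}`, sha }),
  });
  // Promote the tag to a GitHub release.
  await fetch(`https://api.github.com/repos/${owner}/${repo}/releases`, {
    method: "POST",
    headers,
    body: JSON.stringify({ tag_name: version, name: version }),
  });
}
```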

Closing the ChatOps Loop

You may have noticed the Services and Containers attachments near the bottom of the development lifecycle messages:

Reporting of running services in Kubernetes.

The Services attachment tells us what is running where from the service’s perspective: when each instance of our service spins up, it posts an application payload to an Atomist webhook endpoint. The Containers attachment tells us what is running where from Kubernetes’ perspective: Atomist listens for Kubernetes pod changes and uses them to learn which images are running in which pods. Atomist connects these running services and container images with their commits and renders them all together in chat. With a glance, you can see what is running where and assess whether a service is functioning properly.
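The service side of this is just a small POST on startup. The sketch below shows the idea, assuming a runtime with a global fetch; the URL, environment variables, and payload fields are placeholders, not Atomist’s actual webhook schema.

```typescript
// On startup, report "what is running where" to a registration webhook.
// The payload shape and environment variables are placeholders, not Atomist's schema.
async function reportStartup(webhookUrl: string): Promise<void> {
  const payload = {
    service: process.env.SERVICE_NAME ?? "my-service",
    version: process.env.SERVICE_VERSION ?? "0.0.0",
    commit: process.env.GIT_SHA ?? "unknown",
    pod: process.env.HOSTNAME ?? "unknown", // Kubernetes sets HOSTNAME to the pod name
    ts: Date.now(),
  };
  await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```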

At some later point, let’s say we deploy a version that, despite our best efforts, has a bug. Since we used an actual service as the basis for our new project, our new service already has Sentry wired up and capturing errors. Atomist receives the alerts from Sentry, correlates them to a commit and a running instance of the service, and offers to roll back to a previous known-good version of the service, i.e., one with no Sentry alerts.

Closing the ChatOps loop, surfacing alerts and providing means for resolution.
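Choosing the rollback target is a small piece of logic in its own right: among previously deployed versions, pick the newest one with no correlated alerts. Here is a self-contained sketch; the data shape is invented for this example.

```typescript
// Illustrative only: pick the most recent deployed version with no correlated
// error-tracker alerts as the rollback target.
interface DeployedVersion {
  version: string;    // semantic version, e.g. "1.4.2"
  deployedAt: number; // epoch milliseconds
  alertCount: number; // alerts correlated to this version
}

function rollbackTarget(history: DeployedVersion[]): DeployedVersion | undefined {
  return history
    .filter(v => v.alertCount === 0)
    .sort((a, b) => b.deployedAt - a.deployedAt)[0]; // newest known-good version
}

const target = rollbackTarget([
  { version: "1.4.2", deployedAt: 1540800000000, alertCount: 3 }, // the buggy release
  { version: "1.4.1", deployedAt: 1540700000000, alertCount: 0 },
  { version: "1.4.0", deployedAt: 1540600000000, alertCount: 0 },
]);
console.log(target?.version); // -> "1.4.1"
```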

Easy Kubernetes

It’s worth taking note of all the things we didn’t do to get our new service deployed to our Kubernetes cluster:

  • We didn’t have to configure CI.
  • We didn’t have to encrypt secrets.
  • We didn’t have to copy a deployment script from some other project.
  • We didn’t have to provide a deployment spec.
  • We didn’t have to provide a service spec.
  • We didn’t have to create or modify an ingress.
  • We didn’t have to install any command-line utility.
  • We didn’t have to download the credentials for our Kubernetes cluster and yet were able to securely deploy our application to the cluster.
  • We didn’t have to learn any new technology or tool.

This list of things we didn’t have to do highlights one of the most powerful benefits of using an SDM to deliver your software: an SDM is a tool to distribute expertise across your organization. When someone implements a better way to build, test, deploy, or monitor your services, that improvement is immediately available to all the projects in your team.

The SDM collects the best practices from across your organization into a shared, executable best-practices document.

If you’re deploying applications and services to Kubernetes, you need to start using Atomist. Here’s a video walking through creating, deploying, and managing services in Kubernetes: “Deliver Spring Boot Applications to Kubernetes with Ease” from Atomist on Vimeo.

If you’re interested in a tighter development feedback loop, like that provided by Skaffold, see how you can use the Atomist CLI local mode to automatically deploy as you develop.

Published at DZone with permission of David Dooling, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
