
How Capital One Automates Automation Tools


Learn about Capital One's four basic principles of automation from their master software engineer's talk at All Day DevOps.


Listening to his talk, it seems like George Parris and his team at Capital One aren’t keeping “banker’s hours.” George is a Master Software Engineer for Retail Bank DevOps at Capital One, a major bank in the United States. At the All Day DevOps conference, he gave a talk entitled "Meta Infrastructure as Code: How Capital One Automates our Automation Tools with an Immutable Jenkins," describing how his team automated the DevOps pipeline for the bank's online account opening project. Of course, there is a lot to learn from their experience.

George started by pointing out that software development has evolved, coming a long way even in just the last few years. Developers now design, build, test, and deploy, and they no longer build out physical infrastructure - they live in the cloud. Waterfall development is rapidly being replaced by Agile, infrastructure as code, and DevOps practices.

Where these technologies and methodologies are implemented, IT Operations teams act more like developers, designing how applications are launched. At the same time, development teams take on more responsibility for uptime, performance, and usability. Operations and development work within the same tribe.

George used the Capital One Online Account Opening project to discuss how they automate their automation tools - now a standard practice within their implementation methodology.


For starters, George discussed how Capital One deploys code (hint: they aren’t building new data centers). They are primarily on AWS, they use configuration management systems to install and run their applications, and they “TEST, TEST, TEST, at all levels.”  Pervasive throughout the system is immutability - that is, once created, the state of an object cannot change. As an example, if you need new server configurations, you create a new server and test it outside of production first.
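
What does "test at all levels" look like in practice? The talk doesn't show code, but a minimal post-deployment smoke test might look something like the Python sketch below, using boto3 and requests; the stack name and health-check URL are made-up placeholders, not Capital One's.

# Hypothetical post-deploy smoke test: check that the stack converged
# and that the application actually answers. Names and URLs are placeholders.
import boto3
import requests

STACK_NAME = "account-opening-web-test"              # placeholder stack name
HEALTH_URL = "https://test.example.com/healthcheck"  # placeholder endpoint

def stack_is_healthy(stack_name):
    """True if the CloudFormation stack reached a stable, successful state."""
    cfn = boto3.client("cloudformation")
    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    return stack["StackStatus"] in ("CREATE_COMPLETE", "UPDATE_COMPLETE")

def app_is_healthy(url):
    """True if the application's health endpoint returns HTTP 200."""
    return requests.get(url, timeout=5).status_code == 200

if __name__ == "__main__":
    assert stack_is_healthy(STACK_NAME), "stack did not converge"
    assert app_is_healthy(HEALTH_URL), "health check failed"
    print("smoke test passed")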

They use the continuous integration/continuous delivery model, so anyone working on the code can contribute to the repositories that, in turn, initiate testing. Deployments move away from the scheduled release pattern. George noted that, because they are a bank, regulations prevent their developers from initiating a production change. Instead, their pipeline uses APIs to automatically create tickets for the product owners, who accept the tickets and thereby trigger the change to production code. While this won't apply to most environments, he brought it up to demonstrate how you can implement continuous delivery within these rules.
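
The ticketing integration isn't spelled out in the talk, so the sketch below is only an illustration of the idea: a pipeline step calls a ticketing API so the product owner, not the developer, approves the production change. The endpoint, token variable, and payload shape are invented for the example.

# Hypothetical pipeline step: open a change ticket for product-owner approval.
# The ticketing endpoint, token, and payload are illustrative, not a real API.
import os
import requests

def open_change_ticket(build_id, artifact_version):
    payload = {
        "summary": "Deploy %s to production" % artifact_version,
        "build_id": build_id,
        "type": "standard_change",
    }
    resp = requests.post(
        "https://tickets.example.com/api/changes",   # placeholder URL
        json=payload,
        headers={"Authorization": "Bearer " + os.environ["TICKET_API_TOKEN"]},
        timeout=10,
    )
    resp.raise_for_status()
    # The production deployment proceeds only after the product owner accepts it.
    return resp.json()["ticket_id"]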

Within all of this is the importance of automation. George outlined their four basic principles of automation and the key aspects of each:

Principle #1 - Infrastructure as Code

They use AWS for hosting, and everything is described in CloudFormation templates (CFTs), which are a way to define your infrastructure using code. AWS also lets you pass values between stacks, so one CFT can reference another's outputs. Because it is all code, every change can be tested first, and they can easily spin up new environments.
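
As a minimal sketch of the idea (assuming boto3 and a template kept in source control; the stack names, export name, and parameter are not from the talk), a deploy step can create a test stack from the template and consume a value exported by another stack:

# Hypothetical infrastructure-as-code step: launch a stack from a
# version-controlled template and read a value exported by another stack.
import boto3

cfn = boto3.client("cloudformation")

# Cross-stack reference: read the VPC ID exported by a shared networking stack.
exports = {e["Name"]: e["Value"] for e in cfn.list_exports()["Exports"]}
vpc_id = exports["shared-network-VpcId"]          # illustrative export name

with open("templates/web-tier.yaml") as f:        # template lives in source control
    template_body = f.read()

cfn.create_stack(
    StackName="account-opening-web-test",         # disposable test environment
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "VpcId", "ParameterValue": vpc_id}],
)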

Principle #2 - Configuration as Code

This is handled with configuration management systems (they use Chef and Ansible). There are no central servers, changes are version controlled, and they use "innersourcing" for changes. For instance, if someone needs a change to a plugin, they can branch, update, and create a pull request.
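
Chef and Ansible do the real work here, but the underlying idea can be shown with a toy Python sketch: the desired state lives in a version-controlled file and is applied idempotently, so running it twice changes nothing. The file path and package list are invented, and this is not how Chef or Ansible work internally.

# Toy illustration of configuration as code: desired state comes from a
# version-controlled file and is applied idempotently. Not Chef/Ansible's
# actual mechanism; package names and file path are placeholders.
import json
import subprocess

def installed_packages():
    """Set of currently installed packages (assumes a Debian/Ubuntu host)."""
    out = subprocess.run(
        ["dpkg-query", "-W", "-f", "${binary:Package}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(out.split())

def apply_desired_state(path="config/packages.json"):
    with open(path) as f:
        desired = set(json.load(f))               # e.g. ["nginx", "openjdk-8-jre"]
    missing = desired - installed_packages()
    if missing:                                   # act only when state differs
        subprocess.run(["apt-get", "install", "-y"] + sorted(missing), check=True)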

Principle #3 - Immutability

Not allowing changes to servers once they are deployed prevents "special snowflakes" and regressions. Any change is made in code, traverses a testing pipeline and code review, and only then is deployed. This avoids what we all have experienced: the server that someone who is no longer around set up, tweaked differently from everything else, and never documented.
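
On AWS, one common way to realize this (a sketch under assumptions, not necessarily Capital One's mechanism) is to bake a new machine image, register it as a new launch template version, and let the Auto Scaling group replace instances rather than patch them in place; the AMI ID and resource names below are placeholders.

# Hypothetical immutable rollout: point the Auto Scaling group at a freshly
# baked AMI and replace instances instead of modifying running servers.
# Assumes the group is configured to use the "$Latest" launch template version.
import boto3

ec2 = boto3.client("ec2")
asg = boto3.client("autoscaling")

NEW_AMI_ID = "ami-0123456789abcdef0"              # output of an image-baking step

# Register a new launch template version that uses the new image.
ec2.create_launch_template_version(
    LaunchTemplateName="account-opening-web",     # placeholder name
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": NEW_AMI_ID},
)

# Roll the group onto the new version, keeping most capacity in service.
asg.start_instance_refresh(
    AutoScalingGroupName="account-opening-web-asg",   # placeholder name
    Preferences={"MinHealthyPercentage": 90},
)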

Principle #4 - Backup and Restore Strategy

A backup is only as good as your restore strategy. You know the rest.
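
Put in code, the point is that taking the snapshot is the easy half; restoring from it, and checking the result, is what proves the strategy works. A minimal EBS sketch with boto3 (the volume ID and availability zone are placeholders):

# Hypothetical backup-and-restore check: the snapshot alone proves nothing
# until a volume has actually been restored from it and verified.
import boto3

ec2 = boto3.client("ec2")

def backup(volume_id):
    snap = ec2.create_snapshot(VolumeId=volume_id, Description="nightly backup")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
    return snap["SnapshotId"]

def test_restore(snapshot_id, az="us-east-1a"):
    """Restore into a throwaway volume to prove the backup is usable."""
    vol = ec2.create_volume(SnapshotId=snapshot_id, AvailabilityZone=az)
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    return vol["VolumeId"]    # attach, verify the data, then clean up

if __name__ == "__main__":
    snapshot_id = backup("vol-0123456789abcdef0")     # placeholder volume ID
    test_restore(snapshot_id)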

George also dives into how they do continuous delivery/continuous integration in his talk, which you can watch online here.

If you missed any of the other 30-minute presentations from All Day DevOps, they are easy to find and available free of charge here. Finally, be sure to register yourself and the rest of your team for the 2017 All Day DevOps conference here. This year's event will offer 96 practitioner-led sessions (no vendor pitches allowed). It's all free and online on October 24th.


Topics:
devops ,aws ,ci/cd ,addo

Published at DZone with permission of Derek Weeks, DZone MVB. See the original article here.

