TechTalks With Tom Smith: Tools and Techniques for Scaling DevOps
More integrated toolsets, automation, and data are needed to successfully scale DevOps.
To gather insights on the current and future state of how DevOps is scaling in the enterprise, we asked IT executives from 32 different companies to share their thoughts. We asked them, "What techniques and tools are most effective for scaling DevOps?"
You may also enjoy: TechTalks With Tom Smith: What Devs Need to Know About Kubernetes
Here's what they told us:
- In light of our target of large, highly regulated enterprises, our philosophy is to build on whatever works. Gain a system-level view, leverage the areas that are already working, and use those as the basis of your transformation. It's not rip and replace; it's evolution rather than revolution. Have a toolset that supports the organization across multiple iterations of change in hybrid environments, supporting enterprise scale and governance across the entire transformation journey. Don't chase the shiny new silver bullet. Focus on the kaizen concept of continuous improvement in small iterations. Understand how you are delivering value to the customer, then prioritize the value stream to optimize based on business priority. With a system-level view, everyone shares the same single source of truth, which is critical given that silos still exist. Have executive management on board to drive this kind of system-level change without being too bureaucratic. Think of the enterprise as a holistic system delivering customer value.
- Once you start gaining maturity, you see more homegrown tools and standardization across tools. A lot of dev-led initiatives are used by one team but not another; over time there is more standardization on homegrown tools and much more integration. When an alert triggers, a webhook or Lambda pulls from another source to produce meaningful context for the developer. You see more advanced tooling and deeper integration to create seamless workflows. I look at metrics for how many touches to the infrastructure are manual; this should trend down over time. People can hit a wall, so you need to watch metrics to improve and get more out of the engineers.
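The alert-enrichment pattern described above can be sketched as a small handler. This is a minimal illustration, not any vendor's webhook format: the alert shape, deploy-history records, and runbook URL are all hypothetical.

```python
def enrich_alert(alert: dict, deploy_history: list[dict]) -> dict:
    """Attach recent-deploy context to a raw monitoring alert.

    The `alert` and `deploy_history` shapes are illustrative placeholders,
    standing in for whatever your monitoring and deploy tooling emit.
    """
    service = alert["service"]
    # Keep only the last three deploys of the affected service,
    # so the developer paged by this alert sees likely culprits.
    recent = [d for d in deploy_history if d["service"] == service][-3:]
    return {
        **alert,
        "recent_deploys": recent,
        # Hypothetical runbook location for illustration only.
        "runbook": f"https://wiki.example.com/runbooks/{service}",
    }
```

In practice this function would run inside the webhook receiver or Lambda, with `deploy_history` fetched from the CI/CD system rather than passed in directly.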
- The short answer is to treat everything as code: servers, storage, network, applications, anything you can think of. In the end, this is the only way to allow us to actually create full ownership for services and to work with those services in an iterative manner. An important aspect for us is to actually source control this “code”. Ultimately, we created a full audit trail by forcing ourselves to use touchless deployments. Regarding tooling, our advice is to choose what best fits your requirements. For instance, we have chosen Ansible as our configuration management for its ability to provide a secure state solution. From our perspective, it is a huge benefit, especially when transitioning existing services to DevOps, rather than having to replace them with newly developed services.
- Tool selection should come after management has decided to embark down the DevOps path. There are many DevOps tools. We use containers, Kubernetes, Git, Google Cloud, Jenkins, etc, just to name a few. It’s knowing how to use the tools together that really matters. For my team, coming up with a common tools strategy has been key as it promotes the establishment of collaborative objectives.
- This is where value stream management comes in as you scale agile and digital transformation initiatives. Look at how to connect tools together for a systems-thinking approach: how the systems interrelate and work together over time, and how to share learning and metrics across teams. Find tools that tie the toolchains together by taking data from different tools and applying metrics to reduce waste, improve flow, and deliver value. Any tool that helps with that is becoming paramount now.
- The organization’s CI/CD tool is very effective at facilitating the scaling of DevOps. It is a tool DevOps practitioners can use to fully automate the processes and procedures that transform lines of code into running workloads in production.
- There are many popular tools, like Jenkins, Ansible, Docker, and Puppet, that make the DevOps platform much easier to integrate. Finding the right tool for the right task is always important, as there will always be a need to scale at a later time, and the tool should also be suitable for different workflows.
- The keys we’ve found to be most effective to ensure successful collaboration and speed have been around the following areas: continuous integration for automation and testing with tools such as Jenkins, configuration management tools such as Chef and Terraform, deployment techniques through such methods as containerization, and collaboration and tracking/reporting tools such as ChatOps and Jira.
- What I see across the board is automation. You need to automate things that are repetitive and manual; you save time and increase the quality of the outcome, and automation works regardless of the tech stack. Data transparency is key for monitoring feedback loops: data should be easy to access and see along the operations and delivery pipeline. Our UFO visualizes and radiates data to the entire engineering team to indicate if there are slow-downs or blockers. Different teams use it to visualize performance versus SLAs. It's also important to have an open mind for new technology, but don’t think you have to apply the latest and greatest technology to solve a problem. Invest the time necessary to stay up to date. Go to meetups to learn, share, and get to know new people.
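The "performance versus SLA" signal the quote describes radiating to the team reduces to a simple computation. A minimal sketch, with the threshold, target, and data shapes chosen purely for illustration:

```python
def sla_compliance(latencies_ms: list[float], sla_ms: float) -> float:
    """Fraction of requests that met the latency SLA."""
    if not latencies_ms:
        return 1.0  # no traffic counts as compliant
    met = sum(1 for latency in latencies_ms if latency <= sla_ms)
    return met / len(latencies_ms)

def radiator_status(latencies_ms: list[float], sla_ms: float,
                    target: float = 0.99) -> str:
    """'green' when the compliance target is met, 'red' otherwise.

    The 99% target is an illustrative default, not a recommendation.
    """
    return "green" if sla_compliance(latencies_ms, sla_ms) >= target else "red"
```

A wall display (like the UFO mentioned above) would poll this status per team and change color accordingly.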
- With our DevOps platform, we wanted our developers focused on writing code and rapidly releasing application changes, and not to be concerned about obtaining other IT services like servers, storage, networks, and more. To meet this goal, we built our platform around four primary modules. 1) Self-Service Catalog - Provides a shopping cart experience. 2) Developer Tools - Provides a full developer experience which includes planning tools (backlogs, bug tracking), code repositories, application stacks, and release approvals. 3) Automation - Provides a runtime for microservice-based applications running in containers managed by Kubernetes (OpenShift). To enable these apps, we use a series of automation and platform tools like Ansible, NetApp Trident, and Red Hat CloudForms. 4) Infrastructure-as-a-Service – Provides all the applications, tools, and platform software and hardware, which we provide through IaaS for both public and private cloud. This provides automated hardware on which to deploy our cloud-native applications. These four areas, combined and automated to provide end-to-end DevOps workflows, allow our DevOps teams to act fast and scale when needed.
- Automate, and have a practice with four distinguishing elements: 1) expect things to break, and plan an incident response program that is fast, responsive, and scales; 2) have a structured way of continuous learning to prevent repeat incidents; 3) reallocate resources under a predefined contract that specifies what will be allocated, what to do, and when; 4) minimize the risk of breaking by identifying issues upfront. This is what makes SRE unique.
- Automation: DevOps isn’t only about tools but doing it right does involve using tools to automate as many tasks as possible. Automation frees engineers from repetitive tasks while also making it easier to ensure organization-wide consistency. Here are some concrete steps to take to ensure your engineering team is leveraging automation: 1) Make it easy for team members to try out new automation tools; 2) Create a process for evaluating and purchasing automation tools; 3) Make incremental steps towards increasing automation. What one thing could be automated this month? Automating repetitive processes frees up your engineers to work on the type of projects they do best and enjoy most: Building creative new software solutions to solve your business problems and respond to opportunities.
- 1) Decoupling deployment from release makes many things possible with less drama and fewer "all hands" calls. 2) Build in feedback loops. How do I make sense of the impact? You have to have automated testing. How do I know how it's running in production, and how are the users reacting? Know how it's going without a fishing expedition. 3) Measure everything: visible ops, dashboards. Hygieia, the Capital One dashboard, shows how each project is moving through the pipeline with the goal of identifying bottlenecks. Feature telemetry/usage dashboards let you see whether people are using a feature and whether it is performing. Measure things at the feature level.
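Decoupling deployment from release is usually done with feature flags: the code ships dark, and exposure is controlled at runtime. A minimal sketch of a deterministic percentage rollout, not any specific vendor's SDK; the hashing scheme and bucket math are illustrative:

```python
import hashlib

def is_enabled(feature: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministic percentage rollout.

    The same user always gets the same answer for a given feature,
    so exposure grows smoothly as rollout_pct is raised from 0 to 100
    via configuration, without redeploying code.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket in [0, 100)
    return bucket < rollout_pct
```

Raising `rollout_pct` in small steps while watching the feature-level telemetry described above is what makes releases low-drama: a bad feature is turned off, not rolled back.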
- One of the most important techniques is to just get started and have a sense of ownership of what you’re building. If you’re used to asking five different teams to help get something accomplished or to ‘run’ things for you, be sure that you find out more about how those things work, how they integrate, and most importantly, how you can get more access and data to understand how they’re performing.
- It’s building a streamlined process from development to production: a transparent and automated pipeline from source control to production. We’ve invested heavily in Docker and K8s and moved to a microservices architecture. Microservices and DevOps go hand-in-hand, letting us deploy and add features easily.
- What has worked for us is splitting teams into small, autonomous, full-ownership teams consisting of 4-8 people, including software engineers and SREs. Successful implementation means giving full-ownership to teams so that they have the autonomy to manage and operate their own services. Teams do their own deploys, manage their own runbooks, and define their own on-call rotations and processes. We empower our teams to ship fast and innovate, without having to wait on a centralized SRE or DevOps team to implement. Additionally, Docker is essential for our teams because it helps developers do operations work. For example, Docker commands/scripts are universal, so engineers don’t have to spend time figuring out which command works for which coding language. Of course, we rely on our own tools to do DevOps as well. We actually use our software to proactively monitor systems with alerts. SREs help their teams push and stress their services with load testing by tracking with dashboards. During incidents, we use our software to share context and form and test hypotheses about causation.
- We're rewriting the pipeline all the time. As you go through more of the pipeline, the dev team is willing to base more on the external parts of the stack: adopt more Platform-as-a-Service (PaaS) and become more willing to use the additional functionality of public clouds. As you feel the benefits of DevOps, you realize you get more benefits as you buy into the underlying platform. DevOps is becoming part of the platform, and people are accepting that because of the benefits of doing so.
- We see a lot of different pipelines. We see a lot of pieces and I’m not sure they matter. A lot of good options. We will continue to see innovation and learning in those spaces. Always look at how you are building stuff and experiment with other tools to see if it improves the process. Your pipeline is as much of what you are building as the product itself.
- A full software delivery pipeline starts with requirements planning and portfolio management and continues all the way through product release, what is often referred to as “idea to cash.” Successfully scaling DevOps requires visibility across that entire spectrum. That ‘spectrum’ needs to include multiple tools to provide capabilities like testing, build automation, and deployment; the ‘visibility’ required needs to take into account the information and progress within each tool and across the many manual steps that surround those automation tools. That management layer is manifested in Value Stream Management. A Value Stream Management solution integrates with your existing toolchains, providing management, orchestration, and predictive analytics that ensure you can scale your development while maximizing the value returned to the business.
- Get together and get organized! Build best practices and a hub to collect them for the organization, where ideas don’t get lost and problems that get solved don’t have to be re-solved again by someone else. Be honest about your needs and your users — if your development team thrives in flexibility and nimble movement, don’t expect them to comply with a monolithic DevOps orchestration system that requires them to bend to its will, even if doing so would probably make the whole thing seem simpler to operate. And be honest about the DevOps expertise of your organization, as expecting everyone to suddenly be an expert in order to use new tools effectively isn’t going to set you up for success. You might be better off with more opinionated pipelines for some teams and less structured workflows for others, and the same complexity might not be ideal for every team in the organization. Look for a solution that adapts to your organization, not one that forces your organization to adapt to a tool, as the latter always takes longer and more energy.
- No particular tools or stacks. Each enterprise makes its investment. We work more around the stacks they’ve already invested in.
- Operating a culture that attracts and retains talented people, builds careers, and produces the innovation and opportunities that serve individuals, customers, and the business alike is the balance needed to both scale and maintain DevOps practices.
- The technique I found to be most impactful is breaking down large requirements set into chunks that can be delivered during consecutive sprints. Combined with a solid toolchain, this leads to smaller changesets that are deployed over time instead of a large change at once, which reduces the risk of failure. New features or components can be tested “silently” in production before exposing them to all users. For example, a new API may be tested by slowly sending more traffic in stages without showing the results to a user until there’s enough confidence in its correctness and scalability.
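The "silent" production testing described above is commonly implemented by shadowing: the user is served from the old path while a growing fraction of traffic is mirrored to the new one and results are compared. A minimal sketch under stated assumptions; the callables, the mismatch logger, and the sampling approach are all illustrative:

```python
import random

def shadow_call(old_api, new_api, request, shadow_pct, log_mismatch):
    """Serve the user from the old API; silently mirror a fraction of
    traffic (shadow_pct in [0.0, 1.0]) to the new API and record any
    disagreement, never exposing the new result to the user."""
    result = old_api(request)
    if random.random() < shadow_pct:
        try:
            candidate = new_api(request)
            if candidate != result:
                log_mismatch(request, result, candidate)
        except Exception as exc:
            # New-API failures are also invisible to the user.
            log_mismatch(request, result, exc)
    return result
```

Raising `shadow_pct` in stages, as the quote describes, builds confidence in the new API's correctness and scalability before any user sees its output.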
Here’s who shared their insights:
- Nancy Wang, CEO, Advancing Women In Product
- Mick Morrissey, Director of Engineering, Asavie
- Lyon Wong, Founder & COO, Blameless
- Patrick Reister, Senior Build Engineer and Ivan Szatmári, Head of Release Management, Bohemia Interactive Simulations (BISim)
- Rob Zuber, CTO, CircleCI
- Brian Dawson, Director of Product Marketing and Brian Nash, Director of Product Marketing, CloudBees
- Eric Robertson, V.P. Product Marketing Management & Strategy Execution, CollabNet VersionOne
- Jeff Williams, Co-founder & CTO, Contrast Security
- Mike Rose, VP Engineering, Cybera
- OJ Ngo, CTO, DH2i
- Chris DeRamus, Co-founder & CTO, DivvyCloud
- Tobi Knaup, Co-Founder & CTO, D2iQ
- Andi Grabner, DevOps Activist, Dynatrace
- Antony Edwards, COO, Eggplant
- Kris Lahiri, Co-Founder & VP Operations and Chief Security Officer, Egnyte
- Chris Michael, DevOps Engineer, FileCloud
- Tamas Cser, Founder & CEO, Functionize
- Justin Stone, Senior Director of Secure DevOps Platforms, Liberty Mutual
- Mark Levy, Director of Strategy, Software Delivery, Micro Focus
- Phaedra Divras, Chief Operating Officer, Mission
- Michael Morris, Senior Director of IT Cloud & DevOps Platforms, NetApp
- Tori Wieldt, Developer Advocate & Sr. Solutions Marketing Manager, New Relic
- Bob Davis, CMO, Plutora
- Veejay Jadhaw, CTO, Provenir
- Vishnu Nallani Chekravarthula, V.P. Head of Innovation, Qentelli
- Anurag Goel, Founder & CEO, Render
- Davy Hua, Director of DevOps, Security & ITOps, ShiftLeft
- Dave Karow, CD Evangelist, Split
- Ben Newton, Director of Product Marketing, Sumo Logic
- Adityashankar Kini, V.P. Engineering, Sysdig
- Neil Barton, CTO, WhereScape
- Dan Beauregard, DevOps Evangelist, Xebia Labs
Opinions expressed by DZone contributors are their own.