Learn how integrating security into DevOps to deliver "DevSecOps" requires changing mindsets, processes and technology.
To understand the current and future state of DevOps, we spoke to 40 IT executives from 37 organizations. We asked them, "What do you consider to be the most important elements of a successful DevOps implementation?" Here’s what they said:
Culture of Collaboration
- It's not just about CD and automation. It's about collaboration between developers and executives in middle and upper management on how employees are organized and how they work with CI/CD and automation: forming teams of eight to ten that own work from concept to delivery, planning with the minimum viable product in mind, and experimenting, measuring, and proving hypotheses. Have moon shots, but chunk the work into iterations.
- Automate everything: Builds, deployments, testing, reporting, everything. Ensure alignment across teams: Dev, IT / Ops, QA, Product. DevOps, by definition, bridges across development all the way through to operations. Go fast but maintain control. It’s not the wild west; teams need to make fast decisions and move quickly, but everything needs to align to the strategy.
- For many companies, when they talk about DevOps, they primarily focus on developers and on the technical side. They tend to forget about the cultural and people part, as well as the operations part. But DevOps transformations really require bringing developers and operations together and connecting them, not only from a technology point of view but from a collaborative standpoint. They need to work together to figure out how to do things like push code to production or fully integrate their tools. They also need to develop a mutual understanding of what each side does and to feel mutually responsible for each other.
- Start with culture first and envision how the team interacts to achieve its goals. Then you need to articulate those goals. No combination of tools will make you successful with DevOps; definitions and understanding are all over the map. DevOps should not be a designated team, or you are just setting up another silo. It's a philosophy or movement rather than a methodology: a way of thinking about building better software faster.
- Have the workflow and ownership well understood before adopting tools. Plan out the process and workflow. Know who owns each part of the infrastructure. Give teams the autonomy to make decisions, and make sure boundaries are well defined.
- It's a matter of people more than technology. The real benefit starts with the ability to make teams more agile, breaking them into smaller units so they are more nimble and productive. Microservices, in the same way, start with people. Adopt the right organizational changes, then support them with the right technologies.
- The most important element of a DevOps implementation is transparency. The DevOps philosophy is founded on the assumption that coordinated collaboration is the best way for businesses to innovate and grow. DevOps programs enable businesses to have realistic expectations and align incentives between the operations, development, and security teams. In an effective DevOps program, everyone in the group understands their role within the project which paves the way to a smooth rollout.
- Start with CI and then move to CD. Legacy has been a big focus, helping those enterprises through a bigger transition; CI is just one step. When you do all tests manually, you realize you need to automate to be able to scale. Parallelize tests so they run concurrently; container and headless technologies let you run high-value tests faster and move further left. Legacy shops still do 90% of their testing manually, so think carefully about what to automate: do you really need to run all of your tests? Change management requires organizational change, you need to determine what good looks like, and you need support from the top. 1) Right culture and mindset: DevOps needs to be followed across the entire organization, from the executive leadership down to everybody in the Dev and IT departments. You can't do DevOps in a silo without Product, Marketing, Engineering, Finance, Sales, and Exec teams being affected by it. 2) Right tools: Leverage DevOps tooling across the pipeline -- source control management (Git, Bitbucket, etc.); database automation (Datical, DBmaestro, etc.); continuous integration (Jenkins, Travis CI, etc.); configuration (Chef, Ansible, Puppet, etc.); deployment (Terraform, AWS CodeDeploy, etc.); containers (Docker, Kubernetes, etc.); release orchestration (CA Continuous Delivery Director, OpenMake); cloud (AWS, Google Cloud, Azure, etc.); AIOps (Splunk, Sumo Logic, etc.); analytics (New Relic, Dynatrace, etc.); monitoring (Nagios, Zenoss, etc.); security (Checkmarx SAST, Signal Sciences, etc.); and collaboration (Jira, Trello, Slack, etc.). 3) Right database: Move to database technologies that scale out (not just up), since you never know when demand will rise and you can't build for some future state.
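The "parallelize tests so they run concurrently" advice above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the test names, sleep-based durations, and worker count are all assumptions for the example.

```python
# Sketch: run a suite of independent test callables in parallel rather than
# sequentially, so total wall time approaches the slowest single test instead
# of the sum of all tests. Each test here just sleeps to simulate work.
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name, duration):
    """Stand-in for one independent, side-effect-free test case."""
    time.sleep(duration)
    return (name, "passed")

tests = [("login", 0.2), ("checkout", 0.3), ("search", 0.1), ("profile", 0.2)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda t: run_test(*t), tests))
elapsed = time.perf_counter() - start

print(results)        # every test reports its status, in submission order
print(elapsed < 0.8)  # parallel run beats the 0.8s sequential total
```

The same pattern applies whether the unit of work is a function, a test file handed to a runner subprocess, or a job fanned out to CI agents.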
- 1) Culture and buy-in are key when launching DevOps, as are tools and processes that ensure your team is able to maintain consistency and reliability from the first lines of code to scaling an app. From a cultural perspective, teams need to embrace new ways of working – think truly rapid iteration and smaller releases rather than ‘big bangs.’ Developers must also shed harmful habits that seem like best practices “because we’ve always done it that way.” One example: holding on to the idea of “never deploy on Friday” that prevents teams from fully embracing a model that allows for safe releases at any time, even the most critical ones. Another example: keeping humans in the loop. Intuitively it seems like a good idea to have that last step before a release is sent into the wild controlled by a person at the keyboard, but if there’s one thing humans are good at, it’s introducing variability and making mistakes! Trust the “robots,” and ensure you can automate from end to end. 2) Provide tools to app dev teams practicing this sort of DevOps. Code and infrastructure config are managed from end to end with the same tool – Git – and “robots” take care of turning straightforward config files into complex container-based infrastructure. To help teams focus on what matters (their app!) and move faster, we create perfect clones of production for every branch nearly instantly.
- It may sound cliché, but culture is the cornerstone of success. Culture drives collaboration and sustainable practices. It’s not easy. Collaboration can be hard work, but you see results over time, and people seem both happier and more productive in their work.
- The primary element of a successful DevOps strategy is breaking down the walls separating individual teams within IT and encouraging active collaboration across teams. At the end of the day, the most desirable outcome is to deliver business value to users. Today, this can be better facilitated with cross-team collaboration between developers, testers, etc. This way, teams will be empowered to continuously develop, test and improve with real-time feedback. Traditional techniques are still largely relevant in various phases of production and deployment. However, the need to automate and upgrade with evolving technological developments has to be realized wherever possible.
- Outcomes, with a feedback loop from internal and external customers so developers get faster feedback on features and bugs. Look for gaps and bottlenecks. The DevOps "Dojo" concept provides a space where people can become immersed in the DevOps culture. Establish an enterprise toolchain, and allow people to use it and flex outside of it as necessary.
- Improving the value stream starts with visualizing the value stream. Begin by creating a visual baseline of the value stream, including process time, wait time, percent complete and accurate, and efficiency KPIs. Capture work in progress: how much is there, and how does it affect what you're doing? Ensure the work you are doing in the delivery pipeline is tied back to planned work and business value; a chart of commits to source control shows what's delivering business value and what isn't. To measure the effectiveness of what you are doing, you need to match each change to planned and unplanned work. The delivery pipeline used to be a black box; provide KPIs and information to non-technical players so they know how well they are performing.
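The baseline described above can be reduced to arithmetic: flow efficiency is active (process) time divided by total elapsed time. The stage names and hour values below are illustrative assumptions, not real measurements.

```python
# Hedged sketch of a value-stream baseline: for each stage, record active
# (process) time and wait time, then compute flow efficiency, the share of
# total elapsed time in which value-adding work was actually happening.
stages = [
    # (stage, process_hours, wait_hours)
    ("code review", 2.0, 14.0),
    ("build & test", 1.0, 0.5),
    ("staging deploy", 0.5, 8.0),
    ("production deploy", 0.5, 24.0),
]

process_time = sum(p for _, p, _ in stages)
wait_time = sum(w for _, _, w in stages)
flow_efficiency = process_time / (process_time + wait_time)

print(f"process: {process_time}h, wait: {wait_time}h")
print(f"flow efficiency: {flow_efficiency:.1%}")
```

Even with made-up numbers the point lands: most lead time is usually wait time, so the biggest wins come from attacking queues, not from making people work faster.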
Mutually Agreed-Upon Metrics
- Release something. Start small and get some quick wins. You need collaboration and visibility into the entire toolchain, even while using multiple different tools; without that, you're creating silos more quickly than when working manually (the most common failure), automating without fixing the process or culture and without respecting the need for visibility across the value stream and CI/CD pipeline. Understand the KPIs and metrics so you know whether you're getting better. Another takeaway from Jenkins World is pipeline visibility and management: no one is watching everything embedded in scripts, or how a DevOps pipeline integrates with a legacy pipeline. If you can't measure, you cannot manage and continuously improve. Look at the delivery pipeline as a system. There are four challenges of scale: 1) visibility of the portfolio; 2) fragmented management; 3) the impedance mismatch between the old world and the new when looking at the delivery pipeline; and 4) continuous improvement, which needs to be measured and valued. Get to the metrics of delivery so you can answer whether or not you are doing better than before. How are you solving problems without giving up, getting stuck, or slowing down? Look at methodologies like value stream management, tools to measure application delivery, and visibility into the process.
- Quantify quality. Use DevOps to create a repeatable process so you can track metrics, measure, and ensure you're meeting KPIs. Put procedures and processes in place so that every test is repeatable and scalable. Take a baseline, deploy over and over again, and treat anything beyond that baseline as your incoming metrics. You need data to qualify any particular change.
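The "take a baseline and deploy over and over" idea above can be sketched as a simple statistical gate. The measurements, metric (response time), and three-sigma tolerance below are illustrative assumptions.

```python
# Minimal sketch of baseline-driven quality gating: record a metric across
# repeated baseline deploys, then flag any later measurement that drifts
# beyond a chosen number of standard deviations from the baseline mean.
from statistics import mean, stdev

baseline_runs = [102, 98, 101, 99, 100]  # e.g. response time (ms) per deploy
base_mean = mean(baseline_runs)
base_dev = stdev(baseline_runs)

def exceeds_baseline(measurement, n_sigmas=3):
    """True when a new measurement drifts beyond n_sigmas of the baseline."""
    return abs(measurement - base_mean) > n_sigmas * base_dev

print(exceeds_baseline(101))  # within normal variation
print(exceeds_baseline(140))  # a regression worth blocking or investigating
```

In a real pipeline this check would run as a post-deploy gate, with the baseline refreshed whenever an intentional performance change lands.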
- Continuous Integration (CI) and Continuous Delivery (CD) are often components of DevOps implementations, but they are not one in the same. DevOps is much more about culture and process than tools. It’s easy to think a DevOps tool or solution is a quick path to the promised land when, in fact, it’s much more about how individuals on a team, and teams themselves, work together with a focus on quickly, efficiently and securely delivering business value. In order to have a successful DevOps adoption, companies must make sure team members represent the appropriate functions and are moving toward the same goal. This starts with effective communication and common incentives. Development and operations teams already have many common goals like security and customer satisfaction, but in the past, they were often motivated by different concerns; development teams focused on software quality and on-time delivery, whereas operations teams centered on system stability and uptime. Aligning these mindsets is extremely important. If these cross-functional teams have more business value-focused ways of measuring their performance like feature lead time, mean time to recovery and deployment frequency, then they’ll have something closer to a common definition of success. When that common definition exists, teams will start to experience success together, helping to remove some of these barriers to adoption.
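The shared metrics named above (deployment frequency, feature lead time, mean time to recovery) are easy to compute once deployments are recorded consistently. The record shape and values below are hypothetical, purely to show the arithmetic.

```python
# Sketch: derive deployment frequency, average lead time, and MTTR from a
# hypothetical deployment log. Each record pairs the commit time with the
# deploy time and any outage minutes the deploy caused.
from datetime import datetime

deployments = [
    # (committed_at, deployed_at, outage_minutes_caused)
    (datetime(2019, 1, 1, 9), datetime(2019, 1, 2, 9), 0),
    (datetime(2019, 1, 3, 9), datetime(2019, 1, 3, 15), 30),
    (datetime(2019, 1, 6, 9), datetime(2019, 1, 7, 9), 0),
    (datetime(2019, 1, 8, 9), datetime(2019, 1, 8, 12), 10),
]

window_days = 7
deploy_frequency = len(deployments) / window_days  # deploys per day
lead_hours = [(dep - com).total_seconds() / 3600 for com, dep, _ in deployments]
avg_lead_hours = sum(lead_hours) / len(lead_hours)
failures = [m for _, _, m in deployments if m > 0]
mttr_minutes = sum(failures) / len(failures)

print(f"deploy frequency: {deploy_frequency:.2f}/day")
print(f"avg lead time: {avg_lead_hours:.2f}h")
print(f"MTTR: {mttr_minutes:.0f} min")
```

Because both Dev and Ops contribute to every one of these numbers, publishing them as a shared dashboard gives the teams the common definition of success the quote describes.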
- 1) Share the same goal: The key to successful DevOps is to change the way different functional units work together. Therefore, teams should have a shared goal instead of having siloed or even contradictory measurements for different units. 2) Make things simple: Automation is one way to simplify the whole process, but we often can do much better by abstracting away complexity when necessary. For example, we ensure applications are decoupled from the baseline operating system configuration, so those can be changed or updated without a corresponding lock-step change in the applications that are deployed on those clusters. Decoupling infrastructure changes from application changes has allowed us to work towards reducing heterogeneity in our data center, which is a key aspect of their overall maintainability. Similarly, we focused on the introduction of Infrastructure-as-a-Service for commonly needed platforms, such as Kafka, Kubernetes, Spark, Solr, and Hadoop. This allows us to create teams with expertise in those particular technologies to deploy them at scale. This way, the application teams simply need to go through a self-service provisioning interface to use them. This approach allows us to benefit from a centralized team for those technologies, while the self-service approach ensures we don’t create artificial barriers to the adoption of those technologies. 3) Measure your success: This element is critical to close the loop from setting up the shared goal. To make this possible, we have built out a comprehensive metrics and monitoring platform. This has enabled us to get much greater insight into the performance and behavior of our services. It also provides developers with an easy way to configure monitoring at the application level, which has given our teams more confidence to deploy software quickly. Beyond system metrics, we also include measurement of business goals, such as time-to-market and the end-user experience.
- It’s important to make sure you include the assumption of the ephemeral nature of cloud computing into your implementation. All resources are temporary and expendable. Implementation has to be a process for keeping those resources available even if pieces of them go away. Instead of going in and installing everything we need on hardware, we need to set up a process and automate it so we can recreate what we’re working on quickly and easily. Another important element is to think about emergency prevention, rather than focusing on an emergency response which is what most organizations tend to do. We’ve thought a lot about choosing the right tools and it’s important for us to do research in that area. Choosing the right tools for server management and log handling is a big deal. Being involved in infrastructure design with regard to how our application scale-up is also a big part of what makes DevOps run smoothly.
- Successful DevOps implementations must start with culture change to build a collaborative culture across Operations and Development that aligns goals and priorities. When operations teams are solely focused on 100% service levels, and Development teams are driving to innovate faster, there can be considerable friction and mistrust. Being able to integrate objectives into a shared purpose is key to capitalizing on the benefits that a DevOps process can offer. For example, a goal that includes a shift in focus from simply “running infrastructure” and prioritizing “uptime,” to ensuring true application performance and an end-user experience that exceeds expectations, combines the competing goals of uptime and innovation. When an IT team is aligned on these two factors, they naturally collaborate, consider performance metrics differently, and find new ways to work together seamlessly. For successful DevOps integration, which includes transitioning to metrics of resiliency, recoverability, and change velocity, Operations and Development teams must agree on a common set of performance measurement and reporting approaches, and fully accept that old up-down, percent-utilization obsessions must yield to measuring the end-to-end experience of end users. Skills to manage and understand in-browser metrics, API performance, and metrics derived from events, alongside learning to trace transactions instead of “watching the stack,” are all key to cooperative DevOps service delivery.
- Look at the surrounding technologies, such as PyTorch and TensorFlow. When people look at containerizing and versioning, they understand they can version code with Git and it works. But with 50 GB of data in deep learning, being able to go back to the data and see what you ran is a big deal, and you can't make multiple copies of 50 GB datasets. Copying between NAS and server means waiting; copy to the GPU and push back to the NAS or SAN, and you lose lineage. Instead, execute a snapshot and co-locate the data with the GPUs: you can snapshot and keep a version, including your Git repository, without copying everything everywhere. Customers with 10 Gb Ethernet need more local data -- no one has infinite bandwidth, deep learning requires more than a terabyte of data, the process takes time to complete, and it gets even slower as more people start using big data. We can optimize in either direction: GPUs local to the data or not, or boxes with only GPUs as edge nodes. Given the circumstances and the size of storage, you can guarantee data at your location; co-locate enough data with the job and there is less contention for network bandwidth. You need flexibility in these solutions -- if you have to rearchitect, you will feel even more pain.
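The lineage idea above -- track which data went with which code, without copying the data -- can be approximated with content fingerprints. This is a generic sketch, not the vendor's snapshot mechanism; the chunk contents and commit IDs are made up.

```python
# Hedged sketch of dataset lineage: instead of copying a multi-GB training
# set per experiment, record a lightweight version entry pairing a content
# fingerprint of the data with the code's git commit, so any run can be
# traced back to both.
import hashlib

def fingerprint(chunks):
    """Hash dataset content incrementally, never holding it all in memory."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()[:12]

def record_version(lineage, data_chunks, git_commit):
    entry = {"data": fingerprint(data_chunks), "commit": git_commit}
    lineage.append(entry)
    return entry

lineage = []
record_version(lineage, [b"images-batch-1", b"images-batch-2"], "a1b2c3d")
record_version(lineage, [b"images-batch-1", b"images-batch-2"], "e4f5a6b")

# Same data fingerprint across both runs -- no copy was needed to prove it.
print(lineage[0]["data"] == lineage[1]["data"])
```

Filesystem or storage-array snapshots serve the same purpose at scale: the version record is cheap, and the bytes stay where the GPUs are.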
- Database change in many organizations is treated differently than the rest of the application stack. Take a step back: database code changes instigated by application developers, performance tuning, and maintenance are handled separately from the end-user, revenue-generating applications they support. The developer takes the first pass at a change, and after that, visibility is lost. The change is sent to the DBA over email or through ticketing in ServiceNow or JIRA, it goes through modification, and when and where it is deployed can change; you cannot reliably guarantee that what the developer wrote is what is being deployed. Treat database code as all other code -- as application code -- while recognizing that the needs of operators differ from the needs of developers. A lot of database tooling hasn't evolved from the old-school DBA; tools have been built around the operations side of the business. Shops are going from glacial change to agile app delivery, and database code changes need to be part of that, with a DBA who has the right intelligence to make it easier for developers. It comes back to core principles: treat DB code as app code.
- Focus on the performance and analytics of applications: how do you guarantee performance and metrics? Simplicity -- don't over-engineer things. Let the application determine what you need; more components mean more opportunities for failure.
- Any software company needs to understand how operations work so its software appeals to IT professionals and can partner with their tools and clouds. Understand where you are today and where you want to be in the near future, and be practical about how to get there. Most important is to get there.
- Turn vertical silos into horizontal ones. Organize people by the process, as shaped by the tools you are using. Identify the processes on which to focus automation and the pipeline. DevOps is not a silver bullet.
- One of the biggest pain points for DevOps and integration is that there are a lot of proprietary black boxes. If you have a specific, dedicated process for integration assets versus software, your pipeline is broken; it slows you down and makes integration impossible. You need a DevOps toolset that accommodates the main process.
- Speed and agility with control. Quality, secure, always available. Educate and get people to understand DevOps helps them get there. Meet requirements of quality expectations -- speed with control.
- The most important element is automation. Automate as much as you can to get the value from the methodology as quickly as you can.
- DevOps is a cultural and process change. We implement aspects of the role we believe monitoring plays in DevOps and the cloud: monitoring as a service wherever you deploy, and performance as a service -- pulling out metrics to create the "unbreakable pipeline." Move toward self-healing with auto-mitigating actions. Depending on where customers are, we help implement the next steps toward self-healing and NoOps or autonomous ops: determine what stage they are at and where we can help get faster automated feedback to the right people.
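The self-healing idea above boils down to pairing health checks with pre-approved mitigations. This is a minimal sketch under assumed thresholds and actions; the metric names, limits, and remediations are all hypothetical.

```python
# Minimal sketch of auto-mitigation: a monitoring check classifies the
# problem, and a lookup table maps each known problem to an automated
# remediation, so known failure modes never wait on a human.
def check_health(metrics):
    """Return the first problem detected, or None when all is well."""
    if metrics["error_rate"] > 0.05:
        return "high_error_rate"
    if metrics["memory_pct"] > 90:
        return "memory_pressure"
    return None

# Map each known problem to an automated mitigation action.
mitigations = {
    "high_error_rate": "roll back to previous release",
    "memory_pressure": "restart instance and scale out",
}

def self_heal(metrics):
    problem = check_health(metrics)
    if problem is None:
        return "healthy"
    return mitigations.get(problem, "page on-call engineer")

print(self_heal({"error_rate": 0.01, "memory_pct": 60}))  # healthy
print(self_heal({"error_rate": 0.12, "memory_pct": 60}))  # roll back
```

The fallback for unrecognized problems still pages a human, which is the usual bridge between "automated feedback" and fully autonomous ops.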
- The fundamentals remain the same; just be more aware and open to dialogue. There used to be more pockets of information; now it's more at a global level. Leadership needs to embrace DevOps from the top down. It will require some rewiring, but you should not shock the team. Create a culture of discovery and failing fast. Failure should not mean punishment; there is no way to discover a process if you're not willing to fail. It's about failing fast.
- The cultural shift to an organization that focuses on learning, experiments, and continuous improvement has to be the most important element for sustained success. A successful DevOps implementation is never finished. It's ongoing, constantly improving the "system" by developing capabilities that support the success of the business.
- Good automation, definitely. If it’s in an existing environment, it’s important for your engineers to really understand all of the processes that occur regularly so they can be automated in a custom fashion. In new environments, it’s often better to look at what best practices exist in the marketplace and if there’s existing (often free) automation you can use to implement those best practices.
- The notion of DevOps is tied very closely to several other ideas; as such, DevOps is something that can be embraced piecemeal as each of those related ideas comes together. Containerization of code, new microservice-based application architectures, modern and automated application delivery processes, changing organizational structures -- they are all part of a major, industry-wide transformation around the way applications are delivered. To be successful using DevOps to transform application delivery, enterprises must recognize that this is a significant change, but not one that needs to be implemented all at once. They must understand the big picture and have a clear vision for how they can bring these different elements of cloud-native together in a rational way. Taking the journey one step at a time, starting small and expanding from points of local success, enterprises can slowly build the expertise they need to embrace DevOps on a large scale. An enterprise might start by containerizing legacy code to streamline dev/test and improve portability. Then they might leverage Kubernetes to automate deployment and ongoing operation of that containerized code. Eventually, they might start to experiment with a simple microservices-based application, creating a small DevOps team for each microservice and giving each team end-to-end responsibility for its microservice through its entire lifecycle. As competencies build and rewards of the approach become clear, the enterprise may begin to take on larger projects, building microservice-based applications, transitioning from functionally siloed organizations toward smaller DevOps teams, and extending agile processes across the complete application lifecycle. The traditional application can also be modernized along the way, by adding new features as microservices using DevOps teams and processes.
The full transformation will take time, but significant gains can be realized at each step along the way, ensuring that each phase of the journey is worthwhile in its own right. Ultimately, enterprises that embrace application delivery transformation, including DevOps and these many interrelated components, will become more agile and more innovative while improving efficiency and the bottom line.
- Moving to DevOps can be a huge undertaking, especially for organizations with a lot of legacy applications and technology. Many companies fail to reap the many benefits because making a big move at once requires substantial changes to culture, processes, and tools. I recommend IT teams make smaller, more strategic moves to ensure successful implementations. Start implementing DevOps with skunkworks teams for each new project, a new application for instance, instead of trying to force the modern development process for older applications.
Here's who shared their insights with us:
- Tim Curless, Senior Technical Architect, AHEAD
- Will Hurley, Vice President of Software Lifecycle Services, Astadia
- Lei Zhang, Head of Developer Experience (DevX), Bloomberg
- Ashok Reddy, Group General Manager, CA Technologies
- Sacha Labourey, CEO, CloudBees
- Logan Daigle, Director DevOps Strategy and Delivery, CollabNet
- Sanjay Challa, Senior Product Marketing Manager, Datical
- Colin Britton, CSO, Devo
- OJ Ngo, CTO, DH2i
- Andreas Grabner, DevOps Activist, Dynatrace
- Anders Wallgren, CTO, Electric Cloud
- Armon Dadgar, founder and co-CTO, HashiCorp
- Tamar Eilam, IBM Fellow, Next Generation Cloud and DevOps, IBM Research
- Mathivanan Venkatachalam, Vice President, ManageEngine
- Jim Scott, V.P., Enterprise Architecture, MapR
- Mark Levy, Director of Strategy, Micro Focus
- Glenn Grant, President - U.S. East, Mission
- Jonathan Lewis, VP of Product Marketing, NS1
- Zeev Avidan, Chief Product Officer, OpenLegacy
- Tyler Duzan, Product Manager, Percona
- Bradbury Hart, Vice President and Chief Evangelist, Perfecto
- Damien Tournoud, Founder and CTO, Platform.sh
- Bob Davis, Chief Marketing Officer and Jeff Keyes, Director of Product Marketing, Plutora
- Brad Micklea, Senior Director and Lead, Developer Business Unit, and Burr Sutter, Director, Developer Experience, Red Hat
- Dave Nielsen, Head of Ecosystem Programs, Redis Labs
- Brad Adelberg, Vice President of Engineering, Sauce Labs
- Adam Casella, Co-founder and Glenn Sullivan, Co-founder, SnapRoute
- Dave Blakey, CEO, Snapt
- Keith Kuchler, Vice President of Engineering, SolarWinds
- Justin Rodenbostel, Vice President of Open Source Applications, SPR
- Jennifer Kotzen, Senior Product Marketing Manager, SUSE
- Oded Moshe, VP of Products, SysAid
- Loris Degioanni, CTO and Founder, Sysdig
- Jeffrey Froman, Director of DevOps and Aaron Jennings, Engineer, Temboo
- Pan Chhum, Infrastructure Engineer, Threat Stack
- John Morello, CTO, Twistlock
- Madhup Mishra, Vice President of Product Marketing, VoltDB
- Joseph Feiman, Chief Strategy Officer, WhiteHat Security
- Andreas Prins, Vice President of Product Development, XebiaLabs
Opinions expressed by DZone contributors are their own.