The cultural movement that is DevOps — which, in short, encourages close collaboration among developers, IT operations, and system admins — also encompasses a set of tools, techniques, and practices. As part of DevOps, the CI/CD process incorporates automation into the SDLC, allowing teams to integrate and deliver incremental changes iteratively and at a quicker pace. Together, these human- and technology-oriented elements enable smooth, fast, and quality software releases. This Zone is your go-to source on all things DevOps and CI/CD (end to end!).
Continuous integration and continuous deployment (CI/CD) are two major components of DevOps. Every organization that wants to move away from the traditional way of working has to learn, design, and implement a mature CI/CD pipeline. A mature pipeline is a good start for site reliability engineering, but alone, it's not enough. The site reliability engineering (SRE) methodology brings a new perspective to the software development life cycle (SDLC) by aiming to achieve reliability at scale. Drawing on my experience of more than five years as an SRE, I will touch on some of the key benefits I've seen and why it's important for SREs to be involved in the CI/CD pipeline.

SRE Engineer vs. DevOps Engineer Approach Toward CI/CD

Although DevOps and SRE have much in common, they are two different approaches created for different purposes. SRE emerged after DevOps, when it became apparent that the DevOps way of working could not tackle every issue and satisfy every requirement. That's why we see these different approaches toward the CI/CD pipeline, where the most important activities of the SDLC happen. I had a chance to work as both a DevOps engineer and an SRE engineer, and here are some differences I observed:

| Subject | DevOps Approach | SRE Approach |
|---|---|---|
| CI/CD pipeline | Aims at establishing a CI/CD pipeline where there is no pipeline at all, or the one in place has not been properly implemented | Aims at modifying an existing pipeline and identifying bottlenecks and problems that impact lead time |
| Automation in CI/CD | Tries to automate everything in the CI/CD pipeline | Takes it one step further and automates incident management and production issues |
| Incident management and CI/CD | Has to align with different parties' engineers to apply any changes | Has more freedom to decide and execute operations in order to mitigate issues quickly |
| "Normal" in CI/CD | Having a good, working CI/CD pipeline | There is no normal. Reliability is never taken for granted, and it is assumed that unexpected incidents will occur. This results in constant improvement of the CI/CD pipeline, fewer incidents, and greater team maturity in mitigating issues on time. |

Improvement From the Ground Up

KPIs are the core of decision-making for SRE engineers, and they become a performance dashboard where developers can view the quality of their work across various metrics. This makes developers more conscious of application and service performance before a release reaches production, so they have a chance to identify issues and inefficiencies much earlier than in a development life cycle without SRE. SRE engineers start from the measurements: they look at the existing CI/CD KPIs, if there are any; otherwise, they define the KPIs themselves to indicate the current status of the pipelines. They can then create a roadmap with measurable milestones to improve the pipeline. This approach helps engineers consider various performance factors right away, while they are still busy with the functionality design process. As SRE engineers create KPIs, they need to work closely with developers to understand the system logic, architecture, and component relations. This collaboration creates a team synergy where engineers not only learn from one another but also master various skills and can replace each other whenever needed.
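To make this concrete, here is a minimal sketch of how such baseline pipeline KPIs might be computed. It assumes run records exported from a CI server; the field names and sample data are illustrative, not any particular tool's schema.

```python
from datetime import datetime, timedelta

# Hypothetical pipeline-run records; in practice these would come from your
# CI server's API. Field names and values are illustrative only.
runs = [
    {"committed": datetime(2022, 9, 1, 9, 0),
     "deployed": datetime(2022, 9, 1, 11, 30), "success": True},
    {"committed": datetime(2022, 9, 2, 10, 0),
     "deployed": datetime(2022, 9, 2, 16, 0), "success": False},
    {"committed": datetime(2022, 9, 3, 8, 0),
     "deployed": datetime(2022, 9, 3, 9, 0), "success": True},
]

def average_lead_time(runs):
    """Average commit-to-deploy time: one candidate baseline KPI."""
    deltas = [r["deployed"] - r["committed"] for r in runs]
    return sum(deltas, timedelta()) / len(deltas)

def success_rate(runs):
    """Share of pipeline runs that finished successfully."""
    return sum(r["success"] for r in runs) / len(runs)

print(f"lead time: {average_lead_time(runs)}, success rate: {success_rate(runs):.0%}")
```

Milestones on the improvement roadmap can then be expressed as target values for numbers like these.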
Traditionally, application functionality is the most important part of the design architecture. That's why aspects like high availability and reliability are sometimes not taken into account at the beginning. The SRE approach considers reliability, availability, and resiliency from the design stage. This results in huge savings in development and operations up front, since it is very costly to redesign projects if these issues surface later in production.

SRE Incident Management and CI/CD

When it comes to production incidents, it is crucial to detect issues quickly and restore the system to its normal state. SRE practices came as an enhancement to DevOps practices. One interesting SRE approach is that engineers can deploy new patches during an incident without impacting the other parts of the running system. Downtime is inevitable when there is an incident or a new deployment in progress; SRE engineers constantly try to reduce it, using techniques such as zero-downtime deployment. SRE engineers can decide on a required change or fix and immediately trigger the pipeline to release it from dev to production. They do not follow a bureaucratic approach in which a certain number of parties have to be involved in decisions about the production environment, and there is no hierarchy in place. The SRE approach takes risks, but it creates an autonomous team that can decide and act fast on incidents. However, this doesn't mean that the other parties are never involved or informed properly. Here are some examples of practical situations where communication should happen:

- Suppose a high-priority operation disrupts the production applications and services. In that case, the client should be informed before, during, and after the operation to ensure everything related to live operations, data loss, and so on is under control.
- Suppose there is a rollback operation and an old version of an application or a service must be installed. In that case, the development team should be informed and involved to ensure the rollback causes no problems with other services.
- Suppose any process, deployment step, or even any line of code written by developers has to be changed by SRE engineers in an emergency. In that case, the development team should be informed afterward so they are aware of these changes and the reasons they were made.

SRE Proactively Trains Team Members on CI/CD

When we talk about quality, we should be able to turn quality into numbers; then we can quantify it with metrics. I remember when we created our first-ever production dashboard from application performance data. Most people did not get what we were doing. As we rolled it out, however, it became visible how much memory and disk space every production server was using. After a couple of weeks, non-operational people started to notify us about application quality. They simply looked at the dashboard and spotted warnings. Since it was easy to understand, they grasped problems quickly and even took the initiative to make sure we were aware of them. This is an example of how we created some basic metrics and defined a baseline to check production performance. Even before you have an extensive monitoring dashboard, basic monitoring metrics can give you better control over your platforms.
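To illustrate, here is a minimal sketch of such a baseline resource check. It assumes the third-party psutil package (`pip install psutil`), and the thresholds are illustrative; tune them to your own platform's baseline.

```python
import psutil  # third-party package: pip install psutil

# Illustrative warning thresholds (percent); tune them to your own baseline.
THRESHOLDS = {"cpu": 80.0, "memory": 85.0, "disk": 90.0}

def collect_metrics():
    return {
        "cpu": psutil.cpu_percent(interval=1),      # CPU usage over one second
        "memory": psutil.virtual_memory().percent,  # RAM in use
        "disk": psutil.disk_usage("/").percent,     # root volume in use
    }

for name, value in collect_metrics().items():
    status = "WARN" if value >= THRESHOLDS[name] else "ok"
    print(f"{status:4} {name}: {value:.1f}% (threshold {THRESHOLDS[name]:.0f}%)")
```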
Here are some common metrics you can create if you are still new to this area:

- If you have IaaS, start by monitoring your infrastructure availability — resources like CPU, memory, and hard disk. These are the most common troublemakers, so you can identify issues before they become a disaster.
- If you have web services running, start by monitoring service availability by checking the endpoints. Additionally, monitor HTTP errors to understand what is happening with requests.

The SRE approach is strongly against creating silos. SRE engineers work closely with developers, testers, and anyone else who impacts the software project. This collaboration creates a strong knowledge-sharing loop in which most team members can pick up different tasks and responsibilities. Moreover, this approach creates new SRE engineers from developers and testers interested in understanding application design, implementation, and operations.

Common Misconceptions About SRE

When any methodology is used incorrectly, it might not be as useful or effective as when it's properly implemented. The same goes for SRE, a new version of DevOps. When an organization that is not mature enough in DevOps considers implementing SRE, wrong perceptions can lead to much confusion. After years of doing DevOps and SRE activities, I learned that you need a good understanding of DevOps to become a good SRE. The reason is that DevOps is the predecessor of SRE, and to understand why we do things the SRE way, you need to know the history behind it. Another common misunderstanding happens when companies see the SRE as just another expert in handling incidents and operations. If we refer to Google's definition of SRE, we learn that SRE is a team made up of different experts who can build, run, and maintain application services autonomously. SRE goes one step further than DevOps and takes on all responsibilities. This way, you have full control of your SDLC and one team that communicates, decides, and implements things very quickly. A good understanding of the context of SRE is key to implementing it properly.

Bottom Line

The SRE approach is the latest advancement of the DevOps way of working. It offers best practices to keep all services and applications running reliably. SRE works smoothly with CI/CD pipelines: you can constantly see where you are and what can be improved in your SDLC. This keeps you on track at all times and helps you avoid taking any success for granted. SRE engineers are the frontrunners of these efforts — they bring this mindset to an organization. They define their KPIs based on customer requirements and on what makes the platforms reliable. These requirements can change every day, so SRE engineers help teams adapt to these changes while production reliability stays intact.
Productivity in software development has always been tricky to measure. Unlike in other industries, programming is not easy to parallelize. The development process is unique in that it requires a diverse mix of technical and communication skills, which calls for a set of specialized metrics to keep track of the team's vitals.

The Pulse of Software Development

Not all metrics were created equal. Depending on the context, some are more useful than others. The things we choose to measure can help us find problems or obscure them behind irrelevant data and non-productive goals. When deciding which metrics to track, we should consider a few points:

- People don't act the same when they feel observed. This is called the Hawthorne effect, and it can create undue pressure. It's best to keep metrics non-personal and anonymous where possible.
- The first point also means that metrics should only be used to track a team's progress over time, not to compare teams or individuals.
- Putting too much emphasis on hitting an arbitrary number creates incentives to game the system. Dave Farley and Jez Humble had this to say on the subject: "Measure the lines of code, and developers will write many short lines of code. Measure the number of defects fixed, and testers will log bugs that could be fixed by a quick discussion with a developer." — Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation

So, before choosing the metrics you want to use to follow your team's progress, everyone should know that their only purpose is to track progress and identify problems. They are not meant to commend or chastise individuals. A dashboard with all the chosen metrics should be created and made visible to everyone on the team.

Four DORA Metrics

DORA metrics are the principal tool we have to measure software development. They consist of four benchmarks:

- Deployment frequency (DF): how often an organization successfully releases a product to users or deploys it to production
- Lead time to changes (LT): the amount of time it takes a commit to reach production or release
- Mean time to restore service (MTTR): how long it takes an organization to recover from a failure in production
- Change failure rate (CFR): the percentage of releases or deployments that cause a failure in production

Development teams can be ranked at one of four levels: Low, Medium, High, and Elite.

| Metric | Low | Medium | High | Elite |
|---|---|---|---|---|
| DF | Fewer than 1 per 6 months | 1 per month to 1 per 6 months | 1 per week to 1 per month | On demand (multiple deploys per day) |
| LT | More than 6 months | 1 month to 6 months | 1 day to 1 week | Less than 1 hour |
| MTTR | More than 6 months | 1 day to 1 week | Less than a day | Less than 1 hour |
| CFR | 16 to 30% | 16 to 30% | 16 to 30% | 0 to 15% |

Year after year, the DORA research team has shown that a high DORA score is a predictive indicator of high performance. As a result, these metrics should be included in any measurement strategy involving software development.

Cycle Time

Along with DORA, cycle time is another principal indicator of productivity. It is defined as the average time between the moment we decide to add a feature and its deployment or release to the public or customer. Cycle time spans the entirety of feature development, from inception to reality, whereas lead time to changes begins ticking only when the first line of code for a feature is committed. A fast cycle time means a team can consistently deliver features at a sustained rate.
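As a quick illustration of how these benchmarks can be put to work, here is a sketch that maps a measured lead time onto the DORA levels from the table above. Note that DORA's published buckets leave gaps (for example, between one week and one month for LT), so the boundary choices below are our own judgment calls, not part of the official definition.

```python
from datetime import timedelta

def lead_time_level(lead_time: timedelta) -> str:
    """Approximate DORA level for lead time to changes (see the table above).
    The published buckets have gaps; the boundaries here fill them by judgment."""
    if lead_time < timedelta(hours=1):
        return "Elite"
    if lead_time <= timedelta(weeks=1):
        return "High"
    if lead_time <= timedelta(days=180):
        return "Medium"
    return "Low"

print(lead_time_level(timedelta(days=2)))      # -> High
print(lead_time_level(timedelta(minutes=45)))  # -> Elite
```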
Quality

Quality implies different things to different people. While some teams emphasize adhering to style rules, others might be more concerned with security risks or maintaining an enjoyable user experience. What matters is that the team agrees on what quality entails for them. We can use a mix of parameters to estimate the quality of the code, and code that does not meet a predetermined quality bar should cause the CI pipeline to fail. Some valuable indicators are:

- Number of vulnerabilities
- Violations of style guidelines
- Code coverage
- Number of stale branches
- Cyclomatic complexity
- Broken architectural constraints — for instance, making sure that code in one module does not reference classes in another module

Customer Feedback

Customer feedback can come in many forms, such as tickets opened, usage patterns, mentions on social media, and information gleaned from Net Promoter Score (NPS) surveys. The specifics vary depending on the business and product, but we must have the voice of the customer represented in some concrete form because, at the end of the day, they pay the bills. Are users happy with the product?

Employee Satisfaction

Our users and customers are not the only ones whose well-being we must tend to. Developers, testers, quality and business analysts, product managers, and managers are crucial as well because we need them all to make a great product. The best ideas come from optimistic, confident, and well-rested minds. Employee satisfaction is affected by various factors, which we should measure in some way:

- How comprehensive and up to date is the documentation?
- How easy is it to onboard a new developer?
- Do employees feel their voices are heard?
- How is the work/life balance? Is anyone burning out?
- Is the workplace a safe environment to take chances and experiment?
- Do employees have the right tools to do their jobs?
- Do they feel they can offer constructive criticism safely?

Average CI Duration

Software development is an exercise in experimentation — we make small changes and see how they work out. The feedback from the CI pipeline ultimately determines if a change stays in the codebase. Working in small increments becomes painful when the CI/CD process is slow because developers must either wait for the results or move on and try to remember to return to the pipeline when the results are in. In either case, it is very difficult to keep up the creative flow. The CI pipeline's average duration should be measured in minutes; we should aim for ten minutes or less in order to keep developers engaged and the code flowing.

CI Runs Per Day

This is the number of CI pipeline executions per day. We want to keep this figure high — at least four or five runs per active developer — because it implies that developers trust and depend on the CI/CD process. When CI runs per day decrease, it might be caused by a slow or awkward-to-use CI/CD system.

CI Mean Time to Recovery (MTTR)

We can only test, release, or deploy when the build is working, so when the build breaks, everyone should stop what they are doing and focus on restoring it. Mean time to recovery measures how long, on average, it takes a team to fix a broken CI build. We're typically only concerned with the main branch when measuring this metric. Long recovery times signal that we need to make the CI/CD process more robust. We must also ensure that the habit of prioritizing fixes to the CI build is ingrained in the team's culture.
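As a sketch of how this could be measured, the function below walks a chronological list of main-branch build results and averages the time from the start of each breakage to the next green build. The data format is illustrative; a real implementation would pull build history from the CI server's API.

```python
from datetime import datetime, timedelta

# Chronological main-branch build results (illustrative sample data).
builds = [
    ("2022-09-01 09:00", "passed"),
    ("2022-09-01 10:00", "failed"),  # breakage starts
    ("2022-09-01 11:30", "passed"),  # recovered after 1h 30m
    ("2022-09-02 15:00", "failed"),
    ("2022-09-02 15:20", "passed"),  # recovered after 20m
]

def ci_mttr(builds):
    """Average time from the first red build to the next green one."""
    recoveries, broken_since = [], None
    for stamp, status in builds:
        t = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
        if status == "failed" and broken_since is None:
            broken_since = t
        elif status == "passed" and broken_since is not None:
            recoveries.append(t - broken_since)
            broken_since = None
    return sum(recoveries, timedelta()) / len(recoveries)

print(ci_mttr(builds))  # -> 0:55:00 for this sample
```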
CI Test Failure Rate

This measures how often the CI pipeline fails due to a failed test:

CI test failure rate = runs that failed on a test / total runs

Tests are a safety net, so there's nothing wrong with them failing. Nonetheless, developers should run the tests on their machines before committing the code. If the failure rate is too high, it might indicate that developers are finding it hard to run tests locally.

CI Success Rate

The CI success rate is the number of successful CI runs divided by the total number of runs:

CI success rate = successful runs / total runs

A low success rate indicates that the CI/CD process is brittle, needs more maintenance, or that developers are merging untested code too often.

Flakiness

Flakiness indicates how brittle the CI pipeline is. A flaky build fails or succeeds randomly for no apparent reason. Flakiness is caused by flaky tests or an unreliable CI/CD platform, and it negatively impacts CI run time, success rate, and time to recovery. The Test Summary tab shows flaky and slow tests.

Coverage

Code coverage is the percentage of code that is exercised by the test suite. This is a bit controversial, since it is a metric that has been known to be frequently misused. For example, requiring 100% coverage does not raise quality — on the contrary, it leads to unnecessary testing of trivial code. Like anything else, coverage is useful in moderation: a project with 5% coverage is undoubtedly under-tested to the point that the outcomes of its tests aren't showing us much.

Defect Escape Ratio

This measures the number of errors that were not detected by the CI/CD process, commonly computed as the share of defects found in production out of all defects found. A high value means that testing is inadequate. In that case, we should check the coverage value and then re-evaluate how the test suite is structured; we might need more kinds of tests in our test suite.

Uptime

Uptime is the percentage of time the application is available. The higher it is, the fewer outages there were in a given period. For example, 99.9% uptime amounts to about eight hours and 45 minutes of downtime per year. This operational metric should always be part of our measurements, as we risk losing customers every time the site or application is down. A low uptime value points to problems in the infrastructure, the code, and/or the deployment process.

| Uptime | Total Yearly Downtime |
|---|---|
| 99.9% | 8h 45m 56s |
| 99.99% | 52m 35s |
| 99.999% | 5m 15s |
| 99.9999% | 31s |

Service Level Indicator

Businesses signing service level agreements (SLAs) must pay attention to uptime in order to avoid fines or other penalties. The service level indicator (SLI) contrasts actual application performance or uptime with the predetermined standard. Even when SLAs are not in effect, a company can establish an internal service level objective (SLO), which serves the same function: the SLI shows reality versus the SLA or SLO.

Mean Time to Detection

This is the average time a problem persists in production before it is detected and assigned to the appropriate team. We can measure it as the time from when the problem began until an issue or ticket was raised. Mean time to detection directly correlates with how comprehensive monitoring is and how effective notifications are.

Mean Time Between Failures

This measures how often a system or subsystem fails on average — the average time between failures. It's a metric suited to measuring the stability of the application's subcomponents, and it can help us determine which parts require refactoring.

Metrics Just Measure Symptoms

Metrics are the vital signs of your project. A poor metric is a symptom, not a disease.
They point out the presence of a problem but do not say anything definite about the underlying cause. While it might be tempting to fix a problem by “managing” the variables underneath a metric, doing so is akin to self-medicating — it only succeeds in hiding the symptom. Like any good doctor, a good engineer investigates, proposes solutions, and confirms their effectiveness by checking to see if the metric has improved. Thanks for reading, and happy measuring!
What Is GitOps and Why Is It Important for an Organization?

GitOps is a model for automating and managing infrastructure and applications, using the same DevOps best practices that many teams already use, such as version control, code review, and CI/CD pipelines. While implementing DevOps, we've found ways to automate the software development lifecycle, but infrastructure setup and deployments have remained mostly manual. With GitOps, teams can automate the infrastructure provisioning process: you write your Infrastructure as Code (IaC), version it in a Git repository, and apply continuous deployment principles to your cloud delivery. Companies have been adopting GitOps because of its great potential to improve productivity and software quality. GitOps best serves organizations that develop cloud-native solutions based on containerization and microservices.

How Does GitOps Improve the Lives of Developers and Operations?

The increased infrastructure automation that comes with GitOps creates the opportunity for a more "self-service" approach for application developers. Rather than negotiating for cloud resources, skilled developers can use Infrastructure as Code to declare their cloud resource requirements. This becomes the desired state of the infrastructure, stored centrally and serving as the immutable reference point between the requirements as stated in the code and the actual state of the live environment. The self-service approach is liberating for developers: it makes them more productive, allows them to focus on innovation, and gets their apps to market more quickly. It also avoids the mire that can arise when developers and operators need to negotiate resources. On the other hand, there is a frequent misconception that the increased automation of operations means Ops teams need fewer people and that the Ops role in the pipeline is marginalized. Our view is the exact opposite: we believe that modern approaches such as GitOps and the internal developer platform provide exciting opportunities for Ops (the platform team) to enhance their skills and create more value for the organization. In a high-performing, cloud-native software development organization that embraces GitOps, you are likely to find a growing platform team helping to make it all work. The actual technology used by the platform team may vary. In some cases, this could be a closed PaaS solution; in others, a combination of tools forming a bespoke platform tailored to the organization's needs. This gives the platform team the ability to exert more influence and control over infrastructure resources and architecture, and to create "guardrails" that enforce a simple, efficient, and standardized approach to cloud-native application deployment. GitOps improves collaboration between development and operations teams, increases their productivity, and increases deployment frequency. It enhances the developer experience by enabling developers to contribute features without needing to know the underlying infrastructure, while giving operations control through code reviews and approvals. With these improvements, teams can release faster and more securely to maintain their position in the market.

What Are the Three Must-Do Steps to Implement GitOps?
To experience the most prominent advantages of implementing a GitOps model in your company, like standardization and consistency in your overall workflow, here is what you need to consider.

Everything as Code

Declare your infrastructure as code and use a Git repository for its development, replicating the practices that are already part of your application code lifecycle. Using technologies like Docker and Kubernetes, define your environments, versions, configurations, and dependencies as code, and ensure they are enforced at runtime. Gradually extend the GitOps model to anything that can be defined as code, like security, policy, compliance, and all operations beyond infrastructure.

Figure 1: Everything as code

Declarative code improves readability and maintenance. CloudFormation, Terraform, Pulumi, and Crossplane are some of the declarative languages you can use to define the configuration of how you want your infrastructure to look. When everything is defined as code, you can use a Git repository for your development and gain benefits such as version control, collaboration, and audits.

Review Process

A proper Git flow consists of:

- A main branch, which usually represents an environment (like dev, test, stage, or prod) and the state running in that environment.
- When developers need to introduce changes to the code, they create a new branch from the main branch.
- When the changes are ready, the developers open a pull request that operations reviews, validates, and approves. Security and compliance experts can also be involved at this stage to properly validate the environment's state.
- Once approved, the code can be merged into the main branch and delivered to test or production.

Using this workflow, you can track who made which change and ensure the environment is running the correct version of the code.

Figure 2: GitOps workflow

If you already take advantage of the Git flow system by working with feature branches and pull requests, then you won't need to invest much in a new process for your GitOps workflow. Furthermore, as your infrastructure (and other operations) are defined as code, you'll be able to apply the same code review practices to them.

Separate Build and Deploy Processes (CI and CD)

A CI process is responsible for building and packaging application code into container images. The CD process executes the automation to bring the end state in line with the system's desired state, as described in the repository code. Ultimately, GitOps sees CI and CD as two separate processes: CI as a development process and CD as an operational process. A common GitOps approach to separating them is to introduce another Git repository as a mediator. This repository contains information about the environment, and every commit to it triggers the deployment process. A component called the operator resides between the pipeline and the orchestration tool. The operator constantly compares the target state in the environment repository with the actual state of the deployed infrastructure; it changes the infrastructure to match the environment repository whenever it detects a difference, and it monitors the image registry to identify new image versions to deploy. This way, the CI process never touches the underlying infrastructure (e.g., the Kubernetes cluster).
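To make the operator's job concrete, here is a toy reconcile step. The dictionaries and helper are illustrative stand-ins for what real operators such as Argo CD or Flux do against a Kubernetes cluster; they are not any tool's actual API.

```python
# Toy model of the GitOps operator described above. The dicts stand in for
# the environment repository and the cluster; real operators (e.g., Argo CD
# or Flux) watch Kubernetes resources instead of polling dicts.

def reconcile(env_repo: dict, cluster: dict) -> None:
    """Converge the cluster's live state toward the state committed in Git."""
    desired = env_repo["head"]   # manifests at HEAD of the environment repo
    actual = cluster["live"]     # live objects reported by the orchestrator
    for name, spec in desired.items():
        if actual.get(name) != spec:
            print(f"syncing {name}: {actual.get(name)} -> {spec}")
            actual[name] = spec  # in reality: apply the manifest to the cluster

env_repo = {"head": {"web": {"image": "shop:v2", "replicas": 3}}}
cluster = {"live": {"web": {"image": "shop:v1", "replicas": 3}}}
reconcile(env_repo, cluster)  # syncs web from shop:v1 to shop:v2
```

A new image version rolls out the same way: a commit to the environment repository updates the desired state, and the next reconcile pass applies it.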
Figure 3: Pull-based GitOps deployments

Decoupling the build pipeline from the deployment pipeline is a powerful protection against misconfigurations and helps achieve higher security and compliance.

Conclusion

GitOps, as an operational model, uses DevOps practices familiar to many teams. With GitOps, you can automate the infrastructure provisioning process and use Git as the single source of truth for your infrastructure. To create a successful GitOps model, you need a declarative definition of the environment, and your team should follow a pull request workflow: to collaborate on the infrastructure code and make operational changes, you open a pull request, which senior DevOps engineers and security experts review to validate the changes and merge into the main branch if everything is okay. For a full GitOps implementation, you need CI/CD automation for provisioning and configuring the underlying environment and for deploying the defined code. Lastly, there should be a supporting organizational culture inside the company. In our experience, a GitOps approach naturally leads to a structure in which developers enjoy increased automation from self-service infrastructure resources and platform engineers take on a more influential role in the organization. In that regard, it's a win-win approach that makes everyone more aligned and fulfilled.
I was thrilled to participate in DevOpsDays Chicago 2022, my first in-person event as a Developer Advocate at GitGuardian. I am very excited to tell you all about this awesome event, which took place September 21-22 at the University of Illinois Chicago's Isadore & Sadie Dorin Forum. Over 350 attendees, vendors, and volunteers gathered to take stock of the state of DevOps and share our knowledge and love of building in the cloud.

Celebrating 8 Years of DevOpsDays Chicago

Chicago is a city in the middle of everything, and not just geographically. It is also home to many corporations and a vibrant technology community. DevOpsDays Chicago brought together developers, operations teams, SREs, and InfoSec leads to share their knowledge and experiences, with the goal of helping us all adopt better DevOps practices and be more secure. Due to the pandemic, DevOpsDays Chicago went to a single-day virtual event in 2020 and was canceled in 2021. Absence truly does make the heart grow fonder: every attendee I talked to mentioned how glad they were that the event was back. The community of DevOps pros was eager to share stories, commiserate about the challenges modern cloud-native software development brings, and swap newly learned best practices. For folks who could not make it in person, much of the event was also streamed for free. DevOps professionals around the world were able to tune in live for the single track of 30-minute talks, the shorter 5-minute Ignite talks, and the afternoon workshops, including mine.

"Really enjoying my first @devopsdaysChi and partnering with the @newrelic DevRel team. By the way, we're hiring! #DevOpsDays #o11y" — Tim Butler (@PDXTimB) September 21, 2022

Open Spaces Make DevOpsDays a Unique Event

There was one important part of the event that was not shared over the live stream: the Open Spaces portion of DevOpsDays. This is one of the biggest reasons people I talked to cited for attending the event in person. Open Spaces is a way to run an "unconference," where the agenda is set by the conference attendees, and sessions run as small, interactive group discussions in breakout rooms. It starts with attendees volunteering topics of interest. Some of the topics this year included: achieving awesome observability, DevOps feedback loops, book recommendations, chaos engineering, GitOps in practice, and even volunteering for and running DevOpsDays. After collecting the suggested topics, the organizers assign breakout rooms and fit everything within the allotted time slots. While on the surface it might sound a little chaotic to have conference attendees self-organize into roundtable discussions, it is actually quite a smooth process thanks to some simple rules:

- The people who show up are the right people.
- Whatever happens, happens.
- Whenever it starts is the right time.
- When it's over, it's over.
- Law of Mobility: if you want to move to a different open space, then move.
- Bring your best self.

I think this is a really wonderful way to let people share their knowledge and experiences, and I learned a lot during the Open Spaces. I hope more events adopt this unconference approach, as it is a very empowering experience for all involved.

"We'll get the times, rooms and topics typed up asap. Can't wait?
Here's the rough version #devopsdays" — DevOpsDays Chicago (@devopsdaysChi) September 21, 2022

Container Security Conversations at DevOpsDays

The sessions covered a wide range of very interesting topics, from Lesley Cordero's "Effective Observability in Microservices Architecture" to Abby Allen's "Parenting Makes Me a Better Product Manager." All were great and well delivered by the speakers. One major thread that ran through many of the talks and many of my conversations was container security. While every talk is worth watching, I will highlight a few that stood out as articulating the underlying security discussions at the event.

Developers Securing Their Clouds

During his talk "Stories from the trenches – democratizing security with modern development," Aakash Shah, CTO at oak9, asked the attendees who held the title of full-time security engineer. Not one hand was raised. He then asked who was a DevOps engineer, and almost all the hands went up. The room was full of DevOps professionals very concerned about security! In his talk, Aakash discussed an AAA model for how security for devs must be laid out. While he goes into much more detail in his talk, those A's stand for:

- Accessible: translate security best practices into user stories; avoid jargon
- Actionable: fit security into sprints instead of 40+ page requirement docs
- Applicable: understand business use cases; base action plans on reality

It can be overwhelming to think of all the possible ways to approach security, but keeping those AAAs in mind when discussing and implementing security solutions can help everyone work smarter and safer. Keeping user stories and business use cases front and center of the discussion is critical, especially as containers and tools like Kubernetes drive the complexity of modern cloud-native architectures up exponentially. He also warned that it is tempting to "just dump yet another tool" onto security issues. While tools absolutely are necessary, he advocated for a developer-productivity mindset, in which continuous education and better collaboration between security and development teams are more important than any single piece of technology.

"The rise of platform teams for security, quality, and design need to balance developer freedom vs governance - @ProvablyInsecur of @oak9io recommends developer champions programs for training at large orgs @devopsdaysChi" — Joyce Lin (@PetuniaGray) September 21, 2022

Learning How To Hack Containers

Eric Smalling, Developer Advocate at Snyk, gave one of the most chilling and eye-opening workshops I have ever attended, called "Hands-on hacking containers and ways to prevent it." Rather than just lecturing on best practices, Eric walked us through how a hacker might systematically look through a container, escalate privileges, own namespaces, and potentially take over a whole cluster! He did this safely against a demo environment set up using Snyk Labs' Kubernetes Goof repository, whose description reads: "Kubernetes Stranger Danger." It is a free and open-source tool you can use to learn how Kubernetes clusters can be attacked and why some best practices need to be enforced. One of the larger dangers he pointed out was how often people forget to set correct permissions for namespaces. It is pretty easy to give a namespace full access rights while also treating it like a private, secure place to store scripts that might contain secrets. If such a namespace is accessed, it potentially spells disaster for the whole cluster and application.
Another alarming danger was container privilege escalation. Once bad actors realize they can escalate privileges, they will own as much as they can, which might be "the world" for your application. He stressed that if there was one big takeaway from his talk, it would be "don't allow privilege escalation!"

"Learning about #Kubernetes security from the awesome @ericsmalling at #DevOpsDays" — Dwayne McDaniel (@McDwayne) September 21, 2022

Integrating Security Into the DevOps Culture

DevOps is not something you can buy off the shelf; it is about adopting the right culture and methodologies before you invest in tech. Similarly, if you approach security as just an add-on and do not address the culture or methodology, you can never fully deliver DevSecOps, according to Daniel Kim, Senior Developer Relations Engineer at New Relic, in his talk "Building security into the DevOps pipeline at scale." "We can not just stick an audit at the end" when shipping features: we need to think about security at every stage of the SDLC. We can do this from the start by defining threat models in the planning phase and bringing in the security team while the application still exists only on a whiteboard. During the coding phase, developer errors can be addressed by implementing the right tooling; his example was hardcoded secrets, which can be prevented by using Git hooks. Catching issues early will help make sure your builds and deployments go off as planned. Adding security testing throughout the build phase of your CI/CD pipeline means the security team won't be seen as a blocker that must run full security audits on every build. One of the tests Daniel discussed in depth was software composition analysis, which ensures dependency libraries do not present threats. Daniel also addressed a large area of concern for developers and security: the fast evolution of tools and their accompanying threats. While containers and Kubernetes allow us to build amazing things at scale, we must recognize that legacy security tools and approaches, like firewalls, often are not keeping up with reality. This is where baking security into your DevOps culture comes in: the more the security teams are part of the development planning, the easier it is to identify potential threats and apply the best tools to keep everyone safe.

"Up next 'Building security into the DevOps pipeline at scale' with Daniel Kim @learnwdaniel" — DevOpsDays Chicago (@devopsdaysChi) September 22, 2022

Learning From Each Other

I had a blast at DevOpsDays Chicago 2022. It was great to teach people some advanced Git in my workshop, "Git - Beyond Just Committing," but I think the greatest education throughout the event came from conversations in the hallway track and during Open Spaces. I know I walked away with a new appreciation for topics like GitOps automation and container egress testing. Congratulations to the awesome organizing team and volunteers for making DevOpsDays Chicago 2022 happen in person. It was a lot of work, and it is very much appreciated by the community!

"The @DevopsYak with some of our amazing @devopsdaysChi volunteers! #devopsdays" — Matty Stratton (@mattstratton) September 22, 2022

While we might not all agree on particular tools, or even on what we mean by Chicago-style pizza, we all came together in agreement that the future is DevOps and that security is something we all need to keep discussing. I look forward to continuing that conversation at next year's DevOpsDays Chicago and at other events, perhaps near you soon!
Enterprises are embracing cloud-native technologies to migrate their monolithic services to a microservices architecture. Containers, microservices, container orchestration, automated deployments, and real-time monitoring enable you to take advantage of cloud-native capabilities. However, the infrastructure required to run cloud-native applications differs from that of traditional ones. This article will describe Infrastructure as Code, its benefits, and popular IaC tools. You will also learn how to model infrastructure as part of the CI/CD pipeline and incorporate it into the standard development lifecycle.

What Is Infrastructure as Code?

IaC helps manage and provision infrastructure resources through code and automation. In a traditional on-premises environment, operators log into a server and execute a series of commands via the command line or console to make changes. These manual configurations are prone to drift and create inconsistent environments. Deploying similar changes across your infrastructure this way is also time-consuming, there is no quick way to verify the correctness of the changes, and if issues arise, they aren't easy to recreate. Enter IaC, which simplifies the management and provisioning of your infrastructure resources: you don't make manual changes to servers, instances, containers, or environments. Both the creation and the modification of resources are automated. IaC is the practice of using code to create, describe, and manage infrastructure resources.

Infrastructure as Code Benefits

Let's now take a closer look at how IaC can benefit your organization.

Increased Productivity Through Automation

Making infrastructure changes and deploying them is a repetitive process, and it is time-consuming if your developers and operations team must do it manually at recurring intervals. With IaC, you can focus on coding and be more productive by automating infrastructure deployments. As a result, IaC enables faster time to market for your business features.

Repeatable Deployments With High Predictability

Having your infrastructure defined in source control lets you automate the deployment process and enables developers to follow the software development lifecycle for infrastructure changes. It also empowers developers to perform successful deployments backed by efficient practices like peer review, static code analysis, and automated testing.

Improved Consistency With Minimal Configuration Drift

Performing infrastructure changes manually can lead to inconsistency between servers and is generally the primary cause of configuration drift. Applying changes manually is error-prone, so IaC saves the day by standardizing the infrastructure modification process. With automation and version control, the deployment process is faster, repeatable, and more consistent.

Documented Process for Deploying Changes

You should not rely entirely on your sysadmin or operations team to deploy infrastructure changes. Depending on a few selected operators to make server changes creates blockers when they are unavailable, and in the long run it does not scale. Having the infrastructure code in source control gives everyone in your organization visibility into the current state. Developers can deploy infrastructure changes if all the stages in the CI/CD pipeline pass.
If there are any issues, it's easy to troubleshoot by looking at the change history in source control.

Figure 1: Infrastructure as Code in action

Infrastructure as Code Tools

Open-source IaC tools can be categorized into several groups:

- Configuration management tools – Chef, Puppet, and Ansible are popular tools that allow you to install, update, and manage resources on existing infrastructure.
- Server templating tools – Tools like Docker and Vagrant allow you to create an image from a configuration file, which is then used to provision infrastructure in a repeatable manner. They promote the idea of an immutable infrastructure, explained later in this article.
- Container orchestration tools – Tools like Kubernetes and Docker Swarm allow you to orchestrate container workloads in a dynamic cloud-native landscape.
- Provisioning tools – Tools like Terraform, Azure Resource Manager, Google Cloud Deployment Manager, and AWS CloudFormation allow you to provision servers and other resources in the respective cloud environment.

Figure 2: Infrastructure-as-Code tools

Mutable vs. Immutable Infrastructure

With immutable infrastructure, if you want to modify an existing server, you instead create a new server with the revised configuration. In a cloud-native environment, where containers spin up and down every minute, immutability is a required feature: you package your application and its dependencies into a container image, and when you want to modify the configuration, you deploy a new container version. The pets vs. cattle analogy for infrastructure management is popular in the dynamic container space. You treat your infrastructure resources like cattle, deleting and rebuilding them from scratch when a configuration change is required; pets, by contrast, are irreplaceable resources on which you make configuration changes in place. Immutability helps simplify your operations, minimize drift, boost consistency between environments, and build a more secure infrastructure.

Declarative vs. Procedural Approach

With a procedural approach, you write separate scripts to achieve the desired state. Over time, you accumulate many scripts that have been applied to your environment, and you can review the modifications via change history. The order of execution of the scripts matters; otherwise, you end up with a different end state. Tools like Chef and Puppet follow the procedural style of infrastructure management. Below are some frequently used Puppet commands:

| Command | Action |
|---|---|
| puppet agent | Retrieves configuration from a remote server and applies it to the local host |
| puppet apply | Applies individual manifests to the local system |
| puppet module | Finds, installs, and manages modules in the Puppet repository |
| puppet describe | Displays metadata about Puppet resource types |
| puppet config | Reviews and modifies settings in the Puppet configuration file |

With the declarative approach, you maintain the desired state in a configuration template. To make a modification, you update the same template to reflect the new desired state and let the IaC tool generate the difference script and apply it to the environment. Tools like Terraform follow the declarative approach, where the code in source control always reflects the current state of the infrastructure. The declarative approach helps you create reusable code, since you focus on describing the desired state and offload the complexity of syncing the current and desired states to the IaC tool.
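The difference is easiest to see in miniature. In the toy sketch below, the desired state is simply declared, and a plan() helper computes the create/update/destroy actions needed to get there, which is conceptually what a declarative tool does when it diffs your template against the environment. The dictionaries and the helper are illustrative; real tools operate on provider resources, not dicts.

```python
# Declarative model in miniature: describe the end state, let the tool
# work out the steps. Resources here are toy dicts, purely for illustration.
current = {"vm-1": {"size": "small"}, "vm-2": {"size": "small"}}
desired = {"vm-1": {"size": "large"}, "vm-3": {"size": "small"}}

def plan(current, desired):
    """Diff the two states into the actions a declarative tool would apply."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(f"create {name} {spec}")
        elif current[name] != spec:
            actions.append(f"update {name} {current[name]} -> {spec}")
    actions += [f"destroy {name}" for name in current if name not in desired]
    return actions

for action in plan(current, desired):
    print(action)
# update vm-1 {'size': 'small'} -> {'size': 'large'}
# create vm-3 {'size': 'small'}
# destroy vm-2
```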
Using Terraform to Manage Infrastructure

Terraform is a cloud-agnostic, open-source tool for infrastructure provisioning. Created by HashiCorp in 2014, Terraform uses the HashiCorp Configuration Language (HCL) to describe infrastructure code. It supports many providers and can help manage resources in individual cloud providers, such as AWS, Azure, and Google Cloud, and it is backed by a large and growing community. The primary function of Terraform is to help you create, modify, and destroy infrastructure resources. To provision resources using Terraform, you use the following commands:

| Command | Action |
|---|---|
| terraform init | Initializes the working directory that contains the configuration files |
| terraform plan | Compares the current state and desired state and creates an execution plan for making changes |
| terraform apply | Applies the changes proposed by the Terraform plan and ensures that the desired state is reached |
| terraform destroy | Cleans up the infrastructure resources that are managed by the configuration files |

Figure 3: Terraform lifecycle

Infrastructure Automation With GitOps

GitOps workflows apply DevOps best practices for application development (version control, code review, automated deployments, etc.) to infrastructure deployments. GitOps is based on the declarative model: configuration files are stored in Git, and approved changes are automatically deployed to the environment. As shown in Figure 4, the GitOps operator ensures that the desired state stored in Git stays in sync with the actual state of the deployed infrastructure. Enterprises are rapidly adopting GitOps to manage their Kubernetes cloud-native environments at scale, as applying GitOps practices enables continuous deployments with proper auditing and high reliability.

Figure 4: GitOps pipeline

Conclusion

IT infrastructures are growing exponentially, and embedding automation into every stage of the delivery pipeline ensures faster and more consistent deployments. Automating your IaC processes is as important as automating your application deployments. You can leverage multiple IaC tools together to automate your infrastructure management. Each of the tools mentioned in this article has its strengths and weaknesses, so having a sound understanding of those and selecting the right tool for your use case is critical. IaC is the future, and organizations are readily embracing it to increase the reliability of their infrastructure.
DevOps is a hot topic that is quickly becoming the standard way of developing software. It aims to increase development speed and reduce costs while improving productivity and efficiency in your organization. DevOps is powered by automating your entire development, delivery, and operations processes. With continuous integration (CI) and continuous delivery (CD), you can do more with less, so it is beneficial to start implementing these concepts in your company as early as possible.

What Is DevOps?

DevOps is a cultural phenomenon adopted by companies that want to release quality software fast. It works by automating the entire development and delivery process with the help of techniques and tools. However, it is not just a set of tools but a broader movement focused on improving the flow of software by streamlining processes and mindset. The idea behind DevOps is to have an end-to-end automated pipeline from Dev (development) to Ops (operations). This way, the software moves quickly and can be tested as it passes through the different phases of delivery and deployment. Continuous integration and continuous delivery are an essential part of the DevOps initiative and carry a lot of weight.

DevOps vs. Traditional Software Development

For many years under traditional software development methodologies, development and operations teams were treated as separate entities. Each focused on its own efforts, which often resulted in a lack of communication between the two groups. The DevOps movement solves this particular problem by enriching team collaboration. There are four major differences between DevOps and traditional software development:

- DevOps can be viewed as a natural extension of the Agile movement, focusing on breaking down communication barriers between development and operations.
- In the traditional software development lifecycle model, projects move through linear, sequential phases without rapid feedback loops or ongoing iterations, whereas the DevOps approach is more iterative.
- The traditional approach takes a long time to deliver software projects because everybody works on a big chunk of software without proper planning. In contrast, DevOps work is divided into small batches, with each batch delivered quickly and then iterated on rapidly.
- The traditional approach follows sequential steps that are hard to bypass: gathering requirements, planning, writing code, testing, deploying to production, and so on. It doesn't work that way in the DevOps world — testers test the code alongside developers so that things don't have to be redone when problems arise. This makes for a more streamlined and efficient software development lifecycle.

Introduction to CI/CD

Continuous integration and continuous delivery (CI/CD), as an iterative process, requires developers to maintain a working build of the application on which they will release new features. It is a system that allows teams to integrate changes quickly without sacrificing quality or safety. CI/CD is composed of three core tenets: continuous integration, continuous testing, and continuous delivery. CI/CD works on the principle of automation. When code is committed to the repository, it triggers a pipeline of build tasks that executes the following steps:

1. Check out the latest version of the code from the repository.
2. Perform compilation and unit tests.
3. Generate artifacts such as documentation or reports on the build status (whether everything passes).
4. Start deployment to a staging environment (i.e., an identical copy of production).

Why CI/CD?

Continuous integration and continuous delivery can be a blessing to both developers and customers. They allow you to test your code, find bugs, and fix them quickly, before your customers even have a chance to notice. In addition, once you have a working build on which to release new features, CI/CD lets you deploy it automatically to a staging environment identical to production, so you don't have to rebuild the entire application every time it needs an update. CI/CD is beneficial for developers because it enables them to work more efficiently and productively: CI/CD tools automate tasks like testing and deploying, which saves time. For example, when a new build is ready, they can use CI/CD to deploy it automatically to a staging environment or to the customer's production environment. This way, tests are already in place before the update goes live, so any new bugs are caught before they reach your customers.

CI and CD for Automation

DevOps usually revolves around these simple pillars:

- CI (continuous integration)
- CD (continuous delivery)
- Continuous testing
- Containerization
- Continuous monitoring

As discussed above, CI/CD helps you automate the release of software from development to production by breaking the process into stages. CI is the automated testing of code changes before they are released to production; CD is the automated release of code from development to production environments; and continuous monitoring ensures everything keeps running smoothly. Containerization forms an integral part of the DevOps process, as it helps package the software and move it along the pipeline stages, making things easy for developers. It became popular with platforms like Docker that help companies package and ship their applications quickly, and most CI/CD tools today are built with containerization in mind.

How To Implement CI/CD in Your Organization

Building the Right Mindset

If you aren't a developer, learning to think like one is just as important as the tools in CI/CD. To be successful with CI/CD, you need to know how to automate and modify your current processes, and you need to work closely with developers. When developing software, an individual or team goes through different stages of development, and it's important to understand them in order to communicate effectively with the development team. These stages include research and analysis, architecture, design, coding, testing, release, and so on.

Choosing the Right Tools

There are various tools available for CI/CD, but before you decide which one to use, it's important to understand your goals. For example, which parts of your application will you want to integrate? Do you need deployments? And how many people will be working on the code? Once you have an idea of the features you want to implement in CI, it becomes easier to choose the tool that best fits your project. In addition to tools, hiring the right DevOps people can also help you quickly organize things and make decisions.

Adding CI/CD Into Your Development Process

It's not enough to just set up CI/CD and forget about it. You also need to make sure you're using it correctly.
For example, there are a number of best practices that can help you increase efficiency and reduce operational costs:

- Using pipelines – Developers should create a pipeline that goes from code check-in, through tests, and into production. This allows developers and testers to collaborate more effectively on the process, as both know what stage the code is in.
- Automating release management – The best way to do this is by setting up a series of automated release gates for each environment. This helps ensure that all bugs are caught before being released into your production environment.
- Monitoring every step of the way – Keep an eye on how long jobs take to run in CI/CD environments so you can optimize the entire process. The longer something takes to run, the more expensive it becomes — and the less time your team actually has for developing new features or fixing bugs.

Can You Use CI and CD Separately?

CI is necessary for any software development project, while CD adds an additional layer on top of CI in the DevOps automation framework. We often see confusion about whether to use CI, CD, or both. There are many benefits to using CI/CD together when developing and deploying your software, but sometimes it can be overkill for certain projects: you may just need CI. For example, if you're working on a small web application with a few developers, there's no point in spending the time and money required to set up CD. But for big organizations and startups working on large projects, both CI and CD help developers focus better on their jobs.

Conclusion

CI takes the first step toward a successful DevOps approach. CD goes a little further, changing software development by deploying software multiple times a day with confidence. Automation is key to any successful CI/CD strategy, but it doesn't stop there; security is becoming a high priority to keep the software development pipeline clean and to minimize the attack surface. Successful implementation of CI/CD with the right culture, mindset, and people can help you win in this highly complicated DevOps landscape.
Can You Use CI and CD Separately?

CI is necessary for any software development project, while CD adds a further layer on top of it in the DevOps automation framework, so there is often confusion about whether to use CI, CD, or both. Using them together brings many benefits when developing and deploying software, but for some projects it can be overkill, and CI alone may be enough. If you're working on a small web application with a few developers, for example, there's little point in spending the time and money required to set up CD. But for large organizations, and for startups working on big projects, CI and CD together help developers focus on their jobs.

Conclusion

CI takes the first step toward a successful DevOps approach. CD goes further, changing software development by deploying software multiple times a day with confidence. Automation is key to any successful CI/CD strategy, but it doesn't stop there: security is becoming a high priority for keeping the software delivery pipeline clean and minimizing the attack surface. Implementing CI/CD with the right culture, mindset, and people can help you win in this highly complicated DevOps landscape.

DevOps got off to a promising start. Way back in 2006, Amazon CTO Werner Vogels prophesied a hassle-free relationship between development and operations: "The traditional model is that you take your software to the wall that separates development and operations and throw it over and then forget about it. Not at Amazon. You build it, you run it. This brings developers into contact with the day-to-day operation of their software."

This you-build-it-you-run-it movement, which became known as DevOps, got us all excited over the promise that it would destroy silos and get teams working together more efficiently than ever before. But that was a long time ago, and lately, things haven't been so sunny in DevOps land. What are some signs that the honeymoon is over? For one thing, developers are complaining that they don't want to handle ops (reasonably so, since it isn't their core expertise). Meanwhile, operations teams are complaining that they're overwhelmed with minute day-to-day demands from development. As DevOps engineer Mat Duggan writes, operations "still had all the responsibilities we had before, ensuring the application was available, monitored, secure, and compliant," yet today they're also responsible for building and maintaining software delivery pipelines, "laying the groundwork for empowering development to get code out quickly and safely without us being involved."

Today, every company says it's doing DevOps, but all too few are reaping the benefits. Meanwhile, DevOps no longer signals a mindset of innovation the way it used to; it's just another buzzword every company has to use. Is it too late to save DevOps? Or do we still have a chance to get our teams back to the core promise, the excitement of true collaboration, that Vogels and others once foresaw?

Rising Challenges

We've seen how DevOps promised to bring a breath of fresh air to the software development world when it first emerged. So what happened? Why haven't things gone as we hoped? Mostly, it's because of new and unforeseen challenges that have emerged in both development and operations.
Challenges for development teams include:

- Faster integration and delivery cycles
- Loss of dedicated QA to thoroughly test releases (it's been sacrificed for speed)
- A dev tech stack that keeps growing and becoming more varied
- More complex, distributed apps built on intricate microservices architectures
- Security issues and software supply chain vulnerabilities
- Lack of time and incentives to learn more about the ops side of things

Challenges for operations teams include:

- Lack of clear ownership across the SDLC (with frequent finger-pointing at ops, since apps often work in test but fail in production due to real-world complexity)
- Developer demands for help when app testing fails
- Handling ongoing monitoring and observability, security, and compliance
- Adapting workflows to integrate IaC solutions with guardrails for safety and consistency
- Troubleshooting abstracted systems (like Kubernetes), where developers have little clue what the problem might be when apps fail
- Demands to learn significant aspects of the dev stack and tools in order to solve problems

And then, of course, there are the challenges dev and ops share:

- Faster release cycles force both teams to be on high alert at all times
- Technical debt is created by taking shortcuts to meet today's deadlines but must be paid down tomorrow
- Neither side is able to focus on its core expertise

Finally, an issue the entire community is struggling with is "tool tax," also known as "tool sprawl": as Sharon Gaudin at GitLab writes, a "mish-mash of tools that force team members to continuously jump back and forth between multiple interfaces, passwords, and ways of working." There are plenty of great DevOps tools out there, but you know things have gotten out of hand when teams use, on average, between 20 and 50 different DevOps tools.

Is Self-Serve the Solution?

As we've seen, plenty of issues get in the way of speedy, efficient development. So what can we do to help our teams past these roadblocks? Some businesses have embraced the promise of "self-serve DevOps" approaches, and in theory, they sound like a dream come true. Self-serve DevOps promises to remove routine demands from operations teams' shoulders while giving developers the tools, infrastructure, and services they need through simple, efficient requests. This clears bottlenecks and streamlines the development workflow, letting developers get back to coding without waiting on operations, and it lightens the manual workload on ops teams.

There are a few ways self-serve DevOps is being implemented, at least partially, today:

- Internal developer platforms
- Workflow automation tools
- Service catalogs
- API marketplaces

Unfortunately, none of these approaches is ideal. They typically add one more layer of complexity, actually making the problem worse: yet another shiny new system for developers and operations to learn, and more context switching to distract them from core tasks. And if you peek under the hood, many of these solutions rely heavily on "human-in-the-middle" involvement, meaning operations still gets those calls in the middle of the day (or night) that make the job so nerve-wracking.

Time to Go Headless?

Unlike existing "self-serve" DevOps approaches, which, as we've seen, usually just add more unmanageable complexity, a headless approach offers something truly new and different. The term "headless" emerged from the world of websites (CMS) and e-commerce. In a headless model, the back end is "decoupled" from the front end.
This means the information (content, catalog entries, pricing, and so on) is "presentation-agnostic": your content doesn't care how it's displayed, whether via a web app, mobile app, or desktop. Businesses can decouple content from how that content is used or displayed, creating a nimble and (in theory) intuitive approach that works across all of today's devices. For websites and apps, going headless can streamline development and testing:

- For e-commerce, back-end development can proceed without worrying about compatibility and usability across different UIs (mobile, web, desktop applications, and so on).
- For browsers, testing can run faster without actual UI loading times.

A headless approach can help us think differently about DevOps as well. Both development and operations are overwhelmed with tools, as we've seen. But what if both could get their work done intuitively, without needing multiple tools and UIs just to complete a task?

Decoupling DevOps Workflows and Pipelines

With one clear-cut, streamlined, and intuitive way to access the entire DevOps workflow, developers and DevOps engineers are no longer forced to master, and constantly switch among, disparate systems such as cloud infrastructure, CI/CD, and identity providers. No new systems to learn. No context switching. Just one simple interface to rule them all.

In case you think I'm about to suggest adding yet another tool to make DevOps' lives easier, fear not. I'm suggesting the exact opposite: "decouple" DevOps. First, detach these mission-critical functions from the myriad systems and UIs that devs and DevOps are using now. Then, give them a way to get it all done in one place. And not just any place, but somewhere devs and DevOps are already working: their collaboration and project management tool of choice. That could be Slack (used by 40% of Fortune 100 businesses) or MS Teams (270 million users as of this writing). Headless DevOps takes place wherever your teams communicate all day, every day, because only then can they realize the real promise of self-serve: easy and secure access to cloud infrastructure, CI/CD pipelines, information, and more.

Picturing the New Headless Reality

Hoping to make your DevOps teams more productive? Wherever they're hanging out, whether it's Slack or Teams (or even the CLI), you need to transform that into their universal DevOps UI. This creates a win-win:

- Devs win because they can access necessary cloud infrastructure, trigger pipelines, and so on without domain expertise in those areas. Plus, they can do it securely, with built-in access control.
- Ops wins because they can stay hands-off for the most common day-to-day functions, freeing them to focus on bigger-picture issues.

Using conversational AI embedded in Slack, MS Teams, or the CLI, you can provide granular access control and automate all your workflows with simple conversations.
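To picture how a chat message can become a governed pipeline action, here is a minimal sketch of the idea in Python. The role table, command names, and trigger_pipeline function are all hypothetical; a real implementation would sit behind your chat platform's bot framework and call your CI/CD system's API.

```python
# A minimal ChatOps sketch: a chat command becomes a pipeline action,
# gated by a simple permission check. Roles, commands, and the
# trigger_pipeline function are hypothetical placeholders.

ROLE_PERMISSIONS = {
    "developer": {"deploy-staging", "pipeline-status"},
    "ops": {"deploy-staging", "deploy-production", "pipeline-status"},
}

def trigger_pipeline(action: str) -> str:
    # Placeholder: a real implementation would call your CI/CD system's API.
    return f"triggered '{action}'"

def handle_chat_command(user_role: str, command: str) -> str:
    """Map a chat command to a pipeline action if the user's role allows it."""
    allowed = ROLE_PERMISSIONS.get(user_role, set())
    if command not in allowed:
        # Guardrails: developers self-serve only what operations
        # has explicitly opened up to them.
        return f"role '{user_role}' may not run '{command}'"
    return trigger_pipeline(command)

# Example: a developer deploys to staging without filing a ticket,
# but cannot push to production.
print(handle_chat_command("developer", "deploy-staging"))
print(handle_chat_command("developer", "deploy-production"))
```

The design point is that operations defines the guardrails once, up front, and developers then serve themselves within those boundaries, with no human in the middle for routine requests.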
The benefits of this approach are no doubt obvious to anyone who has ever dealt with backlogs and bottlenecks in the SDLC, but to mention a few specifically:

- Eliminating context switching and repetitive tasks – no more juggling Slack requests, Jira tickets, and a range of other tools and platforms
- Making automations highly accessible – development doesn't need to rely on operations to find and trigger the appropriate workflows, eliminating human intervention and delays wherever possible
- Ensuring accountability – no more buck-passing, with every DevOps function under a single roof and a clear line of authority
- Improving security – operations builds in guardrails that give developers safe, granular, just-in-time access to resources they can't normally reach
- Ending conflicts – ease tension between departments by simply (and safely) giving developers what they want, when they want it

In short, you'll get your teams back to work like never before. And best of all, you'll do it not by adding another shiny new tool your teams will groan about, but by transforming a tool you're already using into a DevOps powerhouse. Now that's truly using your head.