2018 DevOps Predictions (Part 2)
Executives predict that DevSecOps will become mainstream along with serverless architecture, which will simplify DevOps toolchains.
Given how fast technology is changing, we thought it would be interesting to ask IT executives to share their thoughts on the biggest surprises in 2017 and their predictions for 2018.
Here is the second of two articles covering their predictions for DevOps in 2018. We covered additional predictions in a previous article.
Aruna Ravichandran, Vice President, DevOps Product & Solutions Marketing, CA Technologies
We will continue to see end users make a tighter connection between a company’s brand and the quality of its code, based on their experiences across the company’s applications. As a result, in 2018 more organizations will intensify their automated continuous testing efforts and shift testing left, earlier in the SDLC, as they work to release higher-quality code faster. They will also increase their adoption of digital experience monitoring and analytics solutions that help them understand how users interact with applications and apply enhancements that optimize their experience.
In 2018, leading DevOps organizations will do something differently: continuous testing. These organizations will invest in people and empower their developers with continuous testing capabilities that allow a more complete range of quality checks (including security, i.e., DevSecOps) to be performed earlier, more frequently, and with greater reliability. QA will no longer be a bottleneck; instead, it will become a Testing Center of Enablement, a group that works in a support or consultancy role to help developers build better test cases. These organizations will also develop better processes in which they automate everything at every phase of the SDLC. Finally, they will migrate away from legacy testing solutions that inhibit shift-left methodologies by adopting open-source and cloud-based testing solutions. Testing will shift left to developers, testing tools will become democratized, and testers will evolve into automation engineers.
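As a minimal sketch of what shifting a quality check left can look like in practice, the snippet below runs a simple security scan alongside a developer's own unit tests, long before code reaches a central pipeline. The check and its pattern are hypothetical illustrations, not any vendor's tooling:

```python
import re

# Hypothetical shift-left check: a fast, developer-owned scan for
# hardcoded credentials that runs with the unit tests. The regex is
# deliberately simple and illustrative only.
SECRET_PATTERN = re.compile(
    r"(password|api[_-]?key|secret)\s*=\s*['\"][^'\"]+['\"]",
    re.IGNORECASE,
)

def find_hardcoded_secrets(source: str) -> list:
    """Return lines that appear to embed a literal credential."""
    return [line.strip() for line in source.splitlines()
            if SECRET_PATTERN.search(line)]

# Clean code reads secrets from the environment; leaky code inlines them.
assert find_hardcoded_secrets('password = os.environ["DB_PW"]') == []
assert find_hardcoded_secrets('password = "hunter2"') == ['password = "hunter2"']
```

Because the check is just another test, failures surface on the developer's machine in seconds rather than at the end of the pipeline.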
The new year will also bring a heavier focus on test data management, as businesses continue to find the availability of test data to be one of the most significant constraints driving lead time in test cycles. Additionally, solid test data management practices will be key to overcoming compliance hurdles and avoiding the huge fines associated with the EU General Data Protection Regulation (GDPR), which comes into effect on May 25, 2018.
Whether organizations mask it, clone it, mine it, or generate synthetic data, they will need to understand the structure of that data. At enterprise scale, organizations will have to automate the fulfillment of everyday test data requests; automation will reduce fulfillment times from days and weeks down to minutes. Next, there needs to be some thought about how the consumption model will change as testing continues to shift left and right. As more testing shifts left, consumption by developers will increase, which leads to consumption of the data via APIs.
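The masking approach mentioned above can be sketched in a few lines. This is a hypothetical example of pseudonymizing PII fields while preserving record structure; the field names and hashing scheme are illustrative assumptions, not a particular product's behavior:

```python
import hashlib

# Hypothetical sketch: masking production records for use as test data.
# PII fields are replaced with a deterministic pseudonym so masked data
# stays stable across runs; non-PII fields pass through unchanged.
def mask_record(record: dict) -> dict:
    masked = dict(record)
    for field in ("name", "email"):
        if field in masked:
            digest = hashlib.sha256(masked[field].encode()).hexdigest()[:8]
            masked[field] = f"user_{digest}"
    return masked

prod = {"id": 42, "name": "Ada Lovelace", "email": "ada@example.com", "plan": "pro"}
test_row = mask_record(prod)
# Fields such as "id" and "plan" survive untouched, so the masked row
# still exercises the same code paths as production data.
```

Deterministic masking also preserves referential integrity: the same production value always maps to the same pseudonym across tables.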
In 2018, we expect to see leading businesses forgo practices where IT teams remain aligned to discrete technology touchpoints (web, mobile, back-end processes), executing separate functions while trying to ensure the “best user experience” without any line of sight or intelligence into the other areas that shape that experience.
To mitigate this issue, DevOps teams will begin adopting digital experience monitoring and analytics solutions that correlate data from the point of customer engagement to back-end business processes. By contextualizing and correlating data across user experience, application performance, infrastructure/systems, and networks, every team (be that front-end UX designers, Site Reliability Engineers, or support teams) will gain critical and shareable insights into the experience of every customer, on any journey, from any digital platform, and across every business channel. Incorporating digital experience monitoring and analytics solutions will become an essential way for enterprises not only to manage the exponentially increasing amount of raw data (logs, metrics, transactions) but also to analyze it and identify patterns and behavior.
Through actionable insights, we see monitoring evolving rapidly from a reactive, end-of-cycle, production-only function to a service leveraged across the DevOps toolchain. For example, product owners and teams will use digital experience monitoring to guide and improve designs, while development teams will drive better business outcomes by correlating code and practices to application performance.
David Millington, Product Manager, Embarcadero
The lines between database development and database administration will be further blurred, while new lines will be drawn between depth and breadth. DBAs will need a broader understanding of many different database platforms, while their developer counterparts will have a deep understanding of only a few. Also, companies will search for a balance between fast development on many platforms versus native experience. The popularity of web frameworks like React Native for mobile apps will decrease, and cross-platform frameworks giving native and performant behavior on every platform will rise.
Thomas Di Giacomo, CTO, SUSE
Another priority will be bringing IT operations and developers closer together. It started a few years ago with DevOps and containers, but it is becoming ever clearer that bringing those two together, and the application power that results, will become even more relevant in the future.
John Viega, CEO, Capsule8
The Cloud-Native Experiment Will Go Live: The cloud-native world, defined by containers, DevOps, continuous delivery methods, and microservices, has seeded itself in more than 80% of large enterprises, but often in just an experimental capacity. That won't be good enough anymore. In 2018, many more companies will shift from their legacy environments to hybrid or even fully cloud-native approaches. Companies such as Google, Facebook, and Amazon were at the forefront of the cloud-native movement, building highly scalable, dynamic applications, and in 2018 any corporation looking to use technology as a strategic advantage will rebuild its application stack and move toward a cloud-native approach. This, of course, means security teams will need to rethink their strategy for this stateless (or stateful/stateless hybrid) world. Security has been one of the main issues holding a number of these companies back, but the value of the cloud-native environment is too great to ignore.
Mark Pundsack, Head of Product, GitLab
Improved ease-of-use of Kubernetes: There’s a steep learning curve to Kubernetes and various organizations are aiming to improve the developer experience. There’s no clear winner on which tool will achieve that, but efforts will continue, and some tool will ultimately rise to the top.
More containers running in production environments: Containers are a core pillar of DevOps, but in 2018 they will hit a tipping point where we’ll see more developers running containers in production on Kubernetes than not.
Backlash against the DevOps toolchain: Demanding a more complete approach, developers will speak out against the current DevOps toolchain. As opposed to logging into 10 or more different tools to accomplish a task, both developers and enterprises will want an approach that is seamless and effective.
The prominence of DevSecOps: In 2017, few enterprises had embraced DevSecOps as a strategy to secure their development projects. In 2018, it will become top-of-mind, with enterprises baking security into the DevOps lifecycle rather than treating it as an afterthought. Vendors will either offer this capability voluntarily or add it at customers' request.
Stephanos Bacon, Senior Director, Portfolio Strategy, Red Hat
After years of hype and positive coverage, we will see a number of high-profile examples of "DevOps failures" that will have two parallel effects. In some organizations, there will be a chill and a reassessment of the initial excitement. Organizations that are more forward-thinking, however, will look to these failures as learning experiences and will take more seriously both the changes in culture and process, and the requirements on underlying platforms and software architectures, needed to benefit from DevOps. Vendors that can work with their customers across this full spectrum of needs stand to benefit.
In the software delivery lifecycle, it has been speculated that a problem found in maintenance, after release to production, is 100x more expensive than a problem found in design. Other research indicates that software glitches in the US cost the economy $1.1 trillion in 2016. Failures create huge levels of pressure, customer dissatisfaction, and financial stress. 2018 will be the year that this issue is given much greater priority and, as a result, we are likely to see continuous delivery go mainstream as organizations aim to eliminate ‘glitch risk’.
Dan Juengst, Principal Technology Evangelist, OutSystems
The DevOps movement will continue especially in the world of cloud-native and container-based application architectures where the automation capabilities of DevOps will become a requirement in order to successfully manage application delivery and change.
Nick Zimmerman, Senior Site Reliability Engineer, SparkPost
In 2018, serverless architecture should continue to grow in acceptance and implementation. Expect big things from the major providers to help support this movement including the improvement of the technologies connecting serverless functions to containerization.
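For readers new to the model, a serverless function is just a stateless handler the platform invokes on demand. The sketch below is written in the style of an AWS Lambda Python handler; the event shape is a hypothetical example, not any service's schema:

```python
import json

# Minimal stateless function in the style of an AWS Lambda handler.
# Each invocation receives all of its input in `event`; no state
# persists between calls, which is what lets the platform scale it.
def handler(event, context=None):
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is a plain function, it can be unit-tested locally by calling it directly, with no container or server running.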
Steven Mih, CEO, Aviatrix Systems
Networking as code will finally join compute and software as code in public cloud environments. This transition will happen as purpose-built cloud networking solutions, designed from the ground up for the cloud generation, will be deployed widely in enterprises.
Networking as code will become part of the DevOps stack. It will be analogous to any other tool in the DevOps software stack.
Responsibility for networking/connectivity functionality within enterprises will shift from the traditional IT networking teams to DevOps. DevOps will be able to automate everything, including networking and security.
An expanding range of infrastructure-as-code tools will be widely adopted by DevOps. Terraform and public cloud vendors’ other infrastructure-as-code tools address only a subset of the networking-as-code functionality. Purpose-built cloud-generation networking as code will become standard in DevOps toolkits.
Ravi Mayuram, SVP of Engineering and CTO, Couchbase
Secure by Default Takes Precedence Over Ease of Use in DevOps: DevSecOps -- or the merging of security with DevOps -- is rising in prominence to combat omnipresent security vulnerabilities by incorporating preventative measures in the initial development stages. While there was previous tension between easy-to-use and secure-by-default solutions, security has become top of mind again for developers due to GDPR compliance and increasing data regulations. As NoSQL gains prominence in the enterprise space and databases are filled with more customer data, built-in security will continue to become increasingly important.
Serverless architecture proliferates within organizations: As cloud technology has matured, serverless architecture has surfaced to compose reactive architectures that drive smaller, more efficient services. Serverless architecture will constitute the next infrastructure overhaul at the application layer, especially as DevOps seeks to drive business value in new ways. 2018 will see serverless architecture spike in adoption, and new use cases will emerge to assemble -- and disassemble -- the stack in ways that haven’t been possible before.
Sarah Lahav, CEO, SysAid
Greater adoption of DevOps, with more focus on culture. Whilst the most innovative, cutting-edge IT companies have been enthusiastic about DevOps for quite some time now, for more mainstream organizations, it wasn’t really on the ITSM map. Recently, that’s flipped. DevOps has gone mainstream, with more and more organizations adopting some of its ideas and methodologies to help them deliver value faster, and at lower risk. Some of these IT organizations focus on the technical aspects of DevOps, and there is certainly a lot of value to be gained from automating the toolchain that integrates and deploys code, resulting in smaller, safer and faster changes to IT.
But organizations that never go beyond this focus on technology will miss out on many of the real benefits of the DevOps approach. These benefits come through managing Flow, Feedback, and Experimentation and Learning, the Three Ways of DevOps, and organizations won't gain them without doing the work needed to understand the Three Ways.
Unfortunately, as so often happens when an innovation goes mainstream, some organizations will claim to be “doing DevOps” when all they have done is adopt some of its typical technical features. When this fails to deliver according to expectations, they'll decide that “DevOps has failed” or “DevOps doesn't work.” They'll then move on to the next popular fad, and will never reap the benefits that come to organizations that see DevOps as a combination of culture, agile, lean, measurement, and sharing (CALMS), and so introduce and implement technical change appropriately.
Jason Hand, DevOps Evangelist, VictorOps
As the need to optimize speed and bring down Mean Time to Repair (MTTR) continues to grow, companies are looking for ways to get better at observation and take the pulse of their systems without making risky moves. Given this, more companies will look to bridge the gap and meet in the middle between traditional ITSM and DevOps.
We will see a blending of IT roles within Dev and Ops, creating more of an “engineer” position. For example, traditional roles such as Quality Assurance (QA), Sec, Ops and Database Administrators (DBAs) will blend together into engineering teams that are more product-focused.
With continuous delivery practices continuing to grow, we expect to see more engineering teams, regardless of responsibilities, want to know what's going on within their delivery efforts through insights and analysis. Today's engineering teams demand to know about problems earlier and earlier in the systems development lifecycle (SDLC). If a problem is actionable, they want to be alerted well before it reaches the end of the delivery pipeline and lands in production.
There will be a shift in terms of how people think of DevOps itself. DevOps has generally meant the process of speeding up the delivery of software, but I think more organizations are realizing that it’s all about delivering the value of your product/service to the end user. It’s as simple as that.
Leo Laskin, Senior Solutions Architect, Sauce Labs
Continuous testing gets its due in the mobile arena: The year-over-year growth of mobile shopping has finally dethroned traditional desktop purchasing as king of the holiday hill, according to Adobe Analytics. With this swing, a greater emphasis on mobile testing is needed to ensure app experiences are up to snuff. For developers to keep pace with the rapid deployments associated with mobile, continuous testing will need to play a pivotal role. A mechanism that has struggled to find a home in the development lifecycle, continuous testing will finally pay major dividends in 2018 with the rise of mobile.
AI testing grows up: With mixed success in ‘17, AI testing will take a major step forward next year. Currently, AI testing can help develop tests, but the key breakthrough will feature AI testing that can easily identify bugs within test failures without needing QA staffers to pinpoint issues themselves. This will cut down on cyclical review time and speed up deployments, pushing efficiency boundaries for release times.
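One building block behind that kind of failure triage can be sketched simply: grouping similar failure messages so one root cause is reviewed once rather than once per failing test. The messages and similarity threshold below are illustrative assumptions, not any product's algorithm:

```python
from difflib import SequenceMatcher

# Hypothetical sketch of automated failure triage: greedily bucket
# failure messages whose text similarity exceeds a threshold, so a
# human reviews each probable root cause once.
def group_failures(messages, threshold=0.8):
    groups = []
    for msg in messages:
        for group in groups:
            if SequenceMatcher(None, group[0], msg).ratio() >= threshold:
                group.append(msg)
                break
        else:
            groups.append([msg])
    return groups

failures = [
    "TimeoutError: checkout page did not load in 30s",
    "TimeoutError: checkout page did not load in 31s",
    "AssertionError: cart total expected 19.99, got 0.0",
]
# The two near-identical timeouts collapse into one bucket; the
# assertion failure stays separate.
```

Production systems would use richer signals (stack traces, failing assertions, change history), but the grouping idea is the same.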
Organizations don’t automate everything yet -- but develop a culture of intelligent automation: Though we’ve already begun to witness a cultural shift in how organizations approach testing, 2018 will see a continued shift in the role manual testers play -- and an increased demand for intelligent automators. Organizations are departing from the traditional testing paradigm of taking manual test cases and writing automation on those tests and shifting toward revamped tooling and intelligent frameworks to implement automation from the get-go. Automation is the future: and the upcoming year will see businesses increasingly deploy automation within their organization -- beyond just using automated deployments.
“Automated” CI-CD pipelines will continue to fall short of achieving full automation. The vision of a fully automated CI-CD pipeline will remain an unrealized goal for most enterprises, as the advancing technology continues to require manual steps and special code and scripts to operate -- especially for run-time security. Enterprises will stay on target in seeking the ability to integrate pipeline components and restructure their organizations around more streamlined and automated processes, but the accomplishment of these goals will remain in the future.
Opinions expressed by DZone contributors are their own.