
2018 Cloud Predictions (Part 2)


Public, private, on-prem, hybrid? Bet on multi-cloud, and get ready to see the massive impact containerization and the IoT edge will have on cloud computing.


Given how fast technology is changing, we thought it would be interesting to ask IT executives to share their thoughts on the biggest surprises in 2017 and their predictions for 2018.

Here's the second of two articles with predictions for the Cloud in 2018. You can find Part 1 here.

Thomas Di Giacomo, CTO, SUSE

The top priority for enterprise IT, and it's not really a new technology trend, is accommodating existing and legacy IT alongside the new: making sure people can adopt new technologies while keeping what they already have running. This will remain a priority not just next year but for years to come, as companies keep adapting to and adopting new technology while running what they have. Hybrid and multi-cloud were buzzwords a few years ago, and now they are back because it's happening in real life with real companies. Independent analyst studies show that most companies are running a hybrid cloud strategy [private and public clouds and/or on-prem] and are making it a priority now.

As technology and emerging markets mature, these solutions will consolidate. We have been around for 25 years and were at the forefront of Linux, so we have the historical context to comment on which organizations will consolidate or expand. In the early days of enterprise Linux, there were many different solutions, and consolidation followed as the technology matured. We saw this happen with Linux, and we have seen it with OpenStack as well: at first only a handful of companies were interested, then interest grew until there were dozens and dozens of OpenStack solutions. As the technology and the market mature, consolidation sets in, reducing both the number of solutions and the number of companies offering them.

Rob Young, Manager, Virtualization Product and Strategy, Red Hat

In 2018, expect to see the market for virtualization solutions disrupted: customers will be less willing to pay a premium for virtualization and will expect it as part of the infrastructure they currently run or are planning to run. Automation, management, and orchestration of on-premises, remote, and cloud-based application deployments will also become even more of a business imperative for enterprises. We'll see users with Mode 1 legacy applications shift those existing workloads, as well as new development investments, to Mode 2 infrastructures: think hybrid cloud running containers.

Gunnar Hellekson, Director, Product Management, Linux and Virtualization, Red Hat

It's all about containers in 2018, and virtual machines running on containers will gain momentum in the new year as Kubernetes continues to evolve. It's important to remember, however, that virtualization and containers are not competing technologies but complements to one another. We will continue to see the vast majority of containerized infrastructures operating on hypervisors, with early adopters actually merging the two with Kubernetes.

Chris Colotti, Field CTO, Tintri

2018 will be the year disaster recovery (DR) moves from being a secondary issue to a primary focus. The last 12 months have seen mother nature throw numerous natural disasters at us, which has magnified the need for a formal DR strategy. The challenge is that organizations are struggling to find DR solutions that work simply at scale. It has become something of a white whale to achieve, but there are platforms designed to scale and protect workloads wherever they are, on-premises or in the public cloud.

Gary Watson, CTO and Founder, Nexsan

In 2018, we’re going to see a palpable retreat from the public cloud. While it works well in certain situations, such as short-term storage, the reality is that public cloud isn’t nearly as cheap and efficient as people would like to believe. Companies still need an army of people to manage it, and the bill for most options keeps climbing quarter after quarter as storage and CPU usage grow over time, eventually making public cloud expensive compared to storing the same data on-premises. In the coming years, new storage technologies will make orchestrating on-premises servers and virtual machines easier, at lower cost and with manageability and scalability as easy as in the cloud. I predict a return to on-premises will correlate with a slowdown in public cloud adoption.

Jason Collier, Co-Founder, Scale Computing

In 2018 and beyond, the future is all about simplifying hybrid IT. By using hyperconverged solutions to support remote and branch locations and make the edge more intelligent, in conjunction with a hybrid cloud model, organizations will be able to support rapidly changing application environments. As more and more organizations adopt the hybrid cloud model, they will look for solutions that allow the apps they built for on-prem to run in the cloud, which will be a game changer for end users, channel partners, and MSPs globally.

Dan Juengst, Principal Technology Evangelist, OutSystems

We will see more enterprises go “all in” on public cloud as the features, capabilities, and economies of scale of public cloud continue to improve.

Gordon Haff, Technology Evangelist, Red Hat

In 2018, hybrid cloud discussions will gain in sophistication around portability of workloads. We have a lot of useful tools in the portability tool chest: software-defined storage, hybrid cloud management, container standardization through the Open Container Initiative, broad support for Kubernetes, and, of course, Linux. However, we also need to appreciate that the very large datasets created and used by the Internet-of-Things, Machine Learning, and other new workloads may limit "seamless" portability. Understanding the best architectures and practices for emerging application types should be a priority.

Dmitri Tcherevik, CTO, Progress

Everything as a Service

The software industry was at the forefront of the SaaS transformation. Software companies went from selling packaged solutions to delivering online services. Instead of hunting elephant-size deals and fasting between “kills,” we switched to subscription-based business models that deliver consistent and predictable revenue.

Now other industries want in. Manufacturers want to switch from selling equipment outright to offering this equipment as a service. The buyer will lease the equipment in tandem with maintenance that is required to fulfill the service level agreement included in the contract. Both sides benefit. The buyer will get equipment that is guaranteed to work. The seller will have an opportunity to build a relationship with the buyer.

In order to make this change, manufacturers need to start thinking of themselves as service providers. To deliver services versus products, they need to deploy and constantly update sophisticated IT systems. As a result, many of these businesses will be transformed into software companies, as exemplified by Ford, GE, GM, and others. The phenomenon is spreading to other industries, such as healthcare, biotech, financial services, and telecommunications.

The second phase of the ‘Software is Eating the World’ phenomenon

During the first phase, non-software companies, such as banks and manufacturers, packaged their internal solutions as software services and APIs, effectively becoming SaaS companies. Examples include Charles Schwab, GE, John Deere, Caterpillar, Monsanto, and many others. During the second phase, the same companies will turn into Insight-as-a-Service providers. They have the domain data and the data scientists.

Cloud-native applications

Many of today’s business applications were built as monoliths: one single body of code designed to be built and deployed as an indivisible whole. A monolith's state is typically managed in one database and modeled as a single data model.

Mobile applications, subscription-based business models, and cloud deployments are making monoliths obsolete. To support the higher workloads generated by mobile apps and service API calls, companies need to scale up the backend.

The only way to scale up a monolith is to throw more hardware at it. This hardware needs to be statically allocated for peak capacity. It goes unused most of the time, which is expensive and wasteful.

To take advantage of the elastic nature and flexible pricing offered by cloud computing, monoliths need to be broken down into functions and services that can be managed independently. These applications are often known as cloud-native apps.

A great many companies are now embarking on this change. They are strangling their monoliths and gradually replacing them with collections of loosely coupled services.
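The "strangler" migration mentioned above is often implemented as a thin routing facade: requests for endpoints that have already been extracted go to the new service, while everything else still hits the monolith, and the extracted set grows over time. A minimal Python sketch of that routing decision (all service names and URLs here are hypothetical):

```python
# Minimal sketch of a strangler facade: route already-extracted endpoints
# to new services; fall back to the legacy monolith for everything else.
# All service names and URLs are hypothetical.

MONOLITH_URL = "http://legacy-monolith.internal"

# Routes that have already been carved out into independent services.
# This table grows as more of the monolith is strangled.
EXTRACTED_ROUTES = {
    "/billing": "http://billing-service.internal",
    "/invoices": "http://billing-service.internal",
}

def resolve_backend(path: str) -> str:
    """Return the backend base URL that should serve this request path."""
    for prefix, service_url in EXTRACTED_ROUTES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return service_url
    return MONOLITH_URL  # not yet migrated: the monolith still owns it
```

The key property is that clients never see the migration: the facade's interface stays stable while ownership of each route moves behind it.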

Steven Mih, CEO, Aviatrix Systems

Multi-cloud deployments will begin to take off. Whether to avoid lock-in to a single public cloud vendor, or to have the ability to match particular workloads and applications to specific cloud vendors’ strengths, we expect more enterprises to begin operating in multi-cloud environments by 2018.

The emergence of networking as code will enable greater hybrid cloud and multi-cloud adoption. Just as Terraform and other infrastructure-as-code tools have become prevalent in public cloud environments, the emergence of networking-as-code solutions will make it easier for enterprises to adopt and expand their hybrid cloud and multi-cloud environments.
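The core idea behind networking-as-code is the same plan/apply workflow Terraform popularized for compute: describe desired network state as data, diff it against current state, and emit only the changes. A toy Python illustration of that diffing step (the peering-connection model is invented for this example):

```python
# Toy illustration of "networking as code": desired network state is
# plain data, and an idempotent plan step computes the diff against
# current state. The resource model here is invented for the example.

def plan(current: set, desired: set):
    """Return (to_create, to_delete), in the style of a Terraform plan."""
    return desired - current, current - desired

# Desired multi-cloud peering connections, declared as data.
desired_peerings = {
    "aws-prod <-> gcp-analytics",
    "aws-prod <-> azure-dr",
}

# What actually exists right now.
current_peerings = {"aws-prod <-> gcp-analytics"}

to_create, to_delete = plan(current_peerings, desired_peerings)
# Only the missing peering would be created; nothing is torn down.
```

Because the plan is a pure function of declared and observed state, re-running it after a successful apply yields an empty diff, which is what makes the workflow safe to automate.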

There will be a tipping point in the number of people who realize that private clouds are more expensive overall than public clouds. We believe that up to 90% of people in technical decision-making positions within enterprises will reach this realization in 2018.

Public cloud vendors will have to become more transparent about how they operate. Right now, public cloud operations are a black box. But the European GDPR requirements will hit in 2018, and advanced enterprise users are becoming more sophisticated in their cloud strategies and need to be able to differentiate their offerings based on the customer experience they provide. As a result, public cloud vendors will be forced to reveal more about how and where specific data packets are hopping through their cloud networks.

You’ll lose your technical job if you can’t spell Lambda. In other words, cloud skills will become a requirement for anyone holding a technical position within an enterprise. Cloud experts don’t have 25 years of hard knocks and battle scars behind them. There’s a lot of new terminology, new perspectives, and new skills required for managing cloud operations. But by 2018 or soon thereafter, anyone in a technical position at an enterprise will either understand how to navigate the cloud world or be left behind. Knowing your way around AWS, Azure, and Google Cloud Platform will be assumed, just as today it’s assumed you can use Outlook, Excel, or Salesforce.
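For readers who can spell Lambda but haven't written one: an AWS Lambda function in Python is just a function that receives an `event` payload and a `context` object. A minimal sketch, returning the dict shape commonly used behind an API Gateway integration:

```python
# Minimal AWS Lambda-style handler in Python. Lambda invokes the function
# with an `event` payload (here, a dict) and a `context` object supplied
# by the runtime. The return shape shown is the usual pattern for an
# API Gateway proxy integration.
import json

def handler(event, context):
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Deployment details (packaging, IAM roles, triggers) are where most of the real learning curve lives; the handler itself is the easy part.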

‘Network engineer’ will no longer be a job title. Networking and connectivity functionality will be part of cloud operations and handled by the cloud team.

The next two years will be extremely critical for public cloud vendors. With 80% of enterprise workloads still running in on-prem datacenters, the public cloud is still seen as a discretionary play. If enterprises have difficulty moving more workloads to the cloud — which is the case now with existing enterprise applications — CIOs will begin to say no to budgets for cloud deployments. If enterprises can’t migrate to the cloud, the public cloud vendors’ business will stagnate. If the situation doesn’t improve in the next couple of years, AWS and the other public cloud vendors will be in trouble.

Ravi Mayuram, SVP of Engineering and CTO, Couchbase

Multi-cloud Takes off as the Future of Cloud Technology: The cloud has matured from private to hybrid and now multi-cloud technology, which will become the de-facto standard for cloud architecture as companies seek to optimize workloads and avoid vendor lock-in. Modern data infrastructure will accompany the adoption of multi-cloud, as data security and integrity become mission-critical with data moving between clouds and an increasing number of touchpoints. No longer just a competitive advantage, multi-cloud will become the new cloud standard in 2018.

Edge Computing Leaps to the Forefront: Cloud computing revolutionized virtualization and ushered in the digital era, and now edge computing will bring those digital learnings back to hardware for applications that extend customer engagement in novel ways. Industrial IoT applications, sensors, and VR-powered devices use edge computing to provide offline capabilities that deliver the seamless, real-time experiences modern users expect. Data capabilities and chip technology are now advanced enough to support real-time compute at the edge, and 2018 will see organizations updating infrastructure to take advantage of the benefits of edge computing.

Jevon MacDonald, CEO, Manifold

Developer services will become the latest battleground in the cloud wars: The success of independent services like MongoDB, SendGrid, and Twilio has proven that the best developer services don’t always live on one of the Big Three’s cloud platforms. And with Kubernetes eliminating the complexity of creating multi-cloud applications, the industry will continue to shift towards cloud-agnostic solutions. In 2018, Amazon, Google, and Microsoft will need to address the elephant in the room and decide whether to open up their clouds in a meaningful way or continue to fortify their walled gardens. Monopolies have an expiration date, and the first to realize this and open up its cloud will become the new center of gravity for developers and ultimately rule the cloud services ecosystem.

Matt Creager, Vice President of Growth and Developer Relations, Manifold

Kubernetes will give rise to a new breed of PaaS: Many of the features we once attributed to PaaS, from zero downtime deployments to executable packaging deployments, are now the responsibility of containers, container registries and orchestration tools like Kubernetes. In 2018, Docker and Kubernetes will be the foundation of a new generation of PaaS that offer highly opinionated workflows for developers building specific classes of application. Imagine a fully-managed PaaS based on Brigade — a tool designed to make implementing custom deployment pipelines simple.

Serverless will become the next browser war: For all of the promise and momentum behind serverless today, the reality is that its fate is far from certain. Every major IaaS now offers a different flavor of FaaS, making it next to impossible to move between them. 2018 will be a pivotal year in determining the technology's fate. Next year, we’ll debate a specification for the serverless runtime; soon after, it will be governed by the CNCF alongside Kubernetes.
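The lock-in described here shows up concretely in entry-point signatures: each FaaS hands your code a request in a different shape, so "moving between them" means rewriting adapters. A toy illustration keeping the business logic provider-neutral behind per-provider shims (the adapter shapes are simplified and vary by runtime version):

```python
# Toy sketch of FaaS divergence: one provider-neutral function, wrapped
# by per-provider adapters. Signatures are simplified illustrations.
import json

def greet(payload: dict) -> dict:
    """Provider-neutral business logic: plain data in, plain data out."""
    return {"message": f"hello, {payload.get('name', 'world')}"}

# AWS Lambda style: handler(event, context) -> dict
def aws_handler(event, context):
    return {"statusCode": 200, "body": json.dumps(greet(event or {}))}

# Google Cloud Functions HTTP style: handler(request) -> response body
def gcf_handler(request):
    return json.dumps(greet(request.get_json() or {}))

# Azure Functions style (simplified): handler(req_body) -> response body
def azure_handler(req_body: dict) -> str:
    return json.dumps(greet(req_body or {}))
```

Keeping the logic in `greet` and treating the handlers as disposable glue is one common hedge against this fragmentation until a common runtime specification emerges.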

Peter Cho, Vice President of Product, Manifold

Cloud services will fail or succeed based on one factor: developer experience: As markets mature, having the best technology stops being a guaranteed ticket to success. Instead, it’s the overall user experience that drives adoption and retention. We’ve seen this with mobile banking. We’ve seen this with file sharing. And in 2018, this trend will become the top driver of purchasing decisions for developer services. Looking ahead, the top services will be the ones delivering what developers need most to succeed, which includes the basics, like great documentation and support, but also modularity and extensibility. Most important of all, services that are cloud, platform, and language agnostic will rise to the top of the market.

Fei Huang, CEO of NeuVector

Multi-cloud container deployments will debut. Enterprises will increasingly deploy their services across multiple clouds, both in the name of resilience and avoiding vendor lock-in. Securing (and monitoring) these deployments will be a challenge that requires multi-cloud aware security tools.

Lynn Elwood, Vice President of Cloud & Services Solutions, OpenText

In 2018, organizations will look for invisible infrastructure: seamless integration of cloud, on-premises, and hybrid infrastructure, allowing users to do business regardless of where applications live. Organizations will continue moving to the cloud, but that does not mean they will look for immediate rip-and-replace strategies. Especially among large enterprises, organizations will look for partners to help them complete their digital transformations while leveraging the significant technology investments already in place.

New cloud ecosystems (many of which are borderless) and new regulations like GDPR will change the way we look at data ownership in 2018. A common view for many organizations today is that they own the data they collect (and can pretty much do with it what they please). GDPR will shift how we view data ownership to focus on the individual. Under new rules like GDPR, individual consumers' data privacy must be protected, and organizations that don’t change their behavior may face hefty fines and reputational damage.

In 2018, we’ll continue to see growth in managed services as organizations turn to partners to manage details of cloud and on-premises infrastructure and applications, especially among small to mid-size businesses looking to stretch budgets and rein in staff costs.

John Gentry, CTO, Virtual Instruments

A shift from the ‘hybrid data center’ to the ‘hybrid application’

  • The realization that there are multiple components to a single application living or running in different data centers, or on different infrastructure types
  • As opposed to the previously experimental approach or a blind “cloud first” strategy, we will start to see the selection of the right technology for the application type or profile becoming a more deliberate process 

Cloud repatriation - a big theme for 2018

  • Repatriation is recognition that not all workloads are suitable for a public cloud; a hybrid application has components that live in the cloud, but also those that should be on-prem.
  • The repatriation and redeployment process will shake up the cloud vendor ecosystem, where AWS will no longer be the de facto solution as companies re-evaluate their cloud strategy

The rise of application-centric infrastructure performance management

  • The application will become the center of the conversation for IPM (as apps reside across the layers of compute, network and storage environments) and workload behavior analysis and intelligent placement will become the driver for technology selection. 
  • A trend toward more collaborative, cross-functional teams and less specialization will be reflected in the performance monitoring approach. IT organizations will use fewer silo-centric tools and more vendor-agnostic, cross-silo monitoring platforms that include machine-learning-based analytics, moving from reactive troubleshooting to proactive performance management.

John Considine, General Manager of Cloud Infrastructure Services, IBM

As GDPR becomes a reality, cloud security growth and sophistication will skyrocket. GDPR becomes a reality on May 25, 2018 and will affect companies both in and out of the EU that handle the data of EU citizens. According to a GDPR readiness survey, almost 40 percent of businesses are fearful of a major compliance failing, and the financial penalties for non-compliance are severe. As they navigate the complexities of GDPR, enterprises will double down on cloud security and focus on taking security measures to ensure their cloud apps protect personal data from loss, alteration, or unauthorized processing. In response, cloud service providers will continue to take extraordinary steps to ensure security is at the core of the entire cloud stack. Cloud security services will become more sophisticated with advancements to encryption capabilities, the continued integration of AI, and the development of security services that work seamlessly across public, private, and multi-cloud environments.

Reaching a tipping point in maturity: containers, Kubernetes, and serverless. Microservices architectures built on containers and serverless computing have revolutionized the speed at which apps can be built and how they connect to the most competitive technologies today: AI, blockchain, and machine learning. In 2018, we will see the adoption of these technologies reach a tipping point. They will move from early adoption to becoming the de facto standard for complex and production-ready apps across industries and companies of all sizes.

This shift is being driven by new tools that emerged in 2017, like Grafeas, Istio, and Composer, which enable developers to more securely manage and coordinate the many moving parts created by building with containers, serverless, and microservices. These tools give developers greater visibility into who is working with data, what’s being changed, and who has access, leading to better security. The result will be an uptick in the development of mature apps that can span and operate across multiple systems, teams, and data streams.

Cloud-native thinking will drive a cultural shift that spurs innovation. Cloud thinking – both cultural and technical – is the only way forward, and many organizations are moving towards a true cloud-native architecture. As this shift occurs, organizations are embracing new technologies that are increasingly easy to implement. As we enter 2018, these changes will also begin to drive greater cultural shifts within these organizations. Technical teams, from data scientists to developers, are now considering questions like, how does my architecture foster innovation by handling the explosion of data from IoT? How can I take advantage of serverless to accelerate development? Is security built in to support blockchain solutions in my industry? As teams answer these questions, they will begin to transform into a more collaborative and iterative learning-based culture. This shift to cloud-native thinking will drive organizational innovation that brings together professionals with specialized skillsets from app developers, data scientists and data users in brand new ways.

Industry solutions are the future of cloud in 2018. Enterprises have moved beyond pure infrastructure-as-a-service and are turning to the cloud as a platform for innovation and new business value. As enterprises become focused on the bigger picture of what cloud can do for their business, they will look for industry-specific solutions that provide a single architecture from infrastructure all the way up to higher value AI and analytics services as opposed to purchasing piecemeal cloud services. For example, financial services companies will look to the cloud for high-performance computing as a service that couples GPU-accelerated infrastructure with embedded AI and deep learning capabilities in a single solution to help them turn the billions of data points they collect into actionable insights.

Nic Smith, Global VP of Product Marketing for Cloud Analytics, SAP

  • Data gravity to the cloud. 2018 is shaping up to be the tipping point for cloud analytics with the growth of cloud data and applications. Most organizations need to operate in a hybrid mode with analytics across data and applications as they transition and take advantage of cloud. An analytics strategy that is able to address this transition will be critical to run the business while bridging to the future.

  • A new plateau for ease of use. Natural language will let users ask their data a question and receive contextual insight and recommendations without having to understand the underlying data schemas; this will drive new use cases and adoption.

  • Value of end-to-end cloud analytics. Cloud-based analytics platforms will emerge to deliver a rich set of analytic capabilities to discover, plan, predict, visualize, prepare, collaborate, model, simulate, and manage, all leveraging a common data logic. The SaaS model gives the business a way to take advantage of continual product innovations in a seamless experience with a common UX. This will address analytical requirements throughout the organization at a lower TCO than fragmented solutions, which cause inconsistencies.

  • Contextual insights delivered in the moment. Organizations will take analytics to the edge with contextual insight delivered to users in their applications, in the most beneficial moment with relevant context. Customer churn analysis, workforce planning, sales compensation, supply chain logistics are just a few examples which will benefit from timely insights delivered to users in-context within their application workflow.

  • Insight as a service to rise. With the growth of cloud data and AI/ML automation, businesses will tap into context-rich insight that they do not own or control. By connecting to an advanced data network, users will access relevant external sources and combine them with internal data to offer new digital services to their customers.

Opinions expressed by DZone contributors are their own.
