Cloud 2019 Predictions (Part 2)
Cloud adoption continues, and containers make workloads easy to move.
Given the speed with which technology changes, we thought it would be interesting to get IT executives' predictions for 2019. Here's what they told us to look for regarding the cloud:
John Morello, CTO, Twistlock
Cloud sprawl is real and getting worse as more and more cloud services are introduced. Just as with server sprawl and VM sprawl before it, the challenge with cloud sprawl is governance: knowing what you actually have running, because you can't secure what you don't know about. Cloud providers make it so easy and seamless to create new services that it's easy to experiment, move on, and then forget that you've deployed a database or app exposed to the world. Organizations should stress operational discipline, such as using automation for all deployments, so there are clear boundaries, a defined process, and a basic record of the services they're using.
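As one illustration of that kind of record-keeping, here is a minimal sketch of a sprawl audit, assuming AWS and the boto3 SDK; the "owner" tag convention is a hypothetical policy used for the example. It walks every region and flags running instances that nobody has claimed:

```python
# Minimal cloud-sprawl audit sketch (assumes AWS + boto3).
# Flags running EC2 instances that lack an "owner" tag, a
# hypothetical tagging convention used here for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    client = boto3.client("ec2", region_name=region)
    paginator = client.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if "owner" not in tags:
                    print(f"{region}: unclaimed instance {instance['InstanceId']}")
```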
Serverless versus containers is not an either/or; what we see in the real world is that customers treat them as different tools with different strengths and weaknesses. Every customer we see using serverless is also using containers, choosing per scenario what's the best fit for a given app. For example, in financial services, serverless is often used as the modern equivalent of batch jobs, whereas containers are a more natural fit for modernizing J2EE apps. When you want more control and compatibility, you choose containers; when you want simplicity and are willing to give up some control, serverless can be attractive. Nearly everyone will have some of both.
Ben Bernstein, CEO, Twistlock
Serverless will continue to grow, but it will continue to be "containers' little brother" (i.e., it will scale proportionally to containers). I believe serverless is not a stand-alone stack and performs best when combined with other cloud-native technologies, so I predict that in 2019 we'll continue to see serverless computing grow alongside containers, Kubernetes, OpenShift, Istio, and other cloud-native technologies and solutions.
Dima Stopel, VP of R&D and Co-Founder, Twistlock
I do not see serverless as a natural replacement for the container paradigm. Serverless technology is great for stateless applications, but while in theory any application can be refactored to store all of its state in a database, in many cases that simply doesn't make sense. Containers, on the other hand, can support state. I also believe the adoption rate of serverless environments will be different. For a new application, it is relatively easy to develop using either containers or serverless. However, given a typical existing application, it is substantially easier to put it inside a container than to refactor it into a stateless, serverless app. This is why containers are conquering the market so quickly; I don't think it will be the same with serverless.
Rick Kilcoyne, VP of Solutions Architecture, CloudBolt
We're going to see containers make serious inroads in 2019. As CI/CD pipelines are built around containers, this will drive changes in how we view the cloud.
There will be a commoditization of the cloud — service providers must grapple with the notion that they are no longer selling solutions but selling a commodity. Because of this, the industry faces an interesting cost-structure predicament: enterprises want commoditization of the cloud, while vendors want to provide services that lock in customers.
People will care less about where the cloud is running and more about how their containers are running.
"Infrastructure as code" will hit it’s trough of disillusionment. IT teams will start wrestling with the view that infrastructure is code, and they’ll have to manage it carefully. It will be critical for IT to develop protocols ensuring code is wrapped and can be executives the same way every time.
Serverless computing is a trap door. It seems simple and enticing, but what manages the code? How do teams ensure the same code runs across cloud providers and that it all works?
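One common answer to that portability question, sketched below with hypothetical function names, is to keep the business logic in a plain function and make each provider's entry point a thin wrapper around it:

```python
# Portable serverless sketch: provider-neutral business logic
# plus thin provider-specific handlers. All names hypothetical.
import json

def process_order(order: dict) -> dict:
    """Pure business logic: no provider APIs, so it runs anywhere."""
    return {"order_id": order.get("id"), "status": "accepted"}

# AWS Lambda entry point (standard event/context signature).
def lambda_handler(event, context):
    result = process_order(json.loads(event["body"]))
    return {"statusCode": 200, "body": json.dumps(result)}

# Google Cloud Functions HTTP entry point (Flask-style request object).
def gcf_handler(request):
    result = process_order(request.get_json())
    return json.dumps(result), 200, {"Content-Type": "application/json"}
```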
Jerry Melnick, President and CEO, SIOS Technology
Advances in technology will make the cloud substantially more suitable for critical applications. Not surprisingly, organizations will expand the use of cloud services in 2019 for both existing and new applications, further accelerating the migration of workloads from the datacenter to the cloud. With IT staff now becoming more comfortable in the cloud, their concerns about high availability (HA) and disaster recovery (DR) will also begin to ease. This change is significant because the workload migration will increasingly include mission-critical applications.
Companies have long relied on purpose-built HA failover clustering technology in their data centers to protect their most critical enterprise applications. These third-party failover clustering solutions will further evolve to adapt and optimize operations for the cloud, making the cloud more suitable for critical enterprise applications. At the same time, cloud service providers will continue to advance their basic availability capabilities to meet the needs of a broad range of applications, many of which have lesser demands than full HA but still need basic DR assurances. With the evolution of both third-party clustering and nascent cloud availability capabilities, along with ready access to cloud DR capabilities, migrations from on-premises to cloud will accelerate.
Dynamic utilization will make HA and DR more cost-effective for more applications, further driving migration to the cloud. Economies of scale and on-demand provisioning in the cloud are nothing new. What will be new in the cloud is the ability to dynamically configure its virtually unlimited resources spread among multiple availability zones and geographical regions. And this on-demand high-availability will make the cloud an even more cost-effective platform for critical applications.
High availability requires redundancy, with standby resources that are provisioned and ready to run, to enable rapid recovery under all possible failure scenarios. These standby resources all sit idle unless and until called into service during a failover from the primary. The increasing sophistication of fluid cloud resources across zones and regions connected via high-quality internetworking now enables standby resources to be allocated only when needed, making HA and DR far more affordable.
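To make the economics concrete, here is a deliberately simplified sketch of "standby on demand," assuming AWS and boto3; the AMI ID, instance type, region, and health-check URL are all hypothetical placeholders. Rather than paying for an idle standby, recovery capacity is launched only when the primary fails a health check:

```python
# "Standby on demand" sketch (assumes AWS + boto3).
# Launches a recovery instance from a prebuilt image only
# after the primary fails a health check. All IDs/URLs are
# hypothetical placeholders.
import boto3
import urllib.request

PRIMARY_HEALTH_URL = "https://primary.example.com/health"  # placeholder
STANDBY_AMI = "ami-0123456789abcdef0"                      # placeholder

def primary_is_healthy() -> bool:
    try:
        return urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5).status == 200
    except OSError:  # covers URLError, HTTPError, and timeouts
        return False

def fail_over() -> str:
    ec2 = boto3.client("ec2", region_name="us-west-2")  # recovery region
    resp = ec2.run_instances(
        ImageId=STANDBY_AMI, InstanceType="m5.large",
        MinCount=1, MaxCount=1,
    )
    return resp["Instances"][0]["InstanceId"]

if not primary_is_healthy():
    print("primary down; launched standby", fail_over())
```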
The cloud will become a preferred platform for SAP deployments. SAP is the undisputed leader in supply chain management, making its SAP and SAP S/4HANA-based applications the lifeblood of organizations around the world. Given the mission-critical nature of these applications, IT departments have historically chosen to implement SAP in enterprise data centers, where the staff enjoys full control over the environment.
As the platforms offered by cloud service providers continue to mature, their ability to host SAP applications will become commercially viable and, therefore, strategically important. For CSPs, SAP hosting will be a way to secure long-term engagements with enterprise customers. For the enterprise, “SAP-as-a-Service” will be a way to take full advantage of the enormous economies of scale in the cloud, while also enabling IT to focus on service delivery rather than infrastructure management—all without sacrificing performance or availability.
Cloud "quick-start" templates will become the standard for complex software and service deployments. "Some assembly required" has always been the rule when implementing new applications or provisioning new services, whether in a private, public, or hybrid cloud environment. Beginning in 2019, cloud service providers will simplify provisioning with more widespread adoption of quick-start templates. These templates are wizard-based interfaces that employ automated scripts to dynamically provision, configure, and orchestrate the resources and services needed to run specific applications.
Among their key benefits are reduced training requirements, improved speed and accuracy, and the ability to minimize or even eliminate human error as a major source of problems for DevOps. Their use will substantially decrease the time and effort it takes for IT staff to set up, test, and deploy dependable HA and DR configurations. The resulting turnkey deployments can be expected to become a new best practice for even the most critical of applications.
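Underneath the wizard, a quick start typically boils down to launching a parameterized template. The sketch below, assuming AWS CloudFormation via boto3 (the stack name, template URL, and parameter are hypothetical), shows the shape of such an automated deployment:

```python
# Quick-start launch sketch (assumes AWS CloudFormation + boto3).
# Stack name, template URL, and parameters are hypothetical.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="ha-demo-stack",  # placeholder
    TemplateURL="https://example.com/quickstart/ha-cluster.template.yaml",
    Parameters=[{"ParameterKey": "ClusterSize", "ParameterValue": "2"}],
    Capabilities=["CAPABILITY_IAM"],  # needed when the template creates IAM roles
)

# Block until provisioning completes, then report the outcome.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="ha-demo-stack")
print("stack ready")
```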
Robin Schumacher, SVP and Chief Product Officer, DataStax
Multi-cloud will rule the world and reduce costly downtime. Just as multi-data center deployments became the new normal years ago for companies that have been successful at competing digitally in their chosen market, multi-cloud deployments will begin to take their place. The Uptime Institute's latest report shows that, while single-cloud designs can benefit enterprises, they've become the second leading cause of application downtime — much like single data center deployments were in the past. Those moving their data-driven apps to the cloud will need to architect them to be hybrid and multi-cloud by design to ensure zero downtime and uniform performance for their global customer base.
Joy King, VP Vertica Product Marketing, Micro Focus
Cloud economics are coming down to earth in 2019. Public cloud providers introduced a new architecture that separates compute resources from storage resources. This will become the architecture that all data centers — on-premises, public cloud, and private cloud — must and will implement in 2019, with a focus on big data analytics. As data volumes continue to expand exponentially, data must be stored economically, but compute must be applied with purpose, not dictated by the disk sizes needed for the data. The ability to apply purposeful compute to variable workloads aligned with business priorities, while leveraging low-cost data storage platforms, will be a primary driver of predictive analytics success, something that cannot be achieved if either data size or compute capacity is limited. Companies that rely on analytics platforms and tools that cannot accommodate this transformation will find themselves spending more and getting less, falling behind the new-economy disruptors.
Eric Han, VP of Product Management, Portworx
Containers find a new use case: machine learning. Containers and machine learning are two of the hottest trends in tech, but they aren’t often talked about together. In 2019, that will change as businesses start to recognize the benefits of running machine learning workloads in a containerized environment. There are several benefits to combining these technologies: the ability to cluster and schedule container workloads allows containerized machine learning applications to scale easily. And machine learning processes that exist inside of containers can be exposed as microservices, allowing them to be easily reused across applications. As enterprises look to cognitive technologies to transform their businesses, they will see the benefits of bringing machine learning workloads to highly flexible and scalable cloud-native environments.
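As a hedged illustration of the microservice point, the sketch below wraps a toy scikit-learn model in a Flask HTTP service, the sort of process that would run inside a container and be reused across applications. The model, route, and port are illustrative stand-ins, not a specific product's design:

```python
# Containerized ML microservice sketch (Flask + scikit-learn
# as stand-ins for a team's actual stack).
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Toy model trained at startup; a real service would load a
# persisted model artifact instead.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [5.1, 3.5, 1.4, 0.2]
    return jsonify({"class": int(model.predict([features])[0])})

if __name__ == "__main__":
    # Inside a container this would sit behind a production WSGI server.
    app.run(host="0.0.0.0", port=8080)
```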
A winner will emerge from the cloud war. Microsoft has long counted on a hybrid-cloud approach to stand apart from the crowd with Azure. With Google and Amazon both opening up hybrid cloud options in the second half of 2018, it's officially a trend. In the cloud wars, the winner will recognize that not all enterprises are ready to dive headfirst into the public cloud and will need to deliver more options. 2019 will see a battle between the big three as they balance wanting to offer the most forward-thinking services against the reality that they may be too far ahead of their customers. And IBM's acquisition of Red Hat creates another wrinkle, as IBM explicitly seeks to lead the way with hybrid cloud in the enterprise. Who can be the best resource for developers and the community, offering turnkey products for managing cloud services across any and all environments? That will determine who wins in 2019.