
Why "Less" Is More Is the Secret to Cloud-Native Computing


What's missing when it comes to cloud-native computing? Turns out, code, state, and trust.


Hybrid IT. Hyperconverged infrastructure. Multicloud. Containers and microservices.

It seems that the entire IT infrastructure landscape is advancing so quickly, it might as well be lava under our feet.

It is imperative that we find some island, some core of stability, lest the shifting magma of innovation and disruption burn us to a crisp.

The good news is that we have such an island: cloud-native computing.

Bringing the Cloud to All of IT

Cloud-native computing takes the best practices of cloud computing – horizontal scalability, resilience, and end-user configurability, to name a few – and extends them to the entire IT environment.

Just as the cloud abstracts the hardware and the entire physical layer generally, cloud-native computing extends this principle to all on-premises environments.

Cloud-native approaches thus underpin hybrid IT, which seeks to abstract multiple public cloud, private cloud, on-premises virtualized and legacy environments, giving us end-to-end, policy-based control and workload portability.

However, cloud-native computing goes beyond hybrid IT. It also recognizes that containerization is central to modern IT infrastructure – not exclusively, but on a spectrum ranging from traditional virtualization to serverless computing.

Unsurprisingly, Kubernetes is thus central to the implementation of cloud-native infrastructure today, although it is by no means mandatory that all cloud-native approaches leverage it.

Perhaps the most important way to think about cloud-native computing is as an architectural approach. Cloud-native architecture builds on both cloud and DevOps best practices, taking them beyond the cloud itself to all of enterprise IT.

Only one problem: cloud-native is even more than an approach, as it shifts the way we must look at IT infrastructure. In reality, it represents a lens through which we can see the entirety of enterprise IT in a new light. For this reason, I consider it to be a new architectural paradigm.

"Less" #1: Codeless

Ironically, the best way to understand the paradigm-shifting power of cloud-native architecture is to highlight what’s absent from it: cloud-native is codeless, trustless, and stateless.

I don’t mean to say that we don’t have to deal with state information or write code, and we can certainly trust some things. Rather, these three "lesses" characterize core cloud-native principles that we must aspire to in everything we do.

"Codeless" applies to how people assemble and configure elements of the IT infrastructure and follows directly from "software defined" best practice. When we say "software defined," as in software-defined networking, we mean that we can represent the behavior of the systems in question as metadata-based models that behave in a declarative manner.

The "infrastructure as code" movement is part of this story – only with cloud-native, we want to move away from code in how we describe and configure the behavior of our systems. Instead, we want them to be configurable and extensible rather than customizable.
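The declarative idea can be sketched in a few lines. In this hypothetical example (the names and states are invented for illustration), the operator supplies desired state as pure data rather than as imperative code, and a generic reconciler computes whatever actions close the gap – the same pattern Kubernetes controllers follow.

```python
# A minimal sketch of declarative, "codeless" configuration: desired state
# is data, and a generic reconciliation loop derives the actions needed.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions that move `actual` toward `desired`."""
    actions = []
    for name, replicas in desired.items():
        have = actual.get(name, 0)
        if have < replicas:
            actions.append(f"scale {name} up to {replicas}")
        elif have > replicas:
            actions.append(f"scale {name} down to {replicas}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

desired_state = {"web": 3, "worker": 2}   # declarative config: pure data
current_state = {"web": 1, "queue": 1}    # what is actually running

print(reconcile(desired_state, current_state))
# → ['scale web up to 3', 'scale worker up to 2', 'delete queue']
```

The point is that nobody writes procedural "how" code; changing the desired-state data is the entire act of configuration.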

It’s no coincidence, therefore, that Kubernetes follows such practices. Even the various "flavors" of Kubernetes – and there are several – all share a single code base. This shift in priorities from customizability to extensibility has arisen as a result of learning the hard lessons of technology that came before – from the expensive slog of customizing enterprise applications to the plethora of different versions of Linux.

Furthermore, while the configurability of cloud-native infrastructure is part of the codeless story, it also extends to the applications that run on that infrastructure. It’s no mistake, therefore, that low-code and no-code application creation platforms follow codeless principles as well.

Codelessness thus combines several existing trends into a single architectural paradigm, including software-defined approaches, declarative configuration, model-driven computing, and extensibility over customizability.

"Less" #2: Trustless

Trustlessness, or what cybersecurity professionals also call "zero trust," is an essential characteristic of modern cybersecurity. We can no longer rely upon perimeter security to provide trusted environments. Instead, we must assume all parts of our network are untrusted, and every endpoint must establish its own trust.

It’s not surprising, therefore, that Kubernetes calls for trustless interactions. Microservice endpoints are dynamic, so it’s essential for such abstracted endpoints to take care of their own security.
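What "taking care of its own security" means can be sketched simply. This hypothetical example (the shared key and message are invented, and key distribution is out of scope) shows an endpoint that trusts nothing about the caller's network location and accepts a request only if its signature verifies:

```python
import hashlib
import hmac

# A minimal sketch of a "trustless" endpoint: instead of assuming the
# perimeter is safe, every request carries a signature that the endpoint
# verifies itself before doing any work.

SHARED_KEY = b"per-service-key"  # assumed provisioned out of band

def sign(message: bytes, key: bytes = SHARED_KEY) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def handle_request(message: bytes, signature: str) -> str:
    # Constant-time comparison; the caller's location confers no trust.
    if not hmac.compare_digest(sign(message), signature):
        return "rejected"
    return "accepted"

msg = b"GET /orders/42"
print(handle_request(msg, sign(msg)))        # → accepted
print(handle_request(msg, "bad-signature"))  # → rejected
```

In practice, service meshes handle this with mutual TLS rather than hand-rolled HMACs, but the principle is the same: the endpoint itself, not the network, establishes trust.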

Trustlessness extends beyond the world of Kubernetes and containers, of course. Serverless computing is essentially trustless, as the function endpoint is now fully abstracted, and thus responsible for its own security.

The rise of IoT and edge computing raises the stakes on zero trust computing, as the number and variety of endpoints explodes. In fact, we have no hope of adequately securing the IoT unless we take a trustless approach.

When you think about managing security in the context of an enterprise IoT deployment, there is simply no effective approach unless the underlying architecture is also codeless and fully policy-driven. Trustlessness and codelessness therefore go hand in hand.

"Less" #3: Stateless

Statelessness is perhaps the most difficult of the three "lesses" to understand, largely because of the shifting role of state in modern architectural approaches.

In client/server and n-tier architectures, we managed state in the persistence tier – in other words, databases. As we learned to scale n-tier architectures, managing state in caches became increasingly important, and today, caches are an essential component of cloud computing and thus cloud-native infrastructure generally.

Now, however, containers and microservices raise the stakes on managing state. Containers are inherently stateless, a necessary side-effect of their inherent ephemerality. After all, you wouldn’t want to store data in one if it could disappear at a moment’s notice.

Statelessness is also one of the secrets of containers’ rapid scalability and elasticity. Virtual machines may take a few minutes to spin up, while containerized microservices (as well as serverless functions) take milliseconds, largely because they are stateless.

Managing state, however, is as important as ever, as most applications require some form of data persistence, if only to keep track of what the application is doing at any moment in time. Therefore, we must deal with state in an inherently stateless environment, where our applications consist of stateless microservices that know how to process information without storing it.

To accomplish state management in a stateless environment, Kubernetes takes a cloud-native architectural approach by abstracting storage via codeless, declarative principles and exposing such stateful resources via APIs.
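The stateless pattern is easy to sketch. In this hypothetical example (the `StateStore` class and the counter workload are invented stand-ins, not any real Kubernetes API), the handler keeps nothing between calls; all state lives behind a storage API, so any replica, including a fresh one spun up after a crash, can serve the next request:

```python
# A minimal sketch of stateless processing: the handler holds no state of
# its own, so replicas are interchangeable and freely scalable.

class StateStore:
    """Stands in for an external persistence tier exposed via an API."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=0):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value

def handle(store: StateStore, user: str) -> int:
    # Stateless: nothing survives inside the handler between calls.
    count = store.get(user) + 1
    store.put(user, count)
    return count

store = StateStore()
replica_a = handle  # any replica behaves identically,
replica_b = handle  # because state is externalized
print(replica_a(store, "alice"))  # → 1
print(replica_b(store, "alice"))  # → 2
```

Because the handlers are interchangeable, the orchestrator can kill, restart, or multiply them at will – exactly the ephemerality described above.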

This approach allows for whatever availability and resilience the organization requires from its persistence tier without requiring the containers themselves to be stateful. Furthermore, since state management depends entirely on APIs, it’s essential to secure such endpoints in a trustless manner.

Of the three "lesses," however, statelessness remains the greatest challenge – and the most dynamic area of innovation. Expect to see dramatic progress in this area over the next few years.

The Intellyx Take

As applications scale up and down in a cloud-native environment, microservices appear and disappear on the order of milliseconds. In addition, CI/CD approaches for deploying such microservices will lead to new and changing microservices on a continual basis.

The only way to configure an infrastructure to handle such dynamic behavior at scale is via a codeless approach. The only way to secure such application assets is via zero trust. And the only way such applications can keep track of anything and still scale is by managing state in an inherently stateless environment.

Zero trust is becoming an increasingly dominant best practice in the world of cybersecurity. With the rise of low-code/no-code platforms as well as the configurability of Kubernetes and other modern infrastructure software, codelessness is also becoming established.

With the rise of ephemeral containers and microservices, then, statelessness is also earning a spot in the triad of core cloud-native architectural principles.

Many organizations have made progress on one or maybe two of these sets of practices. Fewer have adopted all three, and fewer still have fully fleshed out a cloud-native architecture where all three "lesses" intertwine.

But make no mistake – this combination of the three "lesses" in order to support a new architectural paradigm is the direction enterprise IT is heading. Cloud-native computing is here to stay, and as it continues to mature, organizations that aren’t moving forward with it will find themselves at an increasing disadvantage in the digital era.

Copyright © Intellyx LLC. Intellyx publishes the Agile Digital Transformation Roadmap poster, advises companies on their digital transformation initiatives, and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Allen Watkin, Susanne Nilsson, and Intellyx.


