Why Bother Building Cloud Native Applications?
Although it can seem like the obvious choice, it's worth considering whether you actually need to go native with cloud applications.
I was recently asked: what's the point of building an application as a "cloud native" app? After poking at cloud native development, the gains weren't clear to this developer compared with running in virtual machines or on bare metal servers.
Many developers are comfortable with the way they build and run their applications. To switch, they need a compelling reason, one they can see quickly and easily. Pointing to a trend, such as "the way Google does it," isn't a sufficient reason.
So the real question is, what are compelling reasons to build and run a cloud native application?
What is Cloud Native?
This is a loaded term. It used to be a marketing term associated with Cloud Foundry. Now there is the Cloud Native Computing Foundation, which includes players like Google (home of Kubernetes). And what about Infrastructure as a Service (IaaS), the original well-known cloud, whose ideals came out of Amazon?
Cloud native applications are those designed specifically to leverage cloud technologies. They natively support horizontal scaling, tolerate instances going away and starting up on a whim, and use external services or microservices as needed. All of this means zero-downtime updates and one-off instances of an application are easy... or should be.
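A concrete way to see "instances can go away and start on a whim" is that nothing instance-specific is baked into the app; everything it needs to start arrives from the environment. The sketch below is a hypothetical illustration in Python (the names `load_config`, `PORT`, and `DATABASE_URL` are my assumptions, not anything from the article), showing configuration pulled entirely from environment variables so a scheduler can start a fresh copy anywhere:

```python
import os

# Hypothetical cloud native service: every instance-specific setting
# comes from the environment, so any scheduler can start a copy
# anywhere without hand-configuring the host.
def load_config(env=os.environ):
    return {
        "port": int(env.get("PORT", "8080")),
        "db_url": env.get("DATABASE_URL", "sqlite:///local.db"),
        "instance_id": env.get("HOSTNAME", "local"),
    }
```

Because the defaults make it runnable locally and the overrides make it runnable on a platform, the same artifact works in both places.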
So, what do you get when you go cloud native?
1. Scale, Scale, Scale
When a traditional application needs to scale, it can happen in a few ways.
- The application is moved to a larger server with more resources. With more resources it can handle a larger load. Or maybe it's moved to a server with a different configuration that can do something special. This is a very hands-on operation.
- An application can be set up to run on multiple servers, either as independent instances or as instances that know about each other. Sometimes applications need to be updated to handle this setup; for example, in the way session handling is built into the app (think sticky sessions).
- I've seen custom setups to handle scaling. When the load gets high, some system decides how to get more resources and makes it happen. The systems I've seen are highly customized and can be quite fragile. In any case, once you've gone this route you're down the DIY path and unable to leverage the work everyone else is doing (and freely sharing).
There may be other ways as well. In the first two cases you scale up to handle peak load but never adjust resources back down so they can be used for other things. You end up owning, maintaining, and paying for the worst-case hosting scenario for your application.
Moving to a cloud native approach lets you scale up and down based on demand, all the way down at the microservice or individual application level. This frees up resources to be used for other things. And scaling is built into the way the applications and the platform work, so everything just gets it.
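The heart of demand-based scaling is a small decision: given current load, how many instances should exist? Here's a minimal sketch of that decision logic (the function name, the requests-per-second units, and the bounds are my own hypothetical choices; real platforms such as Kubernetes autoscalers use richer signals):

```python
import math

def desired_instances(current_load_rps, capacity_per_instance_rps,
                      min_instances=1, max_instances=20):
    """Demand-based scaling: run just enough instances for the current
    load, clamped to configured lower and upper bounds."""
    needed = math.ceil(current_load_rps / capacity_per_instance_rps)
    return max(min_instances, min(max_instances, needed))
```

Evaluated periodically, this both grows the fleet under peak load and shrinks it back when demand drops, which is exactly the "adjust down" step the traditional setups above never take.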
2. Avoid Stepping On Each Other
Have you ever seen the case where multiple applications on the same server want to use different versions of the same dependency? Or two applications that want to bind to the same port? When multiple applications run on the same server, there are plenty of opportunities to step on each other, and the odds go up with each application installed.
In a cloud native setup there is one container (or sometimes a VM) per application instance. It can't step on anything else; the opportunity to do so is mostly taken away.
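The port-binding collision is easy to demonstrate. The helper below (a hypothetical name I chose for illustration) tries to bind a TCP port the way a server process would; on a shared host, the second app to ask for the same port simply fails, while containers avoid the collision by giving each instance its own network namespace:

```python
import socket

# On one host, two processes cannot both bind the same address:port;
# the second bind raises OSError (EADDRINUSE). Containers give each
# instance its own network namespace, so each can have "its" port.
def try_bind(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return s  # caller keeps the socket open, holding the port
    except OSError:
        s.close()
        return None
```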
3. Fault Tolerance for Everything
Applications that aren't cloud native need fault tolerance written into them. If the application crashes, something special needs to be written to detect that and figure out how to handle it. An engineer or an ops person may need to log into the server to fix it.
In theory (and for many common issues), the system running cloud native applications handles much of the fault tolerance, provided the application isn't poorly built. If an application crashes, detection will see there's a problem, destroy the broken instance, and replace it with a working new one.
Of course, to achieve this, elements of the application need to be accessible for monitoring. Some things are easy, such as the port the application is receiving TCP connections on or the amount of memory and processing power it's using.
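The detect-destroy-replace behavior described above is essentially a control loop: compare the desired number of healthy instances with what's actually running, cull what's broken, and start replacements. This toy sketch (names and structure are my own; real platforms implement this far more robustly) shows the shape of that loop:

```python
# Toy control loop in the spirit of a cloud platform's health
# management: cull unhealthy instances, then start replacements
# until the desired count is restored.
def reconcile(desired_count, running_instances, is_healthy, start_instance):
    survivors = [i for i in running_instances if is_healthy(i)]
    while len(survivors) < desired_count:
        survivors.append(start_instance())
    return survivors
```

Run on a timer or on events, a loop like this is why no one has to log into a server at 3 a.m. to restart a crashed process.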
4. One Offs
I've had cases come up where a product owner or manager wanted to see what an idea looked like, and we wanted to play with it outside mainline development.
In a traditional setup I'd need to find a place to host this. Sometimes that wasn't even an option, or it was reserved for the really high-priority things. Sometimes it was easy, but each time it was, the path there was a custom setup.
Deploying a cloud native application to a new location is typically a matter of configuration and a few simple steps. It's no big deal, and it makes dev and test environments much easier.
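"A matter of configuration" can be made concrete: a one-off instance is usually just the base application definition plus a handful of per-environment overrides. This sketch (entirely hypothetical field names, chosen for illustration) shows that idea:

```python
# A one-off deployment as data: the base app definition plus a few
# environment-specific overrides -- no bespoke hosting setup needed.
def environment_config(base, env_name, overrides=None):
    cfg = dict(base)
    cfg["name"] = f"{base['name']}-{env_name}"
    cfg.update(overrides or {})
    return cfg
```

Because the one-off is pure data, tearing it down afterwards is just as cheap as standing it up.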
Cloud native doesn't solve anything that wasn't solvable before; I've seen systems and setups that address all of these problems. What cloud native applications do is take these capabilities from special cases, implemented by wizards with special knowledge, to commonplace tools available to all developers. And the systems they're built on are similar to one another, underpinned by the same patterns and the same or similar technologies.
Published at DZone with permission of Matt Farina, DZone MVB. See the original article here.