
What is Serverless? — Part 3: Kubernetes and Serverless


Containers, ahoy!


This is a 5-part blog series. See part 1 and part 2.


Kubernetes is an open-source solution for automating the deployment, scaling, and management of containerized applications. The business value provided by Kubernetes extends into the serverless world as well. In general, serverless loves Kubernetes: it is an ideal infrastructure for running serverless applications, for several key reasons.

1. Kubernetes Allows You to Easily Integrate Different Types of Services on One Cluster

From a developer's standpoint, apps typically incorporate multiple types of components. Any complex solution will use long-lived services, short-lived functions, and stateful services. Having all of these options in one Kubernetes cluster lets you use the right tool for each job and still integrate everything easily, whereas separate clusters add both operational and cost overhead. FaaS works best in combination with other apps that run natively on containers, such as microservices. For example, a function may be the right fit for a small REST API, but it needs to work with other services to store state, or it may serve as an event handler for triggers from storage, databases, or Kubernetes itself. Kubernetes is a great platform for all of these services to interoperate on.
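As a sketch of the FaaS side of that mix, a storage-triggered event handler might look like the following. This is framework-agnostic and purely illustrative; the event shape (a dict with `type` and `object` keys) and the function name are assumptions, not any specific framework's API:

```python
# A minimal, framework-agnostic sketch of a FaaS-style event handler.
# The event shape is assumed for illustration only.

def handle_storage_event(event):
    """React to a storage trigger, e.g. an object being uploaded."""
    if event.get("type") == "object.created":
        name = event["object"]["name"]
        # In a real deployment this might call a REST API or persist
        # state to a database service in the same Kubernetes cluster.
        return f"processing {name}"
    return "ignored"
```

The point is that this short-lived function coexists in the cluster with the long-lived services it calls, rather than living on a separate platform.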

2. Kubernetes Is Great for Building on Top Of

It provides powerful orthogonal primitives and comprehensive APIs.

3. You Can Benefit From the Vibrant Kubernetes Community

All the work being done in the community in areas such as persistent storage, networking, security, and more, ensures a mature and always up-to-date ecosystem of enhancements and related services. This allows serverless to take advantage of things like Helm, Istio, ConfigMaps, Secrets, Persistent Volumes, and more.
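For example, a function can take its configuration from a standard ConfigMap instead of a bespoke mechanism. A minimal sketch, building the manifest as a plain Python dict (the names `fn-config` and `LOG_LEVEL` are hypothetical):

```python
def make_configmap(name, namespace, data):
    """Build a Kubernetes ConfigMap manifest as a plain dict,
    ready to be serialized to YAML/JSON and applied to a cluster."""
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name, "namespace": namespace},
        "data": data,
    }

# Configuration a serverless function could mount or read as env vars.
cm = make_configmap("fn-config", "default", {"LOG_LEVEL": "info"})
```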

4. Kubernetes Allows Container-Based Applications to Scale Reliably and In a Cost-Effective Manner

By clustering the containers within a container manager where they can be scheduled, orchestrated, and managed, Kubernetes reduces operations cost considerably when compared to not using a cluster manager, and greatly increases the reliability of your service.
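Concretely, scaling policy can be declared once and enforced by the cluster. The sketch below builds a HorizontalPodAutoscaler manifest (the `autoscaling/v2` API) as a plain dict; the deployment name and thresholds are hypothetical:

```python
def make_hpa(deployment, min_replicas=1, max_replicas=10, cpu_target=70):
    """Build a HorizontalPodAutoscaler manifest (autoscaling/v2) that
    scales a Deployment between min and max replicas on CPU usage."""
    return {
        "apiVersion": "autoscaling/v2",
        "kind": "HorizontalPodAutoscaler",
        "metadata": {"name": f"{deployment}-hpa"},
        "spec": {
            "scaleTargetRef": {
                "apiVersion": "apps/v1",
                "kind": "Deployment",
                "name": deployment,
            },
            "minReplicas": min_replicas,
            "maxReplicas": max_replicas,
            "metrics": [{
                "type": "Resource",
                "resource": {
                    "name": "cpu",
                    "target": {"type": "Utilization",
                               "averageUtilization": cpu_target},
                },
            }],
        },
    }

hpa = make_hpa("my-function")
```

Declaring scaling this way, instead of provisioning fixed capacity, is where much of the cost saving comes from.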

5. Kubernetes Provides Scheduling, Cluster Management, Service Discovery, and Networking

All of these are required by a FaaS framework, so by running serverless on top of Kubernetes you avoid reinventing them. You can focus on the serverless functionality and leave container orchestration to Kubernetes.
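Service discovery is a good example of functionality a FaaS framework gets for free: Kubernetes cluster DNS resolves `<service>.<namespace>.svc.<cluster-domain>` to a Service's ClusterIP. A small sketch (the service and namespace names are hypothetical):

```python
def service_dns(service, namespace="default", cluster_domain="cluster.local"):
    """In-cluster DNS name for a Kubernetes Service.

    Kubernetes DNS resolves <service>.<namespace>.svc.<cluster-domain>
    to the Service's ClusterIP, so a function can reach other services
    by name with no discovery code of its own.
    """
    return f"{service}.{namespace}.svc.{cluster_domain}"
```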

6. Kubernetes Provides Portability

Kubernetes has emerged as the de facto standard for container orchestration on any kind of infrastructure. It thus ensures a consistent, native experience across cloud providers and across environments, from staging to UAT to production. This enables true portability across any infrastructure, private or public. (Keep in mind, though, that as we saw in Part 2 of this series, depending on your chosen serverless framework, the application may need to be rewritten to migrate to a different environment. That is not because of the Kubernetes backend, but because of lock-in to integrated services provided by a specific cloud provider.)

Challenges With Kubernetes for Serverless Applications

While Kubernetes is a great underlying orchestration layer, it requires extensive setup and carries management overhead. Even with Kubernetes, a significant amount of software "plumbing" has to be built before deploying a serverless application: the code/function has to be written, the code has to be deployed, containers need to be built and registered, and various configuration steps on Kubernetes (e.g., deployment, service, ingress, auto-scaling, logging) have to be carried out. Kubernetes is complex enough to manage that it poses challenges to Ops in terms of the learning curve and operational complexity, which hinder serverless adoption, particularly for on-prem environments.
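To make that "plumbing" concrete, a manual deployment of a single function might involve a sequence of steps like the following. The image names, file names, and registry are all hypothetical, and the commands are shown as strings purely for illustration:

```python
# Hypothetical manual deployment pipeline for one function on Kubernetes.
# Each entry is a step a developer would otherwise run by hand.
steps = [
    "docker build -t registry.example.com/hello-fn:v1 .",  # build container
    "docker push registry.example.com/hello-fn:v1",        # register image
    "kubectl apply -f deployment.yaml",                    # deploy pods
    "kubectl apply -f service.yaml",                       # expose internally
    "kubectl apply -f ingress.yaml",                       # expose externally
    "kubectl apply -f hpa.yaml",                           # auto-scaling
]

def pipeline_tools(steps):
    """List the distinct tools a developer has to drive by hand."""
    return sorted({s.split()[0] for s in steps})
```

Six steps across two toolchains, for one function, is exactly the overhead a serverless framework should absorb.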

What is needed is a solution that reduces the time and effort spent on "plumbing" the Kubernetes infrastructure required for serverless applications. Ideally, developers would have a framework where functions are deployed instantly with one command: no containers to build, and no Docker registries or Kubernetes clusters to manage. Developers should focus only on the code, while the serverless framework automates the complex steps involved in packaging, deploying, and managing applications, all while remaining entirely native to Kubernetes.

Enter Fission, an open-source, Kubernetes-native serverless framework.

More on that in the next post in this series.

