Serverless should not be an optional add-on; it should be available in every cloud-native environment. Of course, not every application should adopt serverless. But if you look closer, many modules in an application are stateless, stashed away in a corner, and needed only occasionally. Others have to handle loads that fluctuate wildly. These are perfect candidates to run as serverless.
Serverless lets developers focus on code instead of worrying about infrastructure setup. The platform's job is to provide this environment, along with proper monitoring and a reliable backbone that can handle a large throughput of events.
This is what Kubernetes-based Serverless Integration looks like:
Everything is based on containers. Why care about the underlying technologies for serverless? Shouldn't it all be transparent? If your goal is to build or host on a hybrid/multi-cloud free of vendor lock-in, there are NOT just developers in the picture. You will eventually need cooperation between teams, and you will work with all sorts of applications, from traditional services to microservices. Unifying and standardizing the technology flattens the learning curve for teams adopting new kinds of applications and makes maintenance less complex.
From development to platform, everything should work together seamlessly and be easy to automate and manage.
Let’s break down all the elements.
The Platform: provides full infrastructure and platform management, with self-service capability, service discovery, and enforcement of container policy and compliance.
The Serverless Platform: handles autoscaling of the functions/applications and abstracts away the underlying infrastructure. It keeps revisions of deployments for easy rollback, and it unifies events for publishers and consumers.
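The core of that autoscaling behavior can be sketched as a simple control loop: pick a replica count from the observed number of concurrent requests against a per-replica concurrency target, and scale to zero when there is no traffic. The function below is an illustrative sketch of that rule, not any platform's actual API; the names and numbers are assumptions (Knative's autoscaler works on a similar concurrency-target principle).

```python
import math

def desired_replicas(concurrent_requests: int,
                     target_per_replica: int = 10,
                     max_replicas: int = 100) -> int:
    """Illustrative scale-to-zero rule: run just enough replicas to keep
    per-replica concurrency at or below the target, and none when idle."""
    if concurrent_requests <= 0:
        return 0  # scale to zero: no traffic, no running containers
    return min(max_replicas,
               math.ceil(concurrent_requests / target_per_replica))
```

So 25 in-flight requests with a target of 10 per replica yields 3 replicas, and a burst beyond the cap is clamped at `max_replicas`.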
The Event Mesh: events are published to the mesh and distributed to consumers. The basic structure of the events is consistent and should be portable among platforms. All events are flexible, governed, and delivered quickly. Backed by a reliable streaming network, the mesh can store streams of events for tracing, auditing, or later replay into big data processing or AI/ML datasets.
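CloudEvents, the CNCF specification, is one such consistent, portable event structure: every event carries the four required context attributes `id`, `source`, `specversion`, and `type`. Here is a minimal sketch of building such an envelope; the event type, source, and payload names are made up for illustration.

```python
import json
import uuid
from datetime import datetime, timezone

def make_event(event_type: str, source: str, data: dict) -> dict:
    """Build a minimal CloudEvents 1.0-style envelope. The required context
    attributes are id, source, specversion, and type; the rest are optional."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),       # unique per event for dedup/tracing
        "source": source,              # who produced the event
        "type": event_type,            # what happened
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

event = make_event("com.example.order.created", "/orders", {"orderId": 42})
wire_payload = json.dumps(event)  # portable JSON form for the mesh
```

Because the envelope is standardized, any consumer on any platform can dispatch on `type` and trace on `id` without knowing who produced the event.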
The Integration Functions: typical characteristics of serverless integration functions are that they are small, lightweight, stateless, and event-driven. These characteristics make the application elastic, tackling the under- and over-provisioning that we face today. From the operations side, these are applications that spin up quickly when triggered by events and cease when idle, for better resource optimization. For developers, they are simple, modular code snippets that get deployed and spun up automatically, so developers can focus on code instead of deployment-related issues. Integration functions typically handle routing, transformation of the data payloads in events, and other composition and orchestration problems. They are also commonly used for connecting to external services and bridging between systems.
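A routing-plus-transformation function of this kind can be sketched in a few lines. Everything here is hypothetical, including the field names and channel names; the point is the shape: a stateless function that takes an event in and returns a normalized payload plus a destination (content-based routing).

```python
def route_order_event(event: dict) -> tuple[str, dict]:
    """Hypothetical integration function: transform the event payload and
    choose a destination channel based on its content."""
    data = event.get("data", {})
    # Transformation: normalize the payload for downstream consumers.
    normalized = {
        "order_id": data.get("orderId"),
        "total_cents": round(data.get("total", 0) * 100),
    }
    # Routing: send high-value orders to a review channel.
    if normalized["total_cents"] >= 100_000:
        channel = "orders.review"
    else:
        channel = "orders.standard"
    return channel, normalized

channel, payload = route_order_event({"data": {"orderId": 7, "total": 1250.0}})
```

Because the function holds no state of its own, the platform can run zero or fifty copies of it interchangeably, which is exactly what makes the elasticity above possible.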
The microservices or long-running applications: these are the long-running applications that hold state, are heavier, or are called constantly. Some of them publish events to the mesh that trigger serverless functions to start; others are simply additional consumers of the events.
The Service Registry: for sharing standard event schemas and API designs across API-driven and event-driven architectures, whether the events are consumed by serverless functions or by regular applications. It decouples data structures from code and manages data types at runtime.
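The runtime side of that decoupling can be sketched with an in-memory stand-in for a registry: event types map to a shared schema, and consumers validate incoming payloads against it instead of hard-coding the producer's internal structure. The registry contents and field names below are illustrative assumptions.

```python
# Hypothetical in-memory stand-in for a service registry: each event type
# maps to the schema (here, just a list of required fields) that producers
# and consumers agree on.
SCHEMAS = {
    "com.example.order.created": {"required": ["orderId", "total"]},
}

def validate(event_type: str, data: dict) -> bool:
    """Check a payload against the registered schema at runtime, so a
    consumer depends on the shared contract, not on the producer's code."""
    schema = SCHEMAS.get(event_type)
    if schema is None:
        return False  # unknown event type: no shared contract exists
    return all(field in data for field in schema["required"])
```

A real registry would serve versioned Avro/JSON Schema/OpenAPI documents over an API, but the lookup-then-validate flow is the same.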
The API Management: a gateway to secure and manage the outward-facing API endpoints, with access control and rate limits for consumers, a management console, and analytics for endpoint access.
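The per-consumer limits a gateway enforces are commonly implemented as a token bucket: each consumer gets a bucket of capacity N that refills at a steady rate, and a request is rejected when the bucket is empty. This is a minimal sketch of that well-known algorithm, not any specific gateway's implementation.

```python
import time

class TokenBucket:
    """Illustrative per-consumer rate limit: `capacity` tokens, refilled
    at `rate` tokens per second; each allowed request spends one token."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)  # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over quota: the gateway would answer 429
```

A gateway keeps one bucket per consumer key, which is how "limits for the consumer" become a concrete, enforceable policy.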
These are, again, my two cents on the components you need to deliver a complete serverless application environment.