Cloud Native PaaS Architecture
Cloud platforms built on Cloud Native PaaS architecture create an opportunity to increase business innovation and creativity. Cloud Native platform solutions shield teams from infrastructure details and inject new behavior into applications.
Cloud Native PaaS architecture requires infrastructure innovation in provisioning, service governance, management, deployment, load balancing, policy enforcement, and tenancy. Innovative Cloud Native provisioning infrastructure increases tenant density and streamlines code deployment and synchronization. Multi-tenancy within middleware containers enables teams to customize applications and services per consumer by changing run-time configuration settings instead of provisioning new instances.
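The per-consumer customization described above can be sketched as configuration layering: tenant-specific overrides are merged over platform defaults inside a shared container, so onboarding a new consumer is a configuration change rather than a new instance. The tenant names, keys, and `config_for` helper below are illustrative assumptions, not part of any specific platform.

```python
# Hypothetical sketch: per-tenant runtime configuration inside a shared
# middleware container. Tenant overrides are layered over platform defaults,
# so customizing behavior for a consumer means changing configuration,
# not provisioning a new instance.

DEFAULT_CONFIG = {
    "theme": "standard",
    "rate_limit_per_minute": 600,
    "feature_flags": {"beta_api": False},
}

# Only the settings a tenant actually changes are stored per tenant.
TENANT_OVERRIDES = {
    "acme": {"theme": "acme-dark", "feature_flags": {"beta_api": True}},
    "globex": {"rate_limit_per_minute": 1200},
}

def config_for(tenant_id):
    """Merge tenant-specific overrides over the platform defaults."""
    merged = {**DEFAULT_CONFIG}
    for key, value in TENANT_OVERRIDES.get(tenant_id, {}).items():
        if isinstance(value, dict):
            merged[key] = {**DEFAULT_CONFIG[key], **value}
        else:
            merged[key] = value
    return merged
```

A tenant with no overrides simply sees the platform defaults, which keeps tenant density high: one container serves many differently configured consumers.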
A Cloud platform may automate governance and enforce policies (e.g. security, service level management, usage) through enterprise PaaS services. Cloud provisioning may fulfill enterprise deployment requirements across all service providers and technologies used by solution delivery teams.
To re-invent the platform and achieve these benefits, new Cloud Native platform architectural components and services are required. Traditional client-server and N-tier web application architectures do not exhibit the requisite cloud characteristics (i.e. elastic scalability, multi-tenancy, resource pooling, and self-service). Figure 1 below depicts the new Cloud Platform architectural components and services. The PaaS controller layer deploys, scales, monitors, and manages an elastic middleware Cloud. PaaS Foundation services provide common solution building blocks. A comprehensive, Cloud-aware middleware container layer delivers new cloud-aware capabilities to business applications.
Figure 1: Cloud Platform Architecture Components and Services
Elastic Load Balancer
An Elastic Load Balancer (ELB) balances load across cloud service instances on-premise or in the cloud. The ELB should provide multi-tenancy, fail-over, and auto-scaling of services in line with dynamically changing load characteristics. Cloud Native Elastic Load Balancers are tenant-aware, service-aware, partition-aware, and region-aware. They can direct traffic based on the consuming tenant or target service. Cloud Native Elastic Load Balancers manage traffic across diverse topologies (e.g. private partitions, shared partitions, hybrid cloud), and direct traffic according to performance, cost, and resource pooling policies. A Cloud Native ELB is tightly integrated with the Service Load Monitor component and dynamically adjusts to topology changes.
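Tenant- and service-aware routing can be sketched as a two-step rule: filter the instance pool to those hosting the consuming tenant and target service, then pick the least-loaded survivor. The `Instance` class and `route` function below are illustrative assumptions, not an actual ELB API.

```python
# Hypothetical sketch of tenant- and service-aware routing. Each backend
# instance advertises the tenants and services it hosts plus a load figure
# (as reported by a load monitor); the balancer filters to eligible
# instances and picks the least loaded one.

class Instance:
    def __init__(self, name, tenants, services, load):
        self.name = name
        self.tenants = set(tenants)
        self.services = set(services)
        self.load = load  # utilization fraction, 0.0 .. 1.0

def route(instances, tenant, service):
    """Return the least-loaded instance serving this tenant and service."""
    eligible = [i for i in instances
                if tenant in i.tenants and service in i.services]
    if not eligible:
        raise LookupError(f"no instance for tenant={tenant}, service={service}")
    return min(eligible, key=lambda i: i.load)
```

Partition- and region-awareness extend the same filtering step: the eligibility predicate would also consult partition membership and region placement before the load comparison.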
Service Load Monitor
The Service Load Monitor component acquires load information from multiple sources (e.g. app servers, load balancers) and communicates utilization and performance information to the Elastic Load Balancer. The ELB uses this information to distribute requests to the optimal instances, based on tenant association, load balancing policies, service level agreements, and partitioning policies.
When the level of abstraction is raised above Infrastructure as a Service (IaaS) instances, teams no longer have direct access to specific virtual machines. New Cloud Native components are required to flexibly distribute applications, services, and APIs across a dynamic topology. A Cloud Controller, Artifact Distribution Server, and Deployment Synchronizer perform DevOps activities (i.e. continuous deployment, instance provisioning, automated scaling) without requiring a hard, static binding to run-time instances.
Cloud Controller
A Cloud Native Cloud Controller (or auto-scaler) component creates and removes cloud instances (virtual machines or Linux containers) based on input from the Load Monitor component. The Cloud Controller right-sizes the instance number to satisfy shifting demand, and conforms instance scaling with quota and reservation thresholds (i.e. minimum instance count, maximum instance count). The Cloud Native Cloud Controller may provision instances on top of bare metal machines, hypervisors, or Infrastructure as a Service offerings (e.g. Amazon EC2, OpenStack, Eucalyptus).
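The right-sizing rule above can be sketched in two steps: compute the instance count the observed load requires, then clamp it to the quota and reservation thresholds. The function name and request-rate units below are illustrative assumptions.

```python
# Hypothetical sketch of the Cloud Controller's right-sizing rule:
# derive the instance count needed for the observed load, then clamp
# it to the quota/reservation thresholds (minimum and maximum counts).

import math

def desired_instances(load_rps, capacity_rps, min_count, max_count):
    """Instance count for this load, clamped to [min_count, max_count].

    load_rps     -- observed request rate reported by the load monitor
    capacity_rps -- request rate one instance can sustain
    """
    needed = math.ceil(load_rps / capacity_rps)
    return max(min_count, min(needed, max_count))
```

The controller would compare this figure to the live instance count and provision or terminate the difference, regardless of whether instances land on bare metal, hypervisors, or an IaaS offering.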
Artifact Distribution Server
The Artifact Distribution Server takes complete applications (i.e. application code, services, mediation flows, business rules, and APIs) and breaks the composite bundle into per-instance components, which are then loaded into instances by a Deployment Synchronizer. The Artifact Distribution Server maintains a versioned repository of run-time artifacts and their association with Cloud service definitions.
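The bundle-splitting step can be sketched as grouping a composite bundle's artifacts by the run-time type that should host them, yielding the per-instance component sets a Deployment Synchronizer would load. The artifact names and `runtime` field below are illustrative assumptions.

```python
# Hypothetical sketch: the Artifact Distribution Server takes a composite
# bundle (application code, mediation flows, APIs, business rules) and
# groups its artifacts by the run-time that should host them.

BUNDLE = [
    {"name": "orders-ui",      "type": "application",    "runtime": "app-server"},
    {"name": "orders-flow",    "type": "mediation-flow", "runtime": "esb"},
    {"name": "orders-api",     "type": "api",            "runtime": "api-gateway"},
    {"name": "discount-rules", "type": "business-rule",  "runtime": "app-server"},
]

def split_by_runtime(bundle):
    """Break the composite bundle into per-instance-type component lists."""
    per_runtime = {}
    for artifact in bundle:
        per_runtime.setdefault(artifact["runtime"], []).append(artifact["name"])
    return per_runtime
```

Each resulting list maps to one Cloud service definition in the versioned repository, so the synchronizer can fetch exactly the components an instance type needs.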
Deployment Synchronizer
The Deployment Synchronizer checks out and deploys the right code for each Cloud application platform instance (e.g. application server, Enterprise Service Bus, API Gateway).
With infrastructure and servers abstracted and encapsulated by the Cloud, a Cloud Native PaaS Management Console allows control of tenant partitions, services, quality of service, and code deployment by either Web UI or command-line tooling.
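The Deployment Synchronizer's checkout-and-deploy step can be sketched as a lookup against a versioned artifact store: each instance type maps to pinned artifact versions, so every new instance of that type converges on the same code. The repository layout and `synchronize` function below are illustrative assumptions, not a real synchronizer API.

```python
# Hypothetical sketch: the Deployment Synchronizer resolves, from a
# versioned artifact repository, exactly the components assigned to an
# instance's platform type, so newly provisioned instances converge on
# identical code without a static binding to any one machine.

REPOSITORY = {
    # (artifact name, version) -> content; stands in for a versioned store
    ("orders-ui", "1.2.0"): "<war bytes>",
    ("orders-api", "2.0.1"): "<api definition>",
}

ASSIGNMENTS = {
    "app-server":  [("orders-ui", "1.2.0")],
    "api-gateway": [("orders-api", "2.0.1")],
}

def synchronize(instance_type):
    """Return the artifacts an instance of this type must deploy."""
    return {name: REPOSITORY[(name, version)]
            for name, version in ASSIGNMENTS.get(instance_type, [])}
```

Because versions are pinned per instance type, scaling out simply repeats the same lookup, and a rollback is a change to the assignment map rather than to individual machines.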
Cloud Native PaaS Architecture Business Benefits
Cloud Native PaaS architecture accelerates innovation, increases operational efficiency, and reduces cost.
The traditional, keep-the-lights-on operational run-rate consumes precious resources and limits innovative new projects. By optimizing project footprint across pooled resources on a shared Cloud Native PaaS infrastructure, Responsive IT can reduce operational spend, improve total cost of ownership (TCO), and make more projects financially viable. Multi-tenant delivery models create an efficient delivery environment and significantly lower solution deployment cost. For more information on the financial benefits of multi-tenant, Cloud Native platforms, read the white paper.
By building a Cloud Native PaaS environment, you provide your teams with a platform to rapidly develop solutions that address connected business use cases (i.e. contextual business delivery, ecosystem development, mobile interactions).
Published at DZone with permission of Chris Haddad, DZone MVB. See the original article here.