I sometimes struggle to explain Platform-as-a-Service (PaaS) to people who equate deploying an application with deploying a virtual machine. That coupling is not a naive approach: these applications often run on hundreds of servers with complex dependencies on other applications and data services. But as cloud applications increasingly move toward a microservices architecture, a one-to-one relationship between an application and its host machine can become cumbersome.
PaaS grew out of shared web hosting services, where multiple tenants run applications on shared systems operated by a hosting provider. Admins in enterprise IT are understandably hesitant to deploy critical applications on a slice of someone else's pie, but the development of scalable, containerized, multi-stack PaaS software has changed the landscape. It's worth looking at how a private PaaS can enable new ways of deploying and hosting applications in enterprise IT environments.
Before Infrastructure-as-a-Service (IaaS) and virtualization, people hosted applications on bare metal. Many still do. Companies would have a rack, server room, or data center full of computers running web servers. Each one would run a single operating system and the necessary supporting software for the application it hosted. The database might be served by another dedicated bare metal server, or a cluster of servers.
With the advent of virtualization, instead of dedicating a separate physical machine to serve each application, we started to see virtual machines serving applications from a cluster of physical hosts. Each of those VMs still needed to be configured as before (with an operating system and all the application requirements) but the process could be done remotely and was more easily automated.
Setting up virtual machines was much easier than setting up bare metal servers, but there was still a close coupling between the operating system and the application it hosted. Clustering applications and databases was faster, but not automatic.
Where We Are Today
With a multi-tenant PaaS, applications become even less tied to the hardware. An additional layer of abstraction decouples the application from the VM, ideally using lightweight containers as the mechanism for allocating computing resources. Stackato uses Docker in the background to provide these application containers.
Application containers share the host operating system's resources with other containers, while providing VM-like isolation for each application's processes, network traffic, and filesystem, and enforcing equitable sharing of CPU resources.
A container can be created far more quickly (around 100 milliseconds) than a hypervisor can boot an operating system (around 60 seconds), so applications can be rapidly scaled up and down on demand across a shared pool of VMs.
Virtualization offers the same advantages for setting up database hosts as it does for application hosts, but often at the cost of performance. High performance clusters can certainly be built on virtualized hardware, but it takes planning. The challenges of providing stable, responsive data storage for an application are different from those of running the application itself.
These challenges are sometimes ignored when VM provisioning is handed over to developers without experience in database performance tuning. There are many production databases running on poorly performing VMs in data centers too far (in terms of network latency) from the applications they support.
What is needed is a way for applications to consume database services which are backed by hosts that perform well and don't fail: services set up by a knowledgeable sysadmin, DBA, or even a team of specialists.
Stackato, Cloud Foundry, and most modern PaaS systems are designed to allocate such services semi-automatically. The developers don't need to know about the physical or virtual hardware the database runs on, or even the credentials; they just request a database of a certain type from the PaaS with an API call. The PaaS automatically provisions the service instance and connects it to the application, usually with environment variables in the container.
Applications consume database services, rather than connecting to particular database hosts. Credentials don't need to be hardcoded in the application code, and the database itself can move or scale. All of this is transparent to the developer and the application.
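As a concrete sketch of this pattern, suppose the platform injects a `DATABASE_URL` environment variable when it binds a database service to the application (the variable name and URL format here are illustrative assumptions, not a documented Stackato contract):

```python
import os
from urllib.parse import urlparse

def database_settings():
    """Read connection details from a PaaS-injected environment variable.

    The application never hardcodes a hostname or credentials; the
    platform supplies them when it binds the service to the app, so
    the database can move or scale without a code change.
    """
    url = urlparse(os.environ["DATABASE_URL"])
    return {
        "host": url.hostname,
        "port": url.port,
        "user": url.username,
        "password": url.password,
        "database": url.path.lstrip("/"),
    }

# Example: the platform might set something like this at deploy time.
os.environ["DATABASE_URL"] = "postgres://appuser:s3cret@10.0.1.12:5432/orders"
print(database_settings()["host"])  # prints: 10.0.1.12
```

If the database is migrated to a new host, the platform simply injects a new URL on the next restart; the application code is untouched.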
Streamlining the Development/Deployment Process
In many organizations, developers don't have direct access to the IaaS to set up their own virtual environments. There is usually a delineation: IT Ops owns the provisioning of machines; developers own the creation of code. This division becomes a bottleneck when it comes time to deploy that code on production machines.
By exposing an interface to developers for self-service application hosting, IT retains responsibility for the IaaS layer but steps back from micromanaging software deployment.
With Stackato, IT Ops provides the necessary functionality for developers to deploy code. Stackato dynamically configures a hosting environment suitable for whatever code the developer is pushing, and provides them with a way to update and scale that application.
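In practice, the developer describes the application in a small manifest and pushes the code. The fragment below is a rough sketch of a Stackato-style `stackato.yml`; the exact keys and values vary by version, and the names here are illustrative:

```yaml
# Illustrative stackato.yml manifest (keys may differ by Stackato version)
name: orders-api        # application name on the platform
mem: 256M               # memory limit per container instance
instances: 2            # number of containers to run
services:
  orders-db: postgresql # request a database service by type
```

With a manifest like this in the project root, the developer runs `stackato push`, and the platform builds a suitable container, binds the requested database service, and starts the specified number of instances.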
So that's my high-level pitch on how a PaaS can improve on the virtualization status quo. If I had to distill it further into an elevator pitch:
Containers are "cheaper" than VMs in terms of resource utilization and startup time. You can run several containers per VM, and cluster those VMs to provide a pool of application hosts.
Automatically allocating database instances from a high-performance, high-availability service is better than having developers set up database hosts from scratch themselves.
Giving developers a self-service platform for application hosting gives them control over the deployment process. IT Ops can focus on setting up and managing the platform itself, rather than getting bogged down with individual application deployments.
Containerization on its own provides the first benefit. To get the second two you need a PaaS like Stackato.