I had a great conversation with Gou Rao, co-founder and CTO of Portworx, about how their container data services platform helps companies solve data persistence challenges.
With Portworx, users can manage any database or stateful service on any infrastructure, using any container scheduler. They get a single data management layer for all their stateful services, no matter where they run. Not all users are the same, though.
According to Gou, Portworx looks at the enterprise world in three segments:
Data center operators.
End application owners (a.k.a. DevOps teams).
Platform architects building & managing a PaaS.
Portworx helps all three, but the customer lens is a little different. Generally, all these customers are running in a public cloud or a private data center with a desire to run microservices. Data center operators care about resource efficiency and the ability to run heterogeneous workloads on homogeneous infrastructure. Portworx, as a software-defined storage solution, enables just that.
For platform architects, Portworx provides a cloud-like experience. These customers are already using Docker, Kubernetes, or Mesosphere DC/OS to automate the stateless parts of their applications. By integrating deeply with these schedulers, Portworx provides a cloud-native storage layer that lets customers use the tools they already know for their entire application, not just the stateless parts.
For DevOps teams, Portworx makes stateful containers as easy to deploy and manage as the stateless parts of apps, eliminating the need to wait days or weeks for resources to be provisioned before an application can be deployed.
What all these customers have in common is a desire to automatically deploy entire applications, including data, in a rapidly changing environment – whether it is for production or testing.
All of this is built on schedulers like Kubernetes, DC/OS, and Docker Swarm. This is the new way of putting the platform stack together: the scheduler provides the flexibility to run containers anywhere, while Portworx, itself deployed as a container, handles the underlying state.
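On Kubernetes, for example, that integration surfaces as ordinary storage objects rather than a separate storage workflow. A minimal sketch, assuming the in-tree Portworx provisioner; the names (`px-repl2`, `mysql-data`) and parameter values are illustrative:

```yaml
# StorageClass backed by the Portworx provisioner
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-repl2
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"   # keep two replicas of each volume
  fs: "ext4"  # filesystem to format the volume with
---
# A stateful service requests storage through a normal PVC;
# the scheduler and Portworx handle provisioning and placement.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: px-repl2
  resources:
    requests:
      storage: 10Gi
```

A database pod can then mount `mysql-data` like any other volume, so the same declarative, scheduler-driven workflow covers the stateful tier.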
Companies like Lufthansa Systems, GE Digital, Verizon, and Capital One are building service-oriented, composable applications with RESTful APIs. The methodology above resonates with their platform architects and DevOps teams because it enables a programmable, self-service system with complete automation as the end goal. It also gives enterprise operations control over a cloud-based platform while moving from VMware to containers as a service.
At the end of the day, push-button deployments of entire applications including their data help everyone. There is less room for error, which means fewer bugs, and less waiting means better productivity. That’s what it means to have a cloud-native experience.