The Pros and Cons of Running Production Databases as Containers
Some pros and cons of running production databases as containers include databases on demand, high disk usage, automated stateless containers, and more.
Containerization has transformed the way we develop and deploy apps and microservices. Every element of the application can run independently, scale as needed, and be configured to consume resources efficiently.
The real challenge lies in separating the application plane from the data plane, since data must remain stateful in order to stay consistent. In a traditional app structure, the application and its database share the same monolithic design.
Unfortunately, that model is outmoded for how modern cloud infrastructure is set up. More apps now run production databases as containers in order to gain the same advantages as microservices. So, what are the pros and cons of running your production databases this way?
Pro: Database on Demand
When databases are deployed as containers, they automatically become on-demand, like the rest of the app. There is no need to maintain a monolithic database instance to hold all the data. Instead, applications can spin up their own databases only when they are needed.
As a result, databases can be made smaller. In fact, databases can be designed using the same principles as microservices by dividing large databases into smaller services that cater to specific parts of the application.
This leads to better-containerized databases in general. However, this implementation is not without its challenges. That brings us to our first con, which is...
Con: Runtime Resource Usage Issues
As mentioned before, databases are meant to be stateful and persistent, which are exactly the opposite of what containers are supposed to be. As a workaround, the lifespan of database containers is extended; this, of course, results in inefficiencies in runtime resource usage.
There is also the fact that databases need to consume a lot of resources to operate optimally. All of the allocated CPU cores and RAM are utilized when databases are running. Combined with the longer lifespan of database containers, you have the perfect recipe for inefficient cloud infrastructure.
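One way to keep that resource hunger predictable is to pin the database container to a fixed CPU/RAM budget. The sketch below (values are illustrative assumptions) shows a Kubernetes container spec as a Python dict, and how setting requests equal to limits yields the "Guaranteed" QoS class, so the scheduler reserves exactly what the database will consume:

```python
# Sketch: a database container spec with requests == limits ("Guaranteed" QoS).
# The image and resource values are assumptions; tune them for your workload.
db_container = {
    "name": "postgres",
    "image": "postgres:15",
    "resources": {
        "requests": {"cpu": "2", "memory": "4Gi"},
        "limits": {"cpu": "2", "memory": "4Gi"},
    },
}

def qos_class(container):
    """Infer the QoS class Kubernetes would assign from requests vs. limits."""
    res = container.get("resources", {})
    req, lim = res.get("requests"), res.get("limits")
    if req and lim and req == lim:
        return "Guaranteed"
    if req or lim:
        return "Burstable"
    return "BestEffort"

print(qos_class(db_container))  # Guaranteed
```

The trade-off is explicit: a Guaranteed database pod wastes headroom when idle, which is exactly the runtime inefficiency described above.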
Pro: Automated Stateless Containers
To solve these challenges, you can use tools like KubeDB and Portworx in your deployment. KubeDB, for example, is designed to automate the process of monitoring database containers, making backups, restoring those backups on-demand, and cloning from existing databases.
Portworx is very similar. It is also designed from the ground up to manage databases and stateful services. It works with a wide range of containers and cloud infrastructure, including Kubernetes and Docker, and it is even compatible with IBM DB2.
Automation solves a lot of the issues associated with running databases in containers. It certainly makes using containerized databases in a production environment easier.
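At their core, these tools follow the operator pattern: continuously compare the desired database state to the observed state and act on the difference. The toy reconcile loop below illustrates (without reproducing any real API) that idea; all state fields and action names are assumptions for illustration:

```python
# A toy operator-style reconcile loop: diff desired vs. observed state of a
# database and emit the actions needed to converge. Field names are invented.
def reconcile(desired, observed):
    actions = []
    if not observed.get("running"):
        actions.append("start-database")
    if observed.get("last_backup_age_h", float("inf")) > desired["backup_interval_h"]:
        actions.append("trigger-backup")
    if observed.get("replicas", 0) < desired["replicas"]:
        actions.append("add-replica")
    return actions

plan = reconcile(
    {"backup_interval_h": 24, "replicas": 2},
    {"running": True, "last_backup_age_h": 30, "replicas": 1},
)
print(plan)  # ['trigger-backup', 'add-replica']
```

Running such a loop on a schedule is what turns stateful containers into something you can manage hands-off.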
Con: High Disk Usage
We cannot talk about containerized databases without acknowledging that you need to allocate a large amount of disk space for storing data; the actual size of your storage, of course, depends on the amount of data you keep.
Larger data storage has a negative impact on agility. If you need to move to a different cloud environment, for instance, you have to go through the process of migrating the stored data and making sure that it is moved properly.
Fortunately, you have tools like KubeDB and Portworx simplifying the process. Portworx, in particular, has native data mobility features that make migrating to different environments easier.
Leveraging different storage classes is important too. A StorageClass offers a way for administrators to define the “classes” of storage they offer. Different classes can link to different quality-of-service levels, or to backup policies, or to arbitrary policies. The pre-installed default StorageClass may not match your expected workload; for example, it might run storage that is too expensive for your budget. To avoid this, you can change the default StorageClass or disable it completely.
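To make the StorageClass idea concrete, here is a manifest sketched as a Python dict. It assumes an AWS EBS CSI setup with the gp3 volume type; swap in the provisioner and parameters for your own cluster. The annotation shown is the standard one Kubernetes uses to mark (or unmark) the cluster-wide default class:

```python
import json

# A StorageClass manifest as a Python dict. Provisioner and parameters assume
# AWS EBS gp3; the name "db-standard" is an illustrative choice.
cheap_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {
        "name": "db-standard",
        # Kubernetes marks the cluster-wide default class with this annotation.
        "annotations": {"storageclass.kubernetes.io/is-default-class": "false"},
    },
    "provisioner": "ebs.csi.aws.com",
    "parameters": {"type": "gp3"},
    "reclaimPolicy": "Retain",     # keep the volume (and data) if the claim goes away
    "allowVolumeExpansion": True,  # let database volumes grow without migration
}

print(json.dumps(cheap_class, indent=2))
```

For database volumes, `reclaimPolicy: Retain` and `allowVolumeExpansion: true` are usually worth the extra bookkeeping, since they protect data and defer costly migrations.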
Pro: Configured for Performance
By breaking down large databases into smaller, more fluid containers, performance becomes a more manageable metric. There is no need to worry about long query queues or slow database performance, because there are many ways to solve these problems.
For starters, you can choose to allocate more resources to the database containers when needed. Being on-demand by nature, databases can scale up (or down) as needed. Yes, automation is possible with this function too.
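The automated version of "allocate more resources as needed" is exactly what Kubernetes' Horizontal Pod Autoscaler does. Its core scaling rule is a one-liner, sketched here with illustrative utilization numbers:

```python
from math import ceil

# The HPA scaling rule: desired = ceil(current * currentMetric / targetMetric).
def desired_replicas(current, current_util, target_util):
    return max(1, ceil(current * current_util / target_util))

# Example: 2 replicas running at 90% CPU against a 60% target scale out to 3.
print(desired_replicas(2, 0.90, 0.60))  # 3
```

The same formula scales down when utilization drops, which is what makes on-demand database capacity practical.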
However, sometimes allocating more resources is not enough. Making efficient, optimized queries (at the application side) and the use of caching services is also something to take into consideration.
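A common caching approach is the cache-aside pattern: check a cache before hitting the database, and populate it on a miss. The sketch below uses an in-process dict and a stand-in "database"; in production the cache would typically be Redis or Memcached, and all names here are assumptions:

```python
# Cache-aside sketch: the dict `cache` stands in for Redis/Memcached, and
# `DB` stands in for the real database. Names and data are illustrative.
cache = {}
DB = {1: {"id": 1, "name": "ada"}}

def fetch_user(user_id):
    if user_id in cache:          # cache hit: no database round trip
        return cache[user_id]
    row = DB.get(user_id)         # cache miss: query the database
    if row is not None:
        cache[user_id] = row      # warm the cache for next time
    return row

fetch_user(1)         # first call misses and warms the cache
print(fetch_user(1))  # second call is served from the cache
```

Even a simple layer like this can cut a large share of read traffic off the database containers.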
You can also load balance queries or the containers running the databases. When using a PostgreSQL database in containers, you can turn to tools like Pgpool-II for load balancing.
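The essence of what a proxy like Pgpool-II does for reads can be sketched in a few lines: send writes to the primary and rotate reads round-robin across replicas. Host names here are illustrative assumptions, and real proxies classify queries far more carefully than this prefix check:

```python
import itertools

# Read/write split sketch: writes hit the primary, reads rotate across
# replicas. Backend addresses are invented for illustration.
PRIMARY = "db-primary:5432"
replicas = itertools.cycle(["db-replica-0:5432", "db-replica-1:5432"])

def route(sql):
    """Pick a backend: naive check treating only SELECTs as read-only."""
    is_read = sql.lstrip().upper().startswith("SELECT")
    return next(replicas) if is_read else PRIMARY

print(route("SELECT * FROM users"))        # one of the replicas
print(route("INSERT INTO users VALUES 1")) # the primary
```

In practice you would let Pgpool-II (or an equivalent proxy) do this, since it also handles replication lag and session state.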
Con: Backup Is a Necessity
Saving the states of your database containers becomes a necessity when databases are deployed in containers. Without an automated workflow for saving states and backing up databases, you will have to resort to extending the lifespan of your database containers.
You should also back up the state of your databases after tuning or configuration changes. While automation tools like Portworx are capable of automating the creation and maintenance of database containers, they are not always triggered by config changes and manual updates.
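One simple way to catch config-driven backups that automation misses is to fingerprint the configuration and flag a backup whenever the fingerprint changes. The sketch below hashes a config dict; the function names and config fields are assumptions:

```python
import hashlib
import json

# Fingerprint a config dict so that any tuning change is detectable.
def config_fingerprint(config):
    blob = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Fingerprint recorded at the time of the last backup (illustrative values).
last_backed_up = config_fingerprint({"shared_buffers": "1GB", "max_connections": 100})

def backup_needed(config):
    """True if the config has drifted since the last recorded backup."""
    return config_fingerprint(config) != last_backed_up

print(backup_needed({"shared_buffers": "2GB", "max_connections": 100}))  # True
```

Wiring a check like this into a deploy pipeline ensures a backup runs right after every tuning session.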
Pro: No Single Point of Failure
Since databases are run in containers, a single point of failure can be eliminated completely. Database containers can be as highly available as other containers in the ecosystem. You can set up multiple redundancies, use load balancing to bridge multiple containers, and maintain performance at all times.
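The redundancy logic reduces to routing around unhealthy instances. The sketch below fakes the health data with a dict; a real setup would probe each instance over the network, and the endpoint names are assumptions:

```python
# Failover sketch: keep redundant database endpoints and route to the first
# healthy one. The `health` dict stands in for real liveness probes.
endpoints = ["db-a:5432", "db-b:5432", "db-c:5432"]
health = {"db-a:5432": False, "db-b:5432": True, "db-c:5432": True}

def pick_healthy(candidates):
    """Return the first endpoint reporting healthy; raise if none are."""
    for ep in candidates:
        if health.get(ep):
            return ep
    raise RuntimeError("no healthy database instance available")

print(pick_healthy(endpoints))  # db-b:5432
```

Because every instance is replaceable, losing one container (db-a above) degrades capacity rather than taking the service down.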
You also have the option to run a complete service mesh with database elements in it. The database containers themselves remain ephemeral, with their state maintained automatically, and pods can connect to all database instances in a more fluid way.
In a DevOps cycle, deploying databases in containers is still often seen as not agile enough. The missing link is tooling like KubeDB and Portworx: database performance, rapid deployment, non-destructive updates, and scalability can all be maintained in a meticulous way using these solutions.
Running databases as containers can also add other benefits, especially when you need on-demand environments for running software testing like integration or performance tests. In fact, moving databases to containers becomes a great way to make the entire system more scalable. Your app, no matter how dependent it is on a persistent database, can cater to more users without experiencing slowdowns and other performance issues.

Databases in containers are only one of many approaches you can utilize, and the approach certainly has its pros and cons. Now that you know how to best implement database containers and the benefits you can gain, deciding to go down this route with the right tools will certainly be easier.
Published at DZone with permission of Juan Ignacio Giro. See the original article here.