Running Databases in the Cloud Era [Video Demo]
Let's take a look at running databases in the cloud era and explore the full demo of how Hedvig enables the reliable deployment of private and hybrid cloud databases.
Public and private clouds empower modern businesses to move away from traditional, error-prone architectures and run applications with five-nines and six-nines availability. Business applications can be spun up on demand, instantly and cost-effectively. Databases have always been a key component of enterprise infrastructure, but relational databases in particular still have a long way to go when it comes to leveraging the power of the cloud. Because they were designed as large monolithic applications, they are difficult to run reliably in a scalable manner.
We have created a demo that shows how Hedvig enables the reliable deployment of private and hybrid cloud databases. In this demo, we cover two scenarios for running databases: highly available databases and Test/Dev databases. You can find a link to the actual demo at the end of this article. Feel free to skip the details and jump ahead to the demo at the bottom of the page.
We'll start with a quick visual on Hedvig's two-tier architecture. For more architectural details, download Hedvig's technical overview whitepaper here.
- Data centers: The Hedvig cluster stretches across three data centers, two of which are on-premises (DC1 and DC2) and one in the cloud (Azure). There are three storage nodes in each of these data centers (west1/2/3, east1/2/3, azurenode1/2/3), with multiple storage disks attached to each node. The Hedvig Web UI can be used to visualize all the disks, nodes, and data centers, along with storage parameters such as cluster size, usable space, used space, dedupe savings, etc.
- Storage proxies: One storage proxy in each data center handles Hedvig volume I/O for the applications running in that data center. These storage proxies can present Hedvig volumes to any local or remote client that can reach the storage proxy's iSCSI target port.
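A client attaches one of these volumes with standard open-iscsi tooling. The sketch below is not taken from the demo; the proxy address and target IQN are placeholders you would replace with the values your storage proxy actually exports.

```shell
# Placeholder address of the storage proxy's iSCSI target port
PROXY_IP=10.0.1.10

# Discover the targets exported by the storage proxy
iscsiadm -m discovery -t sendtargets -p "$PROXY_IP"

# Log in to a discovered target (the IQN below is a placeholder);
# the Hedvig volume then appears as a local block device
iscsiadm -m node -T iqn.2020-01.example:hedvig-dbvol1 -p "$PROXY_IP" --login

# Confirm the new block device is visible
lsblk
```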
For the production database demo, we set up two Linux clients (dbclient1.dc1, dbclient2.dc2) as an active/passive pair in two different data centers, using Corosync and Pacemaker. The MySQL database instances on these clients are managed by Pacemaker (with Corosync providing cluster messaging and membership), which guarantees that only one database instance runs, on the active client, at any point in time. A third, remote client accesses the active client through a virtual IP (VIP), so when the active client fails and the passive client takes over, the remote client keeps accessing the database instance without interruption.
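An active/passive MySQL pair of this kind can be configured with Pacemaker's `pcs` tool roughly as follows. This is a minimal sketch, not the demo's actual configuration; the resource names, VIP address, and paths are placeholders.

```shell
# Virtual IP that remote clients use to reach whichever node is active
# (address is a placeholder)
pcs resource create db-vip ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24

# MySQL resource; its data directory lives on the shared Hedvig iSCSI volume
pcs resource create db-mysql ocf:heartbeat:mysql \
    binary=/usr/bin/mysqld_safe datadir=/var/lib/mysql

# Keep MySQL on the same node as the VIP, and start the VIP first,
# so only the active node ever runs the database
pcs constraint colocation add db-mysql with db-vip INFINITY
pcs constraint order db-vip then db-mysql
```

With these constraints, a failure of the active node moves the VIP and the MySQL resource together to the passive node, which is what lets the remote client keep using one address throughout.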
A Linux client is created in the Azure cloud (azureclient), which can access the Test/Dev database instance cloned from the production database.
Highly Available Databases
We start by provisioning a Hedvig iSCSI volume, setting its data replicas to reside in DC1 and DC2. Then we create an XFS filesystem on it and grant access to our active/passive clients (dbclient1.dc1, dbclient2.dc2). This iSCSI volume is mounted at the MySQL data directory, so all the data belonging to the MySQL database resides on the Hedvig volume. Pacemaker ensures that only the active client has started the database instance. Once the database is running, we insert a few entries and then stop the active client. As soon as the cluster detects that the active client has failed, it starts the MySQL database on the passive client using the same Hedvig volume. We can confirm that the database has successfully failed over by querying for the entries inserted on the active client.
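The steps above can be sketched in shell, assuming the Hedvig volume has already surfaced on the active client as a block device. The device name, database name, and VIP address are placeholders, not values from the demo.

```shell
# On the active client (dbclient1.dc1); /dev/sdb is assumed to be the Hedvig volume
mkfs.xfs /dev/sdb
mount /dev/sdb /var/lib/mysql   # all MySQL data now lives on the Hedvig volume

# Insert a few test rows (database and table names are placeholders)
mysql -e "CREATE DATABASE demo;
          CREATE TABLE demo.t (id INT);
          INSERT INTO demo.t VALUES (1), (2);"

# Simulate failure of the active client
poweroff

# After Pacemaker fails over to dbclient2.dc2, verify from the remote
# client via the VIP: the rows inserted before the failure should return
mysql -h 192.168.1.100 -e "SELECT * FROM demo.t;"
```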
Test/Dev Databases
In the previous scenario, we created a Hedvig volume that is consumed by the MySQL database in production. We take a snapshot of that volume, clone it, and change its data residence parameters so that one of the replicas now resides in the cloud (Azure). We grant our Azure client access to the cloned volume and query the entries inserted by the production database. We then insert new entries into this cloned database, confirming that the production and cloned databases run independently, in parallel.
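On the Azure client, working with the clone looks much like working with the production volume. This is a hedged sketch: the device name and the `demo.t` table are placeholders, and the snapshot/clone itself is performed through Hedvig (not shown here).

```shell
# On azureclient, after attaching the cloned Hedvig volume over iSCSI
# (/dev/sdc is a placeholder device name)
mount /dev/sdc /var/lib/mysql
systemctl start mysqld

# The clone contains the rows written by the production database
mysql -e "SELECT * FROM demo.t;"

# New writes land only on the clone; the production volume is unaffected
mysql -e "INSERT INTO demo.t VALUES (3);"
```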
Published at DZone with permission of Gaurav Yadav, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.