MySQL High Availability Framework Explained: Part 1
Explore the basics of High Availability, the components of an HA framework, and the HA framework for MySQL.
In this two-part article series, we explain the details and functionality of a High Availability (HA) framework for MySQL hosting using MySQL semi-synchronous replication and the Corosync plus Pacemaker stack. In Part 1, we'll walk you through the basics of High Availability and the components of an HA framework, and then introduce you to the HA framework for MySQL.
What Is High Availability?
The availability of a computer system is the percentage of time its services are up over a given period. It's generally expressed as a number of 9s. For example, the table below shows availability levels and the corresponding downtime measured over one year.
| Availability % | Downtime Per Year |
| --- | --- |
| 90% ("one 9") | 36.53 days |
| 99% ("two 9s") | 3.65 days |
| 99.9% ("three 9s") | 8.77 hours |
| 99.99% ("four 9s") | 52.60 minutes |
| 99.999% ("five 9s") | 5.26 minutes |
| 99.9999% ("six 9s") | 31.56 seconds |
The meaning of High Availability varies depending on the requirements of your application and business. For example, if your service cannot afford more than a few minutes of downtime per year, we say that it needs 99.999% ("five 9s") availability.
Components of an HA Framework
The essence of being highly available is the ability to instantly recover from failures that can happen in any part of a system. There are four essential components in any HA framework that need to work together in an automated fashion to enable this recoverability. Let's review these components in detail:
1. Redundancy in Infrastructure and Data
For a service to be highly available, we need to ensure that there is redundancy in the hosting infrastructure, as well as an up-to-date redundant copy of the data the service uses or provides. This acts as a standby service ready to take over in case the primary is impacted by failures.
2. Failure Detection and Correction Mechanism
It's extremely important to immediately detect any failure in any part of the primary system that may impact its availability. This enables the framework to either take corrective action on the primary system itself or fail over the services to a standby system.
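As an illustration only, the detection side of this component can be reduced to a heartbeat timeout check. The function below is a toy stand-in for what Corosync actually does with token timeouts and cluster membership; the names and the threshold are assumptions, not part of the real stack:

```python
import time

def node_failed(last_heartbeat, timeout=5.0, now=None):
    """Declare a node failed if no heartbeat has arrived within
    `timeout` seconds. A toy stand-in for Corosync's token-timeout
    based membership; the 5-second default is an arbitrary choice."""
    now = time.monotonic() if now is None else now
    return (now - last_heartbeat) > timeout
```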
3. Failover Mechanism
This component is responsible for failing over the services to your standby infrastructure. Note that when multiple redundant systems are available, this component has to identify the most suitable one among them and promote it as the primary service.
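To sketch the "most suitable system" selection: one common criterion is to promote the standby that has applied the most data from the failed master. The snippet below compares hypothetical replication-status dicts (field names modeled loosely on MySQL's `SHOW SLAVE STATUS` output); the framework's real promotion logic is more involved:

```python
def choose_new_master(replicas):
    """Pick the standby that has executed the furthest position in the
    master's binlog. Comparing (file, position) tuples works because
    binlog file names are zero-padded and sort lexically."""
    return max(replicas, key=lambda r: (r["Relay_Master_Log_File"],
                                        r["Exec_Master_Log_Pos"]))

replicas = [
    {"host": "replica1", "Relay_Master_Log_File": "mysql-bin.000042",
     "Exec_Master_Log_Pos": 1500},
    {"host": "replica2", "Relay_Master_Log_File": "mysql-bin.000042",
     "Exec_Master_Log_Pos": 2300},
]
print(choose_new_master(replicas)["host"])  # → replica2
```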
4. Application/User Redirection Mechanism
Once a standby system has taken over as the primary, this component ensures that all application and user connections are directed to the new primary.
The HA Framework for MySQL
Based on the above model, we use the following HA framework for our MySQL hosting at ScaleGrid:
- A 3-Node Master-Slave setup using MySQL semi-synchronous replication to provide infrastructure and data redundancy.
- The Corosync plus Pacemaker stack to provide the failure detection, correction, and failover mechanisms.
- A DNS mapping or Virtual IP component to provide the application and user redirection mechanism.
Check out the diagram below to visualize the software stack of this architecture:
Corosync
Corosync provides the communication framework for the nodes, with reliable message passing between them. It forms a cluster ring of nodes and keeps track of nodes joining and leaving the cluster through cluster membership. Corosync works closely with Pacemaker, communicating node availability so that Pacemaker can make appropriate decisions.
Pacemaker
Also known as the Cluster Resource Manager (CRM), Pacemaker ensures high availability for MySQL running on the cluster, and detects and handles node-level failures by interfacing with Corosync. It also detects and handles failures of MySQL itself by interfacing with the Resource Agent (RA). Pacemaker configures and manages the MySQL resource through start, stop, monitor, promote, and demote operations.
Resource Agent
The Resource Agent acts as the interface between MySQL and Pacemaker. It implements the start, stop, promote, demote, and monitor operations that Pacemaker invokes. There is a fully functional Resource Agent for MySQL called Percona Replication Manager (PRM), implemented by Percona. It has been enhanced by ScaleGrid and is available on our GitHub page.
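A resource agent is essentially a dispatcher over a fixed set of operations, each returning an OCF exit code that Pacemaker interprets. The exit-code values below are part of the OCF standard, but the rest of the sketch is a minimal Python illustration with placeholder handler bodies; real agents such as PRM carry substantial logic in each operation:

```python
# OCF return codes that Pacemaker interprets (values per the OCF standard).
OCF_SUCCESS, OCF_ERR_GENERIC, OCF_NOT_RUNNING = 0, 1, 7

def start():   return OCF_SUCCESS      # placeholder: launch mysqld, wait until ready
def stop():    return OCF_SUCCESS      # placeholder: shut mysqld down cleanly
def monitor(): return OCF_NOT_RUNNING  # placeholder: probe mysqld health, report state
def promote(): return OCF_SUCCESS      # placeholder: make this node the master
def demote():  return OCF_SUCCESS      # placeholder: revert this node to a slave

ACTIONS = {"start": start, "stop": stop, "monitor": monitor,
           "promote": promote, "demote": demote}

def dispatch(action):
    """Run the requested operation; unknown actions are a generic error."""
    handler = ACTIONS.get(action)
    return handler() if handler else OCF_ERR_GENERIC
```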
DNS Mapping Component
On completing a successful failover, the Resource Agent invokes this component, which updates the DNS record of the master MySQL server with the IP address of the new master. Note that clients always use a master DNS name to connect to the MySQL server; by managing the mapping of this DNS name to the IP address of the current master, we ensure that clients do not have to change their connection strings or properties when a failover occurs.
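As a sketch of the DNS update step: the function below repoints the master DNS name at the new master's IP through a hypothetical provider API wrapper (`dns_client` and its `upsert_record` method are assumptions, not a real library). A low TTL on the record keeps client-side caches from serving the stale address for long after a failover:

```python
def update_master_dns(dns_client, zone, master_name, new_master_ip, ttl=60):
    """Repoint the master DNS name at the new master's IP after failover.

    `dns_client` is a hypothetical DNS-provider API wrapper; the low
    TTL (60s here, an illustrative choice) limits how long clients can
    cache the old master's address."""
    dns_client.upsert_record(zone=zone, name=master_name,
                             rtype="A", value=new_master_ip, ttl=ttl)
```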
In part two of this article series, you’ll learn about the critical data redundancy component, which is achieved using MySQL semi-synchronous replication. We’ll also dive deep into the semi-synchronous replication details and configurations that we use to achieve our high availability support, and lastly, review various failure scenarios and the way the framework responds and recovers from these conditions.
Published at DZone with permission of Prasad Nagaraj, DZone MVB.