In this blog post, we’ll look at various MySQL high availability solutions and examine their positives and negatives.
High availability environments provide substantial benefit for databases that must remain available. A high availability database environment replicates a database across multiple machines, any one of which can assume the functions of the database. In this way, the database has no “single point of failure.”
There are many HA strategies and solutions, so how do you choose the best among a myriad of options? The first question to ask is: What problem are you trying to solve? The answers boil down to redundancy versus scaling versus high availability, and these are not necessarily the same thing:
- Need multiple copies of data in event of a disaster
- Need to increase read and/or write throughput
- Need to minimize outage duration
When you are planning your database environment, it’s important to remember the CAP Theorem applies. The CAP Theorem breaks problems into three categories: consistency, availability, and partition tolerance. You can pick any two from those three, at the expense of the third.
- Consistency. All nodes see the same data at the same time.
- Availability. Every request receives a response about whether it succeeded or not.
- Partition Tolerance. The system continues to operate despite arbitrary partitioning due to network failures.
Whatever solution you choose, it should maximize consistency. The problem is that although MySQL replication is great, it alone does not guarantee consistency across all nodes. There is always the potential for data to be out of sync, since transactions can be lost during failover, among other reasons. Galera-based clusters such as Percona XtraDB Cluster are certification-based to prevent this!
The first question you should ask yourself is: Can I afford to lose data?
This often depends on the application. Applications should check status codes on transactions to be sure they were committed. Many do not! It is also possible to lose transactions during a failover: simple replication schemes in particular can lose data at that point.
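The pattern of checking commit status can be sketched as follows. This is illustrative Python, not a real driver: the `Connection` class and its failure simulation are invented stand-ins for a DB-API connection whose `COMMIT` can fail mid-failover.

```python
class CommitFailed(Exception):
    pass

class Connection:
    """Stand-in for a database connection whose commit can fail
    (e.g. the primary dies mid-COMMIT during a failover)."""
    def __init__(self, fail_times=0):
        self._fail_times = fail_times
        self.committed = 0

    def commit(self):
        if self._fail_times > 0:
            self._fail_times -= 1
            raise CommitFailed("lost connection during COMMIT")
        self.committed += 1

def commit_with_check(conn, retries=2):
    """Return True only if the driver reported a successful commit."""
    for attempt in range(retries + 1):
        try:
            conn.commit()
            return True
        except CommitFailed:
            # After a failed commit the transaction's fate is unknown;
            # a real app must verify, or retry only idempotent writes.
            continue
    return False

conn = Connection(fail_times=1)
print(commit_with_check(conn))  # True: the retry succeeded
```

The point is simply that the application must look at the result of the commit rather than fire and forget; the retry policy shown is one possible choice, not a recommendation for every workload.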
Inconsistent nodes are another problem. Without conflict detection and resolution, inconsistent nodes are unavoidable. One solution is to run pt-table-checksum often to check for inconsistent data across replication nodes. Another option is using a Galera-based Distributed Cluster, such as Percona XtraDB Cluster, with a certification process.
What is watching your system? Is anything standing ready to intervene in a failure? For replication, take a look at MHA and MySQL Orchestrator. Both are great tools for performing failover by promoting a replica when the primary fails. There are others.
For Percona XtraDB Cluster, failover is typically much faster, but it is not the perfect solution in every case.
Can I Afford Lost Transactions?
Many MySQL DBAs worry about setting innodb_flush_log_at_trx_commit to 1 and enabling sync_binlog for ACID compliance, but then use replication with no consistency checks! Is this logically consistent? Percona XtraDB Cluster maintains consistency through certification.
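For reference, the durability settings mentioned above are ordinary my.cnf options:

```ini
[mysqld]
# Flush and fsync the InnoDB redo log at every transaction commit
innodb_flush_log_at_trx_commit = 1
# fsync the binary log at every commit, so no committed transaction
# is missing from the binlog after a crash
sync_binlog = 1
```

These settings trade some write throughput for durability; the inconsistency the text points out is paying that price locally while leaving replica consistency unverified.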
Conflict Detection and Resolution
All solutions must have some means of conflict detection and resolution. Galera’s certification process works as follows:
- Transaction continues on a node as normal until it reaches COMMIT stage.
- Changes are collected into a writeset.
- Writeset is sent to all nodes for certification.
- PKs are used to determine if the writeset can be applied.
- If certification fails, the writeset is dropped and the transaction is rolled back.
- If it succeeds, the transaction commits and the writesets are applied to all of the nodes.
- All nodes reach the same decision on every transaction, so certification is deterministic.
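The steps above can be sketched as a toy certification check. This is a deliberate simplification: real Galera tracks seqno intervals and key sets, not just bare primary keys, so treat this as illustration only.

```python
def certify(writeset_pks, concurrent_committed_pks):
    """Pass certification only if none of the PKs modified by this
    writeset overlap a concurrently committed writeset
    (first-committer wins)."""
    return not (set(writeset_pks) & set(concurrent_committed_pks))

# Two transactions touching disjoint rows both certify...
print(certify({1, 2}, {3, 4}))  # True: no overlap, commit proceeds
# ...but a conflicting writeset is dropped and rolled back.
print(certify({1, 2}, {2, 9}))  # False: PK 2 conflicts, rollback
```

Because every node runs the same check against the same ordered stream of writesets, every node reaches the same verdict, which is what makes the process deterministic.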
Do I Want Failover or a Distributed System?
Another important consideration is whether you should have a failover or a distributed system. A failover system runs one instance at a time, and “fails over” to a different instance when an issue occurs. A distributed system runs several instances at one time, all handling different data.
- Failover pitfalls:
- Failover systems have a monitor which detects failed nodes and moves services elsewhere if available
- Failover takes time!
- Distributed systems:
- Distributed systems minimize failover time
- Advantage of Manual Failover
- The primary advantage of failing over manually is that a human can usually make a better decision as to whether failover is truly necessary.
- Systems rarely get it perfect, but they can be close!
- Advantage of Automatic Failover
- More Nines due to minimized outages
- No need to wait on a DBA to perform
A further question is how fast does failover have to occur? Obviously, the faster it happens, the less time there is for potential data loss.
- Replication / MHA / MMM
- Depends on how long it takes for pending replica transactions to complete before failover can occur
- Typically between 15 and 30 seconds
- XtraDB Cluster / MySQL Cluster
- Very fast failover: typically less than one second, depending upon the load balancer
How Many 9’s Do You Really Need?
The "nines" measure is a standard shorthand for system availability. Each additional 9 represents an order of magnitude less downtime: 99.99% is four nines, while 99.999% is five nines.
Every manager's response to the question of how many nines is always, “As many as I can get.” That sounds great, but the reality is that tradeoffs are required! Many applications can tolerate a few minutes of downtime with minimal impact. The following shows the downtime correlated to each "9":
- 99% (two nines): about 3.65 days of downtime per year
- 99.9% (three nines): about 8.76 hours per year
- 99.99% (four nines): about 52.6 minutes per year
- 99.999% (five nines): about 5.26 minutes per year
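The arithmetic behind the nines is simple enough to compute directly (using a 365-day year):

```python
def downtime_minutes_per_year(availability_pct):
    """Minutes of allowed downtime per year at a given availability %."""
    return (1 - availability_pct / 100) * 365 * 24 * 60

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% -> {downtime_minutes_per_year(pct):.1f} minutes/year")
```

Going from four nines to five nines shrinks the yearly budget from roughly 53 minutes to roughly 5, which is why each extra nine costs so much more than the last.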
Do I Need to Scale Reads and/or Writes?
When looking at your environment, it’s important to understand your workload. Is your workload heavy on reads, writes, or both? Knowing whether you’re going to need to scale reads or writes is important to choosing your HA solution:
- Scaling reads
- Most solutions offer the ability to read from multiple nodes or replicas
- MHA, XtraDB Cluster, MySQL Cluster, and others are well suited for this
- Scaling writes
- Many people wrongly try to scale writes by writing to multiple nodes in XtraDB Cluster, leading to conflicts
- Others try it with master-master replication, which is also problematic
- Possibly the best solution in this regard is MySQL Cluster
What about provisioning new nodes?
- Largely, this is a manual process
- MySQL Utilities makes this easier than ever
- Distributed clusters such as XtraDB Cluster provision new nodes automatically via State Snapshot Transfer (SST)
With XtraDB Cluster, try to have three of everything. If you span data centers, have three data centers. If your nodes are on a switch, try to have three switches.
XtraDB Cluster needs at least three nodes in the cluster. An odd number is preferred for voting reasons. Forget about trying to keep a cluster alive during failure with only two data centers. You are better off making one a DR site. Forget about custom weighting to try to get by on two data centers. The 51% rule will get you anyway!
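The "51% rule" is just a majority check, and it shows why two data centers cannot survive an even split:

```python
def has_quorum(partition_size, cluster_size):
    """A Galera partition stays Primary only if it holds more than
    half of the last-known cluster membership."""
    return partition_size > cluster_size / 2

# 6 nodes split evenly across two data centers: neither side has quorum,
# so the entire cluster goes non-Primary.
print(has_quorum(3, 6))  # False

# 9 nodes across three data centers: losing one DC leaves 6 of 9.
print(has_quorum(6, 9))  # True: the surviving majority stays Primary
```

With only two sites, any partition that separates them leaves at most one side with a majority, and an even split leaves neither, which is why the third site (even a small arbitrator) matters.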
How Many Data Centers Do I Have?
Knowing how many data centers are involved in your environment is a critical factor. Running multiple data centers has implications for the HA solution you adopt.
What if I only have one data center? You can gain protection against a single failed node or more, depending on cluster size. If you have two data centers, you should probably be considering the second data center as a DR solution. Having three or more data centers is the most robust solution when using Galera-based clusters such as XtraDB Cluster.
How Do I Plan for Disaster Recovery?
Planning for disaster recovery is crucial in your HA environment. Make sure the DR node(s) can handle the traffic, even if at a reduced performance level.
- Replicating from a XtraDB Cluster to a DR site
- Asynchronous Replication from XtraDB Cluster to a single node
- Asynchronous Replication from XtraDB Cluster to a replication topology
- Asynchronous Replication from XtraDB Cluster to another XtraDB Cluster
What Storage Engine(s) Do I Need?
Nowadays especially, there is a multitude of storage engines available for a database environment. Which one should you use for your HA solution? Your solution will help determine which storage engine you can employ.
- Replication. Not storage engine dependent; works with all storage engines
- XtraDB Cluster. Requires InnoDB. Support for MyISAM is experimental and should not be used in production
- MySQL Cluster. Requires NDB Storage Engine
Load balancers provide a means to distribute your workload across your environment resources so as not to create a bottleneck at any one particular point. The following are some load balancing options:
- Open-source software solution
- Cannot split reads and writes. If that is a requirement, the app will need to do it!
- F5 BigIP
- Typical hardware solution
- Can do read/write splitting
- Elastic Load Balancer (ELB)
What Happens If the Cluster Reboots?
Some configuration changes require that the cluster be rebooted before they take effect. A cluster could also reboot due to a power interruption or other technology failures.
- A power outage in a single data center could lead to issues
- XtraDB Cluster can be configured to auto-bootstrap
- May not always work when all nodes lose power simultaneously. While the server is running, the grastate.dat file shows -1 for seqno
- Surviving a Reboot
- Helpful if nodes are shut down by a system administrator for a reboot or other such process
- Normal shutdown sets seqno properly
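For reference, a grastate.dat after a clean shutdown looks roughly like the sketch below. The uuid and seqno values here are invented for illustration, and the safe_to_bootstrap field only exists in newer Galera versions:

```
# GALERA saved state
version: 2.1
uuid:    8bcf4a34-aa29-11e6-9d3c-000000000000
seqno:   1523
safe_to_bootstrap: 1
```

While the server is running, or after an unclean shutdown such as a power loss, seqno reads -1, which is why auto-bootstrap logic cannot always pick the most advanced node on its own.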
Do I Need to be Able to Read After Writing?
Asynchronous replication does not guarantee consistent views of data across nodes. XtraDB Cluster offers causal reads: a replica will wait for the event to be applied before processing additional queries, guaranteeing a consistent read state across nodes.
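The wait can be sketched as follows. The `Replica` class below is an invented illustration of the idea (a real node blocks inside the server until the needed seqno is applied), not an actual API:

```python
class Replica:
    """Illustrative model of a node that tracks the highest
    replication event (seqno) it has applied so far."""
    def __init__(self):
        self.applied_seqno = 0

    def apply_up_to(self, seqno):
        self.applied_seqno = max(self.applied_seqno, seqno)

    def causal_read(self, wanted_seqno):
        # A real node would block until wanted_seqno is applied;
        # here we just report whether the read could proceed now.
        return self.applied_seqno >= wanted_seqno

r = Replica()
r.apply_up_to(41)
print(r.causal_read(42))  # False: write 42 not applied yet, read must wait
r.apply_up_to(42)
print(r.causal_read(42))  # True: the read now sees the earlier write
```

This read-after-write guarantee is exactly what plain asynchronous replication cannot promise, since a replica may answer a query before the client's own last write has arrived.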
What If I Do A Lot of Data Loading?
In the recent past, conventional wisdom favored replication over XtraDB Cluster in such scenarios. MTS does help if data is distributed over multiple schemas, but it is not a fit for all situations. XtraDB Cluster is now a viable option since a bug in Galera, which did not properly split large transactions, was identified and fixed.
Have I Taken Precautions Against Split Brain?
Split brain occurs when a cluster has its nodes divided from one another, most often due to a network blip, and the nodes form two or more new, independent (and thus divergent) clusters. By default, XtraDB Cluster nodes that lose quorum go into a non-primary state and refuse to take traffic. A newer XtraDB Cluster setting allows dirty reads from non-primary nodes.
Does My Application Require High Concurrency?
Newer approaches to replication allow for parallel apply threads, such as Multi-Threaded Slaves (MTS); XtraDB Cluster has had parallel applying from the beginning. MTS allows a replica to apply events with multiple worker threads rather than a single SQL thread. With MTS, use GTID for positioning: it makes backups via Percona XtraBackup safer, because SHOW SLAVE STATUS cannot be trusted to report an accurate relay log position.
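As a sketch, the standard MySQL options involved (option names as in MySQL 5.6/5.7; values here are examples, not tuning advice) look like this:

```ini
[mysqld]
# Apply replicated events with multiple worker threads
slave_parallel_workers = 4
# GTID-based positioning, safer than file/position coordinates with MTS
gtid_mode = ON
enforce_gtid_consistency = ON
log_slave_updates = ON
```

In 5.6 the workers parallelize across schemas, which is why the text notes that MTS helps mainly when data is spread over multiple schemas.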
Am I Limited on RAM?
Some Distributed solutions such as MySQL Cluster require a lot of RAM, even with file-based tables. Be sure to plan appropriately. XtraDB Cluster works much more like a stand-alone node.
How Stable Is My Network?
Networks are never really 100% reliable. Some “network problems” are actually due to outside factors, such as system resource contention (especially on virtual machines). Network problems can trigger inappropriate failovers. With XtraDB Cluster, use LAN segments to minimize replication traffic across the WAN.
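Segments are set through the Galera provider option gmcast.segment. As a minimal sketch, nodes in a second data center might be tagged like this (segment 0 is the default, so only the remote site needs the option):

```ini
[mysqld]
# Mark this node as belonging to the second data center's segment,
# so writesets cross the WAN once per segment instead of once per node
wsrep_provider_options = "gmcast.segment=1"
```

Nodes in the same segment relay traffic for each other, which concentrates WAN usage onto a single stream between sites.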
Making the right choice depends on:
- Knowing what you really need!
- Knowing your options.
- Knowing your constraints!
- Understanding the pros/cons of each solution
- Setting expectations properly!
For more information on how to plan your HA environment, and what tools are available, sign up for my webinar, Choosing a MySQL® High Availability Solution, on June 23, 2016 at 10:00 am. You can also get some great insights by watching these videos on our high availability video playlist.