
Consul, ProxySQL, and MySQL HA



When it comes to “decision time” about which type of MySQL HA (high-availability) solution to implement, and how to architect the solution, many questions come to mind. The most important questions are:

  • “What are the best tools to provide HA and Load Balancing?”
  • “Should I be deploying this proxy tool on my application servers or on a standalone server?”

Ultimately, the best tool really depends on the needs of your application and your environment. You might already be using specific tools such as Consul or MHA, or you might be looking to implement tools that provide richer features. The dilemma of deploying a proxy instance per application host versus a standalone proxy instance is usually a trade-off between a less effective load-balancing algorithm and a single point of failure. Neither is desirable, but there are ways to implement a solution that balances both concerns.

In this article, we’ll go through a solution that is suitable for an application that has not been coded to split reads and writes over separate MySQL instances. An application like this would rely on a proxy or third-party tool to split reads/writes, and preferably a solution that has high availability at the proxy layer. The solution described here consists of ProxySQL, Consul, and Master High Availability (MHA). We’ll focus on the configuration required for ProxySQL and Consul, since MHA configuration is covered in many other articles (such as Miguel’s recent MHA Quick Start Guide blog post).

When deploying Consul in production, a minimum of three instances is recommended. In this example, the Consul agents run on the Application Server (appserver) as well as on the two “ProxySQL servers” mysql1 and mysql2, which act as the HA proxy pair. This is not a hard requirement, and these instances can just as easily run on another host or in a Docker container. MySQL is deployed locally on mysql1 and mysql2, although this could just as well be 1..n separate standalone DB server instances:

[Image: architecture diagram — Consul agents on appserver, mysql1, and mysql2, with ProxySQL and MySQL on the two proxy hosts]


So let’s move on to the actual configuration of this HA solution, starting with Consul.

Installation of Consul

Firstly, we’ll need to install the required packages, download the Consul archive and perform the initial configuration. We’ll need to perform the same installation on each of the nodes (i.e., appserver, mysql1, and mysql2).
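The original command listing is not reproduced here, but since Consul ships as a single binary in a zip archive, the installation is short. A rough sketch, in which the Consul version, download URL, and directory paths are assumptions — adjust to your environment:

```shell
# Run on each node: appserver, mysql1, and mysql2 (assumes a RHEL/CentOS 7
# host, matching the DiG output later in this article)
sudo yum -y install unzip bind-utils

# Download and unpack the Consul binary; the version here is an assumption,
# pick a current release from releases.hashicorp.com
curl -sO https://releases.hashicorp.com/consul/0.7.0/consul_0.7.0_linux_amd64.zip
unzip consul_0.7.0_linux_amd64.zip
sudo mv consul /usr/local/bin/

# Create the configuration and data directories used in the next sections
sudo mkdir -p /etc/consul.d /opt/consul

consul version
```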

Now that we’re done with the installation on each of the hosts, let’s continue with the configuration. In this example, we’ll bootstrap the Consul cluster using “appserver”:
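The bootstrap configuration file is not reproduced above; a minimal sketch for appserver follows. The appserver IP (192.168.1.119), datacenter name, and paths are assumptions. Note the DNS port override: Consul serves DNS on port 8600 by default, while the dig queries later in this article use port 53:

```shell
# /etc/consul.d/consul.json on appserver (run as root)
mkdir -p /etc/consul.d /opt/consul
cat > /etc/consul.d/consul.json <<'EOF'
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul",
  "node_name": "appserver",
  "server": true,
  "bootstrap": true,
  "ui": true,
  "client_addr": "0.0.0.0",
  "bind_addr": "192.168.1.119",
  "ports": { "dns": 53 }
}
EOF

# Start the first (bootstrap) server agent; mysql1 and mysql2 will join it
consul agent -config-dir=/etc/consul.d &
```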

Configuration of Consul on Proxy Servers

The next item is to configure each of the proxy Consul agents. Note that the “agent name” and the “IP address” need to be updated for each host (values for both must be unique):
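The agent configuration files themselves are not shown above; here is a sketch of what they could look like on mysql1 (on mysql2, change node_name and bind_addr accordingly). The join address (appserver, assumed to be 192.168.1.119), the check command, and the 3s interval are assumptions. The service registration is what the proxysql.service.consul DNS name later in this article resolves from:

```shell
# /etc/consul.d/consul.json on mysql1 (run as root)
mkdir -p /etc/consul.d /opt/consul
cat > /etc/consul.d/consul.json <<'EOF'
{
  "datacenter": "dc1",
  "data_dir": "/opt/consul",
  "node_name": "mysql1",
  "server": true,
  "ui": true,
  "bind_addr": "192.168.1.120",
  "client_addr": "0.0.0.0",
  "ports": { "dns": 53 },
  "retry_join": ["192.168.1.119"]
}
EOF

# Register the ProxySQL service with a health check; as soon as the check
# fails, Consul stops returning this node in DNS answers. ("script" checks
# are Consul 0.x syntax; newer versions use "args" plus enable_script_checks.)
cat > /etc/consul.d/proxysql.json <<'EOF'
{
  "service": {
    "name": "proxysql",
    "tags": ["mysql"],
    "port": 6033,
    "check": {
      "script": "mysqladmin ping -h127.0.0.1 -P6033 -uroot -p123 >/dev/null 2>&1",
      "interval": "3s"
    }
  }
}
EOF

consul agent -config-dir=/etc/consul.d &
```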

Installation and Configuration of ProxySQL

The same procedure should be run on both mysql1 and mysql2 hosts:
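The ProxySQL commands are likewise not reproduced above. As a sketch: install the package, then configure the backends through ProxySQL’s admin interface (port 6032, default credentials admin/admin). The hostgroup numbers and the application user below are assumptions for illustration; the backend IPs match the hosts used in this article:

```shell
# On mysql1 and mysql2 (assumes a yum repository providing the proxysql
# package, e.g. the Percona repository)
sudo yum -y install proxysql
sudo service proxysql start

# Configure backends and users through the admin interface
mysql -h127.0.0.1 -P6032 -uadmin -padmin <<'EOF'
-- Writers live in hostgroup 0, readers in hostgroup 1; ProxySQL moves
-- servers between the two based on the read_only flag observed by the
-- monitor user created in the next step
INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup) VALUES (0, 1);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '192.168.1.120', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '192.168.1.121', 3306);
-- Hypothetical application user that ProxySQL forwards to the backends
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('percona', 'percona', 0);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;
EOF
```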

We also need to perform one configuration step on the MySQL servers in order to create a user for ProxySQL to monitor the instances:

### ProxySQL's monitor user on the master MySQL server (default username and password is monitor/monitor)
mysql -h192.168.1.120 -P3306 -uroot -p123 -e "GRANT USAGE ON *.* TO monitor@'%' IDENTIFIED BY 'monitor';"

We can view the configuration of the monitor user on the ProxySQL host by checking the global variables on the admin interface:

mysql> SHOW VARIABLES LIKE 'mysql-monitor%';
+----------------------------------------+---------+
| Variable_name                          | Value   |
+----------------------------------------+---------+
| mysql-monitor_enabled                  | true    |
| mysql-monitor_connect_timeout          | 200     |
| mysql-monitor_ping_max_failures        | 3       |
| mysql-monitor_ping_timeout             | 100     |
| mysql-monitor_replication_lag_interval | 10000   |
| mysql-monitor_replication_lag_timeout  | 1000    |
| mysql-monitor_username                 | monitor |
| mysql-monitor_password                 | monitor |
| mysql-monitor_query_interval           | 60000   |
| mysql-monitor_query_timeout            | 100     |
| mysql-monitor_slave_lag_when_null      | 60      |
| mysql-monitor_writer_is_also_reader    | true    |
| mysql-monitor_history                  | 600000  |
| mysql-monitor_connect_interval         | 60000   |
| mysql-monitor_ping_interval            | 10000   |
| mysql-monitor_read_only_interval       | 1500    |
| mysql-monitor_read_only_timeout        | 500     |
+----------------------------------------+---------+

Testing Consul

Now that Consul and ProxySQL are configured we can do some tests from the “appserver”. First, we’ll verify that the hosts we’ve added are both reporting [OK] on our DNS requests:

$ dig @127.0.0.1 -p 53 proxysql.service.consul

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> @127.0.0.1 -p 53 proxysql.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9975
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;proxysql.service.consul.      IN   A

;; ANSWER SECTION:
proxysql.service.consul. 0     IN   A    192.168.1.121
proxysql.service.consul. 0     IN   A    192.168.1.120

;; Query time: 1 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Sep 05 19:32:12 UTC 2016
;; MSG SIZE  rcvd: 158

As you can see from the output above, DNS is reporting both 192.168.1.120 and 192.168.1.121 as available for the ProxySQL service. As soon as the ProxySQL check fails on a node, that node will no longer appear in the output above.

We can also view the status of our cluster and agents through the Consul web GUI, which runs on port 8500 on all of the Consul servers in this configuration (e.g., http://192.168.1.120:8500/):

[Image: Consul web GUI]

Testing ProxySQL

So now that we have this configured we can also do some basic tests to see that ProxySQL is load balancing our connections:

[percona@appserver consul.d]$ mysql -h proxysql.service.consul -e "select @@hostname"
+--------------------+
| @@hostname         |
+--------------------+
| mysql1.localdomain |
+--------------------+
[percona@appserver consul.d]$ mysql -h proxysql.service.consul -e "select @@hostname"
+--------------------+
| @@hostname         |
+--------------------+
| mysql2.localdomain |
+--------------------+

Perfect! We’re ready to use the hostname “proxysql.service.consul” to connect to our MySQL instances through a round-robin load-balanced, highly available proxy layer. If one of the two ProxySQL instances fails, we’ll continue communicating with the database through the other. Of course, this configuration is not limited to two hosts, so feel free to add as many as you need. Be aware that in this example the two hosts’ replication hierarchy is managed by MHA in order to allow for master/slave promotion. When an automatic or manual failover is performed with MHA, ProxySQL automatically detects the change in replication topology and redirects writes to the newly promoted master instance.

To make this configuration more durable, it is encouraged to create a more intelligent Consul check, i.e., one that verifies more than just the availability of the MySQL service (for example, by selecting some data from a table). It is also recommended to fine-tune the check interval to suit the requirements of your application.
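As a hypothetical sketch of such a check (the table name, credentials, and script path are assumptions; the exit codes follow Consul's check convention, where 0 is passing and anything above 1 is critical):

```shell
# Install a richer Consul check for ProxySQL: instead of only pinging the
# proxy, run a real query through it, so the node is only advertised in
# DNS when end-to-end reads actually work
mkdir -p /etc/consul.d
cat > /etc/consul.d/proxysql-check.sh <<'EOF'
#!/bin/sh
# Exit 0 = passing; exit 2 = critical (Consul removes the node from DNS)
RESULT=$(mysql -h127.0.0.1 -P6033 -upercona -ppercona -N \
  -e "SELECT 1 FROM test.app_heartbeat LIMIT 1" 2>/dev/null)
[ "$RESULT" = "1" ] && exit 0
exit 2
EOF
chmod +x /etc/consul.d/proxysql-check.sh
```

Reference the script from the service definition's "check" stanza and tune its "interval" to match how quickly your application needs to react to a failed proxy.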



Published at DZone with permission of Peter Zaitsev, DZone MVB.
