
Setting Up a Server Cluster for Enterprise Web Apps – Part 1


Check out the first tutorial of this three-part series on creating a server cluster on Alibaba Cloud.


Written by Jeff Cleverley, Alibaba Cloud Tech Share author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

In this series of tutorials, we will set up a server cluster on Alibaba Cloud that is horizontally scalable and suitable for high-traffic web applications and enterprise business sites. It will consist of 3 web application servers and 1 load balancing server. Although we will be setting up and installing WordPress on the cluster, the cluster configuration detailed here is suitable for almost any PHP-based web application.

Each server will be running a LEMP stack (Linux, NGINX, MySQL, PHP). We will use Percona XtraDB Cluster as a drop-in replacement for MySQL; it will provide real-time database synchronization between the servers. For web application file replication and synchronization between servers, we will use GlusterFS.

NGINX as the Load Balancer

For the load balancer, we will use a lightweight NGINX server, which will also perform HTTPS termination. This task could be handled by an Alibaba Cloud Server Load Balancer, but it is a very simple server to set up, and I want control over HTTPS termination and SSL, so I have chosen to provision a separate server and demonstrate how it can be configured to do the job. We will use Let’s Encrypt to issue an SSL certificate to the load balancer.

The cluster’s initial configuration will balance the HTTP request load equally across the 3 application servers like so:


Equally balanced three Node Server Cluster with Load Balancer
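For readers who want a concrete picture before Part 2, here is a minimal sketch of the kind of NGINX configuration behind this equal balancing. The upstream name is illustrative, the private IPs are the example addresses used throughout this series, and the real configuration, including HTTPS termination, is built in the next tutorial:

upstream app_cluster {
    # NGINX round-robins requests across these nodes by default
    server 172.20.62.56;
    server 172.20.213.159;
    server 172.20.213.160;
}

server {
    listen 80;
    server_name yet-another-example.com www.yet-another-example.com;

    location / {
        proxy_pass http://app_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}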


This configuration provides excellent redundancy should any of the servers fail, and it also lets you stage an upgrade of your servers one at a time.

However, one of the other advantages of using a server cluster with load balancing is that we can weight traffic to each server, or route specific traffic to any individual node.

If a web application is used in an enterprise at any scale, there is a high probability that administrators or other back-end users of the application are doing lots of data processing or other CPU-intensive tasks. Batch processing jobs, such as imports, exports, or calculations, occurring on the admin side of the application could overutilize the server's resources and lead to a slowdown on the visitor-facing front end of the site.

Using a server cluster and effective load balancing, we can architect a solution that separates these concerns: directing administration and back-end user traffic to one server, where batch processing and other intensive tasks can be performed, while directing all other site visitors and web traffic to the other servers. As long as we have an effective solution for database and file replication, and the three servers are kept synchronized, web traffic will not be affected and will receive the results of any work being done on the administration node.

Such a solution can be visualized like so:

Three Node Cluster with Load Balancer redirecting Admin traffic

As you can see from above, Node 1 is now only used for administration, while Nodes 2 and 3 will serve web traffic. In these tutorials, we will configure our cluster in both ways, one after the other.
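As a preview of how such routing can be expressed (the full configuration is covered in Part 3), here is a rough NGINX sketch; the upstream names are illustrative, and the admin paths assume a WordPress-style application:

upstream admin_node {
    server 172.20.62.56;
}

upstream visitor_nodes {
    # Weights can skew the split further if the nodes differ in capacity
    server 172.20.213.159 weight=1;
    server 172.20.213.160 weight=1;
}

server {
    listen 80;
    server_name yet-another-example.com www.yet-another-example.com;

    # Send resource-heavy administration traffic to node1 only
    location ~ ^/(wp-admin|wp-login\.php) {
        proxy_pass http://admin_node;
        proxy_set_header Host $host;
    }

    location / {
        proxy_pass http://visitor_nodes;
        proxy_set_header Host $host;
    }
}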

In this series of tutorials, I will be using the root user; if you are using a superuser, please remember to add the sudo command before any commands where necessary. I will also be using a test domain, yet-another-example.com; remember to replace it with your own domain when issuing commands.

In the commands, I will also be using my servers' private and public IP addresses; please remember to use your own when following along.

Step 1: In the Alibaba Cloud Console

Prepare Your Servers

You will need to provision 4 Alibaba Cloud instances of whatever size best fits your needs.

For the purposes of these tutorials, you should upload your public SSH key to each server as it is created.

It is best to change their hostnames to better illustrate their roles in the cluster. I have chosen to give mine the following hostnames:

  • node1

  • node2

  • node3

  • load-balancer

Since we will be working in the terminal on 4 different servers, it is best to name them appropriately to avoid confusion and mistakenly issuing commands on the wrong instance.

To do that, click Modify Information on each server and give each an appropriate hostname:

Modify the information of each server


Name each server appropriately - Node[n] for nodes


Name each server appropriately - Load balancer

You also need to make sure to take a record of each of your servers' private IP addresses.
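You can read these from the console, or from each server itself; for example, the following should print the instance's private address (the interface name may differ on your instances):

 # ip -4 addr show eth0 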

Open Ports in Your Security Group

In addition to the usual ports, we will need to open several others. These are required for the Percona XtraDB Cluster database on each node to communicate with the other nodes via their private IP addresses.

Percona requires the following ports to be opened:

  • 3306 TCP Inbound/Outbound (standard MySQL port)

  • 4444 TCP Inbound/Outbound (State Snapshot Transfer, SST)

  • 4567 TCP Inbound/Outbound (Galera cluster replication traffic)

  • 4568 TCP Inbound/Outbound (Incremental State Transfer, IST)

Your security group inbound port configuration should now look like this:

Security Group Inbound Port Configuration


Your security group outbound port configuration should now look like this:

Security Group Outbound Port Configuration


Add Your Domain


You should add the domain you will use in the Alibaba Cloud DNS:

Add Domain in Alibaba Cloud DNS


As the A record set the value as the public IP address of the Server that will be acting as the Load Balancer:

Add A record with IP Address of Load Balancing Server


Make sure to remember to add the CNAME record for the www host; this will be required later when we issue our SSL certificate:

Add the WWW host record too
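Once the records have propagated, you can optionally confirm them from any machine using dig, substituting your own domain:

 $ dig +short yet-another-example.com
 $ dig +short www.yet-another-example.com 

Both should resolve to the public IP address of your load balancing server.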


Step 2: Install and Configure Percona XtraDB Cluster

Percona provides an excellent, highly optimized drop-in replacement for MySQL. In today’s tutorial, we will be using their variant created specifically for database replication and synchronization: Percona XtraDB Cluster.

In Percona’s own words:

“Percona XtraDB Cluster is an active/active high availability and high scalability open source solution for MySQL ® clustering. It integrates Percona Server and Percona XtraBackup with the Codership Galera library of MySQL high availability solutions in a single package that enables you to create a cost-effective MySQL high availability cluster.”

But before we install and configure it, we have a few things to do.

Check Private Networking is Working

Log in to each of your nodes:

 $ ssh root@node_public_ip_address 

Now, on each of your server nodes, try to ping the other nodes. In my case, to ping node2 from node1:

 # ping 172.20.213.159 

If you have a private network connection, you should see some metrics confirming the packet send and receive:

Ping your other nodes' private IP addresses from each node
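If you would rather check all peers from one node in a single pass, a small loop does the same job (shown here from node1, using my example private IPs for node2 and node3):

for ip in 172.20.213.159 172.20.213.160; do
    ping -c 3 "$ip"
done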

Install Percona XtraDB Cluster

On each of your nodes, issue the following commands:


# wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
# dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
# apt-get update
# apt-get upgrade


And then finally install the package:

 # apt-get install percona-xtradb-cluster-57  

During the installation, you will be asked to set up a new root password for the cluster. Make sure you use the same root password for each installation; otherwise, you will cause yourself some difficulties later.

Configure Database Replication

Create a Replication User

To do this, we need a replication user, but we only need to configure it on node1.

Log in to MySQL using your root password:

 # mysql -u root -p 

Now create a new user and password for replication purposes; make sure to keep a note of them, as we will need them soon. You should use a strong password.

CREATE USER 'new_user'@'localhost' IDENTIFIED BY 'new_users_password';

Then grant the privileges the replication user requires:

GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'new_user'@'localhost';

Then flush privileges and exit:

FLUSH PRIVILEGES;
EXIT;

Your terminal should look like this:

Create your Database Replication User
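If you want to confirm the user exists before moving on, you can run a quick check from the same MySQL session:

SELECT user, host FROM mysql.user WHERE user = 'new_user';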

Customise the Replication Configuration Files

On each node, open the wsrep.cnf replication configuration file for editing:

 # nano /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf 

You will need to enter slightly different parameters on each node, as follows:

  1. On every node’s configuration file, enter the 3 private IP addresses, separated by commas, for wsrep_cluster_address

  2. Enter that node’s private IP address for wsrep_node_address

  3. Enter a different node name for wsrep_node_name on each node

  4. On every node’s configuration file, enter the replication user and password created above for wsrep_sst_auth

Your wsrep.cnf configuration files should contain the following (remember to swap in your IP addresses and password):

Node1 Configuration


[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://172.20.62.56,172.20.213.159,172.20.213.160

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=172.20.62.56
# Cluster name
wsrep_cluster_name=pxc-cluster

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=pxc-cluster-node-1

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth=new_user:new_users_password



Node2 Configuration


[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://172.20.62.56,172.20.213.159,172.20.213.160

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=172.20.213.159
# Cluster name
wsrep_cluster_name=pxc-cluster

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=pxc-cluster-node-2

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth=new_user:new_users_password



Node3 Configuration


[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://172.20.62.56,172.20.213.159,172.20.213.160

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=172.20.213.160
# Cluster name
wsrep_cluster_name=pxc-cluster

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=pxc-cluster-node-3

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth=new_user:new_users_password



In the terminal, the configurations should look like this:

Enter all the nodes' private IP addresses in each configuration


node1 configuration


node2 configuration


node3 configuration



Bootstrap the Cluster

Bootstrap the cluster by running the following command on node1:

 # /etc/init.d/mysql bootstrap-pxc 

Your terminal should give you an [ OK ] response:

Bootstrap the Cluster on node1

You can check that it's running, now or at any time, by logging into MySQL and running the following:

 show status like 'wsrep%'; 
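In particular, wsrep_cluster_size is a quick health indicator; once all three nodes have joined, it should report a value of 3:

 show status like 'wsrep_cluster_size'; 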

Now on node2 and node3 run the following:

 # /etc/init.d/mysql start 

On each server, your terminal should respond with an OK:

Start the other two nodes in the Percona cluster


Test Database Replication

You should now have a Percona Database cluster with 3 nodes all replicating data to each other, but let’s test that.

On each node, log into MySQL:

 # mysql -u root -p  

Now on each node, show your databases:  

SHOW DATABASES;

 

Node1 Databases


Node2 Databases


Node3 Databases

On node1 only, create a new database called test_sync:

 CREATE DATABASE test_sync; 

After doing that, show your databases again on node2 and node3:  

SHOW DATABASES;

You should now see that the test_sync database you created on node1 has been replicated to both node2 and node3:

Test_sync has been replicated to Node2


Test_sync has been replicated to node3

Success! We have database replication.

Step 3: Install and Configure the GlusterFS File Network

GlusterFS is an open-source, scalable network filesystem suited to data-intensive tasks such as cloud storage and media streaming.

Using GlusterFS ensures that each of our nodes operates off the same files, and it also provides data safety through redundancy. We will create our volume with triple redundancy (replica 3), which means that in our 3-node cluster each node will hold an entire copy of all files; if we scale up to more nodes, we will start to see space savings.

Install GlusterFS on Every Node

First, install the Ubuntu attr package, which provides utilities for working with extended file attributes:

 # apt-get install attr 

Then install GlusterFS:

 # apt-get install glusterfs-server 

Create a Gluster Volume and Attach Directories

On node1 only

Run the following commands using the private IP addresses of the other two nodes:

# gluster peer probe 172.20.213.159
# gluster peer probe 172.20.213.160


Peer probe should respond with a success message like so:

Successful Gluster Peer Probe
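You can also confirm the peers from node1 at any time; this should list the other two nodes with a connected state:

 # gluster peer status 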


After that, create a volume named glustervolume with the following command, using the private IP addresses of all three nodes:

 # gluster volume create glustervolume replica 3 transport tcp 172.20.62.56:/gluster-storage 172.20.213.159:/gluster-storage 172.20.213.160:/gluster-storage force

And then start the volume:

 # gluster volume start glustervolume 

Create and Start a Gluster Volume
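To verify that the volume is started and configured with all three bricks, you can inspect it from any node:

 # gluster volume info glustervolume 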

On every node

We are now ready to link directories from every node into our glustervolume.

To do that, create a root directory for our web application in the /var/www/ directory on each node:

 # mkdir /var/www/yet-another-example.com -p 

Mount the directory and link it to the glustervolume:

 # mount -t glusterfs localhost:/glustervolume /var/www/yet-another-example.com 


Create Web Application Root Directories on Each Node and link them up
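Note that a mount made this way will not survive a reboot. If you want it to persist, one common approach is an /etc/fstab entry along these lines (treat this as a sketch, as mount options vary by setup):

localhost:/glustervolume /var/www/yet-another-example.com glusterfs defaults,_netdev 0 0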

Test File Replication

We should now have a GlusterFS cluster fileserver with redundancy, into which we can install our web application of choice: WordPress.

However, before we do that, it makes sense to test it.

On any node

Create a test.html file within the mounted /var/www/yet-another-example.com directory on any of the nodes:

 # touch /var/www/yet-another-example.com/test.html 

Create a test file on any node


On any other node

On one of the other nodes, change into the mounted directory and list its contents to ensure the test.html file has been replicated:

 # cd /var/www/yet-another-example.com && ls 

If all is working, you should see something like this, showing the test.html file created on the first node appearing in the other node's directory:

Check to see if it has been replicated on another node

Success!

We now have a cluster of servers with working database replication and a distributed filesystem (with redundancy), courtesy of Percona XtraDB Cluster and GlusterFS respectively.

In the next tutorial, we will complete the installation of our LEMP stack by installing PHP 7 and NGINX. Then we will configure each of our node servers' NGINX virtual hosts and our load balancer's NGINX configuration and SSL certificate, before installing WordPress. By the end of Part 2, we will have a fully working, equally load balanced server cluster running WordPress, served over HTTPS.

In the final tutorial, we will reconfigure the cluster to reserve resource-heavy administration access to one node and weight the other 2 nodes for visitor access. We will also enable FastCGI caching, and harden our cluster by securing our database ports and restricting access to our GlusterFS filesystem.


