Building a Mission-Critical Open Source Java Platform - The Web Layer
Currently the Java platform is one of the most consolidated in the world, largely due to its ability to support other languages such as Kotlin, Groovy, and Scala. Thousands of web and mobile applications are developed using the platform as a base. The Java platform has changed a lot in recent years and will probably keep evolving.
In parallel with these changes, we see the evolution of application servers, which increasingly offer highly complex features such as load-balancing components, smart asynchronous messaging, transaction control, and many other technologies that facilitate application development and standardization. They also provide a stable and scalable infrastructure for mission-critical applications. One of the biggest challenges for application servers is to combine highly complex services while keeping them stable and flexible.

If you want a highly available, scalable environment with no vendor lock-in, WildFly is an open source community option.
WildFly
WildFly is an open source application server that is 100% compliant with the Java EE 8 specification. In previous versions of Java EE, even when using only a few technologies for application development, we were required to deal with all the features deployed on the server. To solve this problem, starting with Java EE 6, the concept of profiles was introduced, which aims to create configurations with specific responsibilities, such as the Web Profile, which groups the technologies needed for web development.

In this architecture we will use WildFly with the "full-ha" profile to meet our clustering and load-balancing needs.
Domain Mode vs. Standalone Mode
WildFly has two operating modes, known as "standalone mode" and "domain mode." In standalone mode we work with a single instance, much like older JBoss versions such as JBoss AS 5 and JBoss AS 6.

The major problem with standalone mode is decentralized administration, which can cause a lot of headaches when replicating the settings on each server.

To solve this problem we have domain mode, in which all configurations are centralized in a component known as the master, and the servers are distributed among subordinates that obtain their configuration from the connection between master and subordinate.

When we start WildFly in domain mode with the default configuration, we have at least four processes: one host controller, one process controller, and two servers.
- Domain controller: controls domain management. It holds the settings that are shared among the instances in the domain.
- Process controller: is of great importance because it is responsible for creating the instances and also the host controller, which we will talk about next. The process controller should not be confused with an instance; it is simply a process in the JVM.
- Host controller: like the domain controller, the host controller coordinates domain instances. It is responsible for something similar to a farm deployment, i.e., it distributes the deployment file to all instances of the domain.
- Servers: these are the instances themselves, where the applications are deployed. An important point is that each server is a separate Java process.
Among the benefits of using domain mode are:
- centralized management;
- centralized configuration;
- centralized deployment;
- centralized maintenance;
- high-availability management.
You can find more information in the Administration Guide.
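As an illustration, in domain mode the master and the subordinate host controllers are started with different host configuration files, both of which ship with WildFly. The commands below are a minimal sketch; `10.0.0.x` is a placeholder for your master's management address, not a value from this lab:

```shell
# On the master: host-master.xml enables the domain controller role on this host.
./bin/domain.sh --host-config=host-master.xml -bmanagement=10.0.0.x

# On each subordinate: host-slave.xml registers this host controller
# with the master at the given address.
./bin/domain.sh --host-config=host-slave.xml -Djboss.domain.master.address=10.0.0.x
```

We will walk through the actual WildFly setup in the next part of this series.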
Architecture
We will use two servers with Keepalived + a VIP to achieve high availability with the Apache web servers. In addition, we will use one domain controller acting as master and another as backup, and, to support our applications, two host controllers with two servers each, totaling four WildFly instances.

You can see our architecture in the following image:

This entire lab runs on the very reliable oVirt virtualization platform. You can learn more about oVirt at ovirt.org.

Let's get started by installing and configuring Keepalived and the Apache web servers for high availability, and of course getting the mod_cluster setup ready. These settings will be performed on the following servers:
- apache-httpd-01.mmagnani.lab - 10.0.0.191
- apache-httpd-02.mmagnani.lab - 10.0.0.192
- VIP - 10.0.0.190
To make it easier to understand where I am executing each command, I will use the server's prefix on each line.

These steps must be performed on both Apache web servers:

Apache Web Server - apache-httpd-01.mmagnani.lab
[root@apache-httpd-01 ~]# cat /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
[root@apache-httpd-01 ~]# dnf update -y
[root@apache-httpd-01 ~]# dnf install httpd -y
[root@apache-httpd-01 ~]# echo "Web Server 01" > /var/www/html/index.html
[root@apache-httpd-01 ~]# systemctl enable httpd
[root@apache-httpd-01 ~]# systemctl stop firewalld
[root@apache-httpd-01 ~]# systemctl disable firewalld
[root@apache-httpd-01 ~]# setenforce 0
[root@apache-httpd-01 ~]# sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
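To see exactly what that sed expression does without touching the real file, here is a harmless dry run on a temporary copy (note that the key is uppercase `SELINUX=` in the actual /etc/selinux/config):

```shell
# Make a throwaway copy that mimics the relevant lines of /etc/selinux/config.
tmpcfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmpcfg"

# Same substitution as above: only the SELINUX= line changes.
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$tmpcfg"

grep '^SELINUX=' "$tmpcfg"   # prints: SELINUX=disabled
```

The anchored `^SELINUX=` pattern leaves `SELINUXTYPE=targeted` untouched, which is exactly what we want.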
[root@apache-httpd-01 ~]# dnf install keepalived -y
[root@apache-httpd-01 ~]# systemctl enable keepalived
[root@apache-httpd-01 ~]# rm -rf /etc/keepalived/keepalived.conf
[root@apache-httpd-01 ~]# vi /etc/keepalived/keepalived.conf
### Active server
vrrp_script chk_httpd {
    script "pidof httpd"
    interval 2
}

vrrp_instance VI_1 {
    interface ens3
    state MASTER
    advert_int 2
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass simplepassword
    }
    unicast_src_ip 10.0.0.191
    unicast_peer {
        10.0.0.192
    }
    track_script {
        chk_httpd
    }
    virtual_ipaddress {
        10.0.0.190
    }
}
[root@apache-httpd-01 ~]# systemctl restart httpd
[root@apache-httpd-01 ~]# systemctl restart keepalived
Apache Web Server - apache-httpd-02.mmagnani.lab
[root@apache-httpd-02 ~]# cat /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
[root@apache-httpd-02 ~]# dnf update -y
[root@apache-httpd-02 ~]# dnf install httpd -y
[root@apache-httpd-02 ~]# echo "Web Server 02" > /var/www/html/index.html
[root@apache-httpd-02 ~]# systemctl enable httpd
[root@apache-httpd-02 ~]# systemctl stop firewalld
[root@apache-httpd-02 ~]# systemctl disable firewalld
[root@apache-httpd-02 ~]# setenforce 0
[root@apache-httpd-02 ~]# sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
[root@apache-httpd-02 ~]# dnf install keepalived -y
[root@apache-httpd-02 ~]# systemctl enable keepalived
[root@apache-httpd-02 ~]# rm -rf /etc/keepalived/keepalived.conf
Edit the file and leave it as below (replace the values to match your environment):
[root@apache-httpd-02 ~]# vi /etc/keepalived/keepalived.conf
### Passive server
vrrp_script chk_httpd {
    script "pidof httpd"
    interval 2
}

vrrp_instance VI_1 {
    interface ens3
    state BACKUP
    advert_int 2
    virtual_router_id 51
    priority 50
    authentication {
        auth_type PASS
        auth_pass simplepassword
    }
    unicast_src_ip 10.0.0.192
    unicast_peer {
        10.0.0.191
    }
    track_script {
        chk_httpd
    }
    virtual_ipaddress {
        10.0.0.190
    }
}
[root@apache-httpd-02 ~]# systemctl restart httpd
[root@apache-httpd-02 ~]# systemctl restart keepalived
Now open the browser and enter the VIP address, which in this case is 10.0.0.190. Note that the request was directed to Apache web server 01.

For a quick test, turn off Apache web server 01 and see that our backup server is responding:
[root@apache-httpd-01 ~]# shutdown now
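What makes the backup take over is VRRP's election rule: among reachable nodes, the one with the highest priority owns the VIP (100 on server 01 vs. 50 on server 02 in our configs). The helper below is a toy illustration of that rule, not part of Keepalived:

```shell
# elect_master "name:priority" ... -> prints the name with the highest priority,
# mimicking which node would claim the VIP.
elect_master() {
  printf '%s\n' "$@" | sort -t: -k2,2 -rn | head -n1 | cut -d: -f1
}

elect_master "apache-httpd-01:100" "apache-httpd-02:50"   # prints: apache-httpd-01
elect_master "apache-httpd-02:50"                         # prints: apache-httpd-02
```

The second call models server 01 being down: with only one candidate left, server 02 wins and starts answering on 10.0.0.190.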
Next, we'll start the mod_cluster configuration and plug it into WildFly.
Compiling and Configuring mod_cluster
The best way to get up-to-date binaries for CentOS 8 is to compile them yourself. I know this can cause some stress, but follow the steps below and everything will be very simple.
[root@apache-httpd-01 ~]# dnf install git make cmake gcc gcc-c++ apr-util apr-devel apr-util-devel httpd-devel -y
[root@apache-httpd-01 ~]# cd /tmp
[root@apache-httpd-01 tmp]# git clone https://github.com/modcluster/mod_cluster.git
[root@apache-httpd-01 tmp]# cd mod_cluster
[root@apache-httpd-01 mod_cluster]# git checkout 1.3.x
[root@apache-httpd-01 mod_cluster]# cd native
[root@apache-httpd-01 native]# mkdir build
[root@apache-httpd-01 native]# cd build
[root@apache-httpd-01 build]# cmake ../ -G "Unix Makefiles"
[root@apache-httpd-01 build]# make
Copy the newly built modules to the Apache web server module directory:
[root@apache-httpd-01 build]# pwd
/tmp/mod_cluster/native/build
[root@apache-httpd-01 build]# cp modules/mod_proxy_cluster.so /etc/httpd/modules/
[root@apache-httpd-01 build]# cp modules/mod_cluster_slotmem.so /etc/httpd/modules/
[root@apache-httpd-01 build]# cp modules/mod_manager.so /etc/httpd/modules/
[root@apache-httpd-01 build]# cp modules/mod_advertise.so /etc/httpd/modules/
These same files must be copied to the Apache web server 02 module directory:
[root@apache-httpd-01 build]# scp modules/mod_cluster_slotmem.so root@10.0.0.192:/etc/httpd/modules/
[root@apache-httpd-01 build]# scp modules/mod_proxy_cluster.so root@10.0.0.192:/etc/httpd/modules/
[root@apache-httpd-01 build]# scp modules/mod_manager.so root@10.0.0.192:/etc/httpd/modules/
[root@apache-httpd-01 build]# scp modules/mod_advertise.so root@10.0.0.192:/etc/httpd/modules/
mod_proxy_balancer must be disabled when mod_cluster is used. To do this, on Apache web server 01, edit the 00-proxy.conf file and comment out the line referring to this module:
[root@apache-httpd-01 ~]# vi /etc/httpd/conf.modules.d/00-proxy.conf
#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
Do the same on Apache web server 02:
[root@apache-httpd-02 ~]# vi /etc/httpd/conf.modules.d/00-proxy.conf
#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
Now it is time to create the mod_cluster configuration file. On Apache web server 01, create the file mod_cluster.conf and add the configuration:
[root@apache-httpd-01 ~]# vi /etc/httpd/conf.d/mod_cluster.conf
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule advertise_module modules/mod_advertise.so

MemManagerFile cache/mod_cluster

<IfModule manager_module>
    Listen 9090
    ManagerBalancerName mycluster
    MaxContext 200
    MaxNode 200
    MaxHost 200
    ProxyTimeout 600
    CreateBalancers 1
    <VirtualHost *:9090>
        <Directory />
            Require all granted
        </Directory>
        <Location />
            Require all granted
        </Location>
        KeepAliveTimeout 300
        MaxKeepAliveRequests 0
        EnableMCPMReceive On
        AllowDisplay On
        <Location /mod_cluster_manager>
            SetHandler mod_cluster-manager
            Require all granted
        </Location>
    </VirtualHost>
</IfModule>
Now copy the file to Apache web server 02:
[root@apache-httpd-01 ~]# scp /etc/httpd/conf.d/mod_cluster.conf root@10.0.0.192:/etc/httpd/conf.d/
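Before rebooting, it's worth asking Apache to validate the new configuration on both servers; a typo in a module path or directive shows up here immediately instead of as a failed startup:

```shell
# Run on both apache-httpd-01 and apache-httpd-02; expect "Syntax OK".
apachectl configtest
```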
Reboot both servers to make sure everything comes up correctly:
[root@apache-httpd-01 ~]# reboot
[root@apache-httpd-02 ~]# reboot
Access the VIP again on the mod_cluster port and context: http://10.0.0.190:9090/mod_cluster_manager. Note that the mod_cluster settings have been loaded successfully.
The first part of the setup is done. Of course, it's not great to disable SELinux/firewalld, but this is just to make our demo life easier; in a production environment, security should be taken seriously. Next up in this series, we'll install WildFly.
Published at DZone with permission of Mauricio Magnani. See the original article here.