Building a Mission-Critical Open Source Java Platform - The Web Layer
Currently the Java platform is one of the most consolidated in the world, thanks in large part to the platform's ability to support other languages such as Kotlin, Groovy, and Scala. Thousands of web and mobile applications are developed using the platform as a base. The Java platform has changed a lot in recent years and will probably keep evolving.
In parallel with these changes, we see the evolution of application servers, which increasingly offer highly complex features such as load-balancing components, smart asynchronous messaging, transaction control, and many other technologies that facilitate application development and standardization. They also provide a stable and scalable infrastructure for mission-critical applications. One of the biggest challenges for application servers is to combine highly complex services while keeping them stable and flexible.
If you want a highly available, scalable environment with no vendor lock-in, WildFly is an open source community option.
WildFly is an open source application server that is 100% compliant with the Java EE 8 specification. In earlier versions of Java EE, even when using only a few technologies for application development, we were required to deal with all the features deployed on the server. To solve this problem, starting with Java EE 6, the concept of profiles was introduced, which aims to create configurations with specific responsibilities, such as the Web Profile, which contains the technologies needed for web development.
In this architecture we will use WildFly with the "full-ha" profile to meet our clustering and load-balancing needs.
Domain Mode vs. Standalone Mode
WildFly has two operating modes, known as "standalone mode" and "domain mode." In standalone mode we work with a single instance, very similar to older JBoss versions such as JBoss AS 5 and JBoss AS 6.
The major problem with standalone mode is decentralized administration, which can cause a lot of headaches when replicating the settings on each server.
To solve this problem we have domain mode, in which all configurations are centralized in a component known as the master, and the servers are distributed as subordinates that obtain their configuration from the connection between master and subordinate.
When we start WildFly in domain mode with the default configuration, we have at least four processes: one host controller, one process controller, and two servers.
- Domain Controller: controls domain management. It holds the settings that are shared between the instances in the domain.
- Process Controller: of great importance because it is responsible for creating the instances and also the host controller, which we will talk about next. The process controller should not be confused with an instance; it is simply a process in the JVM.
- Host Controller: like the domain controller, the host controller also coordinates domain instances. It is responsible for doing something similar to a farm deployment, i.e., it distributes the deployment file to all instances of the domain.
- Servers: these are the instances themselves, where the applications are deployed. An important point is that each server is a separate Java process.
Among the benefits of using domain mode, we can cite:
- centralized management;
- centralized configuration;
- centralized deployment;
- centralized maintenance;
- high availability management.
You can find more information in the administration guide.
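As a minimal sketch of the two modes (assuming `WILDFLY_HOME` points at an unpacked WildFly distribution), each mode is launched with its own script from the distribution's `bin` directory. The commands are wrapped in functions here so the snippet can be sourced without actually starting a server:

```shell
# Assumption: $WILDFLY_HOME points at an unpacked WildFly distribution.
# Wrapped in functions so sourcing this file does not start anything.
start_standalone() {
  "$WILDFLY_HOME/bin/standalone.sh"   # a single, self-contained instance
}

start_domain() {
  "$WILDFLY_HOME/bin/domain.sh"       # process controller + host controller + servers
}
```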
We will use two servers with Keepalived + VIP to achieve high availability for the Apache web servers. In addition, we will use one domain controller acting as master and another as backup, and to support our applications, two host controllers with two servers each, totaling four WildFly instances.
You can see our architecture in the following image:
All of this lab is running on the very reliable oVirt virtualization platform. You can learn more about oVirt here: ovirt.org
Let's get started by installing and configuring Keepalived and Apache web server high availability, and of course getting the mod_cluster setup ready. These settings will be performed on the following servers:
- apache-httpd-01.mmagnani.lab - 10.0.0.191
- apache-httpd-02.mmagnani.lab - 10.0.0.192
- VIP - 10.0.0.190
To make it easier to understand where I am executing each command, I will use the server's prefix on each line.
These steps must be performed on both Apache web servers:
Apache Web Server - apache-httpd-01.mmagnani.lab
Apache Web Server - apache-httpd-02.mmagnani.lab
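The original command listing is not reproduced here, so the following is a sketch of the install and enable steps for a stock CentOS 8 box; the package names are an assumption. The steps are wrapped in a function so the snippet can be sourced without side effects:

```shell
# Hypothetical install/enable steps for CentOS 8 (package names assumed).
# Call install_ha_stack on both apache-httpd-01 and apache-httpd-02.
install_ha_stack() {
  sudo dnf install -y httpd keepalived
  sudo systemctl enable --now httpd keepalived
}
```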
Edit the keepalived configuration file and leave it as below (replace the values to match your environment).
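The original configuration listing is not reproduced here, so this is a minimal sketch of a VRRP master configuration: the interface name (`eth0`), `virtual_router_id`, `priority`, and `auth_pass` are assumptions you must adapt. It writes to `/tmp` so it is safe to run as-is; on the real server the file is `/etc/keepalived/keepalived.conf`:

```shell
# Sketch of a keepalived MASTER config for apache-httpd-01; written to /tmp
# for safety -- copy it to /etc/keepalived/keepalived.conf on the real host.
cat > /tmp/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER          # use BACKUP on apache-httpd-02
    interface eth0        # assumption: adjust to your NIC name
    virtual_router_id 51
    priority 100          # use a lower value (e.g. 90) on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        10.0.0.190/24     # the VIP from the architecture above
    }
}
EOF
```

On the backup node the same file is used with `state BACKUP` and a lower `priority`, so the VIP fails over when the master stops answering VRRP advertisements.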
Now open the browser and enter the VIP address, which in this case is 10.0.0.190. Note that the request was directed to Apache Web Server 01.
For a quick test, shut down Apache Web Server 01 and see that our backup server is responding:
[root@apache-httpd-01 ~]# shutdown now
Next we'll start the mod_cluster configuration and plug it into WildFly.
Compiling and Configuring mod_cluster
The best way to get up-to-date binaries for CentOS 8 is by compiling them yourself. I know this can cause some stress, but follow the steps below and everything will be very simple.
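The original build listing is not reproduced here; the sketch below shows one plausible procedure. The repository URL and the subdirectory names under `native/` are assumptions from memory — check the project's README before relying on them. It is wrapped in a function so sourcing the snippet does not trigger a build:

```shell
# Hypothetical build procedure for the mod_cluster native httpd modules.
# Repo URL and subdirectory names are assumptions -- verify against the
# project's README. Call build_mod_cluster to actually run it.
build_mod_cluster() {
  sudo dnf install -y gcc make autoconf libtool httpd-devel git
  git clone https://github.com/modcluster/mod_proxy_cluster.git
  cd mod_proxy_cluster/native
  for m in advertise mod_proxy_cluster mod_manager mod_cluster_slotmem; do
    (cd "$m" && ./buildconf && ./configure --with-apxs=/usr/bin/apxs && make)
  done
}
```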
Copy the newly built modules to the Apache web server's module directory:
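The module file names below are assumptions based on a typical mod_cluster build. On the real server the destination is `/etc/httpd/modules/`; the copy is simulated here under `/tmp` so the commands are safe to run anywhere:

```shell
# Simulated under /tmp; on the real server the destination is
# /etc/httpd/modules/ and the .so files come from the build above.
mkdir -p /tmp/build /tmp/httpd-modules
touch /tmp/build/mod_proxy_cluster.so /tmp/build/mod_manager.so \
      /tmp/build/mod_cluster_slotmem.so /tmp/build/mod_advertise.so
cp /tmp/build/*.so /tmp/httpd-modules/
```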
These same files must be copied to the module directory on Apache Web Server 02.
mod_proxy_balancer must be disabled when mod_cluster is used. To do this, on Apache Web Server 01 edit the 00-proxy.conf file and comment out the line that loads this module.
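The exact contents of 00-proxy.conf vary by distribution. The snippet below creates a sample file under `/tmp` and comments the `mod_proxy_balancer` LoadModule line, which is the same edit you would make to `/etc/httpd/conf.modules.d/00-proxy.conf`:

```shell
# Simulated under /tmp; on the real server edit
# /etc/httpd/conf.modules.d/00-proxy.conf instead.
printf '%s\n' \
  'LoadModule proxy_module modules/mod_proxy.so' \
  'LoadModule proxy_balancer_module modules/mod_proxy_balancer.so' \
  > /tmp/00-proxy.conf

# Comment out only the balancer module line (& is the matched text)
sed -i 's|^LoadModule proxy_balancer_module|#&|' /tmp/00-proxy.conf
```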
Do the same on Apache Web Server 02:
Now it is time to create the mod_cluster configuration file. On Apache Web Server 01, create the file mod_cluster.conf and add the configuration information:
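The original listing is not reproduced here; this is a minimal sketch built from mod_cluster's documented directives (`ServerAdvertise`, `EnableMCPMReceive`, and the `mod_cluster-manager` handler). The listen address and the `Require ip` rule are assumptions for this lab's 10.0.0.0/24 network, and port 9090 matches the manager URL used later. It writes to `/tmp` for safety:

```shell
# Sketch of mod_cluster.conf for apache-httpd-01; written to /tmp --
# copy to /etc/httpd/conf.d/mod_cluster.conf (use 10.0.0.192 on server 02).
cat > /tmp/mod_cluster.conf <<'EOF'
LoadModule proxy_cluster_module   modules/mod_proxy_cluster.so
LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
LoadModule manager_module         modules/mod_manager.so
LoadModule advertise_module       modules/mod_advertise.so

Listen 10.0.0.191:9090
<VirtualHost 10.0.0.191:9090>
    ServerAdvertise On
    EnableMCPMReceive

    <Location /mod_cluster_manager>
        SetHandler mod_cluster-manager
        Require ip 10.0.0
    </Location>
</VirtualHost>
EOF
```

After copying the file into place, restart httpd (`systemctl restart httpd`) so the balancer starts advertising itself to the WildFly instances.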
Now copy the file to Apache Web Server 02.
Reboot both servers to make sure everything comes up correctly.
Access the VIP again, on the mod_cluster port and context: http://10.0.0.190:9090/mod_cluster_manager. Note that the mod_cluster settings have been loaded successfully.
The first part of the setup is done. Of course, it's not ideal to disable SELinux/firewalld, but it keeps our demo simple; in a production environment, security should be taken seriously. Next up in this series, we'll be installing WildFly.
Published at DZone with permission of Mauricio Magnani. See the original article here.