Building a Mission-Critical Open Source Java Platform - Tuning WildFly
It's time for some high performance tuning of our platform with some adjustments to the WildFly installation configuration...
In the previous articles in this series, we introduced our thoughts on setting up a mission-critical open source Java platform with high availability in the web layer, installed WildFly, and configured our platform's topology. It's time for some high-performance tuning of our platform with some adjustments to the WildFly installation configuration.
Note: The terminology in this article, where possible, has been adjusted from the traditional master/slave descriptions to master/subordinate. Some of the images displayed have not yet been updated and show the older terminology.
WildFly With mod_cluster
If you followed the previous articles, you will now have Apache Web Server HA in your environment as well as WildFly running in domain mode with four WildFly instances.
The first step will be to add two new properties to our WildFly instances. The properties must be added within each instance (server), so let's start by editing the host-slave.xml file on subordinate 0:
[root@server-subordinate-0 ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-subordinate-0 ~]# vim subordinate0/configuration/host-slave.xml
##around line 89
<servers>
    <server name="server-marketing-0" group="marketing">
        <system-properties>
            <property name="jboss.node.name" value="node-marketing-0" boot-time="true"/>
            <property name="wildfly.balancer.name" value="marketing-lb" boot-time="true"/>
        </system-properties>
    </server>
    <server name="server-accounting-0" group="accounting">
        <system-properties>
            <property name="jboss.node.name" value="node-accounting-0" boot-time="true"/>
            <property name="wildfly.balancer.name" value="accounting-lb" boot-time="true"/>
        </system-properties>
        <socket-bindings port-offset="100"/>
    </server>
</servers>
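If you prefer not to edit host-slave.xml by hand, the same properties can be set from a jboss-cli session. This is only a sketch: it assumes the host registered itself with the domain controller under the name subordinate0 (the same name that appears later under Runtime -> Hosts) and that you are connected to the master on the default management port.

[root@server-domain ~]# /usr/local/wildfly/wildfly-20.0.0.Final/bin/jboss-cli.sh --connect
## set the node name and balancer name for server-marketing-0 on host subordinate0
[domain@localhost:9990 /] /host=subordinate0/server-config=server-marketing-0/system-property=jboss.node.name:add(value=node-marketing-0, boot-time=true)
[domain@localhost:9990 /] /host=subordinate0/server-config=server-marketing-0/system-property=wildfly.balancer.name:add(value=marketing-lb, boot-time=true)

Repeat the same two commands for the other server-config resources (server-accounting-0 and the servers on subordinate 1), adjusting the values accordingly.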
##suppressed
Now perform the same procedure for subordinate 1:
[root@server-subordinate-1 ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-subordinate-1 ~]# vim subordinate1/configuration/host-slave.xml
##around line 89
<servers>
    <server name="server-marketing-1" group="marketing">
        <system-properties>
            <property name="jboss.node.name" value="node-marketing-1" boot-time="true"/>
            <property name="wildfly.balancer.name" value="marketing-lb" boot-time="true"/>
        </system-properties>
    </server>
    <server name="server-accounting-1" group="accounting">
        <system-properties>
            <property name="jboss.node.name" value="node-accounting-1" boot-time="true"/>
            <property name="wildfly.balancer.name" value="accounting-lb" boot-time="true"/>
        </system-properties>
        <socket-bindings port-offset="100"/>
    </server>
</servers>
##suppressed
As you may have noticed, every configuration of the technologies available in WildFly is done on the master. Remember that we currently have two server groups, marketing and accounting, which use the full-ha profile and the full-ha-sockets socket binding group.
Next, on the master, edit the domain.xml file and add a new outbound socket binding pointing to the VIP defined in the first article of this series:
[root@server-domain ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-domain ~]# vim master/configuration/domain.xml
##around line 1883 // full-ha-sockets
<outbound-socket-binding name="proxy">
    <remote-destination host="10.0.0.190" port="9090"/>
</outbound-socket-binding>
##suppressed
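The equivalent change can also be made from a jboss-cli session connected to the master instead of editing the XML; a sketch, assuming the default management port:

[domain@localhost:9990 /] /socket-binding-group=full-ha-sockets/remote-destination-outbound-socket-binding=proxy:add(host=10.0.0.190, port=9090)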
Within the full-ha profile, edit the modcluster subsystem so that the proxy references the outbound socket binding (proxies) and the balancer name defined above. The instance-id of each node is taken from the jboss.node.name property we set per server.
##around line 1661 // profile full-ha
<proxy name="default" advertise-socket="modcluster" listener="ajp" proxies="proxy" balancer="${wildfly.balancer.name}">
##suppressed
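Again, the same attributes can be set via jboss-cli rather than editing the XML; a sketch, assuming the proxy resource is named default as in the stock full-ha profile:

[domain@localhost:9990 /] /profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=balancer, value="${wildfly.balancer.name}")
[domain@localhost:9990 /] /profile=full-ha/subsystem=modcluster/proxy=default:list-add(name=proxies, value=proxy)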
Restart the WildFly service on the master and then on the subordinates.
[root@server-domain ~]# systemctl restart wildfly
[root@server-subordinate-0 ~]# systemctl restart wildfly
[root@server-subordinate-1 ~]# systemctl restart wildfly
Access the VIP on the mod_cluster port and context:
http://10.0.0.190:9090/mod_cluster_manager
Note that our instances are now connected and ready to use:
Deploy the cluster.war application. Open the management console and go to Deployments -> Content Repository and upload the application. After that, Deploy -> Deploy Content -> choose the marketing server group.
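If you prefer the command line, the same deployment can be pushed from a jboss-cli session connected to the master; a sketch, assuming cluster.war sits in /tmp on the domain controller:

[domain@localhost:9990 /] deploy /tmp/cluster.war --server-groups=marketing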
If the application is successfully deployed, you will see the context in mod_cluster_manager:
http://10.0.0.190:9090/mod_cluster_manager
If you try to access the application context (http://10.0.0.190/cluster) through the VIP, you will get a 404 because the virtual host for that application has not yet been configured.
So now let's create a new virtual host to make the application that was deployed in the marketing group available through the VIP.
[root@apache-httpd-01 ~]# vim /etc/httpd/conf.d/virtual_host.conf
<VirtualHost *:80>
    ServerName marketing.mmagnani.lab
    ProxyPass / balancer://marketing-lb/ stickysession=JSESSIONID|jsessionid nofailover=On
    ProxyPassReverse / balancer://marketing-lb/
</VirtualHost>
[root@apache-httpd-02 ~]# vi /etc/httpd/conf.d/virtual_host.conf
<VirtualHost *:80>
    ServerName marketing.mmagnani.lab
    ProxyPass / balancer://marketing-lb/ stickysession=JSESSIONID|jsessionid nofailover=On
    ProxyPassReverse / balancer://marketing-lb/
</VirtualHost>
Add an entry in your DNS for the name marketing.mmagnani.lab pointing to the VIP, which in this case is 10.0.0.190.
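Before (or instead of) touching DNS, you can verify the new virtual host from any machine that can reach the VIP by forcing the Host header; a quick sanity check, assuming curl is installed:

[root@apache-httpd-01 ~]# curl -s -o /dev/null -w "%{http_code}\n" -H "Host: marketing.mmagnani.lab" http://10.0.0.190/cluster/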
On the master, edit the domain.xml file and update the default virtual host in the undertow subsystem.
[root@server-domain ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-domain ~]# vim master/configuration/domain.xml
##around line 1739
<host name="default-host" alias="marketing.mmagnani.lab" default-web-module="cluster.war" />
##suppressed
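The same change through jboss-cli would look roughly like this (alias is a list attribute, so the new name is appended rather than overwritten); a sketch, assuming the stock default-server and default-host names:

[domain@localhost:9990 /] /profile=full-ha/subsystem=undertow/server=default-server/host=default-host:write-attribute(name=default-web-module, value=cluster.war)
[domain@localhost:9990 /] /profile=full-ha/subsystem=undertow/server=default-server/host=default-host:list-add(name=alias, value=marketing.mmagnani.lab)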
Restart the WildFly service on the master and then on the subordinates.
[root@server-domain ~]# systemctl restart wildfly
[root@server-subordinate-0 ~]# systemctl restart wildfly
[root@server-subordinate-1 ~]# systemctl restart wildfly
Finally, open the browser and access the application URL: http://marketing.mmagnani.lab
This way our applications respond via the virtual host/VIP. So let's test the high availability of the application.
The request has been redirected to server server-marketing-0. So go to Runtime -> Hosts -> subordinate0 and stop server-marketing-0.
Open the app URL again: http://marketing.mmagnani.lab
This time the request was redirected to node-marketing-1. High availability is working correctly.
Unfortunately, the session was not maintained; this is because we have not configured our cluster yet. So in the next step we will configure the cluster with JGroups/TCPPING.
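One prerequisite worth double-checking before going any further: session replication only applies to applications that declare themselves distributable. Assuming cluster.war is a plain WAR, its WEB-INF/web.xml must contain the distributable element, roughly like this:

<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="4.0">
    <distributable/>
</web-app>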
WildFly and TCPPING
The first step is to set up a new stack in the profile we are using, which in this case is full-ha. To do this, on the master host, edit the domain.xml file and add the new configuration:
[root@server-domain ~]# pwd
/usr/local/wildfly/wildfly-18.0.1.Final
[root@server-domain ~]# vi master/configuration/domain.xml
##around line 1574
<channels default="ee">
    <channel name="ee" stack="tcpping"/>
</channels>
<stacks>
    <stack name="tcpping">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <protocol type="org.jgroups.protocols.TCPPING">
            <property name="initial_hosts">
                ${wildfly.cluster.tcp.initial_hosts}
            </property>
            <property name="port_range">
                ${wildfly.cluster.tcp.port_range}
            </property>
        </protocol>
        <protocol type="MERGE3"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
        <protocol type="FD"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="pbcast.NAKACK2"/>
        <protocol type="UNICAST3"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
    </stack>
##suppressed
Now, in the marketing server group, add a new property with the WildFly instance IPs and ports.
##around line 1924
##10.0.0.67 subordinate0
##10.0.0.66 subordinate1
<server-groups>
    <server-group name="marketing" profile="full-ha">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="full-ha-sockets"/>
        <deployments>
            <deployment name="cluster.war" runtime-name="cluster.war"/>
        </deployments>
        <system-properties>
            <property name="wildfly.cluster.tcp.initial_hosts" value="10.0.0.67[7600],10.0.0.66[7600]"/>
            <property name="wildfly.cluster.tcp.port_range" value="0"/>
        </system-properties>
    </server-group>
##suppressed
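These server-group system properties can also be added from a jboss-cli session connected to the master; a sketch:

[domain@localhost:9990 /] /server-group=marketing/system-property=wildfly.cluster.tcp.initial_hosts:add(value="10.0.0.67[7600],10.0.0.66[7600]")
[domain@localhost:9990 /] /server-group=marketing/system-property=wildfly.cluster.tcp.port_range:add(value="0")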
Add the same properties to the accounting server group as well: since it uses the same full-ha profile, its instances would otherwise fail to resolve these properties at startup. Note that the accounting instances use port 7700 because of the port offset of 100 configured earlier.
##around line 1934
##10.0.0.67 subordinate0
##10.0.0.66 subordinate1
<server-group name="accounting" profile="full-ha">
    <jvm name="default">
        <heap size="64m" max-size="512m"/>
    </jvm>
    <socket-binding-group ref="full-ha-sockets"/>
    <system-properties>
        <property name="wildfly.cluster.tcp.initial_hosts" value="10.0.0.67[7700],10.0.0.66[7700]"/>
        <property name="wildfly.cluster.tcp.port_range" value="0"/>
    </system-properties>
</server-group>
##suppressed
For TCPPING to work correctly, you will need to add a private interface to the subordinates. So let's start by editing the host-slave.xml file on subordinate 0:
[root@server-subordinate-0 ~]# pwd
/usr/local/wildfly/wildfly-18.0.1.Final
[root@server-subordinate-0 ~]# vi subordinate0/configuration/host-slave.xml
##around line 78
<interface name="private">
    <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
</interface>
##suppressed
Then edit the domain.conf file and add the address for this interface to the JAVA_OPTS variable:
[root@server-subordinate-0 ~]# vi /usr/local/wildfly/wildfly-18.0.1.Final/bin/domain.conf
##around line 50
if [ "x$java_opts" = "x" ]; then
java_opts="-xms64m -xmx512m -xx:maxmetaspacesize=256m -djava.net.preferipv4stack=true"
java_opts="$java_opts -djboss.modules.system.pkgs=$jboss_modules_system_pkgs -djava.awt.headless=true"
#edit this line
java_opts="$java_opts -djboss.domain.base.dir=/usr/local/wildfly/wildfly-18.0.1.final/slave0 -djboss.host.default.config=host-slave.xml -djboss.domain.master.address=10.0.0.68 -djboss.bind.address=10.0.0.67 -djboss.bind.address.private=10.0.0.67"
else
##suppressed
Now perform the same procedure for subordinate 1:
[root@server-subordinate-1 ~]# pwd
/usr/local/wildfly/wildfly-18.0.1.Final
[root@server-subordinate-1 ~]# vi subordinate1/configuration/host-slave.xml
##around line 78
<interface name="private">
    <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
</interface>
##suppressed
Edit the domain.conf file and add the address for this interface to the JAVA_OPTS variable:
[root@server-subordinate-1 ~]# vi /usr/local/wildfly/wildfly-18.0.1.Final/bin/domain.conf
##around line 50
if [ "x$java_opts" = "x" ]; then
java_opts="-xms64m -xmx512m -xx:maxmetaspacesize=256m -djava.net.preferipv4stack=true"
java_opts="$java_opts -djboss.modules.system.pkgs=$jboss_modules_system_pkgs -djava.awt.headless=true"
#edit this line
java_opts="$java_opts -djboss.domain.base.dir=/usr/local/wildfly/wildfly-18.0.1.final/subordinate0 -djboss.host.default.config=host-slave.xml -djboss.domain.master.address=10.0.0.68 -djboss.bind.address=10.0.0.66 -djboss.bind.address.private=10.0.0.66"
else
##suppressed
Restart the WildFly service on the master and then on the subordinates:
[root@server-domain ~]# systemctl restart wildfly
[root@server-subordinate-0 ~]# systemctl restart wildfly
[root@server-subordinate-1 ~]# systemctl restart wildfly
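Once the hosts are back up, you can quickly confirm on each subordinate that the JGroups TCP stack is listening on the private address; a minimal check, assuming the default jgroups-tcp port of 7600 (7700 for the accounting servers because of their port offset) and that the clustering services have started, which only happens once a distributable application is deployed on that server group:

[root@server-subordinate-0 ~]# ss -tlnp | grep -E '7600|7700'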
Now, in the server logs, you should see both instances forming a new cluster:
[root@server-subordinate-1 ~]# pwd
/usr/local/wildfly/wildfly-18.0.1.Final
[root@server-subordinate-1 ~]# tailf subordinate1/servers/server-marketing-1/log/server.log
2019-12-22 18:31:52,673 INFO [org.infinispan.CLUSTER] (MSC service thread 1-1) ISPN000094: Received new cluster view for channel ee: [node-marketing-0|1] (2) [node-marketing-0, node-marketing-1]
Open the browser and access the application URL: http://marketing.mmagnani.lab
The request has been redirected to node node-marketing-1; note also that the request number stored in the session is 8. So go to Runtime -> Hosts -> subordinate1 and stop server-marketing-1.
Refresh the page and a new request will be made: http://marketing.mmagnani.lab
The request has now been redirected to node-marketing-0 and the session counter is 9. Congratulations! Session replication worked successfully.
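The same failover test can be reproduced from the command line with a cookie jar, which makes it easy to confirm that the JSESSIONID survives the node switch; a rough sketch, assuming the counter page is served at the application root (as configured via default-web-module) and that curl is installed:

## first request: store the session cookie
[root@apache-httpd-01 ~]# curl -s -c /tmp/cookies.txt http://marketing.mmagnani.lab/
## replay the same session a few times, stop server-marketing-1, then run it again
[root@apache-httpd-01 ~]# curl -s -b /tmp/cookies.txt http://marketing.mmagnani.lab/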
In the next article, we'll examine monitoring this environment using Grafana, Prometheus, and Alertmanager.