Configure a GlassFish Cluster With Automatic Load Balancing
This tutorial guides you through setting up a GlassFish cluster using Jelastic and Docker images while automating your load balancing.
Configuring application servers may not be as trivial as it seems. Some configuration commands and parameters do not work the way users expect. To make matters worse, configuring clusters not only involves tuning parameters, but also requires you to deal with server availability, SSH configuration, operating the application server on every node, and so on. Below we explain how to easily create a GlassFish cluster with Docker and Jelastic, using solutions such as cloudscripts.
For this example guide, we've chosen the Oracle GlassFish application server, as it offers the reference implementation of Java EE 7 and a centralized way to operate clusters, applications, and configurations without the need to manage every cluster node individually. According to the official GlassFish documentation, the administration architecture is as follows.
The Domain Administration Server (DAS) is the administration node of a cluster; it communicates with cluster nodes over DCOM (for GlassFish clusters on Windows nodes) or over SSH (for Linux-, Solaris-, and macOS-based cluster nodes). There is also a third option, named CONFIG, intended for managing each node locally. In order to centralize administration, the GlassFish Docker image uses SSH for communication between the DAS and the other GlassFish worker nodes. Now, let's describe how the Docker images were prepared.
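Outside of Docker, this SSH-based administration is typically wired up with asadmin node commands. A minimal sketch follows; the host name, SSH user, install directory, and node names are hypothetical placeholders, not values from this tutorial:

```bash
# Run from the DAS: distribute the SSH key to a worker host, then register
# it as an SSH node so the DAS can manage it remotely. Host name, user, and
# install directory are illustrative placeholders.
asadmin setup-ssh --sshuser root worker1.example.com
asadmin create-node-ssh --nodehost worker1.example.com \
        --sshuser root \
        --installdir /opt/glassfish4 \
        node-worker1

# Alternatively, a CONFIG node is created for purely local management:
asadmin create-node-config --nodehost worker1.example.com node-config1
```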
Docker Images
for this demo, we’ll use two docker images:
- one glassfish docker image ready to form centralized clusters, hosted in this repository .
- one haproxy docker image, provided by jelastic, to work as a load balancer.
The same GlassFish Docker image can produce containers performing both the DAS and cluster node roles. This is made possible by several customizations of the Docker image, originally created by Bruno Borges as his GlassFish 4.1.1 image. We customized the image to move from Oracle Linux to Debian, add an OpenSSH installation step, and define several configurations in the image provisioning and startup processes.
The containers need to communicate with each other through SSH, so installing an SSH server is mandatory (we've used OpenSSH in this case). Additionally, the entries PubkeyAuthentication, StrictModes, AuthorizedKeysFile, PermitRootLogin, and IdentityFile in /etc/ssh/sshd_config had to be set properly. Moreover, the SSH host keys are scanned during the startup process with ssh-keyscan, so that SSH does not get stuck waiting for the key fingerprint to be accepted. Once SSH was properly configured, we were able to automate the cluster configuration.
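As an illustration, the SSH preparation boils down to steps of this kind; this is a sketch of the idea, not a copy of the image's scripts, and the link alias das is a hypothetical name:

```bash
# Illustrative sketch: install OpenSSH, enable key-based root login, and
# pre-accept the DAS host key so asadmin's SSH calls never block on a
# fingerprint prompt.
apt-get update && apt-get install -y openssh-server

# Adjust the sshd_config entries mentioned above.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/'           /etc/ssh/sshd_config
sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/^#\?StrictModes.*/StrictModes no/'                    /etc/ssh/sshd_config
service ssh start

# Pre-populate known_hosts for the linked DAS container ("das" is a
# hypothetical link alias) to skip interactive fingerprint acceptance.
mkdir -p /root/.ssh
ssh-keyscan -H das >> /root/.ssh/known_hosts
```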
Cluster configuration in GlassFish 4.1.1 is not a trivial task to automate and requires some level of expertise. This Docker image can work as a DAS by setting -e 'das=true' in the docker run command. If a container is run with this parameter set, it starts the domain, creates a cluster named cluster1, stops the domain, and starts it again with the -v parameter. If a GlassFish container has a DAS container linked, it assumes the cluster node role: it creates a local instance with the asadmin create-local-instance command, stops the domain, updates the node definition with asadmin update-node-ssh on the DAS node (so that the DAS converts the caller into an SSH cluster node), and finally starts the instance with the nadmin start-instance command. For a better understanding, please read the run.sh source code in the GlassFish Docker image repository.
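A minimal sketch of how the two roles could be started with plain Docker; the image name gf-cluster/glassfish and the link alias das are hypothetical, and the das=true variable is the switch described above:

```bash
# Hypothetical image name; the real image lives in the repository linked above.
IMAGE=gf-cluster/glassfish

# 1. Start the administration node (DAS): it creates and starts cluster1.
docker run -d --name gf-das -e 'das=true' -p 4848:4848 $IMAGE

# 2. Start a worker node linked to the DAS; on boot it registers itself as
#    an SSH node of cluster1 via create-local-instance / update-node-ssh.
docker run -d --name gf-node1 --link gf-das:das $IMAGE
```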
Jelastic offers an HAProxy Docker image ready to be used in Jelastic environments. Containers from the jelastic/haproxy-managed-lb Docker image can add or remove nodes from the load balancer configuration by running the shell script /root/lb_manager.sh inside the container with one of the following parameters:
- /root/lb_manager.sh --addhosts [container LAN IP]
- /root/lb_manager.sh --removehost [container LAN IP]
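For example, from the Docker host the add operation could be triggered like this; the container name haproxy-lb and the IP address are placeholders:

```bash
# Register a freshly started GlassFish worker with the load balancer.
# The container name and the worker's LAN IP are illustrative only.
docker exec haproxy-lb /root/lb_manager.sh --addhosts 10.100.1.15
```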
Running the GlassFish Cluster
For this demo, we used Jelastic as a Docker infrastructure and created a cloudscript for this setup, available in this repository. We also used the clusterjsp.ear application to test whether the cluster and the load balancer were working as expected.
Before using the JSON file from https://github.com/jelastic-jps/glassfish-cluster, let's check what exactly this environment should start for us. In our case, we compose this cloudscript file, also called a JPS or Jelastic Packaging Standard manifest, that describes the topology Jelastic must create, what should be installed, what should be started, and the responses to the events triggered by Jelastic administration, such as scale out and scale in. In the topology section, there are three GlassFish nodes, each one in its own group, and an HAProxy node. After this section, there is an onInstall JavaScript object calling some actions. Lastly, there is a procedures object, which defines the routines to run during the installation process; the cloudscript uses a shell script to add GlassFish nodes to the cluster.
In this cloudscript, there are two events that update the load balancer's configuration (onAfterScaleOut and onAfterScaleIn) and one event that updates the cluster (onBeforeScaleIn). For an onAfterScaleOut event, a procedure is called to add the new nodes to the load balancer, using the script inside the HAProxy container. The onAfterScaleIn event calls /root/lb_manager.sh directly inside a for loop, passing all the nodes from event.response.nodes. For an onBeforeScaleIn event, a procedure is called to remove the GlassFish nodes from the cluster.
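In shell terms, the scale-in handling amounts to a loop like the one below; the IP list stands in for what the JPS reads from event.response.nodes, and the container name is a placeholder:

```bash
# Approximation of the scale-in handling in plain shell: every node reported
# by the event is dropped from the HAProxy configuration.
REMOVED_NODES="10.100.1.16 10.100.1.17"   # stand-in for event.response.nodes
for ip in $REMOVED_NODES; do
    docker exec haproxy-lb /root/lb_manager.sh --removehost "$ip"
done
```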
Now that you have a better understanding of the proposed JSON file, let's import it into Jelastic to create an environment. You can access the GlassFish DAS administration console by getting its HTTP URL (click the Open in Browser button to the right of the DAS node name) and then connecting over HTTPS on port 4848. To test whether the load balancer works, deploy the clusterjsp application to cluster1 and set the availability option. When you access the load balancer container URL, you should see a GlassFish web page indicating that it is running. Append the context of the deployed application to reach the application itself. After one or two page reloads, you'll see that the node being accessed varies.
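If you prefer the command line over the admin console, the same deployment can be expressed with asadmin; a minimal sketch, assuming clusterjsp.ear sits in the current directory:

```bash
# Deploy the test application to cluster1 with availability enabled,
# mirroring the console steps described in the next section.
asadmin deploy --target cluster1 --availabilityenabled=true clusterjsp.ear
```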
Create Clustered Environment
You can import the environment by following the steps below:
1. Access the Jelastic console and click Import.
2. Select the URL tab and set this URL from the cluster JSON.
3. Set the name for the environment and click Install.
4. Finally, as we can see, the cluster is created.
Deploy and Configure the Application
After creating the cluster, we can deploy an application to it and verify that the cluster is working correctly and the load balancer is operating as it should.
1. At the GlassFish DAS node row, click the Open in Browser button located to the right of the node's name.
2. Once the browser window is open, it shows the default GlassFish web page. Change the browser URL to https://<environment_domain>:4848.
3. Enter the GlassFish console with the user admin and the password glassfish.
4. Go to the Applications option and deploy the clusterjsp.ear application:
   - Download clusterjsp.ear and choose it as the packaged file to be uploaded to the server.
   - Check to have availability enabled.
   - Finally, set cluster1 as the application target and click OK to proceed.
5. Now open the HAProxy node in the browser and add /clusterjsp at the end of the URL.
Every time you refresh the page, the executed server IP address changes, indicating that the load balancer is working; the quick check below shows the same behavior from the command line.
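A quick command-line check, assuming your environment's domain is substituted below and that the clusterjsp page reports the executed server details as described above:

```bash
# Request the application through the load balancer several times; the
# executed server details in the response should alternate between the
# GlassFish instances. Replace the domain with your environment's own.
ENV_DOMAIN="<environment_domain>"
for i in 1 2 3 4 5; do
    curl -s "http://${ENV_DOMAIN}/clusterjsp/" | grep -i "executed"
done
```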
Conclusion and Future Work
Docker images and scripting together do a great job of automating environment creation. Docker containers, which run in Jelastic the same way they run on any other infrastructure, and scripting, which captures the correct topology of the servers involved in the environment, are the bread and butter of any Docker environment used on the Jelastic infrastructure. Such environments benefit from Jelastic's advantages in terms of resource management, and they can be migrated from other platforms to Jelastic, and away again if needed.
Published at DZone with permission of Andre Tadeu de Carvalho, DZone MVB. See the original article here.