Auto-Scalable Payara Micro Cluster for Java EE Microservices
An advanced package has been developed to help make Payara Micro Cluster integration with Jelastic Platform smooth and quick.
Payara Micro is a minimalistic application server based on GlassFish 4.1 with Java EE 7 support. With a footprint of just 70 MB, this server is provisioned with Hazelcast for automatic clustering and ships with JCache as an embedded key-value store with a Java API. Management is simple: WAR packages can be run directly from the command line. Payara Micro is optimized for microservices and modern container-based infrastructure; it can easily run inside the cloud, providing automatic clustering for large-scale Java EE applications.
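To illustrate how lightweight this is, launching a clustered instance takes a single command (a sketch; the jar and WAR file names below are placeholders, and the options shown are standard Payara Micro flags):

```shell
# Launch a Payara Micro instance and deploy an application in one step.
# payara-micro.jar and myapp.war are placeholder file names.
java -jar payara-micro.jar --deploy myapp.war --autoBindHttp

# Start the same command on another host in the same network and the new
# instance joins the Hazelcast cluster automatically -- no extra config.
java -jar payara-micro.jar --deploy myapp.war --autoBindHttp
```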
To make Payara Micro Cluster integration with the Jelastic Platform smooth and quick, a dedicated pre-packaged solution has been developed. It automates all the required installation steps, allowing you to deploy and launch a fully functional and highly scalable Payara Micro Cluster in a matter of clicks.
Jelastic Payara Micro Cluster Specifics
The advanced Payara Micro Cluster package contains the minimal required number of nodes (containers) by default but includes all the necessary tools to scale horizontally as incoming traffic grows.
This solution is built on top of Docker containers, leveraging the following images:
- jelastic/payara-micro-cluster: Payara Micro application server (one instance by default, automatically scaled out horizontally when the load rises).
- jelastic/haproxy-managed-lb: HAProxy load balancer, which automatically adds and removes application servers within its load balancing configs as their number changes.
- jelastic/storage: dedicated data storage container for custom data; by default, it contains a simple load simulation application to test the cluster's main scaling capabilities.
As a basic benefit of hosting on the Jelastic Cloud Platform, all containers in this package are preconfigured to scale vertically (up to 16 cloudlets by default, which stands for 6,400 MHz of CPU and 2,048 MiB of RAM) based on the load. This lets you use resources wisely, consuming only as much as is currently required.
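The arithmetic behind those figures is simple: one Jelastic cloudlet equals 400 MHz of CPU and 128 MiB of RAM, so the 16-cloudlet ceiling works out as:

```shell
# One cloudlet = 400 MHz CPU + 128 MiB RAM (Jelastic's resource unit).
CLOUDLETS=16
echo "$((CLOUDLETS * 400)) MHz"   # CPU ceiling:  6400 MHz
echo "$((CLOUDLETS * 128)) MiB"   # RAM ceiling:  2048 MiB
```

The same unit explains the cloudlet-to-MHz figures quoted in the load tests below.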
Additionally, all servers within a cluster have special load alerts enabled to automatically send notifications when container capacity is almost at its limit. This indicates that more resources should be allocated to a server or that it should be scaled horizontally.
All of these predefined settings can, of course, be tuned to your needs via the dashboard.
Quick Deploy of Payara Micro Cluster
Installation of the Payara Micro Cluster with all additional scaling configurations is completely automated. All you need to do is import the corresponding project from the Jelastic JPS Collection on GitHub.
1. Import the URL
Click the Import button on the top pane of the dashboard and insert the JPS link in the URL tab.
Click Import at the bottom of the frame to proceed.
2. Specify Parameters
In the confirmation window that appears, specify some general environment parameters for your cluster:
- Environment: the desired environment name, to be used as an internal domain.
- Display Name: an alias displayed for the environment within the dashboard.
- Region: the preferred hardware region (if available).
Click Install to proceed.
Wait a few minutes for Jelastic to automatically create all of the required instances and configure the cluster for future automation.
That’s it! A Payara Micro Cluster with out-of-the-box automated scaling is successfully deployed and ready for work. Now, let’s check how it copes with traffic of varying intensity.
Payara Cluster Load Testing and Scaling
The advanced Payara Micro Cluster package is provisioned with a special built-in load testing application, which includes separate options for RAM and CPU loading. This tool physically resides on the storage node and is automatically mounted to the app server layer upon cluster installation.
With its help, you can explore the cluster's scaling capabilities and check its behavior under real conditions of changing load. To open the load test, access the following link using the appropriate domain name (either an internal or a custom one).
Depending on the resource type you’d like to simulate the load for (i.e. RAM or CPU), fill in the required parameters:
- Load: the amount of resources to generate (in % for RAM and threads for CPU).
- Duration: testing time (in seconds).
Click Run for the appropriate section.
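To get a feel for what the CPU option does under the hood, here is a rough shell sketch of the same idea (an illustration only, not the tool's actual code): spin up N busy workers for D seconds.

```shell
#!/bin/sh
# Illustrative only: busy-loop "threads" (background processes) for a fixed time.
THREADS=2      # corresponds to the Load parameter for CPU
DURATION=2     # corresponds to the Duration parameter, in seconds

for i in $(seq 1 "$THREADS"); do
  (
    end=$(( $(date +%s) + DURATION ))
    # Spin until the deadline, keeping one core busy.
    while [ "$(date +%s)" -lt "$end" ]; do :; done
  ) &
done
wait
echo "load simulation finished"
```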
1. Launch the Tests
So, let’s launch the tests one by one (we’ll use the settings from the image above) and explore the effect they cause:
The CPU test will be run in just two threads to simulate the regular conditions of serving a few users and check automatic vertical scaling. The average load in this case equals 11 cloudlets, which corresponds to approximately 4,400 MHz of CPU consumption.
For the RAM test, we’ll apply a moderate load (e.g. 80% of allocated memory) to increase the number of Payara Micro instances by triggering automatic horizontal scaling.
When loading is finished, you’ll see the Calling GC string appear, which means that garbage collection is being initiated as part of the test. This JVM memory management mechanism automatically detects the amount of RAM that is no longer needed (in our case, the memory used to handle the load simulation) and frees it to reduce spend.
2. See the Results
To see the results, refer to the Statistics dashboard section for your Payara cluster and follow the load changes visually, in the form of graphs. Keep in mind that they are built from average values over the chosen interval, so diagram peaks can differ slightly from the test's text output.
The CPU load graph (on the left) generally corresponds to the consumption values received during the test. Such a load can easily be handled by a single Payara Micro container.
As for the green RAM graph, its maximum value reaches about 1,500 MiB. This fires the corresponding predefined memory-based horizontal scaling trigger, so the number of Payara Micro containers is automatically increased.
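The trigger itself is a simple threshold comparison; a minimal sketch of the decision logic (the 70% threshold here is a hypothetical example, not Jelastic's exact default):

```shell
# Hypothetical scale-out check: compare current RAM usage against a threshold.
RAM_USED_MIB=1500    # roughly the peak observed in our test
RAM_TOTAL_MIB=2048   # 16 cloudlets x 128 MiB
THRESHOLD_PCT=70     # hypothetical trigger value; tunable in the dashboard

USAGE_PCT=$(( RAM_USED_MIB * 100 / RAM_TOTAL_MIB ))
echo "RAM usage: ${USAGE_PCT}%"
if [ "$USAGE_PCT" -gt "$THRESHOLD_PCT" ]; then
  echo "scale out: add a Payara Micro node"
fi
```

At ~1,500 MiB of 2,048 MiB, usage is around 73%, which is why a moderate 80% load is enough to push the cluster past a typical trigger.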
Tip: In the Environment Settings > Monitoring > Auto Horizontal Scaling section, you can tune any of the existing triggers or add new scaling conditions.
3. Track the Results
To track the results of horizontal scaling while running the tests, you can use the following built-in tools in the Jelastic dashboard.
The Monitoring > Events History section within Environment Settings stores details on scaling operations that were automatically executed based on the preconfigured scaling triggers.
As you can see, the cluster scaled out to two Payara Micro nodes during our test and returned to the single app server topology when the load fell.
The Log section allows you to track what actually happens in your Payara Micro Cluster during such scaling in and out. In particular, you can refer to:
- lb_manager.log on the HAProxy node, to see new hosts being automatically added to and removed from the load balancer routing.
- run.log within the application server layer, to track ongoing cluster activities.
Also, you’ll be notified of all scaling changes via corresponding email messages.
This way, you can be sure that a Java application within your Payara cluster will handle varying amounts of incoming traffic without you having to keep an eye on the changes. Moreover, this is done with high efficiency: new resources and servers are added automatically on load spikes and redundant ones are removed during periods of inactivity, freeing you from any manual reconfiguration.
Have any questions, or comments for further improvements? Need some additional assistance with package installation or hosting? Let us know with a comment below, or ask our technical experts for help at Stack Overflow.
Published at DZone with permission of Tetiana Markova, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.