Glassfish 4 - Performance Tuning, Monitoring and Troubleshooting
This is the third blog in C2B2 series looking at Glassfish 4.
The previous two are available here:
Part 1 - Getting started with Glassfish 4
In this blog I will look at 3 areas:
- Performance Tuning, where I will look at some of the areas to look at when setting up a system for production.
- Monitoring, where I will look at some of the tools we use for monitoring a system both during performance testing and tuning and once a system is up and running.
- Troubleshooting, where I will look at some of the tools you can use to help diagnose and detect performance issues.
Glassfish out of the box (as with most app servers) is optimised for development purposes. Developers want the ability to deploy and undeploy continuously, create and remove resources, debug, etc. However, this configuration is not suitable for a production system.
When configuring any application server you have to take into account what you are trying to achieve and what is best suited for the applications you intend to run. One size does not fit all!
It can be a long and complex process and I'm afraid I can't give you a one-stop solution. However, I can give you some pointers to some of the things you can do to prepare your system for production.
So, what kind of things do we look at when performance tuning a Glassfish system? Some of the most common are:
- JVM Settings
- Garbage Collection
- Glassfish Settings
The standard JVM defaults are not suitable for a production system. One of the simplest changes that can be made is to use the -server flag, rather than the default -client. Although the Server and Client VMs are similar, the Server VM has been specially tuned to maximise peak operating speed. It is intended for executing long-running server applications, which need the fastest possible operating speed more than a fast start-up time or smaller runtime memory footprint.
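In GlassFish these options live under the configuration's JVM Settings and can be scripted with asadmin. A sketch, assuming the default domain is running (JVM option changes only take effect after a restart):

```shell
# Replace the development default (-client) with the Server VM
asadmin delete-jvm-options -client
asadmin create-jvm-options -server
asadmin restart-domain
```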
Allocate more memory to the JVM by modifying the value of the -Xmx flag. How much depends on the size and complexity of your enterprise application and how much memory you have available. In addition we also want to make sure we allocate all of the memory on startup. This is done with the -Xms flag.
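For example (the sizes here are placeholders, not recommendations):

```shell
# Remove the existing maximum first - 512m is the usual GlassFish default
asadmin delete-jvm-options -Xmx512m
asadmin create-jvm-options -Xmx2g
asadmin create-jvm-options -Xms2g    # allocate the full heap at startup
```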
We set the minimum and maximum perm gen to the same value in order to avoid allocation failures & subsequent full garbage collections.
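For example (GlassFish 4 runs on Java 7, which still has a permanent generation; note that colons inside a JVM option must be escaped for asadmin):

```shell
asadmin create-jvm-options "-XX\:PermSize=192m"
asadmin create-jvm-options "-XX\:MaxPermSize=192m"
```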
There are a number of settings that can be tweaked regarding Garbage Collection. I'm not going to cover GC tuning as that is a whole topic all of its own, but here are some of the settings we would always recommend regarding GC in a production environment:
Firstly we want to ensure we log all Garbage Collection information as this can prove extremely useful in diagnosing issues.
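The basic flag for this is:

```shell
asadmin create-jvm-options "-verbose\:gc"
```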
Next we want to make sure we log GC information to a file. This will make it easier to separate the GC from other details in the log files.
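For example (the path is only an example; ${com.sun.aas.instanceRoot} is expanded by GlassFish to the instance directory, so single quotes stop the shell expanding it first):

```shell
asadmin create-jvm-options '-Xloggc\:${com.sun.aas.instanceRoot}/logs/gc.log'
```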
We also want to ensure we have as much detail as possible.
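Detailed per-collection output comes from:

```shell
asadmin create-jvm-options "-XX\:+PrintGCDetails"
```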
and that the information is timestamped for easier diagnosis of long running errors and to be able to ascertain what normal levels are over time.
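-XX:+PrintGCTimeStamps gives seconds-since-startup; for wall-clock timestamps use:

```shell
asadmin create-jvm-options "-XX\:+PrintGCDateStamps"
```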
Finally, we want to ensure that developers aren't making explicit calls to System.gc(). Hopefully they aren't anyway, and if they are you need to look into why (forcing major collections like this is a bad idea), but this setting will disable such calls just in case.
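The flag that turns System.gc() calls into no-ops is:

```shell
asadmin create-jvm-options "-XX\:+DisableExplicitGC"
```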
Heap dumps can be extremely useful for diagnosing memory issues. There are two settings we would definitely recommend. These tell the JVM to generate a heap dump when an allocation from the Java heap or the permanent generation cannot be satisfied. There is no overhead in running with these options but they can be useful for production systems where OutOfMemoryErrors can take a long time to surface.
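The two settings are (the dump path is optional; without it the dump lands in the JVM's working directory):

```shell
asadmin create-jvm-options "-XX\:+HeapDumpOnOutOfMemoryError"
asadmin create-jvm-options '-XX\:HeapDumpPath=${com.sun.aas.instanceRoot}/logs'
```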
There are three ways to configure Glassfish:
- Through the admin console
- By directly editing the config files
- Using the asadmin tool
Although the admin console is often the easiest way to make changes, we'd recommend scripting all changes where possible so that you have a repeatable production server build. You should also keep copies of all config files under configuration control so you know you have a working copy and can roll back to a previous version when needed.
Turn off development features
Turn off auto-deploy and dynamic application reloading. Both of these features are great for development, but can affect performance.
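A sketch using asadmin set. The dotted attribute names below assume the default server-config; if your layout differs, list the real names first with asadmin get "configs.config.server-config.admin-service.*":

```shell
asadmin set configs.config.server-config.admin-service.das-config.autodeploy-enabled=false
asadmin set configs.config.server-config.admin-service.das-config.dynamic-reload-enabled=false
```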
Configure the JSP servlet not to check JSP files for changes on every request.
Also, set the parameter genStrAsCharArray to true. This ensures the static text in generated JSP servlets is declared as char arrays, which have less memory overhead than Strings.
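Both are init-params of the JSP servlet in &lt;domain-dir&gt;/config/default-web.xml. A sketch of the relevant fragment (keep the servlet-name and servlet-class your file already declares, and just add the two init-params):

```xml
<servlet>
    <servlet-name>jsp</servlet-name>
    <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
    <init-param>
        <!-- don't check JSP files for changes on every request -->
        <param-name>development</param-name>
        <param-value>false</param-value>
    </init-param>
    <init-param>
        <!-- emit JSP template text as static char arrays -->
        <param-name>genStrAsCharArray</param-name>
        <param-value>true</param-value>
    </init-param>
</servlet>
```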
These changes will mean you cannot change JSP pages on your production server without redeploying the application, but on a production system this is generally what you want.
Acceptor Threads and Request Threads
There are two main thread values we would recommend setting, acceptor threads and request threads.
Acceptor threads are used to accept new connections to the server and to schedule existing connections when a new request comes in. Set this value equal to the number of CPU cores in your server. So, if you have two quad core CPUs, this value should be set to eight.
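For example, for the eight-core case (the dotted name assumes the default "tcp" transport used by http-listener-1; verify with asadmin get if yours differs):

```shell
asadmin set configs.config.server-config.network-config.transports.transport.tcp.acceptor-threads=8
```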
Request threads run HTTP requests. You want enough of these to keep the machine busy, but not so many that they compete for CPU resources which would cause your throughput to suffer greatly.
By default, GlassFish does not tell the client to cache static resources. It is recommended to cache static resources, like CSS files and images particularly if you have a lot of them.
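Static resource caching is handled by the HTTP file cache, which is enabled per listener. A sketch for the default listener:

```shell
asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.file-cache.enabled=true
```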
The thread pool's minimum and maximum sizes should be set to the same value. Specifying the same value allows GlassFish to use a slightly more optimised thread pool, and this configuration should be considered unless the load on the server varies significantly. Increasing the pool size can reduce HTTP response latency, but only up to a point.
What to set these values to depends heavily on what your application is doing. In order to get this value right you should look to incrementally increase the thread count and to monitor performance after each incremental increase. When performance stops improving stop increasing the thread count.
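A sketch for the default HTTP thread pool (100 threads is purely a starting point for the incremental testing described above):

```shell
asadmin set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.min-thread-pool-size=100
asadmin set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size=100
```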
You should look to turn off as much logging as possible. In a production environment we would generally recommend logging at WARN and above.
This includes the logging done by Glassfish as well as your own applications.
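For example, with set-log-levels (com.example.myapp is a placeholder for your own application's logger name):

```shell
asadmin set-log-levels javax.enterprise.system=WARNING
asadmin set-log-levels com.example.myapp=WARNING
asadmin list-log-levels    # check the result
```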
The fewer monitoring options that are enabled, the better the server's performance.
All Glassfish monitoring is turned off by default. Switching monitoring on can be very useful when diagnosing issues, and during initial system testing and performance tuning it lets you see the effect of each change.
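For example (monitoring levels are OFF, LOW and HIGH; this enable-monitoring syntax is from GlassFish 3.1 onwards):

```shell
asadmin enable-monitoring --modules jvm=HIGH:thread-pool=HIGH:web-container=HIGH
asadmin disable-monitoring    # switch it all back off when finished
```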
What to monitor
Used Heap Size - Compare this number with the maximum allowed heap size to see what portion of the heap is in use. If the used heap size nears the max heap size, the garbage collector urgently attempts to free memory and this is something that should be avoided where possible.
Number of loaded classes - Useful for detecting performance and application development trends.
JVM Threads - Important for performance tuning and for troubleshooting JVM crashes. Some of the most essential indicators are the current active JVM thread count and the peak values.
Thread pools - You should compare a pool's current usage with the maximum number allowed. Problems can start to occur when the current count nears the max threads number.
JVM Tools for Monitoring
The following is a list of a few of the tools that come with the JDK that are useful for monitoring information from the JVM.
- jstat - This tool displays performance statistics regarding usage of the perm gen, new gen and old gen. It also provides class loading and compilation statistics.
- jmap - Gives you visibility of memory usage, can produce a class histogram and can dump the memory to a file.
- jconsole/jvisualvm - These tools can display all the previously mentioned monitoring indicators and graph them over time. This allows you to spot trends and to get a better overall picture of your normal performance levels and changes over time.
Note - These should NOT be left running permanently on a production system!
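Typical invocations look like this (12345 is a placeholder PID; finding the real one is covered in the troubleshooting section below):

```shell
jstat -gcutil 12345 5s    # generation utilisation and GC counts, sampled every 5 seconds
jmap -histo 12345         # histogram of object counts and sizes by class
jvisualvm                 # GUI - attaches to any local JVM it discovers
```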
Unfortunately, no matter how much tuning and testing you do, all systems WILL go wrong from time to time.
So, what should you do when your production server bursts into flames?
Well, in that situation you should call the fire service but for more general problems:
- Gather data - get as much data as you can, there is no such thing as too much!
- Analyse that data - Data is worthless when you don’t know what it means. Visualise where possible – graphs and charts reveal trends and patterns over time
- Make educated decisions - Only make decisions based on data. If you go with your “gut instinct” and what “feels right” you will probably make things worse
First up, for most of the JVM tools you will need the process ID of the server. You can get this information in various ways. Two of the simplest are:
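The first is the JDK's jps tool:

```shell
jps -v
```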
This will list all current running Java processes. The -v flag is for verbose output.
ps aux | grep glassfish
The ps command with the options aux will show all processes from all users. This will display a LOT of information, so pipe it through grep to filter for the glassfish process.
As mentioned earlier the jstat tool can be used for gathering info on JVM performance. Other useful tools include:
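One is jstack, run against the server's PID (12345 is a placeholder):

```shell
jstack 12345 > thread-dump.txt
```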
This will produce thread stack dumps for all threads running in the JVM. This can be very useful for discovering stuck threads or long running threads.
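Another is jmap with its -dump option (the output path and PID are placeholders):

```shell
jmap -dump:format=b,file=/tmp/glassfish-heap.hprof 12345
```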
This tool can be used to create a heap dump. It outputs to a file in .hprof format which can be read by a number of analysis tools
jrcmd and jrmc
These tools are only available with the jRockit JDK. I won't go into any detail here as I have previously blogged about jrcmd here:
and my colleague has blogged about jrmc here:
The Glassfish asadmin tool has a built in command which will provide similar functionality to the above tools but without the need for the PID.
asadmin generate-jvm-report --type=summary|thread|class|memory
Analysing the data
There are various tools available for analysing performance data. The following are some of the most useful:
IBM Support Assistant is a free troubleshooting application that helps you research, analyse and resolve problems using various support features and tools. It contains a Garbage Collection and Memory Visualiser as well as a Heap Analyser. It will also provide a report telling you where issues might exist, listing red flags with advice on what to change in your applications.
jRockit Mission Control is a very powerful tool which can be used to monitor live systems or analyse historical data in the form of flight recordings.
JVisualVM GCViewer is an optional plugin for jVisualVM which can transform a tool which is already great for live monitoring into a powerful analysis tool
jhat is a Java Heap Analysis Tool. It processes heap dump files and produces HTML reports. There are better analysis tools, but it’s always freely available if you’re running a JDK.
There are many open source and freely available tools and projects to help you, here we’ve covered some very common and widely used ones, but our list is by no means exhaustive!
Remember, Glassfish out of the box (or out of the zip file!) is not designed to be run 'as is'. You should also note that there is no ideal configuration that will work for all systems. It will take time and effort to get the best configuration for what you require. Hopefully in this blog I have given you some useful guidelines and pointers.
You should take time to work out what you want in terms of services, then strip back your config to match that.
You should test, test and test again to ensure that your configuration matches the requirements with regards to the applications you will be running on your server.
You should tune your JVM to ensure you have the best settings for your particular configuration.
You should ensure you have monitoring in place to keep a check on everything and ensure that if your server does crash you have as much information as possible at hand to diagnose what caused it.
The next blog in this series looks at Migrating to Glassfish 4: http://blog.c2b2.co.uk/2013/07/glassfish-4-migrating-to-glassfish.html
Published at DZone with permission of Andy Overton. See the original article here.