Why Many Java Performance Tests are Wrong
A lot of ‘performance tests’ are posted online lately. Many of these tests are implemented and executed in a way that completely ignores the inner workings of the Java VM. In this post you can find some basic knowledge to improve your own performance testing. Remember, I am not a professional performance tester, so put your tips in the comments!
An example
For example, some days ago a ‘performance test’ comparing while loops, iterators and for loops was posted. That test is flawed and its results are inaccurate. I will use it as an example here, but many other tests suffer from the same problems.
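The original code is not reproduced here, but a naive micro-benchmark of this kind typically looks something like the sketch below (class and variable names are illustrative). Note that each loop is timed exactly once, back to back, within the same JVM run:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch of a naive loop micro-benchmark.
// Among other problems, it measures each loop only once, in a fixed order,
// and nothing stops the JIT from optimizing the work away.
public class NaiveLoopBenchmark {

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            list.add(i);
        }

        long start = System.currentTimeMillis();
        long sum = 0;
        for (Iterator<Integer> it = list.iterator(); it.hasNext(); ) {
            sum += it.next();
        }
        System.out.println("iterator - elapsed time in milliseconds: "
                + (System.currentTimeMillis() - start));

        start = System.currentTimeMillis();
        sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i);
        }
        System.out.println("for - elapsed time in milliseconds: "
                + (System.currentTimeMillis() - start));

        start = System.currentTimeMillis();
        sum = 0;
        int i = 0;
        while (i < list.size()) {
            sum += list.get(i);
            i++;
        }
        System.out.println("while - elapsed time in milliseconds: "
                + (System.currentTimeMillis() - start));

        System.out.println("checksum: " + sum); // keep the result alive
    }
}
```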
So, let’s execute this test for the first time. It measures the relative performance of a few loop constructs on the Java VM. The first results:
iterator - elapsed time in milliseconds: 78
for - elapsed time in milliseconds: 28
while - elapsed time in milliseconds: 30
Alright, that looks interesting. Let’s change the test a bit. When I reshuffle the code and put the iterator test at the end, I get:
for - elapsed time in milliseconds: 37
while - elapsed time in milliseconds: 28
iterator - elapsed time in milliseconds: 30
Hey, suddenly the for loop is the slowest! That’s weird!
So, when I run the test again, the results should be the same, right?
for - elapsed time in milliseconds: 37
while - elapsed time in milliseconds: 32
iterator - elapsed time in milliseconds: 33
And now the while loop is slower! Why is that?
Getting valid test results is not that easy!
The example above shows that obtaining valid test results can be hard. You have to know something about the Java VM to get more accurate numbers, and you have to prepare a good test environment.
Some tips and tricks
- Quit all other applications. It sounds like a no-brainer, but many people test on systems with music players, RSS feed readers and word processors still running. Background processes reduce the resources available to your program in unpredictable ways. For example, when only a limited amount of memory is left, the system may start swapping memory to disk. That not only hurts your test results, it also makes them non-reproducible.
- Use a dedicated system. Even better than testing on your development machine is using a dedicated test system. Do a clean install of the operating system plus the minimum set of tools you need, and keep the system as clean as possible. If you make an image of the system, you can always restore it to a known state.
- Repeat your tests. A single test result is worthless if you do not know whether it is accurate (as the example above shows). To draw any conclusions from a test, repeat it and use the average result (see the sketch after this list). When the numbers vary too much from run to run, your test is wrong: something in it is not predictable or consistent. Fix the test first.
- Investigate memory usage. If the code under test is memory intensive, the amount of available memory has a large impact on your results. Increase the amount of memory available, buy new memory, or fix the program under test.
- Investigate CPU usage. If the code under test is CPU intensive, try to determine which part of the test uses the most CPU time. If the CPU graphs fluctuate a lot, try to find the root cause; garbage collection, thread locking or dependencies on external systems can all have a big impact.
- Investigate dependencies on external systems. If your application does not seem to be CPU-bound or memory intensive, look into thread locking or dependencies on external systems (network connections, database servers, etcetera).
- Thread locking can have a big impact, to the extent that running your test on multiple cores can actually decrease performance. Threads that are waiting on each other are really bad for performance.
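To illustrate the ‘repeat your tests’ tip from the list above, here is a minimal sketch of repeating a measurement, averaging the results and checking the spread between runs; runTestOnce() is a placeholder for whatever you actually measure:

```java
// Minimal sketch: repeat a measurement, report the average and the spread.
public class RepeatedMeasurement {

    static long runTestOnce() {
        long start = System.currentTimeMillis();
        // ... execute the code under test here ...
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        int runs = 10;
        long[] results = new long[runs];
        long total = 0;
        for (int i = 0; i < runs; i++) {
            results[i] = runTestOnce();
            total += results[i];
        }
        double average = (double) total / runs;

        // A large spread between the fastest and slowest run is a warning
        // sign that the test itself is not stable.
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (long r : results) {
            min = Math.min(min, r);
            max = Math.max(max, r);
        }
        System.out.printf("average: %.1f ms (min %d, max %d)%n", average, min, max);
    }
}
```

When the gap between the fastest and slowest run is large compared to the average, fix the test before trusting any of the numbers.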
The Java HotSpot compiler
The Java HotSpot compiler kicks in when it detects a ‘hot spot’ in your code. It is therefore quite common for your code to run faster over time, so you should adapt your testing method accordingly.
The HotSpot compiler compiles in the background, eating away CPU cycles, so while the compiler is busy your program is temporarily slower. But after some hot spots have been compiled, your program will suddenly run faster!
When you graph the throughput of your application over time, you can see when the HotSpot compiler is active:
[Figure: throughput of a running application over time]
The warm-up period is the time the HotSpot compiler needs to get your application up to speed.
Do not draw conclusions from the performance statistics during the warm-up time!
- Execute your test and measure the throughput until it stabilizes. The statistics gathered during the warm-up time should be discarded.
- Make sure you know how long the warm-up time is for your test scenario. We use a warm-up time of 10-15 minutes, which is enough for our needs, but test this yourself! It takes time for the JVM to detect the hot spots and compile the running code.
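As a rough sketch of what this can look like in practice, a test harness can split warm-up and measurement into two explicit phases; the durations and the doWork() method are placeholders for your own scenario:

```java
// Minimal sketch: run a warm-up phase first so the HotSpot compiler can
// compile the hot code, then measure throughput only afterwards.
public class WarmupThenMeasure {

    static void doWork() {
        // ... the code under test ...
    }

    public static void main(String[] args) {
        long warmupMillis = 60_000;   // choose a warm-up long enough for your scenario
        long measureMillis = 60_000;

        // Warm-up phase: run the workload but throw the numbers away.
        long end = System.currentTimeMillis() + warmupMillis;
        while (System.currentTimeMillis() < end) {
            doWork();
        }

        // Measurement phase: only these operations count.
        long operations = 0;
        end = System.currentTimeMillis() + measureMillis;
        while (System.currentTimeMillis() < end) {
            doWork();
            operations++;
        }
        System.out.println("throughput: "
                + (operations / (measureMillis / 1000.0)) + " ops/s");
    }
}
```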
Remember, I am not a professional performance tester, so put your tips in the comments!