ICEfaces Performance Report - Corrected
Problems With the Tests

All three tests follow a similar methodology: the system is loaded by a variable number of simulated users repeatedly refreshing a page in the application. Strictly speaking, this does not exercise the Ajax functionality of the frameworks, since most Ajax interaction does not occur via full-page refresh. The latency measurement in this test should therefore be interpreted as the latency of the initial page view under load, not as typical interaction latency. The heap measurement is likely similar to steady-state heap use for that number of simulated users, but it does not account for the different memory-use patterns of Ajax interaction or input processing. Furthermore, because the simulated clients interact with the server so frequently, more working memory is required in the heap than a real-world load would demand. This allows the load test to complete more rapidly, but it would not be a recommended approach for load testing a production application, as it does not simulate realistic delays between user requests.
Despite their limited scope, these tests do exercise the initial page-load characteristics of the frameworks, provided that they properly simulate real-world client interaction with the application. If a simulated browser simply sends a series of GET requests while neglecting to send the messages generated by window onunload(), it appears to the Ajax framework as though the user has opened, and expects to interact with, a great many separate browser windows. ICEfaces includes logic to clean up resources in response to the window onunload() events that should accompany each page refresh, but the test methodology did not account for this. The problem is compounded by a misconfiguration of ICEfaces for this type of test. ICEfaces includes a feature called Concurrent DOM Views that supports opening multiple windows or tabs onto the same ICEfaces application, maintaining independent state for each view and sharing a single Ajax Push connection among all views. The original test configuration for ICEfaces was concurrentDOMViews="true", but under the test conditions every GET request created a new view that consumed resources for the duration of the test, because the necessary onunload() events never occurred. This is the root cause of the erroneous results originally reported.
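A back-of-envelope model makes the scale of the error concrete. The sketch below is purely illustrative (it is not ICEfaces API, and the client and refresh counts are hypothetical): with Concurrent DOM Views enabled and no onunload() events arriving, every page refresh leaves a live server-side view behind, whereas with the feature disabled each simulated client holds a single view.

```java
// Hypothetical model of server-side view accumulation during the load test.
// Not ICEfaces code; the numbers below are illustrative assumptions only.
public class ViewAccumulation {

    // concurrentDOMViews="true" + missing onunload(): every GET creates a
    // new view that survives for the duration of the test.
    static long viewsWithConcurrentViews(long clients, long refreshesPerClient) {
        return clients * refreshesPerClient;
    }

    // concurrentDOMViews="false": one view per simulated client.
    static long viewsWithoutConcurrentViews(long clients) {
        return clients;
    }

    public static void main(String[] args) {
        long clients = 700;     // illustrative client count
        long refreshes = 100;   // illustrative refreshes per client
        System.out.println(viewsWithConcurrentViews(clients, refreshes));  // 70000
        System.out.println(viewsWithoutConcurrentViews(clients));          // 700
    }
}
```

Under these assumed numbers the misconfigured test retains two orders of magnitude more views than the corrected one, which is consistent with the memory discrepancy discussed below.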
Test Configuration - Corrected

To conduct the tests properly, we worked with the ICEfaces 1.8.1 release and set concurrentDOMViews="false" to compensate for the missing onunload() events. This ensures that the ICEfaces framework maintains only one unique view for each simulated test client. We also took the liberty of testing two different configurations of ICEfaces related to maintaining the state of the server-side DOM. ICEfaces supports DOM compression using Fast Infoset XML compression to reduce server-side memory use between requests, so you can see the comparative trade-off between memory and computation. The rest of the test configuration is as follows.
• MacBook Pro
• CPU: Intel Core 2 Duo 2.2 GHz
• Memory: 2GB
• ICEfaces 1.8.1
• JDK 1.6.0_07
• Apache Tomcat 6.0.18
• JMeter 2.3.2
• Session timeout: 3 minutes
• CATALINA_OPTS: -Xms256m -Xmx1600m
• maxThreads: 1000
• acceptCount: 100
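For readers wishing to reproduce the setup, the ICEfaces settings described above are expressed as servlet context parameters in web.xml, roughly as follows. The parameter names below are our recollection of the ICEfaces 1.8 context parameters and should be verified against the ICEfaces release documentation.

```xml
<!-- Hypothetical web.xml fragment; verify parameter names against ICEfaces 1.8 docs -->
<context-param>
    <!-- one view per client, compensating for the missing onunload() events -->
    <param-name>com.icesoft.faces.concurrentDOMViews</param-name>
    <param-value>false</param-value>
</context-param>
<context-param>
    <!-- toggled between runs to compare the memory/computation trade-off -->
    <param-name>com.icesoft.faces.compressDOM</param-name>
    <param-value>true</param-value>
</context-param>
```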
Source code from the original article can be downloaded from the following URL: https://zkforge.svn.sourceforge.net/svnroot/zkforge/trunk/zkTest
Measurements

Latency measurements represent an average response computed from measurements gathered with JMeter. The HTTP Client sampler is configured with a response timeout of 30 seconds, as response times in excess of that are far from usable. Memory measurements are obtained by fetching a simple JSP page after each test phase that returns Runtime.totalMemory() and Runtime.freeMemory() after garbage collection.
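The memory probe described above can be sketched in plain Java. This is a minimal illustration of the technique, not the actual JSP from the test harness; note that System.gc() is only a hint, so the figures are approximate.

```java
// Minimal sketch of the heap measurement the test JSP performs:
// request a garbage collection, then report heap figures via the Runtime API.
public class HeapProbe {

    public static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // a hint only; the JVM may ignore or defer it
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        System.out.println("totalMemory=" + rt.totalMemory());
        System.out.println("freeMemory=" + rt.freeMemory());
        System.out.println("usedBytes=" + usedHeapBytes());
    }
}
```

In the actual tests this logic lives in a JSP fetched after each test phase, so the reported numbers reflect the heap left in use by the framework's retained state.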
We simply present our ICEfaces test results alongside the originally published ICEfaces results to illustrate the gross inaccuracies in the original publication. In the case of memory usage, it was necessary to use a logarithmic scale just to show the accurate and erroneous results on the same graph.
Test 1: Simple Form
Test 2: Grid 15
Test 3: Grid 150
In our testing this example became unstable for all frameworks as the load approached 1000 threads, so results are truncated at 700 threads.
Concluding Comments

The results presented here clearly illustrate that gross errors occurred in the original testing of the ICEfaces framework, and that the conclusions drawn by the author of the original article are completely compromised. We are certain that the Java EE community is better served when accurate results are presented, and we trust that the data provided here will allow Java EE developers to make more informed decisions when selecting technologies for RIA development.
And with regard to the original article's proposition, "This could be a battle between comfort, and standards.", it is clear that standards-based approaches perform comparably to one particular proprietary approach, while offering much greater community adoption and support. You will have to decide for yourself which community offers the greatest "comfort" for your development efforts.
Opinions expressed by DZone contributors are their own.