Web performance in seven steps
More and more internet users buy in web shops these days. Research shows that the share of European internet users who buy online has grown from 40% in 2004 to 84% in 2008. Additionally, the large web retailers in my country saw their revenue grow in 2009 and in the first half of 2010, just as if the crisis had never materialized.
I also like to shop on the web, to buy electronics, books or tickets. Now and then I enter a web shop where I have to wait before pages appear fully. Most of the time I'll move away: with just one click I'm off to the competitor. The increased comparison possibilities and freedom of choice offered by the internet apply not only to the products, but also to the web shops themselves. Therefore, a responsive web site has become crucial for the success of a web shop.
With only a few concurrent visitors, it is usually not hard to have a quick website. However, with the growing trend of internet sales, the increasing integration and complexity of back-end systems, and the ever richer user experience demanded by marketing, this often becomes a big challenge for developers and operators. The result may be systems that black out under high load or respond too slowly.
So the question is: how can we prevent these performance and availability problems, and how can we assure that a web site is always quick and available?
On the basis of real-life, trial-and-error experience, we have come to an approach which can be described as: measure, don't guess; seven steps to performance success.
How performance problems get to you
Frustration and loss of revenue
When internal applications are slow, this is frustrating for the users. They cannot do their job efficiently anymore and will become demotivated. Call center agents have to apologize continuously to the customer on the other end of the line for their slow systems. The customer in turn will be frustrated by the long waiting times and long phone calls. When external applications are slow, this has direct consequences for the revenue of the company. For instance, if I want to buy a book or insure my car, I compare online and choose a shop. If I have to wait when I am on such a site, I simply browse to the competitor and buy there. Since I will not be the only one behaving like that, this has its effect on the company's revenue.
Disruption of regular development
Slowness problems usually manifest themselves unexpectedly, such as after the introduction of a new application or a new release. One cause is that the non-functional aspects of the software tend to get attention too late and too little. The difficulties that turn up in production put high pressure on both the operators and the developers to solve problems that are usually hard to find. This disrupts the regular development of new releases: the development team is busy firefighting.
Just adding hardware: a cheap solution?
The solution to slowness is regularly sought in adding more hardware: load balancing over more servers or, more fashionably, running the application in an elastic cloud. However, if the bottleneck turns out to be located not in the web tier but somewhere else, this investment in extra servers is just wasted money. Moreover, yearly recurring licensing and operational costs are frequently underestimated. So, while extra hardware may be an easy solution, it is certainly not always a cheap one.
Seven steps to performance success
It can be a valid choice to run the risk of performance problems in production and deal with them in a reactive manner. However, it is usually wiser to be proactive and prevent them. This approach brings more certainty and peace of mind, and it also saves money. It consists of the following seven steps.
Step 1: Define performance requirements
Defining the performance requirements well is usually a neglected activity. Most of the time the requirement is formulated as: it just has to be fast, or: at least as fast as the previous platform. With such vague definitions the confusion starts. The goal is unclear and is typically interpreted very differently by the business and the IT department. To prevent this, the goals should be formulated in a SMART way and be prioritized. Speed will be more important for a shop homepage than for a page where a customer can change his profile. By defining priorities, this order of importance is made explicit and clear. In SMART, the A stands for attainable and the R for realistic. These aspects are often ignored by the non-technical contributors to the requirements. In that case, a short response time will lead to an extended development time or expensive hardware. Half a second slower during peak hours can be acceptable if it saves tons of money. On the other hand, reducing the response time of an important page from 4 to 2 seconds can lead to substantial growth in revenue. So, a solid analysis of the impact of performance on the business is needed in order to define the performance requirements in a SMART, prioritized way and to balance the costs and benefits.
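To make such a requirement checkable rather than vague, it helps to express it as a percentile over measured response times. The sketch below is purely illustrative (the class name, the sample numbers and the 95th-percentile/2-second budget are my assumptions, not from any real project) of what a SMART requirement can look like as an automated check:

```java
import java.util.Arrays;

// Illustrative sketch: checking "95% of homepage requests must complete
// within 2 seconds" against a set of measured response times.
public class RequirementCheck {

    // Returns the p-th percentile (0-100) of the given response times in ms.
    public static double percentile(double[] timesMs, double p) {
        double[] sorted = timesMs.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static boolean meetsRequirement(double[] timesMs, double p, double thresholdMs) {
        return percentile(timesMs, p) <= thresholdMs;
    }

    public static void main(String[] args) {
        // Made-up measurements in milliseconds.
        double[] measured = {120, 340, 560, 780, 910, 1100, 1450, 1800, 1950, 2600};
        System.out.println("95th percentile: " + percentile(measured, 95) + " ms");
        System.out.println("meets 2000 ms budget: " + meetsRequirement(measured, 95, 2000));
    }
}
```

A percentile-based formulation is prioritizable and measurable, which is exactly what a bare "it has to be fast" is not.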
Step 2: Execute a PoC for performance
The IT world is very sensitive to trends. Having been around in the IT industry for 15 years, I've seen a few. A technology is hot for a while, then quickly becomes out of fashion and yesterday's news. It is replaced by something which is supposedly much better and which everyone seems to follow blindly. Such fashionable topics are, to name a few, CORBA, CGI, applets, EJB, Struts, Spring, JavaServer Faces, XML, SOA, OMT, UML and RIA. Often new, bleeding-edge technology is used in a project just for the sake of being fashionable or for getting it on the developer's resume. In addition, each technology or framework comes with its own teething troubles and most of the time uses more resources than its predecessor. The goal of such a new technology is generally improvement of flexibility, productivity or maintainability; performance usually has no priority or has not been considered at all.
Therefore, it is questionable whether the chosen technology and architecture will meet the specified performance requirements. In practice, this regularly becomes evident only in a late stage of the project: when it has already slipped beyond the planned production date. Only then may it become clear that the chosen technology or architecture is simply not sufficient, and switching to a different technology or architecture usually results in high costs and long delays. Therefore it is essential to execute a proof of concept for performance, in which all technology and architecture components are touched, in a vertical slice of the application. It is important that this benchmark is performed in a sufficiently representative manner, which I will elaborate on in my next post. By executing this PoC and understanding and using its results, the project can be corrected early in the right direction, preventing excessive cost and delay.
Step 3: Test representatively
Testing the performance of applications in development environments is often neglected, with the rationale that faster hardware in the production environment will solve any problems. However, whether this is really true can only be predicted with a test on a representative environment and in a representative way. In such an environment, more needs to be representative than just the hardware. I have experienced multiple times that a database query on the test database with 1,000 customers took less than 10 ms, while on the production database with 100,000 customers it turned out to take tens of seconds. So, if the development team does not test with a full-size database, going to production may lead to some surprises. It is also important that the number of concurrent users and their behavior are well simulated in the test. Furthermore, care should be taken to take caching effects into account: if the test continuously requests the same product for the same customer, this data will be in the database or query cache the second and following times. This will speed up the request considerably and make it much faster than with many customers and products. Such a test is therefore not representative of the real situation. A suitable performance test tool and performance expertise are necessary to create a valuable test. The most popular open source performance test tool is Apache JMeter.
Figure 1. Screenshot of a run of a performance test in Apache JMeter.
JMeter is a tool made by programmers, for programmers. Test scripts can be created with visual elements like an HTTP request, which can be recorded and configured. Many elements are available, and if you need more, you can always fall back on a BeanShell element in which you can manipulate the request, the response and various JMeter variables. If even that does not meet your needs, you can extend the JMeter source code and develop your own elements. Because of its for-programmers nature, it is less suited for the average tester. Also, the reporting features and the maintainability of the scripts are not so great. Therefore, commercial tools like HP Mercury LoadRunner, Borland SilkPerformer or Neotys' NeoLoad may be good alternatives for companies.
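As an illustration of what such a BeanShell element typically does, the sketch below extracts a dynamic value (here a made-up session token) from the previous response and stores it as a variable for the next request. In real JMeter, the BeanShell element provides the bindings `prev` (the SampleResult) and `vars` (the JMeterVariables); in this self-contained sketch they are stood in for by a String and a Map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of typical BeanShell post-processor logic: pull a dynamic token out
// of the previous response body and store it as a JMeter variable. The token
// attribute name and format are invented for the example.
public class BeanShellSketch {

    public static String extractToken(String responseBody) {
        Matcher m = Pattern.compile("sessionToken=\"([^\"]+)\"").matcher(responseBody);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // In JMeter: String body = prev.getResponseDataAsString();
        String body = "<input type=\"hidden\" sessionToken=\"abc123\"/>";
        Map<String, String> vars = new HashMap<>(); // stands in for JMeter's vars
        vars.put("token", extractToken(body));      // vars.put("token", ...) in BeanShell
        System.out.println("extracted token: " + vars.get("token"));
    }
}
```

The next HTTP request element can then reference the stored value with `${token}`.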
Performance testing from the cloud
The emergence of cloud computing adds new possibilities for performance testing. An elastic compute cloud like Amazon EC2 provides the ability to quickly scale up the number of application deployments when load increases. For performance testing, the cloud can be used the other way around: to temporarily run many load-generating test clients that generate the expected and peak loads for your application. This saves you from having to buy many servers to run the load-generating clients, and if you run these performance tests only periodically, it can be an economical solution. Quite some information is available on how to run various performance tools in the cloud.
Step 4: Test continuously
With a representative test as one of the last steps before going live, we prevent expensive bad-performance surprises from popping up in production. However, the same surprises will still pop up, only earlier and with less impact. To save costs and prevent large architectural refactorings, it is crucial to test for performance as early as possible. This is just like any other software defect and quality assurance: the later in the development process defects are detected, the more costly they are.
At a popular web shop I had the following challenge: we wrote the performance tests only at the end of the six-weekly release period, after functional testing had taken place and functional defects had been corrected. When serious performance defects popped up, a crisis team was gathered and we found ourselves in a stressful situation. There was usually not enough time to fix the defect before the release date, so my recommendation at times was to defer the release date. However, deferring the release date often just was not possible, because TV or radio ads had been bought to promote the new functionality. So, how to solve this dilemma? We found the solution in applying agile principles: test early, and the team is responsible. We included meeting the performance requirements of a new or changed feature in the definition of done. The development process included a common automated build. Unit tests of a feature were written as usual by the developer. We now introduced performance tests to the spectrum: the developer writes the performance test script for his feature (service, page) in JMeter, side by side with his unit tests on the classes. When the nightly build with Maven has taken place, the application is deployed on WebSphere and the performance tests are run by the JMeter Ant script. This script generates a report which is emailed to the stakeholders. In this way, the IT department gets early insight into new and changed features; it can adapt its course more quickly, back off early from an unfortunate architecture or approach, minimize surprises and also lower costs. An additional benefit is that writing test scripts gets done more quickly than before, because the developer still has all details of the new feature fresh in his memory: for instance, the conditions under which the service may be called and with which parameters, variations and special cases. This way, the communication overhead between a performance tester and a developer on these details is drastically reduced, further improving productivity.
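A minimal sketch of what such a developer-written performance check could look like in the nightly build (the workload, iteration count and budget below are placeholders; in the setup described above, the JMeter Ant script plays this role against the deployed application):

```java
// Sketch of an automated nightly performance check: run the feature under
// test repeatedly and fail the build when the average time exceeds its
// budget. The workload here is a trivial placeholder for a real service call.
public class NightlyPerfCheck {

    // Runs the action `iterations` times and returns the average elapsed ms.
    public static double averageMillis(Runnable action, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            action.run();
        }
        return (System.nanoTime() - start) / 1_000_000.0 / iterations;
    }

    public static void main(String[] args) {
        double avg = averageMillis(() -> Math.sqrt(42), 1000); // placeholder workload
        double budgetMs = 50;                                  // illustrative budget
        System.out.println("average: " + avg + " ms, within budget: " + (avg <= budgetMs));
    }
}
```

Wiring the resulting pass/fail into the build report is what makes the performance regression visible the morning after the change, not six weeks later.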
Step 5: Monitor and diagnose
When a new version of the software is released into the production environment, the question always is: will it actually perform like it did in the test and acceptance environments? And we keep our fingers crossed. It is therefore important in such cases to monitor carefully what happens with performance and availability.
There are all sorts of tools and services available to monitor your web site for availability and response times of web pages, like Uptrends, Site24x7 and Dotcom-Monitor. They look at the application as a black box and measure once every few minutes. However, to be able to take the right measures in case of a fatal incident, it is necessary to be able to pinpoint the problem.
It is essential to monitor on multiple levels and on multiple application parts. For levels, think of hardware, OS, application server, web server, database and application. Within a Java application this can be achieved with JAMon, an open source timing API. It basically works like a stopwatch with a start() and a stop() call. Every method you want to measure gets its own stopwatch (or counter). Each counter maintains statistics like the number of calls, average, maximum, standard deviation, etc., and this information can be requested. The individual calls are not stored. This approach results in low memory usage and a low performance overhead, at the cost of some information loss.
Figure 2. JAMon API start() and stop() calls in a Spring interceptor.
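To show why this aggregate-only bookkeeping keeps memory usage and overhead low, here is a minimal sketch of the stopwatch/counter idea. Note that this mimics the approach; it is not the actual JAMon API (the method name below is invented), and a real monitor also tracks average and standard deviation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the counter idea behind JAMon: each measured method gets a named
// counter that keeps only aggregates (hits, total, max), so individual calls
// are discarded and memory use stays constant regardless of traffic.
public class StopwatchSketch {

    static final class Counter {
        long hits, totalMs, maxMs;
        synchronized void record(long elapsedMs) {
            hits++;
            totalMs += elapsedMs;
            maxMs = Math.max(maxMs, elapsedMs);
        }
        synchronized double average() { return hits == 0 ? 0 : (double) totalMs / hits; }
    }

    static final Map<String, Counter> counters = new ConcurrentHashMap<>();

    public static void record(String label, long elapsedMs) {
        counters.computeIfAbsent(label, k -> new Counter()).record(elapsedMs);
    }

    public static void main(String[] args) {
        record("OrderDao.findOrders", 12); // label and timings are made up
        record("OrderDao.findOrders", 30);
        Counter c = counters.get("OrderDao.findOrders");
        System.out.println("hits=" + c.hits + " avg=" + c.average() + " max=" + c.maxMs);
    }
}
```

An interceptor (as in Figure 2) can wrap every call with such a record, which is why the overhead per request stays in the microsecond range.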
Recently, a new competitor of JAMon appeared: Simon. It claims to be JAMon's successor, although it has (had) some issues. Then there is the question: where to measure? It makes most sense to measure all incoming calls, like web requests, and outgoing calls, for instance to the database; furthermore, parts like Spring beans, EJBs and DAOs. Measuring these parts is not only relevant with new releases: trends and usage spikes are also useful to monitor in order to quickly solve and prevent various problems. The open source tool JARep offers the possibility to store JAMon data from a cluster in a database and to monitor trends and changes graphically.
Figure 3. JARep shows the increasing response time trend starting October 15, on two of the four production JVMs.
We had the following situation at my customer. Processing an order slowly took more and more time over a period of several weeks. This happened while no new release was introduced and no other page became slower. This behavior was a complete mystery, until we looked deeper with our JARep monitoring tool. The troublemaker turned out to be a DAO executing a prepared statement with only part of the variables being bind variables. With the help of JARep, we could look back to where the trend of increasing response times started, and thus when the problems started. We could also see that this problem was present on only one of the two machines. With this knowledge and his logbook, the operator remembered that on the start date he had experimented with a new JDBC driver to try to solve a memory leak. It seemed not to change anything concerning performance, which in the beginning was actually the case; the problems only appeared slowly during the following weeks. The new driver had been left in place and manifested itself as a time bomb later. When we put back the old driver, the problem just disappeared! This real-life experience shows the usefulness of monitoring and trend analysis on application internals.
Step 6: Tune based on evidence
If an application turns out to be too slow, tuning can provide a solution. Tuning can take place on multiple levels. Adding hardware can be a cheap solution. However, when hardware is added at a place where the bottleneck is not located, it has little use.
The important steps of tuning are therefore the following five. First, identify which pages or services do not meet the stated requirements. Second, isolate the problem: where is it located, in which layer, in which component? This can be made clear with testing and monitoring on parts. The next step is diagnosing; in essence, this comes down to forming a hypothesis of why this component is so slow. This can for instance be a missing or wrong index on a database table, or the invocation of too many small queries. Next, the component is improved based on this hypothesis. Finally, one needs to verify whether the improvement actually brings the expected speedup. If so, then the proposed hypothesis was true and the speedup is the result. If not, then there is something wrong with the hypothesis and we need an alternative one. As soon as the performance of the system meets its requirements, tuning is finished.
Figure 5. Finding evidence.
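As a worked example of this hypothesize-improve-verify cycle, the sketch below takes the "too many small queries" hypothesis. The database is simulated with an in-memory map and a round-trip counter (all names and data are made up); the point is that the improved variant must return exactly the same results while issuing far fewer round trips, which is what the verification step checks:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: the N+1 query pattern versus one batched lookup, with a counter
// standing in for the measured number of database round trips.
public class BatchingSketch {

    static final Map<Integer, String> customerTable =
            Map.of(1, "Alice", 2, "Bob", 3, "Carol");
    static int queriesIssued = 0;

    // Simulated database round trip for a set of ids.
    static Map<Integer, String> query(List<Integer> ids) {
        queriesIssued++;
        Map<Integer, String> rows = new HashMap<>();
        for (int id : ids) rows.put(id, customerTable.get(id));
        return rows;
    }

    // Hypothesised slow path: one round trip per id (the N+1 pattern).
    public static List<String> oneByOne(List<Integer> ids) {
        List<String> names = new ArrayList<>();
        for (int id : ids) names.addAll(query(List.of(id)).values());
        return names;
    }

    // Improved path: a single batched round trip ("WHERE id IN (...)").
    public static List<String> batched(List<Integer> ids) {
        Map<Integer, String> rows = query(ids);
        List<String> names = new ArrayList<>();
        for (int id : ids) names.add(rows.get(id));
        return names;
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(1, 2, 3);
        queriesIssued = 0;
        List<String> slow = oneByOne(ids);
        int slowTrips = queriesIssued;
        queriesIssued = 0;
        List<String> fast = batched(ids);
        int fastTrips = queriesIssued;
        System.out.println("same results: " + slow.equals(fast)
                + ", round trips: " + slowTrips + " vs " + fastTrips);
    }
}
```

If the measured speedup after batching does not materialize, the hypothesis was wrong and the bottleneck lies elsewhere, exactly as the five steps prescribe.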
Right tools for the right job
The right tools are indispensable: a performance test tool, an enterprise profiler, a heap monitor, etc. I have seen several developers work for multiple days on assumed performance improvements which turned out not to help at all, or which even slowed down the application and deteriorated its maintainability and flexibility. This is caused by the fact that developers are used to molding functionality from source code, and therefore work from the source code to improve performance. What is missing here is: measure, don't guess. This is something developers learn in my performance training. Experience has also taught me to judge every proposed improvement separately and to only implement an improvement when we have proven that it really helps.
There are many tools to choose from. Live monitoring is essential to see the actual performance problems. Being able to do root cause analysis and to find the needed evidence is essential to effectively solve those problems. On the open source front there is VisualVM to the rescue, my favorite open source performance tool. On the commercial APM front there are the big vendors like HP, CA (Wily) and Quest, which can provide an extensive solution covering some or all of: end user experience, transaction profiling, and infrastructure and database performance. There are also smaller, more specialized vendors like dynaTrace and AppDynamics. I like their products because they are innovative and really effective at finding root causes.
Step 7: Share the responsibility for the whole chain
When an incident happens in production, this usually means stress. A performance problem in production often leads to finger pointing. The DBA says that he has looked and that nothing is wrong with his database. The network operator concludes the same about his network, the application server operator about his application server, the software developer about his source code, and the back-end operator about his back end. It is never them; it is always the other guy.
The application often gets thrown over the wall to the operations department. Responsibilities then hold only on one side of that wall. If software development, maintenance, testing and/or operations are outsourced to external parties, this can lead to tricky situations. Before you know it, contracts and legal procedures are at play and cooperation is far away. Both parties stick to their positions, costs rise and precious time gets lost.
Finding out which part of the chain is responsible for slowness can partly be solved with proper tools that monitor the whole chain and that are used from early on in the development process. But there is more to it than just tooling. Experience with and knowledge of tooling and technology are indispensable, as is priority for the proper utilization of the tools. Most important is to prevent the formation of separate kingdoms and finger pointing between them, and rather to operate together as a multi-disciplined performance team that shares the responsibility for the whole chain.
Summary and conclusions
In this growing online world with demanding customers, it has become crucial that services provided on the web are always available and always fast enough. This is often challenging for developers and operators: performance problems manifest themselves in various ways, like frustration, loss of revenue and disruption of development; and just adding hardware is a doubtful solution. The question is: how can we as developers and operators assure that our web site is always available and always fast? My answer is: you need the right approach, namely: measure, don't guess; seven steps to performance success. These seven steps are as follows:
Step 1: Define performance requirements;
Step 2: Execute a proof of concept;
Step 3: Test representatively;
Step 4: Test continuously;
Step 5: Monitor and diagnose;
Step 6: Tune based on evidence;
Step 7: Share the responsibility for the whole chain.
This approach provides a proactive way of working which my customers appreciate as valuable. It can actually be leveraged to assure high performance, all of the time, not only for web applications, but for any online or offline application.
This article and blog series has been an interesting journey for me. Some time ago we at Xebia presented our EJAPP Top 10 of performance problems. Now we have added this approach of seven steps to help assure your application's performance. It has worked for us. How does this all work for you in practice? I'd like to hear your feedback.