Do you really even need performance monitoring for your website? In truth, you probably don't. Of all the goals of performance monitoring, performance itself is the least important.
Now that I have your attention, here's my reasoning: successful website performance monitoring focuses on three aspects, in order of priority: availability, functionality, and performance.
First and foremost, your website needs to be online. It doesn't matter how many hours you've invested in making your site look great and run smoothly if it isn't online and available.
When it comes to availability, you need to know two things:
- Is my site available right now?
- What percentage of time has my site been available during the past 24 hours/week/month?
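At its simplest, you could script such a check yourself. Here's a minimal sketch in Python of what a self-rolled availability check with basic uptime bookkeeping might look like; the URL, the once-a-minute interval, and the `check_availability` helper are all hypothetical stand-ins:

```python
import time
import urllib.request
import urllib.error

SITE_URL = "https://example.com"  # hypothetical URL; substitute your own site


def check_availability(url: str, timeout: int = 10) -> bool:
    """Return True if the site answers with an HTTP 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return 200 <= response.status < 400
    except (urllib.error.URLError, TimeoutError):
        return False


# Poll once per minute and keep a simple log of results so that uptime
# percentages can be computed later for any reporting window.
results = []  # list of (unix_timestamp, was_available) tuples
for _ in range(60 * 24):  # roughly one day of once-a-minute checks
    results.append((time.time(), check_availability(SITE_URL)))
    time.sleep(60)

uptime = 100.0 * sum(ok for _, ok in results) / len(results)
print(f"Availability over the past 24 hours: {uptime:.2f}%")
```

Even this toy version hints at the real work: the loop has to run somewhere that is itself reliable, the results have to be stored durably, and reporting across arbitrary windows is entirely up to you.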
Or, instead of scripting your own checks with Wget or the like, you can just invest in a decent tool that handles all this for you automatically. You see, checking availability at specific points in time is easy. However, as soon as reporting becomes a requirement, things get more complicated. And if it's not just availability that you're tracking (e.g., you also want response times or functional testing), your challenges grow exponentially.
For functional monitoring, you really need a browser engine, because entire HTML pages, including all JavaScript, CSS, and image files, must be downloaded, evaluated, executed, and rendered to confirm correct functionality.
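Here's a hedged sketch of such a browser-driven check using Selenium's Python bindings with headless Chrome; the URL, the expected title, and the element ID are hypothetical examples, not a prescribed setup:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# Run Chrome headlessly so the check can run on a server without a display.
options = Options()
options.add_argument("--headless=new")

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")  # hypothetical URL
    # Unlike a plain Wget fetch, the browser has now downloaded and executed
    # the page's JavaScript and CSS, so we can assert against the rendered
    # result rather than the raw HTML.
    assert "Example" in driver.title  # hypothetical expected title
    driver.find_element(By.ID, "main-content")  # hypothetical element ID
    print("Page loaded and rendered correctly.")
finally:
    driver.quit()
```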
And then there's business functionality. This entails multi-step transaction testing involving sequences of user actions, for example logging into a website or completing a purchase order. I've talked to some people who don't think that periodic checking of website functionality is a requirement because such checks are performed during development. While it's true that testing of business functionality should be performed during development, there is serious risk in not monitoring the accuracy of business functions once your application is in production. Your business processes may depend on other services (third-party services or your own) that can fail at any time. When they do, your application's business functionality fails as well, regardless of the testing you did before deployment.
So, Wget simply can't get the job done here. Often, tools like Selenium are employed. These tools do exactly what you tell them to do. The downside is that any change you make to your website requires corresponding changes to your tests, which can become very tedious in the long run. There are also newer monitoring solutions out there, most notably some SaaS offerings that have entered the market recently. If you have any experience with these SaaS-based monitoring tools, please share it in the comments below.
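For a concrete sense of what such a scripted transaction looks like, here's a minimal login-check sketch using Selenium's Python bindings; the URL, element IDs, credentials, and post-login indicator are all hypothetical, and each one would need updating whenever the page changes, which is exactly the maintenance burden described above:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    # Step 1: open the login page (hypothetical URL).
    driver.get("https://example.com/login")

    # Step 2: fill in the form (hypothetical element IDs and credentials).
    driver.find_element(By.ID, "username").send_keys("monitoring-user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()

    # Step 3: wait up to 10 seconds for a post-login element to appear
    # (hypothetical ID); if it never shows up, the business function is
    # broken even though the site itself may be "available".
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "account-dashboard"))
    )
    print("Login transaction succeeded.")
finally:
    driver.quit()
```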
So, you’ve confirmed that your site is online and you know that, from both a technical and a business perspective, your application’s most important functionality is working as designed. Only at this point does it make sense to focus on the performance of your website.
Some performance problems are so obvious that you don’t even need a monitoring tool to detect them (having your site crash entirely upon deployment is one such example). And then there are those more subtle performance problems that creep up on you. For example, a few days or weeks after deployment you might notice that some transactions take longer to execute than they once did. This sort of problem is best detected through long-term monitoring (3-4 weeks) that allows you to analyze performance degradations over time. It’s particularly helpful if you can compare time intervals (for example, today’s performance compared with performance on the same day last week).
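As a toy illustration of that kind of interval comparison, the sketch below compares the median transaction time of one day against the same weekday a week earlier; the sample data and the hard-coded dates are fabricated placeholders, not output from any real monitoring tool:

```python
from datetime import date, datetime, timedelta
from statistics import median

# Hypothetical monitoring samples: (timestamp, transaction duration in ms).
samples = [
    (datetime(2015, 6, 8, 10, 0), 420.0),
    (datetime(2015, 6, 8, 14, 0), 450.0),
    (datetime(2015, 6, 15, 10, 0), 610.0),
    (datetime(2015, 6, 15, 14, 0), 640.0),
]


def median_for_day(day: date) -> float:
    """Median duration of all samples recorded on the given day."""
    return median(d for ts, d in samples if ts.date() == day)


today = date(2015, 6, 15)              # hypothetical "today" (a Monday)
last_week = today - timedelta(days=7)  # the same weekday, one week earlier

change = median_for_day(today) / median_for_day(last_week) - 1.0
print(f"Median transaction time changed by {change:+.0%} week-over-week")
```

A degradation like the +44% this toy data shows is easy to miss day to day but jumps out once you compare like-for-like intervals.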
For measuring browser performance over long periods, you really should take advantage of synthetic test agents so that you have enough continuous data to draw conclusions from. It doesn't matter if synthetic performance varies from real-user performance; it's the relative performance that counts here, not the absolute performance.
Synthetic clients and real users: a game-winning combo
All three aspects of performance monitoring (availability, functionality, and performance) require the use of both real-user data and data from synthetic clients. Data from synthetic sources is helpful for identifying performance trends over long periods. It doesn’t matter if synthetic clients perform better or worse than real clients—what matters are performance changes experienced by synthetic agents (which is why you need continuous data). To improve the user experience of your site and see how your customers are using your website, you absolutely must have access to real-user data.
Website performance is something you should focus on only after you've confirmed that your site is available and functioning correctly. In the words of Donald Knuth, "Premature optimization is the root of all evil."