Why ‘Performance’ is the Least Important Aspect of Performance Monitoring


Think performance is the most essential aspect of performance monitoring? It might actually be the least important part.


Do you really even need performance monitoring for your website? In actuality, you probably don’t. Of all the goals of performance monitoring, performance itself is the least important.

Now that I have your attention, here’s my reasoning: successful website performance monitoring focuses on three aspects of website performance:

Availability

First and foremost, your website needs to be online. It doesn’t matter how many hours you’ve invested into making your site look great and run smoothly if it isn’t online and available.

When it comes to availability, you need to know two things:

  1. Is my site available right now?
  2. What percentage of time has my site been available during the past 24 hours/week/month?


To find out if your site is available right now, all you really need to do is open your website in a browser and take a look. If, however, you want to be alerted when your site goes offline, you’ll need some sort of automated mechanism. This isn’t much of a challenge though, is it? There’s nothing easier than setting up a cron job that pings your server, right? While this may be true, it isn’t a good solution, because pinging simply isn’t enough.
A ping can only tell you if a server is reachable, not if your website is up. So you should at least have a Wget command in place that checks for HTTP 400 and 500 response codes. For analysis of your site’s availability over long periods of time, you’ll need to keep a record of the responses you receive and compile a report.
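To make this concrete, here’s a minimal sketch of such a check in Python, using the requests library in place of Wget. The URL and log file name are placeholders; scheduled via cron, the script appends one availability sample per run:

```python
# availability_check.py -- a minimal sketch of the check described above.
# The URL and log file are placeholders for your own site and record.
import datetime
import requests

SITE_URL = "https://example.com/"   # placeholder: your site
LOG_FILE = "availability.log"       # placeholder: where samples are recorded

def check_availability():
    timestamp = datetime.datetime.utcnow().isoformat()
    try:
        response = requests.get(SITE_URL, timeout=10)
        # Treat 4xx/5xx responses as "down", just like the Wget check above.
        status = "UP" if response.status_code < 400 else f"DOWN ({response.status_code})"
    except requests.RequestException as exc:
        status = f"DOWN ({exc.__class__.__name__})"
    with open(LOG_FILE, "a") as log:
        log.write(f"{timestamp} {status}\n")

if __name__ == "__main__":
    check_availability()
```

A crontab entry such as `* * * * * python3 availability_check.py` would give you one sample per minute, and the resulting log is exactly the kind of record you need for long-term availability reports.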

Or, instead of scripting your own Wget command, you can just invest in a decent tool that handles all this for you automatically. You see, checking availability at specific times is easy. However, as soon as reporting becomes a requirement, things get more complicated. And if it’s not just availability that you’re tracking (i.e., you also want response times or functional testing), your challenges become exponentially more complicated.

Functionality

Functionality comes in two flavors. First off, there’s technical functionality: for example, confirming that your JavaScript code (or whatever) isn’t throwing exceptions. Just Wget-ing that index file won’t tell you about failing scripts, though, as the scripts are never downloaded, let alone evaluated or executed.


For this sort of monitoring you really need a browser engine because entire HTML pages including all JS, CSS, and image files need to be downloaded, evaluated, executed, and displayed to confirm correct functionality.
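By way of illustration, here’s a rough sketch of what such a browser-engine check might look like, using Selenium with headless Chrome as one possible engine (the URL is a placeholder, and the console-log capability shown is Chrome-specific):

```python
# js_error_check.py -- a sketch of a browser-engine functionality check,
# assuming Selenium with headless Chrome; the URL is a placeholder.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

SITE_URL = "https://example.com/"  # placeholder

options = Options()
options.add_argument("--headless=new")
# Ask Chrome to expose its console log so JavaScript errors can be inspected.
options.set_capability("goog:loggingPrefs", {"browser": "ALL"})

driver = webdriver.Chrome(options=options)
try:
    driver.get(SITE_URL)  # downloads, evaluates, executes, and renders the page
    errors = [entry for entry in driver.get_log("browser")
              if entry["level"] == "SEVERE"]
    if errors:
        print(f"{len(errors)} JavaScript error(s) detected:")
        for entry in errors:
            print(" ", entry["message"])
finally:
    driver.quit()
```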

And then there’s business functionality. This entails multi-step transaction testing that involves sequences of user actions, for example logging into a website or completing a purchase order. I’ve talked to some people who don’t think that periodic checking of website functionality is a requirement because such checks are performed during development. While it’s true that testing of business functionality should be performed during development, there is serious risk in not monitoring the accuracy of business functions once your application is in production. This is because your business processes may depend on other services (third-party services or your own) that can fail at any point. When they do, your application’s business functionality fails as well, regardless of the testing you did before your application was deployed.

So, Wget simply can’t get the job done. Often, tools like Selenium are employed. These tools do exactly what you tell them to do (see the sketch below). The downside is that any change you make to your website requires corresponding changes to your tests, which can be very tedious in the long run. There are also newer monitoring solutions out there, most notably some SaaS offerings that have entered the market recently. If you have any experience with these SaaS-based monitoring tools, please share your experiences in the comments below.
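For the sake of illustration, a multi-step login check in Selenium might look roughly like the following sketch. The URL, element IDs, and credentials are all hypothetical stand-ins for your site’s actual selectors and a dedicated monitoring account:

```python
# login_check.py -- a sketch of a multi-step transaction test with Selenium.
# URL, element IDs, and credentials are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")                      # step 1: open login page
    driver.find_element(By.ID, "username").send_keys("monitor")  # step 2: fill in the form
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()                 # step 3: submit
    # Step 4: confirm that the post-login landing page actually appeared.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "dashboard")))
    print("login transaction OK")
finally:
    driver.quit()
```

Notice how tightly the test is coupled to specific element IDs; this is exactly why such tests become tedious to maintain as your site evolves.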

Performance

So, you’ve confirmed that your site is online and you know that, from both a technical and a business perspective, your application’s most important functionality is working as designed. Only at this point does it make sense to focus on the performance of your website.

Some performance problems are so obvious that you don’t even need a monitoring tool to detect them (having your site crash entirely upon deployment is one such example). And then there are those more subtle performance problems that creep up on you. For example, a few days or weeks after deployment you might notice that some transactions take longer to execute than they once did. This sort of problem is best detected through long-term monitoring (3-4 weeks) that allows you to analyze performance degradations over time. It’s particularly helpful if you can compare time intervals (for example, today’s performance compared with performance on the same day last week).
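As a rough sketch of such an interval comparison, assuming your monitoring has been writing timestamped response times to a simple CSV file (the file name and row format are assumptions):

```python
# trend_check.py -- a sketch of the week-over-week comparison described above.
# Assumes a CSV of "ISO-timestamp,response_ms" rows collected by your monitor.
import csv
import datetime
import statistics

LOG_FILE = "response_times.csv"  # placeholder

def mean_for_day(day):
    """Average response time of all samples recorded on the given date."""
    samples = []
    with open(LOG_FILE) as f:
        for timestamp, response_ms in csv.reader(f):
            if datetime.datetime.fromisoformat(timestamp).date() == day:
                samples.append(float(response_ms))
    return statistics.mean(samples) if samples else None

today = datetime.date.today()
same_day_last_week = today - datetime.timedelta(days=7)

current, baseline = mean_for_day(today), mean_for_day(same_day_last_week)
if current and baseline:
    change = (current - baseline) / baseline * 100
    print(f"today: {current:.0f} ms, last week: {baseline:.0f} ms ({change:+.1f}%)")
```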


For measuring browser performance over long periods, you really should take advantage of synthetic test agents so that you have enough consistent data to draw conclusions from, even when real-user traffic is low. It doesn’t matter if synthetic performance varies from real-user performance; it’s the relative performance that counts here, not the absolute performance.

Understanding your site’s overall performance is extremely helpful, but if you need to dig deeper into your monitoring results to solve discovered performance problems, you’ll need a lot more information. It wouldn’t hurt to have information from the client’s browser available to check whether JavaScript execution or XHR requests are contributing to response time.
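One way to get at that client-side data, sketched here under the assumption that you’re already driving a browser with Selenium, is the W3C Navigation Timing API (the URL is a placeholder):

```python
# timing_breakdown.py -- a sketch of pulling client-side timing data via the
# Navigation Timing API from a Selenium-driven browser; URL is a placeholder.
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/")
    # JSON round-trip ensures the timing object serializes to a plain dict.
    timing = driver.execute_script(
        "return JSON.parse(JSON.stringify(window.performance.timing))")
    # A few deltas that hint at where response time is going.
    print("network  :", timing["responseEnd"] - timing["requestStart"], "ms")
    print("DOM ready:", timing["domContentLoadedEventEnd"] - timing["responseStart"], "ms")
    print("full load:", timing["loadEventEnd"] - timing["navigationStart"], "ms")
finally:
    driver.quit()
```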

Synthetic Clients and Real Users: A Game-Winning Combo

All three aspects of performance monitoring (availability, functionality, and performance) require the use of both real-user data and data from synthetic clients. Data from synthetic sources is helpful for identifying performance trends over long periods. It doesn’t matter if synthetic clients perform better or worse than real clients—what matters are performance changes experienced by synthetic agents (which is why you need continuous data). To improve the user experience of your site and see how your customers are using your website, you absolutely must have access to real-user data.

Conclusion

Website performance is something you should focus on only after you’ve confirmed that your site is available and functioning correctly. In the words of Donald Knuth, “Premature optimization is the root of all evil.”




Published at DZone with permission of Martin Goodwell, DZone MVB.
