Web performance is a growing field within the greater operations community. This year it gained its own WebPerfDays – modeled after the highly regarded DevOpsDays – both of which followed the main Velocity conference. You couldn’t attend a #webperf session without hearing some discussion of the evolution of performance benchmarking. In the “early” days, web performance was measured using clean-room markers such as synthetic tests, time-to-first-byte, and onLoad events. If this year’s sessions are a sign of things to come, performance benchmarking will soon focus on the user experience. This isn’t to say that the previous benchmarks were wrong; rather, it shows we’re gaining a broader perspective on web performance.
Mike Brittain, Director of Engineering at Etsy, reminded us to focus on the end user experience. At Etsy, features such as ads, login systems, and social sharing can fail to load without degrading the user’s primary experience. The only features that must load (and can therefore block loading) are the product itself and the “Add to Cart” button; everything else is tertiary. In this way, Etsy has made sure that the features core to its users’ experience are also core to its overall business.
Patrick Meenan, of Google and webpagetest.org, took another look at the user experience argument. In his talk, Patrick showed how an arbitrary marker such as onLoad is insufficient for measuring performance, using a side-by-side filmstrip view of Amazon and Twitter. As you can see below, users have an opportunity to engage with Amazon much sooner than with Twitter. Even though the onLoad event fires at roughly the same time on both sites, the user experience is drastically different.
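The gap Patrick highlighted is easy to express in numbers. Here’s a minimal sketch – with made-up timestamps, not real Amazon or Twitter data – showing how two pages with identical onLoad times can differ wildly in when a user can first engage:

```javascript
// Sketch with hypothetical timings: two pages whose onLoad event fires
// at the same moment, but whose first usable render differs drastically.
// On a real page these numbers would come from the Navigation Timing
// API (window.performance.timing) or a filmstrip tool like WebPageTest.
function idleGapMs(timing) {
  // How long the page is already usable before onLoad finally fires.
  return timing.loadEventStart - timing.firstUsableRender;
}

var storeLike = { firstUsableRender: 1200, loadEventStart: 4000 };
var feedLike  = { firstUsableRender: 3800, loadEventStart: 4000 };

console.log(idleGapMs(storeLike)); // 2800 ms of usable page before onLoad
console.log(idleGapMs(feedLike));  // 200 ms of usable page before onLoad
```

Judged by onLoad alone, the two pages look identical; judged by when a user can act, they are nothing alike.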
In a standing-room-only panel discussion on web performance monitoring tools, our own Lew Cirne joined web performance gurus Patrick Meenan, Joshua Bixby (of StrangeLoop), Marty Kagan (of Cedexis), and Patrick Lightbody (of Neustar) for a final look at the real user versus synthetic monitoring theme. The key takeaway: there is room, and need, for both. Synthetic monitoring cannot replicate how users actually use our applications – and what matters is how they really use them, not how we think they do. On the other hand, most sites do not generate enough real user monitoring data for a statistically significant sample, so any conclusions usually come with an asterisk. As with most things, there is no single right answer, but rather some combination of both.
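That asterisk on small RUM samples comes straight from basic statistics: the standard error of a mean shrinks only with the square root of the sample size. A quick sketch with assumed numbers (not drawn from any real site):

```javascript
// Standard error of the mean: sigma / sqrt(n). With an assumed page-load
// standard deviation of 2000 ms, a small RUM sample leaves a wide margin
// of error, while a large one pins the mean down tightly.
function standardErrorMs(stdDevMs, sampleSize) {
  return stdDevMs / Math.sqrt(sampleSize);
}

// ~95% confidence interval half-width (normal approximation).
function marginOfErrorMs(stdDevMs, sampleSize) {
  return 1.96 * standardErrorMs(stdDevMs, sampleSize);
}

console.log(marginOfErrorMs(2000, 100));    // ~392 -> +/- ~392 ms
console.log(marginOfErrorMs(2000, 100000)); // ~12.4 -> +/- ~12 ms
```

A site with only a hundred real-user measurements can’t distinguish a 3.0-second average from a 3.3-second one; a site with a hundred thousand can.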
And our own Director of Product Marketing, Bill Hodak, gave a quick overview of New Relic and our Application Speed Index feature as part of O’Reilly’s Best of Velocity from the show floor.
In the end, we can all agree that what’s critical is finding the metric that matters most to you. For Twitter, time-to-first-tweet drives its performance focus. What metric is most important to your business?
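Tracking a business-level metric like time-to-first-tweet usually means recording a mark when navigation starts and another when the key content appears, then measuring the gap. In a browser you’d use the User Timing API (performance.mark()/performance.measure()); the tiny stand-in below keeps the sketch runnable anywhere, and the metric name and timestamps are purely hypothetical:

```javascript
// Hedged sketch of a custom business metric, in the spirit of Twitter's
// "time to first tweet". The marks object is a minimal stand-in for the
// browser's User Timing API; timestamps here are invented for illustration.
var marks = {};

function mark(name, timestampMs) {
  marks[name] = timestampMs;
}

function measureMs(startMark, endMark) {
  return marks[endMark] - marks[startMark];
}

mark('navigationStart', 0);
mark('firstTweetRendered', 1850); // hypothetical timestamp

console.log(measureMs('navigationStart', 'firstTweetRendered')); // 1850
```

Whatever your equivalent of “first tweet” is – first product image, first search result, “Add to Cart” clickable – that’s the moment worth measuring.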
Can’t wait to see you at Velocity 2013!