Increase Performance in Cross-Browser Testing With Zero Effort — Here’s How
Did you know that today you can extract more data from your existing testing practice…with zero additional effort?
Introduction
Are you used to getting a certain amount of data from your testing practices? Did you know that today you can extract more data from your existing testing practice, with zero additional effort? This all plays into the shift-left movement, which aims to deliver insight earlier and more easily. When thinking about shifting left, you should consider two questions:
1. What new insights can I gain earlier?
2. How easy is it to implement?
Shifting performance activities left is top of mind for many engineering teams. The reason for this trend is that late discovery of extreme application latency typically leads to either compromising on user experience in favor of time to market, or delaying the release to allow for extended rework, a very expensive task for developers and one that teams are looking to eliminate.
The Challenge
There are various reasons why performance activities are usually done late, or outside the development cycle, including team structure, an outdated perception of performance tests, or the tools being used. This article describes the motivation for shifting performance activities left and a web page timing approach to doing so.
Web Page Timing
These are page-level stats. Web page timers, defined in the W3C Navigation Timing specification, aren't necessarily new; however, they are very helpful in optimizing web content across pages and browsers. The data is extremely detailed and readily available for analysis, and almost all browsers support the API, so you don't need any special setup to collect and report these metrics.
Grabbing the page timers is fairly easy; simply leverage the following:
// 'w' is assumed to be the WebDriver instance, used here as a JavascriptExecutor.
Map<String, String> pageTimers = new HashMap<String, String>();
Object pageTimersO = w.executeScript("var a = window.performance.timing ; return a; ", pageTimers);
Here’s an example of the timers resulting from a single page load:
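If you want to inspect those values yourself, a minimal sketch (assuming pageTimersO from the snippet above comes back as a map of timing attributes to values) is to print each entry:
// Minimal sketch: print each Navigation Timing attribute and its value.
// Assumes pageTimersO (from the snippet above) can be treated as a Map.
Map<String, Object> timers = (Map<String, Object>) pageTimersO;
for (Map.Entry<String, Object> timer : timers.entrySet()) {
    System.out.println(timer.getKey() + " = " + timer.getValue());
}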
Processing the timers can be done as follows:
// 'data' is the timing map returned by the browser; keys are Navigation Timing attributes.
long navStart = data.get("navigationStart");
long loadEventEnd = data.get("loadEventEnd");
long connectEnd = data.get("connectEnd");
long requestStart = data.get("requestStart");
long responseStart = data.get("responseStart");
long responseEnd = data.get("responseEnd");
long domLoaded = data.get("domContentLoadedEventStart");
this.duration = loadEventEnd - navStart;
this.networkTime = connectEnd - navStart;
this.httpRequest = responseStart - requestStart;
this.httpResponse = responseEnd - responseStart;
this.buildDOM = domLoaded - responseEnd;
this.render = loadEventEnd - domLoaded;
Now that we’ve got the page-level timers, we can store them and drive some offline analysis:
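One lightweight option is to append each run's derived metrics to a file that can later be charted or diffed; here is a rough sketch, where the file name, the appendToCsv helper, and the fields are assumptions based on the snippets above:
import java.io.FileWriter;
import java.io.IOException;

// Rough sketch: append the derived page metrics to a CSV file for offline analysis.
// The file name, this helper, and the fields are assumptions based on the snippets above.
public void appendToCsv(String testName) throws IOException {
    try (FileWriter out = new FileWriter("page-timers.csv", true)) {
        out.write(String.join(",",
                testName,
                String.valueOf(System.currentTimeMillis()),
                String.valueOf(duration),
                String.valueOf(networkTime),
                String.valueOf(httpRequest),
                String.valueOf(httpResponse),
                String.valueOf(buildDOM),
                String.valueOf(render)) + "\n");
    }
}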
You can even decide within the test whether you want to examine the current page load time or size, and pass or fail the test based on that:
// Compare the current page load time vs. what's been recorded in past runs
public boolean comparePagePerformance(int KPI, CompareMethod method, WebPageTimersClass reference, Long min, Long max, Long avg) {
    switch (method) {
    case VS_BASE:
        System.out.println("comparing current: " + duration + " against base reference: " + reference.duration);
        return (duration - reference.duration) > KPI;
    case VS_AVG:
        System.out.println("comparing current: " + duration + " against AVG: " + avg);
        return (duration - avg) > KPI;
    case VS_MAX:
        System.out.println("comparing current: " + duration + " against max: " + max);
        return (duration - max) > KPI;
    case VS_MIN:
        System.out.println("comparing current: " + duration + " against min: " + min);
        return (duration - min) > KPI;
    default:
        System.out.println("comparison method was not defined; skipping check for current: " + duration);
        return false;
    }
}
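For reference, here is a minimal sketch of the CompareMethod enum the method above relies on, plus a hypothetical call site (the currentRun and baselineRun names are assumptions) that fails the test when the current run is more than one second slower than the baseline:
// Minimal sketch of the comparison modes used above.
public enum CompareMethod {
    VS_BASE, // compare against a stored baseline run
    VS_AVG,  // compare against the average of past runs
    VS_MAX,  // compare against the slowest past run
    VS_MIN   // compare against the fastest past run
}

// Hypothetical usage: fail the test if the current page load is more than
// 1000 ms slower than the baseline run (currentRun and baselineRun are assumptions).
boolean tooSlow = currentRun.comparePagePerformance(
        1000, CompareMethod.VS_BASE, baselineRun, null, null, null);
org.junit.Assert.assertFalse("Page load regressed beyond the 1s KPI", tooSlow);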
Web Page Resource Timing
So far, we've been talking about page-level timing. Web page resource timing is a more in-depth review of both your own code and any third-party code you are using. This is good data because you can detect latency in page performance across any page and any browser, and already get an indication of whether the issue relates to DNS lookup, content lookup, download, etc.
In reality, when you're doing this in-cycle, the big changes will come from the content being downloaded: large images downloaded to small screens over cellular networks, downloads of non-compressed content, repeated downloads of JS or CSS, etc.
Expert Tip:
How can developers get immediate, actionable insight to optimize page performance? This is where the Resource Timing API comes into play. It provides great insight about every object the browser requests: the server, timing, size, type, etc.
Again, to obtain access to the resource timing entries, all that needs to be done is the following:
// Grab the W3C Resource Timing entries, one per object the browser requested.
List<Map<String, String>> resourceTimers = new ArrayList<Map<String, String>>();
ArrayList<Map<String, Object>> resourceTimersO = (ArrayList<Map<String, Object>>) w.executeScript("var a = window.performance.getEntriesByType(\"resource\") ; return a; ", resourceTimers);
And here’s an example of the data that is available. Lots of good stuff in here:
Each page would have a long list of resources like the above. You can summarize all the objects into types and produce a summary of totals and some distribution stats:
Below, for example, one can summarize the resources by type for each execution:
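Here is a rough sketch of how such a per-type summary could be produced, assuming the resourceTimersO list from the snippet above and the standard initiatorType, transferSize, and duration fields on each entry:
// Rough sketch: summarize resource entries by type (count, total bytes, total duration).
// Assumes resourceTimersO is the list returned by the resource timing snippet above.
Map<String, long[]> byType = new HashMap<String, long[]>(); // type -> {count, bytes, millis}
for (Map<String, Object> entry : resourceTimersO) {
    String type = String.valueOf(entry.get("initiatorType"));
    long bytes = ((Number) entry.getOrDefault("transferSize", 0)).longValue();
    long millis = ((Number) entry.getOrDefault("duration", 0)).longValue();
    long[] totals = byType.computeIfAbsent(type, t -> new long[3]);
    totals[0]++;
    totals[1] += bytes;
    totals[2] += millis;
}
byType.forEach((type, totals) -> System.out.println(
        type + ": " + totals[0] + " items, " + totals[1] + " bytes, " + totals[2] + " ms"));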
Or finally, simply gain access to all the resources directly:
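Walking the raw list itself is just as simple; a rough sketch, under the same assumptions as above, printing the name, type, duration, and size of every requested object:
// Rough sketch: dump every resource entry the browser reported.
for (Map<String, Object> entry : resourceTimersO) {
    System.out.println(entry.get("name") + " [" + entry.get("initiatorType") + "] "
            + entry.get("duration") + " ms, " + entry.get("transferSize") + " bytes");
}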
Execution Time Comparison/Benchmarking
So far, we've gotten access to the raw data and conducted some level of analysis with it. At the beginning of this article, we defined shift left as "deliver insight, early and easily." Now, how about this: given a web page, we set a "baseline," and from then on, on every execution, we measure the responsiveness, provide a pass/fail, and produce a full comparison of the current page data vs. the "baseline." Well, with a little code, that's possible too:
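Here is a rough sketch of what the comparison step could look like, where WebPageTimersClass and its fields are the assumptions carried over from the earlier snippets:
// Rough sketch: report the per-metric delta of the current run vs. a stored baseline.
// WebPageTimersClass and its fields are assumptions based on the earlier snippets.
public void printDeltas(WebPageTimersClass baseline, WebPageTimersClass current) {
    System.out.println("page load delta (ms): " + (current.duration - baseline.duration));
    System.out.println("network delta (ms): " + (current.networkTime - baseline.networkTime));
    System.out.println("request delta (ms): " + (current.httpRequest - baseline.httpRequest));
    System.out.println("response delta (ms): " + (current.httpResponse - baseline.httpResponse));
    System.out.println("DOM build delta (ms): " + (current.buildDOM - baseline.buildDOM));
    System.out.println("render delta (ms): " + (current.render - baseline.render));
}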
Here's the page-level summary of the current vs. "baseline" run:
There isn't a material difference in the number of items, but you can see that the page load time is almost 3 seconds longer. At first look, it seems the rendering time is the part that grew.
Now, here's the comparison of the per-type summaries:
This table compares the total items, size, and duration by type against the baseline. It's not surprising that there aren't any new types of content introduced in this page, nor massive changes in the number of elements per type, given that the last run was just a few days earlier.
Still, despite the fact that in total there is only one additional image, it appears images drive the most latency in loading the page.
To take a closer look, here are the images with the largest load time.
Interestingly, images that were already part of the older page also took longer:
Putting It All Together
As we've seen, it's possible to examine page responsiveness across different browsers. It's also possible to compare the page and resource metrics against a previous run to extract actions for optimization or to detect a defect. The nice thing is that this can be done for any test: smoke, regression, even production. It does not require any additional infrastructure, as it simply runs within the target browser. Results can be embedded into your reporting solution and, overall, performance can be part of your agile quality activity.
Code Reference
The code used for this project is available as open source at https://github.com/AmirAtPerfecto/WebTimers
Follow-Up Projects
- More Performance Activities
- HAR file: In addition to direct analysis of the page resources and metrics, it is also possible to analyze the HAR file. Unfortunately, it doesn’t seem like there are API-based analyzers readily available (most are web UI-based tools) but perhaps one can be built.
- OCR-Based Analysis: Some tools (including Perfecto) offer visual-based analysis to measure actual content render time. The accuracy of such measurement isn't as high and the details available aren't as easily translatable into action. Still, it's a good method to measure user experience performance across screens. The OCR approach also works well for native apps.
- Other tools: Google PageSpeed, YSlow, etc.
- Other
- Security: Similar to the performance analysis, given that the servers and resources downloaded are detailed in the logs, it should be possible to identify the set of servers and countries contributing to this web page. Possibly not all of them are acceptable; that would be good to know, and it is easy to add to the agile cycle.