
The Simple Anatomy of a Good Performance Report

Write your best performance report with these simple guidelines.

By Dragos Campean · Feb. 13, 2019


Performance and load testing are not just about being able to create and run scripts that generate massive levels of strain on an application. This is just the first, and arguably more fun, part of the equation.

The second, and possibly more difficult, part would be delivering the results and observations generated from the tests.

This article aims to explore the process of documenting our findings and to offer guidelines to help you define your report from start to end.

Why Would Reporting Be More Difficult?

There are three important reasons that led me to this conclusion.

1. Interpreting the Results Is Not as Straightforward as It Looks

Showing an automatically generated graph of high load times is often not enough to draw a relevant conclusion. Ideally, further investigation should be done to isolate where the issues are coming from.

It could be that the maximum number of connections the server has been configured to accept has been reached. Or maybe the server bandwidth is not large enough.

The possible causes and correlations are diverse.

2. Not Everyone Is as Proficient at Reading a Result Log as the Person Doing the Testing

While the result logs might seem clear to us, a person with a different area of expertise might look at them and see nothing relevant.

That’s why they need to be translated from the cluttered rows and columns of a .jtl or other log format into human-readable, modular elements, each ideally depicting no more than a single KPI.

There are various tools for doing this, and with an application like Apache JMeter, you are offered a variety of graphs. They are not the best-looking graphs, though, so if you are not happy with them, you can always generate your own custom graphs. As long as you have the log file, you can create almost any graph from the data (more on this in a bit).
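For example, here is a minimal sketch of how such a custom aggregation could look in Python with pandas, assuming JMeter was configured to write its results as CSV with the default header and column names (timeStamp, elapsed, label, success); the file name is hypothetical:

# aggregate_jtl.py - condense a JMeter CSV result log into one row of KPIs per request
# Assumes the default JMeter CSV columns: timeStamp, elapsed, label, success
import pandas as pd

results = pd.read_csv("results.jtl")  # hypothetical log file name

summary = results.groupby("label").agg(
    samples=("elapsed", "count"),
    avg_response_ms=("elapsed", "mean"),
    p90_response_ms=("elapsed", lambda s: s.quantile(0.90)),
    error_pct=("success", lambda s: 100.0 * (s.astype(str).str.lower() != "true").mean()),
)

print(summary.round(2))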

3. We Don’t Ask Enough Questions Before Starting the Tests

This scenario often leads to information gaps between what the client wants and what we think they want. The outcome, in this case, is that we are not specifically focused on certain aspects of performance; we try to aggregate everything.

Then, when we start writing the report, we are unsure of what information should be present. This leads either to an overload of data and KPIs jammed into a lengthy document that no one will probably bother to read to the end, or, at the opposite extreme, to assumptions about what we think is important that cause us to miss a KPI that should have been showcased.

To avoid this, make sure to ask as many clarifying questions as possible before starting the tests. Maybe the client just wants to know the maximum throughput of the server, and you generate 10 graphs for 10 different KPIs instead of just one.

Moving Forward

Now that you have a general picture of the things we should keep in mind while working on a performance testing project, we can fast-forward to the main part and explore a few ideas and guidelines that could help create a good report.

I’m usually an advocate of simple and clean designs. The document should not be anything too flashy.

Also, all sections are intended to be as modular as possible, mostly for clarity; it’s easier to follow a paragraph’s train of thought if that paragraph focuses on a single point.

The modules that I have defined, or the foundation on which I base most of my reports, are as follows:

  1. Cover
  2. Table of contents
  3. Overview and scope
  4. Glossary of terms
  5. Run configuration
  6. Specific test runs and analyses
  7. Suggested next steps
  8. Conclusion

Now, let's go through each of them and elaborate on what information each module needs.

The Cover

You know that the cover you are looking at is good if, at first glance, you can already paint a mental picture of what the document is about and whether it’s an official document, a book, or a magazine.

In the simplest form, this first page would contain a title, the logo of your company or your personal logo in the header, and some other relevant information like the author, company name, and the date that the report is made available to the client.

That’s it! No short descriptions or pompous titles. At most, a background image that hints at something related to performance testing, if you’re feeling brave, but in my opinion, even that is a bit much.

Here’s how I designed the cover of a few reports following the above guidelines:

Performance Report Cover


Table of Contents

This section is equally important and should not be neglected.

Even if some people like to dive directly into the information, a table of contents should be present for every non-fiction document.

There are two main reasons a TOC is so useful.

The first is that the reader gets a mental map of what information they will encounter, gaining insight into the focus of the document.

The second would be that they can skip to specific sections that are of interest to them, thus making the navigation through the file a lot easier.

Here is a sample for the TOC page:

Performance Report Table of Contents

Overview and Scope

This section states the purpose of the document and of the tests.

This not only clarifies what specific objectives we are trying to achieve, but it also makes our work more relatable — the client feels that we are working together towards the same goal.

The data you include here can vary from one project to another, but some general objectives to include might be:

  • Identify the lowest load level at which users start experiencing loading problems
  • Determine the throughput of the server under real user-generated load
  • Isolate application modules which cause bottlenecks
  • Determine the % of errors generated under a load of n concurrent users
  • Define some benchmarks to compare future releases with

Glossary of Terms

Take into account that the clients reading the report are not always technical people.

That is the main reason for this section: to explain all the technical jargon used throughout the report.

This, again, can be different from one project to another and is correlated to your vocabulary, but there are probably some terms that will be encountered in most reports.

A few examples of such terms are:

  • AUT = application under test
  • Virtual user = an instance of a semi-automated program meant to simulate the behavior of a real user inside the AUT (from the server’s perspective). Each virtual user has its own separate cookies and other session-related data.
  • Threads = virtual users
  • Think time = a pause that simulates the actual time a user pauses between actions in the application (e.g., the time it takes a user to scroll through a web page before navigating to the next screen is think time).
  • User flow = a dynamic script that the virtual user will follow. The user flow is composed of a set of actions.
  • DB = Database
  • UI = User interface

The list can go on and on, but you get the point. The technical information you mention in the document should have a brief description here.

Run Configuration

This is the paragraph where I break the rule regarding modularity that I mentioned at the beginning of the article.

This is because the elements found under the category of ‘configuration’ are interconnected and should, in my opinion, be grouped together. This does not mean it’s the only way to go; feel free to create separate sections for each of the items presented below if you see fit:

Tested Environment

Here, you mention general information about the environment your tests are aimed at. It could be a staging environment or temporary infrastructure, which will be adopted if the tests are considered ‘passed,’ or it could be that you will run the tests directly against the production environment.

Execution Environment

This segment should describe the machine used to generate the load, or whatever setup you use. It can be a simple setup of multiple machines in your network, a more complex Docker-based infrastructure, or even your local machine if the application is small.

A robust setup inspires trust and lowers the risk of the client believing that an artificial bottleneck was created on the test machines and that the results are inaccurate.

Tools Used

The way I see it, there are a few main categories of tools you would use in a scenario like this, each for a different part of the process:

  • For generating the load: it can be the popular JMeter, the cloud-based load testing tool Blazemeter, Gatling, or any other tool you prefer.
  • For monitoring the server resources: this can vary from the basic htop on a Unix server or Task Manager on a Windows server to costlier options like Nagios or New Relic. Some hosting providers, like Google, offer custom monitoring options for their clients.
  • For reporting: I would include a document editor in this category and, as a bare minimum, maybe a graph generator.
  • Webpage analyzers: other tools you could use to gain insight and possible suggestions related to application performance. These tools generate suggestions for improving the overall performance of a webpage, and they range from browser extensions like YSlow to tools like GTmetrix or Google’s PageSpeed Insights.

Apart from the monitoring tools already in place on the servers, you have maximum flexibility to use whatever suits your needs best.

Script Execution Settings

Here, you would include any particularities relevant to how your tests were run.

For example, some settings that might be relevant are:

  • How cache and cookies are managed per thread
  • Details about how the ramp-up is calculated (see the sketch after this list)
  • What ‘think times’ were used
  • The execution time
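
As a quick illustration of the ramp-up point, here is a minimal sketch; the user count and ramp-up period below are made up for the example:

# rampup.py - how thread start times follow from a linear ramp-up
TARGET_USERS = 100      # hypothetical number of concurrent users
RAMP_UP_SECONDS = 300   # hypothetical ramp-up period

# With a linear ramp-up, one new virtual user starts every
# RAMP_UP_SECONDS / TARGET_USERS seconds.
interval = RAMP_UP_SECONDS / TARGET_USERS
start_offsets = [round(i * interval, 1) for i in range(TARGET_USERS)]

print(f"A new user starts every {interval:.1f} s")
print("First five start offsets (s):", start_offsets[:5])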

I would also include what actions the virtual users simulate and what specific requests are nested beneath each ‘action.’ This can be in the form of text or a screenshot from your load testing app, if the GUI is suggestive enough.

A short example of such a user flow description is:

-> Access app
[POST] /api/MainPage
[GET] /api/LoginMessages

-> Tap ‘Enter Account’
[POST] /api/user/CheckState

-> Login
[POST] /api/user/Login
[POST] /api/user/Authenticate
[POST] /api/user/CheckState

-> Reserve Car
[POST] /api/car/CheckState
[POST] /api/car/StartReservation

-> Start Ride
[POST] /api/car/StartRide

-> Finish Ride
[POST] /api/car/StopRide
[POST] /api/car/CheckState

-> Logout
[POST] /api/user/Logout
[POST] /api/LoginMessages

NOTE: These are real actions from a real application, with endpoint paths and names modified for confidentiality reasons.
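
To make the flow above more concrete, here is a minimal sketch of how a single virtual user could walk through the first few actions using Python and the requests library; the base URL, payload, and think-time range are hypothetical, and the endpoints are the anonymized ones from the example above:

# single_user_flow.py - one virtual user walking the start of the example flow
import random
import time
import requests

BASE = "https://staging.example.org"  # hypothetical tested environment

def think(low=2, high=8):
    # Simulate the time a real user spends reading or scrolling
    time.sleep(random.uniform(low, high))

session = requests.Session()  # each virtual user keeps its own cookies

# -> Access app
session.post(f"{BASE}/api/MainPage")
session.get(f"{BASE}/api/LoginMessages")
think()

# -> Tap 'Enter Account'
session.post(f"{BASE}/api/user/CheckState")
think()

# -> Login (hypothetical payload)
session.post(f"{BASE}/api/user/Login", json={"user": "test", "password": "***"})
session.post(f"{BASE}/api/user/Authenticate")
session.post(f"{BASE}/api/user/CheckState")

# ...the remaining actions (reserve car, start ride, etc.) follow the same pattern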

Specific Test Runs and Analyses

This is where you would include information about all test runs. Each run will generate a specific set of results and observations, which are aggregated here.

In my view, each run has a mandatory set of three topics: general conclusions, client-side KPIs, and server-side KPIs.

General Conclusions

These are based on our observations of the system during testing and should include all data that might be relevant to the context. These conclusions are detailed further in the next two sections.

Client-Side KPIs

They represent the data collected from our load testing tool. This might come in various forms, depending on the tool.

If you use JMeter, for example, there are a variety of plugins specifically designed for generating graphs. If you don’t mind the slightly archaic look, then these will probably be a good fit for your report. Another alternative is to generate a more modern, interactive HTML report.

If you decide to use other tools, they most likely have some graph-generating module (see the Blazemeter graphs or how Gatling charts look).

Of course, you can always generate your own custom graphs with well-known tools like Microsoft Excel, Google Sheets, or Apple’s Numbers app. The advantage of doing this is full control over the graphs; the disadvantage is a larger time investment, especially if you’re not familiar with these tools.
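
If you prefer scripting over a spreadsheet app, a minimal sketch with Python and matplotlib can already produce a presentable bar chart; the request names, values, and color below are placeholders:

# custom_graph.py - bar chart of average response times per request
# Placeholder labels and values; replace them with your aggregated results
import matplotlib.pyplot as plt

labels = ["Request A", "Request B", "Request C"]  # hypothetical request names
avg_ms = [420, 1350, 780]                         # hypothetical averages (ms)

plt.figure(figsize=(8, 4))
plt.bar(labels, avg_ms, color="#2a6fdb")          # swap in your company color
plt.ylabel("Average response time (ms)")
plt.title("Average Response Times")
plt.tight_layout()
plt.savefig("avg_response_times.png", dpi=150)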

Here are two examples of graphs I’ve made with Apple’s Numbers app, because I wanted them to match the company colors (the request names are blurred):

Custom Graph — Average Response Times


Custom Graph — No. of Errors

Server-Side KPIs

The more advanced tools mentioned earlier, like Nagios and New Relic, offer a complete experience when it comes to tracking the KPIs of your application and server. The graphs they generate can simply be included in this section with a short description.

There are other tools with the same function, some of them free, like the JMeter PerfMon plugin, but that implies installing an agent on the server, and most of the time, you will not be allowed to do that.
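
If no agent can be installed and no monitoring platform is in place, even a small script run on the server (assuming you are at least allowed to run that) can capture the basics. Here is a minimal sketch using the psutil library, sampling once per second:

# monitor.py - sample basic server-side KPIs once per second during a test run
# Requires the psutil package (pip install psutil)
import csv
import time
import psutil

with open("server_kpis.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_pct", "mem_pct"])
    for _ in range(600):  # roughly ten minutes of samples
        writer.writerow([
            int(time.time()),
            psutil.cpu_percent(interval=1),  # blocks for 1 s while measuring
            psutil.virtual_memory().percent,
        ])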

Suggested Next Steps

The title of this section is pretty self-explanatory. Basically, together with the development team, you analyze the data gathered in the test phase and put together a set of suggestions.

These suggestions can also come from some of the page analyzer tools, which were mentioned earlier.

Some examples of such suggestions would be:

  • Images http://example.org/imageA.png and http://example.org/imageB.png are too large and can be optimized without affecting user perception.
  • There are 29 components in the page that can be minified.
  • Avoid triggering unnecessary requests when performing certain actions (e.g. [POST] /api/test/exampleRequest1 in the login screen and [POST] /api/test/exampleRequest2 in the profile screen).
  • After further optimizations have been made, a good practice is to repeat the tests and compare the results to the historical data available.

Final Conclusion

Imagine this last part as a grading area for the entire app. It’s where you label the tests ‘passed’ or ‘failed’ based on all observations that were made thus far.

If you have multiple runs, let’s say each with its own set of optimizations and server configurations, the ‘conclusions’ area would hold a separate paragraph for each separate run. This provides historical data as well as an overview of the progress or decline of the application performance.

Other Elements to Consider

The font: aim for something simple and professional; avoid the fonts presented in this list.

Your company logo (in the header): use a high-quality PNG. You don’t want a pixelated image with a background slightly different from the rest of the document pinned to every page.

Spell check: read and reread the text; handing over an official document with grammatical errors reflects poorly on your professionalism.

With this, your document should be complete.

You might have noticed that up until now, I’ve only talked about creating a merely ‘good’ report.

This was done on purpose, mainly because if you adopt some of the ideas presented here, odds are that the resulting report will reach the level of ‘good,’ and that is, most of the time, sufficient.

The specific additions and customizations you implement, together with these general guidelines, have the potential to transform the resulting document from just ‘good’ to great.

Be sure to use your creativity in this regard, and if you have some cool ideas or improvement suggestions for my report template, please leave a comment below. I’m sure there’s plenty of inspiration ready to be exploited.

Thanks for reading!


