How to Write Good Bug Reports and Gather Quality Metrics Data
Learn how to write better bug reports and gather quality metrics.
One of the essential tasks every QA engineer should master is logging bug reports properly. Many people are unsure what information such reports should include, which is why I decided to write an article covering the crucial fields of an issue report. We will also look into bug statuses and an upgraded status workflow. I say 'upgraded' because it is a bit more complicated than usual, and I will explain why I added the extra statuses. You will also find information about bug taxonomy fields, which you can later use to calculate various quality metrics for improving the QA process. I will write a dedicated article about quality metrics and how to calculate and visualize them.
After the initial review process, we give recommendations for improving a client's QA process. Sometimes they include changes to how the automated tests are executed, or even complete refactoring. However, there are times when, before upgrading the test automation, we need to improve the manual functional testing first. Below you will read about the bug tracking strategy we proposed to one of our clients.
Bug Statuses
For Triage – once a bug is created, it goes into this status. Each day, a group of senior people (senior QA + senior dev) meets to go through all bugs for triage and decide whether each one is really a bug. If they agree that the problem is a bug, they decide what happens with it: whether it will be analyzed immediately, archived, or deferred.
Analysis – before the actual fixing comes the analysis phase. A developer reads the issue description thoroughly and starts debugging or searching for the problem. This is when various issue fields are populated, such as the reason for the bug (root cause analysis) and the other bug taxonomy groupings, which we later use for various reporting measurements. If the problem is located, the bug goes to Fixing or Deferred status, depending on the time needed for the fix. If the problem cannot be reproduced, we start to monitor it (moving it to Monitoring status) and, if nobody reproduces it for a while, move it to Cannot Reproduce status.
Deferred – we set bugs to this status if they will be considered for fixing later, i.e., they are not high priority and don't need to be fixed as soon as possible. When we plan new development for a certain feature, we can review all of its deferred bugs before planning the work and decide whether to fix some of them.
Archived – we agreed that the problem is a real bug, but we don't set it to Deferred because we decided that, no matter how many times we return to it, it won't be important enough to fix. However, we keep track of all these bugs for logging purposes; they help us decrease the number of duplicate bug reports.
Communicate — some logged observations are problematic, but it is hard to decide whether they are real issues or not. Most often there is nothing related to them in the documentation or requirements. If we decide that the problem is worth the time to investigate, we first need to ask the product owner what he thinks and how the feature should behave.
Cannot Reproduce — when the analysis process starts, it is often hard to reproduce the more complex issues, even if they are perfectly described in the report. We set this status for reporting purposes only. Usually, such bugs have first been monitored for some time, if we agreed that they were worth further analysis.
Monitoring – if a bug cannot be reproduced during the analysis process so that its root cause can be found, we can agree to spend some time monitoring whether it gets reproduced by someone. If, for example, two weeks pass and nobody can reproduce it, we move it to Cannot Reproduce status.
Reopened – before logging a bug, we check whether such a bug has already been logged. Sometimes we find that it exists but is in Done status. In that case, instead of logging a new bug, we move the existing one to Reopened status, which saves us the time to populate all the fields and, more importantly, feeds some of our quality measurements.
Fixing – we set an issue to this status once the root cause is clear and we agree that there is enough time to spend on fixing it.
Code Review — once the bug is fixed and tested locally by the developer, he makes a pull request and asks a colleague to perform a code review.
Integration Testing — once the bug passes the code review process, the fix is deployed to a DEV environment, where it can be retested/regression tested by the bug reporter (in most cases, QA).
Failed Testing — during the retesting phase, we may observe that the bug was not actually fixed. In that case, it is not returned immediately to Fixing but is instead set to Failed Testing status, which we again use to gather some quality metrics.
Integration Testing — tested again on the TEST environment, integrated with the other stories under development.
For Deployment — once we verify that the bug is fixed, we can deploy it to LIVE.
Bug Workflow
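The statuses above chain together into a workflow. Here is a minimal Python sketch of it as a transition table; the exact set of allowed transitions is my interpretation of the descriptions, not a definitive model of the tracker's configuration:

```python
# Assumed transition table for the statuses described above.
ALLOWED_TRANSITIONS = {
    "For Triage": {"Analysis", "Archived", "Deferred", "Communicate"},
    "Analysis": {"Fixing", "Deferred", "Monitoring", "Cannot Reproduce"},
    "Monitoring": {"Cannot Reproduce", "Analysis"},
    "Deferred": {"Analysis"},
    "Fixing": {"Code Review"},
    "Code Review": {"Integration Testing"},
    "Integration Testing": {"Failed Testing", "For Deployment"},
    "Failed Testing": {"Fixing"},
    "For Deployment": {"Done"},
    "Done": {"Reopened"},
    "Reopened": {"Analysis"},
}

def move(current: str, new: str) -> str:
    """Validate a status change against the workflow and return the new status."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new
```

Encoding the workflow like this makes it easy to reject illegal jumps (for example, straight from Fixing to For Deployment, skipping code review and testing).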
Bug Fields
Main Fields
Title — a short, meaningful summary of the problem.
Description — a full description of the problem.
Actual Results — the observed results of the test.
Expected Results — the expected behavior of the tested functionality.
Steps to Reproduce — all steps needed to reproduce the issue: log in, click the forecast button, etc.
Environment — the environment on which the problem was observed. Give all relevant details about the setup if required.
Assignee — who will be responsible for analyzing and fixing the issue.
Reporter — who reported and logged the bug.
Attachments — attach a screenshot if the problem is UI related. If the bug involves a complex workflow, record a video. You can also add any relevant dumps or other files.
Priority — the level of (business) importance assigned to an item, e.g., a defect; the urgency to fix it.
Priority Levels
In decreasing order of importance, the available priorities are:
1. Blocks development and/or testing work; production cannot run; causes crashes, loss of data, or a significant memory leak. Any problem that prevents the project from being built automatically gets this priority. It requires immediate action.
2. Major or important loss of function that needs urgent action.
3. Low or less important loss of function that does not need urgent action.
4. A small problem with no functionality impact.
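For tooling, the four levels can be modeled as an integer enum. A small sketch; the names (BLOCKER, MAJOR, MINOR, TRIVIAL) are my own labels for illustration, only the numeric levels 1-4 come from the list above:

```python
from enum import IntEnum

class Priority(IntEnum):
    """Illustrative model of the four priority levels; names are assumed."""
    BLOCKER = 1   # blocks development/testing, crashes, data loss
    MAJOR = 2     # major or important loss of function, urgent
    MINOR = 3     # less important loss of function, not urgent
    TRIVIAL = 4   # small problem, no functionality impact

# Lower value means more urgent, so a plain sort surfaces the
# most pressing issues first:
backlog = [Priority.TRIVIAL, Priority.BLOCKER, Priority.MAJOR]
assert sorted(backlog)[0] is Priority.BLOCKER
```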
Severity — the degree of impact that a defect has on the development or operation of a component or system.
Severity Levels
| Severity | Why should QA assign this severity? | Example | What should QA do? | What should DEV do? |
| --- | --- | --- | --- | --- |
| Blocking | | Functional/configuration issues that lead to: yellow screen, missing module, etc. | Log a bug or send an email + verbal notification | Resolve as soon as possible |
| Critical | The main part of a feature is not working as expected | Core functional/unusable UI issues that lead to: form is not submitting, Buy button not working, form is not syncing data, etc. | Log a bug + notification | Give high attention |
| High | It is not recommended to release without this fixed | Functional/broken UI issues: UI differences from the design, validations, etc. | Log a bug | Do this after all Blocking and Critical bugs |
| Medium | Good to be fixed if we have time | Minor functional/minor UI issues: off the happy path scenarios, some responsive issues for specific resolutions/browsers, etc. | Log a bug | Do this if the story's estimated time is not reached |
| Low | | URL with . | Log a bug | Check this out |
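Taken together, the main fields plus priority and severity map naturally onto a simple record. A minimal sketch as a Python dataclass; the class and attribute names are illustrative, not taken from any particular tracker, and it assumes Python 3.9+:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Illustrative record holding the main bug report fields."""
    title: str                      # short, meaningful summary
    description: str                # full description of the problem
    actual_results: str
    expected_results: str
    steps_to_reproduce: list[str]
    environment: str                # e.g. "TEST env, Chrome 120, Windows 11"
    assignee: str
    reporter: str
    priority: int                   # 1 (highest urgency) .. 4 (lowest)
    severity: str                   # Blocking / Critical / High / Medium / Low
    attachments: list[str] = field(default_factory=list)
```

Making every field except attachments required nudges reporters to fill in the whole template instead of logging one-line bugs.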
Bug Taxonomy Fields
All fields below help us to categorize the bugs, providing meaningful statistics and metrics which can be later used to improve the overall quality/development processes.
Root Cause Analysis — after the initial analysis, the developer leading the fix should describe what he found, i.e., the actual reason for the observed behavior.
Root Cause Reason — a category for grouping bugs by the reason behind them, such as missing requirements, unclear requirements, missed during code review, not enough knowledge about a specific technology, or something else.
Later, grouping by this field can help us spot problems in certain areas of our workflow, for example code review, the requirements phase, or testing.
Root Cause Type — a category for grouping by a more technical type: DB, UI components, API integration, and so on.
Grouping by this field can later help us see whether problems are related to a specific technical area where we can refactor or do more training.
Bug Appearance Phase — describes in which phase of the process the bug appeared: requirements, code review, DEV testing, integration testing, system tests, etc.
Later, we will use this field to see which phase helps us catch the most bugs and rethink whether we should invest in certain practices or not.
Bug Origin — records whether the bug was caught during our internal processes or reported by clients.
It is used to calculate an important metric: the ratio of PROD-reported vs. internally caught bugs. It usually should be less than 5 percent.
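That ratio is straightforward to compute once every bug carries an origin field. A quick sketch; the dictionary shape and the "PROD"/"Internal" origin values are assumptions for illustration:

```python
def escape_ratio(bugs: list[dict]) -> float:
    """Share of bugs that escaped to production out of all logged bugs.
    The article's guideline is to keep this under 5 percent (0.05)."""
    if not bugs:
        return 0.0
    prod = sum(1 for b in bugs if b["origin"] == "PROD")
    return prod / len(bugs)

# Hypothetical sample: 2 client-reported bugs out of 40 total.
bugs = [{"origin": "Internal"}] * 38 + [{"origin": "PROD"}] * 2
assert escape_ratio(bugs) == 0.05  # right at the 5% threshold
```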
Functional Area — specifies the area of the product in which the bug appeared: ticket submission, order creation, invoice generation, etc.
Once we have more data, we can see in which areas most of the bugs appear. We can then optimize: for example, the estimation process can include refactoring work for the code in those areas if that is the reason for the issues.
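All of the taxonomy fields above lend themselves to the same simple aggregation: count bugs per category value and look at the largest buckets. A small sketch using the standard library; the field names and sample values are hypothetical:

```python
from collections import Counter

def group_by(bugs: list[dict], field_name: str) -> Counter:
    """Count bugs per taxonomy value (root cause reason,
    appearance phase, functional area, ...)."""
    return Counter(b.get(field_name, "Unknown") for b in bugs)

# Hypothetical sample data with two of the taxonomy fields.
bugs = [
    {"root_cause_reason": "Missing requirements", "functional_area": "Invoicing"},
    {"root_cause_reason": "Missed during code review", "functional_area": "Invoicing"},
    {"root_cause_reason": "Missing requirements", "functional_area": "Orders"},
]
hotspots = group_by(bugs, "functional_area")
assert hotspots.most_common(1)[0] == ("Invoicing", 2)
```

The same function answers "which root cause reason dominates?" or "which phase catches the most bugs?" just by changing the field name, which is exactly the kind of grouping the metrics article will build on.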
Published at DZone with permission of Anton Angelov, DZone MVB. See the original article here.