
How to Organize a Bug Hunt

Bug hunts are one of the best ways to discover and fix vulnerabilities.


Let’s talk about bugs: the grody, disgusting, overwhelming technical glitches that cause hurdles and headaches for technical teams in countless organizations the world over. One of the best ways to deal with them is through a bug hunt. Bug hunts are exploratory tests designed to find and identify bugs and glitches in your technology so you can get rid of them quickly and efficiently; they are one of the best ways to discover a solution’s vulnerabilities so that those vulnerabilities can be fixed. Hunts can be conducted in nearly any technical environment beyond software, including websites and mobile apps.

Simple concept; effective, measurable results.

The hunts can include “attacks,” or deliberately using workflows other than the ones suggested by an application. For example, hunt teams may fill in a form incorrectly to expose errors and security vulnerabilities, or testers may enter alphabetic or special characters into a form field that’s designed to handle only numeric characters.
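To make that concrete, here is a minimal sketch of the kind of hostile-input check a hunter might run against a numeric-only field. The `validate_quantity` function and the input list are hypothetical, not from any real product or library:

```python
# Sketch of a "hostile input" check against a numeric-only form field.
# validate_quantity is a hypothetical validator for illustration only.

def validate_quantity(raw: str) -> bool:
    """Accept only plain ASCII non-negative integers, as the field intends."""
    return raw.isascii() and raw.isdigit()

# Inputs a bug hunter might try against a field meant for numbers only.
hostile_inputs = ["abc", "12a", "3.14", "-1", "", " 7", "7 ", "0x1F"]

for raw in hostile_inputs:
    # Each of these should be rejected; an acceptance here is a bug lead.
    assert not validate_quantity(raw), f"field accepted invalid input: {raw!r}"

print("all hostile inputs rejected")
```

The point of a hunt is that testers invent inputs like these on the spot, rather than following a fixed script.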

Bug hunters also use test plans and use cases to test software from the perspective of a user, all with the goal of discovering bugs that can affect the user experience. At TOPdesk, we are bug hunters: we love killing bugs when we find them, and we organize regular hunts. We’ve gotten pretty good at organizing internal hunt events to root out any bugs we find. In what follows, I’ll try to help you organize and conduct your own hunts on whatever technology platform you’re creating. For us, bug hunts are a great learning experience for everyone involved, and something our team leaders highly recommend. They also help us improve the technology we provide to the market.

Preparing Your Test Object

My colleague, Hazel Hollis, a senior software tester here at TOPdesk, suggests that the first time you plan a hunt is the hardest, primarily because it can be difficult to determine how complex to make the test. The first step to organizing a hunt: prepare the test object. Determine the ground or territory you want to hunt. Once you determine your hunting grounds, you can move to the next phase — establishing a productive environment to conduct the challenge.

Each team participating in the hunt needs an environment in which they can focus during the hunting challenge and one that allows them to get straight to work hunting. A large conference room or an auditorium works well. These areas allow you to subdivide the teams without separating each team entirely from the others; complete isolation can hurt, even stifle, the flow of the challenge.

When starting, use a stable version of your program for the test; you want to know it’s not going to fail. Next, set up a database for each hunting team that includes the basics: a range of objects, settings, and users with logins to support the user stories.

Then, ensure that each persona has a user with the correct roles and permissions, and provide login data for these. Hollis recommends letting testing teams know where they can find the test version and corresponding database. In many cases, organizations can turn these events into competitions, creating an organizationally sanctioned challenge. Doing so can make these events more fun and, ultimately, more rewarding for everyone involved. If you decide to create a bug hunt challenge, consider letting teams know about the database, logins, and details required to start the hunt beforehand, so that no valuable time during the challenge is lost getting everyone ready to go.
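The seeding step above can be sketched in a few lines. This is an illustrative example only, assuming a local SQLite database per team; the table names and personas are made up, not TOPdesk’s actual schema:

```python
# Minimal sketch of seeding one test database per hunting team.
# Schema, personas, and object names are hypothetical.
import sqlite3

def seed_team_db(path: str) -> None:
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE users (login TEXT, role TEXT)")
    con.execute("CREATE TABLE objects (name TEXT, setting TEXT)")
    # One login per persona, with the roles the user stories require.
    personas = [("alice", "operator"), ("bob", "end-user")]
    con.executemany("INSERT INTO users VALUES (?, ?)", personas)
    # A small range of objects and settings for the teams to explore.
    con.executemany("INSERT INTO objects VALUES (?, ?)",
                    [("printer-1", "default"), ("laptop-7", "loaner")])
    con.commit()
    con.close()

# One database per team, so teams don't trample each other's data.
for team in ("team-a", "team-b"):
    seed_team_db(f"{team}.db")
```

Giving each team its own copy means one team’s destructive “attack” can’t invalidate another team’s findings.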

Introduce the Test Object

When starting, prepare a demo presentation of the challenge. This should provide information about features and specifications in the system in which you are bug hunting. Consider also preparing a complementary document that contains an outline of the purpose and function of the test object. Using the pre-defined personas, you’re able to highlight the common user stories and cases in which a solution is being used. However, Hollis recommends not making specifications too detailed or the hunters may get bogged down in them when hunting through scenarios.

Provide Ample But Not Too Much Documentation

During a bug hunt, don’t get too caught up in specifications, as this might limit your testers’ freedom and creativity in approaching the test object. Setting up too many specifications for your hunting teams to follow can cause a good deal of confusion and burden, and it can reduce testing to simply executing the specifications. Instead, consider providing user stories and bullet points that describe what the user wants to achieve. Let your hunting teams choose their own starting point and structure, and aim to keep the description of the service and the approach to no more than two instructional pages. Anything beyond this is too much to manage and too cumbersome to be effective.

You Can Define the Scope of the Hunt

While you keep your scope narrowed and the description of the project focused, you do not necessarily need to limit the entire project’s scope. I was curious to see how the teams reacted to this ambiguity. You can always choose to intervene and limit the scope if needed (it probably will be necessary). One or two complex features are enough. For example, my documentation stated that objects appeared elsewhere in the software, but teams did not have time to look into this; in the future, I would leave these out, and perhaps even limit the personas to two. That said, be careful not to make the scope too big. Allow for slightly more than would fit in the allotted time, which means the participating teams must prioritize their test planning.

Release the Hounds!

Participating hunt teams must create a plan of attack so they can get after the bugs. They must determine what they’re going to test and how; each team creates its own plan for the bug hunt. As a facilitator, answer questions and serve as a guidepost, but don’t discuss existing bugs, test cases, or what to test. When the teams present their test plans, discuss any particularly risky areas, but don’t steer the teams toward any decisions or outcomes.

Evaluate the Take

When the hunt is done, it’s time to evaluate the bugs found. During this phase, ask teams how they went about their testing. You may wish to have each team present its approach to the group as a learning opportunity. As the product owner, take note of any issues or bugs found during testing so you can phase them out of the product.

For each bug found and identified, you can let the testers know whether the issue is already on your backlog. Any new bugs might be worth a prize to the teams that find them. You might also explain why certain decisions were made. This has the benefit that others learn about your team’s features, and that you learn new things about your own features.

Questions for Teams

During post-hunt evaluation, ask questions such as “Did you think about what happens when X?” or “Which situations have you considered?” The following questions – and others as appropriate – can help you evaluate each team’s approach to the hunt:

  • How did you prioritize your task and approach?
  • How did you decide where to start and where to ratchet down?
  • Has the team thought about paths and divergence?
  • Was risk taken into account? Why or why not?
  • How did the risk pay off, or not?
  • How did the team collaborate? Did the members sit and test together, test individually, timebox, etc.?
  • How did the teams take notes?
  • How did the team keep track of what was tested?

Keeping the Goal in Mind

Before, during, and after the hunt, keep in mind that this exercise is designed to be a productive learning experience for everyone involved. Sharing approaches, methods, and ideas lets the teams learn from each other. Concluding the bug hunt, thank the teams and follow up on the issues reported. Conduct hunts regularly – once a quarter, twice a year, or as needed based on product development.

Most bug hunts, if organized properly, take only a few hours from start to finish — they are events that you can use to build your teams. The entire challenge doesn’t have to take more than three or four total hours.

Topics:
service management, itsm, bug hunting, team building, bug, performance

Opinions expressed by DZone contributors are their own.
