
How to Organize a Bug Hunt

Bug hunts are one of the best ways to discover and fix vulnerabilities.

By Jeroen Boks · May. 06, 19 · Tutorial


Let's talk about bugs: the grody, disgusting, overwhelming technical glitches that cause hurdles and headaches for technical teams in countless organizations the world over. One of the best ways to deal with them is through a bug hunt. Bug hunts are exploratory tests designed to find and identify bugs and glitches in your technologies so that you can get rid of them quickly and efficiently; they are one of the best ways to discover a solution's vulnerabilities so that they can be fixed. The hunts can be conducted in nearly any technical environment beyond software, including websites and mobile apps.

Simple concept; effective, measurable results.

The hunts can include "attacks," or deliberately using workflows other than the ones an application suggests. For example, hunt teams may fill in a form incorrectly to expose errors and security vulnerabilities, or testers may enter alphabetic or special characters into a form field that is designed to handle only numeric characters.
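To make the idea concrete, here is a minimal sketch of that kind of negative test, written in Python. The validate_quantity function, its rules, and the sample inputs are assumptions made purely for illustration; they are not part of any particular product.

# Hypothetical negative test for a form field that should accept only numbers.
# validate_quantity() and its rules are assumptions made for this example.
import re

def validate_quantity(raw_value: str) -> int:
    """Accept only positive whole numbers; reject anything else."""
    if not re.fullmatch(r"[0-9]+", raw_value):
        raise ValueError(f"Not a valid quantity: {raw_value!r}")
    return int(raw_value)

def test_rejects_non_numeric_input():
    # The kind of deliberately "wrong" input a bug hunter might try by hand.
    for bad_input in ["abc", "12a", "3.5", "-1", "' OR 1=1 --", ""]:
        try:
            validate_quantity(bad_input)
        except ValueError:
            continue  # rejected as expected
        raise AssertionError(f"Field accepted invalid input: {bad_input!r}")

test_rejects_non_numeric_input()

During a hunt, testers improvise this kind of input by hand; the sketch simply captures the attitude of feeding a field data it was never meant to accept.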

Bug hunters also use test plans and use cases to test software from the perspective of a user, all with the goal of discovering bugs that can affect the user experience. At TOPdesk, we are bug hunters. We love killing bugs when we find them, and we organize regular hunts. We've gotten pretty good at organizing internal hunt events and burning out any bugs we find. In the following, I'll try to help you organize and conduct your own hunts on whatever technology platform you're creating. For us, bug hunts are a great learning experience for everyone involved, and something our team leaders highly recommend. They also help us improve the technology we provide to the market.

Preparing Your Test Object

My colleague, Hazel Hollis, a senior software tester here at TOPdesk, suggests that the first time you plan a hunt is the hardest, primarily because it can be difficult to determine how complex to make the test. The first step to organizing a hunt: prepare the test object. Determine the ground or territory you want to hunt. Once you determine your hunting grounds, you can move to the next phase — establishing a productive environment to conduct the challenge.

Each team participating in the hunt needs an environment in which they can focus during the hunting challenge and one that allows them to get straight to work hunting. A large conference room or an auditorium works well. These areas allow you to subdivide the teams without separating them entirely from one another; complete separation can hurt, even stifle, the flow of the challenge.

When starting, use a stable version of your program for the test; you want to know it's not going to fail. Next, set up a database for each hunting team that includes basic information: a range of objects, settings, and users with logins to support the user stories.

Then, ensure that each persona has a user with the correct roles and permissions, and provide login data for these. Hollis recommends letting testing teams know where they can find the test version and the corresponding database. In many cases, organizations can turn these events into competitions, making an organizationally sanctioned challenge of them. Doing so can make these events more fun and, ultimately, more rewarding for everyone involved. If you decide to create a bug hunt challenge, let teams know about the database, logins, and other details required to start the hunt in advance, so that valuable time during the challenge isn't lost getting everyone ready to go.
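As a rough illustration only, the sketch below shows one way you could seed an isolated database per hunting team with personas, roles, and logins. SQLite, the schema, and the persona data are assumptions made for the example, not a description of TOPdesk's actual setup.

# Hypothetical per-team seed script: personas with roles and test-only logins.
# SQLite, the table layout, and the persona data are assumptions for this sketch.
import sqlite3

PERSONAS = [
    # (username, role, password for the test environment only)
    ("alice_agent",    "service_desk_agent", "hunt-demo-1"),
    ("oscar_operator", "operator",           "hunt-demo-2"),
    ("erin_enduser",   "end_user",           "hunt-demo-3"),
]

def seed_team_database(team_name: str) -> str:
    """Create an isolated database for one hunting team and load basic data."""
    db_path = f"bughunt_{team_name}.db"
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users ("
            "username TEXT PRIMARY KEY, role TEXT NOT NULL, password TEXT NOT NULL)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO users (username, role, password) VALUES (?, ?, ?)",
            PERSONAS,
        )
    return db_path

if __name__ == "__main__":
    for team in ["red", "blue"]:
        print("Seeded", seed_team_database(team))

Because each team gets its own copy, a destructive test in one team's environment never blocks another team's hunt.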

Introduce the Test Object

Before the hunt starts, prepare a demo presentation of the challenge. This should provide information about the features and specifications of the system in which you are bug hunting. Consider also preparing a complementary document that outlines the purpose and function of the test object. Using the pre-defined personas, you can highlight the common user stories and the cases in which the solution is used. However, Hollis recommends not making the specifications too detailed, or the hunters may get bogged down in them when hunting through scenarios.

Provide Ample But Not Too Much Documentation

During a bug hunt, don't get too caught up in the specifications, as this might limit your testers' freedom and creativity in approaching the test object. Setting up too many specifications for your hunting teams to follow can cause a good deal of confusion and burden, and it pushes testing toward simply executing the specifications. Instead, consider providing user stories and bullet points that describe what the user wants to achieve. Allow your testing/hunting teams to choose their own starting point and structure, and keep the description of the service and the approach to no more than two instructional pages. Anything beyond this is too much to manage and too cumbersome to be effective, both for the organization and for the health of the system you are trying to get the bugs out of.

You Can Define the Scope of the Hunt

While you keep the scope narrowed and the description of the project focused, you do not necessarily need to limit the entire project's scope. I was curious to see how the teams reacted to this ambiguity. You can always choose to intervene and limit the scope if needed (it probably will be necessary). One or two complex features are enough. For example, my documentation stated that objects appeared elsewhere in the software, but teams did not have time to look into this. In the future, I would leave these out, and perhaps even limit the personas to two. That said, be careful not to make the scope too big. Allow for slightly more than would fit in the allotted time, which means the participating teams must prioritize their test planning.

Release the Hounds!

Participating hunt teams must create a plan of attack so they can get after the bugs. They must determine what they are going to test and how; each team creates its own plan for the bug hunt. As a facilitator, answer questions and serve as a guidepost, but don't discuss existing bugs, test cases, or what to test. When the teams present their test plans, discuss any particularly risky areas, but don't steer the teams toward any decisions or outcomes.

Evaluate the Take

When the hunt is done, it's time to evaluate the bugs that were caught. During this phase, ask teams how they went about their testing. You may wish to have each team present its approach to the group as a learning opportunity. As the product owner, take note of any issues or bugs found during testing so that they can be phased out of the product.

For the bugs hunted and identified, you can let the testers know whether an issue is already on your backlog or not. Any new bugs might be worth a prize for the teams that find them. You might also explain why certain decisions were made. This has the benefit that others learn about your team's features, and that you learn new things about your own features as well.

Questions for Teams

During the post-hunt evaluation, ask questions such as "Did you think about what happens when X?" or "Which situations have you considered?" The following questions, and others as appropriate, can help you evaluate each team's approach to the hunt. For example:

  • How did you prioritize your task and approach?
  • How did you decide where to start and where to ratchet down?
  • Has the team thought about paths and divergence?
  • Was risk taken into account? Why or why not?
  • How did the risk pay off, or not?
  • How did the team collaborate? Did the members sit and test together, test individually, timebox, etc.?
  • How did the teams take notes?
  • How did the team keep track of what was tested?

Keeping the Goal in Mind

Before, during, and after the hunt, keep in mind that this exercise is designed to be a productive learning experience for everyone involved. Sharing approaches, methods, and ideas lets the teams learn from each other. When concluding the bug hunt, thank the teams and follow up on the issues reported. Conduct hunts regularly: once a quarter, twice a year, or as needed based on product development.

Most bug hunts, if organized properly, take only a few hours from start to finish; the entire challenge doesn't have to take more than three or four hours in total, and these are events you can use to build your teams.


Opinions expressed by DZone contributors are their own.
