Testing Implicit Requirements

Having a knowledge of the product and its domain, and collaborating closely with QA during development, can streamline the Agile process and improve the end product.

With the wider adoption of Agile software development, the quality of system requirements has improved significantly, especially within teams that practice proactive backlog grooming and sprint planning. But requirements still remain the main source of defects, mainly because we write code to satisfy a set of expectations, and it becomes a problem when any expectation is missing, incomplete, or ambiguous. And then there are implicit requirements, which we will discuss in this article.

What Are Implicit Requirements?

There is really no such term in the software industry; I came up with it, and I hope its meaning will become clear in the next few paragraphs.

As humans, we take a lot of things for granted. For example, when you press "Tab" on your keyboard, you expect the focus to move left to right and then down. Another example: a user expects a mobile app to mute, go to the background, and let an incoming call be displayed. Sometimes the operating system enforces such behaviors on the application to meet these requirements.

Some expectations of a product were core requirements in the past; with time they became so fundamental that we no longer specify them. They become implicit requirements because, intentionally or not, we leave them unstated. Also, spelling out all requirements of this kind would add a lot of noise and might even distract from the core features of the application we are trying to implement. This is where Agile teams excel, because such details are worked out in discussion rather than in complicated documentation.

The above examples were very trivial and they mostly describe positive scenarios. There are more serious cases of software defects that result from implicit requirements. Let's have a look at two such cases:

Case Study 1

A supermarket promotion set up in one store got propagated to several other stores across the country. The chain only realized it after suffering financial losses, because those other stores were not aware of the promotion and therefore did not stop it.

Case Study 2

I have had a Vodafone small business account for about 10 years, but last November I decided to port a newly added mobile number to another Vodafone mobile number, which happened to be a consumer number. My whole account was switched from business to consumer, a credit check was run on me, and a credit reference account was created in my name with the major credit reference agencies. All my lines except the new one were deactivated, and they were restored only after I formally lodged a complaint. Vodafone is still working on restoring my account to business.

These cases are serious because of their impact on those affected, and the majority of such cases go unreported. There is a pattern here: the software was working according to the written requirements, it was doing what it was expected to do, and presumably all acceptance criteria were met. But it was also doing more than it was expected to do, because it did not satisfy some implicit requirements. Let's try to identify the implicit requirements in the above examples:

Case 1: A store manager is responsible for that particular store and has no visibility into, or direct interest in, other stores. They want to get rid of a few products in their store, probably to bring in new stock or because the expiration date is approaching. Their implicit requirement is that their authority is exercised locally, not regionally or nationally.

Case 2: Number portability is a legal requirement that every network provider must comply with; it has nothing to do with the type of service account. My expectation was that after the operation the source number would be deactivated and all services transferred to the target number (at least, that is how it was tested in the early days of my career), and nothing more.

Not Enough Negative Testing

The list of things a piece of software should not do can be long, and it is typically never, or only minimally, documented. This is partly because we cannot predict every possible course of action, especially when behavior is driven by events and user actions. As a result, the quality of negative testing depends heavily on the experience and imagination of the QA involved. This is where automation cannot replace humans, because creativity and critical thinking are not things that can be programmed.

While some negative tests can and should be specified, many more scenarios come to life when we do exploratory testing, asking 'what-if' questions and following the different threads. Having already confirmed that the software does what it is expected to do, and with the working product in their hands, QA can deeply explore its behavior under various conditions, chains of actions, combinations of input data, and so on.
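To make the idea concrete, here is a minimal sketch of what capturing an implicit requirement as an automated negative test might look like, using Case 1 as the motivation. Everything here is illustrative: PromotionService, the store identifiers, and the in-memory implementation are assumptions made for the example, not part of any real system.

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical in-memory stand-in for a promotion service, scoped per store.
    class PromotionService {
        private final Map<String, Double> discountByStore = new HashMap<>();

        void applyPromotion(String storeId, double discount) {
            discountByStore.put(storeId, discount);
        }

        double discountFor(String storeId) {
            return discountByStore.getOrDefault(storeId, 0.0);
        }
    }

    class PromotionScopeTest {

        // Explicit requirement: the promotion applies in the store that created it.
        @Test
        void promotionAppliesInOriginatingStore() {
            PromotionService service = new PromotionService();
            service.applyPromotion("LONDON-01", 0.20);
            assertEquals(0.20, service.discountFor("LONDON-01"));
        }

        // Implicit requirement from Case 1: a local promotion must NOT leak to other stores.
        @Test
        void promotionDoesNotPropagateToOtherStores() {
            PromotionService service = new PromotionService();
            service.applyPromotion("LONDON-01", 0.20);
            assertEquals(0.0, service.discountFor("MANCHESTER-03"),
                    "a store-level promotion should stay local");
        }
    }

The second test is the interesting one: it asserts something the software should not do, which is exactly the kind of check that rarely appears in written acceptance criteria but is cheap to keep once someone has asked the 'what-if' question.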

Challenges for QA

To be effective in their work, QA, like any other team member, need space and support, especially from management. But there are other challenges too. These two are the most common ones I have come across:

  1. Most defects found through exploratory testing cannot be tied to documented requirements. From the developer's point of view, this is an additional requirement. To make things worse, project managers and product owners tend to give these defects low priority, arguing that they are edge cases that will never happen in production. True, they may never happen in production. What we don't realize is that a defect may be small but may be hiding other, more serious defects down the line. The more unresolved defects we have, regardless of their impact, the more risk builds up. Furthermore, users do explore those extreme conditions (I am always tempted to enter 1 billion carrier bags when prompted at the supermarket self-checkout, just to see what will happen; see the sketch after this list).
  2. There has always been pressure on QA to deliver on time; development delays squeeze out testing time. With Agile, the situation can be even worse, especially in teams that follow Scrum. Many times I have seen stories completed in the last day or hours of the sprint and rushed through QA in order to hit the velocity target. Barely any time is left for proper exploratory testing; in fact, exploratory testing is replaced with ad-hoc testing confined to confirming acceptance criteria.
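As a sketch of the carrier-bag point above, here is roughly how such an extreme-input check could be automated once it has been discovered through exploration. BasketLine, validateQuantity, and the limit of 99 are assumptions invented for this example; a real self-checkout would have its own rules.

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;

    // Hypothetical basket line: quantity must be positive and within a sane limit.
    class BasketLine {
        static final int MAX_QUANTITY = 99;

        static int validateQuantity(int requested) {
            if (requested < 1 || requested > MAX_QUANTITY) {
                throw new IllegalArgumentException("quantity out of range: " + requested);
            }
            return requested;
        }
    }

    class CarrierBagQuantityTest {

        // Extreme inputs a curious user (or QA) might actually try at a self-checkout.
        @ParameterizedTest
        @ValueSource(ints = {0, -1, 1_000_000_000, Integer.MAX_VALUE})
        void rejectsImplausibleQuantities(int quantity) {
            assertThrows(IllegalArgumentException.class,
                    () -> BasketLine.validateQuantity(quantity));
        }

        // Ordinary quantities still pass through unchanged.
        @ParameterizedTest
        @ValueSource(ints = {1, 5, 99})
        void acceptsReasonableQuantities(int quantity) {
            assertEquals(quantity, BasketLine.validateQuantity(quantity));
        }
    }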

Conclusions

I don't think there are golden rules in the software industry; we can only talk about best practices. Each team will need to find out what works best for them and keep optimizing. There are a few things that have worked for me and are worth trying:

  1. Ensure functional testing and automation are part of development, so that both developers and QA can work on them; this frees up time for exploratory testing once the story is completed. I have seen some brilliant QA people who actually test at the developer's desk, and together they fix issues right there unless something more complicated comes up. When the story moves to QA, it is then not about validating the acceptance criteria but about exploring the functionality under different conditions and data.
  2. QA need to be pragmatic and able to look beyond their desk for the benefit of the team and company as a whole. However, we also need to challenge decisions and strive to keep the number of unresolved defects low, regardless of their priorities. Besides prioritization, window fixing (resolving small defects as we come across them) and refactoring are two ways of getting defects resolved, especially minor ones.
  3. Risk analysis of defects to facilitate better decision making. On one occasion we found that a store supervisor could delete their own account in a POS system, which locks out all cashiers and other staff under their supervision (and this was an irreversible operation). The project manager said "no one is stupid enough to delete their own account" and did not want to prioritize the defect. I argued that if an employee who has been laid off can come back with a gun to kill their co-workers, I see no reason why they would not take advantage of this easy target and delete their own account while clearing their desk. The message got across and the defect was fixed.
  4. Reduce waste. Your time is precious, so use it wisely; push back against bureaucracy. I still see a lot of teams documenting complicated test results, some spending considerable time on this. Sure, there are exceptions like compliance testing, where you need this evidence, but my view is that rules and regulations are created by and for the benefit of human beings, so they need to change, adapt, or sometimes even be dropped when the benefit diminishes.
  5. Build sound product knowledge. The better QA know the domain, the application of the system, and the business context, the more effective they become. Case studies 1 and 2 are classic examples of where good knowledge of the product and domain would alert QA to which scenarios to explore. Someone with less product knowledge may not know the difference between store and regional managers (and the scope of their responsibilities), or between small business, corporate, and consumer accounts and their impact on number portability.
  6. Technical knowledge. Understanding the underlying technology and architecture is critical, especially in complex systems. With good technical knowledge, QA will not forget to test things like flooding and draining queues, log rotation, data integrity, race conditions and deadlocks, circuit breakers, application properties, etc. (a minimal sketch of a queue-flooding check follows this list).

Note: By technical knowledge I don't mean the ability to write a Java/Scala program, but rather the ability to understand the architectural landscape of the system: the components and how each works (individually and together), the operating systems, databases and data structures, queues and topics, integration points and mechanisms, etc.

  7. The bigger picture. This is one of the challenges I struggle with most personally within Agile teams, especially when I join a project a bit late. Due to the nature of Agile (iterative development), we tend to lose the bigger picture and focus on narrow vertical slices of functionality, sometimes unknowingly forgetting the impacts and implications of a change. Knowledge sharing, via QA forums for example, is one way to help with this. Checking the architectural diagrams, reminding ourselves of the personas of the system, and sitting in on dev forums and code reviews can all help. I would really love to hear how others cope with this.
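On the queue-flooding point in item 6, here is a minimal sketch of what such a check can look like, using only JDK classes and JUnit 5. The capacity of 10 and the producer/consumer shape are assumptions for illustration; a real system would exercise its own messaging components rather than a bare queue.

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    class QueueFloodingTest {

        // Simulate a bounded work queue filling up faster than it is drained.
        @Test
        void producerBacksOffWhenQueueIsFull() throws InterruptedException {
            BlockingQueue<String> workQueue = new ArrayBlockingQueue<>(10);

            // Flood: fill the queue to capacity.
            for (int i = 0; i < 10; i++) {
                assertTrue(workQueue.offer("message-" + i));
            }

            // The next offer should fail fast (or time out) rather than block forever
            // or silently drop data -- exactly the behavior worth exploring.
            assertFalse(workQueue.offer("overflow", 100, TimeUnit.MILLISECONDS));

            // Drain one item and confirm the producer can make progress again.
            assertEquals("message-0", workQueue.poll());
            assertTrue(workQueue.offer("overflow"));
        }
    }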

We may not be able to spell out all the things a piece of software should not do, and there is no defect-free software, but we can reduce the risk by doing more, and more thorough, exploratory testing. We should always ask and look out for what else the software can do, in order to discover as many implicit requirements as possible.
