
From Acceptance Tests to User Guides: Living Documentation with Serenity BDD and the Screenplay Pattern

Jan Molak explores how reports generated by automated acceptance tests can match the quality of hand-crafted manuals at a fraction of the cost.


I don’t even remember how many of the teams I’ve worked with responded with laughter when asked for good documentation of the system they were working on.

How would your team respond?

Numerous projects have conditioned us to believe that asking such a question is completely and utterly pointless. Even if you manage to find any documentation, it will most likely be out of date. More importantly, it will be incomplete at best and misleading at worst. Either way, it’s useless and not worth anyone’s time.

The Problem

Good documentation, such as user guides, is an invaluable source of information for the end users of your system, a great starting point for the new joiners on your team and an excellent record of the system’s capabilities to be verified in regression testing.

The trouble is that good documentation takes writing skills and hard work.

It doesn’t have to be like that, though. Far from it — it can be a natural part of your development process.

In this article, I’ll show you how to join a few dots to significantly reduce the effort needed to produce a high-quality record of your system’s functionality, and how to automate its maintenance.

The What

Here comes a statement some might consider controversial. I like to think of “good documentation” not in terms of how comprehensive it is, but how much return on my “reading investment” it offers:

Good documentation explains what the system can do to help me accomplish my tasks.

This definition focuses on documentation being primarily an exploratory tool, which helps interested parties gain an understanding of what your system can do for them rather than how it’s going to do it. Note that this applies to all levels of functional documentation and not just the user guides. The same goes for acceptance tests, unit tests, and pretty much anything that is set to demonstrate the functionality of your system and its components.

The above definition also requires the documentation to be user-centred and example-based, so that it’s easier for the reader to absorb.

The Scope of a User Guide

What should a user guide cover?

A user guide should only cover what the user of a system can interact with.

Fairly obvious?

A user will only interact with a software system through the interfaces we provide for them. Those interfaces tend to be quite varied and cover things like the Web UI of an application a customer interacts with on their smartphone, or the REST API of a web service a developer uses to make her system interact with yours. Because of this variety, I tend to define those interfaces quite broadly, beyond the limits of what’s typically considered a “UI”:

The Human-Machine Interface is any human-facing interface of the system.

So a user guide should only cover the Human-Machine Interfaces of the system?

Cool, that’s what automated acceptance tests should do too!

Human-Machine Interfaces and Automated Acceptance Testing

Keeping the above definition of the Human-Machine Interface in mind, to paraphrase John Ferguson Smart:

If a scenario you want to automate wouldn’t contribute to a user guide, it shouldn’t be part of your automated acceptance tests [using a Human-Machine Interface].

If we agree with the above, then we can also infer that:

All automated acceptance test scenarios validating the Human-Machine Interface should contribute to a user guide.

This makes things interesting, doesn’t it?

And what if you have too many acceptance tests? Wouldn’t the user guide become bloated? Of course it would!

The solution to that problem is to limit the number of acceptance tests to only those that validate features valuable to their users. Not only does this make the resulting “user guide” shorter and more likely to be read, it also limits the number of acceptance tests you need to maintain. Win-win!

Needless to say, if the acceptance tests are automated and executed whenever the system changes, you’ve also dealt with the problem of keeping the “user guide” up to date!

Having said that, testers often like to have visibility of a few more tests around edge cases, error conditions, and so forth, that would not normally be of much interest to an end user. These tests should certainly be automated if the QA folk find value in their automation. But if they were to appear in the user guide, they would probably add volume without adding much value for the end user. This is why we like to make the distinction between “public-facing acceptance tests”, which are designed to appear in the living documentation, and “internal-facing acceptance tests”, which are more useful to developers and testers than to end users or the business.

Cucumber and User Guides

Acceptance tests describe examples of how users interact with the system.

What’s the best way to capture and express those examples then?

Many teams like to use Cucumber as a collaboration tool because it has the added benefit of producing executable test scenarios [3].

A scenario expressed in Gherkin might look as follows:

Feature: Flash Sales
  Scenario: One click purchase
    Given that Ian sees a flash sale of a discounted MacBook
      And he has a valid credit card he can use
     When he attempts to buy the item with one click
     Then he should see that the purchase was successful

As you can see, a decent scenario represents a level of abstraction that’s high enough to allow us to focus only on those details that are important from the point of view of the user (“Relevant” is the “R” in the Pirate Rule).

This level of abstraction helps keep the scenario short and precise and simplifies the communication between the business and developers.

Of course, keeping the scenario steps this simple requires us to avoid revealing too much detail in the scenarios and hide all the complexity somewhere in the step definitions.
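
With Serenity’s Screenplay implementation, a step definition can delegate straight to a Task, so the “how” never leaks into the scenario. Here is a minimal sketch; it assumes an actor has already been placed in the spotlight via Serenity’s OnStage, the task itself is a placeholder, and import paths may vary with your Cucumber version:

import cucumber.api.java.en.Given;
import net.serenitybdd.screenplay.Performable;
import net.serenitybdd.screenplay.Task;

import static net.serenitybdd.screenplay.actors.OnStage.theActorInTheSpotlight;

public class CreditCardStepDefinitions {

    // The step definition stays a one-liner; the complexity lives in the task.
    @Given("^he has a valid credit card he can use$")
    public void hasAValidCreditCard() {
        theActorInTheSpotlight().attemptsTo(hasValidCreditCard());
    }

    // A placeholder task; composing such tasks is covered later in the article.
    private Performable hasValidCreditCard() {
        return Task.where("{0} has a valid credit card he can use");
    }
}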

So here’s the question:

How do we keep the complexity of test scenarios to a minimum, yet produce living documentation that’s as detailed as the user needs it to be?

A user guide needs to show how exactly a given step should be accomplished by the user. For example: what does it mean to “have a valid credit card”?

Let’s see how we can answer this question by turbocharging Cucumber with Serenity BDD and the Screenplay Pattern!

The Screenplay Pattern

The Screenplay Pattern is a user-centred model of expressing, architecting, and scaling automated functional tests of any human-machine interface, such as a Web UI, REST API, etc.

The Screenplay Pattern was originally created by Antony Marcano and later extended by Andy Palmer, John, and myself. We have been working on it for some time now and have successfully implemented it on several projects, ranging from small ones developed by a handful of people to large ones employing hundreds. We wrote about the pattern, its origins, and how we arrived at it on DZone and InfoQ. You might also have heard about it at one of our talks or workshops.

In short, the idea is that the external parties interacting with a software system are represented as Actors who perform Tasks in order to accomplish their Business Goals.

Those Actors can represent all the different user personas or external systems interacting with the one you’re building. Using such a model makes it easier to reason about the system and what the external parties want to achieve as a result of their interactions with it [1].

For example, let’s say that we’re working on an e-commerce project and we’ve identified a persona of “Ian, the Impulse Buyer”[2].

The Tasks he’ll attempt to perform are high-level and business-focused instructions we could also ask of a human, such as:

Make a Payment

As those high-level Tasks are usually pretty complex operations that might require several intermediate steps, they will typically be composed of either some lower-level Tasks such as

Enter the Credit Card Number

or low-level and specific Human-Machine Interface-focused Actions, such as

Enter the value '4111 1234 1234 1234' into the Credit Card Number field

Once the Actor has performed all the Actions, they can also ask Questions about the state of the system in order to validate its correctness:

Ian should see that the Purchase was Successful
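
In Serenity’s Java implementation of the pattern, this vocabulary maps directly onto code. The sketch below illustrates the shape of it; MakeAPayment and ThePurchaseStatus are hypothetical names used for illustration, and a real actor would also need the relevant abilities (such as BrowseTheWeb) configured:

import net.serenitybdd.screenplay.Actor;

import static net.serenitybdd.screenplay.GivenWhenThen.seeThat;
import static org.hamcrest.Matchers.is;

public class OneClickPurchaseSketch {

    public void ianBuysWithOneClick() {
        // Actors represent the personas interacting with the system.
        Actor ian = Actor.named("Ian");

        // Actors perform business-focused Tasks...
        // (MakeAPayment is a hypothetical Task class.)
        ian.attemptsTo(MakeAPayment.withDefaultCreditCard());

        // ...and ask Questions to verify the outcome.
        // (ThePurchaseStatus is a hypothetical Question class.)
        ian.should(seeThat(ThePurchaseStatus.message(), is("Purchase successful")));
    }
}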

The important characteristic of the Screenplay Pattern is that it lets you structure the acceptance tests using levels of abstraction that reduce the cognitive load not only on developers writing and maintaining the tests, but more importantly on anyone who reads them.

Those are exactly the properties we’ve come to expect from a good user guide!

The pattern helps lead the readers from general to specific, from understanding what a capability is to how to use it, always focusing on their business goals.

From Scenario Steps to User Tasks

You already know that the Screenplay Pattern and its implementation in Serenity help represent both the high-level and the intermediate scenario steps as Tasks to allow for easy composition and reuse.

A scenario step such as:

Given Ian has a valid credit card he can use

(which can also be a Task itself) would be composed of other Tasks:

Given Ian has a valid credit card he can use
│
├─ Ian adds a default credit card '4111 1234 1234 1234'
│  with expiry date '02/2021' and security code '123'
│
└─ Ian adds a default billing address of
   '123 Cherry Blossom Av., A1 1AA, London, UK'

As you can see, there’s no mention of “clicking the buttons” or “entering values into fields” here. Those interface-specific Actions belong to a lower level of abstraction:

Given Ian has a valid credit card he can use
│
├─ Ian adds a default credit card '4111 1234 1234 1234'
│  with expiry date '02/2021' and security code '123'
│  │
│  ├─ Ian visits his account settings
│  │
│  ├─ Ian adds a new card '4111 1234 1234 1234'
│  │  with expiry date '02/2021' and security code '123'
│  │  │
│  │  ├─ Ian enters '4111 1234 1234 1234'
│  │  │  into the credit card number field
│  │  │
│  │  ├─ Ian enters '02/2021' into the card expiry field
│  │  │
│  │  ├─ Ian enters '123' into the security code field
│  │  │
│  │  └─ Ian clicks the Add Card button
│  └─ ...
└─ ...

…and so on.
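
In Serenity’s Java implementation, each node of that tree is a Task (or, at the bottom, an Action). Below is a sketch of the intermediate “Ian adds a new card” task built from interface-specific Actions; the class name, Targets, and locators are all assumptions made for illustration:

import net.serenitybdd.screenplay.Actor;
import net.serenitybdd.screenplay.Task;
import net.serenitybdd.screenplay.actions.Click;
import net.serenitybdd.screenplay.actions.Enter;
import net.serenitybdd.screenplay.targets.Target;
import net.thucydides.core.annotations.Step;

import static net.serenitybdd.screenplay.Tasks.instrumented;

public class AddNewCard implements Task {

    // Hypothetical locators; a real page would define its own.
    private static final Target CARD_NUMBER   = Target.the("credit card number field").locatedBy("#cardNumber");
    private static final Target EXPIRY_DATE   = Target.the("card expiry field").locatedBy("#expiryDate");
    private static final Target SECURITY_CODE = Target.the("security code field").locatedBy("#securityCode");
    private static final Target ADD_CARD      = Target.the("Add Card button").locatedBy("#addCard");

    private final String number;
    private final String expiry;
    private final String code;

    public AddNewCard(String number, String expiry, String code) {
        this.number = number;
        this.expiry = expiry;
        this.code = code;
    }

    public static AddNewCard numbered(String number, String expiry, String code) {
        return instrumented(AddNewCard.class, number, expiry, code);
    }

    // The @Step text is what shows up in the report; {0} is the actor's name.
    @Step("{0} adds a new card")
    @Override
    public <T extends Actor> void performAs(T actor) {
        // An intermediate Task composed of low-level, interface-specific Actions.
        actor.attemptsTo(
            Enter.theValue(number).into(CARD_NUMBER),
            Enter.theValue(expiry).into(EXPIRY_DATE),
            Enter.theValue(code).into(SECURITY_CODE),
            Click.on(ADD_CARD)
        );
    }
}

The higher-level tasks at the top of the tree compose in exactly the same way, except that their performAs methods call actor.attemptsTo(...) with other Tasks, such as AddNewCard.numbered(...), rather than with individual Actions.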

Having a clean, scalable, and composable structure for the acceptance tests is obviously a great win on its own, but not enough to make them act as a user guide or living documentation.

That’s because our puzzle is missing one last crucial piece: user-friendly reporting.

From Acceptance Tests to User Guides

For the acceptance tests to become a user guide, the tests and their results need to be accessible to their intended audiences such as the business, fellow developers, testers, and so on (you might remember that “Accessible” is the “A” in the “Pirate Rule”).

Also, they need to be presented in a user-friendly format such as HTML and ideally contain screenshots so that the steps are easy to relate back to the system being documented.

Suppose you start out with a feature file along the following lines:

Feature: Flash Sales

  Business wants to increase sales by allowing customers
  to purchase discounted items within a limited period of time.
  These "Flash Sales" only last a few hours and are widely
  publicised via social media.
  The target audience is impulse buyers who did not necessarily
  plan to purchase the discounted items.

  Scenario: One click purchase

    Ian is an impulse buyer who likes to buy things easily and
    without hassle. He is more likely to buy if he thinks he
    is getting a good deal.

    Given that Ian sees a flash sale of a discounted MacBook
    And he has a valid credit card he can use
    When he attempts to buy the item with one click
    Then he should see that the purchase was successful

Here’s where Serenity BDD comes into play again. The powerful reporting capabilities of this open source library, together with its support for the Screenplay Pattern, make it easy to write scalable, user-centred acceptance tests and generate reports that, akin to a user guide, reduce the cognitive load on the reader and help them understand your system, one level of abstraction at a time.

Here we are using Cucumber’s ability to record not only the classic “Given-When-Then” Gherkin syntax but also free text to provide business-focused context and background.

When you run a scenario like this with Serenity, the description you provided in the feature file is combined with an illustration of the test execution, giving a complete picture of both the tasks a user needs to perform to achieve their goal and the business motivation behind the scenario.
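
In a Java project, this typically means running the feature files through Serenity’s JUnit runner. A minimal sketch, with an assumed location for the feature file:

import cucumber.api.CucumberOptions;
import net.serenitybdd.cucumber.CucumberWithSerenity;
import org.junit.runner.RunWith;

// Runs the Cucumber scenarios through Serenity so that the test results
// feed into the living documentation. The feature path is an assumption.
@RunWith(CucumberWithSerenity.class)
@CucumberOptions(features = "src/test/resources/features/flash_sales.feature")
public class FlashSalesTestSuite {}

Executing the tests as part of the build (for example, with the serenity-maven-plugin and its serenity:aggregate goal) then turns the results into the HTML reports, screenshots included.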

Want to Learn More?

If you like the idea of the Screenplay Pattern and would like to learn how to implement it using the Serenity BDD library, we spoke about it during the London Tester Gathering Workshops 2016.

If you can’t join us at the LTG, get in touch; we speak at companies too!


