The Cucumber documentation gives some specific definitions for the concepts it calls features and scenarios.
The documentation describes features like this:
A .feature file is supposed to describe a single feature of the system, or a particular aspect of a feature. It's just a way to provide a high-level description of a software feature, and to group related scenarios.
And scenarios like this:
A scenario is a concrete example that illustrates a business rule.
To date, the examples that have been presented for Iridium don’t hold strictly true to these interpretations of features and scenarios. Before we dive into why, let’s take a look at some options you can use with Iridium that will allow you to more easily structure your tests to conform to the literal definitions of features and scenarios.
The first system property is newBrowserPerScenario. When set to true (the default is false), Iridium will destroy (unless the leaveWindowsOpen system property is also set to true) and recreate the Selenium WebDriver instance for each scenario. This means each scenario is executed in a fresh browser window with no ties back to any actions performed by other scenarios.
The second system property is failAllAfterFirstScenarioError. When set to false (the default is true), Iridium will allow all scenarios to run to completion regardless of whether any of the preceding scenarios failed.
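Both options are plain Java system properties, so they can be supplied on the command line when launching Iridium. The jar file name below is illustrative, not the actual distribution name:

```shell
# Launch Iridium with each scenario in its own browser window, and
# continue running scenarios even after an earlier one fails.
# (Jar name is an assumption for illustration.)
java \
  -DnewBrowserPerScenario=true \
  -DfailAllAfterFirstScenarioError=false \
  -jar IridiumApplicationTesting.jar
```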
With newBrowserPerScenario set to true and failAllAfterFirstScenarioError set to false, we can write tests like this in Iridium. You can run this example for yourself by right-clicking, saving, and running this web start file.
Feature: Open an application

  # This is where we give readable names to the xpaths, ids, classes, name attributes or
  # css selectors that this test will be interacting with.
  Scenario: Generate Page Object
    Given the alias mappings
      | SearchMenu  | dropdownMenu2 |
      | SearchField | search        |

  Scenario Outline: Test the search box
    And I open the application
    And I maximise the window
    And I click the element found by alias "SearchMenu"
    And I clear the hidden element with the ID alias of "SearchField"
    And I populate the element with the ID alias of "SearchField" with "<search>"

    Examples:
      | search |
      | Java   |
      | Devops |
      | Linux  |
      | Agile  |
The most important aspect of this test is that each scenario (or in our case each iteration over the scenario outline) is an independent test. No state is carried between tests, and no test relies on any of the previous tests completing successfully. This means each scenario is a concrete, independent example that illustrates a business rule.
But, this is not actually how we have been demonstrating Iridium so far.
By default, Iridium uses a single browser window per feature, and will halt the execution of scenarios when any of the previous scenarios fail. This allows us to write features where scenarios are executed in a top to bottom fashion, with each scenario inheriting the browser window (and all the state in the browser) from the last scenario. These kinds of tests look like this:
Feature: Open an application

  # This is where we give readable names to the xpaths, ids, classes, name attributes or
  # css selectors that this test will be interacting with.
  Scenario: Generate Page Object
    Given the alias mappings
      | SearchMenu  | dropdownMenu2 |
      | SearchField | search        |

  # Open up the web page
  Scenario: Launch App
    And I set the default wait time between steps to "2"
    And I open the application
    And I maximise the window
    And I click the element found by alias "SearchMenu"

  Scenario Outline: Test the search box
    And I clear the hidden element with the ID alias of "SearchField"
    And I populate the element with the ID alias of "SearchField" with "<search>"

    Examples:
      | search |
      | Java   |
      | Devops |
      | Linux  |
      | Agile  |
The difference is subtle, but significant. In this example we use scenarios to open the web browser and prepare the web application so that subsequent scenarios can then perform their business logic or validation. Each scenario is dependent on all those that come before it, and the failure of any scenario means that all subsequent scenarios also fail.
The default operation of Iridium is intended to address the issue of how a test is expected to interact with parts of a web application that can only be accessed via a complex set of preceding interactions.
Take an insurance quoting web app as an example. These are typically broken up into a set of pages that ask a bunch of related questions, much like the wizard UI pattern that was introduced with early versions of Windows. The implication of this style of web app is that often the parts that you want to test are buried on page 8 of a 10 page wizard, and there is no way to simply jump to page 8 without first completing the previous 7 pages.
Let’s imagine that page 8 includes a text box with some validation rules applied to it, and we want to test all the edge cases that the validation rules need to account for.
If we treat each scenario as an individual test that can be run in isolation, and we don’t want to introduce a ridiculous amount of copy and pasted code into our test, the most practical way to implement the test is with a custom step that fills out all 7 preceding pages, leaving the browser window at page 8 ready to test.
We could also use the Cucumber background keyword to reduce copy and paste, although the actual execution of the steps doesn't change, as steps in a background are run before each scenario anyway.
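As a sketch of that alternative, a Background section holds the shared setup steps, and Cucumber runs them before every scenario in the feature (the steps below are illustrative, reusing ones from the earlier examples):

```gherkin
Feature: Test the search box

  # Background steps run before every scenario in the feature,
  # so each scenario still pays the full cost of the setup.
  Background:
    Given the alias mappings
      | SearchField | search |
    And I open the application

  Scenario: Search for Java
    And I populate the element with the ID alias of "SearchField" with "Java"
```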
Feature: Test Validation Rules

  Scenario Outline: Validation Edge Cases
    Given a browser window on page "8" of the wizard
    And I populate the element found by "Item Value" with "<value>"
    Then I verify that the element found by "Validation State" should have a class of "error"

    Examples:
      | value     |
      | -100      |
      | 0         |
      | 100000000 |
The custom step in this case is the one that is executed in response to the step Given a browser window on page "8" of the wizard. This gives us a concise and clean test that is very easy to write.
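Behind the scenes, a custom step like this is typically backed by a Java method that loops over the earlier wizard pages. The sketch below models only that control flow; the class, method, and page-filling names are hypothetical, and in a real step definition fillPage would drive Selenium WebDriver (probably via page objects) rather than record page numbers:

```java
import java.util.ArrayList;
import java.util.List;

public class WizardBookmark {
    // Records which pages were completed, purely for illustration.
    static List<Integer> filled = new ArrayList<>();

    // In a real implementation this would use Selenium to complete
    // the forms on the given wizard page.
    static void fillPage(int page) {
        filled.add(page);
    }

    // Corresponds to the step:
    //   Given a browser window on page "8" of the wizard
    static void openWizardAtPage(int target) {
        for (int page = 1; page < target; page++) {
            fillPage(page);
        }
    }

    public static void main(String[] args) {
        openWizardAtPage(8);
        // All 7 preceding pages have been filled in,
        // leaving the browser on page 8 ready to test.
        System.out.println(filled.size());
    }
}
```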
But there are two issues that you’ll run into in practice with these kinds of self-contained scenarios.
The first is that they are very slow to run. If each scenario is truly independent and isolated from the others, then each iteration over the scenario outline will result in the first 7 pages being repeatedly processed. If you had 10 scenarios, and each had 10 examples, then running one feature means that the first 7 pages will be filled in 100 times. This time starts to add up very quickly, and will soon reach a point where you just don’t have enough hours in the day to run all your tests.
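That arithmetic is worth spelling out. With 10 scenarios of 10 examples each, full isolation means 100 independent runs, and each run repeats the 7-page setup from scratch:

```java
public class SetupCost {
    public static void main(String[] args) {
        int scenarios = 10;
        int examplesPerScenario = 10;
        int setupPages = 7;

        // Every example of every scenario outline is an isolated run,
        // so the 7-page setup sequence is processed 100 times over.
        int isolatedRuns = scenarios * examplesPerScenario;
        int pagesFilled = isolatedRuns * setupPages;

        System.out.println(isolatedRuns);
        System.out.println(pagesFilled);
    }
}
```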
The second is that you are forced to maintain a lot of custom steps, written in a language like Java, in order to jump your scenarios to the point in your application where you actually want to begin testing. This will probably be done via page objects. Essentially every point in your app that exposes some business logic requires a custom step acting as a bookmark that can establish the required state for a scenario to operate on. If you had 10 such applications, each having 10 pages, and user journeys that weave through these pages depending on the inputs, you will quickly find that you are forced to maintain a library with hundreds of bookmark custom steps, or a small library of complex steps that can account for each path that a journey could take.
In fact, this was the very situation that we found ourselves in when writing Iridium. All of our applications were wizard driven with multiple pages, and there was no easy way to jump to the page we wanted in order to perform some localized testing. We also did not have dedicated resources that could write and maintain the library of custom steps that would allow us to write traditional, independent scenarios.
Instead, we chose to treat features as end-to-end journeys through our applications, with scenarios run in a top to bottom fashion and containing steps that interacted with elements on a given page. While this is not in keeping with the literal interpretation of features and scenarios, it did allow us to write tests that were comparatively quick to complete without the cost of dedicating expensive Java developers to maintain custom steps that ultimately would have just replicated the same interactions that are now included directly in the feature.
With the ability to extend Iridium with custom steps, and options to control how browser windows are shared between scenarios, Iridium supports both the traditional implementation of features and scenarios, as well as the ability to treat scripts as a top to bottom execution of scenarios. Which style of test you choose is up to you.