The Nightmare of End-to-End Testing


In order to perform end-to-end testing, we need an execution environment and a testing framework... Here's where Nightmare.js begins.


Modern applications get complex, and we cannot go without automated testing. The canonical agile testing quadrants are split into technology-facing and business-facing tests. As for technology-facing testing, I believe everybody nowadays has dealt with unit tests: with them we make sure that the smallest parts of the system act as intended in isolation. We also use component tests to verify the behavior of larger parts of the system, and integration tests to check that communication between objects isn't broken. This entire quadrant is all about low-level programmer tests proving that the code meets the design requirements. These tests are meant to control internal quality, to minimize technical debt, and to inform dev-team members of problems at early stages.

Business-facing (or acceptance) tests describe the system in non-programming terms and ignore its component architecture. Here we find testing techniques such as functional tests, story-based tests, prototypes, and simulations. In web development, the most in-demand approach is apparently end-to-end testing. It is performed at the application level and tests whether the business requirements are met regardless of the app's internal architecture, dependencies, data integrity, and the like. In essence, we make the test runner follow the end-user flows and assert that users get the intended experience.

In order to perform end-to-end testing, we need an execution environment (a browser automation library) and a testing framework. Today the most popular way to get an in-browser testing API seems to be Selenium WebDriver, which has spawned a family of related frameworks: WebdriverIO, WebDriverJs, wd, and Nightwatch.js. Personally, I'm not a big fan of WebDriver. Debugging and tracing what is happening in the browser during a WebDriver test is quite problematic. One cannot run the tests until a local server is started. And the scraping process is considerably slow.

As an alternative, one can take a look at Zombie.js and Casper.js, both testing frameworks built on headless browsers. Zombie.js has its own browser, while Casper.js supports PhantomJS (WebKit) and SlimerJS (Gecko). While fiddling with Zombie.js I found it interesting in general, but too verbose when it comes to real test suites. Casper.js provides its own execution context, whereas I would prefer to stay with Node.js, like I do, for example, with Mocha.

Another framework, DalekJS, lets you run tests under PhantomJS or in a real browser. I was quite impressed by it a year ago, but I gave up waiting for a stable release.

In the Angular community, the undoubted leader is Protractor, though it's too Angular-specific for my taste.

I've been looking for a solution that is easy to use and to debug, and after all the research I selected an automation library with the unsavory name Nightmare.js. Its approach struck me with its simplicity: instead of wrapping WebDriver or PhantomJS, it simply runs an instance of Electron, which gives us a headless browser with an extensive browser API. Nightmare.js is framework-agnostic, so I can use it with Mocha and with the assertion library of my choice. What I like most is that one can enable a mode where Nightmare.js shows you whatever is happening in the browser. Besides, you can break the test and examine the browser window with DevTools.

Yeah, it doesn't allow you to run tests in real browsers, but that's something I can sacrifice in my case.

Starting up

As an example, we will write a few tests for the TodoMVC app.

I suggest entering the project directory and installing the dependencies:

npm i mocha
npm i chai
npm i nightmare

Now we can create a subdirectory for the tests:

mkdir -p tests/end-to-end
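
Optionally, to run the suite with the locally installed Mocha, you can register an npm script in package.json (the script name here is my own choice, not from the original article):

```json
{
  "scripts": {
    "test:e2e": "mocha tests/end-to-end"
  }
}
```

Then the suite can be run with npm run test:e2e.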

Let's start our first test there, todo.spec.js:

"use strict";
const Nightmare = require( "nightmare" ),
      expect = require( "chai" ).expect,
      BASE_URL = "http://todomvc.com/examples/backbone/",
      onError = ( err ) => {
        console.error( "Test-runner failed:", err );
      browser = new Nightmare({
        show: true,
        typeInterval: 20,
        pollInterval: 50

Here we obtain references to the Nightmare and chai.expect libraries. We set the BASE_URL constant to the endpoint addressed by the tests. I also suggest having a common handler, onError, for possible testing errors. Eventually we create an instance of Nightmare. In the options, we ask Nightmare to show the browser during testing, set the in-input typing interval to 20ms, and set the wait-polling interval to 50ms.

Now we extend todo.spec.js with the first specification:

describe( "TODO", function(){
  this.timeout( 15000 );
  // start up with the blank list
  before(( done ) => {
        .goto( BASE_URL )
        .evaluate(() => {
          return localStorage.clear();
        .then(() => {
  // disconnect and close Electron process
  after(() => {
// insert here the tests

Here we extend the default timeout, as end-to-end tests may take much longer than unit tests. I prefer to clean up any possible products of previous tests in setup rather than on tear-down. This way we start with a blank list even if a previous spec broke without ever reaching the after() hook. So within the before() hook we open our page under test and evaluate JavaScript that cleans up localStorage, where the app stores user input.

Now we can add the first test to todo.spec.js:

it( "should add an item to the list", ( done ) => {
  const NEWTODO_INPUT = ".new-todo";
    .wait( NEWTODO_INPUT )
    // type a todo and press ENTER
    .type( NEWTODO_INPUT, "watch GoT" )
    .type( NEWTODO_INPUT, '\u000d')
    // wait until the list receives the item
    .wait( ".todo-list li" )
    // get the number of available items
    .evaluate(() => {
      return document.querySelectorAll( ".todo-list li" ).length;
    .then(( res ) => {
      expect( res ).to.eql( 1 );
    }).catch( onError );

Here we test whether a new item can be added to the list. First, we load the page again (after localStorage was cleaned) and wait for the .new-todo element to become available in the DOM; it is supposed to be there as soon as the page is loaded. Then we emulate the user typing "watch GoT" into the input and pressing Enter. Next we wait until any item appears in the list: Nightmare will poll every 50ms (as specified in the pollInterval option) until the condition is met. When the list is rendered, we can evaluate JavaScript querying for the list items and assert that exactly one item was added.

Now we run the test:

mocha tests/end-to-end/todo.spec.js

A browser window pops up, showing the bot typing into the input as we described. When we finish writing the tests, we can disable this with the Nightmare initialization option show: false. Besides, Mocha outputs the test results.

Let's now add a second test to todo.spec.js:

it( "should remove an item from the list", ( done ) => {
  const REMOVE_BTN = "button.destroy";
    // click of the first item fo the list
    .click( ".todo-list li:first-child " + REMOVE_BTN )
    // wait until the list is hidden (happens when it gets empty)
    .wait(() => {
      return document.querySelector( ".main" ).style.display === "none";
    .evaluate(() => {
      return document.querySelectorAll( ".todo-list li" ).length;
    .then(( res ) => {
      expect( res ).to.eql( 0 );
    }).catch( onError );

Here we test whether the added item can be removed. We emulate a click on the remove button of the first list item. Then we wait until the .main section gets hidden, which happens when the list is empty (per the app specification). For that, we use a callback function: Nightmare will poll every 50ms until it returns a truthy value. Then we can assert that the list is empty.


As I already mentioned, whenever anything goes wrong with a test, we can examine the page at the exact break point with DevTools. We just need to enable it during Nightmare initialization:

const browser = new Nightmare({
  openDevTools: {
    mode: "detach"
  },
  show: true
});

In the body of a test we can stop the flow with .wait( 600000 ) (do not forget to extend the timeout accordingly) and use the DevTools during the pause.

In a real application, we have flows like submitting a form or updating a component. When testing, we need to know when the page actually reloads or when a component's rendering really happens. As for page readiness, we can listen from a wait() callback for the next page-loaded event. In order to know when a component gets updated, within the app I increment the value of the data-rev attribute on the component's bounding element with every render. Then I can watch from a wait() callback for the next number on it.
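
The data-rev technique can be sketched as follows. I extract the predicate as a plain function so it can be exercised outside the browser; the .todo-app selector and the function name are hypothetical, not from the original app:

```javascript
// Returns true once the element's data-rev exceeds the revision
// we recorded before triggering the action, i.e. a re-render happened.
function hasNewRevision( el, oldRev ) {
  return Number( el.getAttribute( "data-rev" ) ) > oldRev;
}

// Hypothetical usage with Nightmare (assumes `browser` as created above;
// both evaluate() and wait() accept extra arguments after the callback):
// browser
//   // read the current revision of the component's bounding element
//   .evaluate(( sel ) => Number(
//     document.querySelector( sel ).getAttribute( "data-rev" )
//   ), ".todo-app" )
//   // poll every 50ms until the app increments data-rev past that value
//   .then(( rev ) => browser.wait(( sel, oldRev ) => {
//     const el = document.querySelector( sel );
//     return Number( el.getAttribute( "data-rev" ) ) > oldRev;
//   }, ".todo-app", rev ));

module.exports = { hasNewRevision };
```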


Published at DZone with permission of
