Manual Test-Driven Development

By Giorgio Sironi · Sep. 03, 12

Test-Driven Development is a code-level practice, based on running automated tests that are written before the production code they exercise. But practices can be applied only in the context where they were developed: when some of their premises are not present, it is difficult to apply TDD as-is.

Automated specification

For example, consider the premise of assertion automation: that it is possible to write a (hopefully) small algorithm which checks the result of running the production code and returns true or false. When the problem is:
Draw an antialiased circle on this blank canvas. -- Carlo Pescio

it is not immediately clear how to define automated tests for this behavior. We could check that some pixels inside and outside the circle are still blank, that there is a bounded number of black pixels, or even that the black pixels are contiguous.

An opinion I've heard (that I try not to misrepresent) is that we only need to write some looser tests in these cases, checking only a few pixels of the circle.
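
As a rough illustration, such a looser test might look like the sketch below. The Canvas class and its API (draw_circle, get_pixel) are hypothetical stand-ins invented for this example, and the naive, non-antialiased drawing is included only so the snippet runs; the real API and algorithm are exactly what TDD is supposed to help us discover.

import math
import unittest


class Canvas:
    # Hypothetical stand-in: a real implementation would draw an
    # antialiased circle. This naive version only marks the pixels
    # closest to the circumference, so that the test below can run.
    BLANK, BLACK = 0, 1

    def __init__(self, width, height):
        self.pixels = [[self.BLANK] * width for _ in range(height)]

    def draw_circle(self, center, radius):
        cx, cy = center
        for y in range(len(self.pixels)):
            for x in range(len(self.pixels[y])):
                if abs(math.hypot(x - cx, y - cy) - radius) < 0.5:
                    self.pixels[y][x] = self.BLACK

    def get_pixel(self, x, y):
        return self.pixels[y][x]


class LooseCircleTest(unittest.TestCase):
    def test_a_few_points_on_the_circumference_are_painted(self):
        canvas = Canvas(100, 100)
        canvas.draw_circle(center=(50, 50), radius=20)
        # Check only the four extreme points of the circumference...
        for x, y in [(50, 30), (50, 70), (30, 50), (70, 50)]:
            self.assertEqual(Canvas.BLACK, canvas.get_pixel(x, y))
        # ...and that the center has not been painted.
        self.assertEqual(Canvas.BLANK, canvas.get_pixel(50, 50))


if __name__ == '__main__':
    unittest.main()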

This process will give us a little feedback on the API of our Canvas or Circle object, but not much on the algorithm we are implementing inside it. Are we going in the right direction? Have new test cases been satisfied without a large intervention on the existing code? Are we painting some unrelated pixels due to a hidden bug?

What I argue here instead is that we should change the nature of the feedback mechanism: speaking in control-theory terms, we change the block that acquires the output and influences the input of our design process.

Develop in the browser

When I was developing a Couchapp, a kind of web application served directly from a CouchDB database, I was appalled by the difficulty of testing it. While the production code was composed of ~100 lines, it was a complex mix of technologies: HTML and CSS code, client-side JavaScript for managing user events, and some server-side JavaScript for the "queries" (in Couchapps the server side actually consists only of the database).

Some of this logic could be tested in automation, like the result of queries over views. Yet much of it was related to the user interface, and as such would have required a large time investment to automate.

Instead of waking up my Selenium server and starting to manipulate a browser with code, I noticed that this UI was almost read-only; there were a few cases where a new document had to be inserted, but a manual test of them was short and did not even require reloading the page. The whole application state was observable.

Summing up, I performed a frequent manual test that took a few seconds, instead of trying to define complex and brittle automation logic for testing the UI. Now that Carlo Pescio's article has introduced me to a simple qualitative ROI model, I would do the same in every context where:

  • a large time investment is needed for automating tests.
  • it is possible to perform manual tests quickly.

as the only logical conclusion.

A word of caution

TDD has many benefits (including catching regressions early), so I'm not prepared to give it up just because something is difficult to test. These are technical scenarios where I have successfully followed TDD by the book:

  • multithreaded and multiprocess code
  • applications distributed over multiple machines
  • computer vision (object recognition and tracking)
  • image manipulation code (via comparison testing)
  • development of browser bindings for Selenium

And even when the big picture is not easy to test-first (as in the case of image manipulation), we can benefit from TDDing the pieces of the solution. For example, in the computer vision case I wasn't able to write a test beforehand for tracking a car in a movie, but I was able to TDD the objects that the algorithmic solution to the problem called for: Patch, Area, Cluster, Movement, and so on. End-to-end TDD is not always cheap, but unit-level TDD can often be, if the design considers testability as a relevant property (while regression testing, even at the end-to-end level, is always possible, in the worst case with record and replay).
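
To make the unit-level part concrete, here is a minimal sketch of what test-driving one of those small objects could look like. The Movement class and its API are hypothetical, reconstructed only for illustration, not the actual code from that project; the point is that these pieces are perfectly ordinary to test even when the end-to-end tracking problem is not.

import unittest


class Movement:
    # Hypothetical value object: how far a tracked patch has moved
    # between two frames.
    def __init__(self, start, end):
        self.start = start
        self.end = end

    def displacement(self):
        return (self.end[0] - self.start[0], self.end[1] - self.start[1])

    def is_significant(self, threshold=2.0):
        dx, dy = self.displacement()
        return dx * dx + dy * dy >= threshold * threshold


class MovementTest(unittest.TestCase):
    def test_displacement_is_the_difference_of_the_two_positions(self):
        self.assertEqual((3, -1), Movement((10, 5), (13, 4)).displacement())

    def test_a_one_pixel_jitter_is_not_a_significant_movement(self):
        self.assertFalse(Movement((10, 5), (11, 5)).is_significant())


if __name__ == '__main__':
    unittest.main()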

End-to-end specifications

If we can't define automated assertions for our "big picture" problem, it doesn't mean that we cannot apply the TDD approach: we can substitute a manual step for the automated assertion.

Going back to the circle problem, I would define manual test cases on an inspection page seen by a human. I've seen this done with layouts and multiple browsers to catch CSS rendering bugs, for example:

[Screenshots of the same page rendered by several browsers, each slightly different]

It would be very difficult to check these screenshots automatically, as each browser renders pages a bit differently from the others.

The iterative process becomes:

  1. Define a cheap manual test, automating the arrange and act phases but not the assertion (see the sketch after this list).
  2. Write only the code necessary to make it pass.
  3. Refactor.
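
Here is a minimal sketch of step 1 for the circle problem, under the same assumptions as the earlier snippet: arrange and act are automated, while the assertion is left to whoever looks at the output. The "inspection page" is reduced to an ASCII rendering on the terminal; a real harness would generate an HTML page of screenshots instead.

import math


def draw_circle(size, radius):
    # The same naive stand-in drawing used above, for illustration only.
    center = size // 2
    return [[abs(math.hypot(x - center, y - center) - radius) < 0.5
             for x in range(size)]
            for y in range(size)]


def render(pixels):
    # Turn the grid of booleans into something a human can eyeball.
    return '\n'.join(''.join('#' if p else '.' for p in row)
                     for row in pixels)


if __name__ == '__main__':
    for radius in (3, 8, 15):              # arrange
        pixels = draw_circle(40, radius)   # act
        print('--- circle of radius %d ---' % radius)
        print(render(pixels))              # assertion: left to the reader's eyes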

As long as the number of tests does not increase without limit and the manual check can be performed quickly, this approach does not slow you down with respect to by-the-book TDD. You'll have to take care of regressions by other means; but at least you have defined a set of manual test cases.

Feedback!

TDD is an instrument of feedback: if feedback cannot be gathered in an automated way, we have to resort to manual checking of the specifications.

Here are other examples of manual tools for generating feedback:

  • Read-Eval-Print Loops: you can experiment with existing classes and functions, and easily repeat steps thanks to the history.
  • the browser refresh button: the fastest way to transform a PSD into an HTML and CSS template.
  • the MongoDB console for learning the database API, and other kinds of consoles like Firebug, Chrome's developer console, or Clojure's REPL.

