Testing and the Single Responsibility Principle

By Juri Strumpflohner · Jul. 27, 12

Automated testing is hard! So if you're about to learn it, just keep going. Resist the initial learning curve, because afterwards it will let you adopt a completely different programming style. A common problem I've observed is that testing newbies tend to create huge, bloated test cases that quickly become unmanageable. Usually this discourages them and they abandon their tests. So what's the key?

When you start with automated (unit) testing, you usually apply a test-after approach. That's natural. While I think you should jump into testing in a test-first manner immediately, you will naturally end up doing test-after and then slowly move toward real test-first. This test-after approach, however, is what typically produces those huge test cases, because it is extremely difficult to keep each test focused on a single concern.

An Example

Maintainability is one important aspect of creating successful tests, but feedback is just as important. When you create automated tests, what you ultimately want is a system that notifies you about any problems as early as possible: you shorten the feedback loop. Instead of waiting for a human tester to give you feedback in the form of a bug report, you let the automated test give it to you while you're coding and adding new features.

For example:
[TestMethod]
public void ShouldReturnCleanedConnectionStringAndSchemaTranslations()
{
    //snip snip snip
                 
    Assert.AreEqual(cleanConnString, result);
    Assert.AreEqual(1, schemaTranslations.Count(), "There should be 1 schema translation");
}

When this test fails, do you know what went wrong? You have to open the details of the test failure and see which assert failed. But wouldn't it be better to split them (note the "And" in the test name) and create two tests like this:

[TestMethod]
public void ShouldReturnCleanedConnectionString()
{
    //snip snip snip
                 
    Assert.AreEqual(cleanConnString, result);
}
 
[TestMethod]
public void ShouldReturnSchemaTranslations()
{
    //snip snip snip
                 
    Assert.AreEqual(1, schemaTranslations.Count(), "There should be 1 schema translation");
}

Now when one of those two tests fails, you immediately see which piece of code is probably broken. This approach has several advantages. You've optimized the feedback even further: in the best case you don't have to open the failure details or start debugging to understand what broke. Moreover, when you refactor your code you have to touch fewer tests; small, focused tests tend to break less often than big, fat test methods. And finally, small tests are much easier to understand, fix, and adapt.
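
One worry when splitting tests is duplicating the arrange code. In MSTest you can pull the shared setup into a [TestInitialize] method that runs before each test. Here is a minimal, self-contained sketch; the ConnectionStringCleaner class and the connection strings are hypothetical stand-ins for the code elided by the "snip" comments above:

using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical subject under test; stands in for the elided setup.
public class ConnectionStringCleaner
{
    public string Clean(string raw)
    {
        return raw.Replace("Schema=old;", string.Empty);
    }

    public IEnumerable<string> GetSchemaTranslations(string raw)
    {
        return raw.Contains("Schema=old;")
            ? new[] { "old->new" }
            : Enumerable.Empty<string>();
    }
}

[TestClass]
public class ConnectionStringCleanerTests
{
    private const string RawConnString = "Server=.;Schema=old;Database=db";
    private ConnectionStringCleaner cleaner;

    [TestInitialize]
    public void Setup()
    {
        // Runs before every [TestMethod], so both small tests
        // share a single arrange step instead of duplicating it.
        cleaner = new ConnectionStringCleaner();
    }

    [TestMethod]
    public void ShouldReturnCleanedConnectionString()
    {
        Assert.AreEqual("Server=.;Database=db", cleaner.Clean(RawConnString));
    }

    [TestMethod]
    public void ShouldReturnSchemaTranslations()
    {
        Assert.AreEqual(1, cleaner.GetSchemaTranslations(RawConnString).Count(),
            "There should be 1 schema translation");
    }
}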

Naming Conventions May Help

Good naming can actually help you identify tests with multiple responsibilities. These conventions worked well for me:

  • GivenSomethingIsTrueThenItShouldReturnSomeSpecificResult
  • ShouldReturnSomeSpecificResult

Often, just formulating the test name quickly reveals multiple expectations: if you need an "And" in the name, the test probably wants to be split. What do you think?
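
For instance, reusing the hypothetical ConnectionStringCleaner from the sketch above, the Given...Then... convention might look like this (the scenario is made up for illustration):

[TestMethod]
public void GivenConnectionStringHasSchemaTranslationThenItShouldReturnOneTranslation()
{
    var cleaner = new ConnectionStringCleaner();

    var translations = cleaner.GetSchemaTranslations("Server=.;Schema=old;Database=db");

    // One name, one expectation: the single assert mirrors the single responsibility.
    Assert.AreEqual(1, translations.Count());
}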





Published at DZone with permission of Juri Strumpflohner. See the original article here.

Opinions expressed by DZone contributors are their own.
