Why Are You Testing Your Software?
Whether to test or not should be a matter of ROI. It's all about money, not cargo culting or doing whatever everyone around you is doing.
15 years ago, automated tests didn’t exist in the Java ecosystem. One had to build the application and painfully test it manually by using it. I was later introduced to the practice of adding a
main method to every class and putting some testing code there. That was only marginally better, as it still required running the methods manually. Then came JUnit, the reference unit-testing framework in Java, which brought test-execution automation. At that point, I had to convince the teams I was part of to use it to create an automated test harness that would prevent regression bugs. Later, this became an expectation: no tests meant no changes to the code, for fear of breaking something.
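The main-method practice described above looked something like this. The class and figures are hypothetical, purely for illustration; the comment shows the JUnit-style equivalent that automates the same check:

```java
// Sketch of the pre-JUnit practice: ad-hoc testing code in a main method.
// PriceCalculator and its numbers are hypothetical examples.
public class PriceCalculator {

    public int applyDiscount(int price, int percent) {
        return price - price * percent / 100;
    }

    // The "test harness" of the era: run it by hand, read the output.
    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();
        if (calc.applyDiscount(200, 10) != 180) {
            throw new AssertionError("discount computation regressed");
        }
        System.out.println("All checks passed");
        // The JUnit equivalent automates execution and reporting:
        // @Test void applyDiscount() { assertEquals(180, calc.applyDiscount(200, 10)); }
    }
}
```

The difference is not the assertion itself but who runs it: JUnit lets a build tool execute every such check on every change, which is what makes a regression harness practical.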
More recently, however, I have sometimes had to advocate for the opposite: do not write too many tests. Yes, you read that right, and yet I’m no turncoat. The reason lies in the title of this post: why are you testing your software? The answer may sound obvious, but it’s not, and it is tightly coupled to the concept of quality.
In the context of software engineering, software quality refers to two related but distinct notions that exist wherever quality is defined in a business context:
- Software functional quality reflects how well it complies with or conforms to a given design, based on functional requirements or specifications. That attribute can also be described as the fitness for purpose of a piece of software or how it compares to competitors in the marketplace as a worthwhile product. It is the degree to which the correct software was produced.
- Software structural quality refers to how it meets non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability. It is the degree to which the software was produced correctly.
With those definitions in mind, consider the reasons people commonly give for testing:

- Because everyone does it.
- Because the boss/the lead/colleagues/authority figures say so.
- To achieve 100% of code coverage.
- To achieve more code coverage than another team/colleague.
- And so on, and so forth.
All in all, those "reasons" boil down to either plain cargo culting or mistaking a metric for the goal. Which brings us back to the question: why do you test software?
The only valid reason for testing is that, over time, the resources spent making sure the software conforms to its functional and non-functional requirements will be lower than the resources spent dealing with the consequences of not doing so.
That’s pure and simple Return On Investment. If the ROI is positive, do the test; if it’s negative, don’t. It’s as simple as that. Perfect is the enemy of good.
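The ROI rule above can be sketched as a back-of-envelope comparison. All figures and names below are hypothetical placeholders, not data from the article:

```java
// Back-of-envelope sketch of the ROI argument: test when the expected cost
// of untested bugs exceeds the cost of writing the tests.
// All figures are hypothetical placeholders.
public class TestingRoi {

    // Positive value => testing pays for itself; negative => it does not.
    static double testingRoi(double costOfWritingTests,
                             double bugProbabilityWithoutTests,
                             double costPerProductionBug) {
        double expectedBugCost = bugProbabilityWithoutTests * costPerProductionBug;
        return expectedBugCost - costOfWritingTests;
    }

    public static void main(String[] args) {
        // Banking-style app: production bugs are expensive, so testing wins.
        System.out.println(testingRoi(10_000, 0.8, 50_000) > 0); // true
        // One-shot event app: bugs are cheap and the app short-lived.
        System.out.println(testingRoi(10_000, 0.8, 1_000) > 0);  // false
    }
}
```

The hard part, as the next section argues, is estimating those inputs, not doing the arithmetic.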
The real difficulty lies in estimating the cost of testing vs. the cost of not testing. The following is a non-exhaustive list of ROI-influencing parameters:
- Criticality of the App

In some industries (e.g., medical, airline, or banking), bugs cannot occur without bringing about serious consequences for the business. The stakes are much lower in others, such as mobile gaming.
- Estimated Lifespan of the App
The longer the lifespan of an app, the better the ROI, because the investment in tests is amortized over more changes; e.g., nearly no tests for one-shot, single-event apps vs. more thorough testing for traditional business apps running for a decade or so.
- Nature of the App
Some technologies are more mature than others, allowing for easier automated testing. The testing ecosystem around web apps is richer than around native or mobile apps.
- The Architecture of the App
The more distributed the app, the harder it is to test. In particular, the migration from monoliths to microservices has some interesting side-effects on the testing side: it’s easier to test each component separately, but harder to test the whole system. Also, testing specific scenarios in clustered/distributed environments, such as node failure in a cluster, increases the overall cost.
- Nature and Number of Infrastructure Dependencies
The higher the number of dependencies, the more test doubles are required to test the app in isolation, which in turn drives up testing costs. Also, some dependencies are widespread, e.g., databases and web services, with many tools available, while others, e.g., FTP servers, are not as well served.
- The Size of the App
Of course, the bigger the app, the greater the number of possible combinations that need to be tested.
- The Maturity of the Developers, and the Size of the Team(s)
Obviously, developers range from those who don’t care about testing to those who integrate testing requirements into their code from the start. Also, just as with developers, adding more testers is subject to the law of diminishing returns.
- Nature of the Tests
I don’t want to start a war, so suffice it to say there are many kinds of tests - unit, integration, end-to-end, performance, penetration, etc. Each one is good at one specific thing and has pros and cons. Get to know them and use them wisely.
- The Strength of the Type System
Developing in dynamically-typed languages requires more tests, because they must do by hand the job that the compiler performs in statically-typed languages.
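The test-double parameter listed above can be made concrete with a hand-rolled stub. The `CustomerRepository` interface and the greeting logic are hypothetical stand-ins for a database-backed dependency:

```java
import java.util.Map;

// Hypothetical infrastructure dependency: backed by a database in production.
interface CustomerRepository {
    String findNameById(int id);
}

public class TestDoubleSketch {

    // The stub replaces the database so the logic can be tested in isolation,
    // at the cost of writing and maintaining the double itself.
    static class StubCustomerRepository implements CustomerRepository {
        private final Map<Integer, String> data = Map.of(1, "Alice");

        @Override
        public String findNameById(int id) {
            return data.getOrDefault(id, "unknown");
        }
    }

    static String greet(CustomerRepository repo, int id) {
        return "Hello, " + repo.findNameById(id);
    }

    public static void main(String[] args) {
        CustomerRepository repo = new StubCustomerRepository();
        System.out.println(greet(repo, 1)); // Hello, Alice
    }
}
```

Each additional dependency (message broker, FTP server, third-party API) needs its own double of this kind, which is exactly how the number of dependencies drives up testing cost.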
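The type-system parameter above can be illustrated with a trivial example. In Java, the commented-out call simply does not compile, whereas a dynamically-typed language would need an explicit test to catch the same mistake:

```java
// Illustration of the type-system point: the compiler rejects type errors
// that a dynamically-typed language would need a unit test to catch.
public class TypeCheckSketch {

    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // 5
        // add("2", 3);  // does not compile: here, the type system is the test.
        // In a dynamic language, the equivalent call fails only at runtime,
        // so a test must exercise that path explicitly.
    }
}
```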
While it’s good to listen to others’ advice, including well-established authority figures (and this post), it’s up to every delivery team to draw the line between not enough testing and too much testing according to its own context.
Published at DZone with permission of Nicolas Frankel.