Why Developers Don’t Use TDD
A look at the common reasons given for avoiding Test-Driven Development, and how to overcome these excuses to reap its benefits.
Although the number of acronyms in the programming industry has probably already exceeded the number of stars visible in a moonless night sky, only a subset has gained popularity and recognition. TDD definitely belongs to this group. Judging by the numerous conference talks, books, podcasts, and blog posts, Test-Driven Development is undeniably a widely known technique. Yet when you consider its adoption and actual usage, the reality looks a bit different. In this article, we will look at the reasons for avoiding TDD given by candidates during technical interviews and examine whether they are real obstacles.
1. “Management Doesn’t Allow Us”
The question here is why management decides whether TDD should be used by developers. Does a doctor ask the hospital director for permission to use medical gloves? Of course not, because he uses them for his own protection. Nor does he consult on each drug he prescribes to his patients. As a specialist, he is allowed to make his own decisions in his area of accountability.
By the same token, developers can look at their toolset and techniques. Your goal is to deliver software effectively, and as a professional, it’s your responsibility to figure out how to achieve that. Nothing prevents you from making a dry run on a small feature to assess whether the technique works for you or is more of a ball and chain.
Nontechnical managers often can’t grasp what TDD is all about. Many of them associate the word “test” with quality assurance, and since many organizations have dedicated QA people for that job, they don’t understand why developers should duplicate the effort. There is little point in trying to explain the difference. After adding a new feature, you probably don’t explain to your manager which data structures were chosen for the task, because that isn’t the level of detail they are interested in. TDD belongs to the same level.
2. “Not Enough Time to Write Tests”
The second excuse (which applies to all kinds of automated testing, not only TDD) is very similar to the first one, but it deserves a separate look because the underlying reason might differ. While managers often try to set tight deadlines, developers are responsible for tempering that exaggerated optimism and pushing back when appropriate. You are the best person to assess the amount of work needed to complete a task. Make sure your estimates include writing automated tests.
The issue here is that developers like to impress others with their effectiveness and sometimes accept more work than can actually be completed in the estimated time. If a task takes longer than initially expected, instead of admitting the underestimation, they often give up on writing tests in order to meet the “promised” delivery date. If you don’t postpone writing tests for a feature until the end, but write them alongside production code as TDD suggests, you will never have the chance to drop testing.
3. “My Team Disagrees on Whether We Should Use TDD”
When working on a software project, some decisions affect every team member (like picking a framework or a package structure) and should be agreed on by everyone or settled by a team leader. But is TDD one of those decisions? Is it possible for only a few team members to work with TDD?
In one of Uncle Bob’s articles, we can read that it’s rather impossible. If part of such a team values TDD and the rest relies on integration tests, the end result will always be the same: separation. Eventually, developers will seek out teams that share their values, if necessary in a different company.
Fortunately, there is also a positive variant of disagreement that Uncle Bob didn’t mention. Some teams split between those who value automated tests and those who don’t write any tests at all. In such a scenario, TDD can be successfully introduced by some team members without starting a conflict. There is a chance that, by setting an example, they will get more teammates interested in TDD over time.
4. “Return on Investment in TDD Isn’t Proven”
Many people ask for scientific proof of the effectiveness of TDD, and they have every right to. After all, they are going to spend their own time on the technique, so evaluating the investment seems reasonable. The claim that evidence is lacking isn’t correct, however, as there are several trustworthy sources that can be presented to TDD skeptics.
The first is a 2003 paper from North Carolina State University by Boby George and Laurie Williams. TDD opponents often demonize the technique as extremely time-consuming, but this study found that the group of TDD practitioners spent only about 16% more time on the overall development process.
Another experiment, documented in 2008 by Nachiappan Nagappan et al. and conducted on three Microsoft development teams and one IBM team, showed a significant decrease (between 40% and 90%) in pre-release defect density in comparison with similar projects.
You can find many more studies on the web, but we can’t forget that it’s impossible to generalize: each software project is different, some domains are hard to compare, and hence the exact numbers will vary from case to case. The real takeaway from such papers is that TDD observably benefits software quality, but you should decide whether the required time investment is a price worth paying in your particular case. When a brand can suffer because of bugs in its software, it’s better to find them as early as possible; for back-office tools, fixing issues found by users is probably more acceptable. Test-Driven Development isn’t a silver bullet.
5. “We Tried, but It Didn’t Work”
A bad learning experience can effectively discourage software developers from using any technique. Such an effect can be observed when TDD is applied to legacy code that wasn’t designed with testability and modularity in mind. God objects, which commonly occur in older systems, are sworn enemies of unit tests. Legacy code isn’t the best place to evaluate TDD.
The problem also appears when TDD is applied to code that doesn’t need unit tests. Purists may say you should aim for 100% code coverage, but in real life such a goal is a waste of time. Every application consists of business logic and rather brainless glue code. While TDD is great for the former, for the latter it’s like using a sledgehammer to crack a nut. Such code usually coordinates a sequence of operations and may have many dependencies, requiring a lot of mocks to test. A long list of mocks and indirect assertions can also leave a bad impression of TDD, which in this case is simply a misuse of the technique.
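The contrast is easy to see in code. Below is a minimal sketch (the function names and the order-syncing scenario are invented for illustration): a pure business-logic function is a natural fit for a test-first workflow, while a test for coordinating glue code degenerates into a pile of mocks that merely restate the implementation.

```python
import unittest
from unittest import mock

# Business logic: a pure function with no dependencies -- a natural fit for TDD.
def net_price(gross, vat_rate):
    """Strip VAT from a gross price, rounded to two decimal places."""
    return round(gross / (1 + vat_rate), 2)

class NetPriceTest(unittest.TestCase):
    def test_strips_vat(self):
        self.assertEqual(net_price(123.0, 0.23), 100.0)

# Glue code: it only coordinates collaborators, so a unit test needs a mock
# for every dependency and ends up asserting the call sequence itself.
def sync_orders(repo, client):
    for order in repo.pending():
        client.send(order)

class SyncOrdersTest(unittest.TestCase):
    def test_sends_each_pending_order(self):
        repo, client = mock.Mock(), mock.Mock()
        repo.pending.return_value = ["order-1", "order-2"]
        sync_orders(repo, client)
        client.send.assert_has_calls([mock.call("order-1"), mock.call("order-2")])
```

The second test passes, but it adds little confidence: it can only break when the implementation it mirrors is changed.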
6. “Slow Build Process”
One possible reason for slow tests is the complexity of their setup. If the units selected for testing are too big, preparing the starting state may take a significant amount of time. Slow-running tests can be treated as a red flag indicating that you should look more carefully at the application’s design and assess the coupling between the elements under test. The longest-running tests can be easily identified and addressed separately.
It’s also worth mentioning that there is no reason to run all the application’s tests as part of the red-green-refactor cycle. In a properly designed project structure, there shouldn’t be a problem selecting only those tests dedicated to the currently changed part of the code base.
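With Python’s standard `unittest` module, for example, narrowing the run to one suite is a one-liner. The pricing module below is a hypothetical stand-in for whatever part of the code base is currently being changed:

```python
import unittest

# Hypothetical business logic in the module currently under change.
def discounted_cents(price_cents, percent_off):
    """Apply an integer percentage discount, rounding down to whole cents."""
    return price_cents * (100 - percent_off) // 100

class PricingTests(unittest.TestCase):
    def test_applies_percentage_discount(self):
        self.assertEqual(discounted_cents(1000, 25), 750)

# During the red-green-refactor loop, load and run only this focused suite
# instead of the application's whole test tree.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PricingTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Most test runners offer the same kind of filtering from the command line (by module, class, or name pattern), so the fast inner loop doesn’t have to pay for the slow integration suites.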
Another cause of slow-running tests is their total number. As already mentioned, some parts of every application simply don’t need tests. Boosting the code coverage metric is simply lying to yourself that software quality has improved. Automated tests should focus on code that changes a lot: business logic, calculations, etc. Learn to assess where they aren’t needed.
7. “I Don’t Know All the Requirements Upfront to Write All the Tests”
This is an example of misunderstanding the concept of the technique. TDD isn’t about creating all possible test scenarios for a given code unit before writing production code, but rather about discovering test cases together with the requirements being implemented. The whole beauty of TDD is that new logic can be added gradually and with confidence, because previously covered requirements are guarded against regression by the tests. TDD is an iterative process in which production code and tests are created alternately.
Naturally, there’s nothing wrong with speeding up the cycle by writing a few tests in a row. In his book "Test-Driven Development: By Example," Kent Beck admits that as long as you understand the problem you are trying to tackle, you don’t have to stick strictly to the red-green-refactor cycle. Just remember that you can always slow down while working on more complex challenges if you feel the need.
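The iterative rhythm can be sketched in a few lines. The classic FizzBuzz exercise stands in here for any real requirement; the point is that each test is written only when its requirement comes up, never all upfront:

```python
import unittest

class FizzBuzzTest(unittest.TestCase):
    # Cycle 1, red: a failing test states the first small requirement.
    def test_multiples_of_three_become_fizz(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    # Cycle 2, red: the next requirement is discovered and encoded as a test
    # only after the first one has gone green.
    def test_other_numbers_are_echoed(self):
        self.assertEqual(fizzbuzz(4), "4")

# Green: just enough production code to satisfy the tests written so far.
def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# Refactor: with everything green, the code can be reshaped safely.
# The next cycle would start with a new failing test, e.g. for multiples of five.
```

No test for multiples of five exists yet, and that is fine: it will be written in the cycle that implements that requirement.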
Tips for TDD Adepts
A bad first impression of TDD can be largely avoided by learning from others’ mistakes. If you’re at the beginning of your journey with TDD, here are a few general hints that should simplify the learning process.
- Let’s start with a truism: all beginnings are hard. Expect an initial increase in the time spent developing new features. Learning a new thing, whether it’s a framework, a musical instrument, or an unfamiliar technique, takes time. In order to improve, you need to resist the temptation to fall back on old habits.
- Focus on writing tests before implementing a requirement. It will force you to think about code modularity and influence the way you design classes and their relationships.
- Legacy code isn’t the most pleasant playground for TDD. A new feature shouldn’t cause any problems, but adding tests to an existing monolithic code base can quickly cool your enthusiasm for the technique, as it’s definitely more challenging.
- Aiming for 100% code coverage is a myth, and the reason TDD disbelievers describe it as a waste of time. Concentrate on the most crucial part of your software, where the business logic resides.
- There is plenty of material available on the web, but if you are looking for a recommendation, check out "Test-Driven Development: By Example," written by Kent Beck, a leading TDD advocate.
- Improve your unit testing skills. Frameworks come and go, but time invested in learning how to write proper automated tests will always pay off.
- An opinionated side note: making changes in code that isn’t covered by tests is stressful. TDD is good for your health!
What Would You Add?
The test-first approach has been around for almost 20 years now, yet doubts about its productivity are still quite common. Hopefully, the examples presented here will persuade some of those who hesitate over TDD adoption. If you’ve heard another excuse that isn’t covered by this post, or have a tip for beginners, the comment section is waiting for you. Maybe you’ve experienced harmful effects of the technique and have a story worth sharing? Your knowledge can help other developers avoid problems. Feel invited to the discussion.
Published at DZone with permission of Daniel Olszewski. See the original article here.