Regression Dandelions Be Gone...
Do you like dandelions? I hate them. You mow or pluck a few, yet many more crop up. That usually happens because you left some behind; they matured and spread their seeds. What you need to do is pull them out by the roots with a weed plucker. That way you reduce the occurrence of weeds, and dandelions in particular. Of course, this is hard :)
So what did you miss? Leaving aside the argument about simple and stupid components, you missed covering those new or updated use cases. Now you have broken dependencies and chain reactions therein! Bloated UI code breaking is just a symptom. You were supposed not only to update the tests, but also to make sure the tests were representative of the use cases and workflows, i.e., primarily of user flows. Then complete the tests and rerun the suites to seek out regressions. This is, more or less, Behavior-Driven Development, or BDD. This cycle has to be followed ruthlessly every time you touch the code. That way you can easily catch behaviors that broke due to code changes. Silly unit test failures should be expected, but the idea is to catch them well before integration tests or the manual QA testing cycle comes into play. On top of this, make sure you have good coverage numbers, confirming that your unit tests hit most of the code branches. This builds confidence within the team.
I have paid a lot of attention to unit testing over the years, but BDD was not on my radar for the longest time. We just looked at coverage numbers and tests on new code, but often got lazy about the bigger picture. The crux is that when you add features to your product, they usually add to and/or affect user behaviors. Someone has to keep an eye on that from an engineering point of view and make sure the tests reflect it, with every release. Someone has to play the role of what I term the “Night Watchman!”
So here are some practical mini-strategies that have worked well in my experience:
If cost and time permit, use every development iteration as an opportunity to refactor UI code into smaller, more manageable chunks.
Assuming you are Agile, make it a point that in every sprint a team member steps up to become the night watchman. Rotating this role helps team members, especially junior ones, become empathetic toward end users, the QA team, and their own time!
Get the developers on the same page so that they write unit tests that are as representative of a user task as possible.
The night watchman works with the Product Owner, and/or simply references well-written QA test suites, to code-review the unit tests.
Track regression metrics over time and determine which code is most susceptible to breakage. That code is the prime candidate for your next refactor.
Tag-team and review each other's test suites just as you would review code. Over time, these tests become living artifacts.
Now, there are other theories and tools that help you follow these strategies, but I have found BDD to be the one that gets you closest to how the user thinks and performs his/her tasks. If our tests are written to complement that, we have a better chance of clamping down on regressions with successive development iterations. It also adds speed in iterations where the engineering effort is lighter. I have been using Jasmine to write BDD-style tests for my AngularJS code. Give it a try if you have not already!