What Should Developers Test?
Now that we've established that testing in DevOps is important, we need to decide who should test what.
Where Should We Draw the Line Between QA and Developer?
At a recent QA strategy meeting, one of our team members brought up that she had been writing unit tests for her team. She asked if any other test engineers were writing unit tests.
Everyone shook their head no.
Even on a team with plenty of quality resources, a developer still needs to do some testing. Ask a developer not to test that their code compiles before committing and see what they say.
The pipe dream of many software developers when they first get a test engineer is that they will never have to test again. Obviously, the compilation example above is a bit facetious, but it demonstrates that somewhere between “check that code compiles” and “full functional and usability testing,” there is a transition of testing responsibility from the developer to QA.
So, what is best tested by the developer of the code, and what is best left to a dedicated tester?
First, this cannot be a universal rule set. Different applications and different organizations have different needs. For example, in an organization where the tester is largely responsible for automating testing, a larger part of the manual testing might fall to the developer. When there are only a few cases that need manual testing, it doesn’t make sense to waste time with a handoff to QA.
My opinions here will be what works in very general cases. There won’t be an exact line to draw, but I hope the reasoning should be useful in guiding roles and responsibilities.
Goals of a Successful Testing Strategy
So, what is that reasoning? In order to figure out who tests what, we need to establish goals for a decent development pipeline. These are parts we need to optimize for development to move swiftly, with good quality:
- Left shift – Ignoring the jargon, it should be clear that the sooner bugs or quality issues are found, the easier and cheaper they are to fix.
- Appropriate overall quality – code shouldn’t release to users with major issues.
- Quick development – developers should not be unduly slowed by the release pipeline and testing.
- People don’t work below their pay grade – Developers are expensive. Someone with a doctorate in machine learning should not be clicking the same buttons over and over for an hour.
Those are the ones I can think up. I’m sure there are others that smarter people could come up with, but these tenets get us 90% of the way there.
Well then, how do some of the broad types of testing fit with these goals?
Usability Testing
Usability testing should be facilitated by the tester or UI/UX designer, but it should be done as early as possible, and so it requires involvement from the developer.
Some usability testing, like A/B testing with mock-ups, can be done solely by the user experience designers. However, they would be remiss if they didn’t work with the developer to some extent. It’s terrible to settle on a beautiful layout that the developers can’t build without going way over budget.
The main reasons for this are goals two and three. Color and layout decisions benefit heavily from having a trained UI designer, rather than a coder, create the styling. UI design can require large changes to the development project though, so in the name of swiftness, it should be done as soon as usability testing is possible.
Unit Testing
Unit testing should almost always fall to the developer to write.
Separating unit tests from the developer slows down development with an unneeded hand-off to the tester. It also delays finding issues that would be trivial to fix if the developer had noticed them before the handoff. The result is more bugs found later and fixed more slowly.
A common argument against this is that it violates goal four. Unit tests are simple to write, and as part of other testing, a QA resource can do them just as well. That frees up the developer to work on other production code.
This is to some degree true, but right-shifting bugs isn’t worth it.
Even if it were worth it, unit testing typically falls into the bucket of making sure that code works on a basic level, which every developer benefits from before handing off to a tester.
Lastly, unit testing’s value is largely in documentation. The developer needs to be the one documenting the intent of the code that they wrote. The tester can only guess the intent or ask the developer. In the first case they could be wrong; in the second, the developer probably spends more time explaining than they would have spent just writing the tests in the first place.
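To make that documentation point concrete, here is a minimal sketch in Python. The `normalize_username` function and its cases are hypothetical; the idea is that the test names record the developer's intent for each edge case, so a later reader doesn't have to guess.

```python
def normalize_username(raw: str) -> str:
    """Lowercase and strip whitespace so lookups are case-insensitive."""
    return raw.strip().lower()

# Developer-written unit tests: the names and cases document intent.
def test_strips_surrounding_whitespace():
    assert normalize_username("  Alice ") == "alice"

def test_lowercases_mixed_case():
    assert normalize_username("BoB") == "bob"

test_strips_surrounding_whitespace()
test_lowercases_mixed_case()
print("unit tests passed")
```

A tester reading these tests later learns, without asking anyone, that case-insensitivity and whitespace handling were deliberate decisions, not accidents.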
Functional Testing
Ah, the fun one. When a new feature or bug fix comes in, who should do the functional testing to make sure it works?
The answer to this one is probably the most varied out of the testing types here.
It should be the responsibility of the developer to test that their code works before handing it off. Handing it off without any testing slows down the development stream when the tester notices a typo and must communicate that to the developer.
It’s also just really embarrassing to have a tester find a stupid bug. I know, I’ve been on both sides of that situation, sadly.
It also makes sense to have the QA resource test functional changes, though. I’ve seen some places rely on automation to fill that gap, but the tester still writes those automated tests because of their testing experience. They should be using that experience to help satisfy goal two: improving overall code quality.
Having the tester expand on the developer's testing also lets the developer get back to production coding, while QA runs tests that don’t necessarily require as much technical knowledge.
So overall, I think it’s best if functional testing is shared. The developer needs to make sure it works to some degree, but the quality resource should be using their testing knowledge to ferret out non-obvious issues.
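As a sketch of how that shared responsibility can look in practice, consider a hypothetical `apply_discount` function: the developer runs a quick happy-path check before hand-off, and the tester layers on the boundary and error cases that experience says tend to break.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Developer's pre-hand-off smoke check: the happy path works.
assert apply_discount(100.0, 25) == 75.0

# Tester's additions: non-obvious boundaries and error handling.
assert apply_discount(80.0, 0) == 80.0
assert apply_discount(80.0, 100) == 0.0
try:
    apply_discount(50.0, 150)
except ValueError:
    pass  # invalid input correctly rejected
else:
    raise AssertionError("invalid percent should be rejected")

print("functional checks passed")
```

The split keeps the embarrassing typo-level bugs from ever reaching QA, while the tester's cases probe the edges the developer is least likely to think of.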
Performance Testing
I split performance testing into two parts. First, code efficiency, involving elements like runtime and memory footprint. Second, real-world efficiency, or how the software responds on real servers with real data.
The first part, code efficiency, is typically up to the developer. They are the ones who will have an idea of whether the code has changed in a way that could affect runtime or memory. If they suspect their changes might hurt performance, they should run tests before handing off the code to avoid any performance deal-breakers.
The second, and probably more important part, is testing how the system performs with real world setup. That is, how well does the system perform with a database in another state, real network latency, a browser on a non-dev computer, etc.?
That second part necessarily must happen a little later in the pipeline than other types of testing, so there’s less harm in waiting to let QA handle it. If your tester is experienced in performance testing as well, they may be able to design a test that provides more accurate performance metrics. All this satisfies goals one, two, and three.
The time where this doesn’t make sense is if the tester does not have performance testing experience. Performance testing is inherently technical, so if QA is not comfortable with that then the developer must be the one to do it.
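For the code-efficiency part, the developer's pre-hand-off check can be as lightweight as a `timeit` comparison of two candidate implementations. This is an illustrative sketch, not a rigorous benchmark; the two string-building functions are hypothetical stand-ins for a hot path that changed.

```python
import timeit

rows = list(range(1000))

def concat_loop():
    # Candidate 1: repeated string concatenation in a loop.
    out = ""
    for r in rows:
        out += str(r) + ","
    return out.rstrip(",")

def concat_join():
    # Candidate 2: a single str.join over a generator.
    return ",".join(str(r) for r in rows)

# Sanity check first: both candidates must produce identical output.
assert concat_loop() == concat_join()

loop_t = timeit.timeit(concat_loop, number=500)
join_t = timeit.timeit(concat_join, number=500)
print(f"loop: {loop_t:.3f}s  join: {join_t:.3f}s")
```

A few seconds of this kind of measurement before hand-off catches runtime regressions while the change is still fresh in the developer's head, which is exactly the left shift goal one asks for.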
Regression Testing
In contrast to unit testing, I believe regression testing almost always falls to the QA resource.
The developer should check for basic regressions if they know there is a risky change to an area, but that is getting into a grey area of functional testing.
True regression tests should take advantage of the tester’s experience. They could be run by developers if there are automated tests, but a general suite of regression tests should be designed by the tester.
If automation is used, this doesn’t slow down the pipeline and still satisfies the other goals as well.
If manual testing is required, it still makes sense to use the tester’s experience to target the same test cases an automated suite would cover. However, with a manual testing strategy, the development stream will be slower, and it will be harder to shift left since the tests require QA to run the workflows.
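One shape a tester-designed, automation-friendly regression suite can take is a table of inputs pinned to known-good outputs, runnable by anyone in the pipeline. This is a minimal sketch; the `normalize` function and its frozen cases are hypothetical.

```python
# Each case pins behaviour from the last known-good release;
# any mismatch signals a regression.
REGRESSION_CASES = [
    # (input, expected output frozen from the last release)
    ("  Hello  ", "hello"),
    ("MiXeD", "mixed"),
    ("", ""),
]

def normalize(text: str) -> str:
    # Hypothetical function under regression coverage.
    return text.strip().lower()

def run_regressions():
    """Return a list of (input, expected, got) tuples for every failure."""
    failures = []
    for raw, expected in REGRESSION_CASES:
        got = normalize(raw)
        if got != expected:
            failures.append((raw, expected, got))
    return failures

if __name__ == "__main__":
    bad = run_regressions()
    print("regressions:", bad if bad else "none")
```

Because the tester owns the case table and any developer can run the suite, it catches regressions before hand-off without slowing the pipeline, which is the best of both goals one and three.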
Of course, this isn’t an exhaustive list of testing types, and I left some types intentionally vague. Hopefully, though, that gives some good examples for how to think through whatever testing is done for different development pipelines.
Opinions expressed by DZone contributors are their own.