
Code Profiling and Unit Testing


How to use code profiling to review code that has no unit tests, and then add those tests.


I recently gave a talk on the role of a Quality Analyst as an organization transitions from waterfall to agile. The talk was titled "Switching Horses in Midstream" and covered a number of topics. One item in particular struck me as worth blogging about: a technique I've been using for years but have never written about. So here we go.

Legacy code bases that lack test coverage are nerve-wracking places to hang out. You never know when a simple abstract method is going to open up a sinkhole of logic and reason, eventually exhausting your resolve until you revert and leave the code dirty.

Whenever I encounter a code base that lacks unit tests, I begin with a technique I call "Code Profiling". The name is a nod to criminal behavioral profiling, where profilers objectively observe behavior and describe the patterns they see.

First of all, you need a naming convention for your unit tests. I don't much care what it is. I have an opinion or two on the matter, but for the purposes of this article and this technique, you just need to pick something and stick with it. Today, I'm going to go with a Should_ExpectedBehavior_When_StateUnderTest pattern, as it lends itself well to this technique.
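To make the convention concrete, here is a minimal JUnit 4 sketch. The Wallet class and its methods are invented purely for illustration:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class WalletTest {

    // Minimal hypothetical class under test, defined inline so the
    // example is self-contained.
    static class Wallet {
        private int balance;
        Wallet(int balance) { this.balance = balance; }
        void withdraw(int amount) {
            if (amount > balance) {
                throw new IllegalArgumentException("insufficient funds");
            }
            balance -= amount;
        }
        int getBalance() { return balance; }
    }

    // Naming pattern: Should_ExpectedBehavior_When_StateUnderTest
    @Test
    public void Should_ReduceBalance_When_ValidWithdrawalIsMade() {
        Wallet wallet = new Wallet(100);
        wallet.withdraw(40);
        assertEquals(60, wallet.getBalance());
    }

    @Test(expected = IllegalArgumentException.class)
    public void Should_ThrowException_When_WithdrawalExceedsBalance() {
        new Wallet(100).withdraw(500);
    }
}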

So let's say we're looking at a postal address class.

MailingAddress
--------------------
Street
Street2
City
StateCode
StateName
PostalCode
Country
--------------------
OutputAsLabel
--------------------

This is a close approximation of a class I encountered at a client a few years back. All of the properties have getters and setters. Pretty basic stuff at first glance.

When we profile the class, we change our naming convention from Should_ExpectedBehavior_When_StateUnderTest to Does_ObservedBehavior_When_StateUnderTest. The key difference here is the "Does" instead of the "Should". The point is to clearly identify these as tests that were written after encountering untested code and are based on observed behavior.

For this particular class, we notice that both StateCode and StateName have getters and setters, and that StateName is optional in the constructor whereas StateCode is required. This is... odd.

After reading the code a bit, we figure it would be entirely possible to set a StateCode and StateName that do not match. We take a look at the constructor and find that it protects against this by looking up the StateCode in a hash. If the StateName does not match, the code silently changes StateName to the match from the hash and moves on. No error. And a little more checking shows that we can update the StateName directly with no change to StateCode.
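To make that behavior easier to follow, here is a rough Java reconstruction of the relevant parts of the class. The field names and lookup table are my guesses at the original, not the client's actual code:

import java.util.HashMap;
import java.util.Map;

public class MailingAddress {

    // The "hash" the constructor consults: state code -> state name.
    private static final Map<String, String> STATES = new HashMap<>();
    static {
        STATES.put("OH", "Ohio");
        STATES.put("IL", "Illinois");
        // ... remaining states
    }

    private String stateCode;
    private String stateName;
    // Street, Street2, City, PostalCode, Country omitted for brevity.

    // StateCode is required; StateName is optional.
    public MailingAddress(String stateCode) {
        this(stateCode, null);
    }

    public MailingAddress(String stateCode, String stateName) {
        this.stateCode = stateCode;
        // A mismatched (or missing) StateName is silently replaced
        // with the value from the lookup. No error is raised.
        this.stateName = STATES.getOrDefault(stateCode, stateName);
    }

    // The setter performs no such correction, so StateCode and
    // StateName can drift out of sync after construction.
    public void setStateName(String stateName) {
        this.stateName = stateName;
    }

    public String getStateCode() { return stateCode; }
    public String getStateName() { return stateName; }
}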

Does_NotError_When_StateMismatchToConstructor...
  
Does_AllowStateMismatch_When_StateNameChanged...
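Filled in against the reconstruction above, those observation tests might read like this. Again, this is a sketch of the technique, not the original client code:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MailingAddressProfileTest {

    @Test
    public void Does_NotError_When_StateMismatchToConstructor() {
        // Observed: the constructor accepts the mismatch and silently
        // corrects StateName from the lookup.
        MailingAddress address = new MailingAddress("OH", "Illinois");
        assertEquals("Ohio", address.getStateName());
    }

    @Test
    public void Does_AllowStateMismatch_When_StateNameChanged() {
        // Observed: the setter applies no correction at all.
        MailingAddress address = new MailingAddress("OH", "Ohio");
        address.setStateName("Illinois");
        assertEquals("OH", address.getStateCode());
        assertEquals("Illinois", address.getStateName());
    }
}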

And here is the subtle significance of this approach. We make no judgment about what "Should" happen. We simply record what "Does" happen. The developers who missed this in the first place were likely thinking about what "Should" happen, and so overlooked these possible issues.

Now run through the tests with your product owner and/or QA. For the tests that describe desired behavior, rename them to fit the standard convention. For those that describe undesired behavior, if any, decide whether they need to be fixed now or can be deferred. It is possible the issue has been in the code for years; while it's annoying, it is not an urgent fix, and it might take a lot of unraveling to actually correct.

Your tests now indicate both the desired behavior and any observed behavior that is not desired but is not entirely "broken".

When you want to bring the behavior into alignment with expectation, I suggest commenting out the "Does" test and writing as many "Should" tests as necessary. Then, un-comment the "Does" test, ensure it fails, and delete it.
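For example, if the product owner decides that changing StateName should correct StateCode the same way the constructor corrects StateName, the replacement "Should" test, added to the same test class, might look like this. The desired behavior here is hypothetical; yours comes out of that conversation:

@Test
public void Should_UpdateStateCode_When_StateNameChanged() {
    MailingAddress address = new MailingAddress("OH", "Ohio");
    address.setStateName("Illinois");
    assertEquals("IL", address.getStateCode());
}

Once this passes against the fixed code, un-commenting Does_AllowStateMismatch_When_StateNameChanged should produce exactly the failure you expect, and the old test can be deleted with confidence.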


Topics:
programming practices ,development ,craftsmanship ,testing ,legacy code

Published at DZone with permission of Michael Norton, DZone MVB.
