With Code Metrics, Trends Are King

Why code metrics are important for really understanding what's going on with your code, and why "getting to zero" is a misguided approach to dealing with your errors.

Erik Dietrich
Jan. 07, 16 · Opinion


Here’s a scene that’s familiar to any software developer.  You sit down to work with the source code of a new team or project for the first time, pull the code from source control, build it, and then notice that there are literally thousands of compiler warnings.  You shudder a little and ask someone on the team about it, and he gives a shrug that is equal parts guilty and “whatcha gonna do?”  You shake your head and vow to get the warning situation under control.

If you’re not a software developer, what’s going on here isn’t terribly hard to understand.  The compiler is the thing that turns source code into a program, and the compiler warning is the compiler’s way of saying, “you’ve done something icky here, but not icky enough to be a show-stopping error.”  If the team’s code has thousands of compiler warnings, there’s a strong likelihood that all is not well with the code base.  But getting that figure down to zero warnings is going to be a serious effort.

As I’ve mentioned before on this blog, I consult on different kinds of software projects, many of which are legacy rescue efforts.  So sitting down to a new (to me) code base and seeing thousands of warnings is commonplace for me.  When I point the runaway warnings out to the team, the observation is generally met with apathetic resignation, and when I point it out to management, the observation is generally met with some degree of shock.  “Well, let’s get it fixed, and why is it like this?!”  (Usually, they’re not shocked by the idea that there are warts — they know that based on the software’s performance and defect counts — but by the idea that such a concrete, easily tracked metric exists and is being ignored.)

Getting to Zero

At this point, I usually recommend against a targeted, specific effort to get down to zero warnings.  Don’t get me wrong; having zero warnings is a good goal.  But jamming on the development brakes and working tirelessly to bring the count to zero is fraught with potential problems.

  • Some of the warnings may require serious redesign.
  • Addressing some of the warnings may create production risk, particularly if testing is not comprehensive.
  • There is no business value, per se, to fixing compiler warnings.
  • The effort involved in getting to zero may be a lot more significant than anyone realizes up front.
  • “Get to zero” is easily gamed by altering the warning settings of the compiler for this code base.

The easiest time to address a warning is at the moment that it’s introduced.  “Oh, this line of code I just wrote results in a compiler warning, so I should write it in a different way.”  If you write that line of code and then don’t notice that it’s generated a warning, not only will it fall out of your short term memory at some point, but you will probably write other lines of code that depend on that one, either explicitly or implicitly.  The easy reconsideration calcifies in the code and becomes harder to extract later.

It is at this point that managers wonder why people let warnings go and how they don’t notice them.  And this is where the number of warnings comes in.  If you’re working in a warning-free code base, a compile that generates a warning will be memorable.  You’ll notice the warning and fix it.  On the other hand, if you’re working in a code base with thousands of warnings, the number will constantly be changing, and it’s unlikely that you’ll even notice whether it was you or someone else who added warning number 3,494.  It’s such a daunting figure that you’d probably only notice the introduction of dozens or even hundreds of new ones.

So for a team with a considerable number of compiler warnings, the most sensible approach is probably a slow, steady drawing down of warnings with each iteration/release of the software.  Putting development on hold to engage in a massive cleanup is unlikely to be practical, but setting a course for general improvement is not only practical — it’s essential.
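One hedged way to automate that steady drawdown is a "ratchet" in the build: the build fails only if the warning count rises above the best count recorded so far, and the recorded best tightens whenever the count drops.  This is an illustration, not something from the original article — the function name and the idea of wiring it into CI are my own sketch.

```python
def check_ratchet(current: int, baseline: int) -> tuple[bool, int]:
    """Return (build_ok, new_baseline).

    The build fails only when the warning count rises above the baseline;
    when the count drops, the baseline tightens so the improvement is
    locked in and the team can never backslide.
    """
    if current > baseline:
        return False, baseline           # regression: fail the build
    return True, min(current, baseline)  # hold or improve: ratchet down

# A team at 3,494 warnings can't add more, but is never forced to stop
# feature work and fix everything at once.
ok, baseline = check_ratchet(3400, 3494)  # an improvement tightens the baseline
```

The point of the ratchet is that it demands nothing heroic: holding steady always passes, and every improvement becomes the new floor.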

Code Metrics Over Time

The compiler warning count is one of the simplest metrics for a code base, which is why I’m talking about it here.  But metrics, even simple ones, tend not to tell a compelling story when measured in a vacuum.  Is 3,494 warnings an excessive number?  Well, sure, assuming the team has all along been shooting for none.  But if the team only recently regarded this as worth paying attention to, and has pared that down from 25,000 over the course of a few months, then it’s doing a pretty good job.

For this reason, I strongly recommend that teams set up static analysis solutions that show trending, and that they evaluate themselves based on the trends.  So for a group 'starting' at 25,000 warnings, the developers can chase a steady decline, and for a group starting at 0 warnings, the presence of even one is clear and visible.
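Trending doesn't require a heavyweight tool, either.  Recording the count once per iteration and checking the slope of the series is enough to tell the two groups apart.  A minimal sketch — the per-iteration history here is invented for illustration:

```python
def trend_slope(counts):
    """Least-squares slope of a metric series; negative means improving."""
    n = len(counts)
    mean_x = (n - 1) / 2               # mean of indices 0..n-1
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(counts))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

warnings_per_iteration = [25000, 18000, 12000, 7000, 3494]  # invented history
assert trend_slope(warnings_per_iteration) < 0  # steadily improving
```

The absolute number at the end of that series still looks alarming; the slope tells the real story.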

This applies to other metrics that you may want to capture as well.  If you measure test coverage (a practice of which I’m not necessarily a huge fan), whether 50% coverage is “good” or “bad” is going to depend on how much coverage you had a week or a month ago.  What you’re really looking for isn’t being able to point to a number on some readout and say “we have 95% coverage.”  What you need to know is whether code is being added alongside tests and whether previously untested legacy code is being characterized.
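The same trend-first logic can be turned into a simple gate for coverage: compare the latest figure to the previous one rather than to some absolute target.  A sketch, under the assumption that per-build percentages are already being recorded somewhere; the `tolerance` knob is an invention of this example:

```python
def coverage_gate(history_pct, tolerance=0.5):
    """Pass unless the latest coverage reading dropped more than `tolerance`
    percentage points below the previous one; the absolute value is ignored.
    """
    if len(history_pct) < 2:
        return True  # nothing to compare against yet
    return history_pct[-1] >= history_pct[-2] - tolerance
```

Whether 50% is "good" never enters into it; only the direction of travel does.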

If you want something to improve, the first thing to do is measure it, and the second thing to do is continue to measure it.  So sit down with your team and decide on some goals and how to measure progress toward them.  This is important whether you’re a brand new, green-field team or a well-established team in maintenance mode.  And with the state of code analysis tools these days, there’s a way to measure pretty much anything you can dream up.

[Chart: metric (unit) trends over time]

Published at DZone with permission of Erik Dietrich, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
