Why Automate Code Reviews?
Code reviews are as controversial and popular as ever. Find out why I think your review process should be primarily automated.
Editorial Note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. This is a new partner for whom I’ve started writing recently. They offer automated code review and documentation tooling in the .NET space, so if that interests you, I encourage you to take a look.
In the world of programming, 15 years or so of professional experience makes me a grizzled veteran. That certainly does not hold for the workforce in general, but youth dominates our industry via the absolute explosion of demand for new programmers. Given the tendency of developers to move around between projects and companies, 15 years have shown me a great deal of variety.
Perhaps nothing has exemplified this variety more than the code review. I’ve participated in code reviews that were grueling, depressing marathons. On the flip side, I’ve participated in ones where I learned things that would prove valuable to my career. And I’ve seen just about everything in between.
Our industry has come to accept that peer review works. In the book Code Complete, author Steve McConnell cites it, in some circumstances, as the single most effective technique for avoiding defects. And, of course, it helps with knowledge transfer and learning. But here’s the rub — implemented poorly, it can also do a lot of harm.
Today, I’d like to make the case for the automated code review. Let me be clear: I do not view this as a replacement for any manual code review, but as a supplement and another tool in the tool chest. But I will say that automated code review carries less risk of negative consequences than its manual counterpart.
I mentioned extremely productive code reviews. For me, this occurred when working on a team with those I considered friends. I solicited opinions, got earnest feedback, and learned. It felt like a group of people working to get better, and that seemed to have no downside.
But I’ve seen the opposite, too. I’ve worked in environments where the air seemed politically charged and competitive. Code reviews became religious wars, turf battles, and arguments over minutiae. Morale dipped, and some people went out of their way to find ways not to participate. Clearly, no one would view this as a productive situation.
With automated code review, no politics exist. Your review tool is, of course, incapable of playing politics. It simply carries out its mission on your behalf. Automating parts of the code review process — especially something relatively arbitrary, such as coding standards compliance — gives a team far fewer opportunities to posture and bicker.
Learning May Be Easier
As an interpersonal activity, code review carries some social risk. If we make a silly mistake, we worry that our peers will think less of us. This dynamic is mitigated in environments with a high trust factor, but it exists nonetheless. In more toxic environments, it dominates.
Having an automated code review tool creates an opportunity for consequence-free learning. Just as the tool plays no politics, it offers no judgment. It just provides feedback, quietly and anonymously.
Even in teams with a supportive dynamic, shy or nervous folks may prefer this paradigm. I’d imagine that anyone would, to an extent. An automated code review tool points out mistakes via a fast feedback loop and offers consequence-free opportunity to correct them and learn.
So far, I’ve discussed ways to cut down on politics and soothe morale, but practical concerns also bear mentioning. An automated code review tool necessarily lacks the judgment that a human has, but it is far more thorough.
If your team only performs peer review as a check, it will certainly catch mistakes and design problems. But will it catch all of them? Might a reviewer overlook a possible null dereference or an empty catch block? If you automate those checks, the answer becomes "no" — the tool never overlooks them.
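To make the "empty catch block" example concrete, here is a minimal sketch of how such a check can run mechanically, with no human attention required. This is an illustration in Python (the principle is language-agnostic), not a depiction of any particular product; the `load_config` snippet it inspects is invented for the demo.

```python
import ast

# A hypothetical snippet under review, containing a silently swallowed error.
SOURCE = """
def load_config(path):
    try:
        return open(path).read()
    except OSError:
        pass
"""

def find_empty_handlers(source):
    """Return line numbers of except blocks whose body is only `pass`."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler)
        and all(isinstance(stmt, ast.Pass) for stmt in node.body)
    ]

# Flags the empty handler every single time -- no reviewer fatigue involved.
print(find_empty_handlers(SOURCE))
```

A human reviewer might miss this on the thirtieth file of the day; the script cannot.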
For the items in a code review that you can automate, you should, for the sake of thoroughness.
Saving Resources and Effort
Human code review requires time and resources. The team must book a room, coordinate schedules, use a projector (presumably), and assemble in the same location. Of course, allowing for remote, asynchronous code review mitigates this somewhat, but it can’t eliminate the salary dollars spent on the activity. However you slice it, code review represents an investment.
In this sense, automating parts of the code review process has a straightforward business component. Whenever possible and economical, save yourself manual labor through automation.
When there are code quality and practice checks that can be done automatically, do them automatically. And it might surprise you to learn just how many such things can be automated.
Improbable as it may seem, I have sat in code reviews where people argued about whether or not a method would exhibit a runtime behavior, given certain inputs. “Why not write a unit test with those inputs?” I've asked. Nobody benefits from humans reasoning about something the build, the test suite, the compiler, or a static analysis tool could tell them automatically.
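To illustrate that point, here is the kind of test that settles such an argument in seconds. The function `normalize_discount` and its clamping behavior are hypothetical stand-ins for whatever method the team was debating; the shape of the test is what matters.

```python
import unittest

def normalize_discount(percent):
    """Hypothetical function whose edge-case behavior the team debated."""
    if percent < 0:
        return 0
    return min(percent, 100)

class NormalizeDiscountTest(unittest.TestCase):
    # Each disputed input becomes a test case; the suite answers, not the room.
    def test_negative_input_clamps_to_zero(self):
        self.assertEqual(normalize_discount(-5), 0)

    def test_oversized_input_clamps_to_hundred(self):
        self.assertEqual(normalize_discount(150), 100)

if __name__ == "__main__":
    unittest.main()
```

Once the test exists, the question never needs to be re-argued in a review again.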
As I’ve mentioned throughout this post, automated code review and manual code review do not directly compete. Humans solve some problems better than machines, and vice-versa. To achieve the best of all worlds, you need to create a complementary code review approach.
First, understand what can be automated, or, at least, develop a good working framework for guessing. Coding standard compliance, for instance, is a no-brainer from an automation perspective. You do not need to pay humans to figure out whether variable names are properly cased, so let a review tool do it for you. You can learn more about the possibilities by simply downloading and trying out review and analysis tools.
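As a toy demonstration of why casing checks are a no-brainer to automate, here is a sketch that flags assigned variable names violating a snake_case convention. Again, this is an invented Python illustration (the convention and the snippet it scans are assumptions), not how any specific tool works.

```python
import ast
import re

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

# A hypothetical snippet under review, with one mis-cased name.
SOURCE = """
total_price = 10
RetryCount = 3
user_id = 42
"""

def casing_violations(source):
    """Return (line, name) pairs for assigned names that are not snake_case."""
    tree = ast.parse(source)
    bad = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and not SNAKE_CASE.match(target.id):
                    bad.append((node.lineno, target.id))
    return bad

print(casing_violations(SOURCE))
```

A few lines of code do this check perfectly, every time, for free — which is exactly why no human should spend review minutes on it.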
Second, socialize the tooling with the team so that they understand the distinction as well. Encourage them not to waste time making a code review a matter of checking things off a list. Instead, manual code review should focus on architectural and practice considerations. Could this class have fewer responsibilities? Is the builder pattern a good fit here? Are we concerned about too many dependencies?
Finally, I’ll offer the advice that you can adjust the balance between manual and automated review based on the team’s morale. Do they suffer from code review fatigue? Have you noticed them sniping a lot? If so, perhaps lean more heavily on automated review. Otherwise, use the automated review tools simply to save time on things that can be automated.
If you’re currently not using any automated analysis tools, I cannot overstate how important it is that you check them out. Our industry built itself entirely on the premise of automating time-consuming manual activities. We need to eat our own dog food.
Published at DZone with permission of Erik Dietrich, DZone MVB. See the original article here.