Let's Talk About Code Reviews
What to look for when you're reviewing someone else's code, and what to prioritize when you do.
Let’s talk about code reviews. If you take only a few seconds to search for information about code reviews, you’ll see a lot of information about why code reviews are a good thing (for example, this article by Jeff Atwood).
You also see a lot of documentation on how to use code review tools like our very own Upsource.
What you don't see so much of is a guide to the things to look for when you're reviewing someone else's code.
Probably the reason there's no definitive article on what to look for is that there are a lot of different things to consider. And, like any other set of requirements (functional or non-functional), individual organisations will have different priorities for each aspect.
Since this is a big topic to cover, the aim of this article is to outline just some of the things a reviewer could be looking out for when performing a code review. Deciding on the priority of each aspect and checking them consistently is a sufficiently complex subject to be an article in its own right.
What Do You Look For When You’re Reviewing Someone Else’s Code?
It doesn't matter how you're reviewing code: via a tool like Upsource or GitHub pull requests; during someone else's walkthrough or demo of their code; sitting down with a colleague and checking through their changes; or even looking at code on StackOverflow. Whatever the situation, some things are easier to comment on than others. Some examples:
- Formatting: Where are the spaces and line breaks? Are they using tabs or spaces? How are the curly braces laid out?
- Style: Are the variables/parameters declared as final? Are method variables defined close to the code where they’re used or at the start of the method?
- Naming: Do the field/constant/variable/param/class names conform to standards? Are the names overly short?
- Test coverage: Is there a test for this code?
These are all valid things to check to ensure the code is consistent with the rest of your codebase, which makes it easier for team members to understand. You want to reduce context switching between different areas of code and reduce cognitive load, so the more consistent your code looks, the better.
However, having humans check these is probably not the best use of time and resources in your organisation. Although a code review process that checks these kinds of properties will improve the overall quality of your code, many of these checks can be automated. This takes a little bit of work up-front, but pays off very quickly. Tools like Checkstyle, FindBugs, PMD, CodeNarc, Emma, Cobertura, JaCoCo and SonarQube can, especially when combined, check that your code is consistently formatted, that standards around naming and the use of the final keyword are followed, and that common bugs caused by simple programming errors are found. You can even run IntelliJ IDEA’s inspections from the command line, so you don’t have to rely on all team members having the same inspections running in their IDE.
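To make this concrete, here's a small sketch (class and method names are hypothetical) of the kind of conventions in this category. Every rule noted in the comments can be checked mechanically by tools like Checkstyle or PMD, so a human reviewer shouldn't need to comment on them:

```java
// Hypothetical class illustrating the style points that tools can verify
// automatically, freeing reviewers to focus on higher-value feedback.
public class OrderTotals {

    // A constant: tools can check it is static final and SCREAMING_SNAKE_CASE.
    private static final double TAX_RATE = 0.2;

    // Tools can also check that parameters and locals are declared final,
    // that names are descriptive, and that each local is defined close to
    // where it is used rather than at the top of the method.
    public static double totalWithTax(final double netAmount) {
        final double tax = netAmount * TAX_RATE;
        return netAmount + tax;
    }

    public static void main(final String[] args) {
        System.out.println(totalWithTax(100.0)); // prints 120.0
    }
}
```

Once rules like these are encoded in a tool's configuration and run in the build, a failing check blocks the merge without any reviewer having to play style police.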
What Should You Look For?
So far we've only talked about what's easy to look for, and suggested that you use tools other than code review to check those things automatically.
What sort of things are humans really good for? What can we spot in a code review that we can’t delegate to a tool?
It turns out there’s a surprisingly large number of things. This is certainly not an exhaustive list, nor will we go into any one of them in great detail here. Instead, this should be the start of a conversation in your organisation about which things you currently look for in a code review, and what, perhaps, you should be looking for.
Design
- How does the new code fit with the overall architecture?
- Does the code follow SOLID principles, Domain Driven Design and/or other design paradigms the team favours?
- What design patterns are used in the new code? Are these appropriate?
- If the codebase has a mix of standards or design styles, does this new code follow the current practices? Is the code migrating in the correct direction, or does it follow the example of older code that is due to be phased out?
- Is the code in the right place? For example, if the code is related to Orders, is it in the Order Service?
- Could the new code have reused something in the existing code? Does the new code provide something we can reuse in the existing code? Does the new code introduce duplication? If so, should it be refactored to a more reusable pattern, or is this acceptable at this stage?
- Is the code over-engineered? Does it build for reusability that isn’t required now? How does the team balance considerations of reusability with YAGNI?
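As a sketch of the reuse and duplication questions above (class and method names are hypothetical), a reviewer might notice the same validation logic appearing in two services and suggest extracting it, or decide the duplication is acceptable for now:

```java
// Hypothetical example: suppose OrderService and CustomerService each
// contained their own copy of this null/blank check. A reviewer might
// suggest extracting it into a shared helper like this one.
public class Validation {

    // Shared helper replacing the duplicated checks: returns the value
    // when valid, or throws with a message naming the offending field.
    public static String requireNonBlank(final String value, final String field) {
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalArgumentException(field + " must not be blank");
        }
        return value;
    }

    public static void main(final String[] args) {
        System.out.println(requireNonBlank("ORD-123", "orderId")); // prints ORD-123
    }
}
```

The judgment call, which no tool can make for you, is whether extracting the helper now is worth the coupling it introduces, or whether YAGNI says to wait until a third copy appears.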
Readability & Maintainability
- Do the names (of fields, variables, parameters, methods and classes) actually reflect the thing they represent?
- Can I understand what the code does by reading it?
- Can I understand what the tests do?
- Do the tests cover a good subset of cases? Do they cover happy paths and exceptional cases? Are there cases that haven’t been considered?
- Are the exception error messages understandable?
- Are confusing sections of code either documented, commented, or covered by understandable tests (according to team preference)?
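As an illustration of the test-related questions above (the code under test and its names are hypothetical), this is the kind of test a reviewer should be able to read at a glance, covering the happy path, a boundary, and an exceptional case. It uses plain assertions rather than any particular framework:

```java
// Hypothetical example of tests whose names and structure a reviewer can
// follow without effort; run with java -ea to enable the assertions.
public class DiscountTest {

    // The code under test: 10% off orders of 100.0 or more.
    static double applyDiscount(final double amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("amount must not be negative");
        }
        return amount >= 100.0 ? amount * 0.9 : amount;
    }

    public static void main(final String[] args) {
        // Happy path: a large order gets the discount.
        assert applyDiscount(200.0) == 180.0;
        // Boundary: exactly at the threshold still qualifies.
        assert applyDiscount(100.0) == 90.0;
        // Below the threshold: no discount applied.
        assert applyDiscount(50.0) == 50.0;
        // Exceptional case: negative amounts are rejected.
        boolean threw = false;
        try {
            applyDiscount(-1.0);
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        assert threw;
        System.out.println("all cases pass");
    }
}
```

A reviewer reading this can immediately ask the valuable questions: is the threshold boundary the right one, and are there cases (a null currency? a huge amount?) that haven't been considered?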
Functionality
- Does the code actually do what it was supposed to do? If there are automated tests to ensure correctness of the code, do the tests really test that the code meets the agreed requirements?
- Does the code look like it contains subtle bugs, like using the wrong variable for a check, or accidentally using an "and" instead of an "or"?
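Here's a hypothetical sketch of exactly that kind of subtle bug, the sort of thing static analysis often misses but a human reviewer can catch: a copy/paste slip means the upper bound is never actually checked.

```java
// Hypothetical example of a subtle bug for a human reviewer to catch:
// the second comparison reuses the wrong variable.
public class RangeCheck {

    // Buggy: compares against min twice; the author meant value <= max.
    static boolean inRangeBuggy(final int min, final int max, final int value) {
        return value >= min && value <= min;
    }

    // Corrected version.
    static boolean inRange(final int min, final int max, final int value) {
        return value >= min && value <= max;
    }

    public static void main(final String[] args) {
        System.out.println(inRangeBuggy(0, 10, 5)); // prints false -- the bug
        System.out.println(inRange(0, 10, 5));      // prints true
    }
}
```

The buggy version compiles cleanly and even passes a test that only checks values outside the range, which is why a reviewer's fresh eyes on the boolean logic are so valuable.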
Have You Thought About…?
- Are there potential security problems with the code?
- Are there regulatory requirements that need to be met?
- For areas that are not covered with automated performance tests, does the new code introduce avoidable performance issues, like unnecessary calls to a database or remote service?
- Does the author need to create public documentation, or change existing help files?
- Have user-facing messages been checked for correctness?
- Are there obvious errors that will stop this working in production? Is the code going to accidentally point at the test database, or is there a hardcoded stub that should be swapped out for a real service?
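As a sketch of that last point (the URLs and environment-variable name here are invented for illustration), a reviewer should flag a connection string hardcoded to a test environment and suggest resolving it from configuration instead:

```java
// Hypothetical example of an error that only bites in production:
// a database URL hardcoded to a test environment.
public class DataSourceConfig {

    // Buggy: always points at the test database, whatever the environment.
    static String jdbcUrlBuggy() {
        return "jdbc:postgresql://test-db.internal:5432/orders";
    }

    // Better: resolve the URL from configuration, failing fast if unset,
    // so a misconfigured deployment is caught at startup rather than by
    // silently writing production data to the test database.
    static String jdbcUrl(final java.util.Map<String, String> env) {
        final String url = env.get("DATABASE_URL");
        if (url == null) {
            throw new IllegalStateException("DATABASE_URL is not configured");
        }
        return url;
    }

    public static void main(final String[] args) {
        final java.util.Map<String, String> env =
                java.util.Map.of("DATABASE_URL", "jdbc:postgresql://prod-db:5432/orders");
        System.out.println(jdbcUrl(env)); // prints the configured URL
    }
}
```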
What makes “good” code is a topic that every developer has an opinion on. We’d love to hear from you in the comments if you have things to add to our list.
Published at DZone with permission of Trisha Gee , DZone MVB. See the original article here.