Software Development Peer Reviews: Five Lessons Learned From the Experience
If you're looking to implement peer reviews in your development process, read on for an overview of how the process is used and where it works best.
In this article, I want to present five lessons I've learned from applying peer reviews in software engineering activities, in both software development and teaching environments.
I've had the opportunity to apply a peer review process not only in industrial software development settings, but also in academic ones: in undergraduate courses I taught at the Departamento de Informática of the Universidad Técnica Federico Santa María (Chile), in training courses for several enterprises, and in R&D software projects.
Although a complete discussion about peer reviews is out of the scope of this article, we need to remember that a peer reviewing process deals with three main objects:
- A work product subject to revision: i.e., the object that will be reviewed (e.g., a use case document, software code, a software or database design).
- A reviewer: the person who acts as observer and commenter.
- A producer: the person who gets his or her work reviewed by a peer.
Some of the objects mentioned here have already appeared in previously defined process areas or subprocesses. For instance, we defined a subprocess called "Peer reviews of work products" (see http://ieeexplore.ieee.org/document/5750494/); what I'm doing in this article is giving you specific, concrete recommendations on how to use these artifacts and how to improve these processes.
I've had many discussions before about whether peer reviewing is a software quality control or a software quality assurance process. To me, it doesn't really matter that peer reviewing is applied after a work product is generated (coded, written, designed), which is the main argument software engineers use to classify peer reviews as a quality control technique. What really matters is that peer reviews are a useful process for improving a product's quality before an important release (e.g., before the Sprint demo, before the iteration ends, etc.). In this context, I see the benefits of peer reviews as more related to software quality assurance than quality control. And, fortunately, I'm not alone in this: Daniel Galin also covers peer reviews in his book, Software Quality Assurance: From Theory to Implementation. Moreover, in practical terms, peer reviews are very different from software testing, which is, perhaps, the most used software quality control process/technique to date.
Well, with all that said, let's start reviewing the lessons.
Lesson 1: Define and Use Checklists
No matter how experienced your developers are, a checklist will always be welcome. This checklist can be considered an organizational asset and, as such, it does not die at the end of a project but constantly evolves as time goes on. But, of course, there is always a zero hour, when the checklist gets its initial definition.
By definition, a checklist is a list of items to be checked or observed. I also recommend grouping the items to be checked by software-development-related areas. For example, let's enumerate two groups of items (note that I'm not imposing a limit on either the groups or the items; it's only an example):
- Items for use cases.
- Items for software code.
Items for Use Cases
- Are actors identified?
- Is there at least one actor (the primary one)?
- Are alternative courses of action correctly identified and linked to the main course of action?
- Are there technical elements inside the use case (e.g., windows, sizes)?
Items for Software Code
- Are variable names "searchable"?
- Do method names indicate action (verb + substantive)?
- Are objects/classes named with substantives (nouns)?
- Do comments explain the rationale behind important parts of the software?
- Do comments indicate where there is still work to be done?
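As an illustration, here is a hypothetical Java snippet that would pass the code checklist items above (the class, constants, and logic are invented for this example; they are not from any real project): names are searchable, the method name is a verb plus a substantive, and comments state the rationale.

```java
// Hypothetical example of checklist-compliant code.
public class ReportSizer {
    // Searchable, symbolic constants instead of magic numbers.
    private static final int MAX_REPORT_PAGES = 2;
    private static final int ISSUES_PER_PAGE = 10;

    // Method name indicates action: verb ("count") + substantive ("pages").
    public int countPages(int issueCount) {
        // Rationale: round up to whole pages, then cap at the report limit.
        int pages = (issueCount + ISSUES_PER_PAGE - 1) / ISSUES_PER_PAGE;
        return Math.min(pages, MAX_REPORT_PAGES);
    }

    public static void main(String[] args) {
        System.out.println(new ReportSizer().countPages(25)); // prints 2
    }
}
```

A reviewer using the checklist would flag the opposite of each of these traits: single-letter variables, noun-only method names, or bare numbers with no named constant.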
Lesson 2: Create and Publish a Report Template
Developers will always welcome a report template. Depending on the specific context of your organization, a report template can be viewed as either a project asset or an organizational asset. In any case, your template should give the reviewer a brief indication of the concepts, scales, and possible values that will be used for evaluation purposes. For example, in your report template, you should tell reviewers what a "1" means, what a "5" means, and what "pass" or "fail" mean.
Here I present a three-part report template I've been using for several years.
Part I: Identification
The first part identifies the report and, if applicable, includes a reference to the repository in which the files are located. It contains:
- Reviewer identification (perhaps a team).
- The project involved (or product, if applicable).
- Repository URL (if applicable).
- Start date and end date (of the revision).
- General comments (if applicable).
Part II: Issues/Problems
For an example, let's look at the following table:
| Location | Reviewer | Issue / Problem | Severity |
|---|---|---|---|
| Person.java | John Doe | Non-symbolic constant found | L |
| ItemReporter.java | John Doe, Jr | Lots of comments indicating work to be done | M |
| Requirements list (req 1.3) | John Doe | The requirement is so big! | H |
| Use case "Report an item" | John Doe, Jr | Actors are not identified | H |
Please note that the reviewer field in the table is not always required. But if you work with "teams of reviewers," then you must identify the team in the first part and the particular reviewer in the second part (i.e., the table).
It is up to you and your organization to define what low, medium, and high mean in the severity field, but it is good practice to mark all issues considered severe for mandatory and urgent correction. If you don't like using L, M, or H, you are free to use a Likert-like scale or colors. In the case of colors, be careful not to use similar tones: it is preferable to use red, yellow, and green rather than three tones of yellow.
Although I presented one table for the example, I always prefer to have two tables: one for software code related issues and one for all the other issues (requirements, sequence diagrams, use cases, etc.).
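The severity handling described above can be sketched in code. The following Java snippet is a hypothetical illustration (the `ReviewReport` class, its `Issue` record, and the sample data are invented for this article): it models a row of the Part II table and pulls out the high-severity issues that should be marked for mandatory correction.

```java
import java.util.List;

public class ReviewReport {
    enum Severity { L, M, H }

    // One row of the Part II issues table.
    record Issue(String location, String reviewer, String description, Severity severity) {}

    // High-severity issues are the ones marked for mandatory, urgent correction.
    static List<Issue> mandatoryCorrections(List<Issue> issues) {
        return issues.stream()
                     .filter(issue -> issue.severity() == Severity.H)
                     .toList();
    }

    public static void main(String[] args) {
        List<Issue> issues = List.of(
            new Issue("Person.java", "John Doe", "Non-symbolic constant found", Severity.L),
            new Issue("Use case \"Report an item\"", "John Doe, Jr", "Actors are not identified", Severity.H));
        System.out.println(mandatoryCorrections(issues).size()); // prints 1
    }
}
```

If your organization uses a Likert-like scale or colors instead of L/M/H, the `Severity` enum is the only part that changes.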
Part III: Explanation of the Report Template
The explanation part shows the reviewer how to fill in the report template and what the important fields mean. In this part, you should describe what low severity means and how it compares to a high-severity issue or problem. In the example, I advised treating all high-severity issues as mandatory for urgent correction; if you or your organization adopt this advice, it should be described here.
Note that this part should be present in the report template, but deleted from the final report (e.g., when you generate the .pdf file).
Lesson 3: You Must Set a Clear Goal
Whether you're using peer reviews for academic or industrial purposes, you must set a goal. Try to set clear, specific goals, not generic ones. In my experience, generic goals such as "improve quality" are hard to account for.
Some examples of goals that are too generic:
"To teach a new quality assurance technique."
"To reduce the number of defects in the products."
"To improve the overall quality of deployed software."
I strongly recommend you set goals that are measurable. In my experience, it is not important in this phase to set metrics and thresholds for them, but it is really important that you can tell, for instance, that you're reducing the number of defects in the products. I also strongly recommend always taking into account your organization's strategic goals. Alignment with these goals ensures you and your development team are not wasting resources measuring things that won't contribute to your organization.
Let's look at a concrete example drawn from a real-world experience of mine. Peer reviews are time-consuming; no one doubts that. When you're applying peer reviews in an academic setting, you must be especially careful that the time you require from your students is aligned with the flow of the course. Requiring your students to apply peer reviews at each milestone (supposing your course is organized in milestones) is a daunting task for students. One solution to this problem is to have students apply peer reviews to randomly chosen milestones, with the constraint that no group has to apply them more than once. But be careful: this proposed solution is only suitable if your goal is to "teach peer reviewing as a quality assurance technique." If your goal were "use peer reviews to assure the quality of the product in order to decrease the rate of post-demo reported defects," then this solution is not suitable, and you would probably fall short of that goal. Thus, you can see how critical it is to set a clear goal before running any peer review process.
Lesson 4: Define the Roles Involved
Defining the roles not only means describing their names. You should also describe the required (or expected) responsibilities.
In this context, it is important to always check that every review is done with the product in mind, not the author or producer (the person whose work gets reviewed). I've received several comments like, "How is it possible not to criticize a person (i.e., the author) when criticizing a product?" My answer is always the same: clearly distinguish between the person and the product.
The following two tables show how I define the reviewer and producer roles.
Table 2. Reviewer responsibilities
Table 3. Producer responsibilities
Note that neither the reviewer nor the producer roles are responsible for identifying the products to be reviewed. That is out of the scope of the review process and, instead, should be decided on during planning activities, be it a project planning phase or an iteration planning meeting.
Lesson 5: Set Concrete Limits on the Report
There could be many limits, but at least one has to be set: the length of the report. In line with the goals you have previously set, the organization and the reviewers should understand that more doesn't mean better.
One criterion that has been useful for me in past reviews is to limit the report to a maximum of two pages. This will not only help to maintain the report's scope but also keep peer reviews quick, which is helpful because many tend to see peer reviews as boring and a distraction from "real" development activities (everybody is always busy!).
I have also recommended (in line with the strict time constraints typically found in software development projects) not including the same issue more than once. This places some responsibility on the producer's side: once an issue has been reported, the author of the product is responsible for correcting similar issues in the rest of the product. Here we find another benefit of having the author of the product act in the producer role, as he or she is the only person who knows with a high degree of certainty where similar issues could be found.
I haven't covered any planning issues here, but I can say that peer reviews must be planned. What "planned" means depends on your specific process, framework, and/or methodology. For those of you thinking about Scrum, peer reviews should be considered items in the Sprint Backlog. When using Scrum, peer reviews are well suited to be carried out inside iterations, by developers, during development time. Do not run peer reviews during the Daily Scrum, Sprint Retrospective, or Sprint Demo. Of course, you can use the results of previously applied peer reviews (i.e., from the development phase) as input for the Sprint Retrospective, but do not run a peer review process in these meetings.
I hope you have enjoyed this read; your comments are welcome!
Opinions expressed by DZone contributors are their own.