The Forrester survey polled 159 IT professionals in the US. 35% of respondents came from the computer software industry, with another 15% from military/aerospace and 13% from computer hardware. Smaller slices came from telecoms, financial services, healthcare, manufacturing, government, transportation, and energy/utilities. 47% of respondents listed 'application development and/or support' as their job function, and 26% listed engineering and support. There were also small portions of project managers, C-suite executives, enterprise architects, QA, and build/release management.
The job level distribution was fairly even among respondents: 30% said they were team leaders, while individual contributors, managers, VPs/directors, and senior-most decision makers each accounted for between 16% and 21%. 28% of respondents worked on distributed teams, while the rest worked on mostly centralized teams.
Conducting Code Reviews
Forrester first asked respondents to indicate which code review benefits were 'key' benefits; they could select more than one answer. The top benefit, selected by 84%, was the reduction in the number of bugs found later in the cycle. Close behind at 72% was shared best practices and learning. Other frequently selected benefits included 'encourage refactoring and simplification,' 'opportunity for code reuse,' 'reduce wasted time (later in the project),' and 'compliance.' 'Providing input into HR reviews' was listed as a benefit by only 6%, and not a single respondent said 'I see no clear benefit of code reviews.'
60% of the surveyed code reviewers said they communicate in person, which is considered a best practice for code reviews. The other 40% were evenly distributed between email, conference calls, and web applications. In a 'select all that apply' question, respondents indicated who is usually involved in code reviews: 93% said developers, 62% project managers, 45% architects, and 31% QA. Over 90% of respondents were following best practices by having a process for determining who should be involved in code reviews, but over half said their process was informal.
The range of code review tools used was diverse and pretty evenly dispersed. Only 18% of respondents said they didn't use any tools. Respondents could select all tools that applied in this question. These were the results:
- Static code analysis tools - 33%
- Purpose-built tools - 31%
- Online meeting - 29%
- Code coverage tools - 27%
- IDE - 20%
- Wiki - 19%
Another interesting finding: 26% of respondents had no compliance mandates for their software. This was also a 'select all that apply' question. 57% said they were required to comply with internal mandates (ISO or other quality standards), and 42% with regulatory mandates (FAA, FDA, PCI); the latter group likely included respondents from the government, military, and healthcare sectors.
This survey shows that code reviews are an industry standard for most organizations. 53% said that code could not go live without a review, while another 25% said they reviewed key components at the very least. However, there was no clear standard for how the frequency of code reviews is determined. Here are the results for that question:
Code reviews are conducted...
- At the end of each phase/iteration of the project - 33%
- On an ad hoc basis decided by the development lead - 30%
- When a developer has finished his/her work - 25%
- When the developer thinks he/she needs it - 9%
This indicates that only a third of the respondents had a strict time schedule for code reviews. Most organizations seem to be more flexible on when a code review should take place.
Challenges in Code Review
Respondents were asked to rank code review challenges from 1 to 5 (1 = not a challenge at all, 5 = significant challenge). The results were fairly even across the challenges. 'Having the right amount of time to prepare' was the biggest challenge, with the largest share of respondents ranking it 3 or higher. 'Getting everyone interested and motivated' was another difficult one, with 66% ranking it 3 or higher. 'Getting everyone into one location' had 60% answering 2 or lower, likely because 72% of respondents' teams were mostly centralized.
Another question asked how often a respondent's team used certain elements/artifacts in code review. The possible answers were "always", "sometimes", "rarely", or "never". Predictably, 84% said they always use the source code in their review. Requirements (59% "always"), designs (45%), standards (40%), and test results (39%) were also always used by many organizations. Visual models like UML were not used as often, with only 11% saying they always use them and 45% saying they sometimes use them.
All data and graphics are Copyright © 2010, Forrester Research, Inc.
Finally, Forrester wanted to measure current adoption rates of social media tools in code review and development. They found a fairly even split between those who "sometimes" or "always" used tools like Reddit, Twitter, blogs, and Stack Overflow to get answers, and those who "rarely" or "never" used them. Social media has even less traction in code review specifically: 58% said they never use social media tools for code reviews, 24% said they rarely do, 15% said sometimes, and 2% said always.