
Continuous Code Reviews & Quality Releases


Let's look at how we can build a solid end-to-end code review process with continuous monitoring in place and deliver high-quality products using tools like Git, Stash, Jenkins, SonarQube, JaCoCo, and Ant/Maven, taking a Java-based product as an example.


It is very important to write quality code from day one to deliver high-quality products. We need to educate developers about quality and encourage it through continuous monitoring and promotion throughout the development life cycle.

Let's see how we can build a solid end-to-end code review process with continuous monitoring in place and deliver high-quality products using tools like Git, Stash, Jenkins, SonarQube, JaCoCo, and Ant/Maven for a Java-based product, for example.

Processes

All we need to know is how to use the available tools and methods, and how best to leverage, integrate, and orchestrate them into an end-to-end system that we can call the "Review System".

Here's a link with a list of tools and methods that can help in building a Review System: http://www.methodsandtools.com/archive/archive.php.

We will need both manual and automated review processes to have a solid Review System.

Below is a simple example of a Review System. We can add more automated processes to it, built on tools such as JUnit, JMeter, SonarQube, and so on.

[Image: Review System example]


The developer checks in the code using Git commands and requests a code review from the respective reviewer through Stash.

The reviewer gets an email requesting a code review, goes through the code, and reviews it manually. Making this manual review mandatory with tools like Stash/JIRA helps keep continuous reviews happening and feedback/issues logged religiously.

Once the manual review is done and the changes are accepted, the code gets merged into the Git repo automatically. A CI tool like Jenkins then pulls the source code and executes the relevant scripts for automated reviews, generating reports.

These automated scripts can be written using Ant, Maven, or Bash. I have used Ant scripts for the examples here.

Manual Review Process

We should automate everything possible and leave for manual review only those areas that cannot be automated.

Manual code reviews can cover the following important areas (but are not limited to them), and the areas reviewed might vary depending on the phase, code churn, size, and release type (dot, hotfix, or major).

Design & Functional

The reviewer should do an end-to-end review to verify the design against the specifications. This should cover everything from the overall product design down to every component, with its pros and cons, to make sure the proposed system addresses the business problem.

This could cover architecture, technologies, frameworks, design patterns, database schemas, end-to-end data flow, and critical components. The important reviewers at this stage can be chief architects and subject matter experts.

Improve the design as the application evolves; there is no perfect or everlasting design.

Compatibility

Backward and forward compatibility reviews play a vital role in ensuring that the application does not break as it undergoes enhancements and improvements across previous and future versions. This could cover operating systems (OS), devices, database schemas, networks, browsers, libraries, and development kits.

But there needs to be a limit to this support as we try to improve, add more features, and take advantage of new technologies. We may not be able to support n-4, n-5, n-6...n-x versions. For example, not all OS versions run on every device version.

Architects and subject matter experts can do a better job here with their overall knowledge of the product, its life span, and its roadmap.

Backward compatibility may become a hurdle for product evolution, but it shouldn't be ignored.

Performance

This is another very important area that should be reviewed in detail from the inception of the code until the product goes live. The reviewer needs in-depth knowledge of each and every technology and its usage.

Always look for alternatives (utilities, designs, best practices/patterns) that can improve performance, even if only by a millisecond.

That said, there could be situations where we may need to compromise on performance to achieve functionality, perhaps due to technology limitations at that point in time.

For example, a smartphone's battery will drain faster than that of a non-smart phone from the same manufacturer.

Performance reviews, improvements, and tests should be taken care of from day one.

Security

This is one of the most important review checks to perform. Though we could delegate security reviews to tools like Fortify/FindBugs, the product still needs a manual review process, as every system is unique in its design, development, integration, and installation/deployment.

This review could cover threat modeling, penetration testing, integrations with third-party APIs, and so on, to eliminate poor design/development/configuration/deployment decisions that leave weak areas open to exploitation.

Try to hack your own system with known tools and methodologies (for example, SQL injection or cross-site scripting) to identify weak areas and close them prior to the product launch.
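As a small, hypothetical illustration of why this matters (the table name and input value are made up), note how a crafted input rewrites the meaning of a SQL query built by naive string concatenation:

```shell
# Hypothetical example: building SQL by string concatenation.
user_input="1' OR '1'='1"                               # attacker-controlled value
query="SELECT * FROM users WHERE id = '${user_input}'"  # naive concatenation
echo "$query"
# The WHERE clause now matches every row, which is why inputs should
# always be bound as parameters (e.g. a JDBC PreparedStatement) instead.
```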

Readability

Writing code is an art. Reviewers should go through the code, review its complexity and comments, and verify it against standards, guidelines, and consistency checks. Any developer other than the author should be able to make out its purpose, identify and fix defects, and enhance it further.

Maintainability

Maintenance is a very important aspect that, if not appropriately factored in, could prove more expensive, and more hidden, than the actual development itself. Low-maintenance systems avoid long downtimes (high availability) and can be brought back up with little effort.

The author of the code may not always be the person maintaining it. The important reviewers here can be a system administrator and an architect.

Readability and maintainability should make handing over code as simple as a relay race, in which a developer passes the code like a baton to others.

Code Check-ins, Branches & Releases

We can make the review, test, and release processes easier and better with the following:

  1. Maintain a main branch for continuous development and integration.
  2. Maintain a separate branch for each feature (feature branch) and merge it into the main branch once reviews/tests are done.
  3. Try to push the code in small, reviewable, and testable chunks.
  4. Try to push the code as frequently as possible to avoid bulk merges/conflicts.
  5. Check in code with proper comments containing feature/defect descriptions and IDs; this helps with quicker traceability and easier reviews and feedback.
  6. Cut the release candidate from the main branch and proceed with the release process.
  7. Always keep the main branch in a stable state.
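A minimal sketch of this feature-branch flow with plain Git commands (the repository, file, and issue IDs are made up, and Stash's pull-request step is shown here as a plain merge):

```shell
# Scratch repository so the sketch is self-contained.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name Dev
git checkout -q -b main                         # main branch stays stable
echo base > app.txt && git add app.txt
git commit -qm "PROJ-1: initial stable main"

git checkout -q -b feature/login                # one branch per feature
echo login >> app.txt && git add app.txt
git commit -qm "PROJ-42: add login validation"  # defect/feature ID in the message

git checkout -q main                            # merge only after review/tests pass
git merge -q --no-ff -m "Merge feature/login after review" feature/login
git log --oneline
```

Small, frequently pushed commits with IDs in their messages make each pull request quick to review and easy to trace back to a ticket.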

[Image: branching and release flow]


Review Feedback

Feedback can be integrated with bug-tracking tools like JIRA. You can find more information on integrating Stash with JIRA at https://confluence.atlassian.com/bitbucketserver/jira-integration-776639874.html.

The author sends a request for review through Stash. Below is an example of a review request mail sent by Stash.

[Image: pull request example]

Reviewers go through the comments of the request, then review the code and eventually provide feedback. Below is an example of a code acceptance mail sent by Stash on approval.

[Image: reviewer approval mail]

Automated Review Process

Formatting, styling, naming, coverage, standards, and possible security vulnerability reviews can be delegated to tools like JUnit, Selenium, JMeter, Fortify, and SonarQube.

Here are the steps that could help you build an automated system for code reviews:

  • Download & configure Jenkins and SonarQube.
  • Choose a database for your system, for example MS SQL Server or MySQL.
  • Create a database named "SONAR".
  • Configure the Jenkins job and set "Build Triggers" as per the project requirements. For example, unit test executions can run daily and performance/security jobs weekly.
  • Configure Git in Jenkins under "Source Code Management" to pull the code.
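For instance (the schedules here are purely illustrative), the "Build Triggers" field could use Jenkins cron expressions like these, where `H` lets Jenkins spread job start times:

```
# nightly: unit tests and coverage
H 2 * * *
# weekly (Sunday): performance/security jobs
H 4 * * 0
```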

[Image: Jenkins Source Code Management configuration]

  • Write an Ant script to build .war or .ear files and deploy them to an application server. Invoke this script from Jenkins.

[Image: Jenkins build step configuration]

  • Example of an Ant target to create a .war:

        <target name="makewar-AppServer">
            <mkdir dir="${release.dir}/AppServer" />
            <war destfile="${release.dir}/AppServer/${ant.project.name}.war" webxml="${web.xml}">
                <classes dir="${classes.dir}"/>
                <fileset dir="WebContent/">
                    <exclude name="**/example.jar"/>
                </fileset>
                <manifest>
                    <attribute name="Example-Version" value="${LABEL}" />
                </manifest>
            </war>
        </target>
  • Invoke the unit test script from Jenkins that runs all P0, P1, P2...Pn automated test cases as part of the build to get the JaCoCo and Selenium test results. Execution of automated unit test cases can be a daily job.

    More details on the Ant tasks for unit test executions & reports: http://eclemma.org/jacoco/trunk/doc/ant.html

  • Example of a JUnit Ant target (note that `<formatter>` must be a direct child of `<junit>`, and the `<include>` patterns should match the compiled `.class` files):

        <target name="unittest" description="Execute JUnit tests">
            <taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml">
                <classpath path="${lib.dir}/jacocoant.jar" />
            </taskdef>
            <property name="jacoco.resultfile" value="jacoco-syncservice.exec" />
            <jacoco:coverage destfile="${jacoco.resultfile}">
                <junit fork="true" forkmode="once" printsummary="true" failureproperty="junit.failure">
                    <sysproperty key="user.dir" value="${basedir}"/>
                    <classpath refid="test.base.path"/>
                    <formatter type="xml"/>
                    <batchtest todir="${build.dir}/test-reports">
                        <fileset dir="${build.dir}/test-classes">
                            <include name="com/kony/examples/tests/suits/Suite1.class"/>
                            <include name="com/kony/examples/tests/suits/Suite2.class"/>
                            <include name="com/kony/examples/tests/suits/Suite3.class"/>
                            <include name="com/kony/examples/tests/suits/Suite4.class"/>
                            <include name="com/kony/examples/tests/suits/Suite5.class"/>
                            <include name="com/kony/examples/tests/suits/Suite6.class"/>
                        </fileset>
                    </batchtest>
                </junit>
            </jacoco:coverage>
            <jacoco:report>
                <executiondata>
                    <file file="${jacoco.resultfile}" />
                </executiondata>
                <structure name="${ant.project.name}">
                    <classfiles>
                        <fileset dir="${classes.dir}" />
                    </classfiles>
                    <sourcefiles encoding="UTF-8">
                        <fileset dir="${src.dir}" />
                    </sourcefiles>
                </structure>
                <html destdir="${build.dir}/coverage-report" />
            </jacoco:report>
            <antcall target="test-report"/>
            <fail if="junit.failure" message="JUnit test(s) failed. See reports!"/>
        </target>
  • Invoke a JMeter script from Jenkins to execute the performance test cases.
  • Configure the unit & performance test results to capture trends and set thresholds as per the standards.

[Image: test results]
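The JMeter run can be done in non-GUI mode from the Jenkins job; a typical command looks like this (the test-plan and output paths are placeholders for your environment):

```shell
# -n: non-GUI mode   -t: test plan   -l: results file
# -e -o: generate the HTML report into the given folder
jmeter -n -t perf-tests.jmx -l results.jtl -e -o perf-report
```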

  • Write an Ant script to run the SonarQube analysis with the respective environment details (paths and credentials) and invoke it from Jenkins. Example Ant script:
        <project name="<projectName>" default="sonar" basedir="." xmlns:sonar="antlib:org.sonar.ant">
            <property name="base.dir" value="." />
            <property name="reports.junit.xml.dir" value="${base.dir}/build/test-reports" />

            <!-- Define the SonarQube global properties (the most usual way is to pass these properties via the command line) -->
            <!-- SQL Server is used in this example -->
            <property name="sonar.jdbc.url" value="jdbc:jtds:sqlserver://<ipaddress>/sonar;SelectMethod=Cursor" />
            <property name="sonar.jdbc.username" value="<userId>" />
            <property name="sonar.jdbc.password" value="<password>" />

            <!-- Define the SonarQube project properties -->
            <!-- Java is the source language in this example -->
            <property name="sonar.projectKey" value="org.codehaus.sonar:<projectKey>" />
            <property name="sonar.projectName" value="<projectName>" />
            <property name="sonar.projectVersion" value="<ver#>" />
            <property name="sonar.host.url" value="http://<ipaddress>:<port>" />
            <property name="sonar.language" value="java" />
            <property name="sonar.sources" value="${base.dir}/src" />
            <property name="sonar.java.binaries" value="${base.dir}/build/classes" />
            <property name="sonar.junit.reportsPath" value="${reports.junit.xml.dir}" />
            <property name="sonar.dynamicAnalysis" value="reuseReports" />
            <property name="sonar.java.coveragePlugin" value="jacoco" />
            <property name="sonar.jacoco.reportPath" value="${base.dir}/jacoco-example.exec" />
            <property name="sonar.surefire.reportsPath" value="jacoco-example.exec" />
            <property name="sonar.jacoco.antTargets" value="build/<test-reports>/"/>
            <property name="sonar.libraries" value="${base.dir}/lib" />

            <!-- Define the SonarQube target -->
            <target name="sonar">
                <taskdef uri="antlib:org.sonar.ant" resource="org/sonar/ant/antlib.xml">
                    <!-- Update the following line, or put the "sonar-ant-task-*.jar" file in your "$HOME/.ant/lib" folder -->
                    <classpath path="path/to/sonar/ant/task/lib/sonar-ant-task-*.jar" />
                </taskdef>
                <!-- Execute the SonarQube analysis -->
                <sonar:sonar />
            </target>
        </project>
  • Configure "Editable Email Notification" to get the build result mails with the build log attached and the test results published as part of the mail body using ${JELLY_SCRIPT, template="html"}.
[Image: email notification configuration]

  • On completion of the job, Jenkins sends the mail to the "Recipient List".

Below are examples of build success/failure status mails. Note the details in the mail: build status, build duration, build branch with version details, changes, artifacts checked in, and unit test statistics.

[Images: build success and failure mails]

SonarQube

SonarQube comes with various dashboards that help monitor code metrics in detail at the project and portfolio level, letting a developer register and assign defects, navigate from a defect down to the code level, and see the exact cause and a possible fix.

The main dashboard covers:

1. Programming defects
2. Code duplications
3. Code complexity
4. Code coverage

An example project dashboard view:

[Image: Sonar dashboard example]

Conclusion

A Review System with solid manual & automated processes will help in:

1. Compiling, executing test suites, and deploying continuously.
2. Continuous reviews, controlled check-ins, and shared knowledge.
3. Avoiding time lags between development and test phases.
4. Shortening QA cycles and avoiding repeated manual tests.
5. Producing high-quality releases with continuous integration, parallel reviews, tests, and fixes.
6. Frequent, predictable, and continuous releases.
7. Minimizing rework and refactoring/clean-ups/tech debt as the code grows over time.
8. Minimizing manual intervention/corrections and maximizing quality.
9. Bringing transparency to all (from developer to leadership).

Every single line of code should get tested and produce expected results every single day.

I've tried my best to share my thoughts throughout this article; please feel free to leave any comments/questions you might have.


Topics:
code quality
