
Cloud-Agnostic, Continuous Quality Assurance


Everyone wants clean code. We learn early in our careers that bad code is a monster eating away at our project deadlines. The question this article tries to answer is whether it's possible to set up a development process in a way that will, by design, prevent the rotting of our source code.

Introduction

Let's just get one thing straight: there is always a point in a project where someone decides to do things quick and messy and pay the price later. That is OK from a tactical point of view. But it is possible to set up tools that let us keep code quality standards even in cases of quick and dirty code writing.

The last thing I would like to point out is that every enthusiastic team starting a new project wants to build a clean and successful product. But as the project kicks off, the quality assurance environment often never gets set up, whether because of a lack of time or a lack of knowledge in the domain. From the first line of code committed, quality metrics and checks should be part of the development process.

Let us define a toolset that will help us get a quick start. We will run all our environments as Docker containers; the community has already done the hard work of preparing pre-baked components we will use. Also, since Docker is becoming more mainstream, I will use this opportunity to share a thing or two about how to use it.

We will be using the following tools throughout the article:

  • Gerrit
  • Jenkins
  • SonarQube

Source Code Repository

There isn’t really any use case where one would not want their code stored in a source code repository. Even when developing alone, one benefits from having source code versioned. The reasoning, from my perspective, is simple: I consider my development machine volatile, and to avoid the classic “the dog ate my homework” syndrome, it is preferable to push code to some remote location, either as a backup or as primary storage. The most commonly used technologies for code versioning are SVN, Git, and Mercurial. We are going to focus on Git. Git is widely adopted, feature-rich, and scales very well for big teams.

The official Git implementation offers the possibility to run a central Git server. This is nice, as it helps for a quick start; the downside is that the default web UI is limited. Luckily, there are plenty of web-centric solutions that make managing Git repositories a breeze. Tools I have had the opportunity to play with are Gitblit and Bitbucket. Since we are mainly interested in a review process, Gitblit will not support us there. On the other hand, Bitbucket is an all-in-one solution that is, unfortunately, neither free nor open source. To contribute back some marketing to the open-source world, we are going to focus on Gerrit here. Gerrit integrates source code management with code reviews. But let's not get ahead of ourselves; let us focus on source code repositories first.

Let's analyze how version control can contribute to code quality. Version control saves the entire history of a project: every change made to every file, who contributed the change, when, and with what comment. Steer your mind away from a blaming culture and finger-pointing at whoever contributed the nastiness you stumble upon; a positive culture will contribute far more. For example, when you identify bad code and know how to fix it, fix it and teach the author how to do better in the future.

Gerrit Setup

Let's get practical. The Docker installation instructions on the official site are more than enough to get you started. Just a couple of hints:

  • On Windows 7-10, Docker is not called Docker for Windows; it is called Docker Toolbox, and it runs inside a VM
  • Docker does not run natively on macOS; it runs inside a VM as well
  • Set up Git Bash or a similar Bash implementation on Windows

I’m lucky enough to be running a Linux distribution on my workstation, so Docker runs natively and Bash is already present by default. To test whether the installation is working, issue the following command:

docker run hello-world


Expected Output:

Unable to find image 'hello-world:latest' locally

latest: Pulling from library/hello-world
d1725b59e92d: Pulling fs layer
d1725b59e92d: Verifying Checksum
d1725b59e92d: Download complete
d1725b59e92d: Pull complete
Digest: sha256:0add3ace90ecb4adbf7777e9aacf18357296e799f81cabc9fde470971e499788
Status: Downloaded newer image for hello-world:latest
Hello from Docker!


As mentioned before, we will start Gerrit as a Docker container. We will use Docker Compose to help us with the setup; it lets us define our Docker setup declaratively. The first version of our setup can be fetched with the following command:

git clone https://github.com/asambolec/code-pipeline.git


The project contains a docker-compose.yaml, an httpd.conf file containing the Apache configuration, and a passwords file defining three users (a sketch of how such a passwords file can be generated follows the list below):

  • admin/admin
  • user1/user1
  • user2/user2
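
For reference, the passwords file is a standard Apache htpasswd file. A minimal sketch of how such a file could be (re)generated, assuming the htpasswd utility from the apache2-utils package is available (the file shipped in the repository may have been created differently):

# Create the file with the admin user, then append the two test users
htpasswd -cb passwords admin admin
htpasswd -b passwords user1 user1
htpasswd -b passwords user2 user2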

We can start the code management pipeline by running:

cd code-pipeline

docker-compose up -d


Docker Compose will use the container definitions in the docker-compose file and start the entire cluster. The output should be as follows (excluding some downloading lines):

Creating network "codequality_dev" with the default driver

Creating apache ... 
Creating gerrit ... 
Creating apache
Creating gerrit ... done


Each service runs in its own container. To see logs of the Gerrit server, run:

docker logs -f gerrit


If we examine the logs, we can see the following lines, indicating that Gerrit is ready:

[2018-10-02 18:04:32,170] [main] INFO com.googlesource.gerrit.plugins.gitiles.HttpModule : No /var/gerrit/etc/gitiles.config; assuming defaults

[2018-10-02 18:04:32,456] [main] INFO org.eclipse.jetty.server.handler.ContextHandler : Started o.e.j.s.ServletContextHandler@1221d607{/,null,AVAILABLE}

[2018-10-02 18:04:32,463] [main] INFO org.eclipse.jetty.server.AbstractConnector : Started ServerConnector@4efa7d2f{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}

[2018-10-02 18:04:32,463] [main] INFO org.eclipse.jetty.server.Server : Started @16315ms
[2018-10-02 18:04:32,465] [main] INFO com.google.gerrit.pgm.Daemon : Gerrit Code Review 2.15.4 ready


Gerrit can now be accessed; open http://127.0.0.1:8080/ in your browser. The browser will prompt you for a username and password. Here, we will use admin/admin as the credentials.

The first time you open Gerrit, you will land on the intro page. Pay attention to the information on the screen and click “Skip intro” to continue. You are now logged in as administrator. By default, Gerrit runs in development mode, which covers the requirements of this article. There are plenty of tutorials on how to set up a production-ready Docker/Gerrit installation; if there is interest, I will prepare a follow-up article.

Creating a Project

Let's create our first project. Click on “BROWSE” in the top menu and choose “Repositories.” The “Create new” button will appear on the top right side. After clicking it, the following pop-up will appear:

I will name the project “hello-world” and click the “Create” button. Gerrit will automatically redirect you to the page where you can find information on how to clone a project and configure additional options.

Let's keep the defaults and clone the project. Execute:

git clone http://127.0.0.1:8080/hello-world

cd hello-world/

git config user.name admin

git config user.email admin@splendit.at


Inside the directory, we will create a new file called UserRepository.java in the src/main/java directory using the following commands:

mkdir -p src/main/java

cat << EOF > src/main/java/UserRepository.java
import java.util.HashMap;
import java.util.Map;

public class UserRepository {

    private final Map<String, String> firstNameLastName = new HashMap<>();

    public static void main(String[] args) {
        UserRepository userRepository = new UserRepository();
        String lastName = userRepository.getUserLastName(args[0]);
        if (lastName != null) {
            System.out.println(lastName);
        }

    }

    private String getUserLastName(String name) {
        if (firstNameLastName.containsKey(name)) {
            return firstNameLastName.get(name);
        }
        return null;
    }
}
EOF


Now, we can commit the file and push it to the code repository.

git add --all

git commit -m "First commit"

git push origin master


You will be prompted for a username; supply admin. Then, go to the Gerrit UI and click the icon in the top right corner, which will bring up the following dropdown menu.

Click on the “Settings” option. Then, scroll down to the “HTTP Credentials” section and click the “GENERATE NEW PASSWORD” button. Note that HTTP credentials are not the recommended method for authenticating against a Git repository; for a production setup, please use SSH keys. With the generated password, you can continue the push operation.
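
As a side note, here is a hedged sketch of the SSH alternative, assuming the compose setup publishes Gerrit's default SSH port 29418 (this particular repository may not):

# Generate a key pair and register the public key under
# Gerrit -> Settings -> SSH Keys, then clone over SSH
ssh-keygen -t ed25519 -C "admin@splendit.at"
git clone ssh://admin@127.0.0.1:29418/hello-world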

The following message should appear in your console if the push is successful:

Username for 'http://127.0.0.1:8080': admin

Password for 'http://admin@127.0.0.1:8080':

Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (6/6), 631 bytes | 631.00 KiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
remote: Processing changes: done 
To http://127.0.0.1:8080/hello-world
1b4d4c7..4d0c679 master -> master


Let's make another change in the file and repeat the process. We change line 12 to the following:

System.out.println("Last name: " + lastName);


The following commands will commit and push the code:

git add --all

git commit -m "Second commit"

git push origin master


Source Control Example

To point out the importance of source control systems, we will look at an example. Let's imagine that we have a bug, we have traced it down to our newly updated class, and we are not sure who changed what and why. We can easily see changes to files with the git annotate and git blame commands. Better yet, if we use an IDE (Eclipse in this example), the information becomes even more readable. After importing the Git project into Eclipse, we open UserRepository.java, right-click in the editor to get the context menu, and under the “Team” menu choose “Show Revision Information.”

Every line gives us more insights into the history and changes done to that line.

This technique is extremely helpful because every time an experienced developer opens some class and spots problematic code, they can directly teach the author how to approach the problem in the future. Additionally, we can see which code change could have introduced a bug.
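
The same information is also available on the command line, without an IDE:

# Show who last changed each line of the file, with commit and date
git blame src/main/java/UserRepository.java

# Show the full change history of the file, including diffs
git log -p -- src/main/java/UserRepository.java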

One hint! To close the revision information, right-click on the line number column and, under the “Revisions” menu, choose the “Hide Revision Information” option:

This is just one tiny example of how the source code repository can be used to improve code quality. In the following chapter, we are going to address code review from the aspect of maintaining code quality.

Recommendation: always use a source code repository, for any type and size of project.

Code Review

Using a source code repository adds a huge benefit to any project. It also provides a supporting infrastructure for maintaining code quality, as shown before. The second process through which code quality can be controlled is code review. Code reviews help prevent bugs by having a second pair of eyes validate code, maintain coding standards (like formatting), etc.

Almost any mature source code server implements code review via mechanisms like pull requests. Since we are focusing on open-source solutions, we are going to dive deeper into the pull request mechanism offered by Gerrit.

With our Apache setup, we have created two additional users. We can log out as admin and log in as user1 to continue. The recommended way is to use a private browser window to test the new user.

A popup screen appears after login. We can just click the “Close” button on that screen and continue.

We repeat the same steps for user2. This is required because users in Gerrit are created only after the first login. Please log out as user user2 and log in as admin.
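
As a hedged alternative, Gerrit also exposes a REST API for creating accounts, so the first-login step could be scripted. A sketch, assuming the admin credentials from the Apache passwords file (the field names follow Gerrit 2.15's AccountInput; verify against your version):

# Create user1 via the REST API instead of the first-login dance
curl -X PUT -u admin:admin \
  -H "Content-Type: application/json" \
  -d '{"name": "user1", "http_password": "user1"}' \
  http://127.0.0.1:8080/a/accounts/user1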

All permissions in Gerrit are configurable via groups, so we will create a group for the hello-world project owner. We go to “Browse” -> “Groups,” and then, we can click on the “CREATE NEW” button on the top right corner. We name the group “hello-world-owner.” We can then add the user1 user as a member to the group by entering data into the text field and clicking the “ADD” button.

Now, we go back to “Browse” -> “Repositories” -> “hello-world” and click “Access” on the left side.

We then click the “Edit” button and “Add reference” link. We choose the “Owner” option in the “Add permission …” dropdown menu, and as a group name, we type “hello-world-owner.” We also add the permissions “Submit” and “Label Code-Review” to the same group.

Clicking the “Save changes” button configures the hello-world-owner group as owner of the hello-world project and allows users in this group to do code reviews.

Pull Request

After preparing the configuration, we can make our first pull request. Please log in as the user2 user. First, we set up the project for Gerrit using the following commands (the commit-msg hook appends the Change-Id footer Gerrit uses to track patch sets):

git config user.name user2

git config user.email user2@splendit.at

curl -Lo .git/hooks/commit-msg http://127.0.0.1:8080/tools/hooks/commit-msg

chmod u+x .git/hooks/commit-msg


Now, we can open our project and change line 21 (the return null; statement) to the following:

throw new RuntimeException();


We have to push the change. We can do that by using the following commands:

git add --all

git commit -m "First pull request"

git push origin HEAD:refs/for/master


If we open the web UI as the user2 user, we can see the review request:

We can open the “First pull request.” On the left side, we can add user1 as a reviewer by clicking the “ADD REVIEWER” link.

Now, to see and approve changes, we log in as the user1 user. We can see the pull request in the list. If we open it, we can see all the changes done to the file:

When we are satisfied with the changes, we can accept them by clicking “Code-Review +2” and then “Submit.” By submitting, we have merged the pull request into the master branch.

With this basic example, we have shown the core setup of pull requests in Gerrit. In the following chapters, we are going to explore some integrations that can be done with other tools. Pull requests are very powerful but also have downsides. Approvals are usually made by more experienced developers. Their time is precious and too many requests can disrupt their work. To mitigate this, pull requests should be small; the smaller, the better. Also, it is important that commit messages are descriptive and offer more context to a reviewer. Additionally, a very important thing is a cultural change. Reviews must not be something people are afraid of; they must be opportunities to learn and improve.

Jenkins

Remember the first chapter and our explanation of how important it is to use version control, even on the smallest projects? Well, I would dare say that continuous integration comes right after that; it is the most precious addition one could, and should, have on a project. A bit of a story from my beginnings:

I had the opportunity to work on a big project during the first years of my professional career. There, we had a Java 6 application with a relatively large code base split over 8-10 Eclipse projects. What was very interesting was how Eclipse generously connected those projects together: it would detect changes, recompile, and redeploy automatically. It was quite a nice setup at first sight. But there was a monster hiding behind it. Every deployment to any environment required a developer to export a WAR file from within their IDE and manually copy it to the target environment. This could be quite a headache because, sometimes, people accidentally deployed their local changes to production. We learned quite fast that there are tools that can help us escape from the hell we were in. The problem was that we had libraries lying around the projects without versions and without proper names. It took quite a large effort to find the proper setup to produce matching WAR files.

Tools like Maven give people headaches. But it turns out that most of the problems come from the infrastructure surrounding developers (network proxies, firewalls, antiviruses). It requires dedication and lots of fighting to get applications like Nexus or Apache Archiva installed with normal Internet access.

Again, a recommendation: whatever difficulties you face in setting up infrastructure, always use dependency management tools. They will pay off very fast in any project.

When we use dependency management, we quickly find one big hidden gem: our builds can now run outside of the IDE. Actually, they can run anywhere. That means we can create tools that build our code automatically! Luckily, these tools already exist. Say hello to Jenkins.

Jenkins is the most famous open-source continuous integration platform. It offers integration with many source control repositories and build tools. Jenkins will help us build our projects using our dependency management tool. But not only that: Jenkins will help us deploy the code to different environments and do a lot more.

Now, we come to something called a deployment pipeline. We need some way to define the chain of events that converts our committed code into value for customers. Since version 2.0, Jenkins has implemented support for pipelines, which allow storing the pipeline configuration in the source code repository.

To be more practical, let's add a Jenkinsfile to our hello-world project. The Jenkinsfile is expected to be at the root of the repository and is used by Jenkins to create and execute the pipeline. We will put the following Jenkinsfile in the root of our hello-world project, then commit and push the file, as described in previous chapters.

pipeline {
    agent any
    tools {
        maven 'Maven 3.3.9'
        jdk 'jdk8'
    }
    stages {
        stage ('Initialize') {
            steps {
                sh '''
                    echo "PATH = ${PATH}"
                    echo "M2_HOME = ${M2_HOME}"
                '''
            }
        }
        stage ('Build') {
            steps {
                sh 'mvn package'
            }
        }
    }
}


We also need a build tool for our project. An easy option is Maven. We just add the following content as a pom.xml file in the root of the project and push it together with the Jenkinsfile.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.jsparrow</groupId>
    <artifactId>hello-world</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <build>
        <finalName>ModuleOne</finalName>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.5.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>


Additionally, we update our docker-compose.yaml file with the Jenkins container definition, as follows:

version: '2.1'

services:
  …
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins:2.107.3
    ports:
      - 8081:8080
    networks:
      - dev

networks:
  dev:


After running the ‘docker-compose up -d’ command, we can access Jenkins on http://127.0.0.1:8081. Jenkins will prompt us with the initial setup. Running the following command reveals the password required to complete it:

docker logs -f jenkins
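
Alternatively, the same password can be read directly from the container; the file below is Jenkins' standard location for the initial admin secret:

docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword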


Enter the password from the console and click “Continue.” Then, click the “Install suggested plugins” button:

We have to wait for the process to finish; Jenkins is then ready. Next, we need to add our project to Jenkins. We will use the admin user to complete the process. You will be prompted to enter the user data:

Click “Submit” and “Start using Jenkins.” Now, we need to configure the tools needed for our build: Maven and the JDK. Go to the Jenkins homepage, choose “Manage Jenkins” on the left side of the screen, and then pick “Global Tool Configuration.” Configure a Maven installation named “Maven 3.3.9” and a JDK named “jdk8,” so the names match those referenced in the Jenkinsfile:

Save the changes and go back to the homepage by clicking the Jenkins icon in the top left corner. You can immediately create a new job by clicking the “create new jobs” link:

Enter “hello-world” as the job name, choose “Multibranch Pipeline” as the type, and click “OK.” Click “Add source:”

Add the URL of our Gerrit Git repository: http://gerrit:8080/hello-world. Under credentials, click “Add” and enter admin as the username with the HTTP password generated in Gerrit's “Settings” for the admin user. Finish the setup by clicking the “Save” button. If you open the Jenkins homepage and go to the “hello-world” project's “master” branch, you will be able to see your build results:

This was the basic Jenkins setup required for continuous integration using Maven. Now, we can build our code outside of our IDE. This is just scratching the surface. We can easily build our code automatically on every commit, as sketched below. We can even configure the deployment of the produced artifacts. There is also the possibility to automatically approve Gerrit pull requests only if the Jenkins build is successful. If there is interest in this topic, please let me know in the comments, so I can consider it for a follow-up article.
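
For example, building on every commit can be approximated with SCM polling. A hedged sketch using standard declarative-pipeline syntax (for a multibranch job, periodic branch indexing in the job configuration achieves a similar effect, and a webhook from the Git server would be more efficient still):

pipeline {
    agent any
    // Poll the repository roughly every five minutes and build on changes
    triggers {
        pollSCM('H/5 * * * *')
    }
    ...
}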

Since we are talking about code quality, it is hard to ignore code analysis tools like SonarQube or Teamscale. We have seen how easy it is to build our code using Jenkins; in the following chapter, we will see how easy it is, now that we have Jenkins and Maven, to include SonarQube in our pipeline.

SonarQube

Up to now, we have seen tools that support our effort toward clean code. Now we are going in a different direction: we will look at tools that provide metrics about code quality. The tool analyzed here is SonarQube, an open-source project. It has a predefined set of rules that are run against the source code to provide measurements of possible issues and even estimates in the form of technical debt.

SonarQube also integrates with IDEs through the SonarLint project. SonarLint provides real-time information about possible issues even before the code is committed to the source code repository.

How does this help us write better code? On one hand, SonarQube provides information to product managers about the current state of the project so that actionable tasks can be created to mitigate possible issues. On the other hand, developers gain insight into their code and can set targets and thresholds for their project.

Here, we are going to set up a simple integration with Jenkins using Maven. The first step is to add SonarQube as a Docker container to our docker-compose.yaml file with the following lines:

  sonarqube:
    container_name: sonarqube
    image: sonarqube:7.1
    ports:
      - 9000:9000
    networks:
      - dev


After running ‘docker-compose up -d,’ SonarQube will be available on http://127.0.0.1:9000. The initial credentials are admin/admin.
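
To verify from the shell that the server has finished starting, SonarQube's web API offers a status endpoint (the exact JSON fields may vary by version):

curl http://127.0.0.1:9000/api/system/status
# Should report "status":"UP" once the server is ready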

SonarQube can be integrated in various ways. A simple way to integrate it is as a Maven plugin. We need to add the configuration to our Jenkinsfile in order to run SonarQube as part of the build pipeline. Add the following lines to the Jenkinsfile after the “Build” stage:

pipeline {
    ...
        stage ('Sonar') {
            steps {
                sh 'mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.4.0.905:sonar -Dsonar.host.url=http://sonarqube:9000'
            }
        }
    ...
}


Commit and push the changes to the master branch as shown before. We can trigger a build in Jenkins by clicking the “Build Now” button in our job.

After a successful build, we will see our new step in Jenkins and results in SonarQube. In SonarQube, open the Projects page:

The newly scanned project is now available, and we can see more details about our code quality. In our current implementation, there are three code smells:

As shown, we get a lot of information about our project. Beyond what is shown, SonarQube can also gather data about code coverage and be integrated into the code review process. One recommendation I can give is to integrate SonarQube on day one of the project and make it mandatory to comply with its rules. This is the easiest and, sometimes, the only way to keep SonarQube less red and the code more manageable.
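
For example, code coverage can be fed into the analysis by attaching the JaCoCo agent to the Maven build. A hedged sketch of the pom.xml addition (the plugin version is an assumption, and the project needs actual tests for coverage to show up):

<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.2</version>
    <executions>
        <execution>
            <goals>
                <!-- Attach the JaCoCo agent so test runs produce coverage data -->
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <execution>
            <id>report</id>
            <phase>verify</phase>
            <goals>
                <!-- Generate the report the SonarQube scanner picks up -->
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>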

Bringing Legacy Systems onto the Right Track

So far, our recommendation has been to use all of these tools from the first steps of a project. Surely, it would be ideal to start the right way immediately. But it's not always like that! We often drag along old legacy systems without proper source control, quality checks, or continuous integration processes in place. For those projects, it's never too late to bring them onto the right track; better late than never! Ideally, cover them with tests first, for easier refactoring and modernizing. Once tests are in place, some tools can help with modernization. Unfortunately, there is no good open-source solution here. We can make use of IntelliJ's capabilities, or use a cheaper option with the same or an even bigger set of capabilities, like the jSparrow Eclipse plugin, which is also more convenient when using Eclipse. With more modern code, we can calmly continue setting up the modern continuous integration, delivery, and deployment pipeline, and proceed with development the right way.

Conclusion

This article described an easy environment setup to help and guide you in producing higher-quality, easily readable, and maintainable code. So, let's recapitulate the tools that were used and their benefits!

There is no excuse for not storing your code in a source code repository! Version control, like Git, helps you by saving the entire history of the project, with every change being traceable.

Code review can help prevent bugs by having a second pair of eyes validate the code, maintain coding standards (like formatting), etc. A person reviewing the code has a completely different perspective and sometimes even more knowledge; make use of it to learn and improve on a daily basis. But remember to keep it small: the smaller the pull request, the better. And keep in mind that it should happen in a positive and blameless environment!

Continuous integration comes right after source control as the most precious addition one could, and should, have on a project. Save your time and improve quality by automating everything you can! With Jenkins, you can schedule automated builds with automated test executions and quality checks. You can also set up automatic rejection of pull requests whose builds fail.

Last, but definitely not least, set yourself on the way to quality code by using SonarQube for automatic analysis. With SonarLint integrated into your IDE, you can check every line as you write it and produce nicer, more readable, quality code!

Thank you for reading!

