A Look at Unit, Integration, and Performance Testing


Always think about maintenance when implementing both features and tests. Maintenance means you do not need to do low-value work.


The intention of this article is to provide a simple example of testing a REST API web service. The code for this example can be found here.

In the sample, we will use this technological stack:


Gradle is a tool used to automate the build process. There are a lot of comparisons between Maven and Gradle. We chose Gradle because of its incremental compilation, its smart testing (if a project has not changed, the tests will not be re-executed), and its Groovy scripting, with which you can do cool things like:

task extractApi(type: Copy) {
    from zipTree("build/lib/vv-automation-resources-${project.version}.jar")
    into 'build/resources/main'
}

Spring Boot 1.4 

Spring Boot 1.4 may be the easiest way to bootstrap a Java EE application.

Spock 1.0

Spock 1.0 is a Groovy testing framework that can also be used to test Java applications. Its expressive DSL is great, but you have to see it in action to appreciate its full power.

Cucumber 1.2

Cucumber 1.2 helps us because we will use BDD with Java, and we want our Business Analysts to review the scenarios.

Serenity 1.1 

Serenity 1.1 is useful because of its ready-to-use IoC container and its reports.

Rest Assured 3.0

Rest Assured 3.0 is helpful because of its fluent way of testing a web service.

Gatling 2.2

Gatling 2.2 makes it incredibly easy to modify the configuration in order to test different parameters of performance.

The API REST Service Implementation

This is a simple Java REST web service that will generate scattergrams and store them in a given path.

Like the web.xml in old web applications and the pom.xml in a Maven project, the build.gradle is the first file to look at to understand how an application is structured. Here are some interesting parts of the build.gradle:

  • The integration-test configuration inherits from the unit-test configuration.
  • Finally, we generate an installable service.
plugins {
    id 'java'
    id 'scala'
    id 'groovy'
    id 'org.springframework.boot' version '1.4.2.RELEASE'
    id 'com.github.lkishalmi.gatling' version '0.4.1'
}

def operatingSystems = ['LinuxRedHat', 'LinuxSuse', 'Windows']

dependencies {
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-web', version: springBootVersion
    compile group: 'de.codecentric', name: 'spring-boot-admin-server', version: '1.4.4'

    // For unit testing
    testCompile group: 'org.codehaus.groovy', name: 'groovy-all', version: groovyVersion
    testCompile group: 'org.spockframework', name: 'spock-core', version: spockVersion
    testCompile group: 'org.spockframework', name: 'spock-spring', version: spockVersion

    // For integration testing
    // ...
}

// Generation of the service as an executable
springBoot {
    executable = true
}

jar {
    manifest {
        attributes 'Class-Path': configurations.compile.collect { it.getName() }.join(' '),
                   'Main-Class': 'com....IcaImageGenSrvApplication'
    }
}

// Task rule to create daemon packages for the compatible operating systems
tasks.addRule('Pattern: daemonPackage<ID>') { String taskName ->
    if (taskName.startsWith('daemonPackage')) {
        task(taskName, type: Zip, dependsOn: 'jar') {
            def os = taskName.replace('daemonPackage', '')
            baseName = "${project.name}-daemon-${os}"

            // Create the base structure
            includeEmptyDirs = true
            exclude '**/*.keep'

            // Copy the libs
            from([configurations.runtime, tasks.jar.archivePath]) {
                rename(tasks.jar.archivePath.name, "${project.name}.jar")
                into 'lib'
            }
        }
    }
}

task daemonPackage(group: 'build') {
    dependsOn operatingSystems.collect { "daemonPackage$it" }
}

The service is built with Spring Boot 1.4. The main class takes care of the initialization of the service. Part of the chart generation is done in an async process, so we add the @EnableAsync annotation to the initialization class.

@SpringBootApplication
@EnableAsync
public class IcaImageGenSrvApplication extends AsyncConfigurerSupport {

    // Pool settings, typically injected from the configuration (e.g. with @Value)
    private int maxPoolSize;

    private int queueCapacity;

    private String threadNamePrefix;

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(maxPoolSize);
        taskExecutor.setQueueCapacity(queueCapacity);
        taskExecutor.setThreadNamePrefix(threadNamePrefix);
        taskExecutor.initialize();
        return taskExecutor;
    }

    public static void main(String[] args) {
        SpringApplication.run(IcaImageGenSrvApplication.class, args);
    }
}

Then, we have a service to validate the input values. Here are some things to note about this file:

  • The logger definition takes advantage of MethodHandles, so we can copy and paste it from one class to another without worries.
  • The ScattergramService injected by Spring will make the class easy to test.
  • The validation takes care of all the possible problems with the inputs and returns a message in case of any issues.
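The trick behind that copy-paste-safe logger is that `MethodHandles.lookup().lookupClass()` resolves to whatever class contains the call, so the class name is never hard-coded. A minimal, SLF4J-free sketch (the class name here is just for illustration) of what the lookup returns:

```java
import java.lang.invoke.MethodHandles;

public class LookupDemo {
    // Resolves to LookupDemo because the call happens inside LookupDemo;
    // paste this same line into another class and it resolves to that class.
    static final Class<?> LOG_CLASS = MethodHandles.lookup().lookupClass();

    public static void main(String[] args) {
        System.out.println(LOG_CLASS.getName());
    }
}
```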
public class ImageReceiverService {

    private static final Logger LOGGER = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    private ScattergramService scattergramService;

    public ImageReceiverService(ScattergramService scattergramService) {
        this.scattergramService = scattergramService;
    }

    /**
     * Receives the DTO with the parameters to create a scattergram.
     * Performs the validations and queues it.
     * @param scatterParams the scattergram parameters
     * @return an ACK msg to notify that the scattergram is going to be created,
     *         or a NACK message explaining why it can't be created
     */
    public String generateScattergram(final ScattergramParams scatterParams) {
        String result = "ACK";
        try {
            MsgValidation validation = validateInputs(scatterParams);
            if (validation.passed) {
                // push the scatter data to the generation queue
                scattergramService.generateFile(scatterParams);
            } else {
                result = validation.msg;
            }
        } catch (Exception ex) {
            LOGGER.error("Error generating the scattergram", ex);
            result = ex.getMessage();
        }
        return result;
    }
}
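The `MsgValidation` type used by `generateScattergram` is not shown in the article; judging from the `passed` and `msg` fields it exposes, it can be sketched as a simple immutable value holder (the factory method names below are my own, not the project's):

```java
public class MsgValidation {
    public final boolean passed;
    public final String msg;

    private MsgValidation(boolean passed, String msg) {
        this.passed = passed;
        this.msg = msg;
    }

    // Hypothetical factories; the real class may build these differently.
    public static MsgValidation ok() {
        return new MsgValidation(true, "ACK");
    }

    public static MsgValidation failed(String reason) {
        return new MsgValidation(false, "NACK: " + reason);
    }
}
```

With a holder like this, `validateInputs` can collect every input problem into `msg`, and `generateScattergram` simply forwards it as the response.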


Finally, there's the ScattergramService, where the method is annotated with @Async in order to let Spring manage the execution in an asynchronous thread.

public class ScattergramService {

    @Async
    public void generateFile(ScattergramParams scatter) {
        // ... chart generation and storage in the given path
    }
}

The Test Implementations

The "Old Way"

The first approach is "old school" testing. In the real project, this ended up being 600 lines of code. Of course, this is not what we recommend you do.

@Test
public void allParamsAreInitializedProperly() throws Exception {
    //... more setup
    reqBody testBody = new reqBody();

    try {
        ObjectMapper mapper = new ObjectMapper();
        String bodyJsonString = mapper.writeValueAsString(testBody);
        //... build the request, send it, and check the response
    } catch (Exception ex) {
        fail(ex.getMessage());
    }
}

@Test
public void emptyBody() throws Exception {
    try {
        String bodyJsonString = "";
        //... send the empty body and check the error response
    } catch (Exception ex) {
        fail(ex.getMessage());
    }
}

Unit Testing

With Spock, we can see how easy it can be to test the input arguments of a function, and how dependency injection facilitates testing and mocking the dependent classes.

class ImageReceiverServiceTest extends Specification {

    void 'parameters checker'() {
        given:
        def scattergramService = Mock(ScattergramService)
        def imageReceiverService = new ImageReceiverService(scattergramService)
        def scatterParams = new ScattergramParams()
        use(InvokerHelper) { scatterParams.setProperties(givenParameters) }

        when:
        def result = imageReceiverService.generateScattergram(scatterParams)

        then:
        result == expectedResult

        where:
        givenParameters                                            | expectedResult
        [token: '#', rgbToken: ',', imgHeight: 300, imgWidth: 300] | 'ACK'
    }
}

Integration Testing

With BDD (in this case, Cucumber and Serenity), our code is less than 100 lines. Here are some things to note about this file:

  • SpringBootTest will start the service, so here we have an integration test.
  • The ScattergramParams is automatically cast from the example table in the Cucumber feature, so there's less to code.
  • The call to the service is done in a fluent way (rest().given()...). This results in more understandable code and better maintenance.
  • We can use fields to pass values between steps because all the steps belong to the same feature.
  • In the asserts, we are using AssertJ, which gives you all the details you need in case of an error. You should try it to see how much AssertJ's detailed error reporting can help you.
@SpringBootTest(classes = IcaImageGenSrvApplication.class, webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class IcaImageSteps {

    private ScattergramParams scattergramParam;
    private Response reqAnswer;

    @Given("the following image has just come out:")
    public void requestParameters(List<ScattergramParams> scattergramParams) {
        scattergramParam = scattergramParams.get(0);
    }

    @When("I send the request to generateScatterGram")
    public void sendRequest() {
        reqAnswer = rest().given().contentType(MediaType.APPLICATION_JSON_VALUE)
                .body(scattergramParam)
                .post("/..."); // endpoint path elided in the original
    }

    @Then("I should be able to find the content OK")
    public void shouldBeAbleToFindThatAllIsOk() {
        String content = reqAnswer.then().statusCode(HttpStatus.OK.value())
                .extract().asString();
        assertThat(content).isEqualTo("ACK");
    }
}
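The value of AssertJ in that last step is that a failing `assertThat(...)` reports both the expected and the actual value instead of a bare "assertion failed". As a stdlib-only illustration of the idea (AssertJ's real fluent API is much richer than this), compare what a descriptive failure message carries:

```java
public class DetailDemo {
    // A tiny hand-rolled check in the spirit of AssertJ: on failure, the
    // error carries both the expected and the actual value.
    static void assertEqualsDetailed(String expected, String actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError(
                    "expected: \"" + expected + "\" but was: \"" + actual + "\"");
        }
    }

    public static void main(String[] args) {
        try {
            assertEqualsDetailed("ACK", "NACK: imgWidth missing");
        } catch (AssertionError e) {
            // The message immediately tells you what the service answered.
            System.out.println(e.getMessage());
        }
    }
}
```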


With Serenity, we can have these awesome reports:

  • A summary of all the executed tests, where the test cases can be filtered by capabilities, features, etc.


  • Details about the executions.


  • From the detail of the execution, we can access the REST queries executed by RestAssured. We will have all the details in case of error.


Performance Testing

In this case, we will use Gatling as the performance framework, but we can also take advantage of the “Spring Boot Admin Server” that we have up and running by simply putting @EnableAdminServer in our Main Spring class. If this annotation is working properly, after starting the service, we will see this kind of line at the end of the log:

..Application registered itself as {managementUrl=http://localhost:20666, healthUrl=http://localhost:20666/health, serviceUrl=http://localhost:20666, n

Then, we will be able to access the "Admin Server" and see all the parameters of the execution:


In the Gatling implementation, we have to take care of cleaning up old executions, since a performance test can generate a lot of files, and we add "asserts" so that the simulation can be used from the CI:

class FullWorkLoadDynamicSimulation extends Simulation {

  val noOfUsers      = 10
  val rampUpTimeSecs = 5
  val testTimeSecs   = 60
  val minWaitMs      = 1000 milliseconds
  val maxWaitMs      = 3000 milliseconds

  val httpConf = http // ... base URL and common configuration elided

  val headers_10 = Map("Content-Type" -> "application/json")
  val incrementalId = new AtomicInteger(0)
  var imagesFolder = "build/tmp/fharts"
  var image = imagesFolder + "/scattergram"
  var extension = ".png"
  var imagesFolderPath = new File(imagesFolder)

  // Clean up the files left behind by previous executions
  deleteRecursively(imagesFolderPath)

  val scn = scenario("Generate Scattergram")
    .during(testTimeSecs) {
      exec(session => session.set("imageFile", image + incrementalId.getAndIncrement + extension))
        .exec(
          http("Generate Scattergram dynamic")
            .post("/...") // request details elided in the original
        )
        .pause(minWaitMs, maxWaitMs)
    }

  setUp(scn.inject(rampUsers(noOfUsers) over (rampUpTimeSecs)))
    .protocols(httpConf)
  // ... assertions for the CI elided

  def deleteRecursively(file: File): Unit = {
    if (file.isDirectory)
      file.listFiles.foreach(deleteRecursively)
    if (file.exists && !file.delete)
      throw new Exception(s"Unable to delete ${file.getAbsolutePath}")
  }
}
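The Scala `deleteRecursively` above handles the cleanup of previous runs; if the same housekeeping is needed on the Java side of the project, `java.nio.file` can do it. A sketch (the class name is just for illustration):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class Cleanup {
    // Deletes a whole directory tree. Walking in reverse order visits the
    // children before their parent, so every directory is empty on delete.
    public static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root)) {
            return; // nothing left from previous executions
        }
        try (Stream<Path> paths = Files.walk(root)) {
            paths.sorted(Comparator.reverseOrder())
                 .forEach(p -> {
                     try {
                         Files.delete(p);
                     } catch (IOException e) {
                         throw new UncheckedIOException(e);
                     }
                 });
        }
    }
}
```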

Some Thoughts About Testing

Think about what you really want to test and always find out the best way to test it.

By this, I mean that it is not only about what kind of test to write (i.e., unit test, integration test, etc.); it is also about how to best focus our testing efforts. The main goal of implementing tests is to find errors, and to find out quickly and easily why an error happens and where it comes from.

Sometimes, I feel that we can find three different approaches in testing:

  1. There are some people who simply don’t test (and I suppose that they are not reading this article!). That's just bad practice.
  2. There are some people that just test the "happy path," but this is not enough.
  3. There are some people that start by thinking about testing all the possibilities. Here, we may end up implementing too many tests. Let me explain this thought.

What I want you to take into account here is that we should think about the maintainability of our tests. We have to take care not to overload our application with tests, because something will eventually change (no matter how simple our application is), and then we may realize that we wrote a lot of tests that don't make much sense or are duplicates. The maintenance effort of those tests will be costly. For example, if we are working with BDD, it is very common to see tests like this:

Scenario1: I can consult the value...
Given I have some values
When I get a value
Then I see the value

Scenario2: I can modify a value
Given I have some values
When I get a value
And I modify the value
Then I see the value is changed

However, if we see this kind of implementation:

@When("I send the request to generateScatterGram")
public void sendRequest() {

...then we realize that the frameworks we are using give us detailed information in case of an error in the When condition. I know that this is a very delicate point, with a thin line between what can be considered overworking and doing the right thing. From my point of view, Scenario1 could be omitted because it is implicitly tested in the second one: the Given will fail in case of error, and we still get all the detailed information about that error.

This is just one example of how we can take advantage of what is implicitly tested, with detailed information in case of error, even knowing that each of our tests should focus on testing one thing.

Another example is UI testing with the Page Factory pattern, verifying on every load of the page object that all the components of the page are correct. From my point of view, if there is a problem, it should be discovered by a specific functional test; otherwise, once again, we could be doing more work than is really needed.

Please, always think about maintenance when implementing features and when implementing tests. Maintenance means you do not need to do "low-value" work.

Finally, if we have low-value (needless) tests, they also waste time in the execution of the test suite. Execution time is another variable to take care of in CI, even if we have designed our testing solution with parallel execution in mind.

P.S. Thank you to Fernando Blanco and Isaac Aymeric for their collaboration.

