Functional Web Services Testing Made Easy with SoapUI - Part 3


In the first two parts of this series (Part 1 and Part 2), we saw how to use SoapUI to write functional tests for web services and how to use Groovy for test setup, property transfer, and assertions. As we have already said, tests should be integrated with your builds and run with them. If your builds are automated and run as part of a continuous integration (CI) setup, SoapUI comes in even handier: you can run the test suites and test cases you created and, moreover, generate JUnit reports.

This part of the series guides you step by step through the Ant tasks for running tests written with SoapUI, generating JUnit reports, integrating with CI, and, to top it all off, measuring code coverage with Cobertura. When you are done with this part, you will be able to reap the combined benefits of writing tests, continuous integration, and code coverage.

The project we will use is the famous PetStore from Sun’s BluePrints. We will use JPA (with TopLink) for the persistence tier and session beans for the business tier; by adding simple annotations, we will publish these session beans as web services. The project has no front end and will be deployed to GlassFish, the Java EE reference application server. It is a prototype written for this series in less than an hour using the NetBeans IDE; however, we will pass over the details of project creation in favor of keeping our eyes on the prize: using SoapUI with Ant, generating JUnit reports, integrating with CI, and measuring code coverage.

Source code for the AccountManager session bean interface:
package com.stelligent.biz.ws;

import java.util.Collection;

import javax.ejb.Remote;

import com.stelligent.ent.jpa.Account;

/**
 * @author msubbarao
 */
@Remote
public interface AccountManager {
    public Account create(Account info);
    public Account update(Account info);
    public void remove(Account info);
    public Account[] findAllAccounts();
    public Account findByUsername(String username);
}

Implementation Class for Account Manager with annotations for web services:

package com.stelligent.biz.ws;

import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import javax.persistence.*;

import com.stelligent.ent.jpa.Account;

import java.util.*;

/**
 * @author msubbarao
 */
@Stateless
@WebService(name = "AccountManager", serviceName = "AccountManagerService", targetNamespace = "urn:AccountManagerService")
@SOAPBinding(style = SOAPBinding.Style.RPC)
public class AccountManagerBean implements AccountManager {

    @PersistenceContext
    private EntityManager manager;

    /**
     * Creates the Account entity.
     * @param info the Account object to persist.
     * @return the info object.
     */
    @WebMethod
    public Account create(Account info) {
        // make the new Account managed and scheduled for insert
        this.manager.persist(info);
        return info;
    }

    /**
     * Updates the Account entity.
     * @param info the Account object used to update.
     * @return the merged Account object.
     */
    @WebMethod
    public Account update(Account info) {
        return this.manager.merge(info);
    }

    /**
     * Removes the Account entity.
     * @param info the Account object to remove.
     */
    @WebMethod
    public void remove(Account info) {
        this.manager.remove(this.manager.getReference(Account.class, info.getUserid()));
    }

    /**
     * Retrieves all the Account entities as an array.
     * @return the objects as an Account array.
     */
    @WebMethod
    public Account[] findAllAccounts() {
        Query query = this.manager.createQuery("SELECT o FROM Account o");
        List<Account> accounts = new ArrayList<Account>();
        Iterator<Account> iter = query.getResultList().iterator();
        while (iter.hasNext()) {
            accounts.add((Account) iter.next());
        }
        return accounts.toArray(new Account[accounts.size()]);
    }

    /**
     * Retrieves the Account entity using the primary key.
     * @param username the String primary key.
     * @return the Account object, or null if not found.
     */
    @WebMethod
    public Account findByUsername(final String username) {
        try {
            return this.manager.find(Account.class, username);
        } catch (Exception e) {
            return null;
        }
    }
}



Let’s consider each of these in more detail.

1. SoapUI and Ant
SoapUI provides command-line tools to run various tests. To run the functional tests, we use the com.eviware.soapui.tools.SoapUITestCaseRunner class, which takes the path to the SoapUI project file containing the tests, plus a number of options (for example, -j to generate JUnit-style reports, -f to set the folder the reports are written to, -r to print a summary report, and -a to export all test results).

Since the runner runs from the command line, we will use the exec task in Ant. SoapUI ships with a launch script for the operating system you installed it on: testrunner.bat on Windows or testrunner.sh on UNIX. (If you develop on a Windows machine but your build machine runs a different operating system, for example UNIX, you will need to adjust the script you call accordingly.)

Let’s begin by writing out what we need to add to the Ant build file to run these functional tests on a Windows machine. We need to define a target that runs the tests; for clarity, we will also define properties for the file locations involved:

a. Specify the properties for SoapUI location and the project file

<property name="soapui-location" location="C:\\Program Files\\eviware\\soapUI-1.7.6\\bin"/>
<property name="soapui-project-xml-location" location="Weather-soapui-project.xml"/>

b. Define a target to run the functional tests.

<target name="run-soapui-tests" description="runs all functional SoapUI tests">
    <exec dir="${soapui-location}" executable="cmd.exe" failonerror="true">
        <arg line="/c testrunner.bat -j -f${reports} -r -a ${soapui-project-xml-location}"/>
    </exec>
</target>

At this point, we can open a command window and run the run-soapui-tests target. The output is:

2. JUnit Reports
Adding the -j switch to the Ant task's arg line above produces XML reports. SoapUI maps each TestSuite to a report package and each TestCase to a report testcase. The XML reports generated from the Ant task can be further transformed using the junitreport Ant task:

<target name="generate-report" description="creates JUnit-compatible XML reports">
    <junitreport todir="${reports}">
        <fileset dir="${reports}">
            <include name="TEST-*.xml"/>
        </fileset>
        <report format="frames" todir="${reports}/html"/>
    </junitreport>
</target>

This produces output like this:
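Under the hood, each TEST-*.xml file follows the standard JUnit report schema: the testsuite element carries tests and failures counts, with one testcase element per SoapUI TestCase. As a hypothetical illustration of what CI tooling does with these files, here is a small Java sketch that reads the failure count from a report (the report content and names are made up):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class ReportCheck {

    // Parses a JUnit-style report and returns the value of the
    // testsuite element's "failures" attribute.
    static int failures(String reportXml) throws Exception {
        Element suite = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(reportXml.getBytes("UTF-8")))
                .getDocumentElement();
        return Integer.parseInt(suite.getAttribute("failures"));
    }

    public static void main(String[] args) throws Exception {
        // Illustrative report content; SoapUI writes one TEST-<suite>.xml per TestSuite.
        String xml = "<testsuite name='AccountManagerTestSuite' tests='2' failures='1'>"
                + "<testcase name='CreateAccountTestCase' time='0.5'/>"
                + "<testcase name='FindByUsernameTestCase' time='0.4'>"
                + "<failure message='expected an account'/></testcase>"
                + "</testsuite>";
        System.out.println(failures(xml)); // prints 1 for this sample report
    }
}
```

A CI server does essentially this when deciding whether to mark a build unstable.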

3. Integration with CI
Continuous integration is all about compiling, testing, inspecting, and deploying to your application server at each commit. The benefit of CI is simple: when you build software often, you find code defects early. A lot of CI servers are available, both open source and commercial. Hudson is an open source CI server with a very simple setup, just three steps (yes, you heard me right, just three steps):
a. Download the latest version of Hudson.
b. Open a command window and type java -jar hudson.war
c. Open up a browser and go to the URL http://localhost:8080
That’s all you need to get the CI server up and running, in less than two minutes. You can read more about configuring Hudson, downloading plugins, and much more on the Hudson website.

As we will now see, it’s very easy to run the functional tests we wrote in this series once we deploy our application to a server. If all the tests run, the build file will generate JUnit reports; these reports can be integrated within our CI dashboard. If any of the tests fail, we fail the build, using the failonerror attribute of the Ant task. Based on how CI has been set up, everyone in the team can receive an email or a text message should the build break.

Let’s take a look at how we run these functional tests and integrate them with CI:
Once you have Hudson installed and running, you must add your projects to Hudson as jobs for it to monitor. We are going to create a single job that runs at midnight.

1. Go to the Hudson main page, and click New Job. This will open a page like the following:

2. Enter a descriptive name, since this is the name that will be displayed on the Hudson dashboard. Select "Build a free-style software project". Click the OK button and you’ll be presented with yet another screen.

3. On this screen, select your source code management system. You will be presented with a detailed screen based on your SCM; fill in all the details. Hudson needs this information to check out all the source code assets to build the project.

4. Hudson can be configured to build your project on a schedule. In our case, the job needs to run at midnight, so here are the changes needed to run the job on a nightly basis.
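Hudson's "Build periodically" field accepts cron-style syntax; assuming the standard five-field cron notation, a run-at-midnight schedule would look like this:

```
# minute  hour  day-of-month  month  day-of-week
0 0 * * *
```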

5. The next step is to configure Hudson to build our project using Ant. As you can see from the image above, Hudson provides many ways of building your project. In the PetStore case, I am setting up some environment variables first and therefore using a simple batch file.

6. In the Post-build Actions section, select the Publish JUnit test result report option. Since we are generating the JUnit reports from SoapUI, specify the location where Hudson can find the XML files produced when the tests run through Ant.

7. Now that we have Hudson set up, we will force a build. This causes Hudson to check out all the source code artifacts from SCM and initiate a build. In my case the build failed: SoapUI generates the JUnit reports with no package name, and Hudson complains about this and throws an NPE. A possible workaround to publishing the JUnit reports is to use the Archive the artifacts option in Post-build Actions. Here are the Hudson screenshots and the output:

Hudson console output:

4. Code Coverage using Cobertura
There are many open source tools available for measuring line and branch coverage. By using them, you can find areas of your code that need more tests. In most cases, when a developer checks something in, the commit build runs, which in turn runs the unit tests and some integration tests. A secondary build, often run at night, can instrument your code for coverage reporting, deploy to the application server, and run the available tests. This creates a detailed report that the team can see first thing in the morning.

Cobertura is a code coverage analysis tool for Java. You can use it to determine what percentage of your source code is exercised by your unit tests. Cobertura adds instrumentation directly to the bytecode and is easy to integrate with Apache Ant. It comes with its own Ant task definitions for you to use.
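Before the cobertura-instrument and cobertura-report tasks used below will resolve, the build file has to load Cobertura's task definitions. A typical declaration looks like this (the ${cobertura.dir} property and jar layout are assumptions about your local install):

```xml
<!-- classpath holding cobertura.jar and its dependencies -->
<path id="cobertura.classpath">
    <fileset dir="${cobertura.dir}">
        <include name="cobertura.jar"/>
        <include name="lib/**/*.jar"/>
    </fileset>
</path>

<!-- registers cobertura-instrument, cobertura-report, and friends -->
<taskdef classpathref="cobertura.classpath" resource="tasks.properties"/>
```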

The steps to follow here are simple:
1. Change the build file to instrument the code. Here are the changes we need in the build file to instrument our source code:

<target name="instrument" depends="compile">
    <cobertura-instrument datafile="${cobertura-data-file}" todir="${classes.dir}">
        <fileset dir="${classes.dir}">
            <include name="**/*.class"/>
        </fileset>
    </cobertura-instrument>
</target>

2. Deploy the instrumented code to GlassFish. Once the code is instrumented, deploy it to your application server. The Ant task provided here is for the GlassFish v2 application server.

<target name="gf2-deploy-ear" depends="instrument">
    <echo message="Deploying webservices-samples"/>
    <taskdef name="sun-appserv-deploy"
             classname="org.apache.tools.ant.taskdefs.optional.sun.appserv.DeployTask"
             classpath="${GLASSFISH_HOME}/lib/sun-appserv-ant.jar"/>
    <sun-appserv-deploy user="admin"
                        host="localhost" port="8484"
                        file="${ear-file-name}" asinstalldir="${GLASSFISH_HOME}"/>
</target>

3. Generate a report. Cobertura stores coverage information in a file called cobertura.ser. Using the report task, Cobertura can generate coverage reports in either HTML or XML format.

<target name="coverage-report" depends="run-soapui-tests">
    <cobertura-report datafile="${cobertura-data-file}" srcdir="${src.dir}" destdir="${coverage.xml.dir}" format="xml"/>
    <cobertura-report datafile="${cobertura-data-file}" srcdir="${src.dir}" destdir="${coverage.html.dir}"/>
</target>

4. Integrate these reports with Hudson using the Cobertura plug-in.

Installing a Hudson plug-in is as simple as installing Hudson itself. Download the latest version of the plug-in, then click the Manage Hudson link on Hudson's homepage. Next, click the Manage Plugins link, where you can upload the plug-in archive file. Once the plug-in has been installed, restart Hudson. If everything goes well, you should see a screen like this:

Once you have downloaded and installed the Cobertura plug-in, you need to configure your job to use the generated reports. Click Configure on the PetStore_Nightly job to display the job configuration, choose Publish Cobertura Coverage Report, and provide the file name pattern Hudson can use to locate the Cobertura XML report files.


5. Finally, let's force a build and see the results.

PetStore Dashboard:


Cobertura Coverage Report in Hudson:


Since we also generated the HTML reports for Cobertura within the Ant task, we can take a look at those as well:
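Putting the pieces together, the whole nightly flow (compile, instrument, deploy, run the SoapUI tests, report coverage) can be driven from a single Ant target that chains the targets shown earlier; a sketch:

```xml
<!-- Ant resolves depends left to right, pulling in transitive dependencies:
     compile -> instrument -> gf2-deploy-ear -> run-soapui-tests -> coverage-report -> generate-report -->
<target name="nightly" depends="gf2-deploy-ear, coverage-report, generate-report"
        description="instrumented build, deploy, functional tests, and coverage reports"/>
```

Pointing the Hudson job's Ant build step at this one target keeps the job configuration minimal.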

P.S.: I should mention that when deploying Cobertura-instrumented code to an application server, Cobertura doesn't update the data file until the application server is shut down. So, instead of stopping and starting the server every time, I used the workaround described on the Cobertura website: I placed the following code in one of the session beans, published it as a web service, added a test case for this method, and made it the last test case called within SoapUI.

try {
    String className = "net.sourceforge.cobertura.coveragedata.ProjectData";
    String methodName = "saveGlobalProjectData";
    Class saveClass = Class.forName(className);
    java.lang.reflect.Method saveMethod = saveClass.getDeclaredMethod(methodName, new Class[0]);
    saveMethod.invoke(null, new Object[0]);
} catch (Throwable t) {
    // ignore: the Cobertura classes are only present in instrumented builds
}
In this part, you learned about the Ant tasks for running tests written with SoapUI, generating JUnit reports, integrating with CI, and measuring code coverage with Cobertura. I've used popular open source tools like Ant, Hudson, and Cobertura, and shown you how easy it is to set up a CI environment with them and run functional tests for web services written using SoapUI.

You can now go ahead and start building web services with great confidence. Enjoy!


