
Testing JavaEE With Servlet, JAX-RS, Batch, and Microprofile


Want to learn how to implement a new artifact that uses the MicroProfile API within Java EE 7? Read on to learn how.


One of the concepts that made Java EE appealing for the enterprise is its strong backward compatibility, which ensures that years of investment in R&D can be reused in future developments.

Nevertheless, one of the least understood facts is that Java EE is, in the end, a set of curated APIs that can be extended and improved with additional EE-based APIs, e.g., MicroProfile, DeltaSpike, and vendor-specific improvements, including Hazelcast on Payara and Infinispan on WildFly. In this article, I'll try to answer a recent question from my development team:

Is it possible to implement a new artifact that uses the MicroProfile API within Java EE 7? May I also use this artifact in a Java EE 8 server?

To answer this question, I prepared a POC to demonstrate different Java EE capabilities.

Is Java EE Backward Compatible? And Is It Safe to Assume a Clean Migration From EE 7 to EE 8?

One of the most pervasive rules in IT is "if it ain't broke, don't fix it." However, being "broke" has become relative with regard to security, bugs, and features.

Security vulnerabilities and regular bugs are patched through vendor-specific updates in Java EE, retaining feature compatibility across EE API levels. These kinds of updates are considered safe and should be applied proactively.

However, once a new EE version hits the street, each vendor publishes its own product calendar and release schedule, making it responsible for future updates. It is then expected that any Java EE user will update their stack (or else perish).

Along these lines, Java EE has a complete set of requirements and backward-compatibility instructions for vendors, spec leads, and contributors. This is especially important considering that with every version of Java EE we receive:

  • New APIs (like Batch in EE 7 or JSON-B in EE 8)
  • APIs that don't change and are included in the next EE version (like Batch in EE 8)
  • APIs with minor updates (Bean Validation in EE 8)
  • APIs with new features and interfaces (like the reactive client in JAX-RS for EE 8)

According to the compatibility requirements, if your code implements only standard EE APIs, you get source compatibility, binary compatibility, and behavior compatibility for any application that uses a previous version of the specification. At least, that's the idea.

Creating a "Complex" Implementation

To test this assumption, I've prepared a POC that implements the following:

  • Servlets (updated in EE 8)
  • JAX-RS (updated in EE 8)
  • JPA (a minor update in EE 8)
  • Batch (does not change in EE 8)
  • MicroProfile Config (extension)
  • DeltaSpike Data (extension)

Batch Structure

This application loads a bunch of IMDb records from a CSV file in the background and saves the records in Derby (Payara 4) or H2 (Payara 5) using the jdbc/__default JTA data source.
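
The reader implemented below splits each line on commas into a name and a release year, and skips the first line on a fresh start, so a hypothetical input file (the rows here are illustrative, not the actual dataset) would look like this:

name,release_year
The Shawshank Redemption,1994
The Godfather,1972
Pulp Fiction,1994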

For reference, the complete Maven project of this POC is available on GitHub.

Part 1: File Upload

The POC a) implements a multipart servlet that receives files from a plain HTML form, b) saves the file, using MicroProfile Config to retrieve the final destination path, and c) calls a batch job named csvJob:

@WebServlet(name = "FileUploadServlet", urlPatterns = "/upload")
@MultipartConfig
public class FileUploadServlet extends HttpServlet {
    @Inject
    @ConfigProperty(name = "file.destination", defaultValue = "/tmp/")
    private String destinationPath;

    @Inject
    private Logger logger;

    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String description = request.getParameter("description");
        Part filePart = request.getPart("file");
        String fileName = Paths.get(filePart.getSubmittedFileName()).getFileName().toString();


        //Save using buffered streams to avoid memory consumption
        try(InputStream fin = new BufferedInputStream(filePart.getInputStream());
                OutputStream fout = new BufferedOutputStream(new FileOutputStream(destinationPath.concat(fileName)))){

            byte[] buffer = new byte[1024*100];//100kb per chunk
            int lengthRead;
            while ((lengthRead = fin.read(buffer)) > 0) {
                fout.write(buffer,0,lengthRead);
                fout.flush();
            }

            response.getWriter().write("File written: " + fileName);

            //Fire batch Job after file upload
            JobOperator jobOperator = BatchRuntime.getJobOperator();
            Properties props = new Properties();
            props.setProperty("csvFileName", destinationPath.concat(fileName));
            response.getWriter().write("Batch job " + jobOperator.start("csvJob", props));
            logger.log(Level.WARNING, "Firing csv bulk load job - " + description );

        }catch (IOException ex){
            logger.log(Level.SEVERE, ex.toString());

            //Report the failure back to the client
            response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Error while processing the file upload");
        }
    }
}


You also need a plain HTML form:

<h1>CSV Batchee Demo</h1>
<form action="upload" method="post" enctype="multipart/form-data">
    <div class="form-group">
        <label for="description">Description</label>
        <input type="text" id="description" name="description" />
    </div>
    <div class="form-group">
        <label for="file">File</label>
        <input type="file" name="file" id="file"/>
    </div>

    <button type="submit" class="btn btn-default">Submit</button>
</form>
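
The file.destination property injected in the servlet falls back to /tmp/ when no configuration is provided. MicroProfile Config reads its default sources (system properties, environment variables, and a META-INF/microprofile-config.properties file on the classpath), so the destination can be overridden without recompiling. A minimal sketch, where the /var/uploads/ path is just an example:

# resources/META-INF/microprofile-config.properties
file.destination=/var/uploads/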


Part 2: Batch Job, JTA, and JPA

As described in the Java EE tutorial, typical batch jobs are composed of different steps. Each step implements a three-phase process involving a reader, a processor, and a writer that works in chunks.

The batch job is defined in an XML file located at resources/META-INF/batch-jobs/csvJob.xml. The reader-processor-writer triad is implemented through named CDI beans.

<?xml version="1.0" encoding="UTF-8"?>
<job id="csvJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/jobXML_1_0.xsd"
    version="1.0">

    <step id="loadAndSave" >
        <chunk item-count="5">
            <reader ref="movieItemReader"/>
            <processor ref="movieItemProcessor"/>
            <writer ref="movieItemWriter"/>
        </chunk>
    </step>
</job>
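
The item-count="5" attribute means each chunk reads and processes five items before the writer is invoked and a checkpoint is committed. Once started, a job's progress can be queried through the same JobOperator API used in the servlet; a minimal sketch, assuming executionId holds the value returned by jobOperator.start:

JobOperator jobOperator = BatchRuntime.getJobOperator();
//Look up the execution that was started from the servlet
JobExecution execution = jobOperator.getJobExecution(executionId);
logger.log(Level.INFO, "csvJob status: " + execution.getBatchStatus());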


The MovieItemReader reads the CSV file line by line and wraps each result in a Movie object for the next step. Note that the open, readItem, and checkpointInfo methods are overridden to ensure that the task restarts properly if needed.

@Named
public class MovieItemReader extends AbstractItemReader {

    @Inject
    private JobContext jobContext;

    @Inject
    private Logger logger;

    private FileInputStream is;
    private BufferedReader br;
    private Long recordNumber;

    @Override
    public void open(Serializable prevCheckpointInfo) throws Exception {
        recordNumber = 1L;
        JobOperator jobOperator = BatchRuntime.getJobOperator();
        Properties jobParameters = jobOperator.getParameters(jobContext.getExecutionId());
        String resourceName = (String) jobParameters.get("csvFileName");
        is = new FileInputStream(resourceName);
        br = new BufferedReader(new InputStreamReader(is));

        if (prevCheckpointInfo != null)
            recordNumber = (Long) prevCheckpointInfo;
        for (int i = 0; i < recordNumber; i++) { // Skip until recordNumber
            br.readLine();
        }
        logger.log(Level.WARNING, "Reading started on record " + recordNumber);
    }

    @Override
    public Object readItem() throws Exception {

        String line = br.readLine();

        if (line != null) {
            String[] movieValues = line.split(",");
            Movie movie = new Movie();
            movie.setName(movieValues[0]);
            movie.setReleaseYear(movieValues[1]);

            // Now that we could successfully read, Increment the record number
            recordNumber++;
            return movie;
        }
        return null;
    }

    @Override
    public Serializable checkpointInfo() throws Exception {
        return recordNumber;
    }
}


Since this is a POC, the "processing" step just converts the movie title to uppercase and pauses the thread for half a second on each row:

@Named
public class MovieItemProcessor implements ItemProcessor {

    @Inject
    private JobContext jobContext;

    @Override
    public Object processItem(Object obj) throws Exception {
        Movie inputRecord = (Movie) obj;

        // "Complex processing"
        inputRecord.setName(inputRecord.getName().toUpperCase());
        Thread.sleep(500);

        return inputRecord;
    }
}


Finally, each chunk is written by MovieItemWriter using a DeltaSpike repository:

@Named
public class MovieItemWriter extends AbstractItemWriter {

    @Inject
    MovieRepository movieService;

    @Inject
    Logger logger;

    @Override
    public void writeItems(List<Object> list) throws Exception {
        for (Object obj : list) {
            logger.log(Level.INFO, "Writing " + obj);
            movieService.save((Movie) obj);
        }
    }
}


For reference, this is the Movie entity:

@Entity
@Table(name = "movie")
public class Movie implements Serializable {

    @Override
    public String toString() {
        return "Movie [name=" + name + ", releaseYear=" + releaseYear + "]";
    }

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "movie_id")
    private int id;

    @Column(name = "name")
    private String name;

    @Column(name = "release_year")
    private String releaseYear;

    //Getters and setters

}


The default data source is configured in resources/META-INF/persistence.xml. Note that I'm using a JTA data source:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
             version="2.1">

    <persistence-unit name="batchee-persistence-unit" transaction-type="JTA">
        <description>BatchEE Persistence Unit</description>
        <jta-data-source>jdbc/__default</jta-data-source>
        <exclude-unlisted-classes>false</exclude-unlisted-classes>
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
            <property name="javax.persistence.schema-generation.scripts.action" value="drop-and-create"/>
            <property name="javax.persistence.schema-generation.scripts.create-target" value="sampleCreate.ddl"/>
            <property name="javax.persistence.schema-generation.scripts.drop-target" value="sampleDrop.ddl"/>
        </properties>
    </persistence-unit>
</persistence>


To test JSON marshalling through JAX-RS, I also implemented a Movie endpoint with a GET method. The repository (AKA DAO) is defined using DeltaSpike:

@RequestScoped
@Path("/movies")
@Produces({ "application/xml", "application/json" })
@Consumes({ "application/xml", "application/json" })
public class MovieEndpoint {

    @Inject
    MovieRepository movieService;

    @GET
    public List<Movie> listAll(@QueryParam("start") final Integer startPosition,
            @QueryParam("max") final Integer maxResult) {
        final List<Movie> movies = movieService.findAll();
        return movies;
    }

}
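
For the endpoint to be exposed, JAX-RS also needs an application activator. A minimal sketch follows; the RestApplication class name and the /resources base path are assumptions, not necessarily what the GitHub project uses:

@ApplicationPath("/resources")
public class RestApplication extends Application {
    //Intentionally empty: all @Path-annotated resources are discovered automatically
}

Once deployed, the endpoint can be verified with a plain GET request, e.g., curl http://localhost:8080/batchee-demo/resources/movies (host and context root depend on the deployment).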


The repository:

@Repository(forEntity = Movie.class)
public abstract class MovieRepository extends AbstractEntityRepository<Movie, Integer> {

    @Inject
    public EntityManager em;
}
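
DeltaSpike Data resolves the EntityManager through CDI, so the project also needs a CDI producer that exposes the persistence unit defined earlier. A minimal sketch, with an illustrative class name:

public class EntityManagerProducer {

    //Expose the container-managed EntityManager so CDI (and DeltaSpike) can inject it
    @Produces
    @PersistenceContext(unitName = "batchee-persistence-unit")
    private EntityManager em;
}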

Test 1: Java EE 7 Server With Java EE 7 Pom

Since the objective is to test real backward (lower EE level than the server) and forward (MicroProfile and DeltaSpike extensions) compatibility, I first built and deployed this project with the following dependencies in pom.xml. The EE 7 POM vs. EE 7 server test is executed only to verify that the project works properly:

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-api</artifactId>
            <version>7.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>1.3</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.deltaspike.modules</groupId>
            <artifactId>deltaspike-data-module-api</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.deltaspike.modules</groupId>
            <artifactId>deltaspike-data-module-impl</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.deltaspike.core</groupId>
            <artifactId>deltaspike-core-api</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.deltaspike.core</groupId>
            <artifactId>deltaspike-core-impl</artifactId>
            <scope>runtime</scope>
        </dependency>
    </dependencies>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.apache.deltaspike.distribution</groupId>
                <artifactId>distributions-bom</artifactId>
                <version>${deltaspike.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <build>
        <finalName>batchee-demo</finalName>
    </build>
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <deltaspike.version>1.8.2</deltaspike.version>
        <failOnMissingWebXml>false</failOnMissingWebXml>
    </properties>


As expected, the application loads the data properly. Here are two screenshots taken during the batch job execution:

Payara 4 Demo 1

Payara 4 Demo 2

Test 2: Java EE 8 Server With Java EE 7 Pom

To test real binary compatibility, the application is deployed without changes on Payara 5 (Java EE 8). This Payara release also replaces Apache Derby with the H2 database.

As expected, and in accordance with the Java EE compatibility guidelines, the application works flawlessly.

Payara 5 Demo 1

Payara 5 Demo 2

To verify the assumptions, here is a query launched through SQuirreL SQL:
SQuirreL SQL Demo

Test 3: Java EE 8 Server With Java EE 8 Pom

Finally, to enable the new EE APIs, a little bit of tweaking is needed in pom.xml, specifically in the Java EE dependency:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
</dependency>


Again, the application works as expected:

Payara 5 Java EE 8

This is why standards matter!

