The Anatomy of a Microservice, Java at Warp Speed

Taking a look at Quarkus build artifacts with initial impressions of runtime performance and memory consumption. Does a native Java application outperform the JVM?

It’s been a while since I wrote the previous article in this series. Quarkus, a recurring topic in this series, has gone from version 1.2 to 1.6! As I started writing this article, I thought, why not, I’ll go ahead and upgrade. For the most part, it was very straightforward. The biggest change for the code used in this series is the updated gRPC support. Once I got the hang of the gRPC update, I have to say that it’s a nice addition to this relatively new framework.

Another gotcha for me was what looks to be a tightening of control over the reactive event loop. I’d used the reactive pattern in other projects before Quarkus. While wanting to be a good citizen and keep the heavy lifting off the event loop, it’s sometimes difficult to know when code that probably shouldn’t be executing in the context of the event loop is doing just that. As I upgraded this series’ sample code to Quarkus 1.6 and ran it, I was caught off guard by the runtime environment alerting me to entity manager operations running on the event loop. While it caught me by surprise, I appreciated the help in identifying possible trouble spots. There’s been a lot going on with Quarkus. Check it out at quarkus.io.
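To make that concrete, here is a minimal sketch of how blocking work can be moved off the event loop in a Quarkus gRPC service. The Movies service name is borrowed from the grpcurl example later in this article; the generated base class (MutinyMoviesGrpc), the Empty and MovieList message types, and the query itself are assumptions for illustration, since they would come from the project's proto definitions. The key piece is the @Blocking annotation from SmallRye, which tells Quarkus to dispatch the method to a worker thread instead of the Vert.x event loop.

import io.smallrye.common.annotation.Blocking;
import io.smallrye.mutiny.Uni;

import javax.inject.Inject;
import javax.inject.Singleton;
import javax.persistence.EntityManager;

// Hypothetical service implementation extending the base class that
// Quarkus generates from a movies.proto definition.
@Singleton
public class MoviesService extends MutinyMoviesGrpc.MoviesImplBase {

    @Inject
    EntityManager entityManager;

    // The entity manager performs blocking JDBC work. Without @Blocking,
    // Quarkus 1.6 warns that this is executing on the event loop; with it,
    // the call is dispatched to a worker thread.
    @Override
    @Blocking
    public Uni<MovieList> get(Empty request) {
        MovieList movies = queryMovies();
        return Uni.createFrom().item(movies);
    }

    private MovieList queryMovies() {
        // Hypothetical JPA query mapped into the protobuf response type.
        return MovieList.newBuilder().build();
    }
}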

The previous article in this series, The Anatomy of a Microservice, One Service, Multiple Servers, completed a high-level overview of a sample microservice architecture – from a code perspective. However, what good is an application if it’s not deployed? We’ll now turn our focus to building and deploying these microservices to Kubernetes. Before we get there, we’ll take a look at Quarkus runtime artifacts.

Java at Warp Speed is the subtitle of this article. I’ve been developing in Java since the early days of Java 1.0, and I’ve watched the Java Virtual Machine evolve into a runtime environment that, in many cases, competes with natively compiled code. The performance trade-off of write once, run anywhere has gradually, and greatly, diminished. However, in the world of Docker, Kubernetes, containers, and cloud providers, traditional Java applications face two challenges: startup time and memory consumption. Quarkus aims to address both head-on.

I was initially intrigued by the thought of a different process leveraging a relatively new technology, GraalVM, which compiles Java code into native images. I started thinking that this could be considered Java’s response to the Go ecosystem. Although packaging Java code in Docker images has taken much of the hassle out of ensuring the right JVM is installed for an application, it’s still an additional step, and JDK 11 adds 300+ MB to the image. With a native Java binary, the only artifact to package is the binary itself. In the sample code used for this series (available here on GitHub), the uber jar created by the Quarkus build is 715KB, while the native executable created by the GraalVM compilation process is 73MB. Quite a difference, though it looks far better once you account for the 300+ MB JVM that no longer needs to ship alongside the uber jar. Why the difference in size? Like a Go binary, the GraalVM-generated native executable contains components for handling concerns such as memory management and thread scheduling.
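As an illustration of how little is left to package, a Dockerfile for the native binary can be as small as the following. This is a sketch modeled on the Dockerfile.native that Quarkus generates under src/main/docker; the base image and paths are assumptions, not taken from the project.

FROM registry.access.redhat.com/ubi8/ubi-minimal
WORKDIR /work/
# Copy only the native binary - no JVM, no dependency jars.
COPY target/*-runner /work/application
RUN chmod 775 /work
EXPOSE 8080
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]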

The next question for me was, “Does the native executable perform better than the traditional JVM-based uber jar?” That turned out to be a lot more complicated than I anticipated. In fact, I had to do some searching to see if perhaps my test case was incorrect. It turns out that the test case, unscientific as it is, wasn’t far off from what others were seeing. The results simply weren’t what I was expecting.

Executable | Start Time | Memory Usage | QPS
---------- | ---------- | ------------ | ---
JVM        | 122ms      | 13.5MB       | 192
Native     | 16ms       | 3.7MB        | 106

Compared to the JVM uber jar, the numbers for the native executable look pretty good. That is, until you get to throughput, where the JVM version outruns the native version by just short of a factor of two. That was disappointing. As I mentioned, even though my tests aren’t very sophisticated (JMeter issuing random queries), the numbers are representative of the reading I did on Quarkus performance.

The best summary I found on this topic was on the Quarkus blog, in an article titled Quarkus Runtime Performance. Even though it’s over a year old, and Quarkus has evolved quite a bit since then, I encourage you to read it rather than have me recite or copy text directly from it. It explains what I was seeing and offers a perspective on performance that I really hadn’t considered: measuring performance for cloud native applications requires a slightly different lens.

After reading through this and a few other articles, all of which were consistent in their presentation, it appears that the best way to gauge the performance of a Quarkus application is by running it in Kubernetes. This, of course, makes sense as Quarkus labels itself, “A Kubernetes Native Java stack…” While both the JVM and native sample servers perform well from the command line, I’ll be interested to see these processes running in a Kubernetes cluster.

For now, we’ll take a look at building the Quarkus servers – creating and executing both JVM and native runtime artifacts. As a reminder, the code for this series is available on GitHub in the Microservice Sample Project repository.

As described in the previous article in this series, The Anatomy of a Microservice, One Service, Multiple Servers, the project contains two servers supporting the sample business service. Both can be built from the top-level of the project by invoking:

mvn clean install 

This invocation activates the prod Maven profile, which includes the “production” implementation of the business service and requires a PostgreSQL server.
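If you want to exercise the prod profile locally, a throwaway PostgreSQL instance can be stood up with Docker. This is a hypothetical example; the database name, user, and password the services actually expect are defined in the project's configuration.

docker run --name movies-db -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:12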

The project’s top-level README discusses various build profiles. For the remainder of this article, we’ll leverage the mock implementation of the service that doesn’t require a database. In an upcoming article in this series, we’ll create a Kubernetes cluster with PostgreSQL installed to exercise the production configuration of the services. The mock implementation of the business service is included in the final build by enabling the mock Maven profile and configuring Quarkus with a mock profile using this invocation:

mvn clean install -Dquarkus.profile=mock
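As an aside, the quarkus.profile setting also unlocks profile-scoped configuration. As a hypothetical sketch (the property values here are illustrative, not taken from the project), entries in application.properties prefixed with %mock apply only when the mock profile is active:

# Default (prod) configuration points at a real PostgreSQL server.
quarkus.datasource.db-kind=postgresql
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/movies
# Properties prefixed with %mock apply only when -Dquarkus.profile=mock.
%mock.quarkus.log.level=DEBUG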

This build generates uber jars in both the grpc/target and rest/target subdirectories. The servers can be started using a standard Java invocation:

java -jar server-[grpc/rest]-VERSION-runner.jar

Without additional configuration, the two servers can’t be executed at the same time because they share ports. This will be addressed when we deploy them to a Kubernetes cluster. Validating the services can be done using curl for the RESTful server and grpcurl (https://github.com/fullstorydev/grpcurl) for the gRPC server.
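In the meantime, if you need both servers up at once, the clashing HTTP port can be overridden at launch. quarkus.http.port is a standard Quarkus configuration property; the value below is arbitrary, and the sample invocations that follow assume the defaults.

java -Dquarkus.http.port=8081 -jar rest/target/server-rest-VERSION-runner.jar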

Here are sample invocations for each server type, invoking the same service:

grpcurl -plaintext localhost:9000 Movies/Get

curl http://localhost:8080/media/movies 

Feel free to read the Javadoc for additional service invocations.

Now for something new and different: building native executables. For Quarkus, this is done by invoking the same Maven command with a couple of additional options:

mvn clean install -Dquarkus.profile=mock -Dnative -Dquarkus.native.container-build=true

The -Dnative option is likely obvious; it enables the Quarkus native build profile. The option I found interesting is -Dquarkus.native.container-build=true, which configures Quarkus to build the native image inside a Docker container. With this option set, Quarkus starts a pre-defined Docker image containing the necessary tools (GraalVM, compiler, etc.) and builds a Linux binary. This is a pattern that’s becoming quite popular, and one I’ve leveraged in my own projects. Building a Docker image configured with all the tools required for some type of development process ensures a consistent build environment while also not requiring team members to ensure the right tools, at the right versions, are installed on their workstations. In an upcoming article, I will demonstrate this pattern for another build process.
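Assuming the build succeeds, the native executable lands next to the uber jar in each target directory, following the same naming convention minus the .jar extension. On a Linux host (recall that the container build produces a Linux binary), it can be started directly:

./rest/target/server-rest-VERSION-runner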

Notice how long this build takes to execute. On my 8-core i9 system with 32GB RAM and a 1TB SSD, each service takes about 2½ minutes to build, and, as the image below shows, it gobbles up the CPU.

[Screenshot: system monitor showing CPU utilization pegged during the native build]

There’s a lot more to Quarkus. Its philosophy is to create a stack that lends itself to the Kubernetes native ecosystem. In my next article, I’ll take a break from developing microservices to present an automated process for building a local Kubernetes cluster configured with tools such as a local Docker registry, a Helm Chart repository, Prometheus, Grafana, and the Kubernetes Dashboard.

Stay tuned for The Anatomy of a Microservice, Building a Local Deployment Environment.

Topics:
java, kubernetes, microservice architecture, quarkus tutorial
