
Building Cloud-Native Java Applications With AOT and Kubernetes


A combination of technologies like GraalVM, Micronaut, and Kubernetes unlocks a whole new world for developers looking to build and implement Java cloud solutions.


Cloud-native applications use services and infrastructure provided by AWS, Microsoft Azure, Google Cloud, and other providers. Essentially, this means you can code, test, maintain, and run your app entirely in the cloud.

This approach gives developers an opportunity to:

  • Build and release applications faster

  • Connect mammoth enterprise apps with nimble end-user applications via APIs

  • Segment bulky apps into self-sufficient services, which can be managed and updated at scale using Kubernetes

  • Increase system reliability

With modern platforms like Kubernetes and Istio, the need to have smaller runtimes that can scale up, down, and even all the way down to zero is becoming critical. However, applications built on a traditional Java stack (even if optimized for cloud-native environments) are memory-intensive and take longer to start than software created using other popular programming languages. 

In this article, I’ll provide a brief overview of the technologies and tools that allow software engineers to launch more Java applications in the cloud while significantly improving their performance.

Barriers to Implementing Java in a Cloud-Native Way

  • Inefficient memory usage. When you launch a Java application in the cloud, the Java Virtual Machine (JVM) converts bytecode into native code using just-in-time (JIT) compilers. This process consumes extra memory and CPU, so you cannot pack as many applications onto a server and use its capacity to the fullest.
  • Long startup and warm-up time. “Write once, run anywhere” is the essence of Java: portability comes from running bytecode on a virtual machine. The time a JIT compiler needs to warm up and optimize that bytecode is added to the application’s startup and overall execution time.
  • Communication overhead. A containerized Java app needs additional libraries to communicate efficiently in a cloud-native and microservices environment. These include service discovery, circuit breaker, and distributed tracing instrumentation libraries.

Optimizing JVM Memory Usage and Startup Time With Ahead-of-Time (AOT) Compilation

To address the issues with significant memory usage and startup time, Java 9 introduced Ahead-of-Time (AOT) compilation. The technique compiles bytecode to native code before the application runs, so system resources are not spent on JIT compilation at run time.
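As a quick illustration, the experimental jaotc tool that shipped with JDK 9 (and has since been removed from later JDK releases) can precompile a class into a shared library that the JVM loads at startup. This is only a sketch; the class name below is a placeholder:

Shell

# Precompile a class ahead of time into a shared library (experimental in JDK 9, removed in later JDKs)
jaotc --output libHelloWorld.so HelloWorld.class

# Tell the JVM to use the precompiled code instead of JIT-compiling it at run time
java -XX:AOTLibrary=./libHelloWorld.so HelloWorld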

 



Currently, the most potent AOT compiler is provided by the GraalVM project. However, it still comes with certain limitations, including the lack of support for dynamic class loading, finalizers, serialization, and Java Management Extensions (JMX).
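To work around some of these limitations, GraalVM ships a tracing agent that observes reflection, resource, and proxy usage at run time and writes the corresponding configuration files for the native-image builder. A sketch, assuming the sample JAR built later in this article and the conventional output directory:

Shell

# Run the application once on a regular JVM with the GraalVM tracing agent attached;
# it records reflection/resource/proxy usage and writes config files for native-image
java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
     -jar build/hello-native-0.1-all.jar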

There are two ways to convert a Java application into fully compiled native code, also known as a native image:

  1. Build your Java archive (JAR) file as you normally would (a build sketch follows this list)

  2. Create a native image from your JAR file
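For the first step, a minimal sketch, assuming the sample project uses Micronaut’s default Gradle setup with the Shadow plugin to produce a runnable fat JAR:

Shell

# Build the runnable fat JAR (the exact output path depends on your build configuration)
./gradlew assemble

The second step invokes the GraalVM native-image tool, as shown below: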

Shell

native-image -cp build/hello-native-0.1-all.jar
[hello-native:19]    classlist:   6,238.75 ms
[hello-native:19]        (cap):   1,046.36 ms
[hello-native:19]        setup:   2,673.82 ms
[hello-native:19]   (typeflow):  48,004.19 ms
[hello-native:19]    (objects):  24,675.85 ms
[hello-native:19]   (features):   2,264.51 ms
[hello-native:19]     analysis:  78,074.57 ms
[hello-native:19]     (clinit):   1,048.50 ms
[hello-native:19]     universe:   3,079.67 ms
[hello-native:19]      (parse):   9,075.28 ms
[hello-native:19]     (inline):  50,002.33 ms
[hello-native:19]    (compile): 117,425.27 ms
[hello-native:19]      compile: 187,438.96 ms
[hello-native:19]        image:  15,428.14 ms
[hello-native:19]        write:   2,457.20 ms
[hello-native:19]      [total]: 295,887.91 ms



Then you can run the native executable without any dependency on the Java Runtime Environment (JRE). In this example, the startup time drops to just 54 milliseconds.

Shell

./hello-native
11:32:09.499 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 54ms. Server Running: http://localhost:8080



If we execute the same application using the default JIT compilation functionality provided by the virtual machine, the startup time will jump to 1677 milliseconds.

Shell

java -jar ./hello-native-0.1-all.jar
17:20:50.910 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 1677ms. Server Running: http://localhost:8080



Java application performance with AOT vs. JIT compilation

Parameters            | Native Application | Standard Java Application
Startup Time (ms)     | 54                 | 1677
Memory Footprint (MB) | 61                 | 415
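The memory figures above can be reproduced roughly by checking the resident set size of each running process. A sketch, assuming Linux and the process names used in this article; the numbers will vary by environment:

Shell

# Resident set size (in KB) of the native executable vs. the JVM-based process
ps -o rss,comm -p "$(pgrep -x hello-native)"
ps -o rss,comm -p "$(pgrep -f hello-native-0.1-all.jar)"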

Choosing a Web Framework With AOT Compilation Capabilities

Traditional Java frameworks like Spring or Java EE are not well suited to building native images. As a result, several frameworks with robust AOT capabilities emerged as alternatives: Helidon by Oracle, Quarkus by Red Hat, and Micronaut by Object Computing.

Framework / Attributes | Micronaut            | Quarkus      | Helidon
Developed by           | Object Computing     | Red Hat      | Oracle
Launched in            | 2017                 | 2018         | 2018
Supported languages    | Java, Kotlin, Groovy | Java, Kotlin | Java


To show you how to create a simple Java app with Micronaut and run it as a native image, I’ve uploaded a sample project to GitHub.
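If you would rather generate a similar skeleton yourself, the Micronaut CLI can create one. This is only a sketch: the project name mirrors the sample, and the set of GraalVM-related features you can add at creation time varies by Micronaut version, so check the CLI help for your release:

Shell

# Generate a new Micronaut application skeleton (project name mirrors the sample; adjust the package afterwards if needed)
mn create-app hello-native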

Below you will find an application entry point:

Java

package micronaut.hello.native1;

import io.micronaut.runtime.Micronaut;

public class Application {

  public static void main(String[] args) {
    Micronaut.run(Application.class);
  }

}



And here’s a sample controller for the Micronaut application:

Java

package micronaut.hello.native1;

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.PathVariable;

import java.util.HashMap;
import java.util.Map;

// Same package as the Application class so Micronaut's classpath scanning picks it up
@Controller("/issues")
public class IssuesController {

  @Get("/{number}")
  public Map<String, String> issue(@PathVariable Integer number) {
    return new HashMap<String, String>() {{
      put("key", "Issue # " + number + "!");
    }};
  }

}



Now we can compile the bytecode to native, run it, and see the results.

Shell

> curl http://localhost:8080/issues/1
{"key":"Issue # 1!"}



Enabling Communication Between Java Application Services With Kubernetes and Istio

To further optimize the performance of a containerized Java application in the cloud, you can run it on Kubernetes (a minimal Deployment sketch follows the list below). This approach allows developers to:

  • Scale all the microservices that comprise a Java app in the same transparent way

  • Minimize the infrastructure configuration and management efforts

  • Establish efficient communication between microservices
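As a reference point, here is a minimal Deployment sketch for the native image built earlier. The container image reference, replica count, and resource limits are placeholders, and it assumes the native executable has been packaged into an image listening on port 8080:

YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-native
  labels:
    app: hello-native
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-native
  template:
    metadata:
      labels:
        app: hello-native
    spec:
      containers:
      - name: hello-native
        # Placeholder image reference: build and push your own native-image container
        image: example-registry/hello-native:0.1
        ports:
        - containerPort: 8080
        resources:
          limits:
            memory: 128Mi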

You can also use the Istio service mesh to facilitate communication between microservices. Istio runs a sidecar proxy alongside every microservice and routes traffic through it: your application calls the local sidecar proxy, which resolves the destination and forwards the request to the appropriate service in the mesh. This simplifies the application logic and moves load balancing, circuit breaking, and service discovery out of the application and into the service mesh.

You can install Istio using the Helm package manager:

Shell

helm install --name istio-init --namespace istio-system istio.io/istio-init

helm install --name istio --namespace istio-system istio.io/istio
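If you rely on Istio’s automatic sidecar injection, the namespace where your services run typically needs to be labeled for it. A sketch, assuming the default namespace:

Shell

# Enable automatic sidecar injection for the namespace running your services
kubectl label namespace default istio-injection=enabled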



To create a communication point, you need to define a Kubernetes Service. Istio will then intercept and enhance communication between your microservices without any additional changes to the application code.

YAML

apiVersion: v1
kind: Service
metadata:
  name: istio-service-b
  labels:
    app: istio-service-b
    service: istio-service-b
spec:
  ports:
  - port: 3000
    name: http
  selector:
    app: istio-service-b
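To illustrate how circuit breaking moves into the mesh, here is a hedged DestinationRule sketch for the same service. The thresholds are placeholders, and the field names follow the networking.istio.io/v1alpha3 API used by Istio releases of this period:

YAML

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istio-service-b
spec:
  host: istio-service-b
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100          # placeholder limits
      http:
        http1MaxPendingRequests: 10
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 5           # eject an instance after 5 consecutive errors
      interval: 10s
      baseEjectionTime: 30s
      maxEjectionPercent: 100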


Take-Home Message

A combination of technologies like GraalVM, Micronaut, and Kubernetes unlocks a whole new world for developers looking to build and implement Java cloud solutions. The technology stack allows us to deploy eight times more microservices for a containerized app while using the same cloud capacity. With Istio, we can further shift complex features, such as circuit breaking, call tracing, and service discovery, from the application to the cloud infrastructure and simultaneously enhance communication between the microservices.

You can find all the related code in my GitHub repository.
