
Spring2quarkus — Spring Boot to Quarkus Migration

From DZone's Microservices Zone.
Time to boot up your Spring with Quarkus.


Recently, the "fattest" of my Spring Boot based microservices became too big. The entire project was hosted on AWS EC2, and the instances used were t2.micro or t3.micro. The service started going down very often, even with minimal load on it. The obvious option was to choose a bigger instance for the service (t3.small), which I did initially.

You may also like: Lightweight Serverless Java Functions With Quarkus

Meanwhile, I decided to try another approach — migrate the microservice to a framework with a lower footprint and see the results. Of course, my choice was Quarkus. I have already started two microservice-based projects with it, and the results are awesome — the footprint of the microservices is nearly zero when they are compiled to native code. Not to mention the boot time — it is between 10 and 50 ms.

Unfortunately, for this particular migration, I failed to get the microservice to a native build, but I decided not to give up, to take the journey to the end, and then see what would happen. I managed to complete the migration in two weeks. These were very interesting and challenging weeks; I learned a lot about Quarkus during this time, and I am sure that the next microservice migration will go much more smoothly.

First of all, Quarkus has some basic functionality that will help you migrate an existing Spring Boot application:

  1. Quarkus Extension for Spring DI API.
  2. Quarkus Extension for Spring Web API.
  3. Quarkus Extension for Spring Data API.

I decided not to make a new git branch of my microservice and do the migration there. Instead, I created a new git project and started to copy and migrate the microservice components there one by one.

Creating the Quarkus Project

mvn io.quarkus:quarkus-maven-plugin:0.22.0:create \
    -DprojectGroupId=org.otaibe.at.flight \
    -DprojectArtifactId=otaibe-at-flight-quarkus \
    -DclassName="org.otaibe.at.flight.web.controller.UApiController" \
    -Dpath="/uapi" \
    -Dextensions="spring-di,spring-web,resteasy-jackson"


The up-to-date Quarkus version at the time was 0.22.0, so I will use that particular version for this article.

I decided to keep the package organization of my microservice the same as in my Spring Boot based project, to copy and paste code from one project to the other, and then make it work with Quarkus. The only thing that changed was the Maven artifact — from otaibe-at-flight to otaibe-at-flight-quarkus.

Finding the Missing Parts From the Puzzle

Some technologies either had no support in Quarkus or were handled differently there.

Netflix Eureka Service Discovery Client

Netflix Eureka is a discovery service that is the heart of my AWS-based microservice architecture for that project.

Eureka is a REST (Representational State Transfer) based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers.

On top of it, I have also built additional monitoring and autoscaling logic to ensure the elasticity and high availability of the entire system.

Having the Eureka Client on board was the critical part here. I started researching the possibilities and making some tests/experiments.

I tried to include the client as a Maven dependency and compile it; however, it is Guice based, and the compilation on Quarkus failed. It was clear that I would have to find other options.

The first thing that came to my mind was to ask Google about it. I found a new feature request about the Eureka Client: Eureka discovery service with Quarkus #2052.

The answer there was very discouraging —

Hi, there is no built in Eureka support in Quarkus. Quarkus mainly targets Kubernetes/Openshift cloud environments where no discovery service is needed because plain old DNS works out of the box when using the service name.

So, I decided to check the other side of the coin — what Spring Boot documentation would advise for cases like this one. Spring Boot is a very comprehensive and mature framework and you can find pretty much everything in the documentation. The result was: Polyglot support with Sidecar.

The idea here is to use the Sidecar Pattern and to start an additional application that will register your app to the Eureka server and handle the traffic between the microservices.

This was doable, but... I didn’t like it at all. I was terrified by the thought of putting a second JVM application on an AWS t2.micro instance. I knew the result — the two JVM applications would consume so many CPU credits that I would be forced to choose a bigger AWS instance, which would make the whole exercise pointless.

The third option was to review which functionalities I was actually using in the Eureka Client, strip it down to only the needed parts, and check whether it was possible to rewrite them somehow. Luckily, the only functionality used by this microservice was registering with the Eureka server. I found the Eureka REST API page and wrote for myself the Simple Netflix Eureka client using Quarkus.

This is the bare minimum Eureka client, built with a single purpose — to register the microservice with the Eureka server every 30 seconds. That’s it! No complicated logic, not even a heartbeat request implementation. It turned out that the service registered fine this way and that the other microservices were able to find it and consume it without a problem.
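The registration call itself is just a POST against the Eureka REST API. Here is a rough, self-contained sketch of what the client does; the payload fields follow the Eureka REST API's instance document, while the app name, host, and port used below are illustrative, not the project's actual values:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class EurekaRegistrationSketch {

    // Builds the minimal instance document that the Eureka REST API
    // expects in a POST to {eurekaUrl}/apps/{APP_NAME}.
    static String registrationPayload(String app, String hostName, int port) {
        return "{ \"instance\": {"
                + " \"hostName\": \"" + hostName + "\","
                + " \"app\": \"" + app + "\","
                + " \"vipAddress\": \"" + app.toLowerCase() + "\","
                + " \"status\": \"UP\","
                + " \"port\": { \"$\": " + port + ", \"@enabled\": \"true\" },"
                + " \"dataCenterInfo\": {"
                + "   \"@class\": \"com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo\","
                + "   \"name\": \"MyOwn\" }"
                + " } }";
    }

    public static void main(String[] args) {
        // Re-register every 30 seconds; the actual HTTP POST is omitted here.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> System.out.println(registrationPayload("OTAIBE-AT-FLIGHT", "10.0.0.5", 8080)),
                0, 30, TimeUnit.SECONDS);
        scheduler.shutdown();
    }
}
```

Because there is no heartbeat, the re-registration interval must stay below the Eureka server's eviction timeout, which is why 30 seconds works here.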

Now the Simple Netflix Eureka client using Quarkus can register with the Eureka server; however, my Eureka server performs additional checks, for instance health, metrics, and info. So I had to implement them for my custom Eureka server to work with the microservice rewritten in Quarkus.

This led to the next missing part of the puzzle:

Migrating Spring Boot Actuator Features

Again I asked myself which Spring Boot Actuator features I was using.

Health Check

That was the easy one — Quarkus has it: Quarkus — MicroProfile Health.

It works out of the box and is registered with Eureka Server.

Info Endpoint

Well, my custom Eureka server implementation needs this endpoint and the Git information provided by it. There is no such tutorial on the Quarkus Guides page, so again I had to write this endpoint myself. The Git information is provided by the maven git commit id plugin.

I’ve added the plugin to my pom.xml file with some configurations:


<plugin>
    <groupId>pl.project13.maven</groupId>
    <artifactId>git-commit-id-plugin</artifactId>
    <version>3.0.1</version>
    <executions>
        <execution>
            <id>get-the-git-infos</id>
            <goals>
                <goal>revision</goal>
            </goals>
            <!-- *NOTE*: The default phase of revision is initialize, but in case you want to change it, you can do so by adding the phase here -->
            <phase>initialize</phase>
        </execution>
        <execution>
            <id>validate-the-git-infos</id>
            <goals>
                <goal>validateRevision</goal>
            </goals>
            <!-- *NOTE*: The default phase of validateRevision is verify, but in case you want to change it, you can do so by adding the phase here -->
            <phase>package</phase>
        </execution>
    </executions>
    <configuration>
        <generateGitPropertiesFile>true</generateGitPropertiesFile>
        <generateGitPropertiesFilename>${project.build.outputDirectory}/META-INF/resources/git.properties</generateGitPropertiesFilename>
        <format>json</format>
        <includeOnlyProperties>
            <includeOnlyProperty>^git.commit.id$</includeOnlyProperty>
            <includeOnlyProperty>^git.commit.time$</includeOnlyProperty>
            <includeOnlyProperty>git.branch</includeOnlyProperty>
            <includeOnlyProperty>git.dirty</includeOnlyProperty>
            <includeOnlyProperty>git.build.time</includeOnlyProperty>
        </includeOnlyProperties>
    </configuration>
</plugin>


As a result, the plugin generates a ‘git.properties’ file in JSON format in the ‘META-INF/resources’ directory. So the only thing left was to create a REST controller endpoint exposing this information:

@GET
@Path(INFO)
@Produces(MediaType.APPLICATION_JSON)
public CompletionStage<Response> info() {
    CompletableFuture<Response> result = new CompletableFuture<>();

    return Optional.ofNullable(info)
            .map(info1 -> {
                result.complete(Response.ok().entity(info1).build());
                return result;
            })
            .orElseGet(() -> {
                info = new Info();
                getVertx().fileSystem().rxReadFile(GIT_PROPERTIES)
                        .map(Buffer::toString)
                        .doOnSuccess(s -> log.info("git props: {}", s))
                        .map(s -> getJsonUtils().readValue(s, Map.class, getObjectMapper()))
                        .map(map -> map.orElse(new HashMap()))
                        .map(map -> {
                            GitInfo gitInfo = new GitInfo();
                            gitInfo.setBranch((String) map.get(GitInfo.GIT_BRANCH));
                            GitInfo.CommitInfo commit = new GitInfo.CommitInfo();
                            gitInfo.setCommit(commit);
                            commit.setId((String) map.get(GitInfo.GIT_COMMIT_ID));
                            commit.setTime((String) map.get(GitInfo.GIT_COMMIT_TIME));
                            return gitInfo;
                        })
                        .map(gitInfo -> {
                            info.setGit(gitInfo);
                            return info;
                        })
                        .doOnSuccess(info1 -> result.complete(Response.ok().entity(info1).build()))
                        .subscribe()
                ;
                return result;
            });
}


You can check the entire Actuator controller here.

The third endpoint I rely on is:

Metrics Endpoint

Quarkus has such an extension: Quarkus — MicroProfile Metrics; however, for my custom Eureka server implementation it was not useful. The most important feature for me was the ability to monitor the microservice's memory usage, so I implemented the metrics endpoint with memory information on it:

@GET
@Path(METRICS)
@Produces(MediaType.APPLICATION_JSON)
public String metrics() {
    Metrics result = new Metrics();
    Runtime runtime = Runtime.getRuntime();
    result.setMemory(runtime.totalMemory());
    result.setMemoryFree(runtime.freeMemory());

    return jsonUtils.toStringLazy(result, getObjectMapper()).toString();
}


You can check the entire Actuator controller here.

Migrating Spring Boot Configurations

Quarkus — Configuring Your Application was the first guide I read when Quarkus came into my view. The configurations there are very flexible, and they serve me pretty well when I am working on a brand new project. Here, however, I had to deal with a bunch of .yml files whose properties are overridden depending on the Spring profiles.

Some of these .yml files are bundled into the .jar package, while others are overridden from the Spring Cloud Config Server. My settings were loaded as POJOs through the Spring Boot @ConfigurationProperties annotation, which is not present in the Quarkus Spring modules.

It became obvious to me that there was no direct way to perform a proper migration from Spring Boot .yml files to Quarkus .properties configurations. I cannot say that this rock was unexpected, but when I started dealing with it, it turned out that my previous thoughts had only scratched the surface.

After a while I came up with a solution — in the end, all these configurations are some kind of Maps, right? Then what if I just read these maps in the proper order and merge them?

I’ve made a test project where I could experiment with cases like this one: otaibe-at-flight-quarkus-test

First of all, I added an ObjectMapper capable of reading .yml files by adding this to my pom.xml file:

<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-yaml</artifactId>
    <version>2.3.0</version>
</dependency>


Then I added a utility capable of merging Maps. The actual map merge is just a few lines of code:

/**
    * @param map1 the base map
    * @param map2 the overriding map
    * @return a new merged map; when a key exists in both maps, map2 overrides the value from map1
    */
public Map<String, Object> mergeStringObjectMap(Map<String, Object> map1, Map<String, Object> map2) {
    return Stream.of(map1, map2)
            .flatMap(map -> map.entrySet().stream())
            .collect(Collectors.toMap(
                    Map.Entry::getKey,
                    Map.Entry::getValue,
                    (v1, v2) -> {
                        // merge nested maps recursively instead of replacing them
                        if (v1 instanceof Map && v2 instanceof Map) {
                            return mergeStringObjectMap((Map) v1, (Map) v2);
                        }
                        return v2;
                    })
            );
}
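To make the merge semantics concrete, here is a self-contained demo of the same idea, rewritten with Map.merge for brevity (this mirrors the utility above; it is an illustration, not the project's actual class):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapMergeDemo {

    // Same semantics as mergeStringObjectMap above: map2 wins on duplicate
    // keys, and nested maps are merged recursively instead of being replaced.
    @SuppressWarnings("unchecked")
    static Map<String, Object> merge(Map<String, Object> map1, Map<String, Object> map2) {
        Map<String, Object> result = new LinkedHashMap<>(map1);
        map2.forEach((key, v2) -> result.merge(key, v2, (old, incoming) -> {
            if (old instanceof Map && incoming instanceof Map) {
                return merge((Map<String, Object>) old, (Map<String, Object>) incoming);
            }
            return incoming; // the later map overrides the earlier one
        }));
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> base = new LinkedHashMap<>();
        base.put("key1", "base");
        base.put("nested", new LinkedHashMap<>(Map.of("a", 1, "b", 2)));

        Map<String, Object> override = new LinkedHashMap<>();
        override.put("nested", new LinkedHashMap<>(Map.of("b", 20)));

        // "key1" and nested "a" are kept; nested "b" is overridden
        System.out.println(merge(base, override));
    }
}
```

This is exactly the behavior Spring profiles give you for free: a later configuration file only has to state the keys it changes.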


In all microservice settings, the POJOs with the @ConfigurationProperties annotation were replaced with a String array describing the path to the configuration. The code went from:

@Data
@NoArgsConstructor
@ConfigurationProperties(prefix="spring.mail")
public class Settings3 {
    String key3;
}


to:

@Data
@NoArgsConstructor
public class Settings3 {

    public static final String ROOT_PATH[] = {"spring", "mail"};

    String key3;
}
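How a settings POJO then pulls its values out of the merged map is not shown above; one plausible sketch (the navigate helper below is hypothetical, introduced only for illustration) walks the ROOT_PATH keys down through the nested maps:

```java
import java.util.Map;

public class RootPathDemo {

    static final String[] ROOT_PATH = {"spring", "mail"};

    // Walks the merged configuration map along the given path and returns
    // the sub-map holding the POJO's values (or null when the path is absent).
    @SuppressWarnings("unchecked")
    static Map<String, Object> navigate(Map<String, Object> config, String... path) {
        Map<String, Object> current = config;
        for (String key : path) {
            Object next = current == null ? null : current.get(key);
            if (!(next instanceof Map)) {
                return null;
            }
            current = (Map<String, Object>) next;
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> config =
                Map.of("spring", Map.of("mail", Map.of("key3", "someValue")));
        // The sub-map under "spring" -> "mail" holds the Settings3 values.
        System.out.println(navigate(config, ROOT_PATH).get("key3"));
    }
}
```

A factory method like settings3(map) in the ConfigService below can then copy the values from that sub-map into the POJO.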


The next step was to create a ConfigService responsible for reading all .yml files and then merging them properly.

I was using the Vert.x fileSystem() method, and I had to be sure that the files were loaded in the proper order:

@PostConstruct
public void init() {

    Flux.interval(Duration.ofMillis(1))
            .retry()
            .filter(aLong -> getJsonConfig().getIsInitialized().get())
            .next()
            .doOnNext(aLong -> {
                yamlMapper = getJsonConfig().getYamlMapper();

                Flux.fromIterable(getConfigFiles())
                        .flatMap(s -> readYmlMap(s)
                                .doOnNext(map -> log.info("file {} was loaded", s))
                                .map(map -> Tuples.of(s, map))
                        )
                        .collectMap(objects -> objects.getT1(), objects -> objects.getT2())
                        .map(map -> getConfigFiles().stream()
                                .map(s -> map.get(s))
                                .collect(Collectors.toList())
                        )
                        //.doOnNext(maps -> log.info("yml config list: {}", getJsonUtils().toStringLazy(maps, getObjectMapper())))
                        .map(maps -> maps
                                .stream()
                                .reduce(allSettings, (map, map2) -> getMapWrapper().mergeStringObjectMap(map, map2))
                        )
                        .doOnNext(map -> {
                            //log.info("yml config: {}", getJsonUtils().toStringLazy(map, getObjectMapper()));
                            settings1 = settings1(map);
                            settings2 = settings2(map);
                            settings3 = settings3(map);
                            setAllSettings(map);
                            getIsInitialized().set(true);
                        })
                        .subscribe();

            })
            .subscribe()

    ;
}


Instead of using:

spring:
  profiles:
    active: prod,test


I added the following to my application.properties file:

service.config.files=application.yml
%prod.service.config.files=application.yml,application-prod.yml
%test.service.config.files=application.yml,application-prod.yml,application-test.yml


Now, depending on the ‘quarkus.profile’, you can load different configurations capable of overriding each other.

The above code worked perfectly in my test project and in my development environment. It was about time to face the next question — how to deal with configurations that come from the Spring Cloud Config Server?

I got lucky with that microservice: I was using the config server in an unorthodox way — the config files were downloaded by the Cloud-Init script and stored in the file system:

download_file ${CONFIG_SERVER_HOST}/at-flight-appprod.yml /opt/application-appprod.yml


Then for the prod environment, the file order was changed:

%prod.service.config.files=application.yml,application-prod.yml,/opt/application-appprod.yml


Now the configurations are loaded properly, and they can be consumed through the ConfigService.

Using Pom.xml Properties

I needed to inject a property from my pom.xml file into my configuration. Say there is a property declared like this:

<properties>
    <key1.key2>someValue</key1.key2>
</properties>


In Spring Boot you can access it out of the box in the following way:

key1:
  key3: @key1.key2@


and then you can use:

@Value("${key1.key3}")
String key3;


You cannot do this in Quarkus. Instead, you should use the properties-maven-plugin in the following way:

In your pom.xml file:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>properties-maven-plugin</artifactId>
    <version>1.0.0</version>
    <executions>
        <execution>
            <phase>generate-resources</phase>
            <goals>
                <goal>write-project-properties</goal>
            </goals>
            <configuration>
                <outputFile>
                    ${project.build.outputDirectory}/misc.properties
                </outputFile>
            </configuration>
        </execution>
    </executions>
</plugin>


It will generate a properties file, misc.properties, and you can read your property from there.
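Reading the generated file back is plain java.util.Properties work. A minimal sketch (the file name and property key mirror the examples above; in the application you would pass `getClass().getResourceAsStream("/misc.properties")` as the stream):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class PomPropertiesReader {

    // Reads one property from a java.util.Properties-format stream; the
    // stream would normally come from the misc.properties file written by
    // properties-maven-plugin into the build output directory.
    static String read(InputStream in, String key) throws IOException {
        Properties props = new Properties();
        props.load(in);
        return props.getProperty(key);
    }

    public static void main(String[] args) throws IOException {
        // Simulate the generated file content for the pom.xml property above.
        String generated = "key1.key2=someValue\n";
        System.out.println(read(new ByteArrayInputStream(generated.getBytes()), "key1.key2"));
    }
}
```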

Migrating Spring Cache Abstraction

So far in my microservices I was using Ehcache. The Quarkus way is to use the Infinispan Client, but for that I would have to install an Infinispan server, which I didn't want to do. So I replaced the cache in all the necessary places with my custom in-memory implementation. Something like this:

private Map<String, Tuple2<DateTime, CurrencyConversionRsp>> currencyConversionCache =
        new ConcurrentHashMap<>();

@PostConstruct
public void init() {
    // evict entries older than four hours, checking every five minutes
    Flux.interval(Duration.ofMinutes(5))
            .retry()
            .doOnNext(aLong -> {
                DateTime threshold = DateTime.now().minusHours(4);
                getCurrencyConversionCache().entrySet().stream()
                        .filter(entry -> entry.getValue().getT1()
                                .isBefore(threshold))
                        .map(entry -> entry.getKey())
                        .collect(Collectors.toList())
                        .forEach(s -> getCurrencyConversionCache().remove(s));
            })
            .subscribe(); // without subscribe() the interval never starts
}

//@Cacheable(cacheNames = EXCHANGE_RATE_CACHE, key = CURRENCY_CONVERSION_CACHE_KEY)
public CurrencyConversionRsp exchange(CurrencyConversionReq req) {
    String key = buildKey(req);
    if (getCurrencyConversionCache().containsKey(key)) {
        return getCurrencyConversionCache().get(key).getT2();
    }
    return null;
}

//@CachePut(cacheNames = EXCHANGE_RATE_CACHE, key = CURRENCY_CONVERSION_CACHE_KEY)
public CurrencyConversionRsp fill(CurrencyConversionReq req,
                                  CurrencyConversionRsp rsp) {
    getCurrencyConversionCache().put(buildKey(req),
                                     Tuples.of(DateTime.now(), rsp));
    return rsp;
}
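The same check-then-fill pattern can be distilled into a small generic TTL cache. This is a simplified, self-contained illustration of the idea, not the project's actual class (the class and method names are invented for the sketch):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A generic version of the check-then-fill pattern above: entries carry
// their insertion time and are dropped once they are older than the TTL.
public class TtlCache<K, V> {

    private static class Entry<V> {
        final Instant at;
        final V value;
        Entry(Instant at, V value) { this.at = at; this.value = value; }
    }

    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();
    private final Duration ttl;

    public TtlCache(Duration ttl) { this.ttl = ttl; }

    // Mirrors the exchange(...) lookup: null when absent or expired.
    public V get(K key) {
        Entry<V> entry = cache.get(key);
        if (entry == null || entry.at.isBefore(Instant.now().minus(ttl))) {
            cache.remove(key);
            return null;
        }
        return entry.value;
    }

    // Mirrors the fill(...) put: stores the value with the current timestamp.
    public V put(K key, V value) {
        cache.put(key, new Entry<>(Instant.now(), value));
        return value;
    }
}
```

The trade-off versus Ehcache is clear: no size limits, persistence, or statistics, but also no extra server and no extra dependency, which is all this microservice needed.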


This was the final missing part! Now, I should be able to perform the migration of my microservice.

Migrating The Common Technologies

My initial thought was that this would be the easiest part of the Spring2Quarkus migration. I was so naive...

Then I started migrating the components one by one.

Spring Data JPA Migration

It is very well described in Quarkus — Extension for Spring Data API.

Well, the annotations were the same, so I copy/pasted my DAO and started to test whether it would work.

My DAO was something like this:

public interface SwsSessionDao extends JpaRepository<SwsSession, String>,
        JpaSpecificationExecutor {
    Optional<SwsSession> findByConversationId(String conversationId);
    Optional<SwsSession> findTopByApplicationAndConversationIdIsNull(String application);
    List<SwsSession> removeByLastUsedBefore(ZonedDateTime lastUsed);
}


It turned out that there were four errors in this three-method interface :)

  1. JpaSpecificationExecutor is not supported.
  2. Optional does not work as the return type of findByConversationId.
  3. Optional does not work for findTopByApplicationAndConversationIdIsNull either.
  4. ZonedDateTime is not accepted as a method argument in removeByLastUsedBefore.

So I changed it:

public interface SwsSessionDao extends JpaRepository<SwsSession, String> {
    SwsSession findByConversationId(String conversationId);
    SwsSession findTopByApplicationAndConversationIdIsNull(String application);
    SwsSession removeByLastUsedBefore(LocalDateTime lastUsed);
}


The difference between the method signatures was handled in the service which encapsulates the DAO. This fixed it, and I was able to continue that way almost to the end of the migration process.
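As a sketch of that encapsulation (the entity and DAO below are minimal stand-ins, not the project's real classes), the service restores the Optional and ZonedDateTime signatures on top of the restricted repository:

```java
import java.time.LocalDateTime;
import java.time.ZonedDateTime;
import java.util.Optional;

public class SwsSessionService {

    // Minimal stand-ins for the real entity and DAO, just for illustration.
    public static class SwsSession {
        public final String conversationId;
        public SwsSession(String conversationId) { this.conversationId = conversationId; }
    }

    public interface SwsSessionDao {
        SwsSession findByConversationId(String conversationId); // may return null
        SwsSession removeByLastUsedBefore(LocalDateTime lastUsed);
    }

    private final SwsSessionDao dao;

    public SwsSessionService(SwsSessionDao dao) { this.dao = dao; }

    // The Optional wrapping moves from the repository into the service.
    public Optional<SwsSession> findByConversationId(String conversationId) {
        return Optional.ofNullable(dao.findByConversationId(conversationId));
    }

    // The ZonedDateTime argument is converted before it reaches the DAO.
    public Optional<SwsSession> removeByLastUsedBefore(ZonedDateTime lastUsed) {
        return Optional.ofNullable(dao.removeByLastUsedBefore(lastUsed.toLocalDateTime()));
    }
}
```

Callers of the service keep their original API, so only the DAO layer had to change.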

At some point — when the migration was nearly complete — I started to receive warnings from Quarkus that I had to register some Hibernate classes/interfaces for reflection (I have lost the exact message).

Then I realized that this was a good moment to switch to the Quarkus — Reactive Postgres Client. So far my microservice was ‘semi-reactive’. With the database communication moved to a reactive manner, the only blocking part left was the communication with AWS S3, but I will explain that in the next chapter.

After switching to the Quarkus — Reactive Postgres Client, everything was OK... almost.

Once the microservice was migrated, there were some final touches left, like Flyway. There is a nice guide about it: Quarkus — Using Flyway.

I added the maven dependencies and amended the application.properties file with the needed Flyway configuration. Then I started the microservice. Can you guess the result? Right — it failed! Can you guess why? I bet you can’t!

The answer: Flyway uses the PostgreSQL JDBC driver, but the microservice is now using the Vert.x reactive PostgreSQL driver.

The solution: use the vanilla Flyway Maven plugin in a dependent module or a Maven profile:

<plugin>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-maven-plugin</artifactId>
    <version>5.0.7</version>
    <configuration>
        <url>${pg.url}</url>
        <user>${pg.user}</user>
        <password>${pg.password}</password>
        <table>schema_version</table>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>42.2.5</version>
        </dependency>
    </dependencies>
</plugin>


Because of this issue, there was already a separate dependent module for that microservice: ClassNotFoundException when using quarkus:dev #2809.

Then I start the migration manually, before each update. Not the perfect solution, but the database changes in this microservice are not that frequent.

Migrating AWS S3 Related Logic

So far I had been using Spring Cloud AWS. It is based on the AWS 1.x SDK, in which all operations are blocking. You can use the asynchronous operations there, but they are simply executed in a dedicated thread pool. I was pleasantly surprised when I found this: AWS SDK for Java version 2.
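That point about the 1.x SDK can be illustrated with plain JDK classes: "async" that merely parks a blocking call on a thread pool looks like this (the download method below is a stand-in, not an actual AWS call):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FakeAsyncDemo {

    // Stand-in for a blocking SDK v1 style call (e.g. an S3 getObject).
    static String blockingDownload(String key) {
        try {
            Thread.sleep(50); // simulate network I/O that blocks the thread
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "content-of-" + key;
    }

    // What the v1 "async" client effectively does: the same blocking call,
    // parked on a dedicated thread pool. A worker thread is still occupied
    // for the whole duration of the I/O; it is not truly non-blocking.
    static CompletableFuture<String> fakeAsyncDownload(String key, ExecutorService pool) {
        return CompletableFuture.supplyAsync(() -> blockingDownload(key), pool);
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        fakeAsyncDownload("report.csv", pool).thenAccept(System.out::println).join();
        pool.shutdown();
    }
}
```

The v2 SDK's async clients, in contrast, are built on non-blocking I/O, so no thread is pinned while a request is in flight, which is what makes a fully reactive service practical on a small instance.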

I’ve implemented it reactively. You can find the implementation in my otaibe-at-flight-quarkus-test project → branch ‘aws’.

To test how it works, you have to provide your AWS credentials:

-Daws.accessKeyId=XXXXX -Daws.secretAccessKey=YYYYYY


Also, in AwsService.java you have to change the bucket and the region:

public static final String REGION = "eu-west-1";
public static final String BUCKET = "s3-stage-otaibea955a0bc5d0f";


Now my microservice is FULLY REACTIVE!

Migrating Spring Dependency Injection Module

Now came the most interesting and tricky part.

How to migrate from a Spring application is well described here: Quarkus — Quarkus Extension for Spring DI API.

The "spring-di" extension was added to my pom.xml file. The approach was the same as in the migration so far:

  • Migrate a part of the code.
  • Test the functionality.
  • If it does not work, make a prototype in my otaibe-at-flight-quarkus-test project, make it work, and then fix the migrated code.

As you probably know from the documentation (Quarkus — Quarkus Extension for Spring DI API):

Important Technical Note

Please note that the Spring support in Quarkus does not start a Spring Application Context nor are any Spring infrastructure classes run. Spring classes and annotations are only used for reading the metadata and/or are used as user code method return types or parameter types. What that means for end-users, is that adding arbitrary Spring libraries will not have any effect. Moreover Spring infrastructure classes (like org.springframework.beans.factory.config.BeanPostProcessor for example) will not be executed.

What Does This Mean?

  • The @PostConstruct annotation is not always going to work. After some tests, it turns out that it works for @ApplicationScoped beans, but if you produce the bean through @Configuration → @Bean, it is not called.
  • All your code that relies on the ApplicationContext to retrieve Spring beans is not going to work.

Again the migration was split into parts. The details are below.

Migrate Prototype(Dependent) Beans

It was clear to me that I would have problems with this kind of bean:

@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)


Or if I have to translate it to Quarkus:

@Dependent


In my microservice, these beans are accessed through the ApplicationContext. This means that I had to find all the places where code like this is used:

public Entity001 entity001(Entity003 context) {
    Entity001 result = getApplicationContext().getBean(Entity001.class, context);
    return result;
}


Where Entity001 class is declared in the following way:

@Data
@NoArgsConstructor
@AllArgsConstructor
public class Entity001 {
    String prop001 = UUID.randomUUID().toString();
    String prop002 = UUID.randomUUID().toString();

    @Autowired
    Entity002 entity002;

    Entity003 entity003;

    @PostConstruct //not working
    public void init() {
        setEntity003(new Entity003());
    }
}


With the following configuration:

@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public Entity001 entity001(Entity003 entity003) {
    Entity001 result = new Entity001();
    result.setEntity003(entity003);
    return result;
}


and replace it with something else. But with what?

Before continuing further, please pay attention to how the above code works in the Spring environment:

  • entity003 is NOT autowired.
  • It is added as a parameter to the producer method entity001.
  • Then Spring automatically autowires entity002.

Let us continue now.

First — what to replace the ApplicationContext with? Maybe I wasn’t paying enough attention while reading the Quarkus guides, but I didn’t find it in any of them. The answer is: javax.enterprise.inject.spi.BeanManager.

Then I created a utility class, BeanManagerUtils:

@Component
@Getter
@Setter
public class BeanManagerUtils {

    public <T> List<T> getReferences(BeanManager beanManager, Class<T> clazz) {
        Set<Bean<?>> beans = beanManager.getBeans(clazz);
        return beans.stream()
                .map(bean1 -> bean1.getTypes()
                        .stream()
                        .filter(type -> !type.equals(clazz) && clazz.isAssignableFrom((Class<?>) type))
                        .findFirst()
                        .map(type -> (Class<?>) type)
                        .orElse(null)
                )
                .filter(aClass -> aClass != null)
                .map(aClass -> createBean(beanManager, aClass))
                .map(o -> (T) o)
                .collect(Collectors.toList());
    }

    public <T> T createBean(BeanManager beanManager, Class<T> clazz) {
        Set<Bean<?>> beans = beanManager.getBeans(clazz);
        return createBean(beanManager, clazz, beans);
    }

    public <T> T createBean(BeanManager beanManager, String name, Class<T> clazz) {
        Set<Bean<?>> beans = beanManager.getBeans(name);
        return createBean(beanManager, clazz, beans);
    }

    public <T> T createBean(BeanManager beanManager, Class<T> clazz, Set<Bean<?>> beans) {
        Bean<?> bean = beans.stream()
                .filter(bean1 -> bean1.getTypes()
                        .stream()
                        .anyMatch(type -> type.equals(clazz))
                )
                .findFirst()
                .get();
        CreationalContext<?> creationalContext = beanManager.createCreationalContext(bean);

        return (T) beanManager.getReference(
                bean,
                clazz,
                creationalContext);
    }
}


Now the producer method should look like this:

public Entity001 entity001(Entity003 context) {
    Entity001 result = getBeanManagerUtils().createBean(getBeanManager(), Entity001.class);
    result.setEntity003(context);
    return result;
}


This wasn’t working as expected at all.

The field entity002 wasn’t injected, and the init method wasn’t called.

To make it work, the following changes are required:

  • In the configuration bean, you have to accept all dependent beans as parameters and set them manually.
  • Then, if you want, you can call the init method there.
  • It also turns out that such a bean should be named.

Now the producer and the configuration should look like this:

SpringConfig.java.

@Configuration
public class SpringConfig {

    public static final String ENTITY_001_SPRING = "entity001-spring";
    public static final String ENTITY_002_SPRING = "entity002-spring";
    public static final String ENTITY_004_SPRING = "entity004-spring";

    @Bean(name = ENTITY_001_SPRING)
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public Entity001 entity001(@Qualifier(SpringConfig.ENTITY_002_SPRING) Entity002 entity002) {
        Entity001 result = new Entity001();
        result.setEntity002(entity002);
        return result;
    }

    @Bean(name = ENTITY_002_SPRING)
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public Entity002 entity002() {
        return new Entity002();
    }
}


Producer method:

public Entity001 entity001(Entity003 context) {
    Entity001 result = getBeanManagerUtils().createBean(getBeanManager(), SpringConfig.ENTITY_001_SPRING, Entity001.class);
    result.setEntity003(context);
    result.init(); //call the init method manually
    return result;
}


That was all about the migration of Prototype (Dependent) beans. For more details, you can check the tests in my otaibe-at-flight-quarkus-test project: ServiceTests and BeanManagerUtilsTests.

Migrate Singleton Beans

There are no surprises if you follow the conventional way: if your bean classes are annotated with @Service or @Component, they work just as in the Spring Framework.

Inject the Interfaces

However, if you want to inject an interface whose implementation is created with @Configuration → @Bean, you have to follow the rules from the previous section (Migrate Prototype (Dependent) Beans).

Eagerly Load Beans

I was surprised when I called my BeanManagerUtils → createBean method and it returned a null result for a Singleton bean.

The reason this happens is that, by default, all beans are lazy-loaded. My createBean method is written only to look up beans, not to create them. If you want a result, the bean must be eagerly loaded beforehand.

If you want to eagerly load bean you have to do it on a startup event. Example code you can find in the Simple Netflix Eureka client using Quarkus. There are dedicated configuration which loads eagerly the required beans:

EagerBeansLoader

@ApplicationScoped
@Getter
@Setter
@Slf4j
public class EagerBeansLoader {
    @Inject
    JsonConfig jsonConfig;
    @Inject
    EurekaClient eurekaClient;
    @Inject
    ObjectMapper objectMapper;

    public void init(@Observes StartupEvent event) {
        log.info("init start");
        getEurekaClient().registerApp(); //not enough just to inject bean ...
        log.info("init end");
    }
}


In the code above, the init method is called on StartupEvent, which creates the EagerBeansLoader bean eagerly.

The other dependent beans are injected, but it seems that their @PostConstruct methods are not called (maybe this will be fixed in future versions of Quarkus). That's why, on row 15, there is an explicit call to register the application.

Injecting Collections

In my microservice, I often have code like this:

@Autowired
List<BaseProcessor> processors;


This doesn't work in Quarkus. I didn't spend much time on research here. Instead, I came up with a quick (and dirty) solution: all processors are eagerly loaded, and a static collection with references to each one of them is kept. During the eager load, every processor registers itself with that collection.

@Getter
@Setter
@ToString
@Slf4j
public abstract class BaseProcessor {

    public static final Collection<BaseProcessor> PROCESSORS = new ConcurrentLinkedQueue<>();

    public void init(@Observes StartupEvent event) {
        log.info("init started: {}", this.getClass().getSimpleName());
        PROCESSORS.add(this);
    }
}


Then, in the service class, the code is changed to:

Collection<BaseProcessor> processors = new ArrayList<>();

@PostConstruct
public void init() {
    processors = BaseProcessor.PROCESSORS;
}
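Stripped of the Lombok and Quarkus annotations, the self-registration pattern can be exercised in plain Java. The StartupEvent observer is replaced here by explicit init() calls, and the concrete processor names are hypothetical:

```java
import java.util.Collection;
import java.util.concurrent.ConcurrentLinkedQueue;

abstract class BaseProcessor {
    // Shared registry; ConcurrentLinkedQueue is safe for concurrent adds.
    static final Collection<BaseProcessor> PROCESSORS = new ConcurrentLinkedQueue<>();

    // In Quarkus this method would be triggered by @Observes StartupEvent.
    void init() {
        PROCESSORS.add(this);
    }
}

// Hypothetical concrete processors standing in for the real ones.
class AlphaProcessor extends BaseProcessor {}
class BetaProcessor extends BaseProcessor {}

public class ProcessorRegistryDemo {
    public static void main(String[] args) {
        // Simulate the startup events for each processor.
        new AlphaProcessor().init();
        new BetaProcessor().init();
        System.out.println(BaseProcessor.PROCESSORS.size()); // prints 2
    }
}
```

The service class then simply reads BaseProcessor.PROCESSORS instead of relying on collection injection.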


You can check the full implementation in my otaibe-at-flight-quarkus-test project.

Circular Dependencies

Circular dependencies are forbidden in Quarkus. I tried to unbind them, but it turned out that I would have to rewrite the entire price manipulation module in my microservice. I didn't have enough time to do that, so once again I came up with a solution similar to the one in the previous section:

@ApplicationScoped
@Getter
@Setter
public class StorageServiceImpl implements StorageService {

    public static StorageServiceImpl INSTANCE;

    public void init(@Observes StartupEvent event) {
        INSTANCE = this;
    }

}


Then I rewrote the getter method:

public StorageService getStorageService() {
  return StorageServiceImpl.INSTANCE;
}
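The same static-instance trick can be simulated in plain Java. PriceModule here is a hypothetical stand-in for the class that would otherwise close the dependency cycle:

```java
class StorageServiceImpl {
    // Populated on startup (in Quarkus: a method observing StartupEvent).
    static StorageServiceImpl INSTANCE;

    void init() {
        INSTANCE = this;
    }

    String load(String key) {
        return "value-for-" + key;
    }
}

class PriceModule {
    // Instead of injecting the storage service (which would create a
    // circular dependency), look it up through the static instance.
    StorageServiceImpl getStorageService() {
        return StorageServiceImpl.INSTANCE;
    }
}

public class CircularDemo {
    public static void main(String[] args) {
        new StorageServiceImpl().init(); // simulate the startup event
        PriceModule module = new PriceModule();
        System.out.println(module.getStorageService().load("fare")); // prints value-for-fare
    }
}
```

The trade-off is that the dependency becomes invisible to the container, so the startup event must run before any caller touches the getter.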


Injecting Spring IO Resources

Values injected with the @Value annotation work in Quarkus. However, some special cases like this one do not:

@Value("classpath:/uapi/RAIR.TXT")
private Resource carriersData;


I changed it to:

public static final String CARRIERS_DATA = "uapi/RAIR.TXT";

private Mono<ReferenceDataRetrieveRsp> retrieveReferenceDataFromPath(String path) {
    return RxJava2Adapter.singleToMono(
                    getVertx().fileSystem().rxReadFile(path))
            .map(Buffer::getBytes)
            .map(ByteArrayInputStream::new)
            .map(this::retrieveReferenceDataFromStream);
}
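If the reactive Vert.x file read is not required, the same idea (read the file bytes, wrap them in a ByteArrayInputStream, hand them to the parser) can be sketched with plain JDK calls; the temp file below stands in for the real data file:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ResourceRead {
    // Reads all bytes of a file and exposes them as an InputStream,
    // mirroring what the rxReadFile(...) / Buffer::getBytes chain does.
    static InputStream readAsStream(Path path) throws Exception {
        byte[] bytes = Files.readAllBytes(path);
        return new ByteArrayInputStream(bytes);
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("rair", ".txt");
        Files.writeString(tmp, "AA,American Airlines");
        try (InputStream in = readAsStream(tmp)) {
            System.out.println(new String(in.readAllBytes())); // prints AA,American Airlines
        }
        Files.delete(tmp);
    }
}
```

Note that Files.readAllBytes is a blocking call, so on an event-loop thread the reactive variant above is still the safer choice.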


Inject Beans From Dependent Modules

In the entire system, the common classes live in a separate Maven multi-module project. If you want classes from there to be injectable, you have to add a beans.xml file to the resources/META-INF folder of the module. The content of the file should be:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd"
       bean-discovery-mode="all"
       version="2.0">
</beans>


Miscellaneous Problems

I had one bean of type Guava's EnumHashBiMap. For some reason, Quarkus refused to inject it, which forced me to change the bean to a POJO.

Migrating Spring Boot Web (WebFlux) Module

It is explained in the Quarkus Extension for Spring Web API guide.

I try to keep all my REST controllers as thin as possible. This helped me a lot, because I still had to almost completely rewrite them. The reason:

I am using the Spring WebFlux module, and most of my REST controller methods return Mono<T>. To preserve the reactivity, I changed the controllers to return CompletionStage<Response>.

Here is how you can get from Mono<T> to CompletionStage<Response>. A utility class is created which completes the CompletableFuture<Response>:

@ApplicationScoped
@Getter
@Setter
@Slf4j
public class ControllerUtils {
    public <T> Mono<T> processResult(CompletableFuture<Response> webResult,
                                      Mono<T> result,
                                      T defaultIfEmpty,
                                      Function<T, Boolean> isValid) {
        return result
                .defaultIfEmpty(defaultIfEmpty)
                .doOnNext(t1 -> Optional.ofNullable(t1)
                        .filter(t -> isValid.apply(t))
                        .map(t -> webResult.complete(Response.ok(t).build()))
                        .orElseGet(() -> webResult.complete(
                                buildErrorResponse(Response.Status.BAD_REQUEST,
                                        Response.Status.BAD_REQUEST.getReasonPhrase()))
                        ))
                .doOnError(throwable -> webResult.complete(
                        buildErrorResponse(
                                Response.Status.INTERNAL_SERVER_ERROR,
                                throwable.getMessage())))
                .doOnTerminate(() -> log.debug("processResult end."));
    }

    private Response buildErrorResponse(Response.Status status, String message) {
        return Response.serverError()
                .status(status)
                .entity(message)
                .build();
    }

}
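Without pulling in Reactor, the bridging idea can be sketched with plain CompletableFutures: an asynchronous computation completes a pre-created future with either the payload or an error marker, just as processResult completes the CompletableFuture&lt;Response&gt; from the Mono callbacks. The Result record below is a hypothetical stand-in for the JAX-RS Response:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class Bridge {
    // Hypothetical stand-in for javax.ws.rs.core.Response.
    record Result(int status, String body) {}

    // Mirrors ControllerUtils.processResult: complete webResult with 200
    // on a valid payload, 400 on an invalid one, 500 on an exception.
    static <T> void processResult(CompletableFuture<Result> webResult,
                                  CompletableFuture<T> source,
                                  Function<T, Boolean> isValid) {
        source.whenComplete((value, error) -> {
            if (error != null) {
                webResult.complete(new Result(500, error.getMessage()));
            } else if (isValid.apply(value)) {
                webResult.complete(new Result(200, value.toString()));
            } else {
                webResult.complete(new Result(400, "Bad Request"));
            }
        });
    }

    public static void main(String[] args) {
        CompletableFuture<Result> webResult = new CompletableFuture<>();
        processResult(webResult,
                CompletableFuture.supplyAsync(() -> "fares"),
                s -> !s.isEmpty());
        Result r = webResult.join();
        System.out.println(r.status() + " " + r.body()); // prints 200 fares
    }
}
```

The controller returns the not-yet-completed future immediately, keeping the endpoint non-blocking; the completion happens later on whatever thread the source pipeline finishes on.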


Here is a real REST controller method that uses it:

@RequestMapping(
        method = RequestMethod.POST,
        value = LOW_FARE_SEARCH,
        produces = {MediaType.APPLICATION_JSON_UTF8_VALUE},
        consumes = {MediaType.APPLICATION_JSON_VALUE}
)
@ResponseStatus(HttpStatus.OK)
public CompletionStage<Response> lowFareSearch(@RequestBody @NotBlank String rqString) throws Exception {
    logger.debug("lowFareSearch start");
    CompletableFuture<Response> result = new CompletableFuture<>();
    LowFareSearchReq req = jsonUtils.readValue(rqString, LowFareSearchReq.class, jsonConfig.getObjectMapper()).get();

    controllerUtils.processResult(
            result,
            flightService.lowFareSearch(req)
                    .map(rsp -> jsonUtils.toStringLazy(rsp, jsonConfig.getObjectMapper()).toString()),
            StringUtils.EMPTY,
            rs -> true
    )
            .doOnTerminate(() -> logger.debug("lowFareSearch end"))
            .subscribe();

    return result;
}


I will elaborate more on this method and in that way will show some other problems that I’ve faced during the migration process.

On rows 8, 11, and 16, as you can see, the request/response is a String (at a later stage I will rewrite this to use the web server input/output streams directly), not a LowFareSearchReq object. Why do I use String for the request/response? The reason is that I am using a heavily customized ObjectMapper which does all the transformations. I read the Quarkus — Writing JSON REST Services guide very carefully but wasn't able to register this ObjectMapper with my REST controller.

On rows 11, 13, 15, and 16: why am I accessing the class fields directly instead of through getters? I am using Lombok annotations to reduce Java boilerplate code. It turns out that if I annotate some of the REST controller fields with a Lombok annotation, a NullPointerException occurs in Quarkus.

One other thing: the REST controller class always has to be annotated with the @RequestMapping annotation.

Migrating Spring Mail

How to send mail is described in Quarkus — Sending emails. The migration went smoothly; I used the reactive mail sender.

Only one thing should be kept in mind: in test and dev mode, mails are not sent. Instead, their content is written to the console. If you want to send emails anyway, you should add this to your Quarkus configuration:

#
# Enables the mock mode, not sending emails.
# The content of the emails is printed on the console.
#
# Disabled by default on PROD, enabled by default on DEV and TEST modes.
#
quarkus.mailer.mock=false


Other Unexpected Problems

ClassNotFoundException When Using Quarkus:dev #2809

It is described here.

In short, if you use reflection in quarkus:dev mode:

  • A class from a dependency is loaded.
  • A class from the project itself is not loaded.

In some rare cases, I deep-clone beans. Because of this issue, I was unable to debug the microservice and was forced to move all the needed beans into a separate Maven module.

Deployment on AWS EC2

So far, my microservice had been built as a fully executable jar file (fat jar) and installed as an init.d Service (System V). The microservice ran on AWS EC2, and the fat jar installation and configuration happened during the Cloud-Init phase.

With slight changes in my scripts, I managed to achieve pretty much the same result.

Quarkus also has a fat jar, but there it is called an Uber Jar. You have to change the quarkus-maven-plugin configuration in the following way:

<plugin>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-maven-plugin</artifactId>
    <version>${quarkus.version}</version>
    <configuration>
        <uberJar>true</uberJar>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>build</goal>
            </goals>
        </execution>
    </executions>
</plugin>


I now start this Uber Jar the same way as any Java jar file, preserving the process owner and permissions.

In other words, my Cloud-Init configuration remained almost unchanged; the only difference is that the fat jar is now started as a plain Java jar file instead of as an init.d Service (System V).

Some Application Performance Measurements

CPU Usage

Compared to the other microservices, the CPU usage is slightly lower (according to AWS CloudWatch).

Needless to say, the blue line is the microservice migrated to Quarkus. The green one carries almost zero load, yet it still consumes more CPU than the migrated one.

Memory Consumption

In production, this microservice runs on AWS EC2 t3.micro instances. The memory reserved for the JVM is in the range of 512–768 MB. Currently, the microservice migrated to Quarkus consumes approximately 25% of the lower end of that range: 128 MB in total.

Let me repeat what I said at the beginning: the reason to start the migration was that the service went down very often. There was not enough memory for it, which meant that the memory consumption of the Spring Boot based microservice was nearly 768 MB.

Now the memory consumption is 128 MB, which is a noticeable difference for me.

Microservice Boot-Time

Now it is 3.06 seconds:

2019-10-14 07:34:33,601 ip-172-31-40-55 otaibe-at-flight-quarkus[2345] INFO  [io.quarkus] (main) Quarkus 0.22.0 started in 3.060s. Listening on: http://[::]:9888


Before, it was nearly 30 seconds.
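For reference, the reduction factors implied by the figures above (768 MB down to 128 MB of memory, roughly 30 seconds down to about 3 seconds of boot time) can be computed directly:

```java
public class Savings {
    public static void main(String[] args) {
        double memBefore = 768, memAfter = 128;   // MB
        double bootBefore = 30.0, bootAfter = 3.06; // seconds
        System.out.println("memory factor: " + memBefore / memAfter); // prints memory factor: 6.0
        System.out.println("boot factor:   " + bootBefore / bootAfter);
    }
}
```

That is roughly a six-fold drop in memory and close to a ten-fold drop in boot time.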

With this kind of boot time, I am seriously considering the possibility of migrating the microservice to AWS Lambda.

Conclusion

I didn't manage to achieve the ultimate goal of migrating the microservice to a Quarkus native application. In my opinion, this is because I still use Spring Framework components intensively and have some SOAP-based XML transformations.

Nevertheless, there is now no need to move the service to a bigger instance: memory consumption dropped roughly six-fold (from about 768 MB to 128 MB), and the boot time was reduced by roughly a factor of ten (from about 30 seconds to about 3 seconds).

Last, but not least: during this journey I acquired a deeper understanding of the Quarkus framework, and I am now certain that I can successfully plug Quarkus-based applications into my existing infrastructure.


Further Reading

Developing Serverless Applications With Quarkus

Using Quarkus to Run Java Apps on Kubernetes

What I've Learned While Building a To-Do App Using Quarkus

Topics:
java ,spring boot ,quarkus ,aws s3 ,hibernate ,postgresql ,spring web flux ,spring ,reactor ,microservices

Opinions expressed by DZone contributors are their own.
