
JMH: Benchmark REST APIs


This tutorial explores JMH, the Java Microbenchmark Harness, and shows how to use it to benchmark REST APIs.


Java Microbenchmark Harness 

The journey started with my boss asking me to measure the performance of a function. 

I am sure we all have done this in the past:

long startTime = System.currentTimeMillis();
// ... code under test ...
long endTime = System.currentTimeMillis();
System.out.println("Time consumed - " + (endTime - startTime));
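Even the naive approach can be improved before reaching for a framework: `System.nanoTime()` is monotonic and meant for elapsed-time measurement, unlike `currentTimeMillis()`, and a few warmup runs give the JIT a chance to compile the hot path first. A minimal stdlib-only sketch (the class and method names here are illustrative, not from the article's code base):

```java
import java.util.concurrent.TimeUnit;

public class NaiveTimer {
    // Times a task in nanoseconds, running a few warmup iterations first
    // so the JIT has a chance to compile the hot path before we measure.
    public static long timeNanos(Runnable task, int warmupRuns) {
        for (int i = 0; i < warmupRuns; i++) {
            task.run();
        }
        long start = System.nanoTime(); // monotonic, unlike currentTimeMillis()
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeNanos(() -> {
            double x = 0;
            for (int i = 1; i <= 1_000_000; i++) x += Math.sqrt(i);
        }, 5);
        System.out.println("Time consumed - " + TimeUnit.NANOSECONDS.toMicros(elapsed) + " µs");
    }
}
```

This still suffers from the problems JMH was built to solve (dead-code elimination, single-JVM bias), which is exactly where the article goes next.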

I then thought, "I am better than this. Let me create a @TrackTime annotation and impress him with Spring AOP."

@Aspect
public class Profiler {
    private final Log log = LogFactory.getLog("ELASTIC");

    @Around("@annotation(com.pcl.core.profiler.TrackTime)")
    public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
        long startTime = System.currentTimeMillis();
        Object out = joinPoint.proceed();
        long timeTaken = System.currentTimeMillis() - startTime;
        logTimeTaken(joinPoint, timeTaken);
        return out;
    }

    protected void logTimeTaken(ProceedingJoinPoint joinPoint, long timeTaken) {
        HttpServletRequest request =
                ((ServletRequestAttributes) RequestContextHolder.currentRequestAttributes()).getRequest();
        LogRecord item = new LogRecord(
                request.getHeader("AppId"),
                "Time Taken by " + joinPoint + ": " + timeTaken,
                request.getRequestURI(),
                request);
        log.info(item);
    }
}

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface TrackTime {
}
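Under the hood, Spring's @Around advice is essentially a dynamic proxy wrapped around the target bean. The same timing interception can be sketched with nothing but the JDK's `java.lang.reflect.Proxy` (the `Service` interface and names here are illustrative, not part of the article's code base):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class TimingProxy {
    interface Service {
        String doWork();
    }

    // Wraps any Service in a JDK dynamic proxy that logs elapsed time
    // around each call, much like the @Around advice above.
    public static Service timed(Service target) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.currentTimeMillis();
            Object out = method.invoke(target, args);
            long taken = System.currentTimeMillis() - start;
            System.out.println("Time taken by " + method.getName() + ": " + taken + " ms");
            return out;
        };
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[] { Service.class },
                handler);
    }

    public static void main(String[] args) {
        Service s = timed(() -> "done");
        System.out.println(s.doWork());
    }
}
```

Spring adds pointcut matching and bean-lifecycle integration on top, but the interception mechanism is the same idea.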

I was not happy with my approach and couldn't sleep well that night. It made me start exploring, and I landed on JMH, the Java Microbenchmark Harness. It reports the average execution time and throughput (operations per unit of time) of our code, along with other details, in a near-production environment. That environment is simulated by running the code in multiple JVMs, multiple times, before recording the measurements. We can also have the report generated in multiple formats like text, CSV, etc., and write benchmarks of different sizes: micro, meso, and macro.

The JMH report indicates how the JVM optimized the code in a near-production environment. 

  1. Defaults — JMH executes the code in 10 @Fork runs (separate JVM instances), with 20 @Warmup iterations of 1 second each (letting the JVM optimize the code; not used for measurement), then records 20 @Measurement iterations of 1 second each. All of these can be changed.
  2. @BenchmarkMode — Mode.AverageTime (a lower score is better) or Mode.Throughput, operations per unit of time (a higher score is better).
  3. It can run concurrent benchmarks (a class with multiple @Benchmark methods), which is especially challenging when classes have state variables: threads running the benchmarks in a JVM need to consider that the state of these variables may be changed by another benchmark.
  4. It can generate results in many formats apart from plain text.
Options opts = new OptionsBuilder()
    .include(".*.ConcurrentBench.*")
    .warmupIterations(5)
    .measurementIterations(5)
    .measurementTime(TimeValue.milliseconds(5000))
    .forks(3)
    .result("results.csv")
    .resultFormat(ResultFormatType.CSV)
    .build();
new Runner(opts).run();

You should go through the best practices for JMH. I found these really helpful:

  1. Make sure that your benchmark returns something (or at least consumes the data using a Blackhole). As part of dead-code elimination, the JVM tends to discard a computation if its result is not used anywhere. Returning a value and sinking it into a Blackhole object are equivalent.
  2. Loop optimizations — avoid looping inside your benchmark method; let JMH run the benchmark multiple times instead.
  3. Avoid constant folding and dead code in your benchmarks.
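Point 1 can be illustrated without JMH at all. In the sketch below (a stdlib-only analogue, with illustrative names), the two methods do identical work, but only the second forces the JVM to keep it; a benchmark around the first may end up measuring an empty method:

```java
public class DeadCodeDemo {
    // Result is discarded: the JIT is free to eliminate the whole loop,
    // so a benchmark around this method can end up measuring nothing.
    public static void discarded() {
        double x = 0;
        for (int i = 1; i <= 1000; i++) x += Math.log(i);
    }

    // Result is returned (or, in JMH, handed to a Blackhole): the
    // computation must actually happen, so the measurement is meaningful.
    public static double consumed() {
        double x = 0;
        for (int i = 1; i <= 1000; i++) x += Math.log(i);
        return x;
    }

    public static void main(String[] args) {
        System.out.println("sum = " + consumed());
    }
}
```

In a JMH benchmark you would either declare the method to return `double`, or take a `Blackhole` parameter and call `blackhole.consume(x)`.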

The next step was checking the performance of REST APIs in my Spring Boot application.

The trick is to start the Spring Boot application while the benchmark is being initialized. I used a SpringContext class, which creates the application context from the Spring Boot application class. The context can also be created from an applicationContext.xml file in the JMH resources folder.

public class SpringContext {
    private static volatile ConfigurableApplicationContext context;

    public static void setContext() {
        context = SpringApplication.run(ProductApplication.class);
        // To initialize from applicationContext.xml instead:
        // context = new ClassPathXmlApplicationContext("applicationContext.xml");
    }

    // Autowire the benchmark class into the application context.
    // This lets you use all other autowired components from the benchmark class,
    // with no need to declare them individually as
    //   testService = context.getBean("testService", TestService.class);
    //   productService = context.getBean("productRepository", ProductRepository.class);
    public static void autowireBean(Object bean) {
        AutowireCapableBeanFactory factory = context.getAutowireCapableBeanFactory();
        factory.autowireBean(bean);
    }

    public static void close() throws IOException {
        context.close();
    }
}
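Stripped of Spring, SpringContext is just a static holder with a set-up/tear-down lifecycle that matches JMH's @Setup and @TearDown hooks. A stdlib-only sketch of that pattern (the string standing in for the application context is illustrative):

```java
import java.util.concurrent.atomic.AtomicReference;

// Stdlib-only analogue of the SpringContext holder: one shared resource,
// created once during @Setup and released during @TearDown.
public class StaticHolder {
    private static final AtomicReference<String> context = new AtomicReference<>();

    public static void setContext() {
        // Stands in for SpringApplication.run(ProductApplication.class);
        // compareAndSet keeps a second @Setup call from replacing a live context.
        context.compareAndSet(null, "application-context");
    }

    public static String get() {
        return context.get();
    }

    public static void close() {
        context.set(null); // stands in for context.close()
    }
}
```

The `volatile` field in the original serves the same purpose as the `AtomicReference` here: making the context visible to whichever benchmark thread reads it.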
@State(Scope.Thread)
public class ProductControllerBenchmark {
    @Autowired
    private ConfigurableApplicationContext context;

    @Autowired
    private ProductRepository productService;

    @Autowired
    private TestService testService;

    @Setup
    public void init() {
        SpringContext.setContext();
        SpringContext.autowireBean(this);
    }

    @Benchmark
    public Long checkCacheableAnnotationPerformance() {
        return testService.annotatedService();
    }

    @Benchmark
    public Long checkManualCachePerformance() {
        return testService.manualProducts("key");
    }

    @Benchmark
    public List<ProductDto> productH2call() {
        return productService.findProducts("A");
    }

    @TearDown
    public void close() throws IOException {
        SpringContext.close();
    }
}

I was excited by the results, but then I recalled the requirement from my boss. He wanted to see the response times of the REST API calls in the production environment, which change based on traffic, load, network, memory, and many other parameters. Benchmarks are good for finding bottlenecks in the code or evaluating a new idea or framework before pushing to production.

I resorted to using @TrackTime to find the response time in production.

Sample JMH configurations:

Using annotations: 

@Benchmark
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Fork(value = 1)
@Warmup(iterations = 1)
@Measurement(iterations = 1)
public void test(Blackhole blackhole) {
    // code under test; sink results with blackhole.consume(...)
}

Use the Gradle plugin, me.champeau.gradle:jmh-gradle-plugin. You will find me using this in the attached code base.

jmh {
  humanOutputFile = null
  fork = 1
  warmupIterations = 1
  benchmarkMode = ['thrpt','avgt']
  iterations = 1
  duplicateClassesStrategy = 'warn'
  timeUnit = 'ns'
} 

Below is a list of all options available to configure JMH, along with the equivalent Gradle plugin property names.

JMH Option                Gradle Plugin Extension Property
-bm <mode>                benchmarkMode
-bs <int>                 batchSize
-e <regexp+>              exclude
-f <int>                  fork
-foe <bool>               failOnError
-gc <bool>                forceGC
-i <int>                  iterations
-jvm <string>             jvm
-jvmArgs <string>         jvmArgs
-jvmArgsAppend <string>   jvmArgsAppend
-jvmArgsPrepend <string>  jvmArgsPrepend
-o <filename>             humanOutputFile
-opi <int>                operationsPerInvocation
-p <param={v,}*>          benchmarkParameters?
-prof <profiler>          profilers
-r <time>                 timeOnIteration
-rf <type>                resultFormat
-rff <filename>           resultsFile
-si <bool>                synchronizeIterations
-t <int>                  threads
-tg <int+>                threadGroups
-to <time>                timeout
-tu <TU>                  timeUnit
-v <mode>                 verbosity
-w <time>                 warmup
-wbs <int>                warmupBatchSize
-wf <int>                 warmupForks
-wi <int>                 warmupIterations
-wm <mode>                warmupMode
-wmb <regexp+>            warmupBenchmarks

I performed three different tests, one per benchmark method:

1. Check the performance of JPA using Hibernate against an embedded H2 database.
2. Check the performance of Spring's @Cacheable annotation.
3. Check whether a manual cache operation is more efficient than @Cacheable.

Test results below:

Benchmark                                                       Mode   Score      Error  Units
ProductControllerBenchmark.checkCacheableAnnotationPerformance  thrpt  ≈ 10⁻⁵            ops/ns
ProductControllerBenchmark.checkManualCachePerformance          thrpt  ≈ 10⁻⁵            ops/ns
ProductControllerBenchmark.productH2call                        thrpt  ≈ 10⁻⁵            ops/ns
ProductControllerBenchmark.checkCacheableAnnotationPerformance  avgt   59226.216         ns/op
ProductControllerBenchmark.checkManualCachePerformance          avgt   56493.119         ns/op
ProductControllerBenchmark.productH2call                        avgt   44943.767         ns/op
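The two modes are reciprocal views of the same measurement, which is a quick sanity check on the table: an average time of ~59,226 ns/op inverts to roughly 1.7 × 10⁻⁵ ops/ns, so all three throughput rows rounding to ≈ 10⁻⁵ ops/ns is exactly what the average-time scores predict. Verifying the arithmetic:

```java
public class ResultCheck {
    // Inverts an average-time score (ns/op) into throughput (ops/ns).
    public static double opsPerNs(double nsPerOp) {
        return 1.0 / nsPerOp;
    }

    public static void main(String[] args) {
        // Average-time scores taken from the results table above.
        System.out.printf("@Cacheable:   %.2e ops/ns%n", opsPerNs(59226.216));
        System.out.printf("manual cache: %.2e ops/ns%n", opsPerNs(56493.119));
        System.out.printf("H2 query:     %.2e ops/ns%n", opsPerNs(44943.767));
    }
}
```

The average-time rows are the more informative view here, since the throughput scores collapse to the same order of magnitude.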



Published at DZone with permission of
