
Testing HTTP Clients Using Spark, Revisited

See how you can spin up multiple Spark server instances as a part of your test suite — both quickly and easily! You just have to dig a little deeper.


In a previous post, I described the very small sparkjava-testing library I created to make it really simple to test HTTP client code using the Spark micro-framework. It is basically one simple JUnit 4 rule (SparkServerRule) that spins up a Spark HTTP server before tests run and shuts it down once tests have executed. It can be used either as a @ClassRule or as a @Rule.

Using @ClassRule is normally what you want: it starts an HTTP server once before any tests have run and shuts it down after all tests have finished.
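
And if you want a fresh server per test method rather than per class, the rule works as a plain @Rule too. Here's a minimal sketch, assuming the ServiceInitializer-based constructor described later in this post:

// Per-test usage: the server starts before each test method
// and stops after it, instead of once for the whole class.
@Rule
public final SparkServerRule sparkServer =
        new SparkServerRule(http -> http.get("/ping", (request, response) -> "pong"));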

In that post, I mentioned that I needed to do an "incredibly awful hack" to reset the Spark HTTP server to non-secure mode, so that after tests run securely using a test keystore, other tests can run either non-secure or secure, possibly with a different keystore. I also said the reason I did that was because "there is no way I found to easily reset security." The reason for all that nonsense was that I was using the static methods on the Spark class, such as port, secure, get, post, and so on. Using the static methods also implies only one server instance across all tests, which is also not so great.

Well, it turns out I didn't really dig deep enough into Spark's features, because there is a really simple way to spin up separate and independent Spark server instances. You simply call the static Service.ignite() method, which returns a new instance of Service. You then configure the Service however you want, e.g., change the port, add routes and filters, set the server to run securely, etc. Here's an example:

import spark.Service;

// Each call to Service.ignite() creates a new, independent server instance.
Service http = Service.ignite();
http.port(56789);
http.get("/hello", (req, resp) -> "Hello, Spark service!");

So, now you can create as many servers as you want. This is exactly what is needed for the SparkServerRule, which has been refactored to use Service.ignite() to get a separate server for each test. It now has only one constructor, which takes a ServiceInitializer that can be used to do whatever configuration you need: add routes, filters, etc. Since ServiceInitializer is a @FunctionalInterface, you can simply supply a lambda expression, which makes it cleaner. Here is a simple example:

@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(http -> {
    http.get("/ping", (request, response) -> "pong");
    http.get("/health", (request, response) -> "healthy");
});

This is a rule that, before any test is run, spins up a Spark server on the default port 4567 with two GET routes, and shuts the server down after all tests have completed. To do things like change the port and IP address in addition to adding routes, you just call the appropriate methods on the Service instance (in the example above, the http object passed to the lambda). Here's an example:

@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(https -> {
    https.ipAddress("127.0.0.1");
    https.port(56789);

    // sample-keystore.jks must be on the test classpath;
    // Resources here is Guava's com.google.common.io.Resources.
    URL resource = Resources.getResource("sample-keystore.jks");
    https.secure(resource.getFile(), "password", null, null);

    https.get("/ping", (request, response) -> "pong");
    https.get("/health", (request, response) -> "healthy");
});

In this example, tests will be able to access a server with two secure (HTTPS) endpoints at IP 127.0.0.1 on port 56789. So that's it. On the off chance someone other than me was actually using this rule, the migration path is really simple: just configure the Service instance passed to the SparkServerRule constructor, as shown above. Now each server is totally independent, which allows tests to run in parallel (assuming they're on different ports). Even better, I was able to remove the hack where I used reflection to go under the covers of Spark and manipulate fields. So, test away on that HTTP client code!
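
To make "test away" concrete, here's a sketch of what a test against the first rule above might look like, using only the JDK's HttpURLConnection and assuming Spark's default port 4567:

import static org.junit.Assert.assertEquals;

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import org.junit.Test;

// Assumes the SPARK_SERVER @ClassRule from the first example is in scope.
@Test
public void shouldRespondToPing() throws Exception {
    URL url = new URL("http://localhost:4567/ping");
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    try (InputStream body = connection.getInputStream()) {
        byte[] buffer = new byte[64];
        int length = body.read(buffer);
        assertEquals(200, connection.getResponseCode());
        assertEquals("pong", new String(buffer, 0, length, StandardCharsets.UTF_8));
    } finally {
        connection.disconnect();
    }
}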

This blog was originally published on the Fortitude Technologies blog here.


