Launching Vert.x Dynamically
This article shows you how to launch Vert.x, the toolkit for creating reactive apps on the JVM, in a dynamic way.
Vert.x started back in 2011, and it was one of the first projects to push the reactive microservices model for modern applications that need to handle a lot of concurrency. Since then, people have developed best practices ranging from writing good-quality code with Rxified Vert.x, RxJava's Observable, and JoinObservable, to deployment with Docker, Kubernetes, or Swarm. Vert.x does not force developers to obey particular rules and standards, which makes it a better fit for today's Agile environments and Lean enterprises. Developers like us, who are keen on freedom, can therefore try new ways of doing things. With that in mind, we did not want to launch our microservices in statically defined ways. So in this article I want to introduce how we launch Vert.x in a dynamic way, and in the coming days we plan to publish a series of articles about how we use new approaches to things like service discovery and deployment.
Vert.x can be launched in a number of ways. One is the Vert.x Shell, which lets you pull artifacts from a Maven repository directly into the environment; the application is then configured by passing a JSON-formatted file via the -conf command line argument. Another is to bundle everything into a single fat jar, which I, and probably most of us, consider an ugly practice. All of these methods are good enough, but none of them fit our organization, because we have a dynamic production environment where configuration is not static and all of our instances are deployed as Docker containers to the public cloud. For this reason we integrated the Gradle application plugin with Docker, so all we need to do is define a main class and a configuration path inside build.gradle. Beyond that, some settings, such as the cluster host IP and port, are dynamic and depend on the environment. Moreover, after upgrading to Vert.x 3.3.0 we can use Hazelcast 3.6.3 as the cluster manager, so Hazelcast plugins can now be used for service discovery, which adds yet another level of dynamism to the production environment.

To handle all of this, we need to generate the config file at runtime rather than define it statically. There is already an attempt to achieve this with confd, but it comes with a bit of a learning curve (you need to understand what .toml and .tmpl files are), and it is written in Go, so it does not fit well into our JVM environment. Instead, we developed an even simpler flow: a generic launcher reads the .json config file as a Freemarker template, if there is one, renders it with the default JVM arguments, then uses the resulting config to create a Vert.x instance and immediately deploy verticles on top of it. In the future we may want to use etcd or ZooKeeper to store the key-values used to render the .json template, just like confd does, but right now that is not a requirement for us.
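To give an idea of the rendering step, here is a minimal sketch of what such a readConf could look like. It assumes Freemarker's Configuration/Template API and simply uses the JVM system properties as the template data model; the class name ConfRenderer is illustrative, and the actual launcher code in our repo differs in the details.

import freemarker.template.Configuration;
import freemarker.template.Template;
import io.vertx.core.json.JsonObject;

import java.io.File;
import java.io.StringWriter;
import java.util.Optional;

public final class ConfRenderer {

    // Renders the given .json config as a Freemarker template, using the JVM
    // system properties (-Dnodeip=..., -DmongoOptions=..., etc.) as the data model.
    public static Optional<JsonObject> readConf(String path) {
        if (path == null) {
            return Optional.empty(); // no -Dconf given, nothing to deploy
        }
        try {
            File confFile = new File(path);
            Configuration cfg = new Configuration(Configuration.VERSION_2_3_23);
            cfg.setDirectoryForTemplateLoading(confFile.getParentFile());
            Template template = cfg.getTemplate(confFile.getName());

            StringWriter rendered = new StringWriter();
            template.process(System.getProperties(), rendered); // fills in ${nodeip}, ${mongoOptions}, ...
            return Optional.of(new JsonObject(rendered.toString()));
        } catch (Exception e) {
            throw new RuntimeException("Could not render config " + path, e);
        }
    }
}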
Here is how we tie these steps together using the functional programming paradigm. Thanks to Rxified Vert.x, we can define the whole flow in a very declarative way, like so:
readConf(System.getProperty("conf"))
    .ifPresent(c -> {
        createVertx(new VertxOptions(c.getJsonObject("vertxOptions")))
            .flatMap(vertx -> deployVerticles(vertx, c.getJsonObject("verticles")))
            .subscribe(logSuccess, logError);
    });
Of course, behind the scenes there is a bit more work to make this process look as easy as it does above. You can refer to our GitHub repo to see the rest of the code.
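For the curious, here is a minimal sketch of what createVertx and deployVerticles might look like behind the scenes. It assumes the ...Observable method variants that vertx-rx-java generates for the callback-style core API; the real code in the repo differs in the details.

import io.vertx.core.DeploymentOptions;
import io.vertx.core.VertxOptions;
import io.vertx.core.json.JsonObject;
import io.vertx.rxjava.core.Vertx;
import rx.Observable;

final class LauncherSteps {

    // Creates a clustered or plain Vert.x instance, wrapped in an Observable.
    static Observable<Vertx> createVertx(VertxOptions options) {
        return options.isClustered()
                ? Vertx.clusteredVertxObservable(options)
                : Observable.just(Vertx.vertx(options));
    }

    // Deploys every verticle listed under "verticles" with its own deployment options.
    static Observable<String> deployVerticles(Vertx vertx, JsonObject verticles) {
        return Observable.from(verticles.fieldNames())
                .flatMap(name -> {
                    JsonObject def = verticles.getJsonObject(name);
                    DeploymentOptions opts =
                            new DeploymentOptions(def.getJsonObject("deploymentOptions"))
                                    .setInstances(def.getInteger("instances", 1))
                                    .setHa(def.getBoolean("ha", false))
                                    .setWorker(def.getBoolean("worker", false))
                                    .setMultiThreaded(def.getBoolean("multiThreaded", false));
                    return vertx.deployVerticleObservable(name, opts);
                });
    }
}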
Because RxJava embraces the functional programming paradigm, it saves us from writing a great deal of boilerplate code. But, in my opinion, we are still writing too much of it. I would love for Java to have Scala's for-yield syntactic sugar; then the flow would look like this:
val conf = readConf("conf/sercan").get()
for {
  vertx   <- createVertx(conf \ "vertxOptions")
  success <- deployVerticles(vertx, conf \ "verticles")
} yield LOGGER.info("Verticle Deployed {}", success)
So let's keep going with an example use case. Imagine the scenario where we have a bunch of verticles that each create a shared MongoClient. In this case, all of the verticles would require the same Mongo config, which makes the config file grow needlessly.
With our launcher, however, the Mongo config lives in a single file that build.gradle passes in as a JVM argument, which I think is quite elegant:
mainClassName = 'com.foreks.vertx.launcher.VertxConfigLauncher'

startScripts {
    doLast {
        unixScript.text = unixScript.text.replace('\\$NODEIP', '$(hostname -i)')
        //windowsScript.text = windowsScript.text.replace('\\$NODEIP', '%PORT%') //untested
    }
}

applicationDefaultJvmArgs = [
    // We need to pass the Mongo config as a JVM argument like this
    "-DmongoOptions=${file('conf/mongoConf.json').text}",
    '-Dnodeip=$NODEIP',
    '-Dcluster-xml=conf/cluster.xml',
    ... etc other options
    ...
]
As you can see, build.gradle reads the conf/mongoConf.json file and passes its contents as a JVM argument, which our renderer then uses when loading the config. Below, we reference the ${mongoOptions} parameter, which is where the value of the -DmongoOptions JVM argument gets placed. Although the file has a .json extension, behind the scenes it is a Freemarker template file, so any logic that the template language supports can be used here.
{
    "verticles": {
        "com.foreks.feed.tip.filereader.FirstVerticle": {
            "deploymentOptions": {
                "config": {
                    "mongoClient": ${mongoOptions}
                }
            },
            "instances": 1,
            "ha": true,
            "worker": false,
            "multiThreaded": false
        },
        "com.foreks.feed.tip.filereader.SecondVerticle": {
            "deploymentOptions": {
                "config": {
                    "mongoClient": ${mongoOptions}
                }
            },
            "instances": 1,
            "ha": true,
            "worker": false,
            "multiThreaded": false
        }
    },
    "vertxOptions": {
        "clustered": true,
        "clusterHost": "${nodeip}",
        "quorumSize": 1,
        "haEnabled": true,
        "haGroup": "definition",
        "eventLoopPoolSize": 4,
        "workerPoolSize": 12
    }
}
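Inside a verticle, the rendered Mongo config then simply arrives through config(). As a rough sketch, assuming the verticles use Vert.x's shared MongoClient (the real FirstVerticle of course does more than this):

import io.vertx.core.json.JsonObject;
import io.vertx.rxjava.core.AbstractVerticle;
import io.vertx.rxjava.ext.mongo.MongoClient;

public class FirstVerticle extends AbstractVerticle {

    private MongoClient mongo;

    @Override
    public void start() {
        // The "mongoClient" block rendered from ${mongoOptions} ends up in the verticle config.
        JsonObject mongoConf = config().getJsonObject("mongoClient");
        // createShared lets all verticles reuse the same underlying Mongo connection pool.
        mongo = MongoClient.createShared(vertx, mongoConf);
    }
}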
Hope you found this very helpful.