
A Beginner's Guide to Deploying a Lagom Service Without ConductR


In this tutorial, we'll see how a Lagom service can be deployed without ConductR, the container orchestration tool dedicated to Lagom.


How can we deploy a Lagom service without ConductR? This question has been asked, and answered, on many forums. For example, take a look at this question on StackOverflow, where a user asks whether it is possible to run Lagom in production without ConductR. The best answer that came up was: "Yes, it is possible!" Similar answers can be found on other forums as well.

So, we decided to work out a solution and share it. In this blog post, we will guide you through deploying a Lagom microservice in production without ConductR, using a plain java -cp command. Let's take a look at the steps.

Step One - Configuring Cassandra Contact Points

If you plan to use dynamic service location for your service but need to locate Cassandra statically (the usual case in production), modify the application.conf of your service: disable Lagom's ConfigSessionProvider and fall back to the one provided by akka-persistence-cassandra, which uses the endpoints listed under contact-points. Your Cassandra configuration should look something like this:

cassandra.default {
  ## list the contact points here
  contact-points = [""]
  ## override Lagom's ServiceLocator-based ConfigSessionProvider
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
}

cassandra-journal {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

cassandra-snapshot-store {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

lagom.persistence.read-side.cassandra {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}
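For illustration, with two hypothetical Cassandra nodes reachable at 10.0.0.11 and 10.0.0.12 (the addresses are placeholders, not values from the example project), the shared block would be filled in like this:

```
cassandra.default {
  contact-points = ["10.0.0.11", "10.0.0.12"]
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
}
```

The journal, snapshot store, and read-side sections then pick these values up through the ${cassandra.default.*} substitutions shown above.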

Step Two - Providing Kafka Broker Settings

The next step is to provide Kafka broker settings if you plan to use Lagom's streaming services. Modify the application.conf of your service if the Kafka service needs to be statically located, which is the case when your service acts only as a consumer; otherwise, you do not need the following configuration.

lagom.broker.kafka {
  service-name = ""

  brokers = ""

  client {
    default {
      failure-exponential-backoff {
        min = 3s
        max = 30s
        random-factor = 0.2
      }
    }

    producer = ${lagom.broker.kafka.client.default}
    producer.role = ""

    consumer {
      failure-exponential-backoff = ${lagom.broker.kafka.client.default.failure-exponential-backoff}
      offset-buffer = 100
      batching-size = 20
      batching-interval = 5 seconds
    }
  }
}
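As an illustration (the broker addresses below are placeholders), a statically located Kafka setup keeps service-name empty so that the brokers list is used directly instead of a service-locator lookup:

```
lagom.broker.kafka {
  ## empty service-name disables locator-based lookup
  service-name = ""
  ## comma-separated host:port pairs
  brokers = "10.0.0.21:9092,10.0.0.22:9092"
}
```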

Step Three - Creating an Akka Cluster

At last, we need to form an Akka cluster on our own. Since we are not using ConductR, we have to implement the cluster joining ourselves. This can be done by adding the following lines to application.conf:

akka.cluster.seed-nodes = [
  ""
]
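For example, for a service whose actor system is named MyService, running on two hypothetical hosts (the addresses are placeholders), the seed node list would look like this; note that the system name in each address must match the actual actor system name:

```
akka.cluster.seed-nodes = [
  "akka.tcp://MyService@10.0.0.31:2552",
  "akka.tcp://MyService@10.0.0.32:2552"
]
```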

Now that we know which configurations our service needs, let's take a look at the deployment steps. Since we are using just the java -cp command, we need to package our service and then run it. To simplify the process, we have created a shell script for it.
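The script itself is not reproduced in this post, but a minimal sketch of such a launcher could look like the following. It assumes the service was packaged (e.g. with sbt-native-packager) so that all of its jars, including dependencies, end up under lib/, and that Play's standard production entry point is used; the paths and names are illustrative, so refer to the GitHub repo linked below for the real script.

```shell
#!/usr/bin/env bash
# Minimal sketch of a standalone launcher for a Lagom service.
# Assumes every jar of the packaged service sits under $app_home/lib.

build_launch_cmd() {
  local app_home="$1"
  # The wildcard puts every jar under lib/ on the classpath.
  local classpath="$app_home/lib/*"
  # Lagom services run on Play's production server entry point.
  local main_class="play.core.server.ProdServerStart"
  echo "java -cp $classpath -Dconfig.resource=prod-application.conf $main_class"
}

# Dry run: print the command that would be executed. In a real
# deployment script you would exec it instead of echoing it.
build_launch_cmd /opt/my-service
```

Replacing the final echo-style call with an exec of the generated command turns the dry run into an actual launch.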

For a complete example, you can refer to our GitHub repo - Lagom Scala SBT Standalone project.

I hope you found this blog helpful. If you have any suggestions or questions, please comment below.

This article was first published on the Knoldus blog.


Published at DZone with permission of
