
Running Axon Server in Docker


In my previous post, I showed you how to run Axon Server locally and configure it for secure operations. We also looked at the possibilities for configuring storage locations. This time around we’ll look at running it in Docker, both using the public image on Docker Hub and using a locally built image, and why you might want to do that.

Note: We now have a repository live on GitHub with scripts, configuration files, and deployment descriptors. You can find it at https://github.com/AxonIQ/running-axon-server.

Axon Server in a Container

Running Axon Server in a container is actually pretty simple using the provided image, with a few predictable gotchas. Let’s start with a simple test:

Plain Text

$ docker run axoniq/axonserver
Unable to find image 'axoniq/axonserver:latest' locally
latest: Pulling from axoniq/axonserver
9ff2acc3204b: Pull complete
69e2f037cdb3: Pull complete
3e010093287c: Pull complete
3aaf8fbd9150: Pull complete
1a945471328b: Pull complete
1a3fb0c2d12b: Pull complete
cb60bf4e2607: Pull complete
1ce42d85789e: Pull complete
b400281f4b04: Pull complete
Digest: sha256:514c56bb1a30d69c0c3e18f630a7d45f2dca1792ee7801b7f0b7c22acca56e17
Status: Downloaded newer image for axoniq/axonserver:latest
     _                     ____
    / \   __  _____  _ __ / ___|  ___ _ ____   _____ _ __
   / _ \  \ \/ / _ \| '_ \\___ \ / _ \ '__\ \ / / _ \ '__|
  / ___ \  >  < (_) | | | |___) |  __/ |   \ V /  __/ |
 /_/   \_\/_/\_\___/|_| |_|____/ \___|_|    \_/ \___|_|
 Standard Edition                        Powered by AxonIQ

version: 4.3
2020-02-27 13:45:40.156  INFO 1 --- [           main] io.axoniq.axonserver.AxonServer          : Starting AxonServer on c23aa95bb8ec with PID 1 (/app/classes started by root in /)
2020-02-27 13:45:40.162  INFO 1 --- [           main] io.axoniq.axonserver.AxonServer          : No active profile set, falling back to default profiles: default
2020-02-27 13:45:44.523  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8024 (http)
2020-02-27 13:45:44.924  INFO 1 --- [           main] A.i.a.a.c.MessagingPlatformConfiguration : Configuration initialized with SSL DISABLED and access control DISABLED.
2020-02-27 13:45:49.453  INFO 1 --- [           main] io.axoniq.axonserver.AxonServer          : Axon Server version 4.3
2020-02-27 13:45:53.414  INFO 1 --- [           main] io.axoniq.axonserver.grpc.Gateway        : Axon Server Gateway started on port: 8124 - no SSL
2020-02-27 13:45:54.070  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8024 (http) with context path ''
2020-02-27 13:45:54.075  INFO 1 --- [           main] io.axoniq.axonserver.AxonServer          : Started AxonServer in 15.027 seconds (JVM running for 15.942)

When we see that last line, we open a second window and query the REST API:

Plain Text

$ curl -s http://localhost:8024/v1/public/me
curl: (7) Failed to connect to localhost port 8024: Connection refused
$



OK, anyone who has ever run Docker containers before saw that coming: the container may announce that ports 8024 and 8124 are to be exposed, but that is just a statement of intent. So we ^C ourselves out of here and add “-p 8024:8024 -p 8124:8124”:
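Plain Text

$ docker run -p 8024:8024 -p 8124:8124 axoniq/axonserver

On the Axon Server side nothing looks different, but now we can get access: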

JSON

$ curl -s http://localhost:8024/v1/public/me | jq
{
  "authentication": false,
  "clustered": false,
  "ssl": false,
  "adminNode": true,
  "developmentMode": false,
  "storageContextNames": [
    "default"
  ],
  "contextNames": [
    "default"
  ],
  "name": "87c201162360",
  "hostName": "87c201162360",
  "grpcPort": 8124,
  "httpPort": 8024,
  "internalHostName": null,
  "grpcInternalPort": 0
}
$



As discussed last time, having the node name “87c201162360” is no problem, but the hostname will be: a client application will by default follow Axon Server’s request to switch to that hostname without question, and then fail to resolve it. We can reconfigure Axon Server without much trouble, but let me first tell you a bit about the container’s structure.

The image was made using Axon Server SE, which is Open Source and can be found at https://github.com/AxonIQ/axon-server-se. The container is built on a compact image from Google’s “distroless” base images at the gcr.io repository, in this case “gcr.io/distroless/java:11”. The application itself is installed in the root with a minimal properties file:

Properties files

axoniq.axonserver.event.storage=/eventdata
axoniq.axonserver.snapshot.storage=/eventdata
axoniq.axonserver.controldb-path=/data
axoniq.axonserver.pid-file-location=/data
logging.file=/data/axonserver.log
logging.file.max-history=10
logging.file.max-size=10MB



The “/data” and “/eventdata” directories are created as volumes, and their data will be accessible on your local filesystem somewhere in Docker’s temporary storage tree. Alternatively, you can tell Docker to use a specific directory, which allows you to put the data at a more convenient location. A third directory, not marked as a volume in the image, is important for our case: if you put an “axonserver.properties” file in “/config”, it can override the settings above and add new ones:

Plain Text

$ mkdir -p axonserver/data axonserver/events axonserver/config
$ (
> echo axoniq.axonserver.name=axonserver
> echo axoniq.axonserver.hostname=localhost
> ) > axonserver/config/axonserver.properties
$ docker run -d --rm --name axonserver -p 8024:8024 -p 8124:8124 \
> -v `pwd`/axonserver/data:/data \
> -v `pwd`/axonserver/events:/eventdata \
> -v `pwd`/axonserver/config:/config \
> axoniq/axonserver
4397334283d6185506ad27a024fbae91c5d2918e1314d19fcaf2dc20b4e400cb
$



Now if you query the API (either using the “docker logs” command to verify startup has finished, or simply repeating the “curl” command until it responds), it will show that Axon Server is running with name “axonserver” and hostname “localhost”. Also, if you look at the “data” directory, you will see the ControlDB file, PID file, and a copy of the log output, while the “events” directory will have the event and snapshot data.
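For example, once the “Started AxonServer” line appears in “docker logs axonserver”, a quick check of name and hostname with “jq”:

Plain Text

$ curl -s http://localhost:8024/v1/public/me | jq '.name, .hostName'
"axonserver"
"localhost"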

From Docker to Docker-Compose

Running Axon Server in a Docker container has several advantages, the most important of which is the compact distribution format: with one single command we have installed and started Axon Server, and it will always work in the same, predictable fashion. You will most likely use this for local development and demonstration scenarios, as well as for tests of Axon Framework client applications.

That said, Axon Server is mainly targeted at a distributed usage scenario, where you have several application components exchanging Events, Commands, and Queries. For this you will more likely employ docker-compose, or larger-scale infrastructural products such as Kubernetes, Cloud Foundry, and Red Hat OpenShift.

To start with docker-compose, the following allows you to start Axon Server with “./data”, “./events”, and “./config” mounted as volumes, where the config directory is read-only. Note: this has been tested on macOS and Linux. On Windows 10, named volume mapping using the “local” driver will not work, so you need to remove the “driver” and “driver_opts” sections in the file below.

The new Windows Subsystem for Linux (WSL version 2) in combination with the Docker Desktop based on it will hopefully bring relief, but for the moment you will not be able to use volumes in docker-compose and then access the files from the host on Windows.

YAML

version: '3.3'
services:
  axonserver:
    image: axoniq/axonserver
    hostname: axonserver
    volumes:
      - axonserver-data:/data
      - axonserver-events:/eventdata
      - axonserver-config:/config:ro
    ports:
      - '8024:8024'
      - '8124:8124'
      - '8224:8224'
    networks:
      - axon-demo

volumes:
  axonserver-data:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/data
      o: bind
  axonserver-events:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/events
      o: bind
  axonserver-config:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/config
      o: bind

networks:
  axon-demo:



This also sets the container’s hostname to “axonserver”, so all you need to add is an “axonserver.properties” file:

Plain Text

$ echo "axoniq.axonserver.hostname=localhost" > config/axonserver.properties
$ docker-compose up
Creating network "docker-compose_axon-demo" with the default driver
Creating volume "docker-compose_axonserver-data" with local driver
Creating volume "docker-compose_axonserver-events" with local driver
Creating volume "docker-compose_axonserver-config" with local driver
Creating docker-compose_axonserver_1 ... done
Attaching to docker-compose_axonserver_1
axonserver_1  |      _                     ____
axonserver_1  |     / \   __  _____  _ __ / ___|  ___ _ ____   _____ _ __
axonserver_1  |    / _ \  \ \/ / _ \| '_ \\___ \ / _ \ '__\ \ / / _ \ '__|
axonserver_1  |   / ___ \  >  < (_) | | | |___) |  __/ |   \ V /  __/ |
axonserver_1  |  /_/   \_\/_/\_\___/|_| |_|____/ \___|_|    \_/ \___|_|
axonserver_1  |  Standard Edition                        Powered by AxonIQ
axonserver_1  |
axonserver_1  | version: 4.3
axonserver_1  | 2020-03-10 13:17:26.134  INFO 1 --- [           main] io.axoniq.axonserver.AxonServer          : Starting AxonServer on axonserver with PID 1 (/app/classes started by root in /)
axonserver_1  | 2020-03-10 13:17:26.143  INFO 1 --- [           main] io.axoniq.axonserver.AxonServer          : No active profile set, falling back to default profiles: default
axonserver_1  | 2020-03-10 13:17:32.383  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8024 (http)
axonserver_1  | 2020-03-10 13:17:32.874  INFO 1 --- [           main] A.i.a.a.c.MessagingPlatformConfiguration : Configuration initialized with SSL DISABLED and access control DISABLED.
axonserver_1  | 2020-03-10 13:17:38.741  INFO 1 --- [           main] io.axoniq.axonserver.AxonServer          : Axon Server version 4.3
axonserver_1  | 2020-03-10 13:17:43.586  INFO 1 --- [           main] io.axoniq.axonserver.grpc.Gateway        : Axon Server Gateway started on port: 8124 - no SSL
axonserver_1  | 2020-03-10 13:17:44.341  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8024 (http) with context path ''
axonserver_1  | 2020-03-10 13:17:44.349  INFO 1 --- [           main] io.axoniq.axonserver.AxonServer          : Started AxonServer in 19.86 seconds (JVM running for 21.545)



Now you have it running locally, with a fresh and predictable environment, and easy access to the properties file. Also, as long as you leave the “data” and “events” directories untouched, you will get the same event store over subsequent runs, while cleaning them is simply a matter of removing and recreating those directories.
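For example, a full reset could look like this (a sketch, run from the directory containing the compose file and the “data” and “events” directories):

Plain Text

$ docker-compose down -v
$ rm -rf data events
$ mkdir data events
$ docker-compose up -d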

Differences With Axon Server EE

To extend this docker-compose setup to Axon Server EE, we first need to build an image, as there is no public image for it. Using the same approach as with SE, however, gives us a relatively simple image that will work for OpenShift and Kubernetes, as well as Docker and docker-compose. Also, we can be a bit more security conscious and run Axon Server as a non-root user. This last bit forces the use of a two-stage Dockerfile, since the Google “distroless” images do not contain a shell, and we want to run a few commands:

Dockerfile

FROM busybox as source
RUN addgroup -S axonserver \
    && adduser -S -h /axonserver -D axonserver \
    && mkdir -p /axonserver/config /axonserver/data \
                /axonserver/events /axonserver/log \
    && chown -R axonserver:axonserver /axonserver

FROM gcr.io/distroless/java:11

COPY --from=source /etc/passwd /etc/group /etc/
COPY --from=source --chown=axonserver /axonserver /axonserver

COPY --chown=axonserver axonserver.jar axonserver.properties /axonserver/

USER axonserver
WORKDIR /axonserver

VOLUME [ "/axonserver/config", "/axonserver/data", "/axonserver/events", "/axonserver/log" ]
EXPOSE 8024/tcp 8124/tcp 8224/tcp

ENTRYPOINT [ "java", "-jar", "axonserver.jar" ]



The first stage creates a user and group named “axonserver”, as well as the directories that will become our volumes, and finally sets the ownership. The second stage begins by copying the account (in the form of the “passwd” and “group” files) and the home directory with its volume mount points, carefully keeping ownership set to the new user. The last steps are the “regular” steps, copying the executable jar and a common set of properties, marking the volume mounting points and exposed ports, and specifying the command to start Axon Server.

For the common properties we’ll use just enough to make it use our volume mounts, and add a log file for good measure:

Properties files

axoniq.axonserver.event.storage=/axonserver/events
axoniq.axonserver.snapshot.storage=/axonserver/events
axoniq.axonserver.replication.log-storage-folder=/axonserver/log
axoniq.axonserver.controldb-path=/axonserver/data
axoniq.axonserver.pid-file-location=/axonserver/data

logging.file=/axonserver/data/axonserver.log
logging.file.max-history=10
logging.file.max-size=10MB
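Building and tagging the image is then a single command (a sketch; it assumes “axonserver.jar” and the properties file above sit next to the Dockerfile, and uses the image name referred to in the compose file below):

Plain Text

$ docker build -t axonserver-ee:test .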



You can push the resulting image to your local repository, or keep it local if you only want to run it on your development machine. On the docker-compose side we can now specify three instances of the same container image, using separate volumes for “data”, “events”, and “log”, but we haven’t yet provided it with a license file and token. We’ll use secrets for that:

YAML

# ...services, volumes, and networks sections skipped…
secrets:
  axonserver-properties:
    file: ./axonserver.properties
  axoniq-license:
    file: ./axoniq.license
  axonserver-token:
    file: ./axonserver.token



All three files will be placed in the “config” directory using a “secrets” section in the service definition, with an environment variable added to tell Axon Server about the location of the license file. As an example, here is the resulting definition of the first node’s service:

YAML

  axonserver-1:
    image: axonserver-ee:test
    hostname: axonserver-1
    volumes:
      - axonserver-data1:/axonserver/data
      - axonserver-events1:/axonserver/events
      - axonserver-log1:/axonserver/log
    secrets:
      - source: axoniq-license
        target: /axonserver/config/axoniq.license
      - source: axonserver-properties
        target: /axonserver/config/axonserver.properties
      - source: axonserver-token
        target: /axonserver/config/axonserver.token
    environment:
      - AXONIQ_LICENSE=/axonserver/config/axoniq.license
    ports:
      - '8024:8024'
      - '8124:8124'
      - '8224:8224'
    networks:
      - axon-demo



Note that for “axonserver-2” and “axonserver-3” you’ll have to adjust the port definitions, for example using “8025:8024” and “8026:8024” for the first port, to prevent all three trying to claim the same host port. As a sketch for the second node (shifting the gRPC and internal host ports by one as well, which is my own choice of free ports):
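YAML

  axonserver-2:
    image: axonserver-ee:test
    hostname: axonserver-2
    # ...volumes, secrets, environment, and networks as for axonserver-1,
    # with the volume names adjusted per node...
    ports:
      - '8025:8024'
      - '8125:8124'
      - '8225:8224'

The properties file referred to in the secrets’ definition section is: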

Properties files

axoniq.axonserver.autocluster.first=axonserver-1
axoniq.axonserver.autocluster.contexts=_admin,default

axoniq.axonserver.accesscontrol.enabled=true
axoniq.axonserver.accesscontrol.internal-token=2843a447-4da5-4b54-af27-7a8e0d857e87
axoniq.axonserver.accesscontrol.systemtokenfile=/axonserver/config/axonserver.token



Just like last time, we enable auto-clustering and access control, with a generated token for the communication between nodes. A similar approach can be used to configure more secrets for the certificates, and so enable SSL. The “axonserver-token” secret is there to allow the CLI to talk with the nodes; for example, to list the registered users (a sketch, assuming the “axonserver-cli.jar” from the Axon Server distribution and its “-S” and “-t” options for the server URL and system token):
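Plain Text

$ java -jar axonserver-cli.jar users -S http://localhost:8024 -t $(cat axonserver.token)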

Kubernetes and StatefulSets

Kubernetes has quickly become the “de facto” solution for running containerized applications on distributed infrastructure. Thanks to its API-first approach it allows for flexible deployments using modern “infrastructure as code” design patterns. Due to the tight integration possible with Continuous Integration pipelines, it is also perfect for “ephemeral infrastructure” testing, with complete environments set up and torn down with minimal work. It is also the go-to platform for microservices architectures, and I think I have collected enough bonus points in the buzzword bingo with it.

All jokes aside, many of our customers deploy their applications on Kubernetes clusters, and we get regular questions about “the best way” to run Axon Server on it. With a platform like Kubernetes, you’ll find that there are a lot of customization points, but they are all subject to the underlying deployment model, which is (preferably) that of a horizontally scalable and stateless (micro-)service, where the lifecycle is easily automatable. 

For Axon Server Standard Edition, scalability is vertical, as it has no concept of a clustered deployment. Stronger still, a running Axon Server instance is definitely stateful due to the event store. So for now let’s focus on the most important aspect of a Kubernetes deployment of Axon Server: fixing the server’s identity and persistence using StatefulSets.

As stated before, an Axon Server instance has a clear and persistent identity, in that it saves identifying information about itself and (in the case of Axon Server EE) other nodes in the cluster, in the controlDB. Also, if it is used as an event store, the context’s events will be stored on disk as well, and whereas a client application can survive restarts and version upgrades by rereading the events, Axon Server is the one providing those. 

In the context of Kubernetes that means we want to bind every Axon Server deployment to its own storage volumes, and also to a predictable network identity. Kubernetes provides us with a StatefulSet deployment class which does just that, with the guarantee that it will preserve the automatically allocated volume claims even if migrated to another (k8s) node.

The welcome package downloaded in part one includes an example YAML descriptor for Axon Server, which I have included below (with just minor differences):

YAML

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver
  labels:
    app: axonserver
spec:
  serviceName: axonserver
  replicas: 1
  selector:
    matchLabels:
      app: axonserver
  template:
    metadata:
      labels:
        app: axonserver
    spec:
      containers:
      - name: axonserver
        image: axoniq/axonserver
        imagePullPolicy: Always
        ports:
        - name: grpc
          containerPort: 8124
          protocol: TCP
        - name: http
          containerPort: 8024
          protocol: TCP
        volumeMounts:
        - name: eventstore
          mountPath: /eventdata
        - name: data
          mountPath: /data
        readinessProbe:
          httpGet:
            port: http
            path: /actuator/info
          initialDelaySeconds: 30
          periodSeconds: 5
          timeoutSeconds: 1
        livenessProbe:
          httpGet:
            port: http
            path: /actuator/info
          initialDelaySeconds: 60
          periodSeconds: 5
          timeoutSeconds: 1
  volumeClaimTemplates:
    - metadata:
        name: eventstore
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi


Two lines in the listing above are important for Axon Server SE: “replicas: 1” tells Kubernetes we want only a single instance, and the “image” line refers to the SE container image. Important to note is that this is a pretty basic descriptor, in the sense that it does not reserve any memory and/or CPU for Axon Server, which you may want to do for long-running deployments, and it “just” claims 5GiB of disk space for the Event Store. Also, we have not yet provided any means of adjusting the configuration.
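If you do want to reserve resources, a fragment along these lines (illustrative values, not a sizing recommendation) can be added to the container spec:

YAML

        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            memory: "2Gi"

To complete this we need to add Service definitions that expose the two ports: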

YAML

apiVersion: v1
kind: Service
metadata:
  name: axonserver-gui
  labels:
    app: axonserver
spec:
  ports:
  - name: gui
    port: 8024
    targetPort: 8024
  selector:
    app: axonserver
  type: LoadBalancer
  sessionAffinity: ClientIP
---
apiVersion: v1
kind: Service
metadata:
  name: axonserver-grpc
  labels:
    app: axonserver
spec:
  ports:
  - name: grpc
    port: 8124
    targetPort: 8124
  clusterIP: None
  selector:
    app: axonserver


Now you’ll notice the HTTP port is exposed using a LoadBalancer, while the Service for the gRPC port has the defaulted type “ClusterIP” with “clusterIP” set to “None”, making it (in Kubernetes terminology) a Headless Service. This is important because a StatefulSet needs at least one Headless Service to enable DNS exposure within the Kubernetes namespace.

Additionally, client applications will use long-living connections to the gRPC port, and are expected to be able to explicitly connect to a specific node. Apart from that, the deployment model for the client applications is probably what brought you to Kubernetes in the first place, making an externally accessible interface less of a requirement. The client applications will be deployed in their own namespace and can connect to Axon Server using k8s internal DNS.

The elements in the DNS name are (from left to right):

  • The name of the StatefulSet, a dash, and a sequence number (starting at 0). You’ll recognize this as the Pod’s name.
  • The name of the service.
  • The name of the namespace. (“default” if unspecified)
  • “svc.cluster.local”.
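Putting that together for the descriptors above (a sketch; it assumes the StatefulSet’s “serviceName”, here “axonserver”, is the Headless Service used for DNS), the first Pod would be reachable as:

Plain Text

axonserver-0.axonserver.default.svc.cluster.local

An Axon Framework client inside the cluster could then use this name in its “axon.axonserver.servers” property.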

If you want to deploy Axon Server in Kubernetes, but run client applications outside of it, you actually can use a “LoadBalancer” type service since gRPC uses HTTP/2, but you will need to fix it to the specific pod using the “statefulset.kubernetes.io/pod-name” selector and the Pod’s name as value, and repeat for all nodes. However, as this is not recommended practice we’ll not go into that.

Differences With Axon Server EE

There are several ways we can deploy a cluster of Axon Server EE nodes to Kubernetes. The simplest approach, and most often correct one, is to use a scaling factor other than 1, letting Kubernetes take care of deploying several instances. This means we will get several nodes that Kubernetes can dynamically manage and migrate as needed, while at the same time fixing the name and storage. As we saw with SE you’ll get a number suffixed to the name starting at 0, so a scaling factor of 3 gives us “axonserver-0” through “axonserver-2”. 
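Scaling the existing StatefulSet up is then a single command (assuming the StatefulSet keeps the name “axonserver” from the SE descriptor):

Plain Text

$ kubectl scale statefulset axonserver --replicas=3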

Of course we still need a secret to add the license file, but when we try to add this to our mix we run into a big difference with docker-compose: Kubernetes mounts Secrets and ConfigMaps as directories rather than files, so we need to split license and configuration into two separate locations. For the license secret we can use a new location “/axonserver/license/axoniq.license” and adjust the environment variable to match. For the system token we’ll use “/axonserver/security/token.txt”, and for the properties file we’ll use a ConfigMap that we mount on top of the “/axonserver/config” directory. We can create them directly from their respective files:

Plain Text

$ kubectl create secret generic axonserver-license --from-file=./axoniq.license
secret/axonserver-license created
$ kubectl create secret generic axonserver-token --from-file=./axoniq.token
secret/axonserver-token created
$ kubectl create configmap axonserver-properties --from-file=./axonserver.properties
configmap/axonserver-properties created
$



In the descriptor we now have to announce the secret, add a volume for it, and mount the secret on the volume:

YAML

        volumeMounts:
        - name: eventstore
          mountPath: /eventdata
        - name: data
          mountPath: /data



Becomes:

YAML

        env:
        - name: AXONIQ_LICENSE
          value: "/axonserver/license/axoniq.license"
        volumeMounts:
        - name: data
          mountPath: /axonserver/data
        - name: events
          mountPath: /axonserver/events
        - name: log
          mountPath: /axonserver/log
        - name: config
          mountPath: /axonserver/config
          readOnly: true
        - name: system-token
          mountPath: /axonserver/security
          readOnly: true
        - name: license
          mountPath: /axonserver/license
          readOnly: true



Then a list of volumes has to be added to link the actual license and properties:

YAML

    volumes:
    - name: config
      configMap:
        name: axonserver-properties
    - name: system-token
      secret:
        secretName: axonserver-token
    - name: license
      secret:
        secretName: axonserver-license



It is arguable that the properties should also be in a secret, which tightens up security on the settings in there, but I’ll leave that “as an exercise for the reader.”
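The creation itself would be the same one-liner as for the other secrets; the “config” volume in the descriptor would then reference this secret instead of the ConfigMap:

Plain Text

$ kubectl create secret generic axonserver-properties --from-file=./axonserver.properties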

Now there is only one thing left, and it has to do with the image we built for docker-compose. If you try to start the StatefulSet with 1 replica just to test if everything works, you’ll find that it fails with a so-called “CrashLoopBackOff”. If you look at the logs, you’ll find that Axon Server was unable to create the controlDB, which is odd given that it worked fine for SE.

The cause is a major difference between plain Docker and Kubernetes, in that volumes are mounted as owned by the mount location’s owner in Docker, while Kubernetes uses a special security context, defaulting to root. Since our EE image runs Axon Server under its own user, it has no rights on the mounted volume other than “read”.

The context can be specified, but only through the user or group’s ID, not their name as we used in the image, because that name does not exist in the k8s management context. So we have to adjust the first stage to use a fixed numeric ID, and then use that value in the security context:

Dockerfile

FROM busybox as source
RUN addgroup -S -g 1001 axonserver \
    && adduser -S -u 1001 -h /axonserver -D axonserver \
    && mkdir -p /axonserver/config /axonserver/data \
                /axonserver/events /axonserver/log \
    && chown -R axonserver:axonserver /axonserver



Now we have an explicit ID (1001 twice) and can add that to the StatefulSet:

YAML

  template:
    metadata:
      labels:
        app: axonserver
    spec:
      securityContext:
        runAsUser: 1001
        fsGroup: 1001
      containers:
        - name: axonserver
          image: eu.gcr.io/axoniq-devops/axonserver-ee:running
          imagePullPolicy: Always



With this change we can finally run Axon Server successfully, and scale it up to the number of nodes we want. However, when the second node comes up and tries to register itself with the first, another typical Kubernetes behaviour turns up: the logs of node “axonserver-enterprise-0” fill up with DNS lookup errors for “axonserver-enterprise-1”. This is caused by the way StatefulSet Pods are added to the DNS registry, which is not done until the readiness probe is happy. Axon Server itself is by then already busily running the auto-cluster actions, so node 0 is known to node 1 even though the way back to node 1 is still unknown.

In a Pod migration scenario, if e.g. a k8s node has to be brought down for maintenance, this is exactly the behaviour we want, even if it is confusing when we see it here during cluster initialisation and registration. If you want, you can avoid this by simply not using the auto-cluster options and doing everything by hand, but given that this really is a “cosmetic” issue with no lasting effects, you can also simply ignore the errors.

Alternative Deployment Models

Using the scaling factor on our StatefulSet is pretty straightforward, but it does have a potential disadvantage: we cannot disable (shut down, if you like) a node without also disabling all higher-numbered nodes. If we decide to give the nodes different roles in the cluster, define contexts that are not available on all nodes, or want to bring a “middle” node down for maintenance, we run into the horizontal scaling model imposed by Kubernetes.

It is possible to do individual restarts, simply by killing the Pod involved, which will prompt Kubernetes to start a new one for it, but we cannot shut it down “until further notice”. The assumption made by the StatefulSet is that each node needs its storage and identity, but they all provide the same service. If you want to reduce the scaling by one, the highest numbered one will be taken down. 

Taking the whole cluster down for maintenance is easy, but that is not what we want. An alternative model is to create StatefulSets per role, with as ultimate version a collection of single-node sets, as sketched below. This may feel wrong from a Kubernetes perspective, but works perfectly for Axon Server.
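A minimal sketch of that single-node-sets idea (names are illustrative; repeat per node):

YAML

# One single-replica StatefulSet per Axon Server node,
# each with its own serviceName.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: axonserver-a
spec:
  serviceName: axonserver-a
  replicas: 1
  # ...selector, template, and volumeClaimTemplates as before,
  # with names adjusted per node...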

Storage Considerations

In the first installment we discussed the different storage settings we can pass to Axon Server. In the context of a Docker or Kubernetes deployment this poses a double issue. The first, rather obvious, one is that we want to ensure the volume is persistent, and that we have direct access to it to enable backups. The second has to do with the implementation of the volume, in that it needs to be configurable so we can extend it when needed.

For Docker and docker-compose it is quite possible to do this on Windows, just not with the easiest implementation of the “local” driver. Kubernetes on a laptop or desktop, however, is a very different scenario, where practically all implementations use a local VM to implement the k8s node, and that VM cannot easily mount host directories. So while this will work for a quickly created test installation, if you want a long-running setup under Windows I would urge you to look at running Axon Server as a local installation instead.

In the cloud, both AWS and Google allow you to use volumes that can be extended as needed, without the need for further manual adjustments. As we’ll see in the next installment, for VMs you will be using disks that may need formatting before they can be properly mounted. A newly created volume in k8s is immediately ready for use, and resizing is only a matter of using the Console or CLI to pass the change, after which the extra space will be immediately usable.

In Closing

In the next installment we’ll be moving to VMs, and touch on a bit more OS specifics as we build a solution that can be used in a CI/CD pipeline, with a setup that does not require manual changes for updates and version upgrades.

The example scripts and configuration files used in this blog series are available from GitHub! Please visit https://github.com/AxonIQ/running-axon-server to get a copy.
