
Hazelcast With Spring Boot on Kubernetes


Let's combine the Hazelcast IMDG with Spring Boot and Kubernetes so you can bring in-memory data to your K8s clusters with ease.


Hazelcast is one of the leading in-memory data grid (IMDG) solutions. The main idea behind an IMDG is to distribute data across many nodes inside a cluster. Therefore, it seems to be an ideal solution for running on a cloud platform like Kubernetes, where you can easily scale the number of running instances up or down. Since Hazelcast is written in Java, you can easily integrate it with your Java application using standard libraries. Spring Boot can also simplify getting started with Hazelcast, and you may additionally use an unofficial library implementing the Spring Repositories pattern for Hazelcast – Spring Data Hazelcast.

The main goal of this article is to demonstrate how to embed Hazelcast in a Spring Boot application and run it on Kubernetes as a multi-instance cluster. Thanks to Spring Data Hazelcast, we won’t have to get into the details of Hazelcast data types. Although Spring Data Hazelcast does not provide many advanced features, it is a very good starting point.

Looking for something different? Check out How to Use Embedded Hazelcast on Kubernetes

Architecture

We are running multiple instances of a single Spring Boot application on Kubernetes. Each application exposes port 8080 for HTTP API access and port 5701 for Hazelcast cluster member discovery. The Hazelcast instances are embedded in the Spring Boot applications. We create two Services on Kubernetes: the first is dedicated to HTTP API access, while the second is responsible for enabling discovery between the Hazelcast instances. The HTTP API will be used to make some test requests that add data to the cluster and find it there. Let’s proceed to the implementation.

Example

The source code with the sample application is, as usual, available on GitHub at https://github.com/piomin/sample-hazelcast-spring-datagrid.git. You should access the employee-kubernetes-service module.

Dependencies

The integration between Spring and Hazelcast is provided by the hazelcast-spring library. The version of the Hazelcast libraries is managed by Spring Boot's dependency management, so we just need to set the Spring Boot version to the newest stable release, 2.2.4.RELEASE. The Hazelcast version managed by this version of Spring Boot is 3.12.5. In order to enable Hazelcast member discovery on Kubernetes, we also need to include the hazelcast-kubernetes dependency. Its versioning is independent of the core libraries: the newest version, 2.0, is dedicated to Hazelcast 4, and since we are still using Hazelcast 3, we declare version 1.5.2 of hazelcast-kubernetes. We also include Spring Data Hazelcast and, optionally, Lombok for simplification.

XML

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.2.4.RELEASE</version>
</parent>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>spring-data-hazelcast</artifactId>
        <version>2.2.2</version>
    </dependency>
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-spring</artifactId>
    </dependency>
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-client</artifactId>
    </dependency>
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-kubernetes</artifactId>
        <version>1.5.2</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
    </dependency>
</dependencies>

Enabling Kubernetes Discovery

After we include the required dependencies, Hazelcast is enabled for our application. The only thing left to do is to enable discovery through Kubernetes. The HazelcastInstance bean is already available in the context, so we may change its configuration by defining a com.hazelcast.config.Config bean. We need to disable the multicast and TCP/IP join mechanisms (multicast is enabled by default) and enable Kubernetes discovery in the network config, as shown below. The Kubernetes config requires setting the target namespace of the Hazelcast deployment and its service name.

Java

@Bean
Config config() {
    Config config = new Config();
    config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(false);
    config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
    config.getNetworkConfig().getJoin().getKubernetesConfig().setEnabled(true)
            .setProperty("namespace", "default")
            .setProperty("service-name", "hazelcast-service");
    return config;
}

We also have to define a Kubernetes Service named hazelcast-service on port 5701. Its selector targets the pods of the employee-service deployment, and its name is referenced in the Kubernetes discovery config above.

YAML

apiVersion: v1
kind: Service
metadata:
  name: hazelcast-service
spec:
  selector:
    app: employee-service
  ports:
    - name: hazelcast
      port: 5701
  type: LoadBalancer

Here’s the Kubernetes Deployment and Service definition for our sample application. We set three replicas in the deployment and expose two container ports: 8080 for the HTTP API and 5701 for Hazelcast.

YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee-service
  labels:
    app: employee-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: employee-service
  template:
    metadata:
      labels:
        app: employee-service
    spec:
      containers:
        - name: employee-service
          image: piomin/employee-service
          ports:
            - name: http
              containerPort: 8080
            - name: multicast
              containerPort: 5701
---
apiVersion: v1
kind: Service
metadata:
  name: employee-service
  labels:
    app: employee-service
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: employee-service
  type: NodePort

In fact, that’s all that needs to be done to successfully run a Hazelcast cluster on Kubernetes. Before proceeding to the deployment, let’s take a look at the application implementation details.

Implementation

Our application is very simple. It defines a single model object, which is stored in the Hazelcast cluster. Such a class needs an id field annotated with Spring Data's @Id and should implement the Serializable interface.

Java

@Getter
@Setter
@EqualsAndHashCode
@ToString
public class Employee implements Serializable {

    @Id
    private Long id;
    @EqualsAndHashCode.Exclude
    private Integer personId;
    @EqualsAndHashCode.Exclude
    private String company;
    @EqualsAndHashCode.Exclude
    private String position;
    @EqualsAndHashCode.Exclude
    private int salary;

}

With Spring Data Hazelcast, we can define repositories without writing any queries or using the Hazelcast-specific query API. We use the well-known method naming pattern defined by Spring Data to build the find methods shown below. Our repository interface should extend HazelcastRepository.

Java

public interface EmployeeRepository extends HazelcastRepository<Employee, Long> {

    Employee findByPersonId(Integer personId);
    List<Employee> findByCompany(String company);
    List<Employee> findByCompanyAndPosition(String company, String position);
    List<Employee> findBySalaryGreaterThan(int salary);

}

To enable Spring Data Hazelcast Repositories, we should annotate the main class or the configuration class with @EnableHazelcastRepositories.

Java

@SpringBootApplication
@EnableHazelcastRepositories
public class EmployeeApplication {

    public static void main(String[] args) {
        SpringApplication.run(EmployeeApplication.class, args);
    }

}

Finally, here’s the Spring controller implementation. It allows us to invoke all the find methods defined in the repository, add a new Employee object to Hazelcast, and remove an existing one.

Java

@RestController
@RequestMapping("/employees")
public class EmployeeController {

    private static final Logger logger = LoggerFactory.getLogger(EmployeeController.class);

    private final EmployeeRepository repository;

    EmployeeController(EmployeeRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/person/{id}")
    public Employee findByPersonId(@PathVariable("id") Integer personId) {
        logger.info("findByPersonId({})", personId);
        return repository.findByPersonId(personId);
    }

    @GetMapping("/company/{company}")
    public List<Employee> findByCompany(@PathVariable("company") String company) {
        logger.info("findByCompany({})", company);
        return repository.findByCompany(company);
    }

    @GetMapping("/company/{company}/position/{position}")
    public List<Employee> findByCompanyAndPosition(@PathVariable("company") String company, @PathVariable("position") String position) {
        logger.info("findByCompanyAndPosition({}, {})", company, position);
        return repository.findByCompanyAndPosition(company, position);
    }

    @GetMapping("/{id}")
    public Employee findById(@PathVariable("id") Long id) {
        logger.info("findById({})", id);
        return repository.findById(id).get();
    }

    @GetMapping("/salary/{salary}")
    public List<Employee> findBySalaryGreaterThan(@PathVariable("salary") int salary) {
        logger.info("findBySalaryGreaterThan({})", salary);
        return repository.findBySalaryGreaterThan(salary);
    }

    @PostMapping
    public Employee add(@RequestBody Employee emp) {
        logger.info("add({})", emp);
        return repository.save(emp);
    }

    @DeleteMapping("/{id}")
    public void delete(@PathVariable("id") Long id) {
        logger.info("delete({})", id);
        repository.deleteById(id);
    }

}

Running on Minikube

We will test our sample application on Minikube.

Shell

$ minikube start --vm-driver=virtualbox

The application is configured to run with Skaffold and the Jib Maven Plugin. I have already described both of these tools in one of my previous articles. They simplify the build and deployment process on Minikube. Assuming we are in the root directory of our application, we just need to run the following command. Skaffold automatically builds our application using Maven, creates a Docker image based on the Maven settings, applies the deployment file from the k8s directory, and finally runs the application on Kubernetes.

Shell

$ skaffold dev

Since we have declared three replicas of our application in deployment.yaml, three pods are started. If Hazelcast discovery is successful, the pod logs printed out by Skaffold should show all three members joining a single Hazelcast cluster.
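
If you prefer to check this outside of Skaffold's interleaved output, you can also query the pod logs directly by label. A minimal sketch, assuming the app=employee-service label from the deployment above:

Shell

# Fetch recent logs from all pods of the deployment and look for the Hazelcast members list
$ kubectl logs -l app=employee-service --tail=200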

Let’s take a look at the running pods.
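
For example, you can list the pods created by the deployment (the random pod name suffixes will differ in your cluster):

Shell

$ kubectl get pods -l app=employee-service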

And at the list of services. The HTTP API is available outside Minikube on port 32090.
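
For example (the NodePort assigned to the employee-service Service, 32090 here, may be different in your environment):

Shell

$ kubectl get services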

Now, we can send some test requests. We will start by calling the POST /employees method to add some Employee objects to the Hazelcast cluster. Then we will perform some find requests using GET /employees/{id}. After all the requests finish successfully, we can take a look at the logs, which clearly show the Hazelcast cluster working.

Shell

$ curl -X POST http://192.168.99.100:32090/employees -d '{"id":1,"personId":1,"company":"Test1","position":"Developer","salary":2000}' -H "Content-Type: application/json"
{"id":1,"personId":1,"company":"Test1","position":"Developer","salary":2000}
$ curl -X POST http://192.168.99.100:32090/employees -d '{"id":2,"personId":2,"company":"Test2","position":"Developer","salary":5000}' -H "Content-Type: application/json"
{"id":2,"personId":2,"company":"Test2","position":"Developer","salary":5000}
$ curl -X POST http://192.168.99.100:32090/employees -d '{"id":3,"personId":3,"company":"Test2","position":"Developer","salary":5000}' -H "Content-Type: application/json"
{"id":3,"personId":3,"company":"Test2","position":"Developer","salary":5000}
$ curl -X POST http://192.168.99.100:32090/employees -d '{"id":4,"personId":4,"company":"Test3","position":"Developer","salary":9000}' -H "Content-Type: application/json"
{"id":4,"personId":4,"company":"Test3","position":"Developer","salary":9000}
$ curl http://192.168.99.100:32090/employees/1
{"id":1,"personId":1,"company":"Test1","position":"Developer","salary":2000}
$ curl http://192.168.99.100:32090/employees/2
{"id":2,"personId":2,"company":"Test2","position":"Developer","salary":5000}
$ curl http://192.168.99.100:32090/employees/3
{"id":3,"personId":3,"company":"Test2","position":"Developer","salary":5000}

Looking at the logs from the pods printed out by Skaffold, note that Skaffold prefixes every log line with the pod id. The request adding the Employee with id=1 is processed by the application running on pod 5b758cc977-s6ptd, while the find request for id=1 is processed by the application on pod 5b758cc977-2fj2h. This proves that the Hazelcast cluster works properly. The same behavior may be observed for the other test requests.

We may also call some other find methods.

Shell

$ curl http://192.168.99.100:32090/employees/company/Test2/position/Developer
[{"id":2,"personId":2,"company":"Test2","position":"Developer","salary":5000},{"id":3,"personId":3,"company":"Test2","position":"Developer","salary":5000}]
$ curl http://192.168.99.100:32090/employees/salary/3000
[{"id":2,"personId":2,"company":"Test2","position":"Developer","salary":5000},{"id":4,"personId":4,"company":"Test3","position":"Developer","salary":9000},{"id":3,"personId":3,"company":"Test2","position":"Developer","salary":5000}]

Let’s test another scenario. We will remove one of the pods from the cluster, as shown below.
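
A minimal sketch of removing a pod; pick one of the names returned by kubectl get pods (the exact pod name will differ in your cluster), and the Deployment controller will immediately start a replacement:

Shell

# List the pods, then delete one of them
$ kubectl get pods -l app=employee-service
$ kubectl delete pod <one-of-the-employee-service-pods>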

Then we will send some test requests to GET /employees/{id}. No matter which instance of the application processes the request, the object is returned.
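
For example, repeating the earlier lookups against the same Minikube address and NodePort used above:

Shell

$ curl http://192.168.99.100:32090/employees/1
$ curl http://192.168.99.100:32090/employees/2
$ curl http://192.168.99.100:32090/employees/3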

Further Reading

Running Spring Boot Application on Kubernetes Minikube on Windows (Part 1)

Refcard: Cloud Native Data Grids: Hazelcast IMDG With Kubernetes


Published at DZone with permission of Piotr Mińkowski, DZone MVB.

