Spring Boot and Elasticsearch Tutorial
Spring Boot and Elasticsearch are two of the more powerful tools available to developers. Learn how to use them together.
Overview
In my last article, I talked about how we can use the Spring Data Elasticsearch project to connect to the Elasticsearch engine and perform CRUD operations. However, I also mentioned that this project has not been updated to be compatible with the latest version of the Elasticsearch engine.
So, in this post, I am going to cover how to interact with the latest version of the Elasticsearch engine using the Transport Client library. I will use Spring Boot as the client application and add dependencies for the other required libraries.
Pre-Requisites
JDK 1.8
Maven
Elasticsearch engine 5.x or 6.x (the download steps are explained below)
Eclipse or VSD as an IDE
Set Up Elasticsearch
Step 1 - Go to Elastic's official website.
Step 2 - Select Elasticsearch in the drop-down, pick version 5.5.0, and click the Download button.
Step 3 - It will give you the option to download a ZIP, TAR, or RPM file. I selected the ZIP format since I am using it on Windows.
Step 4 - Unzip the downloaded content and go to the bin folder. There will be a file named elasticsearch.bat.
Step 5 - Run this file from a command prompt on Windows and it will start the Elasticsearch engine. Once started, it listens on port 9200, so the URL will be http://localhost:9200/. Port 9300 is also exposed as a cluster node. Port 9200 is for REST communication and can be used by Java or any other language, while 9300 is for Elasticsearch cluster node communication; Java can also connect to this cluster node using the transport protocol.
Step 6 - Verify that everything started properly by hitting the URL with the curl command. You can use PowerShell on Windows.
curl http://localhost:9200/
StatusCode : 200
StatusDescription : OK
Content : {
"name" : "ggyBmti",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "Bp6EeKIoQNqGj0iV5sHtWg",
"version" : {
"number" : "5.5.0",
"build_hash" : "260387d",
"build_date" : "2017-...
RawContent : HTTP/1.1 200 OK
Content-Length: 327
Content-Type: application/json; charset=UTF-8
{
"name" : "ggyBmti",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "Bp6EeKIoQNqGj0iV5sHtWg",
"versi...
Forms : {}
Headers : {[Content-Length, 327], [Content-Type, application/json; charset=UTF-8]}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : mshtml.HTMLDocumentClass
RawContentLength : 327
Once the search engine starts up, let's try to test some of the REST APIs it provides to interact with the engine.
Launch http://localhost:9200/users/employee/1 using Postman or curl with the POST method. The input should be in JSON format.
{
"userId" :"1",
"name" : "Rajesh",
"userSettings" : {
"gender" : "male",
"occupation" : "CA",
"hobby" : "chess"
}
}
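If you prefer the command line over Postman, the same request with curl would look roughly like this:
curl -XPOST 'http://localhost:9200/users/employee/1' -H 'Content-Type: application/json' -d '{ "userId" : "1", "name" : "Rajesh", "userSettings" : { "gender" : "male", "occupation" : "CA", "hobby" : "chess" } }'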
The response will show "users" as the index name, "employee" as the type, and "1" as the document id. (In the output below the result is "updated" and "created" is false because the same document had already been indexed while writing this article; on a fresh index the result would be "created".)
{
"_index": "users",
"_type": "employee",
"_id": "1",
"_version": 3,
"result": "updated",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"created": false
}
Now, to view the content of the document, we can call another REST API endpoint, http://localhost:9200/users/employee/1, using the GET method. This returns the following document information:
{
"_index": "users",
"_type": "employee",
"_id": "1",
"_version": 3,
"found": true,
"_source": {
"userId": "1",
"name": "Rajesh",
"userSettings": {
"gender": "male",
"occupation": "CA",
"hobby": "chess"
}
}
}
Now, if we want to search for a document by a particular field, we need to add _search as a path in the REST API URL. To do this, run the following command: curl -XGET 'http://localhost:9200/users/employee/_search'
This will search for all the documents with an index of "users" and a type of "employee." If you want to search on a particular field, you need to add some query match criteria as part of your JSON.
curl -XGET 'http://localhost:9200/users/employee/_search' -H 'Content-Type: application/json' -d '{"query": { "match": {"name" : "Rajesh" } }}'
This will search for a document that has the field 'name' set as 'Rajesh.'
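For reference, the _search endpoint wraps matching documents in a "hits" structure. An abridged response for the query above would look roughly like this (scores and timings will differ on your machine):
{
  "took": 2,
  "timed_out": false,
  "hits": {
    "total": 1,
    "max_score": 0.2876821,
    "hits": [
      {
        "_index": "users",
        "_type": "employee",
        "_id": "1",
        "_source": {
          "userId": "1",
          "name": "Rajesh"
        }
      }
    ]
  }
}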
Now that we have seen how the Elasticsearch REST APIs perform create, retrieve, and other operations on documents, let's look at how we can connect our application code to the Elasticsearch engine. We can either call these REST APIs directly from the code or use the transport client provided by Elasticsearch. Let's develop a Spring Boot application to showcase all the CRUD operations.
Developing a Spring Boot Application
Maven Dependencies
Other than the Spring Boot jars, we need the Elasticsearch, transport client, and Log4j jars.
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
</dependency>
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>transport</artifactId>
<version>5.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>2.7</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.7</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-web</artifactId>
<version>2.7</version>
</dependency>
</dependencies>
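As a general rule, the transport client version should line up with the version of the Elasticsearch engine you are running. Since we downloaded 5.5.0 above, you would typically pin the client to the same version (adjust this to whatever engine version you actually run):
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>transport</artifactId>
<version>5.5.0</version>
</dependency>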
Configuration
As we will be using a transport client to connect to the Elasticsearch engine, we need to provide the URL of the engine's cluster node, so I have put the host and port in an application.properties file.
# Local Elasticsearch config
elasticsearch.host=localhost
elasticsearch.port=9300
# App config
server.port=8102
spring.application.name=BootElastic
Domain
Create a domain class named User. The JSON input will be mapped to this User object, which is used to create the user document associated with the index and type.
public class User {
    private String userId;
    private String name;
    private Date creationDate = new Date();
    private Map<String, String> userSettings = new HashMap<>();
    // getters and setters omitted for brevity
}
Java Configuration
A Java configuration class creates the transport client that connects to the Elasticsearch cluster node. It reads the host and port values from the environment, configured through the application.properties file above.
@Configuration
public class Config {

    @Value("${elasticsearch.host:localhost}")
    public String host;

    @Value("${elasticsearch.port:9300}")
    public int port;

    public String getHost() {
        return host;
    }

    public int getPort() {
        return port;
    }

    @Bean
    public Client client() {
        TransportClient client = null;
        try {
            System.out.println("host:" + host + ", port:" + port);
            // connect to the Elasticsearch cluster node over the transport protocol (port 9300)
            client = new PreBuiltTransportClient(Settings.EMPTY)
                    .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port));
        } catch (UnknownHostException e) {
            e.printStackTrace();
        }
        return client;
    }
}
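A note on Settings.EMPTY: it works here because the engine runs with the default cluster name "elasticsearch" (visible in the curl output earlier). If your cluster has been renamed, the transport client needs to be told about it. A minimal sketch, assuming a cluster named "my-cluster":
Settings settings = Settings.builder()
        .put("cluster.name", "my-cluster") // hypothetical cluster name; use your own
        .build();
TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port));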
Controller
A UserController is created to showcase the features below:
1. Create an index named "users" and a type named "employee," and create a document for storing user information. The id for a document can be passed in the JSON input or, if it is not passed, Elasticsearch will generate its own id. The client has a method called prepareIndex() which builds the document object and stores it against the index and type. This handler is a POST method call where the User information is passed as JSON.
@Autowired
Client client;
@PostMapping("/create")
public String create(@RequestBody User user) throws IOException {
IndexResponse response = client.prepareIndex("users", "employee", user.getUserId())
.setSource(jsonBuilder()
.startObject()
.field("name", user.getName())
.field("userSettings", user.getUserSettings())
.endObject()
)
.get();
System.out.println("response id:"+response.getId());
return response.getResult().toString();
}
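A quick note on the controller class itself: the snippets in this section show only the handler methods. The enclosing class is assumed to be annotated roughly like this (the /rest/users base path matches the test URLs used later in the article):
@RestController
@RequestMapping("/rest/users")
public class UserController {
    // the injected Client and the handler methods shown in this section live here
}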
2. View the user information based on the "id" passed. The client has a prepareGet() method to retrieve information based on the index, type, and id. It returns the user information in JSON format.
@GetMapping("/view/{id}")
public Map<String, Object> view(@PathVariable final String id) {
GetResponse getResponse = client.prepareGet("users", "employee", id).get();
return getResponse.getSource();
}
3. View the user information based on a field name. I have used matchQuery() here to search by the "name" field and return the User information. However, there are many other types of queries available on the QueryBuilders class. For example, use rangeQuery() to search a field value within a particular range, like ages between 10 and 20 years. There is a wildcardQuery() method to search a field with a wildcard, and termQuery() is also available. You can play with all of these based on your needs; a short sketch of these alternatives follows the method below.
@GetMapping("/view/name/{field}")
public Map<String, Object> searchByName(@PathVariable final String field) {
Map<String,Object> map = null;
SearchResponse response = client.prepareSearch("users")
.setTypes("employee")
.setSearchType(SearchType.QUERY_AND_FETCH)
.setQuery(QueryBuilders..matchQuery("name", field))
.get()
;
List<SearchHit> searchHits = Arrays.asList(response.getHits().getHits());
map = searchHits.get(0).getSource();
return map;
}
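As a rough sketch of the alternative query builders mentioned above (the field names here are purely illustrative, not necessarily part of the sample User document):
// match a numeric field within a range, e.g. ages between 10 and 20
QueryBuilders.rangeQuery("age").gte(10).lte(20);
// wildcard match on the "name" field
QueryBuilders.wildcardQuery("name", "raj*");
// exact term match on a field value
QueryBuilders.termQuery("userSettings.hobby", "chess");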
4. Update the document by locating it by id and replacing a field value. The client has a method called update(). It accepts an UpdateRequest as input, which builds the update query.
@GetMapping("/update/{id}")
public String update(@PathVariable final String id) throws IOException {
UpdateRequest updateRequest = new UpdateRequest();
updateRequest.index("users")
.type("employee")
.id(id)
.doc(jsonBuilder()
.startObject()
.field("name", "Rajesh")
.endObject());
try {
UpdateResponse updateResponse = client.update(updateRequest).get();
System.out.println(updateResponse.status());
return updateResponse.status().toString();
} catch (InterruptedException | ExecutionException e) {
System.out.println(e);
}
return "Exception";
}
5. The last method showcases how to delete a document of a given index and type. The client has a prepareDelete() method which accepts the index, type, and id to delete the document.
@GetMapping("/delete/{id}")
public String delete(@PathVariable final String id) {
DeleteResponse deleteResponse = client.prepareDelete("users", "employee", id).get();
return deleteResponse.getResult().toString();
}
The full code has been put on GitHub.
Build the Application
Run the mvn clean install command to build the jar file.
Start Application
Run the java -jar target/standalone-elasticsearch-0.0.1-SNAPSHOT.jar command to start the Spring Boot application.
Test Application
The application will be running at http://localhost:8102. Now let's test a couple of the use cases we talked about above.
1. Test for creating a document.
Launch http://localhost:8102/rest/users/create as a POST method, either through curl or Postman.
Input:
{
"userId":"1",
"name": "Sumit",
"userSettings": {
"gender" : "male",
"occupation" : "CA",
"hobby" : "chess"
}
}
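With curl, the same call would look roughly like this:
curl -XPOST http://localhost:8102/rest/users/create -H 'Content-Type: application/json' -d '{ "userId" : "1", "name" : "Sumit", "userSettings" : { "gender" : "male", "occupation" : "CA", "hobby" : "chess" } }'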
You will see the response showing "CREATED."
2. To test if the document has been created, let's test the view functionality.
Launch http://localhost:8102/rest/users/view/1 with the GET method.
In response, you will see the User information for id with a value of "1."
{
"userSettings": {
"occupation": "CA",
"gender": "male",
"hobby": "chess"
},
"name": "Rajesh"
}
3. You can also view user information via the name field by launching http://localhost:8102/rest/users/view/name/Rajesh, which passes "Rajesh" as the value of the "name" field.
Similarly, the update and delete features can be tested by launching http://localhost:8102/rest/users/update/1 and http://localhost:8102/rest/users/delete/1.
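Since both endpoints are mapped as GET in the controller above, plain curl calls are enough, for example:
curl http://localhost:8102/rest/users/update/1
curl http://localhost:8102/rest/users/delete/1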
These are all the features I have played around with. As mentioned above, there are many different types of queries you can explore with the Elasticsearch transport client.