Integrating AI With Spring Boot: A Beginner’s Guide
In this guide, you will learn how to integrate AI into your Spring Boot app using Spring AI and simplify your AI setup with familiar Spring abstractions.
Do you need to integrate artificial intelligence into your Spring Boot application? Spring AI reduces the complexity by using abstractions you are used to applying within Spring Boot. Let’s dive into the basics in this blog post. Enjoy!
Introduction
Artificial intelligence is not a Python-only party anymore. LangChain4j opened the Java toolbox for integrating with AI, and Spring AI is the Spring solution for AI integration. Like LangChain4j, it tries to reduce the complexity of integrating AI within a Java application. The difference is that you can use the same abstractions as you are used to applying within Spring Boot.
At the time of writing, only a milestone release is available, but it is a matter of months before the first General Availability (GA) release is published. In this blog, basic functionality is demonstrated, mainly based on the official documentation of Spring AI. So, do check out the official documentation next to reading this blog.
Sources used in this blog are available on GitHub.
Prerequisites
Prerequisites for reading this blog are:
- Basic knowledge of Java;
- Basic knowledge of Spring Boot;
- Basic knowledge of large language models (LLMs).
Project Setup
Navigate to Spring Initializr and add the Ollama and Spring Web dependencies. Spring Web will be used to invoke REST endpoints, and Ollama will be used as the LLM provider. An LLM provider is used to run an LLM.
Take a look at the pom and notice that the spring-ai-ollama-spring-boot-starter dependency and the spring-ai-bom are added. As mentioned before, Spring AI is still a milestone release; therefore, the Spring Milestones repository needs to be added.
<properties>
    ...
    <spring-ai.version>1.0.0-M5</spring-ai.version>
</properties>

<dependencies>
    ...
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
    </dependency>
    ...
</dependencies>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>${spring-ai.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>
As an LLM provider, Ollama will be used. Install it according to the installation instructions and install a model. In this blog, Llama 3.2 will be used as a model. Install and run the model with the following command:
$ ollama run llama3.2
Detailed information about the Ollama commands can be found on the GitHub page.
The Spring Boot Ollama starter comes with some defaults. By default, the Mistral model is configured. The default can be changed in the application.properties file.
spring.ai.ollama.chat.options.model=llama3.2
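Other defaults can be overridden in the same file. The snippet below shows a few commonly used properties of the Ollama starter; the values are only illustrative, and the base URL shown is already the default.

```properties
# Where the Ollama server is running (this is the default).
spring.ai.ollama.base-url=http://localhost:11434
# Model to use for chat requests.
spring.ai.ollama.chat.options.model=llama3.2
# Sampling temperature; higher values give more creative answers.
spring.ai.ollama.chat.options.temperature=0.7
```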
Chat Responses
1. String Content
Take a look at the following code snippet, which takes a message as an input parameter, sends it to Ollama, and returns the response.
- A preconfigured ChatClient.Builder instance is injected in the constructor of MyController.
- A chatClient is constructed.
- The prompt method is used to start creating the prompt.
- The user message is added.
- The call method sends the prompt to Ollama.
- The content method contains the response.
@RestController
class MyController {

    private final ChatClient chatClient;

    public MyController(ChatClient.Builder chatClientBuilder) {
        this.chatClient = chatClientBuilder.build();
    }

    @GetMapping("/basic")
    String basic(@RequestParam String message) {
        return this.chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
Run the application.
$ mvn spring-boot:run
Invoke the endpoint with the message 'tell me a joke.' A joke is returned.
$ curl "http://localhost:8080/basic?message=tell%20me%20a%20joke"
Here's one:
What do you call a fake noodle?
An impasta.
2. ChatResponse
Instead of just returning the response from Ollama, it is also possible to retrieve a ChatResponse object, which contains some metadata: e.g., the number of input tokens (a token is a part of a word) and the number of output tokens (the number of tokens in the response). This is interesting when you use cloud models, because these charge you based on the number of tokens.
@GetMapping("/chatresponse")
String chatResponse(@RequestParam String message) {
    ChatResponse chatResponse = this.chatClient.prompt()
            .user(message)
            .call()
            .chatResponse();
    return chatResponse.toString();
}
Run the application and invoke the endpoint.
$ curl "http://localhost:8080/chatresponse?message=tell%20me%20a%20joke"
ChatResponse
[metadata={
id: ,
usage: {
promptTokens: 29,
generationTokens: 14,
totalTokens: 43
},
rateLimit: org.springframework.ai.chat.metadata.EmptyRateLimit@c069511 },
generations=[Generation
[assistantMessage=
AssistantMessage [
messageType=ASSISTANT,
toolCalls=[],
textContent=Why don't scientists trust atoms?
Because they make up everything.,
metadata={messageType=ASSISTANT}
],
chatGenerationMetadata=
ChatGenerationMetadata{finishReason=stop,contentFilterMetadata=null}
]
]
]
The metadata also contains the response itself, which is of type ASSISTANT because the model created this message.
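To make the token counts concrete: the cost of a call to a cloud model can be estimated from the usage metadata. A minimal sketch in plain Java, using hypothetical prices per 1,000 tokens (real providers publish their own rates) and the usage numbers from the ChatResponse above:

```java
public class TokenCostEstimate {

    // Hypothetical prices per 1,000 tokens; not the rates of any real provider.
    static final double PROMPT_PRICE_PER_1K = 0.0005;
    static final double GENERATION_PRICE_PER_1K = 0.0015;

    // Combine the prompt and generation token counts into one estimated cost.
    static double estimateCost(long promptTokens, long generationTokens) {
        return promptTokens / 1000.0 * PROMPT_PRICE_PER_1K
                + generationTokens / 1000.0 * GENERATION_PRICE_PER_1K;
    }

    public static void main(String[] args) {
        // 29 prompt tokens and 14 generation tokens, as reported above.
        System.out.println(estimateCost(29, 14));
    }
}
```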
3. Entity Response
You might want to process the response of the LLM in your application. In that case, it is very convenient when the response is returned as a Java object instead of having to parse it yourself. This can be done by means of the entity method. Create a record ArtistSongs, which contains the artist's name and a list of songs. When invoking the entity method, you specify that you want the response to be returned as an ArtistSongs record.
@GetMapping("/entityresponse")
String entityResponse() {
    ArtistSongs artistSongs = this.chatClient.prompt()
            .user("Generate a list of songs of Bruce Springsteen. Limit the list to 10 songs.")
            .call()
            .entity(ArtistSongs.class);
    return artistSongs.toString();
}

record ArtistSongs(String artist, List<String> songs) {}
Run the application and invoke the endpoint.
$ curl "http://localhost:8080/entityresponse"
ArtistSongs[artist=Bruce Springsteen, songs=[Born to Run, Thunder Road, Dancing in the Dark, Hungry Heart, Jungleland, The River, Devil's Arcade, Badlands, Sherry Darling, Rosalita (Come Out Tonight)]]
However, sometimes the response is empty. It might be that the model does not return any songs at all and that the prompt should be made more specific (e.g., at least one song).
$ curl "http://localhost:8080/entityresponse"
ArtistSongs[artist=null, songs=null]
You might run into an exception when you invoke the endpoint many times. The cause has not been investigated, but it seems that Spring AI adds instructions to the model in order to return the response as a JSON object so that it can be converted easily to a Java object. Sometimes, the model might not return valid JSON.
2024-11-16T13:01:06.980+01:00 ERROR 21595 --- [MySpringAiPlanet] [nio-8080-exec-1] o.s.ai.converter.BeanOutputConverter : Could not parse the given text to the desired target type:{
"artist": "Bruce Springsteen",
"songs": [
"Born in the U.S.A.",
"Thunder Road",
"Dancing in the Dark",
"Death to My Hometown",
"The River",
"Badlands",
"Jungleland",
"Streets of Philadelphia",
"Born to Run",
"Darkness on the Edge of Town" into org.springframework.ai.converter.BeanOutputConverter$CustomizedTypeReference@75a77425
2024-11-16T13:01:06.981+01:00 ERROR 21595 --- [MySpringAiPlanet] [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: java.lang.RuntimeException: com.fasterxml.jackson.databind.JsonMappingException: Unexpected end-of-input: expected close marker for Array (start marker at [Source: REDACTED (`StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION` disabled); line: 3, column: 12])
at [Source: REDACTED (`StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION` disabled); line: 13, column: 35] (through reference chain: com.mydeveloperplanet.myspringaiplanet.MyController$ArtistSongs["songs"]->java.util.ArrayList[10])] with root cause
com.fasterxml.jackson.core.io.JsonEOFException: Unexpected end-of-input: expected close marker for Array (start marker at [Source: REDACTED (`StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION` disabled); line: 3, column: 12])
at [Source: REDACTED (`StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION` disabled); line: 13, column: 35]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportInvalidEOF(ParserMinimalBase.java:585) ~[jackson-core-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.core.base.ParserBase._handleEOF(ParserBase.java:535) ~[jackson-core-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.core.base.ParserBase._eofAsNextChar(ParserBase.java:552) ~[jackson-core-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWSOrEnd(ReaderBasedJsonParser.java:2491) ~[jackson-core-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:673) ~[jackson-core-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextTextValue(ReaderBasedJsonParser.java:1217) ~[jackson-core-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.deser.std.StringCollectionDeserializer.deserialize(StringCollectionDeserializer.java:203) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.deser.std.StringCollectionDeserializer.deserialize(StringCollectionDeserializer.java:184) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.deser.std.StringCollectionDeserializer.deserialize(StringCollectionDeserializer.java:27) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:545) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeWithErrorWrapping(BeanDeserializer.java:576) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:446) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1493) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:348) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:185) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:342) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4905) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3848) ~[jackson-databind-2.17.2.jar:2.17.2]
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3831) ~[jackson-databind-2.17.2.jar:2.17.2]
at org.springframework.ai.converter.BeanOutputConverter.convert(BeanOutputConverter.java:191) ~[spring-ai-core-1.0.0-M3.jar:1.0.0-M3]
at org.springframework.ai.converter.BeanOutputConverter.convert(BeanOutputConverter.java:58) ~[spring-ai-core-1.0.0-M3.jar:1.0.0-M3]
at org.springframework.ai.chat.client.DefaultChatClient$DefaultCallResponseSpec.doSingleWithBeanOutputConverter(DefaultChatClient.java:349) ~[spring-ai-core-1.0.0-M3.jar:1.0.0-M3]
at org.springframework.ai.chat.client.DefaultChatClient$DefaultCallResponseSpec.entity(DefaultChatClient.java:355) ~[spring-ai-core-1.0.0-M3.jar:1.0.0-M3]
at com.mydeveloperplanet.myspringaiplanet.MyController.entityResponse(MyController.java:54) ~[classes/:na]
	...
Stream Responses
When a large response is returned, it is better to stream the response so that the user can start reading right away instead of waiting for the full response. The only thing you need to do is replace the call method with the stream method. The response will be a Flux.
@GetMapping("/stream")
Flux<String> stream(@RequestParam String message) {
    return this.chatClient.prompt()
            .user(message)
            .stream()
            .content();
}
Run the application and invoke the endpoint. The result is that characters are displayed one after another.
$ curl "http://localhost:8080/stream?message=tell%20me%20a%20joke"
Here's one:
What do you call a fake noodle?
An impasta.
System Message
A system message is used to instruct the LLM on how it should behave. This can be done by invoking the system method. In the code snippet, the LLM is instructed to use quotes from the movie The Terminator in the response.
@GetMapping("/system")
String system() {
    return this.chatClient.prompt()
            .system("You are a chat bot who uses quotes of The Terminator when responding.")
            .user("Who is Bruce Springsteen?")
            .call()
            .content();
}
Run the application and invoke the endpoint. The response contains random quotes.
$ curl "http://localhost:8080/system"
"Hasta la vista, baby!" Just kidding, I think you meant to ask about the Boss himself, Bruce Springsteen! He's a legendary American singer-songwriter and musician known for his heartland rock style and iconic songs like "Born to Run," "Thunder Road," and many more. A true "I'll be back" kind of artist, with a career spanning over 40 years and countless hits that have made him one of the most beloved musicians of all time!
The system message can also be applied to the ChatClient.Builder itself, together with other, more general options. This way, you only need to add it once, or you can create your own defaults and override them when necessary.
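As a sketch of this approach, the system message can be set as a default when constructing the ChatClient, so every prompt built from this client uses it unless it is overridden (controller and endpoint names here are made up for illustration):

```java
@RestController
class MyDefaultsController {

    private final ChatClient chatClient;

    public MyDefaultsController(ChatClient.Builder chatClientBuilder) {
        // The system message is now applied to every prompt by default.
        this.chatClient = chatClientBuilder
                .defaultSystem("You are a chat bot who uses quotes of The Terminator when responding.")
                .build();
    }

    @GetMapping("/defaultsystem")
    String defaultSystem(@RequestParam String message) {
        // No system method needed here; the default is used.
        return this.chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
```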
Chat Memory
You prompt an LLM, and it responds. Based on this response, you prompt again. However, the LLM will not know anything about the previous prompt and response. Let’s use the first endpoint and tell the LLM your name (my name is Gunter). After that, ask the LLM your name (what is my name).
$ curl "http://localhost:8080/basic?message=my%20name%20is%20Gunter"
Hallo Gunter! It's nice to meet you. Is there something I can help you with, or would you like to chat?
$ curl "http://localhost:8080/basic?message=what%20is%20my%20name"
I don't have any information about your name. This conversation just started, and I don't have any prior knowledge or context to know your name. Would you like to share it with me?
As you can see, the LLM does not remember the information that was given before. In order to solve this, you need to add chat memory to consecutive prompts. In Spring AI, this can be done by adding advisors. In the code snippet below, an in-memory chat memory is used, but you are also able to persist it to Cassandra if needed.
private final InMemoryChatMemory chatMemory = new InMemoryChatMemory();

@GetMapping("/chatmemory")
String chatMemory(@RequestParam String message) {
    return this.chatClient.prompt()
            .advisors(new MessageChatMemoryAdvisor(chatMemory))
            .user(message)
            .call()
            .content();
}
Run the application and invoke this new endpoint. The LLM does know your name now.
$ curl "http://localhost:8080/chatmemory?message=my%20name%20is%20Gunter"
Hallo Gunter! It's nice to meet you. Is there something I can help you with, or would you like to chat for a bit?
$ curl "http://localhost:8080/chatmemory?message=what%20is%20my%20name"
Your name is Gunter. You told me that earlier! Is there anything else you'd like to talk about?
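Under the hood, chat memory works by resending the previous messages together with each new prompt; the LLM itself remains stateless. A minimal plain-Java sketch of the idea (a simplification, not the actual Spring AI implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class NaiveChatMemory {

    record Message(String role, String text) {}

    private final List<Message> history = new ArrayList<>();

    // Build the full prompt that would be sent to the model:
    // all previous messages plus the new user message.
    List<Message> promptWith(String userText) {
        List<Message> prompt = new ArrayList<>(history);
        prompt.add(new Message("user", userText));
        return prompt;
    }

    // After the model answers, both sides of the exchange are stored.
    void record(String userText, String assistantText) {
        history.add(new Message("user", userText));
        history.add(new Message("assistant", assistantText));
    }

    public static void main(String[] args) {
        NaiveChatMemory memory = new NaiveChatMemory();
        memory.record("my name is Gunter", "Nice to meet you, Gunter!");
        // The second prompt now carries the earlier exchange along.
        System.out.println(memory.promptWith("what is my name").size()); // prints 3
    }
}
```

This is essentially what the MessageChatMemoryAdvisor does for you: it stores the exchanged messages and prepends them to each new prompt.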
Prompt Templates
Prompt templating allows you to create a template prompt and fill in some parameters. This is quite useful because creating a good prompt can be quite challenging, and you do not want to bother your users with it. The following code snippet shows how to create a PromptTemplate and how to add the parameters to the template as key-value pairs.
@GetMapping("/promptwhois")
String promptWhoIs(@RequestParam String name) {
    PromptTemplate promptTemplate = new PromptTemplate("Who is {name}");
    Prompt prompt = promptTemplate.create(Map.of("name", name));
    return this.chatClient.prompt(prompt)
            .call()
            .content();
}
Run the application and invoke the endpoint with different parameters.
$ curl "http://localhost:8080/promptwhois?name=Bruce%20Springsteen"
Bruce Springsteen (born September 23, 1949) is an American singer-songwriter and musician. He is one of the most influential and iconic figures in popular music, known for his heartland rock style and poignant lyrics.
...
$ curl "http://localhost:8080/promptwhois?name=Arnold%20Schwarzenegger"
Arnold Schwarzenegger is a world-renowned Austrian-born American actor, filmmaker, entrepreneur, and former politician. He is one of the most successful and iconic figures in the entertainment industry.
...
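Conceptually, a prompt template performs placeholder substitution. The idea can be sketched in plain Java (a deliberate simplification of what the real PromptTemplate class does):

```java
import java.util.Map;

public class SimplePromptTemplate {

    private final String template;

    SimplePromptTemplate(String template) {
        this.template = template;
    }

    // Replace each {key} placeholder with its value from the map.
    String render(Map<String, String> model) {
        String result = template;
        for (Map.Entry<String, String> entry : model.entrySet()) {
            result = result.replace("{" + entry.getKey() + "}", entry.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        SimplePromptTemplate template = new SimplePromptTemplate("Who is {name}");
        System.out.println(template.render(Map.of("name", "Bruce Springsteen")));
        // prints: Who is Bruce Springsteen
    }
}
```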
You can apply this to system messages as well. In the next code snippet, the LLM is instructed to add quotes from a certain movie to the response. The user message is fixed in this example. By means of a Prompt object, you create a list containing the system and user messages and pass it to the prompt method.
@GetMapping("/promptmessages")
String promptMessages(@RequestParam String movie) {
    Message userMessage = new UserMessage("Tell me a joke");
    SystemPromptTemplate systemPromptTemplate = new SystemPromptTemplate("You are a chat bot who uses quotes of {movie} when responding.");
    Message systemMessage = systemPromptTemplate.createMessage(Map.of("movie", movie));
    Prompt prompt = new Prompt(List.of(systemMessage, userMessage));
    return this.chatClient.prompt(prompt)
            .call()
            .content();
}
Run the application and invoke the endpoint using different movies like The Terminator and Die Hard.
$ curl "http://localhost:8080/promptmessages?movie=The%20Terminator"
"Hasta la vista, baby... to your expectations! Why couldn't the bicycle stand up by itself? Because it was two-tired!"
$ curl "http://localhost:8080/promptmessages?movie=Die%20Hard"
"Yippee ki yay, joke time!" Here's one:
Why did the scarecrow win an award?
Because he was outstanding in his field! (get it?)
"Now we're talking!" Hope that made you smile!
Conclusion
Spring AI already offers quite some functionality for interacting with artificial intelligence systems. It is easy to use and builds on familiar Spring abstractions. In this blog, the basics were covered. Now it is time to experiment with more complex use cases!
Published at DZone with permission of Gunter Rotsaert, DZone MVB. See the original article here.