Leverage Amazon Bedrock Chat Model With Java and Spring AI
Learn to use Amazon Bedrock to build an app that sends text prompts to a model with Spring AI. This guide covers AWS setup, Spring AI config, and Bedrock integration.
Hi community!
This is my third article in a series of introductions to Spring AI. You may find the first two at the links below:
- Using Spring AI to Generate Images With OpenAI's DALL-E 3
- Exploring Embeddings API with Java and Spring AI
In this article, I’ll skip the explanation of some basic Spring concepts like bean management, starters, etc., as the main goal of this article is to discover Spring AI capabilities. For the same reason, I won’t create detailed instructions on generating AWS credentials. If you don’t have them, follow the links in Step 0, which should give you enough context on how to create them.
Also, please be aware that executing code from this application may incur charges for using Amazon Bedrock.
The code I will share in this article is also available in the GitHub repo.
Introduction
Today, we are going to implement a simple proxy application that forwards our text prompt to a foundation model managed by AWS Bedrock and returns a response. Routing requests through our own service keeps the code flexible and the model choice configurable.
But before we start, let me give you a little introduction to Amazon Bedrock.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Luma, Meta, Mistral AI, poolside (coming soon), Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
The single-API access of Amazon Bedrock, regardless of the models you choose, gives you the flexibility to use different FMs and upgrade to the latest model versions with minimal code changes.
You may find more information about Amazon Bedrock and its capabilities on the service website.
Now, let’s start implementing!
Step 0. Generate AWS Keys and Choose the Foundation Model to Use
If you don’t have an active AWS access key, follow these steps (adapted from this Stack Overflow thread):
- Go to http://aws.amazon.com/
- Sign up and create a new account (you’ll be offered a one-year free tier or similar)
- Go to your AWS account overview
- Open the account menu in the upper right (it has your name on it)
- Select the Security Credentials sub-menu
After your keys are generated, you should choose and enable the foundation model in Bedrock. Go to Amazon Bedrock, and from the Model Access menu on the left, configure access to the models you are going to use.
Step 1. Set Up a Project
To quickly generate a project template with all the necessary dependencies, you may use Spring Initializr at https://start.spring.io/.
In my example, I’ll be using Java 17 and Spring Boot 3.4.1. We also need to include the following dependencies:
- Amazon Bedrock Converse AI. Spring AI support for Amazon Bedrock Converse. It provides a unified interface for conversational AI models with enhanced capabilities, including function/tool calling, multimodal inputs, and streaming responses
- Web. Build web, including RESTful, applications using Spring MVC. Uses Apache Tomcat as the default embedded container.
After clicking Generate, open the downloaded project in your IDE and validate that all the necessary dependencies and properties exist in pom.xml.
```xml
<properties>
    <java.version>17</java.version>
    <spring-ai.version>1.0.0-M5</spring-ai.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-bedrock-converse-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
```
At the moment of writing this article, Spring AI version 1.0.0-M5 has not been published to the central Maven repository yet and is only available in the Spring repository. That’s why we need to add that repository to our pom.xml as well:
```xml
<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>
```
Step 2. Set Up the Configuration File
As a next step, we need to configure our property file. By default, Spring uses an application.yaml or application.properties file. In this example, I’m using the YAML format; you may convert it to .properties if you feel more comfortable working with that format.
Here are all the configs we need to add to the application.yaml file:
```yaml
spring:
  application:
    name: aichatmodel
  ai:
    bedrock:
      aws:
        access-key: [YOUR AWS ACCESS KEY]
        secret-key: [YOUR AWS SECRET KEY]
      converse:
        chat:
          options:
            model: amazon.titan-text-express-v1
```
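Since the article mentions that you may prefer the .properties format, here is a direct translation of the YAML above into application.properties (same keys, flattened with dots):

```properties
spring.application.name=aichatmodel
spring.ai.bedrock.aws.access-key=[YOUR AWS ACCESS KEY]
spring.ai.bedrock.aws.secret-key=[YOUR AWS SECRET KEY]
spring.ai.bedrock.converse.chat.options.model=amazon.titan-text-express-v1
```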
Model: the ID of the model to use. See the Amazon Bedrock documentation for the supported models and model features.
Model is the only required config to start our application. All the other configs are not required or have default values. As the main purpose of this article is to show the ease of Spring AI integration with AWS Bedrock, we will not go deeper into other configurations. You may find a list of all configurable properties in the Spring Boot docs.
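A quick note on the credentials: hardcoding keys in application.yaml is risky if the file ever lands in version control. As a safer sketch, Spring’s property placeholders can pull them from environment variables instead. The variable names below (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION) are the conventional AWS ones, and the region property with its default is shown here as an assumption:

```yaml
spring:
  ai:
    bedrock:
      aws:
        # Resolved from environment variables at startup instead of
        # being committed to the repository.
        access-key: ${AWS_ACCESS_KEY_ID}
        secret-key: ${AWS_SECRET_ACCESS_KEY}
        region: ${AWS_REGION:us-east-1}
```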
Step 3. Create a Chat Service
Now, let’s create the main service of our application: the chat service.
To integrate our application with the chat model managed by AWS Bedrock, we need to autowire Spring AI’s ChatModel interface. Since we already configured all the chat options in application.yaml, Spring Boot will automatically create and configure an instance (bean) of ChatModel.
To start a conversation with a particular prompt, we just need to write one line of code:
```java
ChatResponse call = chatModel.call(prompt);
```
Let’s see what the whole service looks like:
```java
@Service
public class ChatService {

    @Autowired
    private ChatModel chatModel;

    public String getResponse(String promptText) {
        Prompt prompt = new Prompt(promptText);
        ChatResponse call = chatModel.call(prompt);
        return call.getResult().getOutput().getContent();
    }
}
```
That's it! We just need three lines of code to set up integration with the AWS Bedrock chat model. Isn't that amazing?
Let’s dive deep into this code:
- As I previously mentioned, the ChatModel bean is already configured by Spring Boot because we provided all the necessary info in the application.yaml file. We just need to ask Spring to inject it using the @Autowired annotation.
- We create a new Prompt object from the user’s prompt text and pass it to the injected chat model. After receiving a response, we simply return its content, which is the actual chat answer.
Step 4. Create a Web Endpoint
To easily reach our service, let’s create a simple GET endpoint.
```java
@RestController
public class ChatController {

    @Autowired
    ChatService chatService;

    @GetMapping("/chat")
    public String getResponse(@RequestParam String prompt) {
        return chatService.getResponse(prompt);
    }
}
```
- We create a new class annotated with @RestController.
- To reach our service, we use field injection: we declare the service field and annotate it with @Autowired.
- To create an endpoint, we introduce a new method annotated with @GetMapping("/chat"). It does nothing but receive the user prompt and pass it to the service we created in the previous step.
Step 5. Run Our Application
To start our application, we need to run the following command:

```shell
mvn spring-boot:run
```
When the application is running, we can check the result by calling the endpoint with any prompt. I used the following: "What is the difference between Spring, String, and Swing?" Don’t forget to replace whitespace with %20 if you are calling the endpoint from the terminal:
```shell
curl --location 'localhost:8080/chat?prompt=What%20is%20the%20difference%20between%20Spring%2C%20String%20and%20Swing%3F'
```
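If you’d rather not encode the prompt by hand, the JDK’s URLEncoder can build the query-string value for you. A small sketch (the class name is my own; note that URLEncoder emits + for spaces rather than %20, which is equally valid in a query string and decoded the same way by the server):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PromptUrl {
    public static void main(String[] args) {
        String prompt = "What is the difference between Spring, String and Swing?";
        // Percent-encode the value for use in a query string:
        // spaces become '+', ',' becomes %2C, '?' becomes %3F.
        String encoded = URLEncoder.encode(prompt, StandardCharsets.UTF_8);
        System.out.println("localhost:8080/chat?prompt=" + encoded);
    }
}
```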
After executing, wait a few seconds as it takes some time for Amazon Bedrock to generate a response, and voila:
Spring is a framework for building enterprise-ready Java and Spring Boot applications. It offers a comprehensive set of tools and libraries to streamline development, reduce boilerplate code, and promote code reusability. Spring is known for its inversion of control (IoC) container, dependency injection, and aspect-oriented programming (AOP) features, which help manage the complexity of large applications. It also provides support for testing, caching, and asynchronous programming. Spring applications are typically built around the model-view-controller (MVC) architectural pattern, which separates the presentation, business logic, and data access layers. Spring applications can be deployed on a variety of platforms, including servlet containers, application servers, and cloud environments.
String is a data type that represents a sequence of characters. It is a fundamental building block of many programming languages, including Java, and is used to store and manipulate text data. Strings can be accessed using various methods, such as length, index, concatenation, and comparison. Java provides a rich set of string manipulation methods and classes, including StringBuffer, StringBuilder, and StringUtils, which provide efficient ways to perform string operations.
Swing is a graphical user interface (GUI) library for Java applications. It provides a set of components and APIs that enable developers to create and manage user interfaces with a graphical interface. Swing includes classes such as JFrame, JPanel, JButton, JTextField, and JLabel
Step 6. Give More Flexibility in Model Configuration (Optional)
In the second step, we configured the default behavior to our model and provided all the necessary configurations in the application.yaml file. But can we give more flexibility to our users and let them provide their configurations? The answer is yes!
To do this, we need to use the ChatOptions interface.
Here is an example:
```java
public String getResponseFromCustomOptions(String promptText) {
    ChatOptions chatOptions = new DefaultChatOptionsBuilder()
            .model("amazon.titan-text-express-v1")
            .topK(10)
            .temperature(0.1)
            .build();
    return ChatClient.create(this.chatModel)
            .prompt(promptText)
            .options(chatOptions)
            .call()
            .content();
}
```
Here, we build the options programmatically using the same configs we set up in application.yaml and pass them to another interface, ChatClient. You may find more configuration options in the Spring AI docs.
- Model: the model ID to use. See the supported models and model features.
- Temperature: controls the randomness of the output. Values can range over [0.0, 1.0].
- TopK: the number of token choices considered when generating the next token.
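To build intuition for what temperature does, here is a tiny, self-contained sketch (not Bedrock code, and a simplification of how real samplers work): a model scores candidate next tokens, and temperature rescales those scores before they are turned into probabilities. Lower temperature concentrates probability on the top choice; higher temperature flattens the distribution. The class name and toy scores are my own:

```java
import java.util.Arrays;

public class TemperatureDemo {
    // Softmax over logits divided by temperature: lower temperature
    // sharpens the distribution, higher temperature flattens it.
    static double[] softmax(double[] logits, double temperature) {
        double[] scaled = Arrays.stream(logits).map(l -> l / temperature).toArray();
        double max = Arrays.stream(scaled).max().orElse(0.0);
        double[] exp = Arrays.stream(scaled).map(s -> Math.exp(s - max)).toArray();
        double sum = Arrays.stream(exp).sum();
        return Arrays.stream(exp).map(e -> e / sum).toArray();
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.5}; // hypothetical next-token scores
        // Low temperature: almost all probability mass on the top token.
        System.out.println(Arrays.toString(softmax(logits, 0.1)));
        // Temperature 1.0: the other tokens keep a meaningful share.
        System.out.println(Arrays.toString(softmax(logits, 1.0)));
    }
}
```

This is why the article's example sets temperature(0.1) for a focused, reproducible answer; a value closer to 1.0 would make the model's wording vary more between calls.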
Conclusion
Spring AI is a great tool that helps developers integrate smoothly with different AI models. As of writing this article, Spring AI supports a huge variety of chat models, including but not limited to OpenAI, Ollama, and DeepSeek.
I hope you found this article helpful and that it will inspire you to explore Spring AI more deeply.
Opinions expressed by DZone contributors are their own.