A framework is a body of prewritten code that supports the development process by providing ready-made components. Frameworks establish architectural patterns and structures that help speed up development. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring Framework, Drupal, Angular, Eclipse, and more.
Angular, a powerful framework for building dynamic web applications, is known for its component-based architecture. However, one aspect that often puzzles new developers is the fact that Angular components do not have a display: block style by default. This article explores the implications of this design choice, its impact on web development, and how developers can effectively work with it.

The world of front-end development is replete with frameworks that aim to provide developers with robust tools to build interactive and dynamic web applications. Among these, Angular stands out as a powerful platform, known for its comprehensive approach to constructing an application's architecture. Particularly noteworthy is the way Angular handles components, the fundamental building blocks of Angular applications.

Understanding Angular Components

In Angular, components are the fundamental building blocks that encapsulate data binding, logic, and template rendering. They play a crucial role in defining the structure and behavior of your application's interface.

Definition and Role

A component in Angular is a TypeScript class decorated with @Component(), where you define its application logic. Accompanying this class is a template, typically an HTML file, that determines the component's visual representation, and optionally CSS files for styling. The component's role is multifaceted: it manages the data and state necessary for the view, handles user interactions, and can be reused throughout the application.

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'app-my-component',
  templateUrl: './my-component.component.html',
  styleUrls: ['./my-component.component.css']
})
export class MyComponent {
  // Component logic goes here
}
```

Angular's Shadow DOM

Angular components provide style encapsulation modeled on the Shadow DOM. By default, Angular emulates Shadow DOM behavior (ViewEncapsulation.Emulated) by rewriting component styles so that styles defined in one component do not leak out and affect other parts of the application; with ViewEncapsulation.ShadowDom, it uses the browser's native Shadow DOM API to create a true boundary around the component. As a developer, it's essential to understand the structure and capabilities of Angular components to fully leverage the power of the framework. Recognizing this inherent encapsulation is particularly important when considering how components are displayed and styled within an application.

Display Block: The Non-Default in Angular Components

Angular components differ from standard HTML elements in many ways, one of which is their default display behavior. Unlike built-in HTML elements, whose browser stylesheets assign a display value such as block or inline, Angular component host elements receive no display rule at all. Because browsers have no default styles for custom elements, the host element falls back to CSS's initial value, display: inline. This absence of an explicit default is intentional and plays an important role in Angular's encapsulation philosophy and component rendering process.

Comparison With HTML Elements

Standard HTML elements like <div>, <p>, and <h1> come with default styling that can include the CSS display: block property. This means that when you drop a <div> into your markup, it naturally takes up the full width available to it, creating a "block" on the page.

```html
<!-- Standard HTML div element -->
<div>This div is a block-level element by default.</div>
```

In contrast, Angular components start without any assumptions about their display property. They don't come with block-level behavior; until you specify otherwise, the host element simply renders inline, making components essentially "display-agnostic" containers whose layout you are expected to define.
Rationale Behind Non-Block Default

Angular's choice to diverge from the typical block behavior of HTML elements is deliberate. One reason is to encourage developers to consciously decide how each component should be displayed within the application's layout. It prevents unexpected layout shifts and the overriding of global styles that may occur when components with block-level styles are introduced into existing content. By not setting a display property by default, Angular invites developers to think responsively and adapt their components to various screen sizes and layout requirements by setting explicit display styles that suit the component's purpose within the context of the application.

In the following section, we will explore how to work with the display properties of Angular components, ensuring that they fit seamlessly into your application's design with explicit and intentional styling choices.

Working With Angular's Display Styling

When building applications with Angular, understanding and properly implementing display styling is crucial for achieving the desired layout and responsiveness. Since Angular components come without a preset display rule, it's up to the developer to define how each component should be displayed within the context of the application.

1. Explicitly Setting Display Styles

You have complete control over how an Angular component is displayed by explicitly setting the CSS display property. This can be defined in the component's stylesheet, inline, or even dynamically through component logic.

```css
/* app-example.component.css */
:host {
  display: block;
}
```

```html
<!-- Inline style -->
<app-example-component style="display: block;"></app-example-component>
```

```typescript
// Component logic setting display dynamically
export class ExampleComponent {
  @HostBinding('style.display') displayStyle: string = 'block';
}
```

Choosing to set your component's display style via the stylesheet ensures that you can leverage CSS's full power, including media queries for responsiveness.

2. Responsive Design Considerations

Angular's adaptability allows you to create responsive designs by combining explicit display styles with modern CSS techniques. Using media queries, flexbox, and CSS Grid, you can responsively adjust the layout of your components based on the viewport size.

```css
/* app-example.component.css */
:host {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(150px, 1fr));
}

@media (max-width: 768px) {
  :host {
    display: block;
  }
}
```

By setting explicit display values in stylesheets and using Angular's data-binding features, you can create a responsive and adaptive user interface. This level of control over styling reflects the thoughtful consideration that Angular brings to the development process, enabling you to create sophisticated, maintainable, and scalable applications.

Next, we will wrap up our discussion and revisit the key takeaways from working with Angular components and their display styling strategies.

Conclusion

Throughout this exploration of Angular components and their display properties, it has become apparent that Angular's choice of a non-block default for components is a purposeful design decision. This approach promotes a more thoughtful application of styles and supports encapsulation, a core principle of Angular's architecture. It steers developers toward crafting intentional and adaptive layouts, a necessity in the diverse landscape of devices and screen sizes.
By understanding Angular's component architecture and the reasoning behind its display styling choices, developers are better equipped to make informed decisions. Explicit display settings and responsive design considerations are not afterthoughts but integral parts of the design and development process when working with Angular. Embracing these concepts allows developers to fully leverage the framework's capabilities, leading to well-structured, maintainable, and responsive applications that stand the test of time and technology evolution. The information provided in this article aims to guide Angular developers in harnessing these tools effectively, ensuring that the user experiences they create are as robust as the components that compose them.
MongoDB is a powerful, open-source, document-oriented database management system known for its flexibility, scalability, and wide range of features. It's part of the NoSQL family of database systems, designed to handle large volumes of data and to provide high performance, high availability, and easy scalability. MongoDB stores data as JSON-like documents of key-value pairs, which lets a Java Spring Boot application work with the same JSON structure in its code.

Spring Boot With MongoDB

We often see Spring Boot paired with SQL databases, but to use MongoDB with Spring Boot, Spring Data MongoDB offers lightweight repository-style data access and support for the MongoDB database, which reduces the complexity of the code. Assuming that you have a good understanding of MongoDB, we will take a quick look at building a Spring Boot application with Spring Data MongoDB.

Prerequisites

- Java
- IDE of your choice: IntelliJ IDEA, Spring Tool Suite (STS), or Eclipse

6 Steps to Creating a Spring Boot Application With Spring REST and Spring Data MongoDB

In this article, I have used the MongoDB Atlas database, a multi-cloud developer database service that allows you to create and maintain databases in the cloud, free of cost. I also used MongoDB Compass, a GUI tool to visualize the database. If you don't have an account for MongoDB Atlas, you can try it for free.

Step 1: Create a Spring Boot Application With Spring Initializr

First, create a Spring Boot application using Spring Initializr, which generates a Spring Boot project with the selected dependencies. Once you have selected the fields as shown in the image below, click on Generate, and import the extracted project into your IDE.

Project structure: (see the generated project layout)

Step 2: Configure the Database

To configure MongoDB in the Spring Boot application, add the database URL to the src/main/resources/application.properties file as shown below:

```properties
spring.data.mongodb.uri = mongodb+srv://username:password@student.port.mongodb.net/student
```

Model: MongoDB is a non-relational, document-oriented database. We have created Student and Address Java models to store objects. The @Document annotation provides a custom collection name, and @Field provides a custom key name for a field. In the code below, we have included fields with different data types like Date, List, etc.
Student.java

```java
package com.example.studentmanagementsystem.model;

import com.fasterxml.jackson.annotation.JsonFormat;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;

import java.time.LocalDate;
import java.util.List;

@Document("Student")
public class Student {

    @Id
    @Indexed(unique = true)
    private String id;
    private String name;
    private double cgpa;

    @Field("has_arrears")
    private boolean hasArrears;

    @Field("course_list")
    private List<String> courseList;

    private Address address;

    @Field("enrollment_date")
    @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "dd-MM-yyyy")
    private LocalDate enrollmentDate;

    // Getters and setters omitted for brevity
}
```

Address.java

```java
package com.example.studentmanagementsystem.model;

import org.springframework.data.mongodb.core.mapping.Field;

// Embedded document: no JPA annotations are needed for MongoDB mapping.
public class Address {

    private String street;
    private String city;
    private String state;
    private String country;

    @Field("zip_code")
    private String zipcode;

    // Getters and setters omitted for brevity
}
```

Step 3: Create the Repository

We have created an interface, StudentRepository, which extends MongoRepository. MongoRepository is an interface provided by Spring Data that offers pre-defined CRUD operations and automatic mapping. CRUD operations are simply a persistent, structured way for services to create, read, update, and delete data. The Spring @Repository annotation indicates that the class provides the mechanism for storage, retrieval, search, update, and delete operations on objects, and it acts as the persistence layer. Let's create findBy methods to fetch data from the database as shown in the code below:

```java
package com.example.studentmanagementsystem.repository;

import com.example.studentmanagementsystem.model.Student;
import org.springframework.data.mongodb.repository.Aggregation;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.stereotype.Repository;

import java.time.LocalDate;
import java.util.List;

@Repository
public interface StudentRepository extends MongoRepository<Student, String> {

    List<Student> findByNameAndCgpa(String name, Double cgpa);

    Student findByAddress_City(String city);

    List<Student> findByAddress_CountryOrHasArrears(String country, Boolean hasArrears);

    List<Student> findByEnrollmentDateBetweenOrderByEnrollmentDate(LocalDate startDate, LocalDate endDate);

    List<Student> findByCgpaGreaterThanEqual(Double cgpa);

    Student findByNameIgnoreCase(String name);

    List<Student> findByCgpaOrderByNameDesc(Double cgpa);

    // Aggregation example for the overall average CGPA
    @Aggregation("{ $group : { _id : null, averageCgpa : { $avg : '$cgpa' } } }")
    Double avgCgpa();
}
```
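Derived query methods such as findByAddress_City are generated from the method name alone. When a derived name becomes hard to read, Spring Data MongoDB also supports spelling the query out with the @Query annotation. Below is a minimal sketch of that alternative; the StudentQueryRepository and findByCity names are illustrative, and the JSON criteria shown is my assumption of how the derived method translates:

```java
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;

public interface StudentQueryRepository extends MongoRepository<Student, String> {

    // Explicit MongoDB JSON criteria; ?0 binds the first method argument.
    // Matches documents whose embedded address.city equals the given value.
    @Query("{ 'address.city' : ?0 }")
    Student findByCity(String city);
}
```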
Step 4: Create a Service

Let's build a service layer for the Student repository in order to communicate with the data in the MongoDB database. We will create a few methods that use CRUD operations such as insert, retrieve, and delete.

```java
package com.example.studentmanagementsystem.service;

import com.example.studentmanagementsystem.model.Student;
import com.example.studentmanagementsystem.repository.StudentRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.time.LocalDate;
import java.util.List;
import java.util.Optional;

@Service
public class StudentService {

    @Autowired
    private StudentRepository studentRepo;

    public void addStudentData(Student studentDetails) {
        studentRepo.insert(studentDetails);
    }

    public void addMultipleStudentsData(List<Student> studentsDetail) {
        studentRepo.insert(studentsDetail);
    }

    public List<Student> fetchAllStudentsData() {
        return studentRepo.findAll();
    }

    public Optional<Student> fetchStudentDataById(String id) {
        return studentRepo.findById(id);
    }

    public List<Student> fetchStudentDataByNameAndCgpa(String name, Double cgpa) {
        return studentRepo.findByNameAndCgpa(name, cgpa);
    }

    public Student fetchStudentDataByCity(String city) {
        return studentRepo.findByAddress_City(city);
    }

    public List<Student> fetchStudentDataByCountryOrArrears(String country, Boolean hasArrears) {
        return studentRepo.findByAddress_CountryOrHasArrears(country, hasArrears);
    }

    public List<Student> fetchStudentDataByCgpa(Double cgpa) {
        return studentRepo.findByCgpaGreaterThanEqual(cgpa);
    }

    public List<Student> fetchStudentDataByEnrollmentDate(LocalDate startDate, LocalDate endDate) {
        return studentRepo.findByEnrollmentDateBetweenOrderByEnrollmentDate(startDate, endDate);
    }

    public List<Student> fetchStudentDataByCgpaOrderByName(Double cgpa) {
        return studentRepo.findByCgpaOrderByNameDesc(cgpa);
    }

    public Double fetchAverageCgpa() {
        return studentRepo.avgCgpa();
    }

    public Student fetchStudentDataByName(String name) {
        return studentRepo.findByNameIgnoreCase(name);
    }

    public void deleteStudentData(Student studentDetails) {
        studentRepo.delete(studentDetails);
    }

    public void deleteAllStudentData() {
        studentRepo.deleteAll();
    }
}
```
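A side note on the service above: it uses @Autowired field injection, which is what the tutorial shows; constructor injection is a commonly preferred alternative because it keeps the dependency final and easier to test. A minimal sketch of the same service wired through its constructor:

```java
@Service
public class StudentService {

    private final StudentRepository studentRepo;

    // With a single constructor, Spring injects the repository
    // automatically; no @Autowired annotation is required.
    public StudentService(StudentRepository studentRepo) {
        this.studentRepo = studentRepo;
    }

    // ... same methods as above, delegating to studentRepo
}
```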
Step 5: Create a Controller

Next, build CRUD REST API calls for the Student resource to fetch, insert, or delete resources in the MongoDB database. The Spring @RestController annotation is used to create RESTful web services; it combines the @Controller and @ResponseBody annotations, making it easy to write handler methods.

```java
package com.example.studentmanagementsystem.controller;

import com.example.studentmanagementsystem.model.Student;
import com.example.studentmanagementsystem.service.StudentService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.format.annotation.DateTimeFormat;
import org.springframework.web.bind.annotation.*;

import java.time.LocalDate;
import java.util.List;
import java.util.Optional;

@RestController
@RequestMapping("/student")
public class StudentController {

    @Autowired
    private StudentService studentService;

    @PostMapping("/addStudent")
    public void populateStudentData(@RequestBody Student student) {
        studentService.addStudentData(student);
    }

    @PostMapping("/addStudentsData")
    public void populateStudentsData(@RequestBody List<Student> students) {
        studentService.addMultipleStudentsData(students);
    }

    @GetMapping("/getAllStudentsData")
    public List<Student> fetchAllStudentsData() {
        return studentService.fetchAllStudentsData();
    }

    @GetMapping("/getStudentById/{id}")
    public Optional<Student> fetchStudentDataById(@PathVariable String id) {
        return studentService.fetchStudentDataById(id);
    }

    @GetMapping("/getStudentByNameAndCgpa")
    public List<Student> fetchStudentDataByNameAndCgpa(@RequestParam String name, @RequestParam Double cgpa) {
        return studentService.fetchStudentDataByNameAndCgpa(name, cgpa);
    }

    @GetMapping("/getStudentByCity/{city}")
    public Student fetchStudentDataByCity(@PathVariable String city) {
        return studentService.fetchStudentDataByCity(city);
    }

    @GetMapping("/getStudentByCountryOrArrears")
    public List<Student> fetchStudentDataByCountryOrArrears(@RequestParam String country, @RequestParam Boolean hasArrears) {
        return studentService.fetchStudentDataByCountryOrArrears(country, hasArrears);
    }

    @GetMapping("/getStudentByEnrollmentDate")
    public List<Student> fetchStudentDataByEnrollmentDate(
            @RequestParam @DateTimeFormat(pattern = "dd-MM-yyyy") LocalDate startDate,
            @RequestParam @DateTimeFormat(pattern = "dd-MM-yyyy") LocalDate endDate) {
        return studentService.fetchStudentDataByEnrollmentDate(startDate, endDate);
    }

    @GetMapping("/getStudentByName")
    public Student fetchStudentDataByName(@RequestParam String name) {
        return studentService.fetchStudentDataByName(name);
    }

    @GetMapping("/getStudentByCgpa")
    public List<Student> fetchStudentDataByCgpa(@RequestParam Double cgpa) {
        return studentService.fetchStudentDataByCgpa(cgpa);
    }

    @GetMapping("/getAvgCgpa")
    public Double fetchStudentAvgCgpa() {
        return studentService.fetchAverageCgpa();
    }

    @DeleteMapping("/deleteStudent")
    public void deleteStudentData(@RequestBody Student student) {
        studentService.deleteStudentData(student);
    }

    @DeleteMapping("/deleteAllStudents")
    public void deleteAllStudentsData() {
        studentService.deleteAllStudentData();
    }
}
```

Step 6: Testing

Now, let's test one of the API calls in Postman to fetch data from the database, as shown in the image below. The HTTP method below returns all the student information as an array of JSON objects.

- Method: GET
- Request URL: http://localhost:8080/student/getAllStudentsData

We have built a Spring Boot application backed by the MongoDB database, with CRUD operations for creating, deleting, and fetching data, including several different ways to fetch it. Spring Data MongoDB gives us built-in methods for CRUD operations, which reduces the code complexity of the persistence layer.
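Before we leave the example: the Postman check in Step 6 can also be automated with a MockMvc test. Below is a minimal sketch, assuming the MongoDB instance configured in application.properties is reachable; the test class and method names are illustrative:

```java
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

@SpringBootTest
@AutoConfigureMockMvc
class StudentControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void getAllStudentsDataReturnsOk() throws Exception {
        // Calls the same endpoint tested in Postman and expects HTTP 200.
        mockMvc.perform(get("/student/getAllStudentsData"))
                .andExpect(status().isOk());
    }
}
```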
In this example, we'll learn about the Strategy pattern in Spring. We'll cover different ways to inject strategies, starting from a simple list-based approach and moving to a more efficient map-based method. To illustrate the concept, we'll use the three Unforgivable curses from the Harry Potter series: Avada Kedavra, Crucio, and Imperio.

What Is the Strategy Pattern?

The Strategy pattern is a design principle that allows you to switch between different algorithms or behaviors at runtime. It helps make your code flexible and adaptable by letting you plug in different strategies without changing the core logic of your application. This approach is useful in scenarios where you have different implementations for a specific task or piece of functionality and want to make your system more adaptable to changes. It promotes a more modular code structure by separating the algorithmic details from the main logic of your application.

Step 1: Implementing Strategies

Picture yourself as a dark wizard who strives to master the power of Unforgivable curses with Spring. Our mission is to implement all three curses: Avada Kedavra, Crucio, and Imperio. After that, we will switch between curses (strategies) at runtime. Let's start with our strategy interface:

```java
public interface CurseStrategy {

    String useCurse();

    String curseName();
}
```

In the next step, we need to implement all the Unforgivable curses:

```java
@Component
public class CruciatusCurseStrategy implements CurseStrategy {

    @Override
    public String useCurse() {
        return "Attack with Crucio!";
    }

    @Override
    public String curseName() {
        return "Crucio";
    }
}

@Component
public class ImperiusCurseStrategy implements CurseStrategy {

    @Override
    public String useCurse() {
        return "Attack with Imperio!";
    }

    @Override
    public String curseName() {
        return "Imperio";
    }
}

@Component
public class KillingCurseStrategy implements CurseStrategy {

    @Override
    public String useCurse() {
        return "Attack with Avada Kedavra!";
    }

    @Override
    public String curseName() {
        return "Avada Kedavra";
    }
}
```

Step 2: Inject Curses as a List

Spring brings a touch of magic that allows us to inject multiple implementations of an interface as a List, so we can use it to inject strategies and switch between them. But let's first create the foundation: the Wizard interface.

```java
public interface Wizard {

    String castCurse(String name);
}
```

And we can inject our curses (strategies) into the Wizard and filter for the desired one:

```java
@Service
public class DarkArtsWizard implements Wizard {

    private final List<CurseStrategy> curses;

    public DarkArtsWizard(List<CurseStrategy> curses) {
        this.curses = curses;
    }

    @Override
    public String castCurse(String name) {
        return curses.stream()
            .filter(s -> name.equals(s.curseName()))
            .findFirst()
            .orElseThrow(UnsupportedCurseException::new)
            .useCurse();
    }
}
```

UnsupportedCurseException is also created, for when the requested curse does not exist.
```java
public class UnsupportedCurseException extends RuntimeException {
}
```

And we can verify that curse casting is working:

```java
@SpringBootTest
class DarkArtsWizardTest {

    @Autowired
    private DarkArtsWizard wizard;

    @Test
    public void castCurseCrucio() {
        assertEquals("Attack with Crucio!", wizard.castCurse("Crucio"));
    }

    @Test
    public void castCurseImperio() {
        assertEquals("Attack with Imperio!", wizard.castCurse("Imperio"));
    }

    @Test
    public void castCurseAvadaKedavra() {
        assertEquals("Attack with Avada Kedavra!", wizard.castCurse("Avada Kedavra"));
    }

    @Test
    public void castCurseExpelliarmus() {
        assertThrows(UnsupportedCurseException.class, () -> wizard.castCurse("Abrakadabra"));
    }
}
```

Another popular approach is to define a canUse method instead of curseName. This returns a boolean and allows more complex filtering, like:

```java
public interface CurseStrategy {

    String useCurse();

    boolean canUse(String name, String wizardType);
}

@Component
public class CruciatusCurseStrategy implements CurseStrategy {

    @Override
    public String useCurse() {
        return "Attack with Crucio!";
    }

    @Override
    public boolean canUse(String name, String wizardType) {
        return "Crucio".equals(name) && "Dark".equals(wizardType);
    }
}

@Service
public class DarkArtsWizard implements Wizard {

    private final List<CurseStrategy> curses;

    public DarkArtsWizard(List<CurseStrategy> curses) {
        this.curses = curses;
    }

    @Override
    public String castCurse(String name) {
        return curses.stream()
            .filter(s -> s.canUse(name, "Dark"))
            .findFirst()
            .orElseThrow(UnsupportedCurseException::new)
            .useCurse();
    }
}
```

Pros: Easy to implement.
Cons: Runs through a loop on every call, which can lead to slower execution times and increased processing overhead.

Step 3: Inject Strategies as a Map

We can easily address the cons from the previous section. Spring lets us inject a Map with bean names as keys and instances as values. It simplifies the code and improves its efficiency.

```java
@Service
public class DarkArtsWizard implements Wizard {

    private final Map<String, CurseStrategy> curses;

    public DarkArtsWizard(Map<String, CurseStrategy> curses) {
        this.curses = curses;
    }

    @Override
    public String castCurse(String name) {
        CurseStrategy curse = curses.get(name);
        if (curse == null) {
            throw new UnsupportedCurseException();
        }
        return curse.useCurse();
    }
}
```

This approach has a downside: Spring injects the bean name as the key for the Map, so strategy names are the same as the bean names, like cruciatusCurseStrategy. This dependency on Spring's internal bean names might cause problems if Spring's naming conventions or our class names change without notice. Let's check that we're still capable of casting those curses:

```java
@SpringBootTest
class DarkArtsWizardTest {

    @Autowired
    private DarkArtsWizard wizard;

    @Test
    public void castCurseCrucio() {
        assertEquals("Attack with Crucio!", wizard.castCurse("cruciatusCurseStrategy"));
    }

    @Test
    public void castCurseImperio() {
        assertEquals("Attack with Imperio!", wizard.castCurse("imperiusCurseStrategy"));
    }

    @Test
    public void castCurseAvadaKedavra() {
        assertEquals("Attack with Avada Kedavra!", wizard.castCurse("killingCurseStrategy"));
    }

    @Test
    public void castCurseExpelliarmus() {
        assertThrows(UnsupportedCurseException.class, () -> wizard.castCurse("Crucio"));
    }
}
```

Pros: No loops.
Cons: Dependency on bean names, which makes the code less maintainable and more prone to errors if names are changed or refactored.
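A small aside not covered in the article: the bean-name coupling can be softened by naming beans explicitly, since @Component accepts a bean name. A minimal sketch, assuming we want the map key to equal the curse name:

```java
// Naming the bean explicitly makes the injected Map key predictable:
// the key becomes "Crucio" instead of "cruciatusCurseStrategy".
@Component("Crucio")
public class CruciatusCurseStrategy implements CurseStrategy {

    @Override
    public String useCurse() {
        return "Attack with Crucio!";
    }

    @Override
    public String curseName() {
        return "Crucio";
    }
}
```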
Step 4: Inject a List and Convert It to a Map

The cons of Map injection can be easily eliminated if we inject a List and convert it to a Map:

```java
@Service
public class DarkArtsWizard implements Wizard {

    private final Map<String, CurseStrategy> curses;

    public DarkArtsWizard(List<CurseStrategy> curses) {
        this.curses = curses.stream()
            .collect(Collectors.toMap(CurseStrategy::curseName, Function.identity()));
    }

    @Override
    public String castCurse(String name) {
        CurseStrategy curse = curses.get(name);
        if (curse == null) {
            throw new UnsupportedCurseException();
        }
        return curse.useCurse();
    }
}
```

With this approach, we can go back to using curseName instead of Spring's bean names as Map keys (strategy names).

Step 5: @Autowired in an Interface

Spring supports autowiring into methods; the simplest example of this is setter injection. This feature allows us to use @Autowired in a default method of an interface, so we can register each CurseStrategy with the Wizard without needing to implement a registration method in every strategy implementation. Let's update the Wizard interface by adding a registerCurse method:

```java
public interface Wizard {

    String castCurse(String name);

    void registerCurse(String curseName, CurseStrategy curse);
}
```

This is the Wizard implementation:

```java
@Service
public class DarkArtsWizard implements Wizard {

    private final Map<String, CurseStrategy> curses = new HashMap<>();

    @Override
    public String castCurse(String name) {
        CurseStrategy curse = curses.get(name);
        if (curse == null) {
            throw new UnsupportedCurseException();
        }
        return curse.useCurse();
    }

    @Override
    public void registerCurse(String curseName, CurseStrategy curse) {
        curses.put(curseName, curse);
    }
}
```

Now, let's update the CurseStrategy interface by adding a method with the @Autowired annotation:

```java
public interface CurseStrategy {

    String useCurse();

    String curseName();

    @Autowired
    default void registerMe(Wizard wizard) {
        wizard.registerCurse(curseName(), this);
    }
}
```

At the moment of dependency injection, we register our curse with the Wizard.

Pros: No loops, and no reliance on internal Spring bean names.
Cons: No cons, pure dark magic.

Conclusion

In this article, we explored the Strategy pattern in the context of Spring. We assessed different strategy injection approaches and demonstrated an optimized solution using Spring's capabilities. The full source code for this article can be found on GitHub.
In the world of Spring Boot, making HTTP requests to external services is a common task. Traditionally, developers have relied on RestTemplate for this purpose. However, with the evolution of the Spring Framework, a new and more powerful way to handle HTTP requests emerged: the WebClient. In Spring Boot 3.2, a new addition called RestClient picks up WebClient's fluent API style, providing a more intuitive and modern approach to consuming RESTful services.

Origins of RestTemplate

RestTemplate has been a staple in the Spring ecosystem for years. It's a synchronous client for making HTTP requests and processing responses. With RestTemplate, developers could easily interact with RESTful APIs using familiar Java syntax. However, as applications became more asynchronous and non-blocking, the limitations of RestTemplate started to become apparent. Here's a basic example of using RestTemplate to fetch data from an external API:

```java
var restTemplate = new RestTemplate();
var response = restTemplate.getForObject("https://api.example.com/data", String.class);
System.out.println(response);
```

Introduction of WebClient

With the advent of Spring WebFlux, an asynchronous, non-blocking web framework, WebClient was introduced as a modern alternative to RestTemplate. WebClient embraces reactive principles, making it well-suited for building reactive applications. It offers support for both synchronous and asynchronous communication, along with a fluent API for composing requests. Here's how you would use WebClient to achieve the same HTTP request:

```java
var webClient = WebClient.create();
var response = webClient.get()
    .uri("https://api.example.com/data")
    .retrieve()
    .bodyToMono(String.class);
response.subscribe(System.out::println);
```

Enter RestClient in Spring Boot 3.2

Spring Boot 3.2 brings RestClient, a synchronous client with a fluent API modeled on WebClient's. RestClient simplifies the process of making HTTP requests by providing a more intuitive API and reducing boilerplate code, while staying on the familiar blocking infrastructure that RestTemplate uses, so no reactive stack is required. Let's take a look at how RestClient can be used:

```java
var response = restClient
    .get()
    .uri("https://api.example.com/data")
    .retrieve()
    .toEntity(String.class);
System.out.println(response.getBody());
```

With RestClient, the code becomes more concise and readable. The RestClient handles the underlying HTTP client setup internally, abstracting away the complexities of setting up and managing HTTP clients.
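The snippet above assumes a restClient instance already exists. As a minimal sketch (reusing the example URL from earlier in this article), one can be built once and reused:

```java
import org.springframework.web.client.RestClient;

// Build a reusable client with a base URL and a default header.
RestClient restClient = RestClient.builder()
        .baseUrl("https://api.example.com")
        .defaultHeader("Accept", "application/json")
        .build();

// Subsequent requests only need the relative path.
String body = restClient.get()
        .uri("/data")
        .retrieve()
        .body(String.class);
```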
Comparing RestClient With RestTemplate

Let's compare RestClient with RestTemplate by looking at some common scenarios.

Creation

RestTemplate:

```java
var restTemplate = new RestTemplate();
```

RestClient:

```java
var restClient = RestClient.create();
```

Or we can build on an existing RestTemplate, reusing its configuration:

```java
var myOldRestTemplate = new RestTemplate();
var restClient = RestClient.builder(myOldRestTemplate).build();
```

GET Request

RestTemplate:

```java
var response = restTemplate.getForObject("https://api.example.com/data", String.class);
```

RestClient:

```java
var response = restClient
    .get()
    .uri("https://api.example.com/data")
    .retrieve()
    .toEntity(String.class);
```

POST Request

RestTemplate:

```java
ResponseEntity<String> response = restTemplate.postForEntity("https://api.example.com/data", request, String.class);
```

RestClient:

```java
var response = restClient
    .post()
    .uri("https://api.example.com/data")
    .body(request)
    .retrieve()
    .toEntity(String.class);
```

Error Handling

RestTemplate:

```java
try {
    String response = restTemplate.getForObject("https://api.example.com/data", String.class);
} catch (RestClientException ex) {
    // Handle exception
}
```

RestClient:

```java
String body = restClient.get()
    .uri("https://api.example.com/this-url-does-not-exist")
    .retrieve()
    .onStatus(HttpStatusCode::is4xxClientError, (request, response) -> {
        throw new MyCustomRuntimeException(response.getStatusCode(), response.getHeaders());
    })
    .body(String.class);
```

As seen in these examples, RestClient offers a more streamlined approach to making HTTP requests compared to RestTemplate. The Spring documentation gives many other examples.

Conclusion

In Spring Boot 3.2, RestClient emerges as a modern replacement for RestTemplate, offering a more intuitive and concise way to consume RESTful services. With a fluent API modeled on WebClient's, RestClient simplifies the process of making HTTP requests while keeping a synchronous programming model. Developers can now enjoy improved productivity and cleaner code when interacting with external APIs in their Spring Boot applications. It's recommended to transition from RestTemplate to RestClient for a more efficient and future-proof codebase.
In this article, learn how the Dapr project can reduce the cognitive load on Java developers and decrease application dependencies.

Coding Java applications for the cloud requires not only a deep understanding of distributed systems, cloud best practices, and common patterns, but also an understanding of the Java ecosystem: knowing how to combine many libraries to get things working. Tools and frameworks like Spring Boot have significantly impacted developer experience by curating commonly used Java libraries, for example, logging (Log4j), parsing different formats (Jackson), serving HTTP requests (Tomcat, Netty, the reactive stack), etc.

While Spring Boot provides a set of abstractions, best practices, and common patterns, there are still two things that developers must know to write distributed applications. First, they must clearly understand which dependencies (clients/drivers) they must add to their applications depending on the available infrastructure. For example, they need to understand which database or message broker they need, and which driver or client to add to their classpath to connect to it. Second, they must know how to configure that connection: the credentials, connection pools, retries, and other critical parameters for the application to work as expected. Understanding these configuration parameters pushes developers to learn how these components (databases, message brokers, configuration stores, identity management tools) work, to a degree that goes beyond their responsibility of writing business logic for their applications. Learning best practices, common patterns, and how a large set of application infrastructure components work is not bad, but it takes a lot of development time away from building important features for your application.

In this short article, we will look into how the Dapr project can help Java developers not only implement best practices and distributed patterns out of the box, but also reduce the application's dependencies and the amount of knowledge required to code their applications. We will be looking at a simple example that you can find here. This Pizza Store application demonstrates some basic behaviors that most business applications can relate to. The application is composed of three services that allow customers to place pizza orders in the system. The application stores orders in a database, in this case PostgreSQL, and uses Kafka to exchange events between the services to cover async notifications. All the asynchronous communications between the services are marked with red dashed arrows in the diagram. Let's look at how to implement this with Spring Boot, and then let's add Dapr.

The Spring Boot Way

Using Spring Boot, developers can create these three services and start writing the business logic to process the orders placed by customers. Developers can use http://start.spring.io to select which dependencies their applications will have. For example, the Pizza Store Service will need Spring Web (to host and serve the frontend and some REST endpoints) and Spring Boot Actuator if we aim to run these services on Kubernetes. But as with any application, if we want to store data, we will need a database or other persistent storage, and we have many options to select from. If you look into Spring Data, you can see that Spring Data JPA provides an abstraction over SQL (relational) databases.
Alongside Spring Data JPA, there are also NoSQL options and different layers of abstraction, depending on what your application is doing. If you decide to use Spring Data JPA, you are still responsible for adding the correct database driver to the application classpath; in the case of PostgreSQL, you can select it from the same list. We face a similar dilemma when we think about exchanging asynchronous messages between the application's services; here, too, the list of options is long. Because we are developers and want to get things moving, we must make some choices. Let's use PostgreSQL as our database and Kafka as our messaging system/broker.

I am a true believer in the Spring Boot programming model, including the abstraction layers and auto-configurations. However, as a developer, you are still responsible for ensuring that the right PostgreSQL JDBC driver and Kafka client are included in your services' classpath. While this is quite common in the Java space, there are a few drawbacks when dealing with larger applications that might consist of tens or hundreds of services.

Application and Infrastructure Dependency Drawbacks

Looking at our simple application, we can spot a couple of challenges that application and operations teams must deal with when taking this application to production. Let's start with application dependencies and their relationship to the infrastructure components we have decided to use.

The Kafka client included in all services needs to be kept in sync with the version of the Kafka instance that the application uses. This dependency pushes developers to ensure they use the same Kafka instance version for development purposes. If we want to upgrade the Kafka instance, we need to upgrade the client in every service, which means releasing every service that includes the Kafka client again. This is particularly hard because Kafka tends to be used as a shared component across different services.

Databases such as PostgreSQL can be hidden behind a service and never exposed to other services directly. But imagine two or more services need to store data; if they choose different database versions, operations teams will need to deal with different stack versions, configurations, and maybe certifications for each version. Aligning on a single version, say PostgreSQL 16.x, once again couples all the services that need to store or read persistent data to their respective infrastructure components.

While versions, clients, and drivers create this coupling between applications and the available infrastructure, understanding complex configurations and their impact on application behavior is a tougher challenge still. Spring Boot does a fantastic job of ensuring that all configurations can be externalized and consumed from environment variables or property files, and while this aligns perfectly with the twelve-factor app principles and container technologies such as Docker, defining the values of these configuration parameters is the core problem. Developers using different connection pool sizes, or retry and reconnection mechanisms configured differently across environments, are still, to this day, common issues when moving the same application from development environments to production. Learning how to configure Kafka and PostgreSQL for this example will depend a lot on how many concurrent orders the application receives and how many resources (CPU and memory) the application has available to run.
Once again, learning the specifics of each infrastructure component is not a bad thing for developers, but it gets in the way of implementing new services and new functionality for the store.

Decoupling Infrastructure Dependencies and Reusing Best Practices With Dapr

What if we could extract best practices, configurations, and the decision of which infrastructure components our applications need behind a set of APIs that application developers can consume, without worrying about which driver or client they need, or how to configure connections so they are efficient, secure, and work across environments? This is not a new idea. Any company dealing with complex infrastructure and multiple services that need to connect to it will sooner or later implement an abstraction layer on top of common services that developers can use. The main problem is that building those abstractions and then maintaining them over time is hard, costs development time, and tends to get bypassed by developers who don't agree with or like the features provided.

This is where Dapr offers a set of building blocks to decouple your applications from infrastructure. Dapr's Building Block APIs allow you to set up different component implementations and configurations without exposing developers to the hassle of choosing the right drivers or clients to connect to the infrastructure. Developers focus on building their applications by just consuming APIs. As shown in the diagram, developers don't need to know about "infrastructure land," as they can consume and trust APIs to, for example, store and retrieve data and publish and subscribe to events. This separation of concerns allows operations teams to provide consistent configurations across environments, where we may want to use another version of PostgreSQL or Kafka, or a cloud provider service such as Google Pub/Sub. Dapr uses the component model to define these configurations without affecting the application's behavior and without pushing developers to worry about any of those parameters or the client/driver versions they need to use.

Dapr for Spring Boot Developers

So, how does this look in practice? Dapr typically deploys to Kubernetes, meaning you would need a Kubernetes cluster to install it. Learning how Dapr works and how to configure it might be too complicated and not related at all to developer tasks like building features. For development purposes, you can use the Dapr CLI, a command-line tool designed to be language agnostic, allowing you to run Dapr locally for your applications. I like the Dapr CLI, but once again, you will need to learn how to use it, how to configure it, and how it connects to your application. As a Spring Boot developer, adding a new command-line tool feels strange, as it is not integrated with the tools I am used to or with my IDE. If I see that I need to download a new CLI, or that I depend on deploying my apps into a Kubernetes cluster even to test them, I would probably step away and look for other tools and projects. That is why the Dapr community has worked so hard to integrate with Spring Boot more natively. These integrations tap seamlessly into the Spring Boot ecosystem without adding new tools or steps to your daily work. Let's see how this works with concrete examples. You can add the following dependency to your Spring Boot application to integrate Dapr with Testcontainers:
```xml
<dependency>
    <groupId>io.diagrid.dapr</groupId>
    <artifactId>dapr-spring-boot-starter</artifactId>
    <version>0.10.7</version>
</dependency>
```

View the repository here. Testcontainers (now part of Docker) is a popular tool in Java for working with containers, primarily in tests, specifically integration tests that use containers to set up complex infrastructure. All three Pizza Spring Boot services have this same dependency, which allows developers to let their Spring Boot applications consume the Dapr Building Block APIs during local development, without any Kubernetes, YAML, or configuration needed.

Once you have this dependency in place, you can start using the Dapr SDK to interact with the Dapr Building Block APIs. For example, if you want to store an incoming order, you can use the Statestore API's saveState() call, where `STATESTORE_NAME` is the name of a configured Statestore component, `KEY` is the key under which we want to store the order, and `order` is the order that we received from the Pizza Store frontend. Similarly, if you want to publish events to other services, you can use the PubSub API: the publishEvent() call publishes an event containing the `order` as its payload to the Dapr PubSub component named by PUBSUB_NAME, on the specific topic indicated by PUBSUB_TOPIC. (A sketch of both calls follows at the end of this section.)

Now, how is this going to work? How does Dapr store state when we call the saveState() API, and how are events published when we call publishEvent()? By default, the Dapr SDK will try to call the Dapr API endpoints on localhost, as Dapr was designed to run beside our applications. For development purposes, to enable Dapr for your Spring Boot application, you can use one of the two built-in profiles: DaprBasicProfile or DaprFullProfile. The Basic profile provides access to the Statestore and PubSub APIs, but more advanced features such as Actors and Workflows will not work. If you want access to all the Dapr Building Blocks, use the Full profile. Both of these profiles use in-memory implementations for the Dapr components, making your applications faster to bootstrap.

The dapr-spring-boot-starter was created to minimize the amount of Dapr knowledge developers need before they can start using it in their applications. For this reason, besides the dependency mentioned above, a test configuration is required in order to select which Dapr profile we want to use. Since Spring Boot 3.1.x, you can define a Spring Boot application that is used for test purposes; the idea is to allow tests to set up your application with everything it needs to be tested. From within the test packages (`src/test/<package>`), you can define a new @SpringBootApplication class, in this case configured to use a Dapr profile. This class is just a wrapper for our PizzaStore application that adds a configuration including the DaprBasicProfile. With the DaprBasicProfile enabled, whenever we start our application for testing purposes, all the components that the Dapr APIs need will be started for our application to consume. If you need more advanced Dapr setups, you can always create your own domain-specific Dapr profiles.

Another advantage of using these test configurations is that we can also start the application with them for local development by running `mvn spring-boot:test-run`. You can then see how Testcontainers transparently starts the `daprio/daprd` container.
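Here is the promised sketch of the two calls described above, using the Dapr Java SDK's DaprClient. The component and topic names, the Order record, and the surrounding class are illustrative assumptions, not code from the Pizza Store repository:

```java
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;

public class OrderOperations {

    // Illustrative component/topic names; in the real application these
    // would match the configured Dapr components.
    private static final String STATESTORE_NAME = "kvstore";
    private static final String PUBSUB_NAME = "pubsub";
    private static final String PUBSUB_TOPIC = "topic";

    // Minimal order payload for the sketch.
    public record Order(String id, String item) {}

    private final DaprClient client = new DaprClientBuilder().build();

    public void storeOrder(String key, Order order) {
        // Persist the order via the Statestore API; block on the reactive call.
        client.saveState(STATESTORE_NAME, key, order).block();
    }

    public void publishOrderEvent(Order order) {
        // Publish the order as the event payload on the configured topic.
        client.publishEvent(PUBSUB_NAME, PUBSUB_TOPIC, order).block();
    }
}
```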
As a developer, how that container is configured is not important as long as we can consume the Dapr APIs. I strongly recommend checking out the full example here, where you can run the application on Kubernetes with Dapr installed, or start each service and test locally using Maven. If this example is too complex for you, I recommend these blog posts, where I create a very simple application from scratch:

- Using the Dapr StateStore API with Spring Boot
- Deploying and configuring our simple application in Kubernetes
A sign of a good understanding of a programming language is not simply whether one is knowledgeable about the language's functionality, but whether one knows why that functionality exists. Without knowing this "why," a developer runs the risk of using functionality in situations where its use might not be ideal, or should even be avoided entirely! The case in point for this article is the lateinit keyword in Kotlin. Its presence in the programming language is more or less a way to resolve what would otherwise be contradictory goals for Kotlin:

1. Maintain compatibility with existing Java code and make it easy to transcribe from Java to Kotlin. If Kotlin were too dissimilar to Java, and if the interaction between Kotlin and Java code bases were too much of a hassle, then adoption of the language might never have taken off.
2. Prevent developers from declaring class members without explicitly declaring their value, either directly or via constructors. In Java, doing so assigns a default value, and this leaves non-primitives, which are assigned a null value, at risk of provoking a NullPointerException if they are accessed before a value is provided.

The problem here is this: what happens when it's impossible to declare a class field's value immediately? Take, for example, the extension model in the JUnit 5 testing framework. Extensions are a tool for creating reusable code that conducts setup and cleanup actions before and after the execution of each or all tests. Below is an example of an extension whose purpose is to clear out all designated database tables after the execution of each test, via a Spring bean that serves as the database interface:

```java
public class DBExtension implements BeforeAllCallback, AfterEachCallback {

    private NamedParameterJdbcOperations jdbcOperations;

    @Override
    public void beforeAll(ExtensionContext extensionContext) {
        jdbcOperations = SpringExtension.getApplicationContext(extensionContext)
            .getBean(NamedParameterJdbcTemplate.class);
        clearDB();
    }

    @Override
    public void afterEach(ExtensionContext extensionContext) throws Exception {
        clearDB();
    }

    private void clearDB() {
        Stream.of("table_one", "table_two", "table_three").forEach((tableName) ->
            jdbcOperations.update("TRUNCATE " + tableName, new MapSqlParameterSource())
        );
    }
}
```

(NOTE: Yes, using the @Transactional annotation is possible for Spring Boot tests that conduct database transactions, but some use cases make automated transaction roll-backs impossible; for example, when a separate thread is spawned to execute the code for the database interactions.)

Given that the field jdbcOperations relies on the Spring framework loading the proper database interface bean when the application is loaded, it cannot be assigned any substantial value upon declaration. Thus, it receives an implicit default value of null until the beforeAll() function is executed.
As described above, this approach is forbidden in Kotlin, so the developer has three options:

1. Declare jdbcOperations as var, assign a garbage value to it in its declaration, then assign the "real" value to the field in beforeAll():

```kotlin
class DBExtension : BeforeAllCallback, AfterEachCallback {

    private var jdbcOperations: NamedParameterJdbcOperations = StubJdbcOperations()

    override fun beforeAll(extensionContext: ExtensionContext) {
        jdbcOperations = SpringExtension.getApplicationContext(extensionContext)
            .getBean(NamedParameterJdbcOperations::class.java)
        clearDB()
    }

    override fun afterEach(extensionContext: ExtensionContext) {
        clearDB()
    }

    private fun clearDB() {
        listOf("table_one", "table_two", "table_three").forEach { tableName: String ->
            jdbcOperations.update("TRUNCATE $tableName", MapSqlParameterSource())
        }
    }
}
```

The downside here is that there's no check for whether the field has been assigned the "real" value, running the risk of invalid behavior when the field is accessed if the "real" value hasn't been assigned for whatever reason.

2. Declare jdbcOperations as nullable and assign null to the field, after which the field will be assigned its "real" value in beforeAll():

```kotlin
class DBExtension : BeforeAllCallback, AfterEachCallback {

    private var jdbcOperations: NamedParameterJdbcOperations? = null

    override fun beforeAll(extensionContext: ExtensionContext) {
        jdbcOperations = SpringExtension.getApplicationContext(extensionContext)
            .getBean(NamedParameterJdbcOperations::class.java)
        clearDB()
    }

    override fun afterEach(extensionContext: ExtensionContext) {
        clearDB()
    }

    private fun clearDB() {
        listOf("table_one", "table_two", "table_three").forEach { tableName: String ->
            jdbcOperations!!.update("TRUNCATE $tableName", MapSqlParameterSource())
        }
    }
}
```

The downside here is that declaring the field as nullable is permanent; there's no mechanism to declare a type as nullable only until its value has been assigned elsewhere. Thus, this approach forces the developer to force the non-nullable conversion whenever accessing the field, in this case using the double-bang (i.e., !!) operator to access the field's update() function.

3. Utilize the lateinit keyword to postpone the value assignment of jdbcOperations until the execution of the beforeAll() function:

```kotlin
class DBExtension : BeforeAllCallback, AfterEachCallback {

    private lateinit var jdbcOperations: NamedParameterJdbcOperations

    override fun beforeAll(extensionContext: ExtensionContext) {
        jdbcOperations = SpringExtension.getApplicationContext(extensionContext)
            .getBean(NamedParameterJdbcOperations::class.java)
        clearDB()
    }

    override fun afterEach(extensionContext: ExtensionContext) {
        clearDB()
    }

    private fun clearDB() {
        listOf("table_one", "table_two", "table_three").forEach { tableName: String ->
            jdbcOperations.update("TRUNCATE $tableName", MapSqlParameterSource())
        }
    }
}
```

No more worrying about silently invalid behavior or being forced to "de-nullify" the field each time it's accessed!
The "catch" is that there's still no compile-time mechanism for determining whether the field has been accessed before it's been assigned a value; the check happens at run-time, as can be seen when decompiling the clearDB() function:

```java
private final void clearDB() {
    Iterable $this$forEach$iv = (Iterable)CollectionsKt.listOf(new String[]{"table_one", "table_two", "table_three"});
    int $i$f$forEach = false;
    NamedParameterJdbcOperations var10000;
    String tableName;
    for(Iterator var3 = $this$forEach$iv.iterator(); var3.hasNext(); var10000.update("TRUNCATE " + tableName, (SqlParameterSource)(new MapSqlParameterSource()))) {
        Object element$iv = var3.next();
        tableName = (String)element$iv;
        int var6 = false;
        var10000 = this.jdbcOperations;
        if (var10000 == null) {
            Intrinsics.throwUninitializedPropertyAccessException("jdbcOperations");
        }
    }
}
```

Not ideal, considering what's arguably Kotlin's star feature (compile-time checking of variable nullability to reduce the likelihood of the "Billion-Dollar Mistake"), but again, it's a "least-worst" compromise to bridge the gap between Kotlin code and Java-based code that provides no alternatives adhering to Kotlin's design philosophy.

Use Wisely!

Aside from the above-mentioned issue of conducting null checks only at run-time instead of compile-time, lateinit possesses a few more drawbacks:

- A field that uses lateinit cannot be an immutable val, as its value is assigned at some point after the field's declaration. The field is thus exposed to the risk of inadvertently being modified at some point by an unwitting developer, causing logic errors.
- Because the field is not instantiated upon declaration, any other fields that rely on this field, be it via some function call on the field or by passing it as a constructor argument, cannot be instantiated upon declaration either. This makes lateinit a bit of a "viral" feature: using it on field A forces other fields that rely on field A to use lateinit as well.

Given that this mutability of lateinit fields goes against another of Kotlin's guiding principles (make fields and variables immutable where possible, as with function arguments, which are completely immutable, to avoid logic errors from mutating a field or variable that shouldn't change), its use should be restricted to where no alternatives exist. Unfortunately, several code patterns that are prevalent in Spring Boot and Mockito (and likely elsewhere, but that's outside the scope of this article) were built on Java's tendency to permit uninstantiated field declarations. This is where the ease of transcribing Java code to Kotlin code becomes a double-edged sword: it's easy to simply move the Java code over to a Kotlin file, slap the lateinit keyword on a field that hasn't been directly instantiated in the Java code, and call it a day.
Take, for instance, a test class that:

- Auto-wires a bean that's been registered in the Spring Boot component ecosystem
- Injects a configuration value that's been loaded into the Spring Boot environment
- Mocks a field's value and then passes said mock into another field's object
- Creates an argument captor for validating arguments that are passed to specified functions during the execution of one or more test cases
- Instantiates a mocked version of a bean that has been registered in the Spring Boot component ecosystem and passes it to a field in the test class

Here is the code for all of these points put together:

```kotlin
@SpringBootTest
@ExtendWith(MockitoExtension::class)
@AutoConfigureMockMvc
class FooTest {

    @Autowired
    private lateinit var mockMvc: MockMvc

    @Value("\${foo.value}")
    private lateinit var fooValue: String

    @Mock
    private lateinit var otherFooRepo: OtherFooRepo

    @InjectMocks
    private lateinit var otherFooService: OtherFooService

    @Captor
    private lateinit var timestampCaptor: ArgumentCaptor<Long>

    @MockBean
    private lateinit var fooRepo: FooRepo

    // Tests below
}
```

A better world is possible! Here are ways to avoid each of these constructs so that one can write "good," idiomatic Kotlin code while still retaining the use of auto-wiring, object mocking, and argument capturing in the tests.

Becoming "Punctual"

Note: The code in these examples uses Java 17, Kotlin 1.9.21, Spring Boot 3.2.0, and Mockito 5.7.0.

@Autowired/@Value

Both of these constructs originate in the historic practice of having Spring Boot inject the values for the fields in question after their containing class has been initialized. This practice has since fallen out of favor; the recommended approach is now to declare the values to be injected as arguments of the class's constructor. For example, this code follows the old practice:

```kotlin
@Service
class FooService {

    @Autowired
    private lateinit var fooRepo: FooRepo

    @Value("\${foo.value}")
    private lateinit var fooValue: String
}
```

It can be updated to this:

```kotlin
@Service
class FooService(
    private val fooRepo: FooRepo,
    @Value("\${foo.value}") private val fooValue: String,
) {
}
```

Note that aside from being able to use the val keyword, the @Autowired annotation can be removed from the declaration of fooRepo as well, as the Spring Boot injection mechanism is smart enough to recognize that fooRepo refers to a bean that can be instantiated and passed in automatically. Omitting the @Autowired annotation isn't possible for testing code (test files aren't actually a part of the Spring Boot component ecosystem and thus won't know by default that they need to rely on the auto-wired resource injection system), but otherwise, the pattern is the same:

```kotlin
@SpringBootTest
@ExtendWith(MockitoExtension::class)
@AutoConfigureMockMvc
class FooTest(
    @Autowired private val mockMvc: MockMvc,
    @Value("\${foo.value}") private val fooValue: String,
) {

    @Mock
    private lateinit var otherFooRepo: OtherFooRepo

    @InjectMocks
    private lateinit var otherFooService: OtherFooService

    @Captor
    private lateinit var timestampCaptor: ArgumentCaptor<Long>

    @MockBean
    private lateinit var fooRepo: FooRepo

    // Tests below
}
```

@Mock/@InjectMocks

The Mockito extension for JUnit allows a developer to declare a mock object and leave the actual mock instantiation and resetting of the mock's behavior, as well as the injection of these mocks into dependent objects like otherFooService in the example code, to the code within MockitoExtension.
Aside from the above-mentioned disadvantage of being forced to use mutable fields, this approach introduces quite a bit of "magic" around the lifecycle of the mocked objects - magic that can easily be avoided by directly instantiating the objects and manipulating their behavior: Kotlin @SpringBootTest @ExtendWith(MockitoExtension::class) @AutoConfigureMockMvc class FooTest( @Autowired private val mockMvc: MockMvc, @Value("\${foo.value}") private val fooValue: String, ) { private val otherFooRepo: OtherFooRepo = mock() private val otherFooService = OtherFooService(otherFooRepo) @Captor private lateinit var timestampCaptor: ArgumentCaptor<Long> @MockBean private lateinit var fooRepo: FooRepo @AfterEach fun afterEach() { reset(otherFooRepo) } // Tests below } As can be seen above, a post-execution hook is now necessary to clean up the mocked object otherFooRepo after the test execution(s), but this drawback is more than made up for by otherFooRepo and otherFooService now being immutable, as well as by having complete control over both objects' lifetimes. @Captor Just as with the @Mock annotation, it's possible to remove the @Captor annotation from the argument captor and declare its value directly in the code: Kotlin @SpringBootTest @AutoConfigureMockMvc class FooTest( @Autowired private val mockMvc: MockMvc, @Value("\${foo.value}") private val fooValue: String, ) { private val otherFooRepo: OtherFooRepo = mock() private val otherFooService = OtherFooService(otherFooRepo) private val timestampCaptor: ArgumentCaptor<Long> = ArgumentCaptor.captor() @MockBean private lateinit var fooRepo: FooRepo @AfterEach fun afterEach() { reset(otherFooRepo) } // Tests below } While there's a downside - there's no mechanism for resetting the argument captor after each test, meaning a call to getAllValues() would return artifacts from other test cases' executions - a case can be made for instantiating an argument captor only within the test cases where it is used, doing away with captors as test-class fields altogether. In any case, now that both @Mock and @Captor have been removed, it's possible to remove the Mockito extension as well. @MockBean A caveat here: the use of mock beans in Spring Boot tests could be considered a code smell, signaling, among other possible issues, that the IO layer of the application isn't being properly controlled for integration tests, or that the test is de facto a unit test and should be rewritten as such. Furthermore, heavy usage of mocked beans in different arrangements can cause test execution times to spike. Nonetheless, if it's absolutely necessary to use mocked beans in the tests, a solution does exist for converting them into immutable objects. As it turns out, the @MockBean annotation can be used not just on field declarations but on class declarations as well. Furthermore, when used at the class level, it's possible to pass the classes that are to be declared as mock beans for the test in the annotation's value array.
This results in the mock bean now being eligible to be declared as an @Autowired bean, just like any "normal" Spring Boot bean being passed to a test class: Kotlin @SpringBootTest @AutoConfigureMockMvc @MockBean(value = [FooRepo::class]) class FooTest( @Autowired private val mockMvc: MockMvc, @Value("\${foo.value}") private val fooValue: String, @Autowired private val fooRepo: FooRepo, ) { private val otherFooRepo: OtherFooRepo = mock() private val otherFooService = OtherFooService(otherFooRepo) private val timestampCaptor: ArgumentCaptor<Long> = ArgumentCaptor.captor() @AfterEach fun afterEach() { reset(fooRepo, otherFooRepo) } // Tests below } Note that, like otherFooRepo, the object will have to be reset in the cleanup hook. Also, there's no indication that fooRepo is a mocked object as it's passed to the constructor of the test class, so patterns like declaring all mocked beans in an abstract class and passing them to specific extending test classes run the risk of "out of sight, out of mind": the knowledge that the bean is mocked is not inherently evident. Furthermore, better alternatives to mocking beans exist (for example, WireMock and Testcontainers) for mocking out the behavior of external components. Conclusion Note that each of these techniques is possible for code written in Java as well and provides the very same benefits of immutability and control over the objects' lifecycles. What makes these recommendations even more pertinent to Kotlin is that they allow the user to align more closely with Kotlin's design philosophy. Kotlin isn't simply "Java with better typing": it's a programming language that places an emphasis on reducing common programming errors like accidentally dereferencing null pointers, inadvertently reassigning objects, and other pitfalls. Going beyond merely looking up the tools at one's disposal in Kotlin to understanding why they exist in the form they do will yield dividends: much higher productivity in the language, less risk of fighting the language instead of focusing on the tasks at hand, and, quite possibly, an even more rewarding and fun experience writing code in it.
The Spring Framework stands as a comprehensive solution for developing Java applications, offering a plethora of features for streamlined development. Within its suite of functionalities, managing transactions and executing operations asynchronously are particularly crucial. They play significant roles in maintaining data consistency and in enhancing application scalability and responsiveness, respectively. This article seeks to shed light on the synergistic use of Spring's @Transactional and @Async annotations, providing insights into their collective application to optimize the performance of Java applications. Understanding Transaction Management in Spring Transaction management is crucial in any enterprise application to ensure data consistency and integrity. In Spring, this is achieved through the @Transactional annotation, which abstracts the underlying transaction management mechanism, making it easier for developers to control transaction boundaries declaratively. The @Transactional Annotation The @Transactional annotation in Spring can be applied at both the class and method levels. It declares that a method, or all methods of a class, should be executed within a transactional context. Spring's @Transactional supports various properties such as propagation, isolation, timeout, and readOnly, allowing for fine-tuned transaction management. Propagation: Defines how transactions relate to each other; common options include REQUIRED, REQUIRES_NEW, and SUPPORTS. Isolation: Determines how changes made by one transaction are visible to others; options include READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ, and SERIALIZABLE. Timeout: Specifies the time frame within which a transaction must be completed. ReadOnly: Indicates whether the transaction is read-only, which can optimize certain database operations. Exploring Asynchronous Operations in Spring Asynchronous operations in Spring are managed through the @Async annotation, enabling method calls to run in a background thread pool and thus not block the caller thread. This is particularly beneficial for operations that are time-consuming or independent of the main execution flow. The @Async Annotation Marking a method with @Async makes its execution asynchronous, provided that asynchronous processing has been enabled (via @EnableAsync) and a Spring task executor is properly configured. This annotation can be used for methods returning void, a Future, or a CompletableFuture object, allowing the caller to track the operation's progress and result. Combining @Transactional With @Async Integrating transaction management with asynchronous operations presents unique challenges, primarily because transactions are tied to a thread's context, and @Async causes the method execution to switch to a different thread. Challenges and Considerations Transaction context propagation: When an @Async method is invoked from within a @Transactional context, the transaction context does not automatically propagate to the asynchronous method execution thread. Best practices: To manage transactions within asynchronous methods, it's crucial to ensure that the method responsible for transaction management is not the same as the one marked with @Async. Instead, the asynchronous method should call another @Transactional method - ideally one on a different bean, so that the call passes through Spring's proxy and the transactional advice actually applies - to ensure the transaction context is correctly established.
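Before moving to the examples, it may help to make "properly configured" concrete. The following is a minimal sketch (in Kotlin; the bean name taskExecutor is the conventional fallback Spring looks for, and the pool sizes are purely illustrative) of enabling asynchronous processing and defining the executor that @Async methods run on: Kotlin
import java.util.concurrent.Executor
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.scheduling.annotation.EnableAsync
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor

@Configuration
@EnableAsync // without this, @Async annotations are simply ignored
class AsyncConfig {

    @Bean("taskExecutor")
    fun taskExecutor(): Executor {
        val executor = ThreadPoolTaskExecutor()
        executor.setCorePoolSize(4)     // threads kept ready even when idle
        executor.setMaxPoolSize(8)      // upper bound under load
        executor.setQueueCapacity(100)  // tasks buffered before growing the pool
        executor.setThreadNamePrefix("async-")
        executor.initialize()
        return executor
    }
}
With such a configuration in place, annotated methods are submitted to this pool instead of running on the caller's thread.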
Practical Examples Java @Service public class InvoiceService { @Async public void processInvoices() { // Asynchronous operation updateInvoiceStatus(); } @Transactional public void updateInvoiceStatus() { // Transactional operation } } In this example, processInvoices is an asynchronous method that delegates to updateInvoiceStatus, a transactional method, keeping transaction management out of the asynchronous entry point. One caveat applies, however: with Spring's default proxy-based AOP, the direct this-call from processInvoices to updateInvoiceStatus bypasses the proxy, so the @Transactional annotation only takes effect if the call is routed through the Spring proxy - for example, by moving the transactional method into a separate bean (as the OrderProcessingService example below does) or by injecting the bean's own proxy. Java @Service public class ReportService { @Async public CompletableFuture<Report> generateReportAsync() { return CompletableFuture.completedFuture(generateReport()); } @Transactional public Report generateReport() { // Transactional operation to generate a report } } Here, generateReportAsync executes asynchronously and returns a CompletableFuture, while generateReport handles the transactional aspects of report generation; the same self-invocation caveat applies to this internal call. Discussion on Transaction Propagation Transaction propagation behaviors in Spring define how transactions relate to each other, especially in scenarios where one transactional method calls another. Choosing the right propagation behavior is essential for achieving the desired transactional semantics. Common Propagation Behaviors REQUIRED (default): If there's an existing transaction, the method will run within that transaction; if there's no existing transaction, Spring will create a new one. REQUIRES_NEW: This behavior always starts a new transaction. If there's an existing transaction, it will be suspended until the new transaction is completed. This is useful when you need to ensure that the method executes in a new, independent transaction. SUPPORTS: With this behavior, the method will execute within an existing transaction if one is present. However, if there's no existing transaction, the method will run non-transactionally. NOT_SUPPORTED: This behavior will execute the method non-transactionally. If there's an existing transaction, it will be suspended until the method is completed. MANDATORY: This behavior requires an existing transaction. If there's no existing transaction, Spring will throw an exception. NEVER: The method should never run within a transaction. If there's an existing transaction, Spring will throw an exception. NESTED: This behavior starts a nested transaction if an existing transaction is present. Nested transactions allow for partial commits and rollbacks and are supported by some, but not all, transaction managers. Propagation Behaviors With Asynchronous Operations When combining @Transactional with @Async, understanding the implications of propagation behaviors becomes even more critical. Since asynchronous methods run in a separate thread, certain propagation behaviors might not work as expected due to the absence of a transaction context in the new thread. REQUIRED and REQUIRES_NEW: These are the most commonly used and straightforward behaviors. However, when used with @Async, REQUIRES_NEW is often more predictable because it ensures that the asynchronous method always starts a new transaction, avoiding unintended interactions with the calling method's transaction. SUPPORTS, NOT_SUPPORTED, MANDATORY, NEVER: These behaviors might lead to unexpected results when used with @Async, as the transaction context from the calling thread is not propagated to the asynchronous method's thread. Careful consideration and testing are required when using these behaviors with asynchronous processing.
NESTED: Given the complexity of nested transactions and the separate thread context of @Async methods, using nested transactions with asynchronous operations is generally not recommended. It could lead to complex transaction management scenarios that are difficult to debug and maintain. Propagation With Asynchronous Operations To illustrate the interaction between different propagation behaviors and asynchronous operations, let's consider an example where an asynchronous service method calls a transactional method with varying propagation behaviors. Java @Service public class OrderProcessingService { @Autowired private OrderUpdateService orderUpdateService; @Async public void processOrdersAsync(List<Order> orders) { orders.forEach(order -> orderUpdateService.updateOrderStatus(order, Status.PROCESSING)); } } @Service public class OrderUpdateService { @Transactional(propagation = Propagation.REQUIRES_NEW) public void updateOrderStatus(Order order, Status status) { // Implementation to update order status } } In this example, processOrdersAsync is an asynchronous method that processes a list of orders. It calls updateOrderStatus on each order, which is marked with @Transactional(propagation = Propagation.REQUIRES_NEW). Because updateOrderStatus lives on a separate bean, each call goes through the Spring proxy, ensuring that each order status update occurs in a new, independent transaction, isolated from the others and from the original asynchronous process. Examples: @Transactional(REQUIRES_NEW) With @Async Java @Service public class UserService { @Async public void updateUserAsync(User user) { updateUser(user); // Delegate to the synchronous, transactional method } @Transactional(propagation = Propagation.REQUIRES_NEW) public void updateUser(User user) { // Logic to update the user // Database operations to persist changes } } Explanation: Here, updateUserAsync is an asynchronous method that calls updateUser, a method annotated with @Transactional and REQUIRES_NEW propagation. The intent is for each user update operation to occur in a new transaction, isolated from any existing transaction - useful in scenarios where the update operations must not be affected by the outcome of other transactions. Note, though, that as written the direct this-call from updateUserAsync to updateUser bypasses the proxy in Spring's default proxy mode, so no new transaction would actually be started; the methods should live in separate beans (as in OrderProcessingService above), or the bean's own proxy should be called, as sketched after these examples. Combining @Async With @Transactional on Class Level Java @Service @Transactional public class OrderService { @Async public void processOrderAsync(Order order) { processOrder(order); // Delegate to the synchronous method } public void processOrder(Order order) { // Logic to process the order // Database operations involved in order processing } } Explanation: In this scenario, the OrderService class is annotated with @Transactional, applying transaction management to all its public methods by default. The processOrderAsync method, marked with @Async, performs the order processing asynchronously by calling processOrder. Because the annotation sits at class level, the transaction is started around processOrderAsync itself when it is invoked through the proxy (on the asynchronous thread), and the internal call to processOrder simply runs within that transaction, providing consistency and integrity to the database operations involved.
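To make the self-invocation caveat concrete, here is a minimal sketch - written in Kotlin, assuming the kotlin-spring ("all-open") compiler plugin that Spring Boot's Kotlin setup applies, and using a placeholder User type - of keeping both methods in one class while still routing the internal call through the Spring proxy by lazily injecting the bean's own proxy: Kotlin
import org.springframework.context.annotation.Lazy
import org.springframework.scheduling.annotation.Async
import org.springframework.stereotype.Service
import org.springframework.transaction.annotation.Propagation
import org.springframework.transaction.annotation.Transactional

class User // placeholder for the real entity

@Service
class UserService(@Lazy private val self: UserService) {

    @Async
    fun updateUserAsync(user: User) {
        // Calling through the lazily injected proxy applies the transactional
        // advice; a plain updateUser(user) call here would bypass the proxy
        // and start no new transaction.
        self.updateUser(user)
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    fun updateUser(user: User) {
        // logic to update the user and persist changes
    }
}
Splitting the two methods across separate beans, as OrderProcessingService does above, achieves the same effect with less subtlety and is usually the cleaner choice.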
@Async Method Calling Multiple @Transactional Methods Java @Service public class ReportGenerationService { @Autowired private DataService dataService; @Async public void generateReportAsync(ReportParameters params) { Report report = dataService.prepareData(params); dataService.saveReport(report); } } @Service public class DataService { @Transactional public Report prepareData(ReportParameters params) { // Data preparation logic return new Report(); // Placeholder for actual report generation } @Transactional(propagation = Propagation.REQUIRES_NEW) public void saveReport(Report report) { // Logic to save the report } } Explanation: This example features an asynchronous method, generateReportAsync, which orchestrates the report generation process by calling two separate transactional methods: prepareData and saveReport. The prepareData method runs with the default REQUIRED propagation, while saveReport is explicitly configured to always execute in a new transaction. Because both methods are invoked on a separate bean (dataService), the calls pass through Spring's proxy and the annotations take effect as expected. This setup is ideal for scenarios where the report-saving operation needs to be transactionally independent of the data preparation phase, ensuring that the saving of the report is not impacted by the success or failure of the preceding operations. Each of these examples demonstrates how different combinations of @Transactional and @Async can be employed to achieve specific transactional behaviors in asynchronous processing contexts, giving Spring developers the flexibility to tailor transaction management strategies to their application's requirements. Conclusion Understanding and carefully choosing the appropriate transaction propagation behaviors is crucial in Spring applications, especially when combining transactional operations with asynchronous processing. By considering the specific requirements and implications of each propagation behavior, developers can design more robust, efficient, and reliable transaction management strategies in their Spring applications. This extended knowledge enables the handling of complex transactional scenarios with greater confidence and precision, ultimately leading to higher-quality software solutions.
It's been more than 20 years since the Spring Framework appeared in the software development landscape and 10 since Spring Boot version 1.0 was released. By now, nobody should have any doubt that Spring has created a unique style through which developers are freed from repetitive tasks and left to focus on delivering business value. As the years passed, Spring's technical depth has continually increased, covering a wide variety of development areas and technologies. At the same time, its technical breadth has continually expanded as more focused solutions have been experimented with, proofs of concept created, and the successful ones ultimately promoted under the project's umbrella (adding to that technical depth). One such example is the new Spring AI project which, according to its reference documentation, aims to ease development when incorporating a generative artificial intelligence layer into applications. Once again, developers are freed from repetitive tasks and offered simple interfaces for direct interaction with the pre-trained models that incorporate the actual processing algorithms. By interacting with generative pre-trained transformers (GPTs) programmatically, directly or via Spring AI, users (developers) do not need to possess extensive machine learning knowledge (although it would be useful). As an engineer, I strongly believe that even if such developer tools can be used rather easily and rapidly to produce results, it is advisable to temper ourselves, switch to a watchful mode, and try to gain a decent understanding of the base concepts first. Moreover, by following this path, the outcome might be even more useful. Purpose This article shows how Spring AI can be integrated into a Spring Boot application to fulfill a programmatic interaction with Open AI. Prompt design in general (prompt engineering) is an art in its own right and not the subject here; consequently, the prompts used during experimentation are quite didactic, without much practical applicability. The focus is on the communication interface, that is, the Spring AI API. Before the Implementation First and foremost, one shall clarify the rationale for incorporating and utilizing a GPT solution, beyond the general desire to deliver with greater quality, in less time, and at lower cost. Generative AI is said to be good at performing a great deal of time-consuming tasks more quickly and efficiently and outputting the results. Moreover, if these results are further validated by experienced and wise humans, the chances of obtaining something useful increase. Fortunately, people are still part of the scenery. Next, one shall resist the temptation to jump right into the implementation and at least dedicate some time to getting familiar with the general concepts. An in-depth exploration of generative AI concepts is way beyond the scope of this article. Nevertheless, the "main actors" that appear in the interaction are briefly outlined below.
The Stage – Generative AI is a subset of machine learning, which in turn is a subset of artificial intelligence Input – The provided data (incoming) Output – The computed results (outgoing) Large Language Model (LLM) – The trained algorithm that interprets the input and produces the output Prompt – The textual interface through which the input is passed to the model Prompt Template – A component that allows constructing structured, parameterized prompts Tokens – The units the algorithm internally translates the input into, then uses to compile the results and ultimately construct the output from Model's context window – The maximum number of tokens the model accepts per call (usually, the more tokens are used, the more expensive the operation is) Finally, an implementation may be started, but as it progresses, it is advisable to revisit and refine the first two steps. Prompts In this exercise, we ask for the following: Plain Text Write {count = three} reasons why people in {location = Romania} should consider a {job = software architect} job. These reasons need to be short, so they fit on a poster. For instance, "{job} jobs are rewarding." This basically represents the prompt. As advised, a clear topic, a clear meaning of the task, and additional helpful pieces of information should be provided as part of the prompts in order to increase the accuracy of the results. The prompt contains three parameters, which allow coverage for a wide range of jobs in various locations: count – The number of reasons aimed for as part of the output job – The domain, i.e., the job of interest location – The country, town, region, etc. where the job applicants reside Proof of Concept In this post, the simple proof of concept aims at the following: Integrate Spring AI in a Spring Boot application and use it. Allow a client to communicate with Open AI via the application. The client issues a parametrized HTTP request to the application. The application uses a prompt to create the input, sends it to Open AI, and retrieves the output. The application sends the response to the client. Setup Java 21 Maven 3.9.2 Spring Boot – v. 3.2.2 Spring AI – v. 0.8.0-SNAPSHOT (still in development, experimental) Implementation Spring AI Integration Normally, this is a basic step not necessarily worth mentioning. Nevertheless, since Spring AI is currently released as a snapshot, in order to be able to integrate the Open AI auto-configuration dependency, one shall add a reference to the Spring Milestone/Snapshot repositories. XML <repositories> <repository> <id>spring-milestones</id> <name>Spring Milestones</name> <url>https://repo.spring.io/milestone</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>spring-snapshots</id> <name>Spring Snapshots</name> <url>https://repo.spring.io/snapshot</url> <releases> <enabled>false</enabled> </releases> </repository> </repositories> The next step is to add the spring-ai-openai-spring-boot-starter Maven dependency. XML <dependency> <groupId>org.springframework.ai</groupId> <artifactId>spring-ai-openai-spring-boot-starter</artifactId> <version>0.8.0-SNAPSHOT</version> </dependency> Open AI ChatClient is now part of the application classpath. It is the component used to send the input to Open AI and retrieve the output. In order to be able to connect to the AI model, the spring.ai.openai.api-key property needs to be set up in the application.properties file.
Properties files spring.ai.openai.api-key = api-key-value Its value represents a valid API key of the user on behalf of whom the communication is made. By accessing the Open AI Platform, one can either sign up or sign in and generate one. Client: Spring Boot Application Communication The first part of the proof of concept is the communication between a client application (e.g., browser, cURL, etc.) and the application developed. This is done via a REST controller, accessible via an HTTP GET request. The URL is /job-reasons, together with the three parameters previously outlined when the prompt was defined, which leads to the following form: Plain Text /job-reasons?count={count}&job={job}&location={location} And the corresponding controller: Java @RestController public class OpenAiController { @GetMapping("/job-reasons") public ResponseEntity<String> jobReasons(@RequestParam(value = "count", required = false, defaultValue = "3") int count, @RequestParam("job") String job, @RequestParam("location") String location) { return ResponseEntity.ok().build(); } } Since the response from Open AI is going to be a String, the controller returns a ResponseEntity that encapsulates a String. If we run the application and issue a request, currently nothing is returned as part of the response body. Client: Open AI Communication Spring AI currently focuses on AI models that process language and produce language or numbers. Examples of Open AI models in the former category are GPT4-openai or GPT3.5-openai. To interact with these AI models (which actually designate Open AI algorithms), Spring AI provides a uniform interface. The ChatClient interface currently supports text input and output and has a simple contract. Java @FunctionalInterface public interface ChatClient extends ModelClient<Prompt, ChatResponse> { default String call(String message) { Prompt prompt = new Prompt(new UserMessage(message)); return call(prompt).getResult().getOutput().getContent(); } ChatResponse call(Prompt prompt); } The abstract call(Prompt) method of this functional interface is the one usually used. In the case of our proof of concept, this is exactly what is needed: a way of calling Open AI, passing the parameterized Prompt as an argument. The following OpenAiService is defined, into which an instance of ChatClient is injected. Java @Service public class OpenAiService { private final ChatClient client; public OpenAiService(OpenAiChatClient aiClient) { this.client = aiClient; } public String jobReasons(int count, String domain, String location) { final String promptText = """ Write {count} reasons why people in {location} should consider a {job} job. These reasons need to be short, so they fit on a poster. For instance, "{job} jobs are rewarding." """; final PromptTemplate promptTemplate = new PromptTemplate(promptText); promptTemplate.add("count", count); promptTemplate.add("job", domain); promptTemplate.add("location", location); ChatResponse response = client.call(promptTemplate.create()); return response.getResult().getOutput().getContent(); } } With the application running, if the following request is performed from the browser: Plain Text http://localhost:8080/gen-ai/job-reasons?count=3&job=software%20architect&location=Romania Then the below result is retrieved: Lucrative career: Software architect jobs offer competitive salaries and excellent growth opportunities, ensuring financial stability and success in Romania.
In-demand profession: As the demand for technology continues to grow, software architects are highly sought after in Romania and worldwide, providing abundant job prospects and job security. Creative problem-solving: Software architects play a crucial role in designing and developing innovative software solutions, allowing them to unleash their creativity and make a significant impact on various industries. This is exactly what was intended - an easy interface through which the Open AI GPT model can be asked to write a couple of reasons why a certain job in a certain location is appealing. Adjustments and Observations The simple proof of concept developed so far mainly uses the default configurations available. The ChatClient instance may be configured according to the desired needs via various properties. As this is beyond the scope of this writing, only two are exemplified here. spring.ai.openai.chat.options.model designates the AI model to use. By default, it is "gpt-3.5-turbo," while "gpt-4" and "gpt-4-32k" designate the latest versions. Although available, one may not be able to access these using a pay-as-you-go plan, but there is additional information available on the Open AI website on how to gain access. Another property worth mentioning is spring.ai.openai.chat.options.temperature. According to the reference documentation, the sampling temperature controls the "creativity of the responses." It is said that higher values make the output "more random," while lower ones are "more focused and deterministic." The default value is 0.8; if we decrease it to 0.3, restart the application, and ask again with the same request parameters, the below result is retrieved. Lucrative career opportunities: Software architect jobs in Romania offer competitive salaries and excellent growth prospects, making it an attractive career choice for individuals seeking financial stability and professional advancement. Challenging and intellectually stimulating work: As a software architect, you will be responsible for designing and implementing complex software systems, solving intricate technical problems, and collaborating with talented teams. This role offers continuous learning opportunities and the chance to work on cutting-edge technologies. High demand and job security: With the increasing reliance on technology and digital transformation across industries, the demand for skilled software architects is on the rise. Choosing a software architect job in Romania ensures job security and a wide range of employment options, both locally and internationally. It is visible that the output is way more descriptive in this case. One last consideration is related to the structure of the output obtained. It would be convenient to have the ability to map the actual payload received to a Java object (a class or record, for instance). As of now, the representation is textual, and so is the implementation. Output parsers may achieve this, similar to Spring JDBC's row-mapping structures. In this proof of concept, a BeanOutputParser is used, which allows deserializing the result directly into a Java record as below: Java public record JobReasons(String job, String location, List<String> reasons) { } This is done by taking the {format} as part of the prompt text and providing it as an instruction to the AI model.
The OpenAiService method becomes: Java public JobReasons formattedJobReasons(int count, String job, String location) { final String promptText = """ Write {count} reasons why people in {location} should consider a {job} job. These reasons need to be short, so they fit on a poster. For instance, "{job} jobs are rewarding." {format} """; BeanOutputParser<JobReasons> outputParser = new BeanOutputParser<>(JobReasons.class); final PromptTemplate promptTemplate = new PromptTemplate(promptText); promptTemplate.add("count", count); promptTemplate.add("job", job); promptTemplate.add("location", location); promptTemplate.add("format", outputParser.getFormat()); promptTemplate.setOutputParser(outputParser); final Prompt prompt = promptTemplate.create(); ChatResponse response = client.call(prompt); return outputParser.parse(response.getResult().getOutput().getContent()); } When invoking again, the output is as below: JSON { "job":"software architect", "location":"Romania", "reasons":[ "High demand", "Competitive salary", "Opportunities for growth" ] } The format is the expected one, but the reasons appear less explanatory, which means additional adjustments are required in order to achieve better usability. From a proof-of-concept point of view, though, this is acceptable, as the focus was on the form. Conclusions Prompt design is an important part of the task - the better articulated the prompts are, the better the input and the higher the quality of the output. Using Spring AI to integrate with various chat models is quite straightforward - this post showcased an Open AI integration. Nevertheless, in the case of generative AI in general, just as with almost any technology, it is very important to first get familiar with at least the general concepts, then try to understand the magic behind the way the communication is carried out, and only afterward start writing "production" code. Last but not least, it is advisable to further explore the Spring AI API to understand the implementations and remain up-to-date as it evolves and improves. The code is available here. References Spring AI Reference
Over the past few years, AI has steadily worked its way into almost every part of the global economy. Email programs use it to correct grammar and spelling on the fly and suggest entire sentences to round out each message. Digital assistants use it to provide a human-like conversational interface for users. You encounter it when you reach out to any business's contact center. You can even have your phone use AI to wait on hold for you when you exhaust the automated support options and need a live agent instead. It's no wonder, then, that AI is also already present in the average software developer's toolkit. Today, there are countless AI coding assistants available that promise to lighten developers' loads. According to their creators, these tools should help software developers and teams work faster and produce more predictable product outcomes. However, they do something less desirable, too: they introduce security flaws. It's an issue that software development firms and solo coders are only beginning to come to grips with. Right now, it seems there's a binary choice: either use AI coding assistants and accept the consequences, or forgo them and risk falling behind the developers who do use them. Surveys indicate that about 96% of developers have already chosen the former. But what if there was another option? What if you could mitigate the risks of using AI coding assistants without harming your output? Here's a simple framework developers can use to pull that off. Evaluate Your AI Tools Carefully The first way to mitigate the risks that come with AI coding assistants is to thoroughly investigate any tool you're considering before you use it in production. The best way to do this is to use the tool in parallel with a few of your development projects to see how the results stack up against your human-created code. This will provide you with an opportunity to assess the tool's strengths and weaknesses and to look for any persistent output problems that might make it a non-starter for your specific development needs. This simple vetting procedure should let you choose an AI coding assistant that's suited to the tasks you plan to give it. It should also alert you to any significant secure-coding shortcomings associated with the tool before it can affect a live project. If those shortcomings are insignificant, you can use what you learn to clean up any code that comes from the tool. If they're significant, you can move on to evaluating another tool instead. Beef up Your Code Review and Validation Processes Next, it's essential to beef up your code review and validation processes before you begin using an AI coding assistant in production. This should include multiple static code analysis passes over all the code you produce, especially anything that contains AI-generated code. This should help you catch the majority of inadvertently introduced security vulnerabilities. It should also give your human developers a chance to read the AI-generated code, understand it, and point out any obvious issues with it before moving forward. Your code review and validation processes should also include dynamic testing as soon as each project reaches the point where that's feasible. This will help you evaluate the security of your code as it exists in the real world, including any user interactions that could introduce additional vulnerabilities. Keep Your AI Tools Up to Date Finally, you should create a process that ensures you're always using the latest version of your chosen AI tools.
The developers of AI coding assistants are always making changes aimed at increasing the reliability and security of the code their tools generate. It's in their best interest to do so, since any flawed code traced back to their tool could lead to developers dropping it in favor of a competitor. However, you shouldn't blindly update your toolset, either. It's important to keep track of the changes each update to your AI coding assistant introduces. You should never assume that an updated version of the tool you're using will still be suited to your specific coding needs. So, if you spot any changes that might call for a reevaluation of the tool, that's exactly what you should do. If you can't afford to be without your chosen AI coding assistant for long enough to repeat the vetting process you started with, continue using the older version. However, you should have the new version perform the same coding tasks and compare the output. This should give you a decent idea of how an update's changes will affect your final software products. The Bottom Line Realistically, AI code generation isn't going away. Instead, it likely won't be long before it's an integral part of every development team's workflow. However, we've not yet reached the point where human coders should blindly trust the work product of their AI counterparts. By taking a cautious approach and integrating AI tools thoughtfully, developers should be able to reap the rewards of these early AI tools while insulating themselves from their very real shortcomings.
AngularAndSpringWithMaps is a Spring Boot project that shows company properties on a Bing map and can be run on the JDK or as a GraalVM native image. ReactAndGo is a Golang project that shows the cheapest gas stations in your postcode area and is compiled to a native binary. Both languages are garbage-collected, and the AngularAndSpringWithMaps project uses the G1 collector. The two projects are comparable in complexity: both serve a frontend, provide REST data endpoints for it, and implement services for the logic, with repositories for database access. How to build the GraalVM native image for the AngularAndSpringWithMaps project is explained in this article. What To Compare On the performance side, Golang and Java - on the JVM or as a native image - are fast and efficient enough for the vast majority of use cases. Further performance fine-tuning needs good profiling and specific improvements, and often the improvements are related to the database. The two interesting aspects are: Memory requirements Startup time (can include warmup) The memory requirements are important because the available memory limit on the Kubernetes node or deployment server is usually reached earlier than the CPU limit. If you use less memory, you can deploy more Docker images or Spring Boot applications on the same resources. The startup time is important if your application sees periods of little load and periods of high load. The shorter the startup time, the more aggressively you can scale the number of deployed applications/images up or down. Memory Requirements AngularAndSpringWithMaps on JVM 21: 420 MB; AngularAndSpringWithMaps as GraalVM native image: 280 MB; ReactAndGo binary: 128-260 MB. The GraalVM native image uses significantly less memory than the JVM jar, which makes the native image more resource-efficient. However, the native image binary itself is 240 MB in size, which leaves only around 40 MB of actual working memory. The ReactAndGo binary is 29 MB in size and uses 128-260 MB of memory, depending on the size of the updates it has to process. If the use case needed only around 40 MB of working memory, as the GraalVM native image does, roughly 70 MB would be enough to run it. That makes the Go binary much more resource-efficient. Startup Time AngularAndSpringWithMaps on JVM 21: 4300 ms; AngularAndSpringWithMaps as GraalVM native image: 220 ms; ReactAndGo binary: 100 ms. The GraalVM native image startup time is impressive and enables scale-to-zero configurations that start the application on demand and scale down to zero under no load. The JVM's startup time means at least one running instance is required. The ReactAndGo binary's startup time is the fastest and likewise enables scale to zero. Conclusion The GraalVM native image and the Go binary are the most efficient in this comparison. Due to their lower memory requirements, the available hardware resources can be used more efficiently, and the fast startup times enable scale-to-zero configurations that can save money in on-demand environments. The winner is the Go project: if efficient use of hardware resources matters most to you, Go is the best choice. If your developers are most familiar with Java, the use of a GraalVM native image can still improve the efficient use of hardware resources. Creating GraalVM native images needs more effort and developer time; some of that effort can be automated, and some of it would be hard to automate. The question then becomes: Is the extra developer time worth the saved hardware resources?