A framework is a collection of reusable code that supports the development process by providing ready-made components. Frameworks establish architectural patterns and structure, which helps speed up development. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring framework, Drupal, Angular, Eclipse, and more.
Spring Microservice Application Resilience: The Role of @Transactional in Preventing Connection Leaks
Mastering Object-Oriented Design Patterns: Introduction to Design Patterns
Java adoption has shifted from version 1.8 to at least Java 17. Concurrently, Spring Boot has advanced from version 2.x to 3.2.2. The springdoc project has transitioned from the older library 'springdoc-openapi-ui' to 'springdoc-openapi-starter-webmvc-ui' for its functionality. These updates mean that readers relying on older articles may find themselves years behind in these technologies. The author has updated this article so that readers are using the latest versions and don't struggle with outdated information during migration. This is part one of a three-part series. You can check out the other articles below. OpenAPI 3 Documentation With Spring Boot Doing More With Springdoc OpenAPI Extending Swagger and Springdoc Open API In this tutorial, we are going to try out a Spring Boot Open API 3-enabled REST project and explore some of its capabilities. The springdoc-openapi Java library has quickly become very compelling. We are going to refer to Building a RESTful Web Service and springdoc-openapi v2.5.0. Prerequisites Java 17.x Maven 3.x Steps Start by creating a Maven JAR project. Below, you will see the pom.xml to use: XML <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>3.2.2</version> <relativePath ></relativePath> <!-- lookup parent from repository --> </parent> <groupId>com.example</groupId> <artifactId>sample</artifactId> <version>0.0.1</version> <name>sample</name> <description>Demo project for Spring Boot with openapi 3 documentation</description> <properties> <java.version>17</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-validation</artifactId> </dependency> <dependency> <groupId>org.springdoc</groupId> <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId> <version>2.5.0</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> Note the "springdoc-openapi-starter-webmvc-ui" dependency. Now, let's create a small Java bean class. 
Java package sample; import org.hibernate.validator.constraints.CreditCardNumber; import jakarta.validation.constraints.Email; import jakarta.validation.constraints.Max; import jakarta.validation.constraints.Min; import jakarta.validation.constraints.NotBlank; import jakarta.validation.constraints.NotNull; import jakarta.validation.constraints.Pattern; import jakarta.validation.constraints.Size; public class Person { private long id; private String firstName; @NotNull @NotBlank private String lastName; @Pattern(regexp = ".+@.+\\..+", message = "Please provide a valid email address") private String email; @Email private String email1; @Min(18) @Max(30) private int age; @CreditCardNumber private String creditCardNumber; public String getCreditCardNumber() { return creditCardNumber; } public void setCreditCardNumber(String creditCardNumber) { this.creditCardNumber = creditCardNumber; } public long getId() { return id; } public void setId(long id) { this.id = id; } public String getEmail1() { return email1; } public void setEmail1(String email1) { this.email1 = email1; } @Size(min = 2) public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } } This is an example of a Java bean. Now, let's create a controller. Java package sample; import org.springframework.web.bind.annotation.RequestBody; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestMethod; import org.springframework.web.bind.annotation.RestController; import io.swagger.v3.oas.annotations.media.Content; import io.swagger.v3.oas.annotations.media.ExampleObject; import jakarta.validation.Valid; @RestController public class PersonController { @RequestMapping(path = "/person", method = RequestMethod.POST) @io.swagger.v3.oas.annotations.parameters.RequestBody(required = true, content = @Content(examples = { @ExampleObject(value = INVALID_REQUEST, name = "invalidRequest", description = "Invalid Request"), @ExampleObject(value = VALID_REQUEST, name = "validRequest", description = "Valid Request") })) public Person person(@Valid @RequestBody Person person) { return person; } private static final String VALID_REQUEST = """ { "id": 0, "firstName": "string", "lastName": "string", "email": "abc@abc.com", "email1": "abc@abc.com", "age": 20, "creditCardNumber": "4111111111111111" }"""; private static final String INVALID_REQUEST = """ { "id": 0, "firstName": "string", "lastName": "string", "email": "abcabc.com", "email1": "abcabc.com", "age": 17, "creditCardNumber": "411111111111111" }"""; } Above is a sample REST controller. Side note: Normally, I don't like to clutter already annotation-heavy code with additional annotations, but I do think having ready-made examples like these can be useful. Another reason I did this is that the default examples generated by Swagger UI appear to produce some confusing text when @Pattern is used. It appears to be a Swagger UI issue and not a springdoc issue. Let's make some entries in src\main\resources\application.properties.
Properties files application-description=@project.description@ application-version=@project.version@ logging.level.org.springframework.boot.autoconfigure=ERROR # server.error.include-binding-errors is now needed if we # want to display the errors as shown in this article # this can also be avoided in other ways as we will see # in later articles server.error.include-binding-errors=always The above entries will pass Maven build-related information on to the OpenAPI documentation and also include the new server.error.include-binding-errors property. Finally, let's write the Spring Boot application class: Java package sample; import org.springframework.beans.factory.annotation.Value; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.context.annotation.Bean; import io.swagger.v3.oas.models.OpenAPI; import io.swagger.v3.oas.models.info.Info; import io.swagger.v3.oas.models.info.License; @SpringBootApplication public class SampleApplication { public static void main(String[] args) { SpringApplication.run(SampleApplication.class, args); } @Bean public OpenAPI customOpenAPI(@Value("${application-description}") String appDescription, @Value("${application-version}") String appVersion) { return new OpenAPI() .info(new Info() .title("sample application API") .version(appVersion) .description(appDescription) .termsOfService("http://swagger.io/terms/") .license(new License().name("Apache 2.0").url("http://springdoc.org"))); } } Also, note how the API version and description are being leveraged from application.properties. At this stage, this is what the project looks like in Eclipse: The project contents are above. Next, execute mvn clean package from the command prompt or terminal. Then, execute java -jar target\sample-0.0.1.jar. You can also launch the application by running the SampleApplication.java class from your IDE. Now, let's visit the Swagger UI — http://localhost:8080/swagger-ui.html. Click the green Post button and expand the > symbol on the right of Person under Schemas. Let's expand the last schemas section a bit more: The nice thing is how the contract is automatically detailed by leveraging the JSR-303 annotations on the model. Out of the box, it covers many of the important annotations and documents them. However, I did not see out-of-the-box support for @jakarta.validation.constraints.Email and @org.hibernate.validator.constraints.CreditCardNumber at this point. The issue is that they are not documented in the generated Swagger specs, but those constraints are functional. We will discuss this more in the subsequent article. For completeness, let's post a request. Press the Try it out button. Let's feed a valid input into the Request body section by copying the JSON below (we can also select "validRequest" from the Examples dropdown as shown below). JSON { "id": 0, "firstName": "string", "lastName": "string", "email": "abc@abc.com", "email1": "abc@abc.com", "age": 20, "creditCardNumber": "4111111111111111" } Upon pressing the blue Execute button, we see the below: This was only a brief introduction to the capabilities of the dependency: XML <dependency> <groupId>org.springdoc</groupId> <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId> <version>2.5.0</version> </dependency> Troubleshooting Tips Ensure prerequisites.
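If the Swagger UI does not load as expected, a quick way to confirm that the spec itself is being generated is to fetch springdoc's default JSON endpoint directly (the port here assumes the default server configuration): Shell curl http://localhost:8080/v3/api-docs If this returns a JSON document, spec generation is working and the problem is likely on the UI side.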
If using the Eclipse IDE, we might need to do a Maven update on the project after creating all the files. In the Swagger UI, if you are unable to access the “Schema” definitions link, it might be because you need to exit the “try it out” mode. Click on one or two Cancel buttons that might be visible. Source code Git Clone URL, Branch: springdoc-openapi-intro-update1.
In modern web applications, integrating with external services is a common requirement. However, when interacting with these services, it's crucial to handle scenarios where responses might be delayed or fail to arrive. Spring Boot, with its extensive ecosystem, offers robust solutions to address such challenges. In this article, we'll explore how to implement timeouts using three popular approaches: RestClient, RestTemplate, and WebClient, all essential components in Spring Boot. 1. Timeout With RestTemplate First, let's demonstrate setting a timeout using RestTemplate, a synchronous HTTP client. Java import org.springframework.web.client.RestTemplate; public class RestTemplateExample { public static void main(String[] args) { var restTemplate = new RestTemplate(); var url = "https://api.example.com/data"; var response = restTemplate.getForEntity(url, String.class); System.out.println(response.getBody()); } } In this snippet, we're performing a GET request to `https://api.example.com/data`. However, we haven't set any timeout, which means the request might hang indefinitely in case of network issues or server unavailability. To set a timeout, we need to configure RestTemplate with an appropriate `ClientHttpRequestFactory`, such as `HttpComponentsClientHttpRequestFactory`. Java import org.springframework.web.client.RestTemplate; import org.springframework.http.client.HttpComponentsClientHttpRequestFactory; public class RestTemplateTimeoutExample { public static void main(String[] args) { var url = "https://api.example.com/data"; var timeout = 5000; // Timeout in milliseconds var clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory(); clientHttpRequestFactory.setConnectTimeout(timeout); clientHttpRequestFactory.setConnectionRequestTimeout(timeout); var restTemplate = new RestTemplate(clientHttpRequestFactory); var response = restTemplate.getForEntity(url, String.class); System.out.println(response.getBody()); } } 2. Timeout With WebClient WebClient is a non-blocking, reactive HTTP client introduced in Spring WebFlux. Let's see how we can use it with a timeout: Java import org.springframework.web.reactive.function.client.WebClient; import java.time.Duration; public class WebClientTimeoutExample { public static void main(String[] args) { var client = WebClient.builder() .baseUrl("https://api.example.com") .build(); client.get() .uri("/data") .retrieve() .bodyToMono(String.class) .timeout(Duration.ofMillis(5000)) .subscribe(System.out::println); } } Here, we're using WebClient to make a GET request to the `/data` endpoint. The `timeout` operator specifies the maximum duration the request will wait for a response. 3. Timeout With RestClient RestClient is a synchronous HTTP client that offers a modern, fluent API, available since Spring Boot 3.2. New Spring Boot applications should prefer the RestClient API over RestTemplate.
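Before adding timeouts, here is a minimal sketch of the RestClient's fluent API with default settings; the URL is a placeholder, and no explicit timeout is applied yet: Java
import org.springframework.web.client.RestClient;

public class RestClientBasicExample {
    public static void main(String[] args) {
        // Create a RestClient with default settings (no explicit timeouts)
        var restClient = RestClient.create();

        // Perform a synchronous GET request and read the body as a String
        var body = restClient
                .get()
                .uri("https://api.example.com/data")
                .retrieve()
                .body(String.class);

        System.out.println(body);
    }
}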
Now, let's implement a RestClient with timeout using `HttpComponentsClientHttpRequestFactory`: Java import org.springframework.http.client.HttpComponentsClientHttpRequestFactory; import org.springframework.web.client.RestClient; public class RestClientTimeoutExample { public static void main(String[] args) { var factory = new HttpComponentsClientHttpRequestFactory(); factory.setConnectTimeout(5000); factory.setReadTimeout(5000); var restClient = RestClient .builder() .requestFactory(factory) .build(); var response = restClient .get() .uri("https://api.example.com/data") .retrieve() .toEntity(String.class); System.out.println(response.getBody()); } } In this code, we configure the timeouts on an HttpComponentsClientHttpRequestFactory and pass it to RestClient.builder(). By setting timeouts appropriately, we ensure that our application remains responsive even in scenarios where external services are slow or unresponsive. This proactive approach enhances the overall reliability and resilience of our Spring Boot applications. Conclusion In summary, handling timeouts is important for web apps to stay responsive and robust during interactions with external services. We explored three popular Spring Boot approaches for implementing timeouts effectively: RestTemplate, WebClient, and RestClient. By setting appropriate timeouts, developers can ensure applications gracefully handle delayed or failed responses, enhancing overall reliability and user experience under varying network conditions and service availability.
In this tutorial, I'll explore how to set up and utilize docTR, the open-source OCR (Optical Character Recognition) solution of the document parsing API startup Mindee. I’ll go through what you need to install docTR on Ubuntu. It accepts PDFs, images, and even a website URL as an input. In this example, I will parse a grocery store receipt. Let’s get started. Setting Up docTR on Ubuntu docTR is compatible with any Linux distribution, macOS, and Windows. It is also available as a Docker image. I will use Ubuntu 22.04 LTS (Jammy Jellyfish) for this tutorial. Hardware-wise, you don’t need anything specific, but if you want to do extensive testing, I recommend using a GPU instance; OVHcloud offers affordable options, with servers starting at less than a dollar per hour. Let’s start by installing Python. At the time of writing, docTR requires Python 3.8 (or higher). Shell sudo apt install -y python3 To avoid messing with system libraries, let’s use a virtual environment. Shell sudo apt install -y python3.10-venv python3 -m venv testing-Mindee-docTR Then we install the OpenGL Mesa 3D Graphics Library, used for the computer vision part of docTR. Shell sudo apt install -y libgl1-mesa-glx We install pango, which is a text layout engine library. Shell sudo apt-get install -y libpango-1.0-0 libpangoft2-1.0-0 Then, we install pip so that we can install docTR. Shell sudo apt install -y python3-pip Finally, we install docTR within our virtual environment. This version is specifically for PyTorch. If you choose to use TensorFlow, change the command accordingly. Shell testing-Mindee-docTR/bin/pip3 install "python-doctr[torch]" Using docTR Now that docTR is installed, let’s start playing with it. In this example, I will test it with a grocery store receipt. You can download the receipt using the command below. Shell wget "https://media.istockphoto.com/id/889405434/vector/realistic-paper-shop-receipt-vector-cashier-bill-on-white-background.jpg?s=612x612&w=0&k=20&c=M2GxEKh9YJX2W3q76ugKW23JRVrm0aZ5ZwCZwUMBgAg=" -O receipt.jpeg Create a testing-docTR.py file and insert the following code into it. Python from doctr.io import DocumentFile from doctr.models import ocr_predictor # Load the grocery receipt doc = DocumentFile.from_images("receipt.jpeg") # Load the OCR model model = ocr_predictor(pretrained=True) # Perform OCR result = model(doc) # Display the OCR result print(result.export()) Note that docTR uses a two-stage approach: First, it performs text detection to localize words. Then, it conducts text recognition to identify all characters in the word. The ocr_predictor function accepts additional parameters to select the text detection and recognition architecture. For simplicity, I used the default ones in this example. You can find information about other models on the docTR documentation. 
Reading a Receipt Using docTR Now you just need to run your Python script: Shell testing-Mindee-docTR/bin/python3 testing-docTR.py You will get an output such as the one below: JSON {"pages": [{"page_idx": 0, "dimensions": [612, 612], "orientation": {"value": null, "confidence": null}, "language": {"value": null, "confidence": null}, "blocks": [{"geometry": [[0.44140625, 0.1201171875], [0.548828125, 0.14453125]], "lines": [{"geometry": [[0.44140625, 0.1201171875], [0.548828125, 0.14453125]], "words": [{"value": "RECEIPT", "confidence": 0.9695481061935425, "geometry": [[0.44140625, 0.1201171875], [0.548828125, 0.14453125]]}]}], "artefacts": []}]}]} Note that I drastically shortened the JSON output for readability and only kept the part showing the “RECEIPT” word. Here is the JSON structure you’d be looking at without truncating the result. I have expanded the part of the tree that I kept in the JSON output. docTR provides plenty of information about the document, but the important part is how it breaks the document down into lines and, for each line, provides an array of the words it detected along with a degree of confidence. Here, we can see it spotted the word RECEIPT with a confidence of roughly 97%. docTR offers an efficient OCR solution that simplifies text recognition processes. Depending on the document type, you may need to change the text detection and text recognition architectures to improve accuracy. Comprehensive docTR documentation is available here. Considerations When Using docTR Deploying docTR entails certain complexities. First, you must create a dataset and train docTR to achieve satisfactory accuracy. This means dealing with data annotation on many images. Since OCR systems typically serve as backend services for other apps, it may be necessary to integrate docTR via an API and scale it according to the app’s needs. docTR does not provide this out of the box, but there are many open-source technologies that can help facilitate this step. Conclusion Document processing technologies have come a long way since the advent of OCR tools, which are limited to character recognition. Intelligent Document Processing (IDP) platforms represent the next step; they utilize OCR (such as docTR) along with additional layers of intelligence, like table reconstruction, document classification, and natural language understanding, to achieve better accuracy and precision. Additionally, for those seeking a scalable IDP solution without the complexities of data collection and model training, I recommend trying out Mindee’s latest solution, docTI. This training-free IDP solution leverages Large Language Models (LLMs) to eliminate the need for data collection, annotation, and the model training process. You can use the free-tier plan, configure an instance, and start querying the API in minutes.
Just as you can plug in a toaster, and add bread... You can plug this API Appliance into your database, and add rules and Python. Automation can provide: Remarkable agility and simplicity With all the flexibility of a framework Using conventional frameworks, creating a modern, API-based web app is a formidable undertaking. It might require several weeks and extensive knowledge of a framework. In this article, we'll use API Logic Server (open source, available here) to create it in minutes, instead of weeks or months. And, we'll show how it can be done with virtually zero knowledge of frameworks, or even Python. We'll even show how to add message-based integration. 1. Plug It Into Your Database Here's how you plug the ApiLogicServer appliance into your database: $ ApiLogicServer create-and-run --project-name=sample_ai --db-url=sqlite:///sample_ai.sqlite No database? Create one with AI, as described in the article, "AI and Rules for Agile Microservices in Minutes." It Runs: Admin App and API Instantly, you have a running system as shown on the split-screen below: A multi-page Admin App (shown on the left), supported by... A multi-table JSON:API with Swagger (shown on the right) So right out of the box, you can support: Custom client app dev Ad hoc application integration Agile collaboration, based on working software Instead of weeks of complex and time-consuming framework coding, you have working software, now. Containerize API Logic Server can run as a container or a standard pip install. In either case, scripts are provided to containerize your project for deployment, e.g., to the cloud. 2. Add Rules for Logic Instant working software is great: one command instead of weeks of work, and virtually zero knowledge required. But without logic enforcement, it's little more than a cool demo. Behind the running application is a standard project. Open it with your IDE, and: Declare logic with code completion. Debug it with your debugger. Instead of conventional procedural logic, the code above is declarative. Like a spreadsheet, you declare rules for multi-table derivations and constraints. The rules handle all the database access, dependencies, and ordering. The results are quite remarkable: The 5 spreadsheet-like rules above perform the same logic as 200 lines of Python. The backend half of your system is 40X more concise. Similar rules are provided for granting row-level access, based on user roles. 3. Add Python for Flexibility Automation and rules provide remarkable agility with very little in-depth knowledge required. However, automation always has its limits: you need flexibility to deliver a complete result. For flexibility, the appliance enables you to use Python and popular packages to complete the job. Below, we customize for pricing discounts and sending Kafka messages: Extensible Declarative Automation The screenshots above illustrate remarkable agility. This system might have taken weeks or months using conventional frameworks. But it's more than agility. The level of abstraction here is very high, bringing a level of simplicity that empowers you to create microservices - even if you are new to Python or frameworks such as Flask and SQLAlchemy. There are 3 key elements that deliver this speed and simplicity: Microservice automation: Instead of slow and complex framework coding, just plug into your database for an instant API and Admin App. Logic automation with declarative rules: Instead of tedious code that describes how logic operates, rules express what you want to accomplish. 
Extensibility: Finish the remaining elements with your IDE, Python, and standard packages such as Flask and SQLAlchemy. This automation appliance can provide remarkable benefits, empowering more people to do more.
Debugging effectively requires a nuanced approach, similar to using tongs that tightly grip the problem from both sides. While low-level tools have their place in system-level service debugging, today's focus shifts towards a more sophisticated segment of the development stack: advanced management tools. Understanding these tools is crucial for developers, as it bridges the gap between code creation and operational deployment, enhancing both efficiency and effectiveness in managing applications across extensive infrastructures. The Need for Advanced Management Tools in Development Development and DevOps teams utilize an array of tools, often perceived as complex or alien by developers. These tools, designed for scalability, enable the management of thousands of servers simultaneously. Such capabilities, although not always necessary for smaller scales, offer significant advantages in application management. Advanced management tools facilitate the navigation and control over multiple machines, making them indispensable for developers seeking to optimize application performance and reliability. Introduction to JMX (Java Management Extensions) One of the pivotal standards in application management is Java Management Extensions (JMX), which Java introduced to simplify the interaction with and management of applications. JMX allows both applications and the Java Development Kit (JDK) itself to expose critical information and functionalities, enabling external tools to manipulate these elements dynamically. Its significance cannot be overstated, and while ample resources are available for a deeper treatment, the basic activation steps are covered below. Setting up JMX JMX isn't enabled by default; to enable it, we need the following steps: 1. Modify the JVM Startup Parameters To enable JMX on a Java application, you must adjust the Java Virtual Machine (JVM) startup parameters. This involves adding specific flags to your application's startup command. The essential flags for enabling JMX are: -Dcom.sun.management.jmxremote: This flag activates the JMX remote management and monitoring. -Dcom.sun.management.jmxremote.port=<PORT>: Replace <PORT> with a specific port number where the JMX remote connection will listen. -Dcom.sun.management.jmxremote.ssl=false: This flag disables SSL for JMX connections. For development environments, SSL might be disabled for simplicity, but for production environments, consider enabling SSL for security. -Dcom.sun.management.jmxremote.authenticate=false: This flag disables authentication. Similar to SSL, authentication may be disabled in development but should be enabled in production to ensure secure access. 2. Restart Your Application With the JVM parameters set, restart your application. This will apply the new startup parameters, activating JMX. 3. Verify JMX Connectivity After restarting your application, you can verify that JMX is enabled by connecting to it using a JMX client such as JConsole, VisualVM, or a custom management application. Use the port number specified in the startup parameters to establish the connection. JMX Security Considerations While enabling JMX provides powerful management capabilities, it's crucial to consider security implications, especially when JMX is exposed over a network. When deploying applications in production, always enable SSL and authentication to protect against unauthorized access. Additionally, consider firewall rules and network policies to restrict JMX access to trusted clients.
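Putting the flags from step 1 together, a development-only startup command might look like the following (app.jar and port 9010 are placeholders for your own application and port; as noted above, SSL and authentication should only be disabled in development): java -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -jar app.jar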
Understanding MBeans Central to JMX are Management Beans (MBeans), which serve as the control points within an application. These beans enable developers to publish specific functionalities for runtime monitoring and configuration. The ability to export application metrics to dashboards through MBeans is particularly valuable, facilitating real-time decision-making based on accurate, up-to-date information. Furthermore, operations such as user management can be exposed through MBeans, enhancing administrative capabilities. Spring and Management Beans Spring Framework's Actuator module exemplifies the integration of management capabilities within development, offering extensive metrics and operational details. This integration propels applications to "production-ready" status, allowing developers to monitor and manage applications with unprecedented depth and efficiency. Tooling for JMX Management While JMX can be accessed through various web interfaces and administrative tools, command-line tooling offers a direct, efficient method for interacting with JMX-enabled applications on production servers. Tools like JMXTerm complement visual tools by providing a streamlined interface for rapid insights, especially in environments unfamiliar to the developer. Getting Started With JMXTerm JMXTerm is a powerful utility for managing JMX without the need for graphical visualization, ideal for quick diagnostics or high-level server insights. After enabling JMX on the JVM and setting up the necessary configurations, developers can connect to servers, explore different JMX domains, and manipulate MBeans directly from the command line. We can accomplish all of the following via visual tools and sometimes using a web interface. Normally, that's the approach I use. However, as a learning tool, I think JMXTerm is fantastic since it exposes things in a way that's consistent and verbose. If we can understand JMXTerm, the GUI version will be a walk in the park. We can launch JMXTerm from the command line; in my case, I used the following command: java -jar ~/Downloads/jmxterm-1.0.2-uber.jar --url localhost:30002 Once the connection is made, we can issue commands to JMX and retrieve information about the JVM or the application; e.g., I can list the domains, which you can think of as similar to "packages" or "modules," a way to organize the various beans: $>domains #following domains are available JMImplementation com.sun.management java.lang java.nio java.util.logging javax.cache jdk.management.jfr I can select a specific domain and thus perform future operations within said domain: $>domain java.util.logging #domain is set to java.util.logging Once inside the domain, I can select a specific bean and perform operations on it. For this, I need to first list the beans in the domain; in this case, there's only the logging bean. I can then select that bean using the bean command: $>beans #domain = java.util.logging: java.util.logging:type=Logging $>bean java.util.logging:type=Logging #bean is set to java.util.logging:type=Logging I can perform many operations on beans. Perhaps the most useful is the info command, which lets me query a bean. Notice that a bean can have attributes, which you can think of as object fields, and operations, which you can think of as methods.
There are also notifications, which you can think of as events: $>info #mbean = java.util.logging:type=Logging #class name = sun.management.ManagementFactoryHelper$PlatformLoggingImpl # attributes %0 - LoggerNames ([Ljava.lang.String;, r) %1 - ObjectName (javax.management.ObjectName, r) # operations %0 - java.lang.String getLoggerLevel(java.lang.String p0) %1 - java.lang.String getParentLoggerName(java.lang.String p0) %2 - void setLoggerLevel(java.lang.String p0,java.lang.String p1) #there's no notifications I can run operations with various arguments; e.g., I can get the logger level, set it, and then check that the logger level was indeed updated: $>run getLoggerLevel "org.apache.tomcat.websocket.WsWebSocketContainer" #calling operation getLoggerLevel of mbean java.util.logging:type=Logging with params [org.apache.tomcat.websocket.WsWebSocketContainer] #operation returns: $>run setLoggerLevel org.apache.tomcat.websocket.WsWebSocketContainer INFO #calling operation setLoggerLevel of mbean java.util.logging:type=Logging with params [org.apache.tomcat.websocket.WsWebSocketContainer, INFO] #operation returns: null $>run getLoggerLevel "org.apache.tomcat.websocket.WsWebSocketContainer" #calling operation getLoggerLevel of mbean java.util.logging:type=Logging with params [org.apache.tomcat.websocket.WsWebSocketContainer] #operation returns: INFO This is just the tip of the iceberg. We can get many things, such as Spring settings, internal VM information, etc. In this example, I query VM information directly from the console: $>domain com.sun.management #domain is set to com.sun.management $>beans #domain = com.sun.management: com.sun.management:type=DiagnosticCommand com.sun.management:type=HotSpotDiagnostic $>bean com.sun.management:type=HotSpotDiagnostic #bean is set to com.sun.management:type=HotSpotDiagnostic $>info #mbean = com.sun.management:type=HotSpotDiagnostic #class name = com.sun.management.internal.HotSpotDiagnostic # attributes %0 - DiagnosticOptions ([Ljavax.management.openmbean.CompositeData;, r) %1 - ObjectName (javax.management.ObjectName, r) # operations %0 - void dumpHeap(java.lang.String p0,boolean p1) %1 - javax.management.openmbean.CompositeData getVMOption(java.lang.String p0) %2 - void setVMOption(java.lang.String p0,java.lang.String p1) #there's no notifications Leveraging JMX in Debugging and Management JMX stands out as a robust tool for wiring management consoles, allowing developers to expose critical settings and metrics for their projects. Beyond its conventional use, JMX can be leveraged as part of the debugging process, serving as a pseudo-interface for triggering debugging scenarios or observing debugging sessions within the management UI. This approach not only simplifies the management of server applications but also enhances the developer's ability to diagnose and resolve issues efficiently. Exposing MBeans in Spring Boot Up until now, we discussed working with beans that are part of the JVM or Spring. But what about our own application logic? We can expose our own application's internal state so we (and our SREs) can review it in production and staging. Instead of building a custom control panel or logging everything, we can just expose the data. If a flag is problematic, we can change it in production; if we want to query a specific state, it too can be exposed. Spring Boot simplifies the management and monitoring of applications through its comprehensive support for JMX.
By leveraging Spring's infrastructure, we can easily expose an application's beans as JMX Managed Beans (MBeans), making them accessible for monitoring and management via JMX clients. Understanding Spring Boot JMX Support Spring Boot automatically configures JMX for you and exposes any beans annotated with @ManagedResource as JMX MBeans. This feature, combined with Spring Boot’s Actuator, provides a rich set of management endpoints, covering various aspects of the application, from metrics to thread dumps. Expose an MBean in Spring Boot To expose a bean, we need to take the following steps: 1. Define a Management Interface Create an interface that defines the operations and attributes you wish to expose via JMX. This interface should be annotated with JMX annotations such as @ManagedOperation for methods and @ManagedAttribute for fields or getter/setter methods. 2. Implement the MBean Implement the interface in a class that performs the actual logic for the operations and attributes defined. This class represents your MBean and can be a regular Spring-managed bean. 3. Annotate the Bean With @ManagedResource Annotate your MBean implementation class with @ManagedResource to indicate that it should be exposed as an MBean. You can specify the object name for the MBean in this annotation, which is how it will be identified in JMX clients. 4. Enable JMX in Spring Boot Ensure that JMX is enabled in your Spring Boot application. This is usually the default behavior, but you can explicitly enable it by setting spring.jmx.enabled=true in your application.properties or application.yml file. 5. Access the MBean via a JMX Client Once your application is running, you can access the exposed MBean through any standard JMX client, such as JConsole, VisualVM, or a custom client. Connect to the Spring Boot application's JMX domain, and you'll find the MBean you exposed, ready for interaction. Example: Exposing a Simple Configuration MBean import org.springframework.jmx.export.annotation.ManagedAttribute; import org.springframework.jmx.export.annotation.ManagedOperation; import org.springframework.jmx.export.annotation.ManagedResource; import org.springframework.stereotype.Component; // Define a management interface public interface ConfigurationMBean { @ManagedAttribute String getApplicationName(); @ManagedOperation void updateApplicationName(String name); } // Implement the MBean @Component @ManagedResource(objectName = "com.example:type=Configuration") public class Configuration implements ConfigurationMBean { private String applicationName = "MyApp"; @Override public String getApplicationName() { return applicationName; } @Override public void updateApplicationName(String name) { this.applicationName = name; } } In this example, the Configuration class is annotated with @ManagedResource, making it available as an MBean with operations and attributes accessible via JMX clients. Exposing MBeans in Spring Boot is a powerful feature that enhances the management and monitoring capabilities of applications. By following the steps outlined above, developers can provide external tools with dynamic access to application internals, offering a window into the runtime behavior and allowing for adjustments on the fly. This not only aids in debugging and performance tuning but also aligns with best practices for building manageable, robust applications. Final Word Advanced management tools, particularly JMX and its integration with frameworks like Spring, offer developers powerful capabilities for application monitoring, configuration, and debugging. By understanding and utilizing these tools, developers can achieve a deeper level of control over their applications, enhancing both performance and reliability.
Whether through graphical interfaces or command-line utilities like JMXTerm, the dynamic manipulation and monitoring of applications in runtime environments open new avenues for effective software development and management. As the gap between development and operations continues to narrow, mastering these advanced tools becomes essential for any developer looking to excel in today's fast-paced technological landscape.
A BPMN workflow engine based on the Jakarta EE framework forms a powerful and effective combination for developing enterprise applications with a focus on business process management. Both Jakarta EE and BPMN 2.0 are standardized and widely supported. Jakarta EE provides a scalable and secure foundation for building enterprise applications with robust business process management capabilities. This enables developers to leverage the strengths of both technologies to create efficient, interoperable, and maintainable BPM solutions. In the following sections, I will explain these aspects in more detail. Standardization Jakarta EE provides a standardized platform for building enterprise applications, offering a set of specifications and APIs. This standardization ensures portability and interoperability across different Jakarta EE-compliant application servers. This allows developers to work within a unified framework without the need to learn proprietary techniques. This not only streamlines the development process but also promotes a broader ecosystem where developers can focus on leveraging the standardized features, thus enhancing the overall efficiency and maintainability of the applications. BPMN 2.0, on the other hand, is an industry-standard notation for modeling business processes. It provides a common language for business analysts and developers to collaborate on defining and refining business processes. This makes it easy for developers, architects, and non-technical teams to talk about the same things in a common language. Moreover, BPMN facilitates interoperability among various BPMN modeling tools. This compatibility ensures that models created in one tool can be seamlessly transferred and further developed in another, fostering a collaborative and flexible environment for business process modeling. BPMN effectively builds the bridge between the business and IT departments while promoting a standardized and interoperable approach to process modeling. Integration Capabilities The integration of business applications into the existing IT infrastructure is essential for a sustainable architecture. Jakarta EE is designed to support the integration of various enterprise components and systems, employing a robust architecture that facilitates seamless communication and collaboration. Technologies like the Java API for RESTful Web Services (JAX-RS), Java Message Service (JMS), or Jakarta Security 3.0 provide essential building blocks for developing scalable and interoperable enterprise applications. These technologies empower BPM systems to effectively handle diverse interactions with different platforms, applications, databases, and services. Utilizing XML as its foundation, BPMN 2.0 seamlessly integrates with Jakarta EE components like the Jakarta XML Binding 4.0 API. Leveraging the BPMN 2.0 extension mechanism, a custom business process can be augmented with the technical details about integration platforms and services within a microservices architecture. This capability facilitates the orchestration of business processes spanning multiple systems and services, enabling a cohesive and efficient integration framework. Transaction Management Another aspect I want to talk about is transactions. Transactions are an essential prerequisite for the execution of business processes. Jakarta EE provides a robust transaction management framework that ensures the reliability and integrity of business processes.
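To make this concrete, here is a minimal sketch of how a single workflow step can be made atomic using the standard jakarta.transaction.Transactional annotation; the service class and its persistence calls are hypothetical: Java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.transaction.Transactional;

@ApplicationScoped
public class WorkflowTaskService {

    // Both updates run in one container-managed transaction:
    // either every step commits, or the whole unit rolls back.
    @Transactional
    public void completeTask(long processInstanceId) {
        updateTaskStatus(processInstanceId); // hypothetical persistence call
        writeAuditEntry(processInstanceId);  // hypothetical persistence call
    }

    private void updateTaskStatus(long id) { /* update workflow state */ }

    private void writeAuditEntry(long id) { /* record the transition */ }
}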
In a BPMN workflow system, multiple tasks and events can often orchestrate a single business transaction. Jakarta EE’s robust transaction management capabilities help to coordinate and synchronize these steps, ensuring that either all of them succeed or none do. This atomicity is crucial for maintaining data consistency and reliability in complex business scenarios. Jakarta EE’s transaction management support thus plays a fundamental role in the development of dependable business applications by providing a framework for handling transactions in a coordinated and fault-tolerant manner. Scalability and Performance When we talk about scalability and performance, we usually only think of horizontal scaling in the form of more server capacity. But a well-scalable architecture is also characterized by the optimal use of available system resources. With its micro-container architecture, Jakarta EE offers features for building scalable and high-performance enterprise applications, a critical aspect for BPM systems that often need to manage a substantial volume of concurrent processes and user interactions. Jakarta EE application servers also extend to modern cloud environments, allowing them to be seamlessly deployed in a cluster configuration within a cloud infrastructure. This cloud-ready nature of Jakarta EE enhances the flexibility and scalability of BPM systems, enabling them to efficiently handle varying workloads and ensuring optimal performance. The ability to run Jakarta EE application servers in a cluster in cloud environments underscores its relevance in supporting the development of robust and scalable BPMN-driven applications tailored to contemporary technological landscapes. Security Security is an ongoing topic, especially for business applications. Jakarta EE includes robust security features, addressing concerns such as authentication, authorization, and secure communication. These features are not only vital but also pivotal for building secure BPM systems, especially given the sensitive nature of the business processes and data they often handle. In the context of BPMN applications, the processing of trusted data emerges as an exceptionally crucial aspect. Jakarta EE’s security mechanisms play a paramount role in guaranteeing that only authorized users have access to specific processes and data, providing a resilient defense against unauthorized access or potential security breaches. This emphasis on processing trusted data underscores Jakarta EE’s commitment to fostering a secure environment within BPM systems, instilling confidence in the integrity and confidentiality of the information being managed. Platforms and Tooling Finally, let's talk about available platforms and tools. Jakarta EE has a rich ecosystem of tools, libraries, and frameworks that can be leveraged for the development of BPMN enterprise applications. Widely used open-source server platforms for building Jakarta EE applications include JBoss WildFly, Payara/GlassFish, and Open Liberty, which are all prepared for operation in cloud environments. Applications can be seamlessly exchanged between these platforms. For the modeling of BPMN diagrams, a variety of commercial and open-source tools are available. One free BPMN modeling tool is Open-BPMN, which can be run on different IDEs such as Visual Studio Code, Eclipse IDE, and Eclipse Theia, as well as a standalone web application.
Open-BPMN can be utilized by business analysts to design top-level business processes, as well as by architects and developers to model the technical details of complex processing logic. Built on the Eclipse Graphical Language Server Platform (GLSP), Open-BPMN provides an extension mechanism that allows the customization of the BPMN modeling platform to individual application requirements within a vertical domain. The use of the BPMN 2.0 extension mechanism ensures the continued validity of the BPMN 2.0 standard. Imixs-Workflow is an open-source BPMN workflow engine based on the Jakarta EE framework. In its latest version, it supports Jakarta EE 10 and includes a BPMN modeling extension for Open-BPMN. Imixs-Workflow provides a comprehensive set of APIs and plug-ins that allow the integration of BPMN 2.0 into any business application. The workflow engine supports a powerful multi-level security concept with fine-grained access control seamlessly integrated into the Jakarta EE Security API. With the event-driven modeling concept, human-centric workflows can be developed in less time. Summary In summary, the integration of a BPMN workflow engine with the Jakarta EE framework establishes a robust foundation for developing enterprise applications centered on business process management. The collaboration between Jakarta EE and BPMN 2.0, characterized by standardization and broad support, not only ensures the creation of efficient, interoperable, and maintainable BPM solutions but also signifies a commitment to industry standards.
Netty is a powerful, asynchronous, event-driven networking framework for building high-performance, scalable applications. It simplifies the development of network applications by providing an easy-to-use API and robust abstractions for handling various networking protocols and data formats. Key features and benefits of Netty include: Asynchronous and event-driven: Netty utilizes a non-blocking, event-driven architecture that allows it to handle thousands of concurrent connections with low latency. This makes it ideal for building high-performance applications that require scalability. Modular and extensible: Netty is designed with a modular architecture that allows developers to customize and extend its functionality easily. It provides a rich set of components, known as "handlers," that can be combined to build complex networking pipelines tailored to specific use cases. Protocol agnostic: Netty abstracts away the complexities of various networking protocols, making it easy to develop applications that communicate using protocols such as HTTP, HTTPS, WebSocket, TCP, UDP, and more. Byte buffers and zero-copy: Netty uses efficient byte buffer abstractions and supports zero-copy mechanisms. Zero-copy is a feature currently available only with the NIO and epoll transports. It allows you to quickly and efficiently move data from a file system to the network without copying from kernel space to user space, which can significantly improve performance in protocols such as FTP or HTTP. Note that zero-copy is not usable with file systems that implement data encryption or compression, since only the raw content of a file can be transferred; it works only if the file system already stores the data in its final (e.g., encrypted) form. Support for SSL/TLS: Netty provides built-in support for SSL/TLS encryption, allowing developers to secure their network communications with ease. Components of Netty 1. ChannelInitializer ChannelInitializer is an abstract class in Netty used for initializing a new Channel. When a new connection is accepted by the server, Netty creates a new Channel, and the ChannelInitializer is invoked to set up the initial configuration for this Channel. This typically involves adding handlers to the ChannelPipeline. The childHandler() method in the ServerBootstrap class is used to specify the ChannelInitializer for the child channels. 2. ChannelPipeline ChannelPipeline represents a sequence of channel handlers that process inbound and outbound data for a Channel. When a message (such as a byte buffer) travels through a Channel, it passes through the pipeline, where each handler can process or modify the message as needed. The output of one handler becomes the input to the next handler. The pipeline can be changed dynamically from within a handler by acquiring its ChannelHandlerContext. 3. ChannelHandler ChannelHandler is an interface in Netty that defines the behavior of components that can be added to a ChannelPipeline to process inbound and outbound data. Handlers can perform various tasks such as encoding/decoding data, handling protocol-specific logic, performing business logic, and managing the lifecycle of the Channel. There are different types of ChannelHandler interfaces in Netty, such as ChannelInboundHandler for handling inbound data, ChannelOutboundHandler for handling outbound data, and ChannelDuplexHandler for handling both inbound and outbound data. Netty provides many abstract handler implementations for hooking into specific events. 4. ChannelHandlerContext ChannelHandlerContext represents the context in which a ChannelHandler is invoked within a ChannelPipeline.
It provides access to the Channel, the ChannelHandler itself, and various operations for interacting with the pipeline. ChannelHandlerContext allows handlers to send messages downstream in the pipeline, forward events to other handlers, modify the pipeline dynamically (e.g., add/remove handlers), and manage the lifecycle of the Channel. When a ChannelHandler is added to a ChannelPipeline, it’s assigned a ChannelHandlerContext, which represents the binding between the ChannelHandler and the ChannelPipeline. Although this object can be used to obtain the underlying Channel, it’s mostly utilized to write outbound data. There are two ways of sending messages in Netty. You can write directly to the Channel or write to a ChannelHandlerContext object associated with a ChannelHandler. The former approach causes the message to start from the tail of the ChannelPipeline; the latter causes the message to start from the next handler in the ChannelPipeline. 5. EventLoopGroup A group of event loops used by Netty to handle I/O operations such as accepting incoming connections, reading from sockets, and writing to sockets. It manages a pool of event loops, each of which runs on a separate thread and processes I/O events for one or more channels, and it provides methods to create, manage, and shut down event loops. There are two distinct EventLoopGroup instances used for different purposes in a server: Boss EventLoopGroup: This group is responsible for accepting incoming connections on the server's listening socket. A small number of threads (usually one) is sufficient for this group, as all it does is accept a connection and hand it over to the worker group. Worker EventLoopGroup: This group handles the actual processing of accepted connections. More threads should be allocated to this group, depending on processor capacity. Some of the implementations are NioEventLoopGroup for non-blocking I/O using NIO, OioEventLoopGroup for blocking I/O using the old I/O model, and EpollEventLoopGroup, which is specific to Linux. 6. EventLoop EventLoop is the heart of Netty's asynchronous event-driven architecture. It represents a single-threaded event loop that processes I/O events for one or more channels. Each EventLoop runs on its own thread and executes tasks in a loop, such as accepting connections, reading data from sockets, writing data to sockets, and executing user-defined tasks. A single EventLoop is used throughout the lifecycle of a Channel, avoiding the need for synchronization. It handles the lifecycle of channels, including registration, deregistration, and closing, and manages a queue of tasks (also known as the task queue), executing them one by one in the order they were submitted. 7. Channel The Channel abstraction hides the underlying complexities of different transport protocols (e.g., TCP, UDP) and network sockets. It provides a unified interface for performing I/O operations regardless of the transport protocol being used. NioServerSocketChannel is a specific implementation of the Channel interface provided by Netty, used for server-side TCP socket communication based on the Java NIO (New I/O) framework. It uses a selectable channel, meaning it can be registered with a selector to receive notification of I/O events such as incoming connection requests. This allows the server to efficiently manage multiple channels using a single thread. It also provides various configuration options for setting up the socket, such as setting the receive buffer size, send buffer size, and socket timeout.
In addition to this, Netty provides various other channel implementations for different types of communication protocols and use cases. Some of the commonly used channel implementations include NioSocketChannel: Represents a client-side TCP socket channel based on the Java NIO framework. It is used for establishing outgoing connections to remote servers. LocalServerChannel: Represents a server-side local (in-process) communication channel. It is used for handling local connections within the same JVM process. LocalChannel: Represents a client-side local (in-process) communication channel. It is used for establishing outgoing local connections within the same JVM process. EpollServerSocketChannel: Represents a server-side TCP socket channel optimized for Linux systems using the epoll mechanism for I/O event multiplexing. EpollSocketChannel: Represents a client-side TCP socket channel optimized for Linux systems using the epoll mechanism. KQueueServerSocketChannel: Represents a server-side TCP socket channel optimized for BSD-based systems (such as macOS) using the kqueue mechanism for I/O event multiplexing. KQueueSocketChannel: Represents a client-side TCP socket channel optimized for BSD-based systems using the kqueue mechanism. DatagramChannel: Represents a UDP (User Datagram Protocol) channel for both server-side and client-side communication. These are just a few examples of channel implementations provided by Netty. Netty offers a wide range of channel implementations to support various communication protocols, performance optimizations, and platform-specific features. Developers can choose the appropriate channel implementation based on their specific requirements and deployment environments. Let us write a simple server that echoes back the received message. Java @Sharable public class EchoServerHandler extends ChannelInboundHandlerAdapter { @Override public void channelRead(ChannelHandlerContext ctx, Object msg) { ByteBuf in = (ByteBuf) msg; ctx.write(in); } @Override public void channelReadComplete(ChannelHandlerContext ctx) { ctx.writeAndFlush(Unpooled.EMPTY_BUFFER) .addListener(ChannelFutureListener.CLOSE); } @Override public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { cause.printStackTrace(); ctx.close(); } } Server bootstrap: Java public class EchoServer { public static void main(String[] args) throws Exception { new EchoServer().start(); } public void start() throws Exception { final EchoServerHandler serverHandler = new EchoServerHandler(); EventLoopGroup bossGroup = new NioEventLoopGroup(1); // Typically fewer threads for boss EventLoopGroup workerGroup = new NioEventLoopGroup(4); try { ServerBootstrap b = new ServerBootstrap(); b.group(bossGroup, workerGroup) .channel(NioServerSocketChannel.class) .localAddress(new InetSocketAddress(8089)) .childHandler(new ChannelInitializer<SocketChannel>(){ @Override public void initChannel(SocketChannel ch) throws Exception { ch.pipeline().addLast(serverHandler); } }); ChannelFuture f = b.bind().sync(); f.channel().closeFuture().sync(); } finally { bossGroup.shutdownGracefully().sync(); workerGroup.shutdownGracefully().sync(); } } } Writing an echo client which prints received messages to the console.
Java
@Sharable
public class EchoClientHandler extends SimpleChannelInboundHandler<ByteBuf> {

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        ctx.writeAndFlush(Unpooled.copiedBuffer("Netty Client!", CharsetUtil.UTF_8));
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, ByteBuf in) {
        System.out.println("Client received: " + in.toString(CharsetUtil.UTF_8));
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

Client Bootstrap:

Java
public class EchoClient {

    private final String host;
    private final int port;

    public EchoClient(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public void start() throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
             .channel(NioSocketChannel.class)
             .remoteAddress(new InetSocketAddress(host, port))
             .handler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ch.pipeline().addLast(new EchoClientHandler());
                 }
             });
            ChannelFuture f = b.connect().sync();
            f.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully().sync();
        }
    }

    public static void main(String[] args) throws Exception {
        // Connect to the echo server started on port 8089 above.
        new EchoClient("localhost", 8089).start();
    }
}

All I/O operations in Netty are asynchronous and return a ChannelFuture or ChannelPromise. A ChannelFuture can be supplied with a callback, which will be executed once the asynchronous operation has completed.

Resource Management

Whenever you act on data by calling ChannelInboundHandler.channelRead() or ChannelOutboundHandler.write(), you need to ensure that there are no resource leaks. Netty uses reference counting to handle pooled ByteBufs, so it is important to adjust the reference count after you have finished using a ByteBuf; a short sketch illustrating explicit release follows at the end of this article. Note that Netty's ByteBuf is different from NIO's ByteBuffer: a ByteBuf keeps two separate indices, one for reads and one for writes, and each index moves forward with its corresponding operation. This eliminates the need to call flip() on the buffer, as you would with a ByteBuffer, when switching between reading and writing.

Netty provides the class ResourceLeakDetector, which will sample about 1% of your application's buffer allocations to check for memory leaks. Set the property below to enable this sampling:

java -Dio.netty.leakDetectionLevel=ADVANCED

To summarize, Netty empowers developers to build high-performance and scalable networked applications with ease, leveraging its rich feature set, flexible architecture, and extensive ecosystem of libraries and tools.
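To make the reference-counting guidance above concrete, here is a minimal, hypothetical terminal handler (not part of the echo example) that consumes a message and explicitly releases the pooled buffer:

Java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;
import io.netty.util.ReferenceCountUtil;

public class DiscardHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            // Consume the message without forwarding it down the pipeline.
            ByteBuf in = (ByteBuf) msg;
            System.out.println("Discarding: " + in.toString(CharsetUtil.UTF_8));
        } finally {
            // This handler is the last consumer, so it must decrement the
            // reference count; otherwise the pooled ByteBuf leaks.
            ReferenceCountUtil.release(msg);
        }
    }
}

Note that SimpleChannelInboundHandler, used in the echo client above, performs this release automatically after channelRead0() returns.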
Since the launch and wide adoption of ChatGPT near the end of 2022, we've seen a storm of news about tools, products, and innovations stemming from large language models (LLMs) and generative AI (GenAI). While many tech fads come and go within a few years, it's clear that LLMs and GenAI are here to stay. Do you ever wonder about all the tooling going on in the background behind many of these new tools and products? You might even ask yourself how these tools, used by both developers and end users, are run in production. When you peel back the layers of many of these tools and applications, you're likely to come across LangChain, Python, and Heroku. These are the pieces we're going to play around with in this article. We'll look at a practical example of how AI/ML developers use them to build and easily deploy complex LLM pipeline components.

Demystifying LLM Workflows and Pipelines

Machine learning pipelines and workflows can seem like a black box for those new to the AI world. This is even more the case with LLMs and their related tools, as they're such (relatively) new technologies. Working with LLMs can be challenging, especially as you're looking to create engineering-hardened and production-ready pipelines, workflows, and deployments. With new tools, rapidly changing documentation, and limited instructions, knowing where to start or what to use can be hard. So, let's start with the basics of LangChain and Heroku. The documentation for LangChain tells us this: LangChain is a framework for developing applications powered by language models. Meanwhile, Heroku describes itself this way: Heroku is a cloud platform that lets companies build, deliver, monitor, and scale apps. If we put this in the context of building an LLM application, then LangChain and Heroku are a match made in heaven. We need a well-tested and easy-to-use framework (LangChain) to build our LLM application upon, and then we need a way to deploy and host that application (Heroku). Let's look into each of these technologies in more detail.

Diving Into LangChain

Let's briefly discuss how LangChain is used. LangChain is a framework that assists developers in building applications based on LLM models and use cases. It has support for Python, JavaScript, and TypeScript. For example, let's say we were building a tool that generates reports based on user input or automates customer support responses. LangChain acts as the scaffolding for our project, providing the tools and structure to efficiently integrate language models into our solution. Within LangChain, we have several key components:

Agent: The agent is the component that interacts with the language model to perform tasks based on our requirements. This is the brain of our application, using the capabilities of language models to understand and generate text.

Chains: These are sequences of actions or processes that our agent follows to accomplish a task. For example, if we were automating customer support, a chain might include accepting a customer query, finding relevant information, and then crafting a response.

Templates: Templates provide a way to structure the outputs from the language model. For example, if our application generates reports, then we would leverage a template that helps format these reports consistently, based on the model's output.

LangServe: This enables developers to deploy and serve up LangChain applications as a REST API.
LangSmith: This tool helps developers evaluate, test, and refine the interactions in their language model applications to get them ready for production.

LangChain is a widely adopted framework for building AI and LLM applications, and it's easy to see why: LangChain provides the functionality to build and deploy products end-to-end.

Diving Into Heroku

Heroku is best known as a cloud platform as a service (PaaS) that makes it incredibly simple to deploy applications to the cloud. Developers often want to focus solely on code and implementation. When you're already dealing with complex data pipelines and LLM-based applications, you likely don't have the resources or expertise to deal with infrastructure concerns like servers, networks, and persistent storage. With the ability to easily deploy your apps through Heroku, the major hurdle of productionizing your projects is handled effortlessly.

Building With LangChain

For a better understanding of how LangChain is used in an LLM application, let's work through some example problems to make the process clear. In general, we would chain together the following pieces to form a single workflow for an LLM chain:

1. Start with a prompt template to generate a prompt based on parameters from the user.
2. Add a retriever to the chain to retrieve data that the language model was not originally trained on (for example, from a database of documents).
3. Add a conversation retrieval chain to include chat history, so that the language model has context for formulating a better response.
4. Add an agent for interacting with an actual LLM.

LangChain lets us "chain" together the processes that form the base of an LLM application. This makes our implementation easy and approachable. Let's work with a simple example. In our example, we'll work with OpenAI. We'll craft our prompt this way:

1. Tell OpenAI to take on the persona of an encouraging fitness trainer.
2. Input a question from the end user.

To keep it nice and simple, we won't worry about chaining in the retrieval of external data or chat history. Once you get the hang of LangChain, adding other capabilities to your chain is straightforward. On our local machine, we activate a virtual environment. Then, we install the packages we need:

Shell
(venv) $ pip install langchain langchain_openai

We'll create a new file called main.py. Our basic Python code looks like this:

Python
import os
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

my_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly and encouraging fitness trainer."),
    ("user", "{input}")
])

llm = ChatOpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"))

chain = my_prompt | llm

That's it! In this basic example, we've used LangChain to chain together a prompt template and our OpenAI agent. To use this from the command line, we would add the following code:

Python
user_input = input("Ask me a question related to your fitness goals.\n")

response = chain.invoke({ "input": user_input })
print(response)

Let's test out our application from the command line.

Shell
(venv) $ OPENAI_API_KEY=insert-key-here python3 main.py
Ask me a question related to your fitness goals.
How do I progress toward holding a plank for 60 seconds?
content="That's a great goal to work towards! To progress towards holding a plank for 60 \
seconds, it's important to start with proper form and gradually increase the duration of \
your plank holds. Here are some tips to help you progress:\n\n1. Start with shorter \
durations: Begin by holding a plank for as long as you can with good form, even if it's \
just for a few seconds. Gradually increase the time as you get stronger.\n\n2. Focus on \
proper form: Make sure your body is in a straight line from head to heels, engage your \
core muscles, and keep your shoulders directly over your elbows.\n\n3. Practice regularly: \
Aim to include planks in your workout routine a few times a week. Consistency is key to \
building strength and endurance.\n\n4. Mix it up: Try different variations of planks, such \
as side planks or plank with leg lifts, to work different muscle groups and keep your \
workouts challenging.\n\n5. Listen to your body: It's important to push yourself, but also \
know your limits. If you feel any pain or discomfort, stop and rest.\n\nRemember, progress \
takes time and patience. Celebrate each milestone along the way, whether it's holding a \
plank for a few extra seconds or mastering a new plank variation. You've got this!"

(I've added line breaks above for readability.)

That's a great start. But it would be nice if the output was formatted to be a bit more human-readable. To do that, we simply need to add an output parser to our chain. We'll use StrOutputParser.

Python
import os
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

my_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly and encouraging fitness trainer."),
    ("user", "{input}")
])

llm = ChatOpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"))
output_parser = StrOutputParser()  # parses the model's message into a plain string

chain = my_prompt | llm | output_parser

user_input = input("Ask me a question related to your fitness goals.\n")

response = chain.invoke({ "input": user_input })
print(response)

Now, at the command line, our application looks like this:

Shell
(venv) $ OPENAI_API_KEY=insert-key-here python3 main.py
Ask me a question related to your fitness goals.
How do I learn how to do a pistol squat?
That's a great goal to work towards! Pistol squats can be challenging but with practice and patience, you can definitely learn how to do them. Here are some steps you can follow to progress towards a pistol squat:

1. Start by improving your lower body strength with exercises like squats, lunges, and step-ups.
2. Work on your balance and stability by practicing single-leg balance exercises.
3. Practice partial pistol squats by lowering yourself down onto a bench or chair until you can eventually perform a full pistol squat.
4. Use a support like a TRX band or a pole to assist you with balance and lowering yourself down until you build enough strength to do it unassisted.

Remember to always warm up before attempting pistol squats and listen to your body to avoid injury. And most importantly, stay positive and patient with yourself as you work towards mastering this challenging exercise. You've got this!

The LLM response is formatted for improved readability now. For building powerful LLM applications, our chains would be much more complex than this. But that's the power and simplicity of LangChain. The framework allows for the modularity of logic specific to your needs so you can easily chain together complex workflows. Now that we have a simple LLM application built, we still need the ability to deploy, host, and serve our application to make it useful. As developers focused on app building rather than infrastructure, we turn to LangServe and Heroku.
Serving With LangServe

LangServe helps us interact with a LangChain chain through a REST API. To write the serving portion of a LangChain LLM application, we need three key components:

1. A valid chain (like what we built above)
2. An API application framework (such as FastAPI)
3. Route definitions (just as we would have for building any sort of REST API)

The LangServe docs provide some helpful examples of how to get up and running. For our example, we just need to use FastAPI to start up an API server and call add_routes() from LangServe to make our chain accessible via API endpoints. Along with this, we'll need to make some minor modifications to our existing code:

1. We'll remove the use of the StrOutputParser. This will give callers of our API flexibility in how they want to format and use the output.
2. We won't prompt for user input from the command line. The API call request will provide the user's input.
3. We won't call chain.invoke() because LangServe will make this part of handling the API request.

We make sure to add the FastAPI and LangServe packages to our project:

Shell
(venv) $ pip install langserve fastapi

Our final main.py file looks like this:

Python
import os
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from fastapi import FastAPI
from langserve import add_routes

my_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly and encouraging fitness trainer."),
    ("user", "{input}")
])

llm = ChatOpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"))

chain = my_prompt | llm

app = FastAPI(title="Fitness Trainer")

add_routes(app, chain)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="localhost", port=8000)

On my local machine (Ubuntu 20.04.6 LTS) running Python 3.8.10, I also needed to install some additional packages to get rid of some warnings. You might not need to do this on your machine.

Shell
(venv) $ pip install sse_starlette pydantic==1.10.13

Now, we start up our server:

Shell
(venv) $ OPENAI_API_KEY=insert-key-here python3 main.py
INFO: Started server process [629848]
INFO: Waiting for application startup.
LANGSERVE: Playground for chain "/" is live at:
LANGSERVE:  │
LANGSERVE:  └──> /playground/
LANGSERVE:
LANGSERVE: See all available routes at /docs/
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)

Ooooh… nice! In the browser, we can go to http://localhost:8000/docs. This is what we see: LangServe serves up an API docs page that uses a Swagger UI! These are the endpoints now available to us through LangServe. We could send a POST request to the invoke/ endpoint. But LangServe also gives us a playground/ endpoint with a web interface to work with our chain directly. We provide an input and click Start. Here's the result.

It's worth stressing the importance of having APIs in the context of LLM application workflows. If you think about it, most use cases of LLMs and applications built on top of them can't rely on local models and resources for inference. This neither makes sense nor scales well. The real power of LLM applications is the ability to abstract away the complex workflow we've described so far. We want to put everything we've done behind an API so the use case can scale and others can integrate it. This is only possible if we have an easy option to host and serve these APIs. And that's where Heroku comes in.

Deploying to Heroku

Heroku is the key, final part of our LLM application implementation.
We have LangChain to piece together our workflow, and LangServe to serve it up as a useful REST API. Now, instead of setting up complex resources manually to host and serve traffic, we turn to Heroku for the simple deployment of our application. After setting up a Heroku account, we're nearly ready to deploy. Let's walk through the steps.

1. Create a New Heroku App

Using the Heroku CLI, we log in and create a new app.

Shell
$ heroku login
$ heroku create my-langchain-app

2. Set Config Variables

Next, we need to set the OPENAI_API_KEY environment variable in our Heroku app environment.

Shell
$ heroku config:set OPENAI_API_KEY=replace-with-your-openai-api-key

3. Create Config Files for Python Application Deployment

To let Heroku know what we need for our Python application to run, we need to create three simple files:

Procfile: Declares what command Heroku should execute to start our app
requirements.txt: Specifies the Python package dependencies that Heroku will need to install
runtime.txt: Specifies the exact version of the Python runtime we want to use for our app

These files are quick and easy to create. Each one goes into the project's root folder. To create the Procfile, we run this command:

Shell
$ echo 'web: uvicorn main:app --host=0.0.0.0 --port=${PORT}' > Procfile

This tells Heroku to run uvicorn, which is a web server implementation in Python. For requirements.txt, we can use the pip freeze command to output the list of installed packages.

Shell
$ pip freeze > requirements.txt

Lastly, for runtime.txt, we will use Python 3.11.8.

Shell
$ echo 'python-3.11.8' > runtime.txt

With these files in place, our project root folder should look like this:

Shell
$ tree
.
├── main.py
├── Procfile
├── requirements.txt
└── runtime.txt

0 directories, 4 files

We commit all of these files to the GitHub repository.

4. Connect Heroku to the GitHub Repo

The last thing to do is create a Heroku remote for our GitHub repo and then push our code to the remote. Heroku will detect the push of new code and then deploy that code to our application.

Shell
$ heroku git:remote -a my-langchain-app
$ git push heroku main

When our code is pushed to the Heroku remote, Heroku builds the application, installs dependencies, and then runs the command in our Procfile. The final result of our git push command looks like this:

Shell
…
remote: -----> Discovering process types
remote:        Procfile declares types -> web
remote:
remote: -----> Compressing...
remote:        Done: 71.8M
remote: -----> Launching...
remote:        Released v4
remote:        https://my-langchain-app-ea95419b2750.herokuapp.com/ deployed to Heroku
remote:
remote: Verifying deploy... done.

This shows the URL for our Heroku app. In our browser, we visit https://my-langchain-app-ea95419b2750.herokuapp.com/playground. We also check out our Swagger UI docs page at https://my-langchain-app-ea95419b2750.herokuapp.com/docs. And just like that, we're up and running!

This process is the best way to reduce developer time and overhead when working on large, complex LLM pipelines with LangChain. The ability to take APIs built with LangChain and seamlessly deploy to Heroku with a few simple command line arguments is what makes the pairing of LangChain and Heroku a no-brainer.

Conclusion

Businesses and developers today are right to ride the wave of AI and LLMs. There's so much room for innovation and new development in these areas. However, the difference between the successes and failures will depend a lot on the toolchain they use to build and deploy these applications.
Using the LangChain framework makes the process of building LLM-based applications approachable and repeatable. But, implementation is only half the battle. Once your application is built, you need the ability to easily and quickly deploy those application APIs into the cloud. That’s where you’ll have the advantage of faster iteration and development, and Heroku is a great way to get you there.
Angular, a powerful framework for building dynamic web applications, is known for its component-based architecture. However, one aspect that often puzzles new developers is the fact that Angular components do not have a display: block style by default. This article explores the implications of this design choice, its impact on web development, and how developers can effectively work with it. The world of front-end development is replete with frameworks that aim to provide developers with robust tools to build interactive and dynamic web applications. Among these, Angular stands out as a powerful platform, known for its comprehensive approach to constructing applications' architecture. Particularly noteworthy is the way Angular handles components, the fundamental building blocks of Angular applications.

Understanding Angular Components

In Angular, components are the fundamental building blocks that encapsulate data binding, logic, and template rendering. They play a crucial role in defining the structure and behavior of your application's interface.

Definition and Role

A component in Angular is a TypeScript class decorated with @Component(), where you can define its application logic. Accompanying this class is a template, typically an HTML file, that determines the component's visual representation, and optionally CSS files for styling. The component's role is multifaceted: it manages the data and state necessary for the view, handles user interactions, and can also be reusable throughout the application.

TypeScript
import { Component } from '@angular/core';

@Component({
  selector: 'app-my-component',
  templateUrl: './my-component.component.html',
  styleUrls: ['./my-component.component.css']
})
export class MyComponent {
  // Component logic goes here
}

Angular's Shadow DOM

Angular components encapsulate their markup and styles through view encapsulation: by default, Angular emulates Shadow DOM by rewriting component styles with unique attribute selectors, and it can use the browser's native Shadow DOM when ViewEncapsulation.ShadowDom is enabled. In either mode, styles defined in one component will not leak out and affect other parts of the application, because the encapsulation creates a boundary around the component. As a developer, it's essential to understand the structure and capabilities of Angular components to fully leverage the power of the framework. Recognizing this inherent encapsulation is particularly important when considering how components are displayed and styled within an application.

Display Block: The Non-Default in Angular Components

Angular components are different from standard HTML elements in many ways, one of which is their default display behavior. Unlike basic HTML elements, which come with user-agent default display values such as block or inline, Angular components declare no display rule of their own; because the browser does not recognize the custom host element (such as <app-my-component>), it renders it as an inline element by default. This behavior is intentional and plays an important role in Angular's encapsulation philosophy and component rendering process.

Comparison With HTML Elements

Standard HTML elements like <div>, <p>, and <h1> come with default styling that can include the CSS display: block property. This means that when you drop a <div> into your markup, it naturally takes up the full width available to it, creating a "block" on the page.

<!-- Standard HTML div element -->
<div>This div is a block-level element by default.</div>

In contrast, Angular components start without any explicit display property of their own. They don't inherently behave as block elements; they are essentially "display-agnostic" until specified.
Rationale Behind Non-Block Default

Angular's choice to diverge from the typical block behavior of HTML elements is deliberate. One reason for this is to encourage developers to consciously decide how each component should be displayed within the application's layout. It prevents unexpected layout shifts and the overwriting of global styles that may occur when components with block-level styles are introduced into existing content. By not having a display property set by default, Angular invites developers to think responsively and adapt their components to various screen sizes and layout requirements by setting explicit display styles that suit the component's purpose within the context of the application. In the following section, we will explore how to work with the display properties of Angular components, ensuring that they fit seamlessly into your application's design with explicit and intentional styling choices.

Working With Angular's Display Styling

When building applications with Angular, understanding and properly implementing display styling is crucial for achieving the desired layout and responsiveness. Since Angular components come without a preset display rule, it's up to the developer to define how each component should be displayed within the context of the application.

1. Explicitly Setting Display Styles

You have complete control over how the Angular component is displayed by explicitly setting the CSS display property. This can be defined inline, within the component's stylesheet, or even dynamically through component logic.

CSS
/* app-example.component.css */
:host {
  display: block;
}

HTML
<!-- Inline style -->
<app-example-component style="display: block;"></app-example-component>

TypeScript
// Component logic setting display dynamically
import { Component, HostBinding } from '@angular/core';

@Component({
  selector: 'app-example-component',
  template: '<ng-content></ng-content>'
})
export class ExampleComponent {
  @HostBinding('style.display') displayStyle: string = 'block';
}

Choosing to set your component's display style via the stylesheet ensures that you can leverage CSS's full power, including media queries for responsiveness.

2. Responsive Design Considerations

Angular's adaptability allows you to create responsive designs by combining explicit display styles with modern CSS techniques. Using media queries, flexbox, and CSS Grid, you can responsively adjust the layout of your components based on the viewport size.

CSS
/* app-example.component.css */
:host {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(150px, 1fr));
}

@media (max-width: 768px) {
  :host {
    display: block;
  }
}

By setting explicit display values in style sheets and using Angular's data-binding features, you can create a responsive and adaptive user interface. This level of control over styling reflects the thoughtful consideration that Angular brings to the development process, enabling you to create sophisticated, maintainable, and scalable applications. Next, we will wrap up our discussion and revisit the key takeaways from working with Angular components and their display styling strategies.

Conclusion

Throughout this exploration of Angular components and their display properties, it's become apparent that Angular's choice to use a non-block default for components is a purposeful design decision. This approach promotes a more thoughtful application of styles and supports encapsulation, a core principle within Angular's architecture. It steers developers toward crafting intentional and adaptive layouts, a necessity in the diverse landscape of devices and screen sizes.
By understanding Angular’s component architecture and the reasoning behind its display styling choices, developers are better equipped to make informed decisions. Explicit display settings and responsive design considerations are not afterthoughts but integral parts of the design and development process when working with Angular. Embracing these concepts allows developers to fully leverage the framework’s capabilities, leading to well-structured, maintainable, and responsive applications that stand the test of time and technology evolution. The information provided in this article aims to guide Angular developers to harness these tools effectively, ensuring that the user experiences they create are as robust as the components they comprise.
MongoDB is a powerful, open-source, document-oriented database management system known for its flexibility, scalability, and wide range of features. It's part of the NoSQL family of database systems, designed to handle large volumes of data and to provide high performance, high availability, and easy scalability. MongoDB stores data in JSON-like documents made up of key-value pairs, which lets a Java Spring Boot application work with the same JSON structure directly in code.

Spring Boot With MongoDB

We often use Spring Boot with SQL databases, but to leverage MongoDB with Spring Boot, Spring Data MongoDB offers lightweight repository-style data access and support for the MongoDB database, which reduces the complexity of the code. Assuming that you have a good understanding of MongoDB, we will take a quick look at building a Spring Boot application with Spring Data MongoDB.

Prerequisites

Java
An IDE of your choice: IntelliJ IDEA, Spring Tool Suite (STS), or Eclipse

6 Steps to Creating a Spring Boot Application With Spring REST and Spring Data MongoDB

In this article, I have used the MongoDB Atlas database, a multi-cloud developer database service that allows you to create and maintain databases in the cloud, free of cost. I also used MongoDB Compass, a GUI tool to visualize the database. If you don't have an account for MongoDB Atlas, you can try it for free.

Step 1: Create a Spring Boot Application With Spring Initializr

First, you'll want to create a Spring Boot application using Spring Initializr, which generates a Spring Boot project with selected dependencies. Once you have selected the fields as shown in the image below, click on Generate, and import the extracted project in your IDE. Project Structure:

Step 2: Configure the Database

To configure MongoDB in the Spring Boot application, we are going to add the database URL in the src/main/resources/application.properties file as shown below:

Properties files
spring.data.mongodb.uri = mongodb+srv://username:password@student.port.mongodb.net/student

Model: MongoDB is a non-relational, document-oriented database. We have created Student and Address Java models to store objects. The @Document annotation is used to provide a custom collection name, and @Field is used to provide a custom key name for the object. In the code below, we have created examples of variables with different data types, such as Date and List.
Student.java

Java
package com.example.studentmanagementsystem.model;

import com.fasterxml.jackson.annotation.JsonFormat;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;

import java.time.LocalDate;
import java.util.List;

@Document("Student")
public class Student {

    @Id
    @Indexed(unique = true)
    private String id;

    private String name;

    private double cgpa;

    @Field("has_arrears")
    private boolean hasArrears;

    @Field("course_list")
    private List<String> courseList;

    private Address address;

    @Field("enrollment_date")
    @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "dd-MM-yyyy")
    private LocalDate enrollmentDate;

    // Getters and setters, needed for JSON (de)serialization, go here.
}

Address.java

Java
package com.example.studentmanagementsystem.model;

import org.springframework.data.mongodb.core.mapping.Field;

// No annotation is needed here; Address is embedded inside the Student document.
public class Address {

    private String street;
    private String city;
    private String state;
    private String country;

    @Field("zip_code")
    private String zipcode;

    // Getters and setters go here.
}

Step 3: Create the Repository

We have created an interface, StudentRepository, which extends MongoRepository. MongoRepository is an interface provided by Spring Data that supplies pre-defined CRUD operations and automatic mapping. CRUD operations are the create, read, update, and delete interactions between services and persistent data, carried out in a structured way. The Spring @Repository annotation is used to indicate that the class provides the mechanism for storage, retrieval, search, update, and delete operations on objects and acts as the persistence layer. Let's create findBy methods to fetch data from the database as shown in the code below:

Java
package com.example.studentmanagementsystem.repository;

import com.example.studentmanagementsystem.model.Student;
import org.springframework.data.mongodb.repository.Aggregation;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.stereotype.Repository;

import java.time.LocalDate;
import java.util.List;

@Repository
public interface StudentRepository extends MongoRepository<Student, String> {

    List<Student> findByNameAndCgpa(String name, Double cgpa);

    Student findByAddress_City(String city);

    List<Student> findByAddress_CountryOrHasArrears(String country, Boolean hasArrears);

    List<Student> findByEnrollmentDateBetweenOrderByEnrollmentDate(LocalDate startDate, LocalDate endDate);

    List<Student> findByCgpaGreaterThanEqual(Double cgpa);

    Student findByNameIgnoreCase(String name);

    List<Student> findByCgpaOrderByNameDesc(Double cgpa);

    // Aggregation example for the overall average CGPA
    @Aggregation("{ $group : { _id : null, averageCgpa : { $avg : '$cgpa' } } }")
    Double avgCgpa();
}

Step 4: Create a Service

Let's build a service layer for the Student repository in order to communicate with the data in the MongoDB database. We will create a few methods that leverage CRUD operations, such as insert, retrieve, and delete.
Java
package com.example.studentmanagementsystem.service;

import com.example.studentmanagementsystem.model.Student;
import com.example.studentmanagementsystem.repository.StudentRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.time.LocalDate;
import java.util.List;
import java.util.Optional;

@Service
public class StudentService {

    @Autowired
    private StudentRepository studentRepo;

    public void addStudentData(Student studentDetails) {
        studentRepo.insert(studentDetails);
    }

    public void addMultipleStudentsData(List<Student> studentsDetail) {
        studentRepo.insert(studentsDetail);
    }

    public List<Student> fetchAllStudentsData() {
        return studentRepo.findAll();
    }

    public Optional<Student> fetchStudentDataById(String id) {
        return studentRepo.findById(id);
    }

    public List<Student> fetchStudentDataByNameAndCgpa(String name, Double cgpa) {
        return studentRepo.findByNameAndCgpa(name, cgpa);
    }

    public Student fetchStudentDataByCity(String city) {
        return studentRepo.findByAddress_City(city);
    }

    public List<Student> fetchStudentDataByCountryOrArrears(String country, Boolean hasArrears) {
        return studentRepo.findByAddress_CountryOrHasArrears(country, hasArrears);
    }

    public List<Student> fetchStudentDataByCgpa(Double cgpa) {
        return studentRepo.findByCgpaGreaterThanEqual(cgpa);
    }

    public List<Student> fetchStudentDataByEnrollmentDate(LocalDate startDate, LocalDate endDate) {
        return studentRepo.findByEnrollmentDateBetweenOrderByEnrollmentDate(startDate, endDate);
    }

    public List<Student> fetchStudentDataByCgpaOrderedByName(Double cgpa) {
        return studentRepo.findByCgpaOrderByNameDesc(cgpa);
    }

    public Double fetchAverageCgpa() {
        return studentRepo.avgCgpa();
    }

    public Student fetchStudentDataByName(String name) {
        return studentRepo.findByNameIgnoreCase(name);
    }

    public void deleteStudentData(Student studentDetails) {
        // Delete a single student record.
        studentRepo.delete(studentDetails);
    }

    public void deleteAllStudentData() {
        studentRepo.deleteAll();
    }
}

Step 5: Create a Controller

Next, build CRUD REST API calls for the Student resource to fetch, insert, or delete resources in the MongoDB database. The Spring @RestController annotation is used to create RESTful web services; it combines the @Controller and @ResponseBody annotations, making it easy to write handler methods.
Java
package com.example.studentmanagementsystem.controller;

import com.example.studentmanagementsystem.model.Student;
import com.example.studentmanagementsystem.service.StudentService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.format.annotation.DateTimeFormat;
import org.springframework.web.bind.annotation.*;

import java.time.LocalDate;
import java.util.List;
import java.util.Optional;

@RestController
@RequestMapping("/student")
public class StudentController {

    @Autowired
    private StudentService studentService;

    @PostMapping("/addStudent")
    public void populateStudentData(@RequestBody Student student) {
        studentService.addStudentData(student);
    }

    @PostMapping("/addStudentsData")
    public void populateStudentsData(@RequestBody List<Student> students) {
        studentService.addMultipleStudentsData(students);
    }

    @GetMapping("/getAllStudentsData")
    public List<Student> fetchAllStudentsData() {
        return studentService.fetchAllStudentsData();
    }

    @GetMapping("/getStudentById/{id}")
    public Optional<Student> fetchStudentDataById(@PathVariable String id) {
        return studentService.fetchStudentDataById(id);
    }

    @GetMapping("/getStudentByNameAndCgpa")
    public List<Student> fetchStudentDataByNameAndCgpa(@RequestParam String name, @RequestParam Double cgpa) {
        return studentService.fetchStudentDataByNameAndCgpa(name, cgpa);
    }

    @GetMapping("/getStudentByCity/{city}")
    public Student fetchStudentDataByCity(@PathVariable String city) {
        return studentService.fetchStudentDataByCity(city);
    }

    @GetMapping("/getStudentByCountryOrArrears")
    public List<Student> fetchStudentDataByCountryOrArrears(@RequestParam String country, @RequestParam Boolean hasArrears) {
        return studentService.fetchStudentDataByCountryOrArrears(country, hasArrears);
    }

    @GetMapping("/getStudentByEnrollmentDate")
    public List<Student> fetchStudentDataByEnrollmentDate(
            @RequestParam @DateTimeFormat(pattern = "dd-MM-yyyy") LocalDate startDate,
            @RequestParam @DateTimeFormat(pattern = "dd-MM-yyyy") LocalDate endDate) {
        // @DateTimeFormat (rather than Jackson's @JsonFormat) is what binds
        // query parameters to LocalDate values.
        return studentService.fetchStudentDataByEnrollmentDate(startDate, endDate);
    }

    @GetMapping("/getStudentByName")
    public Student fetchStudentDataByName(@RequestParam String name) {
        return studentService.fetchStudentDataByName(name);
    }

    @GetMapping("/getStudentByCgpa")
    public List<Student> fetchStudentDataByCgpa(@RequestParam Double cgpa) {
        return studentService.fetchStudentDataByCgpa(cgpa);
    }

    @GetMapping("/getAvgCgpa")
    public Double fetchStudentAvgCgpa() {
        return studentService.fetchAverageCgpa();
    }

    @DeleteMapping("/deleteStudent")
    public void deleteStudentData(@RequestBody Student student) {
        studentService.deleteStudentData(student);
    }

    @DeleteMapping("/deleteAllStudents")
    public void deleteAllStudentsData() {
        studentService.deleteAllStudentData();
    }
}

Step 6: Testing

Now, let's test one of the API calls in Postman to fetch data from the database as shown in the image below. The HTTP method below returns all the student information as an array of JSON objects (a sample payload is shown after the summary).

Method: GET
Request URL: http://localhost:8080/student/getAllStudentsData

We have built a Spring Boot application leveraging the MongoDB database, and we have created CRUD operations such as creating, deleting, and fetching the data from the database, including different ways to fetch data. Spring Data MongoDB lets us use built-in methods for CRUD operations, which reduces the code complexity in the persistence layer.
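For reference, a response from this endpoint would look something like the following. This is a hypothetical record, not output from a real run; note that the REST payload uses the Java field names, while the @Field mappings (has_arrears, course_list, zip_code, enrollment_date) apply only to how the documents are stored in MongoDB. The enrollment date is rendered as a string because of the @JsonFormat pattern on the model.

JSON
[
  {
    "id": "661f2a9e8b3e2a6f1c9d4e01",
    "name": "Jane Doe",
    "cgpa": 8.7,
    "hasArrears": false,
    "courseList": ["Mathematics", "Computer Science"],
    "address": {
      "street": "221B Baker Street",
      "city": "London",
      "state": "Greater London",
      "country": "UK",
      "zipcode": "NW1 6XE"
    },
    "enrollmentDate": "15-08-2023"
  }
]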