Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
Since its public release in November 2022, ChatGPT has fascinated millions of users, amplifying their creative power while also focusing tech enthusiasts' attention on its possible drawbacks, or even weaknesses. ChatGPT and similar chatbots are a special type of software called large language models (LLMs), which have dramatically transformed the natural language processing (NLP) field by enabling newer and less common tasks like question answering, text generation, summarization, and more. All these terms sound complicated, and while elucidating the LLM quantum leap is well beyond this post's ambitions, we'll look here at how they work and, specifically, how they can be used in Java, highlighting their compelling possibilities as well as some potential problems. Let's go!

A Brief History of LLMs

NLP refers to building machines that are able to recognize, understand, and generate text in human languages. It might sound like a new technology to many of us, but it's actually as old as computers themselves. As a matter of fact, automatically translating one human language into another was the dream of programmers at the very beginning of the information age.

In 1950, Alan Turing published a paper stating that a machine could be considered "intelligent" if it could produce responses indistinguishable from those of a human. This method, called the Turing test, is now considered an incomplete measure of machine "intelligence," since it is easily passed by modern programs that are created to mimic human speech.

The first NLP programs adopted a simple approach: using sets of rules and heuristics to imitate conversations. In 1966, Joseph Weizenbaum, a professor at MIT, released the first chatbot in history, Eliza. Based on common language pattern matching, this program created the illusion of a conversation by asking open-ended questions and giving generic responses, like "Please go on," to sentences that it didn't "understand." Over the next several decades, rule-based text parsing and pattern matching remained the most common NLP approach.

By the 1990s, an important paradigm shift had taken place in NLP: rule-based methods were replaced with statistical ones. As opposed to the old models, which tried to define and construct grammar, the new ones were designed to "learn" language patterns through "training." Thousands of documents were now being used to feed data to NLP programs in order to "teach" them a given language model. People started to "train" programs for text generation, classification, and other natural language tasks. In the beginning, this process was based on input sequences that the model split into tokens, typically words or partial words, which were then converted into the mathematical representation given by the training algorithm. Finally, this representation was converted back into tokens to produce a readable result. This back-and-forth tokenization process is called encoding-decoding.

In 2014, NLP researchers found an alternative to this traditional approach of passing sequences through the encoder-decoder model piece by piece. The new approach, called attention, consisted of having the decoder search the full input sequence and try to find the pieces that were the most relevant from the language model's point of view. A few years later, a paper titled "Attention Is All You Need" was published by Google.
It showed that models based on this new principle of attention were much faster and more parallelizable. They were called transformers.

Transformers marked the beginning of LLMs because they made it possible to train models on much larger data sets. In 2018, OpenAI introduced the first LLM, called the Generative Pre-trained Transformer (GPT). This transformer-based LLM was trained using a massive amount of unlabeled data and could then be fine-tuned for specific tasks, such as machine translation, text classification, sentiment analysis, and others. Another LLM introduced the same year, BERT (Bidirectional Encoder Representations from Transformers) from Google, used an even larger amount of training data, consisting of billions of words and more than 100 million parameters.

Unlike previous NLP programs, these LLMs aren't intended to be task-specific. Instead, they are trained simply to predict the token that best fits the model's particular context. They are applied to different fields and are becoming an integral part of our everyday lives. Conversational agents, like Siri from Apple, Alexa from Amazon, or Google Home, are able to listen for queries, turn sounds into text, and answer questions. Their general purpose and versatility enable a broad range of natural language tasks, including but not limited to:

Language modeling
Question answering
Coding
Content generation
Logical reasoning
Etc.

Conversational LLMs

The strength of LLMs lies in their capacity to generate text, in a highly flexible way, for a wide range of cases, which makes them perfect for talking to humans. Chatbots are LLMs specifically designed for conversational use. ChatGPT is the most well-known, but there are many others, like:

Bard from Google
Bing AI from Microsoft
LLaMA from Meta
Claude from Anthropic
Copilot from GitHub
Etc.

Embedded in enterprise-grade applications, conversational LLMs are ideal solutions in fields like customer service, education, healthcare, web content generation, chemistry, biology, and many others. Chatbots and virtual assistants can be powered by being given access to conversational LLM capabilities. This kind of integration of LLMs into classical applications requires them to expose a consistent API. And in order to call these APIs from applications, a toolkit is required that is able to interact with the AI model and facilitate custom creation.

LLM Toolkits

There have been many rapid developments in AI since ChatGPT hit the scene, and among all these new tools, LLM toolkits have seen a veritable explosion. Some of the best known, like AutoGPT, MetaGPT, AgentGPT, and others, have attempted to jump on the bandwagon and strike while the iron was hot. But there is no doubt that the one that has emerged as the most modern and, at the same time, the most discussed is LangChain. Available in Python, JavaScript, and TypeScript, LangChain was launched in 2022 as an open-source library, originally developed by Harrison Chase, and shortly after its inception it turned out to be one of the fastest-growing projects in the AI space. Despite its growing popularity, however, LangChain had a major drawback: the lack of Java support. To address this drawback, LangChain4j emerged in early 2023 as the Java implementation of the LangChain Python library. In the demonstration below, we will use LangChain4j to implement enterprise-grade Java services and components powered by the most dominant and influential LLMs.
The Project

In order to illustrate our discourse, we'll be using a simple Java program that performs a natural language task. The use case we chose for this purpose is an AI service able to compose a haiku. For those who don't know what a haiku is, here is the Britannica definition: "Unrhymed poetic form consisting of 17 syllables arranged in three lines of 5, 7, and 5 syllables respectively." As you can see, the usefulness of such a task isn't really striking and, as a matter of fact, more than a veritable use case, it's a pretext to demonstrate some LangChain4j features while using a ludic and hopefully original form. So, our project is a Maven multi-module project with the following structure:

A master POM named llm-java
A JAX-RS module, named haiku, exposing a REST API which invokes the LLM model
An infrastructure module, named infra, which creates the required Docker containers

The Master POM

Our project is a Quarkus project, hence the use of the following Bill of Materials (BOM):

XML

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-bom</artifactId>
      <version>${quarkus.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

It uses Quarkus 3.8.3, Java 17, and LangChain4j 0.25.0.

The JAX-RS Module

This module, named haiku, uses the quarkus-resteasy-reactive-jackson Quarkus extension in order to expose a REST API:

Java

@Path("/haiku")
public class HaikuResource {

    private final HaikuService haikuService;

    public HaikuResource(HaikuService haikuService) {
        this.haikuService = haikuService;
    }

    @GET
    public String makeHaiku(@DefaultValue("samurai") @RestQuery String subject) {
        return haikuService.writeHaiku(subject);
    }
}

As you can see, this API defines an endpoint listening for GET HTTP requests, accepting the haiku subject as a query parameter whose default value is "samurai." The module also uses the quarkus-container-image-jib Quarkus extension to create a Docker image that runs the AI service. The attributes of this Docker image are defined in the application.properties file, as shown below:

Properties files

...
quarkus.container-image.build=true
quarkus.container-image.group=quarkus-llm
quarkus.container-image.name=haiku
quarkus.jib.jvm-entrypoint=/opt/jboss/container/java/run/run-java.sh
...

These attributes state that the newly created Docker image name will be quarkus-llm/haiku and its entrypoint will be the run-java.sh shell script located in the container's /opt/jboss/container/java/run directory. This project uses the Quarkus extension quarkus-langchain4j-ollama, which provides integration with the LangChain4j library and the Ollama tool. Ollama is a streamlined utility that allows users to set up and run large LLMs, like Llama 2, Mistral, and others, locally. Here, we're running llama2 locally. This is configured, again in application.properties, using the following statement:

Properties files

quarkus.langchain4j.ollama.chat-model.model-id=llama2:latest

This declaration states that the LLM used here to serve AI requests will be the latest version of llama2. Let's now have a look at our AI service itself:

Java

@RegisterAiService
public interface HaikuService {

    @SystemMessage("You are a professional haiku poet")
    @UserMessage("Write a haiku about {subject}.")
    String writeHaiku(String subject);
}

And that's it!
As you can see, our AI service is an interface annotated with the @RegisterAiService annotation. The annotation processor provided by the Quarkus extension will generate the class that implements this interface. In order to be able to serve requests, any conversational LLM needs a defined context or scope. In our case, this scope is that of an artist specialized in composing haikus. This is the role of the @SystemMessage annotation: to set up the current scope. Last but not least, the @UserMessage annotation allows us to define the specific text that will serve as a prompt to the AI service. Here, we're requesting our AI service to compose a haiku on a topic defined by the input parameter subject of type String.

The Infrastructure Module

Having examined the implementation of our AI service, let's see how we can set up the required infrastructure. The infrastructure module, named infra, is a Maven sub-project using the docker-compose utility to start the following Docker containers:

A Docker container named ollama, running an image tagged nicolasduminil/ollama:llama2. This image is simply the official Ollama Docker image, augmented to include the llama2 LLM. As explained earlier, Ollama is able to run several LLMs locally, and in order to make these LLMs available, we need to pull them from their registries. This is why, when running the official Ollama Docker container, one typically needs to pull the chosen LLM. In order to avoid this repetitive operation, I have extended the official Docker image to already include the llama2 LLM.
A Docker container named haiku, running the image tagged quarkus-llm/haiku, which is precisely our AI service.

Here is the associated docker-compose.yaml file required to create the infrastructure described above:

YAML

version: "3.7"

services:
  ollama:
    image: nicolasduminil/ollama:llama2
    hostname: ollama
    container_name: ollama
    ports:
      - "11434:11434"
    expose:
      - 11434

  haiku:
    image: quarkus-llm/haiku:1.0-SNAPSHOT
    depends_on:
      - ollama
    hostname: haiku
    container_name: haiku
    links:
      - ollama:ollama
    ports:
      - "8080:8080"
    environment:
      JAVA_DEBUG: "true"
      JAVA_APP_DIR: /home/jboss
      JAVA_APP_JAR: quarkus-run.jar

As you can see, the ollama service runs on a node with the DNS name ollama and listens on TCP port 11434. Our AI service, hence, needs to be configured appropriately to connect to the same node/port. Again, its application.properties file is used for this purpose, as shown below:

Properties files

quarkus.langchain4j.ollama.base-url=http://ollama:11434

This declaration means that our AI service will send its requests to the URL http://ollama:11434, where ollama is resolved by the DNS service into the IP address allocated to the Docker container of the same name.

Running and Testing

In order to run and test this sample project, you may proceed as follows.

Clone the repository:

Shell

$ git clone https://github.com/nicolasduminil/llm-java.git

cd to the project:

Shell

$ cd llm-java

Build the project:

Shell

$ mvn clean install

Check that all the required containers are running:

Shell

$ docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS         PORTS                                                 NAMES
19006601c908   quarkus-llm/haiku:1.0-SNAPSHOT   "/opt/jboss/containe…"   5 seconds ago   Up 4 seconds   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp, 8443/tcp   haiku
602e6bb06aa9   nicolasduminil/ollama:llama2     "/bin/ollama serve"      5 seconds ago   Up 4 seconds   0.0.0.0:11434->11434/tcp, :::11434->11434/tcp         ollama

Run the OpenAPI interface to test the service.
Point your preferred browser at http://localhost:8080/q/swagger-ui. In the displayed Swagger dialog labeled Haiku API, click on the GET button and use the Try it function to perform tests. In the text field labeled Subject, type the name of the topic on which you want our AI service to compose a haiku, or keep the default one, which is samurai. The figure below shows the test result.

You can also test the project by sending a GET request to our AI service using the curl utility, as shown below:

Shell

$ curl http://localhost:8080/haiku?subject=quarkus
Quarkus, tiny gem
In the cosmic sea of space
Glints like a star

Wrapping Up

In the demonstration above, we explored the history of LLMs and used LangChain4j to implement enterprise-grade Java services and components powered by the most dominant and influential LLMs. We hope you enjoyed this article on LLMs and how to implement them in the Java space!
Effective exception management is pivotal for maintaining the integrity and stability of software applications. Java's lambda expressions offer a concise means of expressing anonymous functions, yet handling exceptions within these constructs presents unique challenges. In this article, we'll delve into the nuances of managing exceptions within Java lambda expressions, exploring potential hurdles and providing practical strategies to overcome them.

Understanding Lambda Expressions in Java

Java 8 introduced lambda expressions, revolutionizing the way we encapsulate functionality as method arguments or create anonymous classes. Lambda expressions comprise parameters, an arrow (->), and a body, facilitating a more succinct representation of code blocks. Typically, lambda expressions are utilized with functional interfaces, which define a single abstract method (SAM).

Java

// Syntax of a lambda expression
(parameter_list) -> { lambda_body }

Exception Handling in Lambda Expressions

Lambda expressions are commonly associated with functional interfaces, most of which do not declare checked exceptions in their abstract methods. Consequently, dealing with operations that might throw checked exceptions within lambda bodies presents a conundrum. Consider the following example:

Java

interface MyFunction {
    void operate(int num);
}

public class Main {
    public static void main(String[] args) {
        MyFunction func = (num) -> {
            System.out.println(10 / num);
        };
        func.operate(0); // Division by zero
    }
}

In this scenario, dividing by zero triggers an ArithmeticException at runtime. Since ArithmeticException is unchecked, the compiler accepts this code; the real difficulty arises with checked exceptions. Because the operate method in the MyFunction interface doesn't declare any checked exceptions, the compiler rejects any lambda assigned to it that lets a checked exception escape.

Workarounds for Exception Handling in Lambda Expressions

Leveraging Functional Interfaces With Checked Exceptions

One workaround involves defining functional interfaces that explicitly declare checked exceptions in their abstract methods.

Java

@FunctionalInterface
interface MyFunctionWithException {
    void operate(int num) throws Exception;
}

public class Main {
    public static void main(String[] args) {
        MyFunctionWithException func = (num) -> {
            if (num == 0) {
                throw new Exception("Division by zero");
            }
            System.out.println(10 / num);
        };
        try {
            func.operate(0);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Here, the MyFunctionWithException functional interface indicates that the operate method may throw an Exception, enabling external handling of the exception.

Utilizing Try-Catch Within the Lambda Body

Another approach involves enclosing the lambda body within a try-catch block to manage exceptions internally.

Java

interface MyFunction {
    void operate(int num);
}

public class Main {
    public static void main(String[] args) {
        MyFunction func = (num) -> {
            try {
                System.out.println(10 / num);
            } catch (ArithmeticException e) {
                System.out.println("Cannot divide by zero");
            }
        };
        func.operate(0);
    }
}

This method maintains the brevity of the lambda expression while encapsulating exception-handling logic within the lambda body itself.

Employing Optional for Exception Handling

Java 8 introduced the Optional class, providing a mechanism to wrap potentially absent values. This feature can be harnessed for exception handling within lambda expressions.
Java

import java.util.Optional;

interface MyFunction {
    void operate(int num);
}

public class Main {
    public static void main(String[] args) {
        MyFunction func = (num) -> {
            Optional<Integer> result = divideSafely(10, num);
            result.ifPresentOrElse(
                System.out::println,
                () -> System.out.println("Cannot divide by zero")
            );
        };
        func.operate(0);
    }

    private static Optional<Integer> divideSafely(int dividend, int divisor) {
        try {
            return Optional.of(dividend / divisor);
        } catch (ArithmeticException e) {
            return Optional.empty();
        }
    }
}

In this example, the divideSafely() helper method encapsulates the division operation within a try-catch block. If successful, it returns an Optional containing the result; otherwise, it returns an empty Optional. The ifPresentOrElse() method within the lambda expression facilitates handling both successful and exceptional scenarios.

Incorporating multiple Optional instances within exception-handling scenarios can further enhance the robustness of Java lambda expressions. Let's consider an example where we have two values that we need to divide, and both operations are wrapped within Optional instances for error handling:

Java

import java.util.Optional;

interface MyFunction {
    void operate(int num1, int num2);
}

public class Main {
    public static void main(String[] args) {
        MyFunction func = (num1, num2) -> {
            Optional<Integer> result1 = divideSafely(10, num1);
            Optional<Integer> result2 = divideSafely(20, num2);

            result1.ifPresentOrElse(
                res1 -> result2.ifPresentOrElse(
                    res2 -> System.out.println("Result of division: " + (res1 / res2)),
                    () -> System.out.println("Cannot divide second number by zero")
                ),
                () -> System.out.println("Cannot divide first number by zero")
            );
        };
        func.operate(0, 5);
    }

    private static Optional<Integer> divideSafely(int dividend, int divisor) {
        try {
            return Optional.of(dividend / divisor);
        } catch (ArithmeticException e) {
            return Optional.empty();
        }
    }
}

In this example, the operate method of the MyFunction interface takes two integer parameters, num1 and num2. Inside the lambda expression assigned to func, we have two division operations, each wrapped within its respective Optional instance: result1 and result2. We use nested ifPresentOrElse calls to handle both present (successful) and absent (exceptional) cases for each division operation. If both results are present, we perform the division and print the result. If either of the results is absent (due to division by zero), an appropriate error message is printed. This demonstrates how multiple Optional instances can be effectively utilized within Java lambda expressions to handle exceptions and ensure the reliability of operations involving multiple values.

Chained Operations With Exception Handling

Suppose we have a chain of operations where each operation depends on the result of the previous one. We want to handle exceptions gracefully within each step of the chain.
Here's how we can achieve this:

Java

import java.util.Optional;

public class Main {

    public static void main(String[] args) {
        // Chain of operations: divide by 2, then add 10, then divide by 5
        process(20, num -> divideSafely(num, 2))
            .flatMap(result -> process(result, res -> addSafely(res, 10)))
            .flatMap(result -> process(result, res -> divideSafely(res, 5)))
            .ifPresentOrElse(
                System.out::println,
                () -> System.out.println("Error occurred in processing")
            );
    }

    private static Optional<Integer> divideSafely(int dividend, int divisor) {
        try {
            return Optional.of(dividend / divisor);
        } catch (ArithmeticException e) {
            return Optional.empty();
        }
    }

    private static Optional<Integer> addSafely(int num1, int num2) {
        // Simulating a possible checked exception scenario
        if (num1 == 0) {
            return Optional.empty();
        }
        return Optional.of(num1 + num2);
    }

    // Runs one step of the chain, translating any exception into an empty Optional.
    private static Optional<Integer> process(int value, MyFunction function) {
        try {
            return function.operate(value);
        } catch (Exception e) {
            return Optional.empty();
        }
    }

    interface MyFunction {
        Optional<Integer> operate(int num) throws Exception;
    }
}

In this illustration, the process function accepts an integer and a lambda (of type MyFunction). It executes the operation specified by the lambda and returns its result wrapped in an Optional, so each step's output feeds the next. We link numerous process calls together, each relying on the outcome of the preceding one. The flatMap function is employed to manage potential empty Optional values and prevent the nesting of Optional instances. If any step within the sequence encounters an error, the error message is displayed.

Asynchronous Exception Handling

Imagine a scenario where we need to perform operations asynchronously within lambda expressions and handle any exceptions that occur during execution:

Java

import java.util.concurrent.CompletableFuture;

public class Main {

    public static void main(String[] args) {
        CompletableFuture.supplyAsync(() -> divideAsync(10, 2))
            .thenApplyAsync(result -> addAsync(result, 5))
            .thenApplyAsync(result -> divideAsync(result, 0))
            .exceptionally(ex -> {
                System.out.println("Error occurred: " + ex.getMessage());
                return null; // Handle exception gracefully
            })
            .thenAccept(System.out::println); // Print final result
    }

    private static int divideAsync(int dividend, int divisor) {
        return dividend / divisor;
    }

    private static int addAsync(int num1, int num2) {
        return num1 + num2;
    }
}

In this example, we use CompletableFuture to perform asynchronous operations. Each step in the chain (supplyAsync, thenApplyAsync) represents an asynchronous task, and we chain them together. The exceptionally method allows us to handle any exceptions that occur during the execution of the asynchronous tasks. If an exception occurs, the error message is printed, and the subsequent steps in the chain are skipped. Finally, the result of the entire operation is printed.

Conclusion

Navigating exception handling in the context of Java lambdas requires innovative approaches to preserve the succinctness and clarity of lambda expressions. Strategies such as exception wrapping, custom interfaces, the "try" pattern, and external libraries offer flexible solutions. Whether it's through leveraging functional interfaces with checked exceptions, encapsulating exception handling within try-catch blocks inside lambda bodies, or utilizing constructs like Optional, mastering exception handling in lambda expressions is essential for building resilient Java applications.
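Of the strategies just listed, exception wrapping is the only one not yet shown, so here is a minimal sketch of it, using nothing beyond the JDK. The ThrowingFunction interface and unchecked helper are illustrative names, not standard APIs: they adapt a lambda that throws a checked exception into a plain java.util.function.Function by rethrowing the exception as unchecked.

Java

import java.net.URI;
import java.util.List;
import java.util.function.Function;

public class WrapperDemo {

    // A functional interface whose single method may throw a checked exception.
    @FunctionalInterface
    interface ThrowingFunction<T, R> {
        R apply(T t) throws Exception;
    }

    // Adapts a throwing lambda to java.util.function.Function by
    // rethrowing any checked exception as an unchecked RuntimeException.
    static <T, R> Function<T, R> unchecked(ThrowingFunction<T, R> f) {
        return t -> {
            try {
                return f.apply(t);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };
    }

    public static void main(String[] args) {
        // URI's constructor declares the checked URISyntaxException, so
        // URI::new cannot be passed to Stream.map directly; the wrapped
        // version can.
        Function<String, URI> toUri = unchecked(URI::new);
        List.of("https://dzone.com", "https://openjdk.org").stream()
            .map(toUri)
            .forEach(System.out::println);
    }
}

The trade-off is that callers now see a RuntimeException rather than the original checked type, so this pattern suits pipelines where a failure should simply abort processing.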
Essentially, while lambda expressions streamline code expression, implementing effective exception-handling techniques is crucial to fortify the resilience and dependability of Java applications against unforeseen errors. With the approaches discussed in this article, developers can confidently navigate exception management within lambda expressions, thereby strengthening the overall integrity of their codebases.
NCache Java Edition with its distributed caching technique is a powerful tool that helps Java applications run faster, handle more users, and be more reliable. In today's world, where people expect apps to work quickly and without any problems, knowing how to use NCache Java Edition is very important. It's a key piece of technology for both developers and businesses who want to make sure their apps can give users fast access to data and a smooth experience. This makes NCache Java Edition an important part of making great apps. This article is made especially for beginners, to make the ideas and steps of adding NCache to your Java applications clear and easy to understand. It doesn't matter if you've been developing for years or if you're new to caching; this article will help you get a good start with NCache Java Edition. Let's start with a step-by-step process to set up a development workstation for NCache with the Java setup.

NCache Server Installation: Java Edition

NCache has different deployment options:

On-premises
Cloud
Using Docker/Kubernetes

You can check all the deployment options and the packages available for deployment here. NCache recommends at least SO-16 (16 GB RAM, 8 vCPUs) to get optimum performance in a production environment; for a higher transaction load, go with SO-32, SO-64, or SO-128.

NCache Server Deployment With a Docker Image

NCache provides different images (alachisoft/ncache - Docker Image | Docker Hub) of the Java edition for the Windows and Linux platforms. Let's see how to deploy the NCache server using the latest Linux Docker image. Use the Docker command below to pull the latest image:

Shell

docker pull alachisoft/ncache:latest-java

Now that we have successfully pulled the Docker image, run it using the Docker commands below. For a development workstation:

Shell

docker run --name ncache -itd -p 8251:8251 -p 9800:9800 -p 8300:8300 -p 8301:8301 alachisoft/ncache:latest-java

Use the actual host configuration for a production NCache server:

Shell

docker run --name ncache -itd --network host alachisoft/ncache:latest-java

The above command will run the NCache server and listen on port 8251. Now, launch the NCache Management Center using the browser (localhost:8251). You will get a modal popup to register your license key, as shown below. Click on Start Free Trial to activate the free trial with the license key, using the form below. You can register your license key using this registration page form, or use the Docker command below:

Shell

docker exec -it ncache /opt/ncache/bin/tools/register-ncacheevaluation -firstname [registered first name] -lastname [registered last name] -company [registered company name] -email [registered e-mail id] -key [key]

Now, open the NCache Management Center from the browser at http://localhost:8251/.

NCache Cache Cluster

Let's install one more image in a different instance, with the proper network configuration. Use the document below for the network configuration with NCache Docker image deployment: Create NCache Containers for Windows Server. I deployed one image in the 10.0.0.4 instance and another in the 10.0.0.5 instance. I then hopped into the 10.0.0.4 NCache Management Center and removed the default cluster cache created during installation. Let's create a new clustered cache using the NCache Management Center Wizard.
Click on New from the Clustered Cache page as shown in the figure below. It's a 7-step process to create a clustered cache with the NCache Management Center interface, which we will go through one by one.

Step 1: In-Memory Store

In this step, you can define the in-memory store type, the name of the clustered cache, and the serialization type. In my case, I named the clustered cache demoCache and chose JSON serialization.

Step 2: Caching Topology

Define the caching topology on this screen; in my case, I just went with the default options.

Step 3: Cache Partitions and Size

On this screen, we can define the cache partition size. In my case, I just went with the default value. With this option, the wizard skips step 4. I also added two server nodes: 10.0.0.4 and 10.0.0.5.

Step 5: Cluster TCP Parameters

Define the Cluster Port, Port Range, and Batch Interval values. In my case, I went with the default values.

Step 6: Encryption and Compression Settings

You can enable the encryption and compression settings in this step. I just went with the default values.

Step 7: Advanced Options

You can enable eviction and also check other advanced options. In my case, I checked the option to start the cache on finish. Finally, click on Finish. Once the process is complete, it will create and start the clustered cache with two nodes (10.0.0.4 and 10.0.0.5). Now the cluster is formed.

Start the Cache

You can use the start option from the NCache Management Center to start the clustered cache, as shown in the figure below. You can also use the command below to start the cache:

PowerShell

start-cache -name demoCache

Run a Stress Test

Click Test-Stress and select the duration to run the stress test. This is one of my favorite features in the NCache Management Center: you can initiate a stress test with ease, just by a button click. You can also use the command line. For example, to initiate a stress test for the demoCache cluster with default settings:

PowerShell

test-stress -cachename demoCache

Click on Monitor to check the metrics. You can monitor the number of requests processed by each node. Click on Statistics to get the complete statistics of the clustered caches.

SNMP Counters to Monitor NCache

Simple Network Management Protocol (SNMP) is a key system used for keeping an eye on and managing different network devices and their activities. It's a part of the Internet Protocol Suite and helps in sharing important information about the network's health and operations between devices like routers, switches, servers, and printers. This allows network managers to change settings, track how well the network is doing, and get alerts on any issues. SNMP is widely used and important for keeping networks running smoothly and safely; it's a vital part of managing and fixing networks. NCache has made SNMP monitoring easier by now allowing the publication of counters through a single port; before, a separate port was needed for each cache. Make sure the NCache service and the cache(s) to monitor are up and running.

Configure the NCache Service

The Alachisoft.NCache.Service.dll.config file, located in the %NCHOME%\bin\service folder, provides the ability to activate or deactivate the monitoring of cache counters via SNMP by modifying particular options. These options are marked by specific tags.
Update the values for the tags below:

XML

<add key="NCacheServer.EnableSnmpMonitoring" value="true"/>
<add key="NCacheServer.SnmpListenersInfoPort" value="8256"/>
<add key="NCacheServer.EnableMetricsPublishing" value="true"/>

Set the NCacheServer.EnableSnmpMonitoring tag to true to turn on the SNMP monitoring of NCache cache counters; initially, this tag is off (false). Set the NCacheServer.SnmpListenersInfoPort tag to the port SNMP should listen on; the default port is 8256, but you can adjust it according to your needs. Set the NCacheServer.EnableMetricsPublishing tag to true if you want metrics to be published to the NCache service. Remember to restart the NCache service once you've made the necessary adjustments to the service configuration files.

SNMP Monitoring

NCache ships a single MIB file, called alachisoft.mib, that keeps track of the various counters which can be checked using SNMP. This file tells you about the ports used for different types of caches and client activities. You can find it at %NCHOME%\bin\resources. To look at these counters, you can use a program such as the free MIB Browser tool to go through the MIB file. Use port 8256 to connect to NCache, and open the SNMP Table from the View menu to check all the attributes of NCache, as shown in the figure below. To check specific attribute details in the SNMP Table, first pick the attributes you want to see. As a sample, I selected cacheName, cacheSize, cacheCount, fetchesPerSec, requestsPerSec, and additionPerSec. Then, click on View from the menu at the top before you choose the SNMP Table. You'll then see the values of the counters in the table, as shown in the figure below.

Summary

This article provides a beginner-friendly guide on how to get started with NCache Java Edition, covering essential steps such as installing the NCache server, deploying it using a Docker image, starting the cache, conducting a stress test to evaluate its performance, and monitoring its operation through SNMP counters. It helps you get started with enhancing your Java application's speed and reliability by implementing distributed caching with NCache.
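To round off the setup, here is a minimal sketch of what using the new cluster from application code might look like. It assumes the NCache Java client library is on the classpath and that the client is configured (for example, via client.ncconf) to reach the demoCache nodes; the calls shown reflect the com.alachisoft.ncache.client package of NCache 5.x and should be verified against the API reference for your installed version.

Java

import com.alachisoft.ncache.client.Cache;
import com.alachisoft.ncache.client.CacheManager;

public class DemoCacheClient {
    public static void main(String[] args) throws Exception {
        // Connect to the clustered cache created above (the name is the one
        // chosen in the wizard; client.ncconf supplies the server addresses).
        Cache cache = CacheManager.getCache("demoCache");

        // Upsert an item and read it back.
        cache.insert("greeting", "Hello from NCache!");
        String value = cache.get("greeting", String.class);
        System.out.println(value);

        cache.close();
    }
}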
Java Virtual Machine (JVM) is a crucial foundation for executing, ensuring reliability, and scaling Java applications across various platforms. Developers with an in-depth understanding of the JVM can advance their career opportunities and deliver high-quality software. This article explores the pivotal aspects of the JVM and emphasizes why a profound sense of its functionalities is vital for developers aiming to excel in their careers.

1. JVM Internals, Bytecode, and Classloading

At the core of Java's write-once, run-anywhere promise lies the JVM, a marvel of engineering that allows Java applications to transcend platform boundaries. By delving into JVM internals, developers gain valuable insights into bytecode (the intermediate representation of Java code) and the classloading mechanism that dynamically loads classes at runtime. This knowledge is instrumental in optimizing applications, ensuring platform compatibility, and improving the debugging process.

2. Efficient Memory Management in the JVM

One of the JVM's key features is its automated memory management, primarily through garbage collection (GC). Understanding the intricacies of memory management in the JVM enables developers to write applications that efficiently utilize system resources, leading to enhanced performance and stability. This aspect of JVM knowledge is crucial for developing high-load applications that maintain optimal performance under varying conditions.

3. Multithreading and Synchronization in Java

Java's powerful multithreading capabilities guide the development of high-performance, responsive applications. Mastery of threading and synchronization within the JVM allows developers to build applications capable of executing multiple tasks concurrently and safely, an indispensable skill for modern, scalable software development.

4. Security and Classloader Mechanisms

The JVM's security architecture and classloader mechanisms provide a robust framework for developing secure applications. Understanding these features enables developers to protect their applications from common security threats, an essential competency in today's security-conscious environment.

5. Practical Performance Tuning Strategies

Tuning JVM performance involves a nuanced understanding of JVM settings and how they affect application behavior. Developers skilled in performance tuning can significantly improve application responsiveness and efficiency, making this knowledge highly sought after for teams focused on delivering seamless user experiences.

6. Insights Into Alternative JVMs

Exploring alternative JVMs can offer performance improvements and features not available in the standard JVM. Knowledge of these alternatives allows developers to choose the most appropriate JVM for their application's needs, providing a competitive edge in application development and deployment.

7. Elevate Java Application Development

A comprehensive understanding of the JVM equips developers with the tools to design and implement sophisticated, efficient, and scalable Java applications. This expertise enhances the quality of software produced and positions developers as valuable assets capable of leading complex development projects.

8. Garbage Collector: The Unsung Hero of the JVM

Garbage collection is a pivotal feature of the JVM. It automatically manages application memory and frees up resources that are no longer in use.
Profound knowledge of garbage collection mechanisms and how to optimize them for specific applications can drastically reduce memory leaks and application pauses, ensuring smooth and efficient application performance. Understanding the various garbage collectors available within the JVM and how to configure them is crucial for developing high-performance applications.

Enhancing JVM Mastery With "Mastering the Java Virtual Machine: An In-Depth Guide to JVM Internals and Performance Optimization"

"Mastering the Java Virtual Machine" is an invaluable resource for developers who want to deepen their understanding of the JVM. This comprehensive guide delves into the heart of Java programming, focusing on the JVM's intricate workings and equipping readers with the skills necessary to excel in Java development.

What You Will Learn From the Book

Grasp JVM architecture and bytecode execution: Begin your journey by exploring the JVM's architecture, understanding how it processes Java code into execution, and grasping the role of bytecode in this ecosystem.
Dive into memory management: This book thoroughly examines JVM memory management, including heap and stack management, garbage collection, and memory profiling. It teaches you to optimize memory usage, a critical skill for developing high-performance Java applications.
Alternative JVMs and performance optimization: Learn to compare and evaluate various JVMs, such as GraalVM, and understand their unique advantages for specific applications. This knowledge helps you select the right JVM for your projects and enhance application performance.
Dynamic behavior with reflection: Master reflection to introduce dynamic behavior in Java applications. This technique allows for more flexible and adaptable code.
Code generation with Java annotation processors: Learn to utilize Java annotation processors to generate code and streamline the development process efficiently.
Embrace reactive programming: This book introduces reactive programming principles and guides you through developing scalable applications that can handle large volumes of data or users.

Table of Contents Highlights

Introduction to the JVM and its foundational architecture
Deep dive into the class file structure and bytecode understanding
A comprehensive guide on memory management for optimal resource use
Exploration of the execution engine, class loading, and dynamic behaviors
Detailed discussion on garbage collection techniques and memory profiling
Insights into GraalVM and alternative JVMs for performance enhancement
Practical application of Java framework principles, reflection, and annotation processing

This book uncovers the JVM's technicalities and emphasizes practical application through case studies, making the complex concepts accessible and applicable. Whether you're looking to optimize existing Java applications or build new ones with efficiency and scalability in mind, "Mastering the Java Virtual Machine" offers the knowledge and tools to achieve these goals. By integrating the insights and techniques presented in this guide, developers can significantly elevate their Java application development and position themselves as experts in the field.

Conclusion

Exploring the Java Virtual Machine (JVM) is more than a technical endeavor; it's a journey toward mastering Java development.
The depth of understanding and practical skills acquired through studying the JVM's intricate mechanisms enable developers to build applications that are not only efficient and scalable but also robust and maintainable. "Mastering the Java Virtual Machine: An In-Depth Guide to JVM Internals and Performance Optimization" embodies this journey, providing a comprehensive resource that covers everything from JVM architecture and bytecode execution to advanced memory management and garbage collection techniques. This guide is a beacon for developers navigating the complex landscape of Java, illuminating the path to proficiency and expertise.

In mastering the JVM, developers unlock a new realm of possibilities in software development. The insights gained from this guide empower them to optimize applications, enhance performance, and confidently embrace Java's dynamic capabilities. The practical application of these concepts, supported by case studies and real-world scenarios, ensures that the knowledge is understood and applied effectively. As developers integrate these lessons into their work, they position themselves at the forefront of the field, ready to tackle the challenges of modern software development and shape the future of technology with their contributions.
In this example, we'll learn about the Strategy pattern in Spring. We'll cover different ways to inject strategies, starting from a simple list-based approach and moving to a more efficient map-based method. To illustrate the concept, we'll use the three Unforgivable curses from the Harry Potter series: Avada Kedavra, Crucio, and Imperio.

What Is the Strategy Pattern?

The Strategy pattern is a design principle that allows you to switch between different algorithms or behaviors at runtime. It helps make your code flexible and adaptable by allowing you to plug in different strategies without changing the core logic of your application. This approach is useful in scenarios where you have different implementations for a specific task or functionality and want to make your system more adaptable to changes. It promotes a more modular code structure by separating the algorithmic details from the main logic of your application.

Step 1: Implementing Strategy

Picture yourself as a dark wizard who strives to master the power of Unforgivable curses with Spring. Our mission is to implement all three curses: Avada Kedavra, Crucio, and Imperio. After that, we will switch between curses (strategies) at runtime. Let's start with our strategy interface:

Java

public interface CurseStrategy {
    String useCurse();
    String curseName();
}

In the next step, we need to implement all the Unforgivable curses:

Java

@Component
public class CruciatusCurseStrategy implements CurseStrategy {

    @Override
    public String useCurse() {
        return "Attack with Crucio!";
    }

    @Override
    public String curseName() {
        return "Crucio";
    }
}

@Component
public class ImperiusCurseStrategy implements CurseStrategy {

    @Override
    public String useCurse() {
        return "Attack with Imperio!";
    }

    @Override
    public String curseName() {
        return "Imperio";
    }
}

@Component
public class KillingCurseStrategy implements CurseStrategy {

    @Override
    public String useCurse() {
        return "Attack with Avada Kedavra!";
    }

    @Override
    public String curseName() {
        return "Avada Kedavra";
    }
}

Step 2: Inject Curses as a List

Spring brings a touch of magic that allows us to inject multiple implementations of an interface as a List, so we can use it to inject strategies and switch between them. But let's first create the foundation: the Wizard interface.

Java

public interface Wizard {
    String castCurse(String name);
}

Now we can inject our curses (strategies) into the Wizard and filter for the desired one:

Java

@Service
public class DarkArtsWizard implements Wizard {

    private final List<CurseStrategy> curses;

    public DarkArtsWizard(List<CurseStrategy> curses) {
        this.curses = curses;
    }

    @Override
    public String castCurse(String name) {
        return curses.stream()
            .filter(s -> name.equals(s.curseName()))
            .findFirst()
            .orElseThrow(UnsupportedCurseException::new)
            .useCurse();
    }
}

UnsupportedCurseException is also created, for when the requested curse does not exist.
Java

public class UnsupportedCurseException extends RuntimeException {
}

And we can verify that curse casting is working:

Java

@SpringBootTest
class DarkArtsWizardTest {

    @Autowired
    private DarkArtsWizard wizard;

    @Test
    public void castCurseCrucio() {
        assertEquals("Attack with Crucio!", wizard.castCurse("Crucio"));
    }

    @Test
    public void castCurseImperio() {
        assertEquals("Attack with Imperio!", wizard.castCurse("Imperio"));
    }

    @Test
    public void castCurseAvadaKedavra() {
        assertEquals("Attack with Avada Kedavra!", wizard.castCurse("Avada Kedavra"));
    }

    @Test
    public void castCurseExpelliarmus() {
        assertThrows(UnsupportedCurseException.class, () -> wizard.castCurse("Abrakadabra"));
    }
}

Another popular approach is to define a canUse method instead of curseName. It returns a boolean and allows us to use more complex filtering, like:

Java

public interface CurseStrategy {
    String useCurse();
    boolean canUse(String name, String wizardType);
}

@Component
public class CruciatusCurseStrategy implements CurseStrategy {

    @Override
    public String useCurse() {
        return "Attack with Crucio!";
    }

    @Override
    public boolean canUse(String name, String wizardType) {
        return "Crucio".equals(name) && "Dark".equals(wizardType);
    }
}

@Service
public class DarkArtsWizard implements Wizard {

    private final List<CurseStrategy> curses;

    public DarkArtsWizard(List<CurseStrategy> curses) {
        this.curses = curses;
    }

    @Override
    public String castCurse(String name) {
        return curses.stream()
            .filter(s -> s.canUse(name, "Dark"))
            .findFirst()
            .orElseThrow(UnsupportedCurseException::new)
            .useCurse();
    }
}

Pros: Easy to implement.
Cons: Runs through a loop every time, which can lead to slower execution times and increased processing overhead.

Step 3: Inject Strategies as a Map

We can easily address the cons from the previous section: Spring lets us inject a Map of bean names to instances. It simplifies the code and improves its efficiency.

Java

@Service
public class DarkArtsWizard implements Wizard {

    private final Map<String, CurseStrategy> curses;

    public DarkArtsWizard(Map<String, CurseStrategy> curses) {
        this.curses = curses;
    }

    @Override
    public String castCurse(String name) {
        CurseStrategy curse = curses.get(name);
        if (curse == null) {
            throw new UnsupportedCurseException();
        }
        return curse.useCurse();
    }
}

This approach has a downside: Spring injects the bean name as the key for the Map, so strategy names are the same as the bean names, like cruciatusCurseStrategy. This dependency on Spring's internal bean names might cause problems if Spring's naming scheme or our class names change without notice. Let's check that we're still capable of casting those curses:

Java

@SpringBootTest
class DarkArtsWizardTest {

    @Autowired
    private DarkArtsWizard wizard;

    @Test
    public void castCurseCrucio() {
        assertEquals("Attack with Crucio!", wizard.castCurse("cruciatusCurseStrategy"));
    }

    @Test
    public void castCurseImperio() {
        assertEquals("Attack with Imperio!", wizard.castCurse("imperiusCurseStrategy"));
    }

    @Test
    public void castCurseAvadaKedavra() {
        assertEquals("Attack with Avada Kedavra!", wizard.castCurse("killingCurseStrategy"));
    }

    @Test
    public void castCurseExpelliarmus() {
        assertThrows(UnsupportedCurseException.class, () -> wizard.castCurse("Crucio"));
    }
}

Pros: No loops.
Cons: Dependency on bean names, which makes the code less maintainable and more prone to errors if names are changed or refactored.
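One partial mitigation is worth a quick sketch before moving on: Spring lets you set the bean name explicitly in the @Component annotation, so the injected Map keys can match the domain names instead of the generated class-derived ones (the name below is illustrative).

Java

// Giving the bean an explicit name makes the injected Map key "Crucio"
// instead of the generated "cruciatusCurseStrategy".
@Component("Crucio")
public class CruciatusCurseStrategy implements CurseStrategy {

    @Override
    public String useCurse() {
        return "Attack with Crucio!";
    }

    @Override
    public String curseName() {
        return "Crucio";
    }
}

The bean name then does double duty as a domain identifier, which is easy to overlook during refactoring; the next step shows a cleaner way to remove the bean-name dependency altogether.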
Step 4: Inject a List and Convert It to a Map

The cons of Map injection can be easily eliminated if we inject a List and convert it to a Map:

Java

@Service
public class DarkArtsWizard implements Wizard {

    private final Map<String, CurseStrategy> curses;

    public DarkArtsWizard(List<CurseStrategy> curses) {
        this.curses = curses.stream()
            .collect(Collectors.toMap(CurseStrategy::curseName, Function.identity()));
    }

    @Override
    public String castCurse(String name) {
        CurseStrategy curse = curses.get(name);
        if (curse == null) {
            throw new UnsupportedCurseException();
        }
        return curse.useCurse();
    }
}

With this approach, we can go back to using curseName instead of Spring's bean names as Map keys (strategy names).

Step 5: @Autowired in an Interface

Spring supports autowiring into methods; the simplest example of this is setter injection. This feature allows us to use @Autowired in a default method of an interface, so we can register each CurseStrategy with the Wizard interface without needing to implement a registration method in every strategy implementation. Let's update the Wizard interface by adding a registerCurse method:

Java

public interface Wizard {
    String castCurse(String name);
    void registerCurse(String curseName, CurseStrategy curse);
}

This is the Wizard implementation:

Java

@Service
public class DarkArtsWizard implements Wizard {

    private final Map<String, CurseStrategy> curses = new HashMap<>();

    @Override
    public String castCurse(String name) {
        CurseStrategy curse = curses.get(name);
        if (curse == null) {
            throw new UnsupportedCurseException();
        }
        return curse.useCurse();
    }

    @Override
    public void registerCurse(String curseName, CurseStrategy curse) {
        curses.put(curseName, curse);
    }
}

Now, let's update the CurseStrategy interface by adding a method with the @Autowired annotation:

Java

public interface CurseStrategy {
    String useCurse();
    String curseName();

    @Autowired
    default void registerMe(Wizard wizard) {
        wizard.registerCurse(curseName(), this);
    }
}

At the moment of injecting dependencies, we register our curse with the Wizard.

Pros: No loops, and no reliance on internal Spring bean names.
Cons: No cons, pure dark magic.

Conclusion

In this article, we explored the Strategy pattern in the context of Spring. We assessed different strategy injection approaches and demonstrated an optimized solution using Spring's capabilities. The full source code for this article can be found on GitHub.
On March 19, 2024, Oracle announced the release of Java 22, the latest version of the popular programming language and development platform. This significant update delivers a wide range of new features and improvements that Java developers should be excited about. Let's take a deep dive into the most important enhancements in Java 22 and what they mean for the Java development community.

Language Enhancements From Project Amber

One of the key focus areas in Java 22 is Project Amber, which aims to evolve the Java language and make it more expressive and concise. Here are the notable language features introduced in this release:

1. Statements Before super(...)

With JEP 447, developers now have more flexibility in expressing constructor behavior. Statements that do not reference the instance being created can now appear before an explicit constructor invocation. This allows for a more natural placement of logic and preserves the top-down execution order of constructors.

2. Unnamed Variables and Patterns

JEP 456 introduces unnamed variables and patterns, which can be used when variable declarations or nested patterns are required but never used. This enhancement improves code readability, reduces errors, and enhances the maintainability of the codebase.

3. String Templates

The second preview of JEP 459 simplifies the development of Java programs by making it easier to express strings that include runtime-computed values. String templates improve security when composing strings from user-provided values and enhance the readability of expressions mixed with text.

4. Implicitly Declared Classes and Instance Main Methods

JEP 463, also in its second preview, provides a smooth on-ramp for Java beginners by allowing them to write their first programs without needing to understand complex language features. This feature enables streamlined declarations for single-class programs and allows students to gradually expand their programs as their skills grow.

Concurrency Improvements With Project Loom

Project Loom, which focuses on making it easier to write and maintain concurrent and parallel code, brings two significant features in Java 22:

1. Structured Concurrency

JEP 462, in its second preview, introduces an API for structured concurrency. This feature helps developers streamline error handling, cancellation, and observability in concurrent programming. It promotes a style that eliminates common risks such as thread leaks and cancellation delays.

2. Scoped Values

The second preview of JEP 464 introduces scoped values, which enable the sharing of immutable data within and across threads. Scoped values improve the ease of use, comprehensibility, performance, and robustness of concurrent code.

Native Interoperability With Project Panama

Project Panama aims to improve Java's interoperability with native code and data. Java 22 includes two key features from this project:

1. Foreign Function and Memory API

JEP 454 introduces an API that allows Java programs to efficiently invoke foreign functions and safely access foreign memory without relying on the Java Native Interface (JNI). This feature increases ease of use, flexibility, safety, and performance when interoperating with native libraries and data.

2. Vector API

The seventh incubator of JEP 460 provides an API to express vector computations that can be compiled to vector instructions on supported CPU architectures. This enables developers to achieve superior performance compared to equivalent scalar computations.
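To make the Vector API concrete, here is a minimal element-wise addition sketch. The API still lives in the jdk.incubator.vector module, so compiling and running it requires --add-modules jdk.incubator.vector; everything else is standard JDK.

Java

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorAdd {

    // The widest vector shape supported by the current CPU.
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static void add(float[] a, float[] b, float[] c) {
        int i = 0;
        // Process full vector-width chunks using SIMD instructions.
        for (; i < SPECIES.loopBound(a.length); i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.add(vb).intoArray(c, i);
        }
        // Scalar loop for any leftover tail elements.
        for (; i < a.length; i++) {
            c[i] = a[i] + b[i];
        }
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f, 5f};
        float[] b = {10f, 20f, 30f, 40f, 50f};
        float[] c = new float[a.length];
        add(a, b, c);
        System.out.println(java.util.Arrays.toString(c));
    }
}

The same source compiles down to the appropriate vector instructions on supported architectures, which is exactly the portability promise of the API.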
Core Libraries and Tools Enhancements

Java 22 also brings several improvements to core libraries and tools:

1. Class-File API

JEP 457, in preview, introduces a standard API for parsing, generating, and transforming Java class files. This feature aims to improve developer productivity when working with class files.

2. Launch Multi-File Source-Code Programs

JEP 458 enhances the Java application launcher to enable running programs supplied as multiple files of Java source code. This gives developers more flexibility in when and how they adopt build tools.

3. Stream Gatherers

The preview of JEP 461 enhances the Stream API to support custom intermediate operations. This feature makes stream pipelines more flexible and expressive, allowing developers to write more efficient and maintainable code.

Performance Improvements

Java 22 includes a notable performance update with JEP 423: Region Pinning for G1. This feature reduces latency by allowing garbage collection to happen during certain native library calls. By pinning only the regions containing objects that need to be blocked, garbage collection can continue normally in unpinned regions, improving overall performance.

Cloud Support and Java Management Service

Java 22 is optimized for deployment in the cloud, particularly on Oracle Cloud Infrastructure (OCI). OCI is one of the first hyperscale clouds to support Java 22, offering free access to Oracle Java SE, Oracle GraalVM, and the Java SE Subscription Enterprise Performance Pack. Additionally, Java 22 is supported by the Java Management Service (JMS), an OCI-native service that provides a unified console and dashboard for managing Java runtimes and applications across on-premises and cloud environments.

JavaOne Returns in 2025

In exciting news for the Java community, Oracle announced that JavaOne, the flagship event for Java developers, will return to the San Francisco Bay Area in 2025. Taking place from March 17-20, JavaOne 2025 will give attendees the opportunity to hear about the latest Java developments and interact with Oracle's Java experts and industry luminaries.

Conclusion

Java 22 is a significant release that brings a wide range of enhancements and new features to the Java platform. From language improvements and concurrency updates to native interoperability and performance optimizations, this release offers something for every Java developer. With the return of JavaOne in 2025 and the continued support for Java in the cloud through services like JMS, the future looks bright for the Java ecosystem. Developers should explore the new capabilities offered by Java 22 and leverage them to build more efficient, secure, and scalable applications. As always, the release of Java 22 is the result of collaboration between Oracle and the global Java developer community through OpenJDK and the Java Community Process (JCP). The Java community's commitment to innovation and continuous improvement is what keeps the platform vibrant and relevant, even after nearly three decades. Embrace the power of Java 22 and unlock new possibilities in your development projects.
Microservices have emerged as a transformative architectural approach in the realm of software development, offering a paradigm shift from monolithic structures to a more modular and scalable system. At its core, microservices involve breaking down complex applications into smaller, independently deployable services that communicate seamlessly, fostering agility, flexibility, and ease of maintenance. This decentralized approach allows developers to focus on specific functionalities, enabling rapid development, continuous integration, and efficient scaling to meet the demands of modern, dynamic business environments. As organizations increasingly embrace the benefits of microservices, this article explores the key principles, advantages, and challenges associated with this architectural style, shedding light on its pivotal role in shaping the future of software design and deployment.

A fundamental characteristic of microservices applications is the ability to design, develop, and deploy each microservice independently, utilizing diverse technology stacks. Each microservice functions as a self-contained, autonomous application with its own dedicated persistent storage, whether it be a relational database, a NoSQL DB, or even a legacy file storage system. This autonomy enables individual microservices to scale independently, facilitating seamless real-time infrastructure adjustments and enhancing overall manageability.

NCache Caching Layer in Microservice Architecture

In scenarios where application transactions surge, bottlenecks may persist, especially in architectures where microservices store data in non-scalable relational databases. Simply deploying additional instances of the microservice doesn't alleviate the problem. To address these challenges, consider integrating NCache as a distributed cache at the caching layer between microservices and datastores. NCache serves not only as a cache but also functions as a scalable in-memory publisher/subscriber messaging broker, facilitating asynchronous communication between microservices.

Java microservice performance can be optimized with NCache techniques such as cache item locking, cache data grouping, Hibernate caching, SQL queries, distributed data structures, Spring Data caching, and pub/sub messaging, among many others. Please check the out-of-the-box features provided by NCache.

Using NCache as Hibernate Second-Level Java Cache

Hibernate First-Level Cache

The Hibernate first-level cache serves as a fundamental standalone (in-proc) cache linked to the Session object, limited to the current session. Nonetheless, a drawback of the first-level cache is its inability to share objects between different sessions. If the same object is required by multiple sessions, each triggers a database trip to load it, intensifying database traffic and exacerbating scalability issues. Furthermore, when the session concludes, all cached data is lost, necessitating a fresh fetch from the database upon the next retrieval.

Hibernate Second-Level Cache

For high-traffic Hibernate applications relying solely on the first-level cache, deployment in a web farm introduces challenges related to cache synchronization across servers. In a web farm setup, each node operates a web server—such as Apache, Oracle WebLogic, etc.—with multiple instances of httpd processes to serve requests.
Each Hibernate first-level cache in these HTTP worker processes maintains a distinct version of the same data directly cached from the database, posing synchronization issues. This is why Hibernate offers a second-level cache with a provider model. The Hibernate second-level cache enables you to integrate third-party distributed (out-proc) caching providers to cache objects across sessions and servers. Unlike the first-level cache, the second-level cache is associated with the SessionFactory object and is accessible to the entire application, extending beyond a single session.

Enabling the Hibernate second-level cache results in the coexistence of two caches: the first-level cache and the second-level cache. Hibernate endeavors to retrieve objects from the first-level cache first; if unsuccessful, it attempts to fetch them from the second-level cache. If both attempts fail, the objects are directly loaded from the database and cached. This configuration substantially reduces database traffic, as a significant portion of the data is served by the second-level distributed cache.

NCache Java has implemented a Hibernate second-level caching provider by extending org.hibernate.cache.CacheProvider. Integrating the NCache Java Hibernate distributed caching provider with the Hibernate application requires no code changes. This integration enables you to scale your Hibernate application to multi-server configurations without the database becoming a bottleneck. NCache also delivers enterprise-level distributed caching features, including data size management, data synchronization across servers, and more. To incorporate the NCache Java Hibernate caching provider, a simple modification of your hibernate.cfg.xml and ncache.xml is all that is required. Thus, with the NCache Java Hibernate distributed cache provider, you can achieve linear scalability for your Hibernate applications seamlessly, requiring no alterations to your existing code.

Code Snippet (Java):

// Configure Hibernate properties programmatically
Properties hibernateProperties = new Properties();
hibernateProperties.put("hibernate.connection.driver_class", "org.h2.Driver");
hibernateProperties.put("hibernate.connection.url", "jdbc:h2:mem:testdb");
hibernateProperties.put("hibernate.show_sql", "false");
hibernateProperties.put("hibernate.hbm2ddl.auto", "create-drop");
hibernateProperties.put("hibernate.cache.use_query_cache", "true");
hibernateProperties.put("hibernate.cache.use_second_level_cache", "true");
hibernateProperties.put("hibernate.cache.region.factory_class", "org.hibernate.cache.jcache.internal.JCacheRegionFactory");
hibernateProperties.put("hibernate.javax.cache.provider", "com.alachisoft.ncache.hibernate.jcache.HibernateNCacheCachingProvider");
// Set other Hibernate properties as needed

Configuration configuration = new Configuration()
        .setProperties(hibernateProperties)
        .addAnnotatedClass(Product.class);
Logger.getLogger("org.hibernate").setLevel(Level.OFF);

// Build the ServiceRegistry
ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
        .applySettings(configuration.getProperties())
        .build();

// Build the SessionFactory
SessionFactory factory = configuration.buildSessionFactory(serviceRegistry);

// Create a List of Product objects
ArrayList<Product> products = (ArrayList<Product>) getProducts();

// Open a new Hibernate session to save products to the database.
// This also caches them.
try (Session session = factory.openSession()) {
    Transaction transaction = session.beginTransaction();
    // save() stores each product in the database and caches it too
    System.out.println("ProductID, Name, Price, Category");
    for (Product product : products) {
        System.out.println("- " + product.getProductID() + ", " + product.getName() + ", " + product.getPrice() + ", " + product.getCategory());
        session.save(product);
    }
    transaction.commit();
    System.out.println();
}

// Now open a new session to fetch products from the DB.
// But, these products are actually fetched from the cache.
try (Session session = factory.openSession()) {
    List<Product> productList = (List<Product>) session.createQuery("from Product").list();
    if (productList != null) {
        printProductDetails(productList);
    }
}

Integrate NCache with Hibernate to effortlessly cache the results of queries. When these objects are subsequently fetched by Hibernate, they are retrieved from the cache, thereby avoiding a costly trip to the database. In the code sample above, the products are saved to the database and cached at the same time; when a new session is opened to fetch the product details, they are served from the cache, avoiding an unnecessary database trip. Learn more about Hibernate Caching.

Scaling With NCache Pub/Sub Messaging

NCache is a distributed in-memory caching solution designed for .NET. Its compatibility extends to Java through a native client and third-party integrations, ensuring seamless support for both platforms. NCache serves as an in-memory distributed data store tailored for .NET and Java, offering a feature-rich, in-memory pub/sub mechanism for event-driven communication. This makes it straightforward to set up NCache as a messaging broker, employing the Pub/Sub model for seamless asynchronous communication between microservices.

Using NCache In-Memory Pub/Sub for Microservices

NCache enables Pub/Sub functionality by establishing a topic where microservices can publish and subscribe to events. These events are published to the NCache message broker outside the microservice. Within each subscribing microservice, there exists an event handler to manage the corresponding event once it has been published by the originating microservice. In the realm of Java microservices, NCache functions as an event bus or message broker, facilitating the relay of messages to one or multiple subscribers. In the context of Pub/Sub models that necessitate a communication channel, NCache serves as a medium for topics. This entails the publisher dispatching messages to the designated topic and subscribers receiving notifications through the same topic. Employing NCache as the medium for topics promotes loose coupling within the model, offering enhanced abstraction and additional advantages for distributed topics.

Publish

The code snippet below initializes the messagingService object using the NCache MessagingService package.

Initializing the Topic (Java):

// Create a Topic in NCache.
MessagingService messagingService = cache.getMessagingService();
Topic topic = messagingService.createTopic(topicName);

// Create a thread pool for publishers
ExecutorService publisherThreadPool = Executors.newFixedThreadPool(2);

The code snippet below registers the subscribers to this topic.

Registering subscribers to this Topic (Java):

MessageReceivedListener subscriptionListener1 = new MessageReceivedListener() {
    @Override
    public void onMessageReceived(Object o, MessageEventArgs messageEventArgs) {
        messageReceivedSubscription1(messageEventArgs.getMessage());
    }
};

MessageReceivedListener subscriptionListener2 = new MessageReceivedListener() {
    @Override
    public void onMessageReceived(Object o, MessageEventArgs messageEventArgs) {
        messageReceivedSubscription2(messageEventArgs.getMessage());
    }
};

TopicSubscription subscription1 = topic.createSubscription(subscriptionListener1);
TopicSubscription subscription2 = topic.createSubscription(subscriptionListener2);

NCache provides two variants of durable subscriptions to cater to the message durability needs within your Java microservices:

Shared Durable Subscriptions: These allow multiple subscribers to connect to a single subscription. A round-robin approach is employed to distribute messages among the various subscribers. Even if a subscriber exits the network, messages persistently flow between the active subscribers.

Exclusive Durable Subscriptions: In this type, only one active subscriber is allowed on a subscription at any given time. No new subscriber requests are accepted for the same subscription while the existing connection is active.

Learn more about Pub/Sub Messaging with NCache here: Pub/Sub Messaging in Cache: An Overview

SQL Query on Cache

NCache provides your microservices with the capability to perform SQL-like queries on indexed cache data. This functionality becomes particularly beneficial when the values of the keys storing the desired information are not known. It abstracts much of the lower-level cache API calls, contributing to clearer and more maintainable application code. This feature is especially advantageous for individuals who find SQL-like commands more intuitive and comfortable to work with.

NCache provides functionality for searching and removing cache data through queries similar to SQL's SELECT and DELETE statements. However, operations like INSERT and UPDATE are not available. For executing SELECT queries within the cache, NCache utilizes ExecuteReader; the ExecuteScalar function is used to carry out a query and retrieve the first row's first column from the resulting data set, disregarding any extra columns or rows.

For NCache SQL queries to function, indexes must be established on all objects undergoing search. This can be achieved through two methods: configuring the cache or utilizing code with "Custom Attributes" to annotate object fields. When objects are added to the cache, this approach automatically creates indexes on the specified fields.

Code Snippet (Java):

String cacheName = "demoCache";

// Connect to the cache and return a cache handle
Cache cache = CacheManager.getCache(cacheName);

// Adds all the products to the cache. This automatically creates indexes on various
// attributes of the Product object by using "Custom Attributes".
addSampleData(cache);

// $VALUE$ keyword means the entire object instead of individual attributes that are also possible
AND price < ?"; QueryCommand sqlCommand = new QueryCommand(sql); List<String> catParamList = new ArrayList<>(Arrays.asList(("Electronics"), ("Stationery"))); sqlCommand.getParameters().put("category", catParamList); sqlCommand.getParameters().put("price", 2000); // ExecuteReader returns ICacheReader with the query resultset CacheReader resultSet = cache.getSearchService().executeReader(sqlCommand); List<Product> fetchedProducts = new ArrayList<>(); if (resultSet.getFieldCount() > 0) { while (resultSet.read()) { // getValue() with $VALUE$ keyword returns the entire object instead of just one column fetchedProducts.add(resultSet.getValue("$VALUE$", Product.class)); } } printProducts(fetchedProducts); Utilize SQL in NCache to perform queries on cached data by focusing on object attributes and Tags, rather than solely relying on keys. In this example, we utilize "Custom Attributes" to generate an index on the Product object. Learn more about SQL Query with NCache in Java Query Data in Cache Using SQL Read-Thru and Write-Thru Utilize the Data Source Providers feature of NCache to position it as the primary interface for data access within your microservices architecture. When a microservice needs data, it should first query the cache. If the data is present, the cache supplies it directly. Otherwise, the cache employs a read-thru handler to fetch the data from the datastore on behalf of the client, caches it, and then provides it to the microservice. In a similar fashion, for write operations (such as Add, Update, Delete), a microservice can perform these actions on the cache. The cache then automatically carries out the corresponding write operation on the datastore using a write-thru handler. Furthermore, you have the option to compel the cache to fetch data directly from the datastore, regardless of the presence of a possibly outdated version in the cache. This feature is essential when microservices require the most current information and complements the previously mentioned cache consistency strategies. The integration of the Data Source Provider feature not only simplifies your application code but also, when combined with NCache's database synchronization capabilities, ensures that the cache is consistently updated with fresh data for processing. ReadThruProvider For implementing Read-Through caching, it's necessary to create an implementation of the ReadThruProvider interface in Java Here's a code snippet to get started with implementing Read-Thru in your microservices: Java ReadThruOptions readThruOptions = new ReadThruOptions(ReadMode.ReadThru, _readThruProviderName); product = _cache.get(_productId, readThruOptions, Product.class); Read more about Read-Thru implementation here: Read-Through Provider Configuration and Implementation WriteThruProvider: For implementing Write-Through caching, it's necessary to create an implementation of the WriteThruProvider interface in Java The code snippet to get started with implementing Write-Thru in your microservices: Java _product = new Product(); WriteThruOptions writeThruOptions = new WriteThruOptions(WriteMode.WriteThru, _writeThruProviderName) CacheItem cacheItem= new CacheItem(_customer) _cache.insert(_product.getProductID(), cacheItem, writeThruOptions); Read more about Write-Thru implementation here: Write-Through Provider Configuration and Implementation Summary Microservices are designed to be autonomous, enabling independent development, testing, and deployment from other microservices. 
While microservices provide benefits in scalability and rapid development cycles, some components of the application stack can present challenges. One such challenge is the use of relational databases, which may not support the necessary scale-out to handle growing loads. This is where a distributed caching solution like NCache becomes valuable. In this article, we have seen a variety of ready-to-use NCache features, such as pub/sub messaging, data caching, SQL queries, read-thru and write-thru providers, and Hibernate second-level Java caching, that simplify and streamline the integration of data caching into your microservices application, making it an effortless and natural extension.
In the first part of this series, we introduced the basics of brain-computer interfaces (BCIs) and how Java can be employed in developing BCI applications. In this second part, let's delve deeper into advanced concepts and explore a real-world example of a BCI application using NeuroSky's MindWave Mobile headset and their Java SDK.

Advanced Concepts in BCI Development

Motor Imagery Classification: This involves the mental rehearsal of physical actions without actual execution. Advanced machine learning algorithms like deep learning models can significantly improve classification accuracy.

Event-Related Potentials (ERPs): ERPs are specific patterns in brain signals that occur in response to particular events or stimuli. Developing BCI applications that exploit ERPs requires sophisticated signal processing techniques and accurate event detection algorithms.

Hybrid BCI Systems: Hybrid BCI systems combine multiple signal acquisition methods or integrate BCIs with other physiological signals (like eye tracking or electromyography). Developing such systems requires expertise in multiple signal acquisition and processing techniques, as well as efficient integration of different modalities.

Real-World BCI Example: Developing a Java Application With NeuroSky's MindWave Mobile

NeuroSky's MindWave Mobile is an EEG headset that measures brainwave signals and provides raw EEG data. The company provides a Java-based SDK called ThinkGear Connector (TGC), enabling developers to create custom applications that can receive and process the brainwave data.

Step-by-Step Guide to Developing a Basic BCI Application Using the MindWave Mobile and TGC

Establish Connection: Use the TGC's API to connect your Java application with the MindWave Mobile device over Bluetooth. The TGC provides straightforward methods for establishing and managing this connection.

ThinkGearSocket neuroSocket = new ThinkGearSocket(this);
neuroSocket.start();

Acquire Data: Once connected, your application will start receiving raw EEG data from the device. This data includes information about different types of brainwaves (e.g., alpha, beta, gamma), as well as attention and meditation levels.

public void onRawDataReceived(int rawData) {
    // Process raw data
}

Process Data: Use signal processing techniques to filter out noise and extract useful features from the raw data. The TGC provides built-in methods for some basic processing tasks, but you may need to implement additional processing depending on your application's needs.

public void onEEGPowerReceived(EEGPower eegPower) {
    // Process EEG power data
}

Interpret Data: Determine the user's mental state or intent based on the processed data. This could involve setting threshold levels for certain values or using machine learning algorithms to classify the data. For example, a high attention level might be interpreted as the user wanting to move a cursor on the screen.

public void onAttentionReceived(int attention) {
    // Interpret attention data
}

Perform Action: Based on the interpretation of the data, have your application perform a specific action. This could be anything from moving a cursor, controlling a game character, or adjusting the difficulty level of a task.

if (attention > ATTENTION_THRESHOLD) {
    // Perform action
}

Improving BCI Performance With Java

Optimize Signal Processing: Enhance the quality of acquired brain signals by implementing advanced signal processing techniques, such as adaptive filtering or blind source separation (see the short sketch after this article's conclusion).
Employ Advanced Machine Learning Algorithms: Utilize state-of-the-art machine learning models, such as deep neural networks or ensemble methods, to improve classification accuracy and reduce user training time. Libraries like DeepLearning4j or TensorFlow Java can be employed for this purpose.

Personalize BCI Models: Customize BCI models for individual users by incorporating user-specific features or adapting the model parameters during operation. This can be achieved using techniques like transfer learning or online learning.

Implement Efficient Real-Time Processing: Ensure that your BCI application can process brain signals and generate output commands in real time. Optimize your code, use parallel processing techniques, and leverage Java's concurrency features to achieve low-latency performance.

Evaluate and Validate Your BCI Application: Thoroughly test your BCI application on a diverse group of users and under various conditions to ensure its reliability and usability. Employ standard evaluation metrics and follow best practices for BCI validation.

Conclusion

Advanced BCI applications require a deep understanding of brain signal acquisition, processing, and classification techniques. Java, with its extensive libraries and robust performance, is an excellent choice for implementing such applications. By exploring advanced concepts, developing real-world examples, and continuously improving BCI performance, developers can contribute significantly to this revolutionary field.
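To illustrate the signal-processing point referenced earlier, here is a toy smoothing filter for raw EEG samples. This is a hedged sketch, not part of the NeuroSky SDK: the class name and the choice of an exponential moving average are illustrative assumptions, standing in for the more advanced adaptive filters mentioned above.

/**
 * Illustrative only: damps high-frequency noise in raw EEG samples with an
 * exponential moving average. Real BCI pipelines would use proper band-pass
 * or adaptive filters instead.
 */
public final class EegSmoother {

    private final double alpha; // smoothing factor in (0, 1]; smaller = smoother
    private double state;
    private boolean initialized;

    public EegSmoother(double alpha) {
        if (alpha <= 0 || alpha > 1) {
            throw new IllegalArgumentException("alpha must be in (0, 1]");
        }
        this.alpha = alpha;
    }

    /** Feed one raw sample (e.g., from onRawDataReceived) and get the smoothed value. */
    public double next(int rawSample) {
        if (!initialized) {
            state = rawSample;
            initialized = true;
        } else {
            state = alpha * rawSample + (1 - alpha) * state;
        }
        return state;
    }
}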
Java interfaces, for a very long time, were just that — interfaces, an anemic set of function prototypes. Even then, there were non-standard uses of interfaces (for example, marker interfaces), but that's it. However, since Java 8, there have been substantial changes in interfaces. The addition of default and static methods enabled many new possibilities: for example, adding new functionality to existing interfaces without breaking old code, or hiding all implementations behind factory methods, enforcing the "code against interface" policy. The addition of sealed interfaces enabled the creation of true sum types and the expression of design intent in code. Together, these changes made Java interfaces a powerful, concise, and expressive tool. Let's take a look at some non-traditional applications of Java interfaces.

Fluent Builder

Fluent (or Staged) Builder is a pattern used to assemble object instances. Unlike the traditional Builder pattern, it prevents the creation of incomplete objects and enforces a fixed order of field initialization. These properties make it the preferred choice for reliable and maintainable code. The idea behind Fluent Builder is rather simple. Instead of returning the same Builder instance after setting a property, it returns a new type (class or interface), which has only one method, therefore guiding the developer through the process of instance initialization. A fluent builder may omit the build() method at the end; instead, assembly ends once the last field is set. Unfortunately, the straightforward implementation of Fluent Builder is very verbose:

public record NameAge(String firstName, String lastName, Option<String> middleName, int age) {
    public static NameAgeBuilderStage1 builder() {
        return new NameAgeBuilder();
    }

    public static class NameAgeBuilder implements NameAgeBuilderStage1,
                                                  NameAgeBuilderStage2,
                                                  NameAgeBuilderStage3,
                                                  NameAgeBuilderStage4 {
        private String firstName;
        private String lastName;
        private Option<String> middleName;

        @Override
        public NameAgeBuilderStage2 firstName(String firstName) {
            this.firstName = firstName;
            return this;
        }

        @Override
        public NameAgeBuilderStage3 lastName(String lastName) {
            this.lastName = lastName;
            return this;
        }

        @Override
        public NameAgeBuilderStage4 middleName(Option<String> middleName) {
            this.middleName = middleName;
            return this;
        }

        @Override
        public NameAge age(int age) {
            return new NameAge(firstName, lastName, middleName, age);
        }
    }

    public interface NameAgeBuilderStage1 {
        NameAgeBuilderStage2 firstName(String firstName);
    }

    public interface NameAgeBuilderStage2 {
        NameAgeBuilderStage3 lastName(String lastName);
    }

    public interface NameAgeBuilderStage3 {
        NameAgeBuilderStage4 middleName(Option<String> middleName);
    }

    public interface NameAgeBuilderStage4 {
        NameAge age(int age);
    }
}

It is also not very safe, as it is still possible to cast the returned interface to NameAgeBuilder and call the age() method, getting an incomplete object. We might notice that each interface is a typical functional interface with only one method inside.
With this in mind, we may rewrite the code above into the following:

public record NameAge(String firstName, String lastName, Option<String> middleName, int age) {
    static NameAgeBuilderStage1 builder() {
        return firstName -> lastName -> middleName -> age ->
                new NameAge(firstName, lastName, middleName, age);
    }

    public interface NameAgeBuilderStage1 {
        NameAgeBuilderStage2 firstName(String firstName);
    }

    public interface NameAgeBuilderStage2 {
        NameAgeBuilderStage3 lastName(String lastName);
    }

    public interface NameAgeBuilderStage3 {
        NameAgeBuilderStage4 middleName(Option<String> middleName);
    }

    public interface NameAgeBuilderStage4 {
        NameAge age(int age);
    }
}

Besides being much more concise, this version is not susceptible to (even hacky) premature object creation.

Reduction of Implementation

Although default methods were created to enable the extension of existing interfaces without breaking the existing implementation, this is not the only use for them. For a long time, if we needed multiple implementations of the same interface, where many implementations share some code, the only way to avoid code duplication was to create an abstract class and inherit those implementations from it. Although this avoided code duplication, this solution is relatively verbose and causes unnecessary coupling. The abstract class is a purely technical entity that has no corresponding part in the application domain. With default methods, abstract classes are no longer necessary; common functionality can be written directly in the interface, reducing boilerplate, eliminating coupling, and improving maintainability.

But what if we go further? Sometimes, it is possible to express all necessary functionality using only very few implementation-specific methods. Ideally — just one. This makes implementation classes very compact and easy to reason about and maintain. Let's, for example, implement the Maybe<T> monad (yet another name for Optional<T>/Option<T>). No matter how rich and diverse an API we're planning to implement, it can still be expressed as a call to a single method; let's call it fold():

<R> R fold(Supplier<? extends R> nothingMapper, Function<? super T, ? extends R> justMapper)

This method accepts two functions; one is called when the value is present and another when the value is missing. The result of the application is just returned as the result of the implemented method. With this method, we can implement map() and flatMap() as:

default <U> Maybe<U> map(Function<? super T, U> mapper) {
    return fold(Maybe::nothing, t -> just(mapper.apply(t)));
}

default <U> Maybe<U> flatMap(Function<? super T, Maybe<U>> mapper) {
    return fold(Maybe::nothing, mapper);
}

These implementations are universal and applicable to both variants. Note that since we have exactly two implementations, it makes perfect sense to make the interface sealed. And to even further reduce the amount of boilerplate — use records:

public sealed interface Maybe<T> {
    default <U> Maybe<U> map(Function<? super T, U> mapper) {
        return fold(Maybe::nothing, t -> just(mapper.apply(t)));
    }

    default <U> Maybe<U> flatMap(Function<? super T, Maybe<U>> mapper) {
        return fold(Maybe::nothing, mapper);
    }

    <R> R fold(Supplier<? extends R> nothingMapper, Function<? super T, ? extends R> justMapper);

    static <T> Just<T> just(T value) {
        return new Just<>(value);
    }

    @SuppressWarnings("unchecked")
    static <T> Nothing<T> nothing() {
        return (Nothing<T>) Nothing.INSTANCE;
    }

    static <T> Maybe<T> maybe(T value) {
        return value == null ? nothing() : just(value);
    }

    record Just<T>(T value) implements Maybe<T> {
        public <R> R fold(Supplier<? extends R> nothingMapper, Function<? super T, ? extends R> justMapper) {
            return justMapper.apply(value);
        }
    }

    record Nothing<T>() implements Maybe<T> {
        static final Nothing<?> INSTANCE = new Nothing<>();

        @Override
        public <R> R fold(Supplier<? extends R> nothingMapper, Function<? super T, ? extends R> justMapper) {
            return nothingMapper.get();
        }
    }
}

Although this is not strictly necessary for the demonstration, this implementation uses a shared constant for the implementation of Nothing, reducing allocation. Another interesting property of this implementation: aside from the null check in the maybe() factory method, it uses no if statement (nor ternary operator) for its logic; the choice between the two cases is made via polymorphism. This improves performance and enables better optimization by the Java compiler.

Another useful property of this implementation is that it is convenient for pattern matching (unlike Java's Optional, for example):

var result = switch (maybe) {
    case Just<String>(var value) -> value;
    case Nothing<String> nothing -> "Nothing";
};

But sometimes, even implementation classes are not necessary. The example below shows how the entire implementation fits into the interface (full code can be found here):

public interface ShortenedUrlRepository {
    default Promise<ShortenedUrl> create(ShortenedUrl shortenedUrl) {
        return QRY."INSERT INTO shortenedurl (\{template().fieldNames()}) VALUES (\{template().fieldValues(shortenedUrl)}) RETURNING *"
                .in(db())
                .asSingle(template());
    }

    default Promise<ShortenedUrl> read(String id) {
        return QRY."SELECT * FROM shortenedurl WHERE id = \{id}"
                .in(db())
                .asSingle(template());
    }

    default Promise<Unit> delete(String id) {
        return QRY."DELETE FROM shortenedurl WHERE id = \{id}"
                .in(db())
                .asUnit();
    }

    DbEnv db();
}

To turn this interface into a working instance, all we need is to provide an instance of the environment. For example, like this:

var dbEnv = DbEnv.with(dbEnvConfig);
ShortenedUrlRepository repository = () -> dbEnv;

This approach sometimes results in code so concise that a more verbose version has to be written to preserve context. I'd say that this is quite an unusual property for Java code, which is often blamed for verbosity.

Utility … Interfaces?

Well, utility (as well as constant) interfaces were not feasible for a long time. Perhaps the main reason is that such interfaces could be implemented, and constants, as well as utility functions, would become an (unnecessary) part of the implementation. But with sealed interfaces, this issue can be solved in a way similar to how instantiation of utility classes is prevented:

public sealed interface Utility {
    ...

    record unused() implements Utility {}
}

At first glance, this approach may not seem to make much sense. However, the use of an interface eliminates the need for visibility modifiers for each method and/or constant. This, in turn, reduces the amount of syntactic noise, which is mandatory for classes but redundant for interfaces, as all their members are public.

Interfaces and Private Records

The combination of these two constructs enables conveniently writing code in an "OO without classes" style, enforcing "code against interface" while reducing boilerplate at the same time.
For example:

public interface ContentType {
    String headerText();
    ContentCategory category();

    static ContentType custom(String headerText, ContentCategory category) {
        record contentType(String headerText, ContentCategory category) implements ContentType {}

        return new contentType(headerText, category);
    }
}

The private record serves two purposes:

It keeps the use of the implementation under complete control. No direct instantiations are possible, only via the static factory method.

It keeps the implementation close to the interface, simplifying support, extension, and maintenance.

Note that the interface is not sealed, so one can do, for example, the following:

public enum CommonContentTypes implements ContentType {
    TEXT_PLAIN("text/plain; charset=UTF-8", ContentCategory.PLAIN_TEXT),
    APPLICATION_JSON("application/json; charset=UTF-8", ContentCategory.JSON),
    ;

    private final String headerText;
    private final ContentCategory category;

    CommonContentTypes(String headerText, ContentCategory category) {
        this.headerText = headerText;
        this.category = category;
    }

    @Override
    public String headerText() {
        return headerText;
    }

    @Override
    public ContentCategory category() {
        return category;
    }
}

Conclusion

Interfaces are a powerful Java feature, often underestimated and underutilized. This article is an attempt to shed light on the possible ways to utilize their power and get clean, expressive, concise, yet readable code.
In this article, learn how the Dapr project can reduce the cognitive load on Java developers and decrease application dependencies.

Coding Java applications for the cloud requires not only a deep understanding of distributed systems, cloud best practices, and common patterns but also an understanding of the Java ecosystem to know how to combine many libraries to get things working. Tools and frameworks like Spring Boot have significantly impacted developer experience by curating commonly used Java libraries, for example, logging (Log4j), parsing different formats (Jackson), serving HTTP requests (Tomcat, Netty, the reactive stack), etc. While Spring Boot provides a set of abstractions, best practices, and common patterns, there are still two things that developers must know to write distributed applications.

First, they must clearly understand which dependencies (clients/drivers) they must add to their applications depending on the available infrastructure. For example, they need to understand which database or message broker they need and what driver or client they need to add to their classpath to connect to it. Secondly, they must know how to configure that connection, the credentials, connection pools, retries, and other critical parameters for the application to work as expected. Understanding these configuration parameters pushes developers to know how these components (databases, message brokers, configuration stores, identity management tools) work to a point that goes beyond their responsibilities of writing business logic for their applications.

Learning best practices, common patterns, and how a large set of application infrastructure components work is not bad, but it takes a lot of development time away from building important features for your application. In this short article, we will look into how the Dapr project can help Java developers not only implement best practices and distributed patterns out of the box but also reduce the application's dependencies and the amount of knowledge required to code their applications. We will be looking at a simple example that you can find here.

This Pizza Store application demonstrates some basic behaviors that most business applications can relate to. The application is composed of three services that allow customers to place pizza orders in the system. The application will store orders in a database, in this case PostgreSQL, and use Kafka to exchange events between the services to cover async notifications. All the asynchronous communications between the services are marked with red dashed arrows.

Let's look at how to implement this with Spring Boot, and then let's add Dapr.

The Spring Boot Way

Using Spring Boot, developers can create these three services and start writing the business logic to process the order placed by the customer. Spring Boot developers can use http://start.spring.io to select which dependencies their applications will have. For example, with the Pizza Store Service, they will need Spring Web (to host and serve the FrontEnd and some REST endpoints), but also the Spring Actuators extension if we aim to run these services on Kubernetes.

But as with any application, if we want to store data, we will need a database/persistent storage, and we have many options to select from. If you look into Spring Data, you can see that Spring Data JPA provides an abstraction to SQL (relational) databases.
As you can see in the previous screenshot, there are also NoSQL options and different layers of abstractions here, depending on what your application is doing. If you decide to use Spring Data JPA, you are still responsible for adding the correct database driver to the application classpath. In the case of PostgreSQL, you can also select it from the list. We face a similar dilemma if we think about exchanging asynchronous messages between the application's services; there are too many options.

Because we are developers and want to get things moving forward, we must make some choices here. Let's use PostgreSQL as our database and Kafka as our messaging system/broker. I am a true believer in the Spring Boot programming model, including the abstraction layers and auto-configurations. However, as a developer, you are still responsible for ensuring that the right PostgreSQL JDBC driver and Kafka client are included in your services' classpath. While this is quite common in the Java space, there are a few drawbacks when dealing with larger applications that might consist of tens or hundreds of services.

Application and Infrastructure Dependencies Drawbacks

Looking at our simple application, we can spot a couple of challenges that application and operation teams must deal with when taking this application to production. Let's start with application dependencies and their relationship with the infrastructure components we have decided to use.

The Kafka client included in all services needs to be kept in sync with the version of the Kafka instance that the application will use. This dependency pushes developers to ensure they use the same Kafka instance version for development purposes. If we want to upgrade the Kafka instance version, we need to upgrade the client as well, which means releasing every service that includes the Kafka client again. This is particularly hard because Kafka tends to be used as a shared component across different services.

Databases such as PostgreSQL can be hidden behind a service and never exposed to other services directly. But imagine two or more services need to store data; if they choose to use different database versions, operation teams will need to deal with different stack versions, configurations, and maybe certifications for each version. Aligning on a single version, say PostgreSQL 16.x, once again couples all the services that need to store or read persistent data with their respective infrastructure components.

While versions, clients, and drivers create this coupling between applications and the available infrastructure, understanding complex configurations and their impact on application behavior is still a tough challenge to solve. Spring Boot does a fantastic job at ensuring that all configurations can be externalized and consumed from environment variables or property files, and while this aligns perfectly with the 12-factor apps principles and with container technologies such as Docker, defining the values of these configuration parameters is the core problem. Connection pool sizes, retries, and reconnection mechanisms configured differently across environments are still, to this day, common sources of issues when moving the same application from development environments to production.

Learning how to configure Kafka and PostgreSQL for this example will depend a lot on how many concurrent orders the application receives and how many resources (CPU and memory) the application has available to run.
Once again, learning the specifics of each infrastructure component is not a bad thing for developers. Still, it gets in the way of implementing new services and new functionalities for the store.

Decoupling Infrastructure Dependencies and Reusing Best Practices With Dapr

What if we could extract best practices, configurations, and the decision of which infrastructure components we need for our applications behind a set of APIs that application developers can consume, without worrying about which driver/client they need or how to configure the connections to be efficient, secure, and working across environments? This is not a new idea. Any company dealing with complex infrastructure and multiple services that need to connect to infrastructure will sooner or later implement an abstraction layer on top of common services that developers can use. The main problem is that building those abstractions and then maintaining them over time is hard, costs development time, and tends to get bypassed by developers who don't agree with or like the features provided.

This is where Dapr offers a set of building blocks to decouple your applications from infrastructure. Dapr Building Block APIs allow you to set up different component implementations and configurations without exposing developers to the hassle of choosing the right drivers or clients to connect to the infrastructure. Developers focus on building their applications by just consuming APIs.

As you can see in the diagram, developers don't need to know about "infrastructure land" as they can consume and trust APIs to, for example, store and retrieve data and publish and subscribe to events. This separation of concerns allows operation teams to provide consistent configurations across environments where we may want to use another version of PostgreSQL, Kafka, or a cloud provider service such as Google PubSub. Dapr uses the component model to define these configurations without affecting the application behavior and without pushing developers to worry about any of those parameters or the client/driver version they need to use.

Dapr for Spring Boot Developers

So, how does this look in practice? Dapr typically deploys to Kubernetes, meaning you need a Kubernetes cluster to install Dapr. Learning about how Dapr works and how to configure it might be too complicated and not related at all to developer tasks like building features. For development purposes, you can use the Dapr CLI, a command-line tool designed to be language agnostic, allowing you to run Dapr locally for your applications. I like the Dapr CLI, but once again, you will need to learn about how to use it, how to configure it, and how it connects to your application.

As a Spring Boot developer, adding a new command-line tool feels strange, as it is not integrated with the tools that I am used to using or my IDE. If I see that I need to download a new CLI or if I depend on deploying my apps into a Kubernetes cluster even to test them, I would probably step away and look for other tools and projects. That is why the Dapr community has worked so hard to integrate with Spring Boot more natively. These integrations seamlessly tap into the Spring Boot ecosystem without adding new tools or steps to your daily work. Let's see how this works with concrete examples.

You can add the following dependency in your Spring Boot application that integrates Dapr with Testcontainers.
<dependency>
    <groupId>io.diagrid.dapr</groupId>
    <artifactId>dapr-spring-boot-starter</artifactId>
    <version>0.10.7</version>
</dependency>

View the repository here. Testcontainers (now part of Docker) is a popular tool in Java to work with containers, primarily for tests, specifically integration tests that use containers to set up complex infrastructure. Our three Pizza Spring Boot services share this same dependency. This allows developers to enable their Spring Boot applications to consume the Dapr Building Block APIs for their local development without any Kubernetes, YAML, or configurations needed.

Once you have this dependency in place, you can start using the Dapr SDK to interact with the Dapr Building Block APIs; for example, you can store an incoming order using the Statestore APIs (see the sketch after this section), where STATESTORE_NAME is a configured Statestore component name, KEY is just the key that we want to use to store this order, and order is the order that we received from the Pizza Store front end.

Similarly, if you want to publish events to other services, you can use the PubSub Dapr API, for example, to emit an event that contains the order as the payload. The publishEvent API publishes an event containing the order as a payload into the Dapr PubSub component named PUBSUB_NAME, inside a specific topic indicated by PUBSUB_TOPIC (again, see the sketch after this section).

Now, how is this going to work? How is Dapr storing state when we call the saveState() API, or how are events published when we call publishEvent()? By default, the Dapr SDK will try to call the Dapr API endpoints on localhost, as Dapr was designed to run beside our applications.

For development purposes, to enable Dapr for your Spring Boot application, you can use one of the two built-in profiles: DaprBasicProfile or DaprFullProfile. The Basic profile provides access to the Statestore and PubSub APIs, but more advanced features such as Actors and Workflows will not work. If you want to get access to all Dapr Building Blocks, you can use the Full profile. Both of these profiles use in-memory implementations for the Dapr components, making your applications faster to bootstrap.

The dapr-spring-boot-starter was created to minimize the amount of Dapr knowledge developers need to start using it in their applications. For this reason, besides the dependency mentioned above, a test configuration is required in order to select which Dapr profile we want to use. Since Spring Boot 3.1.x, you can define a Spring Boot application that will be used for test purposes. The idea is to allow tests to set up your application with all that is needed to test it. From within the test packages (`src/test/<package>`), you can define a new @SpringBootApplication class, in this case, configured to use a Dapr profile. As you can see, this is just a wrapper for our PizzaStore application, which adds a configuration that includes the DaprBasicProfile.

With the DaprBasicProfile enabled, whenever we start our application for testing purposes, all the components that we need for the Dapr APIs to work will be started for our application to consume. If you need more advanced Dapr setups, you can always create your domain-specific Dapr profiles. Another advantage of using these test configurations is that we can also start the application using the test configuration for local development purposes by running `mvn spring-boot:test-run`. You can see how Testcontainers is transparently starting the `daprio/daprd` container.
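A minimal sketch of the saveState() and publishEvent() calls described above, using the Dapr Java SDK's DaprClient; the class name and the component/topic names here are illustrative placeholders, not the Pizza Store's actual code:

import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;

public class OrderService {

    private static final String STATESTORE_NAME = "kvstore";
    private static final String PUBSUB_NAME = "pubsub";
    private static final String PUBSUB_TOPIC = "topic";

    public void storeAndNotify(String key, Object order) throws Exception {
        // DaprClient talks to the Dapr sidecar, by default on localhost.
        try (DaprClient client = new DaprClientBuilder().build()) {
            // Store the incoming order in the configured Statestore component.
            client.saveState(STATESTORE_NAME, key, order).block();

            // Publish the order as an event on the configured PubSub topic.
            client.publishEvent(PUBSUB_NAME, PUBSUB_TOPIC, order).block();
        }
    }
}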
As a developer, how that container is configured is not important, as long as we can consume the Dapr APIs. I strongly recommend you check out the full example here, where you can run the application on Kubernetes with Dapr installed, or start each service and test locally using Maven. If this example is too complex for you, I recommend checking out these blog posts, where I create a very simple application from scratch: Using the Dapr StateStore API with Spring Boot; Deploying and configuring our simple application in Kubernetes.