Also known as the build stage of the SDLC, coding focuses on the writing and programming of a system. The Zones in this category take a hands-on approach to equip developers with the knowledge about frameworks, tools, and languages that they can tailor to their own build needs.
A framework is a collection of code that is leveraged in the development process by providing ready-made components. Through the use of frameworks, architectural patterns and structures are created, which help speed up the development process. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring framework, Drupal, Angular, Eclipse, and more.
Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular because of its versatility and is preferred as the primary choice unless a specific function is needed. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.
Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.
Development and programming tools are used to build frameworks, and they can be used for creating, debugging, and maintaining programs — and much more. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools and can help ensure engineers are writing clean code.
If you use Spring WebFlux, you probably want your requests to be more resilient. In this case, we can just use the retries that come packaged with the WebFlux library. There are various cases that we can take into account: Too many requests to the server An internal server error Unexpected format Server timeout We would make a test case for those using MockWebServer. We will add the WebFlux and the MockWebServer to a project: XML <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-webflux</artifactId> <version>2.7.15</version> </dependency> <dependency> <groupId>com.squareup.okhttp3</groupId> <artifactId>mockwebserver</artifactId> <version>4.11.0</version> <scope>test</scope> </dependency> <dependency> <groupId>io.projectreactor</groupId> <artifactId>reactor-test</artifactId> <scope>test</scope> <version>3.5.9</version> </dependency> Let’s check the scenario of too many requests on the server. In this scenario, our request fails because the server will not fulfill it. The server is still functional however and on another request, chances are we shall receive a proper response. Java import okhttp3.mockwebserver.MockResponse; import okhttp3.mockwebserver.MockWebServer; import okhttp3.mockwebserver.SocketPolicy; import org.junit.jupiter.api.Test; import org.springframework.web.reactive.function.client.WebClient; import reactor.core.publisher.Mono; import reactor.test.StepVerifier; import java.io.IOException; import java.time.Duration; import java.util.concurrent.TimeUnit; class WebFluxRetry { @Test void testTooManyRequests() throws IOException { MockWebServer server = new MockWebServer(); MockResponse tooManyRequests = new MockResponse() .setBody("Too Many Requests") .setResponseCode(429); MockResponse successfulRequests = new MockResponse() .setBody("successful"); server.enqueue(tooManyRequests); server.enqueue(tooManyRequests); server.enqueue(successfulRequests); server.start(); WebClient webClient = WebClient.builder() .baseUrl("http://" + server.getHostName() + ":" + server.getPort()) .build(); Mono<String> result = webClient.get() .retrieve() .bodyToMono(String.class) .retry(2); StepVerifier.create(result) .expectNextMatches(s -> s.equals("successful")) .verifyComplete(); server.shutdown(); } } We used the mock server in order to enqueue requests. Essentially the requests we placed on the mock server will be enqueued and consumed every time we do a request. The first two responses would be failed 429 responses from the server. Let’s check the case of 5xx responses. A 5xx can be caused by various reasons. Usually, if we face a 5xx, there is probably a problem in the server codebase. However, in some cases, 5xx might come as a result of an unstable service that regularly restarts. Also, a server might be deployed in an availability zone that faces network issues; it can even be a failed rollout that is not fully in effect. In this case, a retry makes sense. By retrying, the request will be routed to the next server behind the load balancer. 
We will try a request that has a bad status: Java @Test void test5xxResponse() throws IOException { MockWebServer server = new MockWebServer(); MockResponse tooManyRequests = new MockResponse() .setBody("Server Error") .setResponseCode(500); MockResponse successfulRequests = new MockResponse() .setBody("successful"); server.enqueue(tooManyRequests); server.enqueue(tooManyRequests); server.enqueue(successfulRequests); server.start(); WebClient webClient = WebClient.builder() .baseUrl("http://" + server.getHostName() + ":" + server.getPort()) .build(); Mono<String> result = webClient.get() .retrieve() .bodyToMono(String.class) .retry(2); StepVerifier.create(result) .expectNextMatches(s -> s.equals("successful")) .verifyComplete(); server.shutdown(); } Also, a response with the wrong format is possible to happen if an application goes haywire: Java @Data @AllArgsConstructor @NoArgsConstructor private static class UsernameResponse { private String username; } @Test void badFormat() throws IOException { MockWebServer server = new MockWebServer(); MockResponse tooManyRequests = new MockResponse() .setBody("Plain text"); MockResponse successfulRequests = new MockResponse() .setBody("{\"username\":\"test\"}") .setHeader("Content-Type","application/json"); server.enqueue(tooManyRequests); server.enqueue(tooManyRequests); server.enqueue(successfulRequests); server.start(); WebClient webClient = WebClient.builder() .baseUrl("http://" + server.getHostName() + ":" + server.getPort()) .build(); Mono<UsernameResponse> result = webClient.get() .retrieve() .bodyToMono(UsernameResponse.class) .retry(2); StepVerifier.create(result) .expectNextMatches(s -> s.getUsername().equals("test")) .verifyComplete(); server.shutdown(); } If we break it down, we created two responses in plain text format. Those responses would be rejected since they cannot be mapped to the UsernameResponse object. Thanks to the retries we managed to get a successful response. Our last request would tackle the case of a timeout: Java @Test void badTimeout() throws IOException { MockWebServer server = new MockWebServer(); MockResponse dealayedResponse= new MockResponse() .setBody("Plain text") .setSocketPolicy(SocketPolicy.DISCONNECT_DURING_RESPONSE_BODY) .setBodyDelay(10000, TimeUnit.MILLISECONDS); MockResponse successfulRequests = new MockResponse() .setBody("successful"); server.enqueue(dealayedResponse); server.enqueue(successfulRequests); server.start(); WebClient webClient = WebClient.builder() .baseUrl("http://" + server.getHostName() + ":" + server.getPort()) .build(); Mono<String> result = webClient.get() .retrieve() .bodyToMono(String.class) .timeout(Duration.ofMillis(5_000)) .retry(1); StepVerifier.create(result) .expectNextMatches(s -> s.equals("successful")) .verifyComplete(); server.shutdown(); } That’s it. Thanks to retries, our codebase was able to recover from failures and become more resilient. Also, we used MockWebServer, which can be very handy for simulating these scenarios.
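A closing note: the tests above use retry(2), which retries immediately and for every error. Reactor also ships a more configurable operator, retryWhen, backed by reactor.util.retry.Retry. The sketch below is not part of the original examples; it assumes the same WebClient setup and shows how to add exponential backoff while only retrying status codes where another attempt can plausibly succeed.

Java
import java.time.Duration;

import org.springframework.web.reactive.function.client.WebClient;
import org.springframework.web.reactive.function.client.WebClientResponseException;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

class ResilientClient {

    Mono<String> fetch(WebClient webClient) {
        return webClient.get()
                .retrieve()
                .bodyToMono(String.class)
                // Up to 3 additional attempts with exponential backoff,
                // instead of the immediate retries used in the tests above.
                .retryWhen(Retry.backoff(3, Duration.ofMillis(200))
                        .filter(ResilientClient::isRetryable));
    }

    // Only retry errors where another attempt can realistically succeed.
    private static boolean isRetryable(Throwable throwable) {
        if (!(throwable instanceof WebClientResponseException)) {
            return false;
        }
        WebClientResponseException ex = (WebClientResponseException) throwable;
        return ex.getStatusCode().value() == 429 || ex.getStatusCode().is5xxServerError();
    }
}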
ConcurrentHashMap is used extensively in multi-threaded applications. Examples of multi-threaded applications are online gaming applications and chat applications, which add the benefit of concurrency to the application. To make the application more concurrent in nature, ConcurrentHashMap introduces a concept called ‘Parallelism.’ In this article, we will learn more about parallelism in Concurrent Hashmaps. What Is Parallelism? Basically, parallel computing divides a problem into subproblems, solves those subproblems parallelly, and finally joins the results of the subproblems. Here, the subproblems will run in separate threads. Java Support to Parallelism in ConcurrentHashMap In order to make use of parallelism in ConcurrentHashMap, we need to use Java 1.8 version onwards. Parallelism is not supported in Java versions less than 1.8. Common Framework for Parallel Processing Java has introduced a framework called ‘fork and join’ that will enable parallel computing. It makes use of java.util.concurrent.ForkJoinPool API to achieve parallel computing. This API is used to implement parallelism in ConcurrentHashMap. Parallel Methods in ConcurrentHashMap ConcurrentHashMap effectively uses parallel computing with the help of parallelism threshold. It is a numerical value, and the default value is two. These are the following methods that have parallelism capabilities in ConcurrentHashMap. forEach() reduce() reduceEntries() forEachEntry() forEachKey() forEachValue() The concurrentHashMap deals with parallelism slightly differently, and you will understand that if you look at the arguments of these above methods. Each of these methods can take the parallelism threshold as an argument. First of all, parallelism is an optional feature. We can enable this feature by adding the proper parallel threshold value in the code. Usage of ConcurrentHashMap Without Parallelism Let us take an example of replacing all the string values of a concurrenthashmap. This is done without using parallelism. Example: concurrentHashMap.forEach((k,v) -> v=””); It is pretty straightforward, and we are iterating all the entries in a concurrenthashmap and replacing the value with an empty string. In this case, we are not using parallelism. Usage of ConcurrentHashMap With Parallelism Example: concurrentHashMap.forEach(2, (k,v) -> v=””); The above example iterates a ConcurrentHashMap and replaces the value of a map with an empty string. The arguments to the forEach() method are parallelism threshold and a functional interface. In this case, the problem will be divided into subproblems. The problem is replacing the concurrent hashmap's value with an empty string. This is achieved by dividing this problem into subproblems, i.e., creating separate threads for subproblems, and each thread will focus on replacing the value with an empty string. What Happens When Parallelism Is Enabled? When the parallelism threshold is enabled, JVM will create threads, and each thread will run to solve the problem and join the results of all the threads. The significance of this value is that if the number of records has reached a certain level (threshold), then only JVM will enable parallel processing in the above example. The application will enable parallel processing if there is more than one record in the map. This is a cool feature; we can control the parallelism by adjusting the threshold value. This way, we can take advantage of parallel processing in the application. 
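One caveat worth noting: assigning an empty string to the lambda parameter v, as in the snippets above, does not by itself modify the map; to really replace the stored values you have to write back into the map (for example with replace or replaceAll). Below is a minimal, runnable sketch of the parallel forEach just described, using a threshold of 2.

Java
import java.util.concurrent.ConcurrentHashMap;

public class ParallelForEachExample {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 100; i++) {
            map.put(i, "value-" + i);
        }

        // The first argument is the parallelism threshold: once the map holds
        // at least this many entries, the traversal may be split across
        // ForkJoinPool worker threads.
        map.forEach(2, (k, v) -> map.replace(k, v, ""));

        System.out.println(map.get(0)); // prints an empty string
    }
}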
Take a look at another example below: concurrentHashMap.forEach(10000, (k,v) -> v=””); In this case, the parallelism threshold is 10,000, which means that if the number of records is less than 10,000, JVM will not enable parallelism when replacing the values with an empty string. Fig: Full code example without parallelism Fig: Full code example with parallelism In the above example, the parallelism threshold is 10,000. Performance Comparison of Parallel Processing The following code replaces all the values in the map with an empty string. This concurrenthash map contains more than 100,000 entries in it. Let’s compare the performance of the below code without and with parallelism. Fig: Comparison of the code both with and without parallelism After running the above code, you can see there is a little performance improvement in the case of normal forEach operation. time without parallelism->20 milliseconds time with parallelism->30 milliseconds This is because the number of records on the map is fairly low. But if we add 10 million records to the map, then parallelism really wins! It takes less time to process the data. Take a look at the code in the below image: Fig: Threshold of the code with and without parallelism The above code replaces all the values in the concurrenthashmap with an empty string without using parallelism. Next, it uses parallelism to replace all the values of the concurrenthashmap with string one. This is the output: time without parallelism->537 milliseconds time with parallelism->231 milliseconds You can see that in the case of parallelism, it only takes half of the time. Note: The above values are not constant. It may produce different results in different systems. Thread Dump Analysis for Parallelism JVM uses the ForkJoinPool framework to enable parallel processing when we enable parallelism in the code. This framework creates a few worker threads based on the demand in the current processing. Let’s take a look at the thread dump analysis with parallelism enabled using the fastthread.io tool for the above code. Fig: fastThread report showing the thread count with parallelism enabled Fig: fastThread report showing the identical stacktrace by enabling parallelism You can understand from the above picture that it is using more threads. The reason for too many running threads is that it is using ForkJoinPool API. This is the API that is responsible for implementing the 'parallelism' behind the scenes. You will understand this difference when you look at the next section. View the report. Thread Dumps Analysis Without Parallelism Let us understand the thread dump analysis without enabling parallelism. Fig: fastThread report showing thread count without parallelism enabled Fig: fastThread report showing the identical stacktrace without enabling parallelism If you look closely at the above image, you can understand that only a few threads are used. In this case, there are only 35 threads as compared to the previous image. There are 32 runnable threads in this case. But, waiting and timed_waiting threads are 2 and 1, respectively. The reason for the reduced number of runnable threads, in this case, is that it is not calling the ForkJoinPool API. View the report. This way, the fastthread.io tool can provide a good insight into the thread dump internals very smartly. Summary We focused on parallelism in the concurrenthashmap and how this feature can be used in the application. Also, we understood what happens with the JVM when we enable this feature. 
Parallelism is a cool feature that can be used well in modern concurrent applications.
If you’re anything like me, you’ve noticed the massive boom in AI technology. It promises to disrupt not just software engineering but every industry. THEY’RE COMING FOR US!!! Just kidding ;P I’ve been bettering my understanding of what these tools are and how they work, and decided to create a tutorial series for web developers to learn how to incorporate AI technology into web apps. In this series, we’ll learn how to integrate OpenAI‘s AI services into an application built with Qwik, a JavaScript framework focused on the concept of resumability (this will be relevant to understand later). Here’s what the series outline looks like: Intro and Setup Your First AI Prompt Streaming Responses How Does AI Work Prompt Engineering AI-Generated Images Security and Reliability Deploying We’ll get into the specifics of OpenAI and Qwik where it makes sense, but I will mostly focus on general-purpose knowledge, tooling, and implementations that should apply to whatever framework or toolchain you are using. We’ll be working as closely to fundamentals as we can, and I’ll point out which parts are unique to this app. Here’s a little sneak preview. I thought it would be cool to build an app that takes two opponents and uses AI to determine who would win in a hypothetical fight. It provides some explanation and the option to create an AI-generated image. Sometimes the results come out a little wonky, but that’s what makes it fun. I hope you’re excited to get started because in this first post, we are mostly going to work on… Boilerplate :/ Prerequisites Before we start building anything, we have to cover a couple of prerequisites. Qwik is a JavaScript framework, so we will have to have Node.js (and NPM) installed. You can download the most recent version, but anything above version v16.8 should work. I’ll be using version 20. Next, we’ll also need an OpenAI account to have access to their API. At the end of the series, we will deploy our applications to a VPS (Virtual Private Server). The steps we follow should be the same regardless of what provider you choose. I’ll be using Akamai’s cloud computing services (formerly Linode). Setting Up the Qwik App Assuming we have the prerequisites out of the way, we can open a command line terminal and run the command: npm create qwik@latest. This will run the Qwik CLI that will help us bootstrap our application. It will ask you a series of configuration questions, and then generate the project for you. Here’s what my answers looked like: If everything works, open up the project and start exploring. Inside the project folder, you’ll notice some important files and folders: /src: Contains all application business logic /src/components: Contains reusable components to build our app with /src/routes: Responsible for Qwik’s file-based routing; Each folder represents a route (can be a page or API endpoint). To make a page, drop a index.{jsx|tsx} file in the route’s folder. /src/root.tsx: This file exports the root component responsible for generating the HTML document root. Start Development Qwik uses Vite as a bundler, which is convenient because Vite has a built-in development server. It supports running our application locally, and updating the browser when files change. To start the development server, we can open our project in a terminal and execute the command npm run dev. With the dev server running, you can open the browser and head to http://localhost:5173 and you should see a very basic app. 
Any time we make changes to our app, we should see those changes reflected almost immediately in the browser. Add Styling This project won’t focus too much on styling, so this section is totally optional if you want to do your own thing. To keep things simple, I’ll use Tailwind. The Qwik CLI makes it easy to add the necessary changes, by executing the terminal command, npm run qwik add. This will prompt you with several available Qwik plugins to choose from. You can use your arrow keys to move down to the Tailwind plugin and press Enter. Then it will show you the changes it will make to your codebase and ask for confirmation. As long as it looks good, you can hit Enter, once again. For my projects, I also like to have a consistent theme, so I keep a file in my GitHub to copy and paste styles from. Obviously, if you want your own theme, you can ignore this step, but if you want your project to look as amazing as mine, copy the styles from this file on GitHub into the /src/global.css file. You can replace the old styles, but leave the Tailwind directives in place. Prepare Homepage The last thing we’ll do today to get the project to a good starting point is make some changes to the homepage. This means making changes to /src/routes/index.tsx. By default, this file starts out with some very basic text and an example for modifying the HTML <head> by exporting a head variable. The changes I want to make include: Removing the head export Removing all text except the <h1>; Feel free to add your own page title text. Adding some Tailwind classes to center the content and make the <h1> larger Wrapping the content with a <main> tag to make it more semantic Adding Tailwind classes to the <main> tag to add some padding and center the contents These are all minor changes that aren’t strictly necessary, but I think they will provide a nice starting point for building out our app in the next post. Here’s what the file looks like after my changes. import { component$ } from "@builder.io/qwik"; export default component$(() => { return ( <main class="max-w-4xl mx-auto p-4"> <h1 class="text-6xl">Hi [wave emoji]</h1> </main> ); }); And in the browser, it looks like this: Conclusion That’s all we’ll cover today. Again, this post was mostly focused on getting the boilerplate stuff out of the way so that the next post can be dedicated to integrating OpenAI’s API into our project. With that in mind, I encourage you to take a moment to think about some AI app ideas that you might want to build. There will be a lot of flexibility for you to put your own spin on things. I’m excited to see what you come up with, and if you would like to explore the code in more detail, I’ll post it on my GitHub account.
In the ever-evolving world of Java development, developers constantly look for tools and libraries to simplify the code-writing process. One such tool is Project Lombok, often simply referred to as Lombok. This Java library offers code generation features that promise to simplify developers' lives. However, as with any powerful tool, there are pitfalls to be aware of. In this article, we will delve deep into the world of code design with a focus on Lombok. We'll explore why Lombok's seemingly convenient annotations, such as Builder and Log, might not be as flawless as they seem. We'll also highlight the importance of encapsulation and discuss how Lombok's Data and NotNull annotations can lead to unexpected challenges. Whether you're a seasoned developer or just starting your coding journey, this article will equip you with valuable insights to enhance your engineering skills.

The Good Points of Lombok

Before we dive into the potential pitfalls, it's essential to acknowledge the positive aspects of Lombok. Lombok offers several annotations that can significantly simplify code writing:

Log and Builder Annotations

Lombok's Log annotation allows developers to quickly generate logging code, reducing the need for boilerplate. The Builder annotation streamlines the creation of complex objects by generating builder methods that improve code readability.

The Encapsulation Challenge

However, it's not all smooth sailing when it comes to Lombok. One of the most significant challenges posed by Lombok relates to the concept of encapsulation. Encapsulation is a fundamental principle of object-oriented programming, emphasizing the bundling of data (attributes) and methods (functions) that operate on that data into a single unit, known as a class. It helps in maintaining data integrity and protecting data from unauthorized access.

The Data Annotation

Lombok's Data annotation, while seemingly convenient, can lead to anemic models, a term used to describe objects that primarily store data with little behavior. This annotation generates getter and setter methods for all fields in a class, effectively breaking encapsulation by exposing the internal state to external manipulation. Consider a scenario where you have a User class with sensitive information, such as a password field. Applying the Data annotation would automatically generate getter and setter methods for the password field, potentially allowing unauthorized access to sensitive data. This can lead to security vulnerabilities and data integrity issues.

The NotNull Annotation

Lombok's NotNull annotation poses a different kind of challenge. A better option is the explicit API that Java 8 introduced: Objects.requireNonNull. This method lets developers check for null values explicitly and throw a NullPointerException if a null value is encountered, providing a clear and concise way to handle null checks and ensuring that essential fields are never left null. Here's an example of how Objects.requireNonNull can be used:

Java
public void setUser(User user) {
    this.user = Objects.requireNonNull(user, "User must not be null");
}

By using Objects.requireNonNull, developers can enforce null checks more robustly, even without relying on Lombok's NotNull annotation.
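To keep the User example concrete, here is a rough sketch (the class and its fields are illustrative, not taken from any particular codebase) of what a more encapsulated alternative to the Data annotation can look like, combining controlled accessors with Objects.requireNonNull:

Java
import java.util.Objects;

public class User {

    private final String username;
    private String passwordHash;   // deliberately no getter or setter

    public User(String username, String passwordHash) {
        this.username = Objects.requireNonNull(username, "username must not be null");
        this.passwordHash = Objects.requireNonNull(passwordHash, "password hash must not be null");
    }

    public String getUsername() {
        return username;
    }

    // Behavior instead of a raw setter: the object stays in charge of its state.
    public boolean matchesPassword(String candidateHash) {
        return passwordHash.equals(candidateHash);
    }

    public void changePassword(String newPasswordHash) {
        this.passwordHash = Objects.requireNonNull(newPasswordHash, "password hash must not be null");
    }
}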
Enhancing Code Templates and IDE Support

It's also important to note that even without using Lombok, development teams can enhance code templates in their Integrated Development Environments (IDEs). For example, IntelliJ IDEA, a popular Java IDE, offers native support for generating builder patterns. Developers can create custom code templates or use IDE-specific features to generate code that matches their preferred coding standards. By utilizing IDE features and custom templates, teams can achieve many of Lombok's benefits, such as reduced boilerplate code and improved code readability, while maintaining full control over the generated code.

Challenges with Enforcing Best Practices

In an ideal world, developers could use tools like ArchUnit to enforce coding best practices and prevent the use of unsafe annotations (a sketch of such a rule, and why it falls short, appears at the end of this article). However, as our experience shows, this can be easier said than done. Avoiding specific Lombok annotations through automated tools may face challenges or limitations. This places a greater responsibility on code reviews and developer discipline to catch and correct potential issues.

The Trade-Offs of Using Lombok

Like any tool, Lombok brings trade-offs from a code design perspective. It offers convenience and reduces boilerplate code, but it can also introduce risks to data encapsulation and require additional vigilance during code reviews. The decision to use Lombok in your projects should be made deliberately, taking into account the specific needs of your application and the development team's familiarity with Lombok's features and potential pitfalls.

In conclusion, Lombok is a powerful tool that can significantly improve code readability and reduce boilerplate code in Java development. However, it is essential to approach its use cautiously, especially regarding data encapsulation. Understanding the potential pitfalls, such as the Data and NotNull annotations, is crucial for maintaining code integrity and security. As with any tool in the developer's toolbox, Lombok should be used judiciously, carefully weighing its benefits and drawbacks. A well-informed approach to Lombok can help you leverage its advantages while mitigating the risks, ultimately leading to more maintainable and secure Java code. So, before you embrace Lombok in your Java projects, remember to unravel its code design pitfalls and make informed decisions to enhance your engineering skills and ensure the integrity of your codebase.
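To make the enforcement point above concrete, here is a sketch of the kind of ArchUnit rule a team might try in order to ban the Data annotation from domain packages (the package names are illustrative). The catch: Lombok annotations are declared with SOURCE retention, so they never reach the compiled classes that ArchUnit inspects, and a rule like this silently passes; that is one concrete reason such enforcement is easier said than done.

Java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class LombokUsageRulesTest {

    @Test
    void domainClassesShouldNotUseLombokData() {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.domain");

        ArchRule rule = noClasses()
                .that().resideInAPackage("..domain..")
                .should().beAnnotatedWith("lombok.Data");

        // Caveat: @Data is a SOURCE-retention annotation, so it is gone by the
        // time ArchUnit reads the bytecode; this check cannot actually fail.
        rule.check(classes);
    }
}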
The rise of microservices architecture has changed the way developers build and deploy applications. Spring Cloud, a part of the Spring ecosystem, aims to simplify the complexities of developing and managing microservices. In this comprehensive guide, we will explore Spring Cloud and its features and demonstrate its capabilities by building a simple microservices application. What Is Spring Cloud? Spring Cloud is a set of tools and libraries that provide solutions to common patterns and challenges in distributed systems, such as configuration management, service discovery, circuit breakers, and distributed tracing. It builds upon Spring Boot and makes it easy to create scalable, fault-tolerant microservices. Key Features of Spring Cloud Configuration management: Spring Cloud Config provides centralized configuration management for distributed applications. Service discovery: Spring Cloud Netflix Eureka enables service registration and discovery for better load balancing and fault tolerance. Circuit breaker: Spring Cloud Netflix Hystrix helps prevent cascading failures by isolating points of access between services. Distributed tracing: Spring Cloud Sleuth and Zipkin enable tracing requests across multiple services for better observability and debugging. Building a Simple Microservices Application With Spring Cloud In this example, we will create a simple microservices application consisting of two services: a user-service and an order-service. We will also use Spring Cloud Config and Eureka for centralized configuration and service discovery. Prerequisites Ensure that you have the following installed on your machine: Java 8 or later Maven or Gradle An IDE of your choice Dependencies XML <!-- maven --> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-config-server</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-config</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> OR Groovy //Gradle implementation 'org.springframework.cloud:spring-cloud-config-server' implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client' implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-server' implementation 'org.springframework.cloud:spring-cloud-starter-config' implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client' implementation 'org.springframework.boot:spring-boot-starter-web' Step 1: Setting up Spring Cloud Config Server Create a new Spring Boot project using Spring Initializr (https://start.spring.io/) and add the Config Server and Eureka Discovery dependencies. Name the project config-server. 
Add the following properties to your application.yml file: YAML server: port: 8888 spring: application: name: config-server cloud: config: server: git: uri: https://github.com/your-username/config-repo.git # Replace with your Git repository URL eureka: client: serviceUrl: defaultZone: http://localhost:8761/eureka/ Enable the Config Server and Eureka Client by adding the following annotations to your main class: Java import org.springframework.cloud.config.server.EnableConfigServer; import org.springframework.cloud.netflix.eureka.EnableEurekaClient; @EnableConfigServer @EnableEurekaClient @SpringBootApplication public class ConfigServerApplication { public static void main(String[] args) { SpringApplication.run(ConfigServerApplication.class, args); } } Step 2: Setting up Spring Cloud Eureka Server Create a new Spring Boot project using Spring Initializr and add the Eureka Server dependency. Name the project eureka-server. Add the following properties to your application.yml file: YAML server: port: 8761 spring: application: name: eureka-server eureka: client: registerWithEureka: false fetchRegistry: false Enable the Eureka Server by adding the following annotation to your main class: Java import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer; @EnableEurekaServer @SpringBootApplication public class EurekaServerApplication { public static void main(String[] args) { SpringApplication.run(EurekaServerApplication.class, args); } } Step 3: Creating the User Service Create a new Spring Boot project using Spring Initializr and add the Config Client, Eureka Discovery, and Web dependencies. Name the project user-service. Add the following properties to your bootstrap.yml file: YAML spring: application: name: user-service cloud: config: uri: http://localhost:8888 eureka: client: serviceUrl: defaultZone: http://localhost:8761/eureka/ Create a simple REST controller for the User Service: Java import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.PathVariable; import org.springframework.web.bind.annotation.RestController; @RestController public class UserController { @GetMapping("/users/{id}") public String getUser(@PathVariable("id") String id) { return "User with ID: " + id; } } Step 4: Creating the Order Service Create a new Spring Boot project using Spring Initializr and add the Config Client, Eureka Discovery, and Web dependencies. Name the project order-service. Add the following properties to your bootstrap.yml file: YAML spring: application: name: order-service cloud: config: uri: http://localhost:8888 eureka: client: serviceUrl: defaultZone: http://localhost:8761/eureka/ Create a simple REST controller for the Order Service: Java import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.PathVariable; import org.springframework.web.bind.annotation.RestController; @RestController public class OrderController { @GetMapping("/orders/{id}") public String getOrder(@PathVariable("id") String id) { return "Order with ID: " + id; } } Step 5: Running the Application Start the config-server, eureka-server, user-service, and order-service applications in the following order. Once all services are running, you can access the Eureka dashboard at http://localhost:8761 and see the registered services. You can now access the User Service at http://localhost:<user-service-port>/users/1 and the Order Service at http://localhost:<order-service-port>/orders/1. 
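The guide stops at calling each service directly by port, but the Eureka registry also enables client-side load balancing between services. The sketch below is an optional extension, not one of the steps above: it assumes a load-balancer implementation (such as spring-cloud-starter-loadbalancer) is on the order-service classpath and adds a hypothetical endpoint that calls the user-service by its logical name.

Java
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@Configuration
class LoadBalancedClientConfig {

    // Resolves logical service names registered in Eureka (e.g. "user-service")
    // instead of fixed host:port pairs. Requires a load-balancer implementation
    // such as spring-cloud-starter-loadbalancer on the classpath.
    @Bean
    @LoadBalanced
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

@RestController
class OrderUserController {

    private final RestTemplate restTemplate;

    OrderUserController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // Hypothetical endpoint: fetches the user behind an order by looking up
    // user-service in the Eureka registry at request time.
    @GetMapping("/orders/{id}/user")
    public String getUserForOrder(@PathVariable("id") String id) {
        return restTemplate.getForObject("http://user-service/users/" + id, String.class);
    }
}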
Conclusion In this comprehensive guide, we explored Spring Cloud and its features and demonstrated its capabilities by building a simple microservices application. By leveraging the power of Spring Cloud, you can simplify the development and management of your microservices, making them more resilient, scalable, and easier to maintain. Embrace the world of microservices with Spring Cloud and elevate your applications to new heights.
When working with objects in Java, there are times when you need to create a copy of an object. However, not all copies are the same. In fact, there are two main ways to make copies: deep copy and shallow copy. Let’s explore these concepts and see how they work with some easy examples. Deep Copy: What Is It? Imagine you have a collection of shapes, each with its own set of properties. A deep copy of an object means creating a completely new copy of the original object, along with all the nested objects it contains. In other words, it’s like making a photocopy of each shape, including all the details. Shallow Copy: What's the Difference? On the other hand, a shallow copy is like making a copy of a picture and its frame. You get a new frame, but the picture itself remains the same. Similarly, a shallow copy of an object creates a new object, but it still shares the same nested objects with the original. Changes made to nested objects in the copied object will also affect the original object, and vice versa. Let’s Put It Into Practice: Shapes Example Imagine you have a class called Circle, which has a nested object of class Point representing its center. We'll see how deep copy and shallow copy work with these objects. Java public class Circle { public Point center; public int radius; public Circle(Point center, int radius) { this.center = center; this.radius = radius; } } public class Point { public int x, y; public Point(int x, int y) { this.x = x; this.y = y; } } Creating a Shallow Copy For a shallow copy, we just copy the references to the nested objects: Java public Circle shallowCopyCircle(Circle original) { return new Circle(original.center, original.radius); } Creating Deep Copy For a deep copy of a Circle, we need to create new instances of both the Point and the Circle objects: Java public Circle deepCopyCircle(Circle original) { Point copiedPoint = new Point(original.center.x, original.center.y); return new Circle(copiedPoint, original.radius); } Creating Simple CopyUtil Class Here is the util class which has object copy codes: Java public class CopyUtil { public Circle deepCopyCircle(Circle original) { Point copiedPoint = new Point(original.center.x, original.center.y); return new Circle(copiedPoint, original.radius); } public Circle shallowCopyCircle(Circle original) { return new Circle(original.center, original.radius); } } Unit Tests Let's write some simple tests to check our deep copy and shallow copy methods. Java public class ShallowAndDeepCopyUnitTest { @Test public void givenCircle_whenDeepCopy_thenDifferentObjects() { CopyUtil util=new CopyUtil(); Point center = new Point(3, 5); Circle original; original = new Circle(center, 10); Circle copied = util.deepCopyCircle(original); assertNotSame(original, copied); Assert.assertNotSame(original.center, copied.center); } @Test public void givenCircle_whenShallowCopy_thenSameCenter() { CopyUtil util=new CopyUtil(); Point center = new Point(7, 9); Circle original = new Circle(center, 15); Circle copied = util.shallowCopyCircle(original); assertNotSame(original, copied); assertSame(original.center, copied.center); } } Conclusion Creating deep copies and shallow copies of objects in Java is like making copies of pictures and their frames. Deep copies duplicate everything, while shallow copies create new frames but share the same pictures. By understanding these concepts and applying them in your Java programs, you can ensure that your objects are copied in the way that best suits your needs. 
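To make the difference concrete before wrapping up, here is a small demo (illustrative only) that uses the classes above and shows how the shared Point leaks changes back into the original, while the deep copy does not:

Java
public class CopyDemo {
    public static void main(String[] args) {
        CopyUtil util = new CopyUtil();
        Circle original = new Circle(new Point(3, 5), 10);

        Circle shallow = util.shallowCopyCircle(original);
        Circle deep = util.deepCopyCircle(original);

        // The shallow copy shares the original Point instance...
        shallow.center.x = 100;
        System.out.println(original.center.x); // prints 100: the original moved too

        // ...while the deep copy owns its own Point.
        deep.center.y = 200;
        System.out.println(original.center.y); // still prints 5
    }
}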
So remember, whether it's circles, squares, or any other objects, knowing how to copy them can make your Java programming smoother and more efficient.
Building components and reusing them across different packages led me to conclude that these projects need a single, consistent structure. The build tooling should be shared as well, including the testing environment, lint rules, and efficient resource allocation for component libraries. I was looking for tools that could bring me efficient and effective ways to build robust, powerful combinations. As a result, a formidable trio emerged. In this article, we will create several packages with all those tools.

Tools

Before we start, let's examine what each of these tools does.

Lerna: Manages JavaScript projects with multiple packages; it optimizes the workflow around managing multipackage repositories with Git and NPM.
Vite: Build tool providing rapid hot module replacement, out-of-the-box ES Module support, and extensive feature and plugin support for React.
Storybook: An open-source tool for developing and organizing UI components in isolation, which also serves as a platform for visual testing and creating interactive documentation.

Lerna Initial Setup

The first step is to set up the Lerna project. Create a folder named lerna_vite_monorepo and, inside that folder, run npx lerna init in the terminal — this creates the essentials of the Lerna project. It generates two files — lerna.json and package.json — and an empty packages folder.

lerna.json — This file enables Lerna to streamline your monorepo configuration, providing directives on how to link dependencies, locate packages, implement versioning strategies, and execute additional tasks.

Vite Initial Setup

Once the installation is complete, a packages folder will be available. Our next step involves creating several additional projects inside the packages folder:

vite-common
vite-header
vite-body
vite-footer

To create those projects, we have to run npm init vite with the project name. Choose React as the framework and TypeScript as the variant. Those projects will use the same lint rules, build process, and React version. This process will generate a bunch of files and folders in each package:

├── .eslintrc.cjs
├── .gitignore
├── index.html
├── package.json
├── public
│   └── vite.svg
├── src
│   ├── App.css
│   ├── App.tsx
│   ├── assets
│   │   └── react.svg
│   ├── index.css
│   ├── main.tsx
│   └── vite-env.d.ts
├── tsconfig.json
├── tsconfig.node.json
└── vite.config.ts

Storybook Initial Setup

Time to set up Storybook for each of our packages. Go to one of the package folders and run npx storybook@latest init to install Storybook. When asked about eslint-plugin-storybook, answer Y to install it. The dependency installation will then start. This generates a .storybook folder with configs and a stories folder in src. Let's remove the stories folder because we will build our own components. Now run npx sb init --builder @storybook/builder-vite — it lets you build your stories with Vite for fast startup and HMR. Repeat the same setup in each package. Once the installation is complete, you can run yarn storybook inside a package folder to start Storybook.

Initial Configurations

The idea is to reuse common settings for all of our packages. Let's remove some files that we don't need in each repository.
Ultimately, each folder you have should contain the following set of folders and files: ├── package.json ├── src │ └── vite-env.d.ts ├── tsconfig.json └── vite.config.ts Now, let’s take all devDependencies and cut them from package.json in one of our package folders and put them all to devDependenices in the root package.json. Run in root npx storybook@latest init and fix in main.js property: stories: [ "../packages/*/src/**/*..mdx", "../packages/*/src/**/*.stories.@(js|jsx|ts|tsx)" ], And remove from the root in package.json two scripts: "storybook": "storybook dev -p 6006", "build-storybook": "storybook build" Add components folder with index.tsx file to each package folder: ├── package.json ├── src │ ├── components │ │ └── index.tsx │ ├── index.tsx │ └── vite-env.d.ts ├── tsconfig.json └── vite.config.ts We can establish common configurations that apply to all packages. This includes settings for Vite, Storybook, Jest, Babel, and Prettier, which can be universally configured. The root folder has to have the following files: ├── .eslintrc.cjs ├── .gitignore ├── .nvmrc ├── .prettierignore ├── .prettierrc.json ├── .storybook │ ├── main.ts │ ├── preview-head.html │ └── preview.ts ├── README.md ├── babel.config.json ├── jest.config.ts ├── lerna.json ├── package.json ├── packages │ ├── vite-body │ ├── vite-common │ ├── vite-footer │ └── vite-header ├── test.setup.ts ├── tsconfig.json ├── tsconfig.node.json └── vite.config.ts We won’t be considering the settings of Babel, Jest, and Prettier in this instance. Lerna Configuration First, let’s examine the Lerna configuration file that helps manage our monorepo project with multiple packages. JSON { "$schema": "node_modules/lerna/schemas/lerna-schema.json", "useWorkspaces": true, "packages": ["packages/*"], "version": "independent" } First of all, "$schema" provides structure and validation for the Lerna configuration. When "useWorkspaces" is true, Lerna will use yarn workspaces for better linkage and management of dependencies across packages. If false, Lerna manages interpackage dependencies in monorepo. "packages" defines where Lerna can find the packages in the project. "version" when set to "independent", Lerna allows each package within the monorepo to have its own version number, providing flexibility in releasing updates for individual packages. Common Vite Configuration Now, let’s examine the necessary elements within the vite.config.ts file. TypeScript import path from "path"; import { defineConfig } from "vite"; import pluginReact from "@vitejs/plugin-react"; const isExternal = (id: string) => !id.startsWith(".") && !path.isAbsolute(id); export const getBaseConfig = ({ plugins = [], lib }) => defineConfig({ plugins: [pluginReact(), ...plugins], build: { lib, rollupOptions: { external: isExternal, output: { globals: { react: "React", }, }, }, }, }); This file will export the common configs for Vite with extra plugins and libraries which we will reuse in each package. defineConfig serves as a utility function in Vite’s configuration file. While it doesn’t directly execute any logic or alter the passed configuration object, its primary role is to enhance type inference and facilitate autocompletion in specific code editors. rollupOptions allows you to specify custom Rollup options. Rollup is the module bundler that Vite uses under the hood for its build process. By providing options directly to Rollup, developers can have more fine-grained control over the build process. 
The external option within rollupOptions is used to specify which modules should be treated as external dependencies. In general, usage of the external option can help reduce the size of your bundle by excluding dependencies already present in the environment where your code will be run. The output option with globals: { react: "React" } in Rollup's configuration means that in your generated bundle, any import statements for react will be replaced with the global variable React. Essentially, it's assuming that React is already present in the user's environment and should be accessed as a global variable rather than included in the bundle. JSON { "compilerOptions": { "composite": true, "skipLibCheck": true, "module": "ESNext", "moduleResolution": "node", "allowSyntheticDefaultImports": true }, "include": ["vite.config.ts"] } The tsconfig.node.json file is used to specifically control how TypeScript transpiles with vite.config.ts file, ensuring it's compatible with Node.js. Vite, which serves and builds frontend assets, runs in a Node.js environment. This separation is needed because the Vite configuration file may require different TypeScript settings than your frontend code, which is intended to run in a browser. JSON { "compilerOptions": { // ... "types": ["vite/client", "jest", "@testing-library/jest-dom"], // ... }, "references": [{ "path": "./tsconfig.node.json" }] } By including "types": ["vite/client"] in tsconfig.json, is necessary because Vite provides some additional properties on the import.meta object that is not part of the standard JavaScript or TypeScript libraries, such as import.meta.env and import.meta.glob. Common Storybook Configuration The .storybook directory defines Storybook's configuration, add-ons, and decorators. It's essential for customizing and configuring how Storybook behaves. ├── main.ts └── preview.ts For the general configs, here are two files. Let’s check them all. main.ts is the main configuration file for Storybook and allows you to control the behavior of Storybook. As you can see, we’re just exporting common configs, which we’re gonna reuse in each package. TypeScript import type { StorybookConfig } from "@storybook/react-vite"; const config: StorybookConfig = { addons: [ { name: "@storybook/preset-scss", options: { cssLoaderOptions: { importLoaders: 1, modules: { mode: "local", auto: true, localIdentName: "[name]__[local]___[hash:base64:5]", exportGlobals: true, }, }, }, }, { name: "@storybook/addon-styling", options: { postCss: { implementation: require("postcss"), }, }, }, "@storybook/addon-links", "@storybook/addon-essentials", "@storybook/addon-interactions", "storybook-addon-mock", ], framework: { name: "@storybook/react-vite", options: {}, }, docs: { autodocs: "tag", }, }; export default config; File preview.ts allows us to wrap stories with decorators, which we can use to provide context or set styles across our stories globally. We can also use this file to configure global parameters. Also, it will export that general configuration for package usage. TypeScript import type { Preview } from "@storybook/react"; const preview: Preview = { parameters: { actions: { argTypesRegex: "^on[A-Z].*" }, options: { storySort: (a, b) => { return a.title === b.title ? 
0 : a.id.localeCompare(b.id, { numeric: true }); }, }, layout: "fullscreen", controls: { matchers: { color: /(background|color)$/i, date: /Date$/, }, }, }, }; export default preview; Root package.json In a Lerna monorepo project, the package.json serves a similar role as in any other JavaScript or TypeScript project. However, some aspects are unique to monorepos. JSON { "name": "root", "private": true, "workspaces": [ "packages/*" ], "scripts": { "start:vite-common": "lerna run --scope vite-common storybook --stream", "build:vite-common": "lerna run --scope vite-common build --stream", "test:vite-common": "lerna run --scope vite-common test --stream", "start:vite-body": "lerna run --scope vite-body storybook --stream", "build": "lerna run build --stream", "test": "NODE_ENV=test jest --coverage" }, "dependencies": { "react": "^18.2.0", "react-dom": "^18.2.0" }, "devDependencies": { "@babel/core": "^7.22.1", "@babel/preset-env": "^7.22.2", "@babel/preset-react": "^7.22.3", "@babel/preset-typescript": "^7.21.5", "@storybook/addon-actions": "^7.0.18", "@storybook/addon-essentials": "^7.0.18", "@storybook/addon-interactions": "^7.0.18", "@storybook/addon-links": "^7.0.18", "@storybook/addon-styling": "^1.0.8", "@storybook/blocks": "^7.0.18", "@storybook/builder-vite": "^7.0.18", "@storybook/preset-scss": "^1.0.3", "@storybook/react": "^7.0.18", "@storybook/react-vite": "^7.0.18", "@storybook/testing-library": "^0.1.0", "@testing-library/jest-dom": "^5.16.5", "@testing-library/react": "^14.0.0", "@types/jest": "^29.5.1", "@types/react": "^18.0.28", "@types/react-dom": "^18.0.11", "@typescript-eslint/eslint-plugin": "^5.57.1", "@typescript-eslint/parser": "^5.57.1", "@vitejs/plugin-react": "^4.0.0", "babel-jest": "^29.5.0", "babel-loader": "^8.3.0", "eslint": "^8.41.0", "eslint-plugin-react-hooks": "^4.6.0", "eslint-plugin-react-refresh": "^0.3.4", "eslint-plugin-storybook": "^0.6.12", "jest": "^29.5.0", "jest-environment-jsdom": "^29.5.0", "lerna": "^6.5.1", "path": "^0.12.7", "prettier": "^2.8.8", "prop-types": "^15.8.1", "sass": "^1.62.1", "storybook": "^7.0.18", "storybook-addon-mock": "^4.0.0", "ts-jest": "^29.1.0", "ts-node": "^10.9.1", "typescript": "^5.0.2", "vite": "^4.3.2" } } Scripts will manage the monorepo. Running tests across all packages or building all packages. This package.json also include development dependencies that are shared across multiple packages in the monorepo, such as testing libraries or build tools. The private field is usually set to true in this package.json to prevent it from being accidentally published. Scripts, of course, can be extended with other packages for testing, building, and so on, like: "start:vite-footer": "lerna run --scope vite-footer storybook --stream", Package Level Configuration As far as we exported all configs from the root for reusing those configs, let’s apply them at our package level. Vite configuration will use root vite configuration where we just import getBaseConfig function and provide there lib. This configuration is used to build our component package as a standalone library. It specifies our package's entry point, library name, and output file name. With this configuration, Vite will generate a compiled file that exposes our component package under the specified library name, allowing it to be used in other projects or distributed separately. 
TypeScript import * as path from "path"; import { getBaseConfig } from "../../vite.config"; export default getBaseConfig({ lib: { entry: path.resolve(__dirname, "src/index.ts"), name: "ViteFooter", fileName: "vite-footer", }, }); For the .storybook, we use the same approach. We just import the commonConfigs. TypeScript import commonConfigs from "../../../.storybook/main"; const config = { ...commonConfigs, stories: ["../src/**/*..mdx", "../src/**/*.stories.@(js|jsx|ts|tsx)"], }; export default config; And preview it as well. TypeScript import preview from "../../../.storybook/preview"; export default preview; For the last one from the .storybook folder, we need to add preview-head.html. HTML <script> window.global = window; </script> And the best part is that we have a pretty clean package.json without dependencies, we all use them for all packages from the root. JSON { "name": "vite-footer", "private": true, "version": "1.0.0", "type": "module", "scripts": { "dev": "vite", "build": "tsc && vite build", "lint": "eslint src --ext ts,tsx --report-unused-disable-directives --max-warnings 0", "preview": "vite preview", "storybook": "storybook dev -p 6006", "build-storybook": "storybook build" }, "dependencies": { "vite-common": "^2.0.0" } } The only difference is vite-common, which is the dependency we’re using in the Footer component. Components By organizing our component packages in this manner, we can easily manage and publish each package independently while sharing common dependencies and infrastructure provided by our monorepo. Let’s look at the folder src of the Footer component. The other components will be identical, but the configuration only makes the difference. ├── assets │ └── flow.svg ├── components │ ├── Footer │ │ ├── Footer.stories.tsx │ │ └── index.tsx │ └── index.ts ├── index.ts └── vite-env.d.ts The vite-env.d.ts file in the src folder helps TypeScript understand and provide accurate type checking for Vite-related code in our project. It ensures that TypeScript can recognize and validate Vite-specific properties, functions, and features. Embedded Javascript /// <reference types="vite/client" /> In the src folder, index.ts has: TypeScript export * from "./components"; And the component that consumes vite-common components look like this: TypeScript-JSX import { Button, Links } from "vite-common"; export interface FooterProps { links: { label: string; href: string; }[]; } export const Footer = ({ links }: FooterProps) => { return ( <footer> <Links links={links} /> <Button label="Click Button" backgroundColor="green" /> </footer> ); }; export default Footer; Here’s what stories looks like for the component: TypeScript-JSX import { StoryFn, Meta } from "@storybook/react"; import { Footer } from "."; export default { title: "Example/Footer", component: Footer, parameters: { layout: "fullscreen", }, } as Meta<typeof Footer>; const mockedLinks = [ { label: "Home", href: "/" }, { label: "About", href: "/about" }, { label: "Contact", href: "/contact" }, ]; const Template: StoryFn<typeof Footer> = (args) => <Footer {...args} />; export const FooterWithLinks = Template.bind({}); FooterWithLinks.args = { links: mockedLinks, }; export const FooterWithOneLink = Template.bind({}); FooterWithOneLink.args = { links: [mockedLinks[0]], }; We use four packages in this example, but the approach is the same. Once you create all the packages, you have to be able to build, run, and test them independently. 
Once everything is in place, run yarn install at the root level and then yarn build to build all packages (or yarn build:vite-common to build a single one); after that you can start using that package in your other packages.

Publish

To publish all the packages in our monorepo, we can use the npx lerna publish command. This command guides us through versioning and publishing each package based on the changes made.

lerna notice cli v6.6.2
lerna info versioning independent
lerna info Looking for changed packages since vite-body@1.0.0
? Select a new version for vite-body (currently 1.0.0) Major (2.0.0)
? Select a new version for vite-common (currently 2.0.0) Patch (2.0.1)
? Select a new version for vite-footer (currently 1.0.0) Minor (1.1.0)
? Select a new version for vite-header (currently 1.0.0)
  Patch (1.0.1)
❯ Minor (1.1.0)
  Major (2.0.0)
  Prepatch (1.0.1-alpha.0)
  Preminor (1.1.0-alpha.0)
  Premajor (2.0.0-alpha.0)
  Custom Prerelease
  Custom Version

Lerna asks us for a new version for each package and then publishes them.

lerna info execute Skipping releases
lerna info git Pushing tags...
lerna info publish Publishing packages to npm...
lerna success All packages have already been published.

Conclusion

I was looking for a solid architecture for organizing our front-end components at the company I work for. With this setup, each project gets a powerful, efficient development environment with shared rules while the packages remain independent. This combination gives me streamlined dependency management, isolated component testing, and simplified publishing.

References

Repository
Vite with Storybook
In the video below, we'll cover the newly released Hibernate 6.3. With its annotation processing capabilities, it offers alternative approaches to frameworks like Spring Data JPA, and we'll explore those with a bit of live coding. What’s in the Video? We'll start off with a tiny story about how this webinar came about. I read the new "Introduction to Hibernate 6" written by Gavin King, which includes many opinions on how to do data persistence with Java in general. I thought it might make sense to not only have a theoretical discussion about this, but to take an existing Spring Boot/Spring Data JPA project and replace its bits and pieces one by one with the new approach offered by Hibernate 6.3. Hence, we'll set the baseline for this video by quickly going over my Google Photos Clone project, which lets you create thumbnails for directories on your hard drive, for example, and display them on a (not yet nice-looking) HTML page. There are just a couple of data queries the application currently executes, mainly to select all photos, check if they already exist in a database, and save them to a database. So we'll go about replacing those one by one. Let's start with the select query. We'll use the newly introduced @HQL annotation to replace the Spring Data JPA select query. Along the way, we'll learn that we don't need to encode the query into the method name itself and that we also have the flexibility to use helper objects like Order or Page to customize our queries. Once we've restarted our application to confirm it still works, we'll take care of the "exists" query. It needs a bit of custom-written HQL, but along the way, we'll learn about compile-time validation of our queries - the Hibernate annotation processor does that out of the box. Once the exists query is working, we'll take care of the last query, saving new images to the database. That gives us room to discuss architectural questions, like "Do we need another abstraction on top of our annotated queries?" and "How do we manage and structure queries in bigger projects?" In the last quarter of the livestream, we'll discuss other popular questions that arise with Hibernate on a day-to-day basis: Should you use sessions or stateless sessions? Should you use fetch profiles extensively? Is it OK to use plain SQL with Hibernate? Is it OK to use Hibernate-specific annotations as opposed to JPA ones? And many more. All in all, the livestream should be of huge value for anyone using Hibernate in their projects (which the majority of Java projects likely do). Enjoy! Video
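For readers who want a feel for the @HQL style discussed above before watching, here is a rough, hedged sketch of what such an annotated query interface can look like. The Photo entity, its fields, and the interface name are illustrative assumptions and are not taken from the project shown in the video; the shapes of the annotation and helper types follow how the feature is documented for Hibernate 6.3, so treat the details as version-dependent rather than definitive.
Java
import java.util.List;

import org.hibernate.annotations.processing.HQL;
import org.hibernate.query.Page;

import jakarta.persistence.Entity;
import jakarta.persistence.Id;

// Illustrative entity (an assumption, not the entity from the video's project).
@Entity
class Photo {
    @Id
    Long id;
    String name;
    String hash;
}

// Abstract query methods: the query lives in the annotation rather than in the
// method name, and the Hibernate annotation processor validates the HQL at compile time.
interface PhotoQueries {

    // Replaces a Spring Data JPA derived "find all" query; the Page parameter
    // limits the result window without changing the query text.
    @HQL("from Photo p order by p.name")
    List<Photo> allPhotos(Page page);

    // A hand-written count query that can back an "exists" check in the calling code.
    @HQL("select count(p) from Photo p where p.hash = :hash")
    long countByHash(String hash);
}
With the Hibernate metamodel generator (hibernate-jpamodelgen) on the compile path, the processor generates a companion class (conventionally PhotoQueries_) whose static methods take an EntityManager or Session as their first argument, so a call looks roughly like PhotoQueries_.allPhotos(session, Page.first(20)); mistakes in the HQL then fail the build instead of surfacing at runtime.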
Debugging complex code in Java is an essential skill for every developer. As projects grow in size and complexity, the likelihood of encountering bugs and issues increases. Debugging, however, is not just about fixing problems; it's also a valuable learning experience that enhances your coding skills. In this article, we'll explore effective strategies and techniques for debugging complex Java code, along with practical examples to illustrate each point. 1. Use a Debugger One of the most fundamental tools for debugging in Java is the debugger. Modern integrated development environments (IDEs) like IntelliJ IDEA, Eclipse, and NetBeans provide powerful debugging features that allow you to set breakpoints, inspect variables, and step through your code line by line. Java public class DebugExample { public static void main(String[] args) { int num1 = 10; int num2 = 0; int result = num1 / num2; // Set a breakpoint here: num2 is 0, so this division throws an ArithmeticException System.out.println("Result: " + result); } } Here's a more detailed explanation of how to effectively use a debugger: Setting breakpoints: Breakpoints are markers you set in your code where the debugger will pause execution. This allows you to examine the state of your program at that specific point in time. To set a breakpoint, you typically click on the left margin of the code editor next to the line you want to pause at. Inspecting variables: While your code is paused at a breakpoint, you can inspect the values of variables. This is incredibly helpful for understanding how your program's state changes during execution. You can hover over a variable to see its current value or add it to a watch list for constant monitoring. Stepping through code: Once paused at a breakpoint, you can step through the code one line at a time, seeing how each line affects the state of your program. This helps you catch any unintended behavior or logical errors. Call stack and call hierarchy: A debugger provides information about the call stack, showing the order in which methods were called and their relationships. This is especially useful in identifying the sequence of method calls that led to a specific error. Conditional breakpoints: You can set breakpoints that trigger only when certain conditions are met. For instance, if you're trying to identify why a loop is behaving unexpectedly, you can set a breakpoint to pause only when the loop variable reaches a specific value. Changing variable values: Some debuggers allow you to modify variable values during debugging. This can help you test different scenarios without having to restart your program. Exception breakpoints: You can set breakpoints that pause your program whenever an exception is thrown. This is particularly useful when dealing with runtime exceptions. 2. Print Statements Good old-fashioned print statements can be surprisingly effective. By strategically placing print statements in your code, you can trace the flow of execution and the values of variables at different stages. Java public class PrintDebugging { public static void main(String[] args) { int x = 5; int y = 3; int sum = x + y; System.out.println("x: " + x); System.out.println("y: " + y); System.out.println("Sum: " + sum); } } 3. Isolate the Problem If you encounter an issue, try to create a minimal example that reproduces the problem. This can help you isolate the troublesome part of your code and make it easier to find the root cause.
Isolating the problem through a minimal example is a powerful technique in debugging complex Java code. Let's explore this concept with a practical example: Imagine you're working on a Java program that calculates the factorial of a number using recursion. However, you've encountered a StackOverflowError when calculating the factorial of a larger number. To isolate the problem, you can create a minimal example that reproduces the issue. Here's how you could go about it: Java public class FactorialCalculator { public static void main(String[] args) { int number = 100000; // A recursion this deep overflows the default thread stack and causes a StackOverflowError long factorial = calculateFactorial(number); System.out.println("Factorial of " + number + " is: " + factorial); } public static long calculateFactorial(int n) { if (n == 0) { return 1; } else { return n * calculateFactorial(n - 1); } } } In this example, the calculateFactorial method calculates the factorial of a number using recursion. However, it's prone to a StackOverflowError for larger numbers due to the excessive number of recursive calls. To isolate the problem, you can create a minimal example by simplifying the code: Java public class MinimalExample { public static void main(String[] args) { int number = 5; // A smaller number to debug long factorial = calculateFactorial(number); System.out.println("Factorial of " + number + " is: " + factorial); } public static long calculateFactorial(int n) { if (n == 0) { return 1; } else { return n * calculateFactorial(n - 1); } } } By reducing the value of number, you're creating a scenario where the recursive calls are manageable and won't lead to a StackOverflowError. This minimal example helps you focus on the core problem and isolate it from other complexities present in your original code. Once you've identified the issue (in this case, the excessive recursion causing a StackOverflowError), you can apply your debugging techniques to understand why the original code behaves unexpectedly for larger numbers. In real-world scenarios, isolating the problem through minimal examples helps you narrow down the root cause, saving you time and effort in identifying complex issues within your Java code. 4. Rubber Duck Debugging Explaining your code to someone (or something) else, like a rubber duck, can help you spot mistakes. This technique forces you to break down your code step by step and often reveals hidden bugs. The rubber duck debugging technique is a simple yet effective method for debugging your Java code. Let's delve into it with a practical example: Imagine you're working on a Java program that calculates the Fibonacci sequence using recursion. However, you've noticed that calculating the sequence takes far too long for larger inputs. To use the rubber duck debugging technique, you'll explain your code step by step as if you were explaining it to someone else or, in this case, a rubber duck. Here's how you could apply it: Java public class FibonacciCalculator { public static void main(String[] args) { int n = 5; // Input for calculating the Fibonacci sequence long result = calculateFibonacci(n); System.out.println("Fibonacci number at position " + n + " is: " + result); } public static long calculateFibonacci(int n) { if (n <= 1) { return n; } else { return calculateFibonacci(n - 1) + calculateFibonacci(n - 2); } } } Now, let's imagine you're explaining this code to a rubber duck: "Hey, rubber duck! I'm trying to calculate the Fibonacci sequence for a given position n.
First, I'm checking if n is less than or equal to 1. If it is, I return n because the Fibonacci sequence starts with 0 and 1. If not, I recursively calculate the sum of the Fibonacci numbers at positions n - 1 and n - 2. This should give me the Fibonacci number at position n. Hmmm, I think I just realized that for larger values of n, this recursion recomputes the same values over and over, so it might be extremely inefficient!" By explaining your code to the rubber duck, you've broken down the logic step by step. This process often helps in revealing hidden bugs or logical errors. In this case, you might have identified that the recursive approach for calculating the Fibonacci sequence becomes exponentially slower for larger values of n, leading to unacceptable performance. The rubber duck debugging technique encourages you to articulate your thought process and identify issues that might not be immediately apparent. It's a valuable method for tackling complex problems in your Java code and improving its quality. Version control: Version control systems like Git allow you to track changes and collaborate with others. Using descriptive commit messages can help you remember why you made a certain change, making it easier to backtrack and identify the source of a bug. Unit testing: Writing unit tests for your code helps catch bugs early in the development process. When debugging, you can use these tests to pinpoint the exact part of your code that's causing the issue (see the test sketch at the end of this article). Review documentation and stack traces: Error messages and stack traces can be overwhelming, but they contain valuable information about what went wrong. Understanding the stack trace can guide you to the specific line of code that triggered the error. Binary search debugging: If you have a large codebase, narrowing down the source of a bug can be challenging. Using a binary search approach, you can comment out sections of code until you identify the problematic portion. Conclusion Debugging complex Java code is a skill that requires patience, practice, and a systematic approach. By leveraging tools like debuggers, print statements, and version control systems, you can effectively identify and fix bugs. Remember that debugging is not just about solving immediate issues; it's also a way to deepen your understanding of your codebase and become a more proficient Java developer. So, the next time you encounter a complex bug, approach it as an opportunity to refine your coding skills and create more robust and reliable Java applications.
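To make the unit-testing tip above concrete, here is a minimal JUnit 5 sketch for the factorial example from earlier. The test class name and input values are illustrative, and it assumes the test sits in the same package as FactorialCalculator: one test locks in known-good results, and one documents the StackOverflowError we are chasing so the failure is reproducible before any fix is attempted.
Java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class FactorialCalculatorTest {

    @Test
    void smallInputsProduceKnownValues() {
        // Known-good cases act as a safety net while the recursion is refactored.
        assertEquals(1L, FactorialCalculator.calculateFactorial(0));
        assertEquals(120L, FactorialCalculator.calculateFactorial(5));
    }

    @Test
    void deepRecursionOverflowsTheStack() {
        // Documents the current failure mode: with default JVM stack sizes, a recursion
        // this deep overflows the stack. Replace this test with a success case once the
        // implementation is rewritten iteratively.
        assertThrows(StackOverflowError.class,
                () -> FactorialCalculator.calculateFactorial(1_000_000));
    }
}
Writing the failing case down this way also makes the debugger and binary-search techniques above easier to apply, since the bug can then be reproduced on demand.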
The Microsoft OpenXML files we use on a day-to-day basis are conveniently designed to be accessed and manipulated programmatically. We can jump into any OpenXML file structure in a variety of capacities (usually via specialized programming libraries or APIs) to easily manipulate objects within a document and/or retrieve important contents from its various sections. The flexibility afforded by Office document formats is extended even further by macros. Using the Visual Basic for Applications (VBA) programming language - a specially designed version of the Visual Basic (VB) language - we can add a myriad of dynamic elements to our Office documents and allow our files to seamlessly connect with other applications in our system. We can automate away our Excel spreadsheets’ most repetitive calculations, and we can ask toolbars within our DOCX files to update external applications based on information entered in form fields. We can create macros in our PowerPoint PPTX presentations that insert slides from one file into another, and we can even automate PPTX file conversions to formats like PDF, PNG, JPG, etc. to save us valuable time in our workflow. The list of macro-enabled benefits is virtually endless. Of course, macros are far from purely beneficial blocks of code. The fact that VBA can execute arbitrary code means VBA macros will always pose a considerable security threat to our system. Since their conception in the 90s, macros have served as an effective vessel for cybercriminals to deliver viruses and malware to machines all around the globe. Attackers can use VBA to trigger arbitrary commands and run programs on our devices, and they can even use it to delete valuable data from our hard drives. Some of the earliest examples of rapidly proliferating computer virus infections leveraged VBA macros to compromise victims’ devices, hijack their email contact lists, and target those new contacts with the original malware. In more recent years, macro-enabled files have even proved an efficient method for delivering ransomware to sensitive file storage locations with weak security policies. The threat of macros is significant enough that Office now disables them by default when macro-enabled files are downloaded from the internet. Downloading a file containing a macro will automatically bring up a “Security Risk” notification, meaning we’ll have to enable macros manually via document settings and accept the associated malware risks on our own terms. The trouble is, of course, that macros aren’t always downloaded directly from sketchy internet sources. It’s common to encounter malicious macros as innocuous file attachments in our email inboxes (oftentimes sent from compromised devices we once trusted), and we might also find them scattered within our web applications’ various cloud storage instances when we allow direct client-side uploads through web portals. More and more, macro threats are delivered latently, bypassing weakly configured security policies and lying dormant until their contents are unwittingly executed. As a result, it’s extremely important that we implement our own methods for identifying and mitigating macro threats. There are a variety of solutions we can utilize to accomplish this, including a few simple low-code APIs provided further down the page. Demonstration We can easily determine if Excel XLSX, Word DOCX, and PowerPoint PPTX files contain macros using the ready-to-run Java code examples provided below.
These three separate API solutions make it straightforward to incorporate macro checks into our relevant web application workflows, returning simple Boolean responses when macros are identified. To be clear, these solutions offer an efficient method for definitively identifying the existence of macros, but they do not take any additional action on the document in question, nor do they determine if the macros identified are malicious. As such, they are best utilized as a precursor to downstream actions that either store or delete documents outright. Before we structure our API calls with code examples, we’ll first need to install our SDK. We can begin installing with Maven by first adding a reference to the repository in pom.xml: XML <repositories> <repository> <id>jitpack.io</id> <url>https://jitpack.io</url> </repository> </repositories> And we can finish that process by adding a reference to the dependency in pom.xml: <dependencies> <dependency> <groupId>com.github.Cloudmersive</groupId> <artifactId>Cloudmersive.APIClient.Java</artifactId> <version>v4.25</version> </dependency> </dependencies> We can now copy the code examples below for any (or all) of our three API solutions. We can use the following code to check if Excel XLSX files contain macros: Java // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.EditDocumentApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); EditDocumentApi apiInstance = new EditDocumentApi(); File inputFile = new File("/path/to/inputfile"); // File | Input file to perform the operation on. try { GetMacrosResponse result = apiInstance.editDocumentXlsxGetMacroInformation(inputFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling EditDocumentApi#editDocumentXlsxGetMacroInformation"); e.printStackTrace(); } We can use the following to check Word DOCX/DOCM files: Java // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.EditDocumentApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); EditDocumentApi apiInstance = new EditDocumentApi(); File inputFile = new File("/path/to/inputfile"); // File | Input file to perform the operation on. 
try { GetMacrosResponse result = apiInstance.editDocumentDocxGetMacroInformation(inputFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling EditDocumentApi#editDocumentDocxGetMacroInformation"); e.printStackTrace(); } And, finally, we can use the following code to check PowerPoint PPTX/PPTM files: Java // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.EditDocumentApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); EditDocumentApi apiInstance = new EditDocumentApi(); File inputFile = new File("/path/to/inputfile"); // File | Input file to perform the operation on. try { GetMacrosResponse result = apiInstance.editDocumentPptxGetMacroInformation(inputFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling EditDocumentApi#editDocumentPptxGetMacroInformation"); e.printStackTrace(); } Each of these solutions will return a “ContainsVbaMacros” Boolean response containing a “true” or “false” value. We can authorize our requests for any of these solutions using a free Cloudmersive API key.
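Since these checks are positioned as a precursor to downstream handling, here is a brief, hedged sketch of what that follow-up step might look like, building on the DOCX example above. It assumes the GetMacrosResponse object exposes the ContainsVbaMacros field through a conventional isContainsVbaMacros() getter - verify the exact accessor name against the SDK - and quarantineDocument/storeDocument are hypothetical placeholders for whatever your application actually does with uploaded files.
Java
// Hypothetical downstream handling after the macro check; quarantineDocument and
// storeDocument stand in for your own application logic, and the getter name is an assumption.
try {
    GetMacrosResponse result = apiInstance.editDocumentDocxGetMacroInformation(inputFile);
    if (Boolean.TRUE.equals(result.isContainsVbaMacros())) {
        // Macros detected: treat the file as untrusted until it is reviewed or sanitized.
        quarantineDocument(inputFile);
    } else {
        // No macros found: continue with normal storage or processing.
        storeDocument(inputFile);
    }
} catch (ApiException e) {
    System.err.println("Exception when calling EditDocumentApi#editDocumentDocxGetMacroInformation");
    e.printStackTrace();
}
The same branching works for the XLSX and PPTX calls, since all three endpoints shown above return the same kind of "ContainsVbaMacros" Boolean response.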