Also known as the build stage of the SDLC, coding focuses on the writing and programming of a system. The Zones in this category take a hands-on approach to equip developers with the knowledge about frameworks, tools, and languages that they can tailor to their own build needs.
A framework is a collection of code that is leveraged in the development process by providing ready-made components. Through the use of frameworks, architectural patterns and structures are created, which help speed up the development process. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring framework, Drupal, Angular, Eclipse, and more.
Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular because of its versatility and is often the default choice for front-end work unless a project calls for something more specialized. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.
Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.
Development and programming tools are used to build frameworks, and they can be used for creating, debugging, and maintaining programs — and much more. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools and can help ensure engineers are writing clean code.
Development at Scale
As organizations’ needs and requirements evolve, it’s critical for development to meet these demands at scale. The various realms in which mobile, web, and low-code applications are built continue to fluctuate. This Trend Report will further explore these development trends and how they relate to scalability within organizations, highlighting application challenges, code, and more.
Debugging Terraform providers is crucial for ensuring the reliability and functionality of infrastructure deployments. Terraform providers, typically written in Go, can contain complex logic that requires careful debugging when issues arise. One powerful tool for debugging Terraform providers is Delve, a debugger for the Go programming language. Delve allows developers to set breakpoints, inspect variables, and step through code, making it easier to identify and resolve bugs. In this blog, we will explore how to use Delve effectively for debugging Terraform providers.

Setup Delve for Debugging Terraform Provider

```shell
# For Linux
sudo apt-get install -y delve

# For macOS
brew install delve
```

Refer here for more details on the installation.

Debug Terraform Provider Using VS Code

Follow the steps below to debug the provider:

1. Download the provider code. We will use the IBM Cloud Terraform Provider for this debugging example.
2. Update the provider's main.go code as shown below to support debugging:

```go
package main

import (
	"flag"
	"log"

	"github.com/IBM-Cloud/terraform-provider-ibm/ibm/provider"
	"github.com/IBM-Cloud/terraform-provider-ibm/version"
	"github.com/hashicorp/terraform-plugin-sdk/v2/plugin"
)

func main() {
	var debug bool

	flag.BoolVar(&debug, "debug", true, "Set to true to enable debugging mode using delve")
	flag.Parse()

	opts := &plugin.ServeOpts{
		Debug:        debug,
		ProviderAddr: "registry.terraform.io/IBM-Cloud/ibm",
		ProviderFunc: provider.Provider,
	}
	log.Println("IBM Cloud Provider version", version.Version)
	plugin.Serve(opts)
}
```

3. Launch VS Code in debug mode. Refer here if you are new to debugging in VS Code.
4. Create launch.json using the configuration below:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug Terraform Provider IBM with Delve",
      "type": "go",
      "request": "launch",
      "mode": "debug",
      "program": "${workspaceFolder}",
      "internalConsoleOptions": "openOnSessionStart",
      "args": ["-debug"]
    }
  ]
}
```

5. In VS Code, click "Start Debugging".
Starting the debug session launches the provider in debugging mode. To attach the Terraform CLI to the debugger, the console prints the environment variable TF_REATTACH_PROVIDERS. Copy this from the console and set it as an environment variable in the terminal running the Terraform code. Then, in the VS Code window where the provider code is in debug mode, open the Go code and set breakpoints. To learn more about breakpoints in VS Code, refer here. Execute terraform plan followed by terraform apply, and notice that the provider breakpoint is triggered during the terraform apply execution. This helps you debug the Terraform execution and understand the behavior of the provider code for the particular inputs supplied in Terraform.

Debug Terraform Provider Using DLV Command Line

Follow the steps below to debug the provider using the command line. To learn more about the dlv command-line commands, refer here.

1. Follow steps 1 and 2 from the VS Code section above.
2. In a terminal, navigate to the provider Go code and compile it with debug symbols: go build -gcflags="all=-N -l"
3. To execute the precompiled Terraform provider binary and begin a debug session, run dlv exec --accept-multiclient --continue --headless <path to the binary> -- -debug from the directory containing the build output. For the IBM Cloud Terraform provider, use: dlv exec --accept-multiclient --continue --headless ./terraform-provider-ibm -- -debug
4. In a second terminal, where the Terraform code will be run, set TF_REATTACH_PROVIDERS as an environment variable. Note the "API server" details in the output of the dlv exec command.
5. In a third terminal, connect to the DLV server and start issuing DLV client commands.
6. Set breakpoints using the break command.
7. Now we are set to debug the Terraform provider when Terraform scripts are executed. Issue continue in the DLV client terminal to run until a breakpoint is hit.
8. Execute terraform plan and terraform apply, and notice the client stopping at the breakpoint.
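As a sketch, the DLV client session described above might look like the following transcript. The port comes from the "API server listening at" line printed by dlv exec; the file name, line number, variable name, and condition are illustrative assumptions, not values from the provider:

```shell
# Third terminal: connect to the headless DLV server
dlv connect 127.0.0.1:4001

# Set a breakpoint (the file and line are illustrative)
(dlv) break provider.go:120

# Optionally make breakpoint 1 conditional on a variable's value
(dlv) condition 1 name == "my-instance"

# Run until terraform apply triggers the breakpoint, then inspect
(dlv) continue
(dlv) print name
(dlv) stack
```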
Use DLV CLI commands to step in, step out, or continue execution. This provides a way to debug the Terraform provider from the command line.

Remote Debugging and CI/CD Pipeline Debugging

The following are extensions of debugging with the dlv command-line tool.

Remote Debugging

Remote debugging allows you to debug a Terraform provider running on a remote machine or environment.

Debugging in CI/CD Pipelines

Debugging in CI/CD pipelines involves setting up your pipeline to run Delve and attach it to your Terraform provider. This can be challenging due to the ephemeral nature of CI/CD environments. One approach is to use conditional logic in your pipeline configuration so that debugging is enabled only when a specific environment variable is set. For example, you can use the following script in your pipeline configuration to start Delve and attach it to your Terraform provider:

```yaml
- name: Debug Terraform Provider
  if: env(DEBUG) == 'true'
  run: |
    dlv debug --headless --listen=:2345 --api-version=2 &
    sleep 5  # Wait for Delve to start
    export TF_LOG=TRACE
    terraform init
    terraform apply
```

Best Practices for Effective Debugging With Delve

Here are some best practices for effective debugging with Delve, along with tips for improving efficiency and minimizing downtime:

- Use version control: Always work with version-controlled code. This allows you to easily revert changes if debugging introduces new issues.
- Start small: Begin debugging with a minimal, reproducible test case. This helps isolate the problem and reduces the complexity of debugging.
- Understand the code: Familiarize yourself with the codebase before debugging. Knowing the code structure and expected behavior can speed up the debugging process.
- Use logging: Add logging statements to your code to track the flow of execution and the values of important variables. This can provide valuable insights during debugging.
- Use breakpoints wisely: Set breakpoints strategically at critical points in your code. Too many breakpoints can slow down the debugging process.
- Inspect variables: Use the print (p) command in Delve to inspect the values of variables. This can help you understand the state of your program at different points in time.
- Use conditional breakpoints: Break execution only when certain conditions are met. This can help you focus on specific scenarios or issues.
- Use stack traces: Use the stack command in Delve to view the call stack. This can help you understand the sequence of function calls leading to an issue.
- Use goroutine debugging: If your code uses goroutines, use Delve's goroutine debugging features to track down concurrency issues.
- Automate debugging: If you're debugging in a CI/CD pipeline, automate the process as much as possible to minimize downtime and speed up resolution.

By following these best practices, you can improve the efficiency of your debugging process and minimize downtime caused by issues in your code.

Conclusion

Mastering the art of debugging Terraform providers with Delve is a valuable skill that can significantly improve the reliability and performance of your infrastructure deployments. By setting up Delve for debugging, exploring advanced techniques like remote debugging and CI/CD pipeline debugging, and following best practices, you can effectively troubleshoot issues in your Terraform provider code. Debugging is not just about fixing bugs; it is also about understanding your code better and improving its overall quality. Dive deep into Terraform provider debugging with Delve, and empower yourself to build a more robust and efficient infrastructure with Terraform.
The slow Java startup problem is notorious in the Java community, but its meaning can confuse the observer. The slow startup problem relates to the process of starting a set of interconnected applications on complex Java frameworks. For example, such a process may include starting several Spring Boot applications, each of which takes around 10 seconds. Starting such a production system as a whole will take a minute, even though starting a single JVM in this set takes about 50 milliseconds. So the widespread habit of attributing this slowness to Java itself is not exactly accurate: technically, this is not a Java problem but a framework problem. The effect of slow startup and warmup is caused by the complex frameworks we use and by dynamic features in the runtime. Java is unique in its functionality, and thanks to its coding and ecosystem power, it is very popular among enterprises. The same complexity, though, can make it clumsy in the cloud.

Java application startup and warmup technically include several consecutive processes: JVM startup, application startup, and JVM warmup. During these processes, the JVM needs extra time before the application reaches peak performance. The warmup phase is the time the JVM takes to interpret, compile, and optimize the code; for large, complex applications, it lasts substantially longer than the startup itself, taking up to several minutes. Every time you start your program, these processes begin from scratch. In practice, this means we spend time and significant CPU and memory resources preparing the application to run rather than on its actual operation. Consequently, slow startup and warmup lead to increased cloud costs and resource overutilization.

Search for the Solutions

There are several ways to deal with the issue.
Java Optimization

Migrating to a newer long-term support (LTS) version of Java can improve application performance slightly, bringing minor changes. Such optimization is a quick method, available immediately.

GraalVM

Using native images can be beneficial. However, using GraalVM may bring problems such as compilation difficulties, strange errors, and different flags, making it unsuitable for some projects.

Project Leyden

Its primary goal is to "improve the startup time, time to peak performance, and footprint of Java programs." The project is not yet complete, so we cannot yet evaluate its effect or the possible difficulties of adoption. Still, among all of these, Project Leyden is designed specifically to solve the slow startup problem, and we follow the news with great expectations.

Coordinated Restore at Checkpoint

Coordinated Restore at Checkpoint (CRaC) is an OpenJDK project entirely focused on Java startup enhancement. The project's primary aim is to develop a new standard, mechanism-agnostic API to notify Java programs about checkpoint and restore events. CRaC offers a Checkpoint/Restore API that allows creating an image of a running application at an arbitrary point in time (a "checkpoint") and then starting the image from the checkpoint file (a snapshot). This restores the state of the application from the point when the checkpoint was made. Using the CRaC feature with a Java runtime enables you to pause the application and restart it from the moment it was paused; in addition, it gives the option to distribute numerous replicas of this file, which is especially relevant for deployment on multiple instances.

Amazon Lambda

Amazon Lambda is a standalone product based on CRaC technology. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging.
Lambdas can be very convenient for your development goals, but they are also more expensive and less effective compared to self-managed JVMs.

The Effectiveness and Your Runtime Sustainability

The slow startup problem impacts the overall performance of your runtime, and to make your application sustainable and performant, you need to use one of these solutions. Among those stated above, CRaC is the most popular solution in the Java community today. CRaC, just like Project Leyden, targets the slow startup issue. We cannot fully evaluate and test Leyden's results yet. The project introduced Class Data Sharing plus ahead-of-time compilation "on steroids," which looks very promising for delivering faster startup on the JVM. However, there are no ready-made Leyden solutions that can be deployed with Java yet.

The advantage of the CRaC feature is that it is already available and spreading quickly. Today, you can get an OpenJDK runtime, and even containers, that support the CRaC API. These solutions are ready to install and allow immediate, significant improvements. OpenJDK runtimes and small containers with CRaC support will be especially relevant for Spring developers: Spring announced CRaC feature support in 2023, and their recommended runtime is Liberica JDK, which delivers a runtime version with CRaC.

It should be noted that Native Image technology is also highly relevant for Spring users seeking faster application startup. Native images can run with a smaller memory footprint and do not require a Java Virtual Machine for deployment. However, GraalVM requires individual research given the specifics of your Java application, and it will not always be suitable for resolving the issue. In the case of Amazon Lambdas, you should consider the costs of this product and its effectiveness, as it might ultimately deliver an extra financial burden; its main advantage is convenience.
The key CRaC advantage today is its availability and ease of use, combined with an instant effect on application performance and cloud costs. CRaC solves the problem immediately. An OpenJDK runtime with support for Coordinated Restore at Checkpoint gives your application the ability to quickly create and restore images of a running application, reducing startup and warmup times from minutes to milliseconds. Enhancing your application with CRaC-enabled Linux-based containers strengthens its performance even further. CRaC lowers the load on the processor and memory at application startup, reducing cloud costs and improving application performance and sustainability.
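The checkpoint/restore cycle described above can be sketched with a CRaC-enabled runtime such as Liberica JDK with CRaC; the checkpoint directory and jar name below are illustrative assumptions, not values from the article:

```shell
# Start the application on a CRaC-enabled JDK, choosing where to store the image
java -XX:CRaCCheckpointTo=/tmp/checkpoint -jar myapp.jar

# In another terminal, once the application has started and warmed up,
# trigger the checkpoint (snapshot) via jcmd
jcmd myapp.jar JDK.checkpoint

# Later (or on another instance), restore from the snapshot in milliseconds
java -XX:CRaCRestoreFrom=/tmp/checkpoint
```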
During my early days as a data engineer (which dates back to 2016), I was responsible for scraping data from different websites. Web scraping is all about using automated tools to get vast amounts of data from websites, usually from their HTML. I remember building around the application, digging into the HTML code, and trying to figure out the best solutions for scraping all the data. One of my main challenges was dealing with frequent changes to the websites: for example, the Amazon pages I was scraping changed every one to two weeks. One thought that occurred to me when I started reading about Large Language Models (LLMs) was, "Can I avoid all those pitfalls I faced by using LLMs to structure data from webpages?" Let's see if I can.

Web Scraping Tools and Techniques

At the time, the main tools I was using were Requests, BeautifulSoup, and Selenium. Each tool serves a different purpose and targets different types of web environments.

- Requests is a Python library that makes it easy to perform HTTP requests. It issues GET and POST operations against the URLs provided in the requests and is frequently used to fetch HTML content that can then be parsed by BeautifulSoup.
- BeautifulSoup is a Python library for parsing HTML and XML documents. It constructs a parse tree from the page source that lets you easily access the various elements on the page. Usually, it is paired with libraries like Requests or Selenium that provide the HTML source code.
- Selenium is primarily employed for websites that involve a lot of JavaScript. Unlike BeautifulSoup, Selenium does not simply analyze HTML code: it interacts with websites by emulating user actions such as clicks and scrolling. This facilitates data extraction from websites that create content dynamically.

These tools were indispensable when I was trying to extract data from websites.
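To make the BeautifulSoup workflow described above concrete, here is a minimal sketch; the HTML fragment, tag names, and CSS classes are invented for illustration, standing in for a page fetched with Requests:

```python
from bs4 import BeautifulSoup

# A tiny, invented HTML fragment standing in for a fetched page
html = """
<html><body>
  <div class="product"><h2>Widget</h2><span class="price">9.99</span></div>
  <div class="product"><h2>Gadget</h2><span class="price">19.99</span></div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Walk the parse tree and pull out (name, price) pairs
products = [
    (div.h2.get_text(), float(div.find("span", class_="price").get_text()))
    for div in soup.find_all("div", class_="product")
]
print(products)  # [('Widget', 9.99), ('Gadget', 19.99)]
```

The fragility mentioned earlier shows up exactly here: if the site renames the product class or moves the price into another tag, this selector-based extraction silently breaks.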
However, they also posed some challenges: code, tags, and structural elements had to be regularly updated to accommodate changes in the website's layout, complicating long-term maintenance.

What Are Large Language Models (LLMs)?

Large Language Models (LLMs) are next-generation computer programs that learn by reading and analyzing vast amounts of text data. They have a remarkable ability to generate human-like text, making them efficient at processing and comprehending human language. This ability shines in situations where the context of the text really matters.

Integrating LLMs Into Web Scraping

The web scraping process can be optimized to a great extent by integrating LLMs into it. We take the HTML code from a webpage and feed it into the LLM, which extracts the objects it describes. This tactic simplifies maintenance: the markup structure can evolve, but the content itself does not usually change. Here's how the architecture of such an integrated system would look:

1. Getting HTML: Use tools like Selenium or Requests to fetch the HTML content of a webpage. Selenium can handle dynamic content loaded with JavaScript, while Requests is suited for static pages.
2. Parsing HTML: Using BeautifulSoup, parse this HTML into text, removing noise (footer, header, etc.).
3. Creating Pydantic models: Define the Pydantic model for the data we are going to scrape. This ensures the data is typed and structured according to predefined schemas.
4. Generating prompts for LLMs: Design a prompt that tells the LLM what information has to be extracted.
5. Processing by the LLM: The model reads the HTML, understands it, and follows the instructions for data processing and structuring.
6. Output of structured data: The LLM provides the output as structured objects defined by the Pydantic model.
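The steps above can be mocked end to end in a small, stdlib-only sketch. Since running a real LLM is out of scope here, the model call is stubbed with a canned response, and a dataclass stands in for the Pydantic schema; the schema fields, the HTML fragment, and the canned JSON are all invented for illustration:

```python
import json
import re
from dataclasses import dataclass


@dataclass
class Activity:
    """Stand-in for the Pydantic schema: the target structure of one record."""
    title: str
    price: float


def clean_html(html: str) -> str:
    """Crude tag stripper standing in for the BeautifulSoup preprocessing step."""
    return re.sub(r"<[^>]+>", " ", html).strip()


def build_prompt(text: str) -> str:
    """Instruct the model to emit JSON objects matching the Activity schema."""
    return (
        "Extract every activity as a JSON list of objects with keys "
        '"title" (string) and "price" (number).\n\n' + text
    )


def mock_llm(prompt: str) -> str:
    # Canned response imitating what a real model call would return.
    return '[{"title": "Boat trip", "price": 21.0}]'


def scrape(html: str) -> list[Activity]:
    """Pipeline: clean HTML -> prompt -> (mocked) LLM -> parsed, typed objects."""
    raw = mock_llm(build_prompt(clean_html(html)))
    return [Activity(**obj) for obj in json.loads(raw)]


result = scrape("<div><h2>Boat trip</h2><span>21.0</span></div>")
print(result[0].title, result[0].price)  # Boat trip 21.0
```

Swapping mock_llm for a real model call (and the dataclass for a Pydantic model with an output parser, as shown next) turns this sketch into the actual integration.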
This workflow helps transform HTML (unstructured data) into structured data using LLMs, solving problems such as non-standard layouts or dynamic modification of the source HTML.

Integration of LangChain With BeautifulSoup and Pydantic

This is the static webpage selected for the example. The idea is to scrape all the activities listed there and present them in a structured way. This method extracts the raw HTML from the static webpage and cleans it before the LLM processes it.

```python
from bs4 import BeautifulSoup
import requests


def extract_html_from_url(url):
    try:
        # Fetch HTML content from the URL using requests
        response = requests.get(url)
        response.raise_for_status()  # Raise an exception for bad responses (4xx and 5xx)

        # Parse HTML content using BeautifulSoup
        soup = BeautifulSoup(response.content, "html.parser")

        # Exclude elements with tag names 'footer' and 'nav'
        excluded_tagNames = ["footer", "nav"]
        for tag_name in excluded_tagNames:
            for unwanted_tag in soup.find_all(tag_name):
                unwanted_tag.extract()

        # Process the soup to maintain hrefs in anchor tags
        for a_tag in soup.find_all("a"):
            href = a_tag.get("href")
            if href:
                a_tag.string = f"{a_tag.get_text()} ({href})"

        # Return text content with preserved hrefs
        return ' '.join(soup.stripped_strings)
    except requests.exceptions.RequestException as e:
        print(f"Error fetching data from {url}: {e}")
        return None
```

The next step is to define the Pydantic objects that we are going to scrape from the webpage. Two objects need to be created:

- Activity: A Pydantic object that represents all the metadata related to an activity, with its attributes and data types specified. Some fields are marked Optional in case they are not available for all activities. Providing a description, examples, and any metadata helps the LLM form a better definition of the attribute.
- ActivityScrapper: The Pydantic wrapper around Activity.
The objective of this wrapper object is to make the LLM understand that it needs to scrape several activities.

```python
from pydantic import BaseModel, Field
from typing import Optional


class Activity(BaseModel):
    title: str = Field(description="The title of the activity.")
    rating: float = Field(description="The average user rating out of 10.")
    reviews_count: int = Field(description="The total number of reviews received.")
    travelers_count: Optional[int] = Field(description="The number of travelers who have participated.")
    cancellation_policy: Optional[str] = Field(description="The cancellation policy for the activity.")
    description: str = Field(description="A detailed description of what the activity entails.")
    duration: str = Field(description="The duration of the activity, usually given in hours or days.")
    language: Optional[str] = Field(description="The primary language in which the activity is conducted.")
    category: str = Field(description="The category of the activity, such as 'Boat Trip', 'City Tours', etc.")
    price: float = Field(description="The price of the activity.")
    currency: str = Field(description="The currency in which the price is denominated, such as USD, EUR, GBP, etc.")


class ActivityScrapper(BaseModel):
    Activities: list[Activity] = Field(description="List of all the activities listed in the text")
```

Finally, we have the configuration of the LLM. We will use the LangChain library, which provides an excellent toolkit to get started. A key component here is the PydanticOutputParser. Essentially, this translates our object into instructions, as illustrated in the prompt, and also parses the output of the LLM to retrieve the corresponding list of objects.
```python
from langchain.prompts import PromptTemplate
from langchain.output_parsers import PydanticOutputParser
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()

llm = ChatOpenAI(temperature=0)
output_parser = PydanticOutputParser(pydantic_object=ActivityScrapper)

prompt_template = """
You are an expert at web scraping and analyzing raw HTML code.
If there is no explicit information, don't make any assumptions.
Extract all objects that match the instructions from the following HTML:

{html_text}

Provide them in a list; also, if there is a next page link, remember to add it to the object.
Please follow carefully the following instructions:

{format_instructions}
"""

prompt = PromptTemplate(
    template=prompt_template,
    input_variables=["html_text"],
    partial_variables={"format_instructions": output_parser.get_format_instructions},
)

chain = prompt | llm | output_parser
```

The final step is to invoke the chain and retrieve the results.

```python
url = "https://www.civitatis.com/es/budapest/"

html_text_parsed = extract_html_from_url(url)
activities = chain.invoke(input={"html_text": html_text_parsed})
activities.Activities
```

Here is what the data looks like. It takes 46 seconds to scrape the entire webpage.

```python
[Activity(title='Paseo en barco al anochecer', rating=8.4, reviews_count=9439, travelers_count=118389, cancellation_policy='Cancelación gratuita', description='En este crucero disfrutaréis de las mejores vistas de Budapest cuando se viste de gala, al anochecer. El barco es panorámico y tiene partes descubiertas.', duration='1 hora', language='Español', category='Paseos en barco', price=21.0, currency='€'),
 Activity(title='Visita guiada por el Parlamento de Budapest', rating=8.8, reviews_count=2647, travelers_count=34872, cancellation_policy='Cancelación gratuita', description='El Parlamento de Budapest es uno de los edificios más bonitos de la capital húngara. Comprobadlo vosotros mismos en este tour en español que incluye la entrada.', duration='2 horas', language='Español', category='Visitas guiadas y free tours', price=27.0, currency='€'),
 ...
]
```

Demo and Full Repository

I have created a quick demo using Streamlit, available here. In the first part, you are introduced to the model. You can add as many rows as you need and specify the name, type, and description of each attribute. This automatically generates a Pydantic model to be used in the web scraping component. The next part allows you to enter a URL and scrape all the data by clicking the button on the webpage. A download button appears when the scraping has finished, allowing you to download the data in JSON format. Feel free to play with it!

Conclusion

LLMs open new possibilities for efficiently extracting data from unstructured sources such as websites, PDFs, etc. Automating web scraping with an LLM not only saves time but also helps ensure the quality of the retrieved data. However, sending raw HTML to the LLM can increase the token cost and make the approach inefficient: since HTML often includes various tags, attributes, and boilerplate content, the cost can quickly rise. Therefore, it is crucial to preprocess and clean the HTML, removing all the unnecessary metadata and unused information. This approach helps use an LLM as a data extractor for the web while maintaining a reasonable cost. The right tool for the right job!
Java adoption has shifted from version 1.8 to at least Java 17. Concurrently, Spring Boot has advanced from version 2.x to 3.2.2. The springdoc project has transitioned from the older library 'springdoc-openapi-ui' to 'springdoc-openapi-starter-webmvc-ui'. These updates mean that readers relying on older articles may find themselves years behind in these technologies. The author has updated this article so that readers use the latest versions and don't struggle with outdated information during migration.

This is part one of a three-part series. You can check out the other articles below:

- OpenAPI 3 Documentation With Spring Boot
- Doing More With Springdoc OpenAPI
- Extending Swagger and Springdoc Open API

In this tutorial, we are going to try out a Spring Boot OpenAPI 3-enabled REST project and explore some of its capabilities. The springdoc-openapi Java library has quickly become very compelling. We are going to refer to Building a RESTful Web Service and springdoc-openapi v2.5.0.

Prerequisites

- Java 17.x
- Maven 3.x

Steps

Start by creating a Maven JAR project.
Below is the pom.xml to use:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>3.2.2</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.example</groupId>
	<artifactId>sample</artifactId>
	<version>0.0.1</version>
	<name>sample</name>
	<description>Demo project for Spring Boot with openapi 3 documentation</description>
	<properties>
		<java.version>17</java.version>
	</properties>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-validation</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springdoc</groupId>
			<artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
			<version>2.5.0</version>
		</dependency>
	</dependencies>
	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>
</project>
```

Note the "springdoc-openapi-starter-webmvc-ui" dependency. Now, let's create a small Java bean class.
```java
package sample;

import org.hibernate.validator.constraints.CreditCardNumber;

import jakarta.validation.constraints.Email;
import jakarta.validation.constraints.Max;
import jakarta.validation.constraints.Min;
import jakarta.validation.constraints.NotBlank;
import jakarta.validation.constraints.NotNull;
import jakarta.validation.constraints.Pattern;
import jakarta.validation.constraints.Size;

public class Person {

    private long id;
    private String firstName;

    @NotNull
    @NotBlank
    private String lastName;

    @Pattern(regexp = ".+@.+\\..+", message = "Please provide a valid email address")
    private String email;

    @Email
    private String email1;

    @Min(18)
    @Max(30)
    private int age;

    @CreditCardNumber
    private String creditCardNumber;

    public String getCreditCardNumber() {
        return creditCardNumber;
    }

    public void setCreditCardNumber(String creditCardNumber) {
        this.creditCardNumber = creditCardNumber;
    }

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getEmail1() {
        return email1;
    }

    public void setEmail1(String email1) {
        this.email1 = email1;
    }

    @Size(min = 2)
    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }
}
```

This is an example of a Java bean. Now, let's create a controller.
```java
package sample;

import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.ExampleObject;
import jakarta.validation.Valid;

@RestController
public class PersonController {

    @RequestMapping(path = "/person", method = RequestMethod.POST)
    @io.swagger.v3.oas.annotations.parameters.RequestBody(required = true,
        content = @Content(examples = {
            @ExampleObject(value = INVALID_REQUEST, name = "invalidRequest", description = "Invalid Request"),
            @ExampleObject(value = VALID_REQUEST, name = "validRequest", description = "Valid Request") }))
    public Person person(@Valid @RequestBody Person person) {
        return person;
    }

    private static final String VALID_REQUEST = """
            {
              "id": 0,
              "firstName": "string",
              "lastName": "string",
              "email": "abc@abc.com",
              "email1": "abc@abc.com",
              "age": 20,
              "creditCardNumber": "4111111111111111"
            }""";

    private static final String INVALID_REQUEST = """
            {
              "id": 0,
              "firstName": "string",
              "lastName": "string",
              "email": "abcabc.com",
              "email1": "abcabc.com",
              "age": 17,
              "creditCardNumber": "411111111111111"
            }""";
}
```

Above is a sample REST controller. Side note: Normally I don't like to clutter already annotation-heavy code with additional annotations, but I do think having ready-made examples like these can be useful. Another reason that pushed me to do this is that the default examples generated by Swagger UI appear to produce some confusing text when @Pattern is used. It appears to be a Swagger UI issue and not a Springdoc issue. Let's make some entries in src\main\resources\application.properties.
Properties files

application-description=@project.description@
application-version=@project.version@
logging.level.org.springframework.boot.autoconfigure=ERROR
# server.error.include-binding-errors is now needed if we
# want to display the errors as shown in this article;
# this can also be avoided in other ways, as we will see
# in later articles
server.error.include-binding-errors=always

The above entries pass Maven build-related information on to the OpenAPI documentation and also include the new server.error.include-binding-errors property. Finally, let's write the Spring Boot application class:

Java

package sample;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.info.Info;
import io.swagger.v3.oas.models.info.License;

@SpringBootApplication
public class SampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(SampleApplication.class, args);
    }

    @Bean
    public OpenAPI customOpenAPI(@Value("${application-description}") String appDescription,
            @Value("${application-version}") String appVersion) {
        return new OpenAPI()
                .info(new Info()
                        .title("sample application API")
                        .version(appVersion)
                        .description(appDescription)
                        .termsOfService("http://swagger.io/terms/")
                        .license(new License().name("Apache 2.0").url("http://springdoc.org")));
    }
}

Also, note how the API version and description are leveraged from application.properties. At this stage, this is what the project looks like in Eclipse. Next, execute mvn clean package from the command prompt or terminal. Then, execute java -jar target\sample-0.0.1.jar. You can also launch the application by running the SampleApplication.java class from your IDE.
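Once the application is up, the endpoint can also be exercised outside the browser. Below is a sketch using the JDK's built-in java.net.http.HttpClient to post the valid sample payload to the /person endpoint; the class name PersonClient is hypothetical, and it assumes the app is running locally on the default port 8080:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PersonClient {

    static final String VALID_REQUEST = """
            {
              "id": 0,
              "firstName": "string",
              "lastName": "string",
              "email": "abc@abc.com",
              "email1": "abc@abc.com",
              "age": 20,
              "creditCardNumber": "4111111111111111"
            }""";

    // Builds the POST request for the /person endpoint of the locally running app
    static HttpRequest buildRequest() {
        return HttpRequest.newBuilder(URI.create("http://localhost:8080/person"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(VALID_REQUEST))
                .build();
    }

    public static void main(String[] args) throws Exception {
        // Requires the sample application to be running on port 8080
        var response = HttpClient.newHttpClient()
                .send(buildRequest(), HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

With the controller above simply echoing the bean back, a successful call returns status 200 and the posted person; posting the invalid payload instead surfaces the binding errors enabled in application.properties.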
Now, let's visit the Swagger UI at http://localhost:8080/swagger-ui.html. Click the green Post button and expand the > symbol to the right of Person under Schemas. Let's expand the last schemas section a bit more. The nice thing is how the contract is automatically detailed by leveraging the JSR-303 annotations on the model. Out of the box, it covers many of the important annotations and documents them. However, I did not see it support @javax.validation.constraints.Email and @org.hibernate.validator.constraints.CreditCardNumber out of the box at this point. The issue is that they are not documented in the generated Swagger specs, but the constraints themselves are functional. We will discuss this further in the subsequent article. For completeness, let's post a request. Press the Try it out button. Let's feed in a valid input by copying the below or by selecting the valid input from the dropdown.

JSON

{
  "id": 0,
  "firstName": "string",
  "lastName": "string",
  "email": "abc@abc.com",
  "email1": "abc@abc.com",
  "age": 20,
  "creditCardNumber": "4111111111111111"
}

Let's feed that valid input into the Request body section. (We can also select "validRequest" from the Examples dropdown as shown below.) Upon pressing the blue Execute button, we see the below. This was only a brief introduction to the capabilities of the dependency:

XML

<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.5.0</version>
</dependency>

Troubleshooting Tips Ensure prerequisites. If using the Eclipse IDE, we might need to do a Maven update on the project after creating all the files. In the Swagger UI, if you are unable to access the “Schema” definitions link, it might be because you need to come out of the “try it out” mode; click one or two Cancel buttons that might be visible. Source code Git Clone URL, Branch: springdoc-openapi-intro-update1.
This article is part of a series called “Mastering Object-Oriented Design Patterns.” The series consists of four articles and aims to provide in-depth guidance on object-oriented design patterns. The articles introduce design patterns, their origins, and the advantages of using them. In addition, the series provides full explanations of the common design patterns. Every article starts with real-life analogies, discusses the pros and cons of each pattern, and provides an example implementation in Java. Once you find the title “Mastering Object-Oriented Design Patterns,” you can explore the whole series and master object-oriented design patterns. Once upon a time, there was a new notion called “design patterns” in software engineering. This concept has revolutionized how developers approach complex software design. Design patterns are verified solutions to frequently encountered problems. But where did this idea originate, and how did it come to contribute so significantly to object-oriented programming? Origin of Design Patterns Design patterns first appeared in architecture, not in software. The architect and design theorist Christopher Alexander introduced the idea in his influential work, “A Pattern Language: Towns, Buildings, Construction.” Alexander sought to develop a pattern language to solve spatial and communal problems in towns and cities. These patterns covered details ranging from window heights to the organization of green zones within neighborhoods. In this way, he laid the groundwork for a design approach focused on reusable solutions to recurring problems. Captivated by Alexander's concept, a group of four software engineers (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides), also known as the Gang of Four (GoF), recognized the potential of applying it to software development.
In 1994, they published “Design Patterns: Elements of Reusable Object-Oriented Software,” which translated the pattern language of architecture into the world of object-oriented programming (OOP). This seminal publication presented twenty-three design patterns targeting typical design issues. It soon became a best-seller and a vital tool in software engineering instruction. Introduction to Design Patterns What Are Design Patterns? Design patterns are not recipes but recommendations and tips for solving typical design problems. They are a pool of bright ideas and experiences from the software development community. These patterns help developers build flexible, low-maintenance, and reusable code. Design patterns provide a common language and methodology for solving design problems, simplifying collaboration among developers and speeding up the development process. Picture this: making software is like assembling a puzzle, except that you keep being handed the same pieces. Design patterns are your map, showing how to fit those pieces together every time. Design patterns are helpful techniques for resolving common coding issues. They can be understood as a set of coding-challenge cookbooks. Rather than giving you ready-made code snippets, they present ways to solve particular problems in your projects. The purpose of design patterns is to reduce coding complexity, help you solve problems faster, and keep your code as flexible as possible for the future. Design Patterns vs. Algorithms Both algorithms and design patterns provide solutions, but they are not the same thing.
An algorithm is like a recipe in the kitchen: a fixed sequence of steps that takes you to a target. A design pattern, by contrast, is like a blueprint. It gives the framework and the key elements of the solution but lets you choose the structural details, which makes it flexible for your project’s demands. Inside a Design Pattern A design pattern typically includes: Intent: What the pattern does and what it solves. Motivation: The reason it exists and the way it can help. Structure of classes: A schematic indicating how its parts communicate. Code example: Commonly made available in popular programming languages to facilitate comprehension. Some will also address when to use the pattern, how to apply it, and its interaction with other patterns, leaving you with a complete toolset for smarter coding. Why Use Design Patterns? Design patterns are a kind of secret toolset in coding. They make solving common problems easier, and here’s why embracing design patterns can be a game-changer: Proven and ready-to-use solutions: Imagine owning a treasure chest of brilliant hacks already worked out by professional coders. That’s what design patterns are—clever, immediately applicable, professional-quality solutions that allow you to solve problems quickly and correctly. Simplifying complexity: Any great software is minimalistic in a sense. Design patterns assist you in splitting large and daunting problems into small and manageable chunks, thus making your code neater and your life simpler. Big picture focus: Design patterns allow you to spend less time on code structure and more time on doing cool stuff. This enables you to concentrate on producing great features rather than struggling with the fundamentals.
Common language: Design patterns give developers a shared vocabulary, so when you say, “Let’s use a Singleton here,” everyone gets it. This leads to more efficient work and less confusion. Reusability and maintainability: Design patterns encourage code reuse via inheritance and interfaces, which keeps classes adaptable and systems easy to maintain. This approach shortens development cycles and keeps systems robust over time. Improved scalability and flexibility: Patterns such as MVC enforce a clearer separation between the different parts of your code, making your system more flexible and able to grow with few adjustments. Boosted readability and understandability: Properly implemented design patterns increase the readability of your code, making it easier for other people to understand and contribute without much explanation. In a nutshell, design patterns are all about making coding more comfortable, efficient, and even entertaining. They let you work on extension rather than invention, improving the software without reinventing the wheel. Navigating the Tricky Side of Design Patterns Design patterns are secret ingredients that make writing code easier and more practical. But they are not ideal. Here are a couple of things to be aware of: Not suitable for every programming language: A design pattern may sometimes be unnecessary in a particular language. For instance, a complex pattern may be redundant if the language has a simple built-in feature that does the job. It is like employing a sophisticated instrument when a simple one is sufficient. Being too rigid with patterns: Although design patterns are derived from best practices, strict adherence to them may cause undesirable results. It’s similar to sticking to a recipe so rigidly that you never adjust it to your taste.
At times, you need to adapt a pattern to suit the particular requirements of your project. Overusing patterns: It is pretty easy to get carried away and believe that every problem can be addressed with a design pattern. Yet not all problems need a pattern. It is akin to using a hammer for all tasks when, at times, a screwdriver is sufficient. Adding unnecessary complexity: Design patterns can also introduce complexity to your code. If not handled with care, they can overcomplicate your project. How To Avoid the Pitfalls Despite these pitfalls, design patterns are still quite helpful. The key is to use them wisely: Choose the appropriate tool for the task: Not all problems need a design pattern. Sometimes, simpler is better. Adapt and customize: Never be afraid to adjust a pattern to make it suit you better. Keep it simple: Do not make your code more complicated by using patterns that are not required. In summary, design patterns are similar to spices in cooking: applied correctly, they can improve your dish (or project). Yet it’s necessary to use them in moderation and not let them overpower the food. Types of Design Patterns Design patterns are beneficial methods applied in software design. They facilitate code organization and management during the development and maintenance of applications. Regard them as clever construction techniques and improvements to your software projects. Let’s quickly check out the three main types: Creational Patterns: Building Blocks Creational patterns are equivalent to picking the right LEGO blocks to begin your model. They focus on simplifying the process of creating objects or groups of objects, letting you build up the software flexibly and efficiently, as if picking out the LEGO pieces that fit your design. Structural Patterns: Putting It All Together Structural patterns are all about how you assemble your LEGO bricks.
They help you arrange the pieces (or objects) into larger structures, with everything neat and well-organized. It is akin to following a LEGO manual to guarantee your spaceship or castle will be sturdy and tidy. Behavioral Patterns: Making It Work Behavioral patterns are about making your LEGO creation do extraordinary things. For instance, think about making the wings of your LEGO spaceship move. In software, these patterns enable various program components to interact and cooperate, ensuring everything functions as intended. Design patterns can be as simple as idioms specific to a single programming language or as complicated as architectural patterns that shape the entire application. They are tools in your toolkit, useful for a single small function or for the structure of the whole system. Comprehending these patterns is like learning the tricks of constructing the most incredible LEGO sets. They make you a software genius; all your coding will seem relaxed and fun! Conclusion Our first module is finally over. It has been a fantastic trip into the principles behind design patterns and how they are leveraged in software engineering. Design patterns are not merely coding shortcuts but crystallized wisdom that provides reusable solutions for typical design issues. They simplify the object-oriented programming process and make it faster, thus creating cleaner code. On the other hand, they are not foolproof: we have pointed out that it is essential to know when and how to use them appropriately. In closing this chapter, we invite you to browse the other parts of the “Mastering Object-Oriented Design Patterns” series. Each part reinforces your comprehension and skill, making you more confident when applying design patterns to your projects.
If you want to develop your architectural skills, speed up your development process, or improve the quality of your code, this series is here to help you. References: “Design Patterns: Elements of Reusable Object-Oriented Software” and “Head First Design Patterns.”
Reactive programming has become increasingly popular in modern software development, especially for building scalable and resilient applications. Kotlin, with its expressive syntax and powerful features, has gained traction among developers for building reactive systems. In this article, we’ll delve into reactive programming using Kotlin Coroutines with Spring Boot, comparing it with WebFlux, a more complex alternative for reactive programming in the Spring ecosystem. Understanding Reactive Programming Reactive programming is a programming paradigm that deals with asynchronous data streams and the propagation of changes. It focuses on processing streams of data and reacting to changes as they occur. Reactive systems are inherently responsive, resilient, and scalable, making them well-suited for building modern applications that need to handle high concurrency and real-time data. Kotlin Coroutines Kotlin Coroutines provide a way to write asynchronous, non-blocking code in a sequential manner, making asynchronous programming easier to understand and maintain. Coroutines allow developers to write asynchronous code in a more imperative style, resembling synchronous code, which can lead to cleaner and more readable code. Kotlin Coroutines vs. WebFlux Spring Boot is a popular framework for building Java- and Kotlin-based applications. It provides a powerful and flexible programming model for developing reactive applications. Spring Boot’s support for reactive programming comes in the form of Spring WebFlux, which is built on top of Project Reactor, a reactive library for the JVM. Both Kotlin Coroutines and WebFlux offer solutions for building reactive applications, but they differ in their programming models and APIs. 1. Programming Model Kotlin Coroutines: Kotlin Coroutines use suspend functions and coroutine builders like launch and async to define asynchronous code.
Coroutines provide a sequential, imperative style of writing asynchronous code, making it easier to understand and reason about. WebFlux: WebFlux uses a reactive programming model based on the Reactive Streams specification. It provides a set of APIs for working with asynchronous data streams, including Flux and Mono, which represent streams of multiple and single values, respectively. 2. Error Handling Kotlin Coroutines: Error handling in Kotlin Coroutines is done using standard try-catch blocks, making it similar to handling exceptions in synchronous code. WebFlux: WebFlux provides built-in support for error handling through operators like onErrorResume and onErrorReturn, allowing developers to handle errors in a reactive manner. 3. Integration With Spring Boot Kotlin Coroutines: Kotlin Coroutines can be seamlessly integrated with Spring Boot applications using the spring-boot-starter-web dependency and the kotlinx-coroutines-spring library. WebFlux: Spring Boot provides built-in support for WebFlux, allowing developers to easily create reactive RESTful APIs and integrate with other Spring components. Show Me the Code The Power of Reactive Approach Over Imperative Approach The provided code snippets illustrate the implementation of a straightforward scenario using both imperative and reactive paradigms. This scenario involves two stages, each taking 1 second to complete. In the imperative approach, the service responds in 2 seconds as it executes both stages sequentially. Conversely, in the reactive approach, the service responds in 1 second by executing each stage in parallel. However, even in this simple scenario, the reactive solution exhibits some complexity, which could escalate significantly in real-world business scenarios. 
Here’s the Kotlin code for the base service:

Kotlin

@Service
class HelloService {

    fun getGreetWord(): Mono<String> = Mono.fromCallable {
        Thread.sleep(1000)
        "Hello"
    }

    fun formatName(name: String): Mono<String> = Mono.fromCallable {
        Thread.sleep(1000)
        name.replaceFirstChar { it.uppercase() }
    }
}

Imperative Solution

Kotlin

fun greet(name: String): String {
    val greet = helloService.getGreetWord().block()
    val formattedName = helloService.formatName(name).block()
    return "$greet $formattedName"
}

Reactive Solution

Kotlin

fun greet(name: String): Mono<String> {
    val greet = helloService.getGreetWord().subscribeOn(Schedulers.boundedElastic())
    val formattedName = helloService.formatName(name).subscribeOn(Schedulers.boundedElastic())
    return greet
        .zipWith(formattedName)
        .map { "${it.t1} ${it.t2}" }
}

In the imperative solution, the greet function awaits the completion of the getGreetWord and formatName methods sequentially before returning the concatenated result. In the reactive solution, on the other hand, the greet function uses reactive programming constructs to execute the tasks concurrently, utilizing the zipWith operator to combine the results once both stages are complete. Simplifying Reactivity With Kotlin Coroutines To simplify the complexity inherent in reactive programming, Kotlin’s coroutines provide an elegant solution. Below is a Kotlin coroutine example demonstrating the same scenario discussed earlier:

Kotlin

@Service
class CoroutineHelloService {

    suspend fun getGreetWord(): String {
        delay(1000)
        return "Hello"
    }

    suspend fun formatName(name: String): String {
        delay(1000)
        return name.replaceFirstChar { it.uppercase() }
    }

    fun greet(name: String) = runBlocking {
        val greet = async { getGreetWord() }
        val formattedName = async { formatName(name) }
        "${greet.await()} ${formattedName.await()}"
    }
}

In the provided code snippet, we leverage Kotlin coroutines to simplify reactive programming complexities.
The CoroutineHelloService class defines the suspend functions getGreetWord and formatName, which simulate asynchronous operations using delay. The greet function demonstrates the coroutine-based solution: within a runBlocking coroutine builder, it launches both suspend functions concurrently with async, awaits their results, and finally combines them into a single greeting string. Conclusion In this exploration, we compared reactive programming in Kotlin Coroutines with Spring Boot to WebFlux. Kotlin Coroutines offer a simpler, more sequential approach, while WebFlux, based on Reactive Streams, provides a comprehensive set of APIs with a steeper learning curve. Code examples demonstrated how reactive solutions can outperform imperative ones by leveraging parallel execution. Kotlin Coroutines emerged as a concise alternative, seamlessly integrated with Spring Boot, that simplifies reactive programming complexities. In summary, Kotlin Coroutines excel in simplicity and integration, making them a compelling choice for developers aiming to streamline reactive programming in Spring Boot applications.
Integrating assets from diverse platforms and ecosystems presents a significant challenge in enterprise application development, where projects often span multiple technologies and languages. Seamlessly incorporating web-based assets such as JavaScript, CSS, and other resources is a common yet complex requirement in Java web applications. The diversity of development ecosystems — each with its tools, package managers, and distribution methods — complicates including these assets in a unified development workflow. This fragmentation can lead to inefficiencies, increased development time, and potential for errors as developers navigate the intricacies of integrating disparate systems. Recognizing this challenge, the open-source project Npm2Mvn offers a solution to streamline the inclusion of NPM packages into Java workspaces, thereby bridging the gap between the JavaScript and Java ecosystems. Understanding NPM and Maven Before diving into the intricacies of Npm2Mvn, it's essential to understand the platforms it connects: NPM and Maven. NPM (Node Package Manager) is the default package manager for Node.js, primarily used for managing dependencies of various JavaScript projects. It hosts thousands of packages developers provide worldwide, facilitating the sharing and distribution of code. NPM simplifies adding, updating, and managing libraries and tools in your projects, making it an indispensable tool for JavaScript developers. Maven, on the other hand, is a powerful build automation tool used primarily for Java projects. It goes beyond simple build tasks by managing project dependencies, documentation, SCM (Source Code Management), and releases. Maven utilizes a Project Object Model (POM) file to manage a project's build configuration, dependencies, and other elements, ensuring developers can easily manage and build their Java applications. 
The Genesis of Npm2Mvn Npm2Mvn emerges as a solution to a familiar challenge developers face: incorporating the vast array of JavaScript libraries and frameworks available on NPM into Java projects. While Java and JavaScript operate in markedly different environments, the demand for utilizing web assets (like CSS, JavaScript files, and fonts) within Java applications has grown exponentially. This is particularly relevant for projects that require rich client interfaces or server-side rendering of front-end components. Many JavaScript projects are distributed exclusively through NPM, so if, like me, you have found yourself copying and pasting assets from an NPM archive across to your Java web application workspace, then Npm2Mvn is just the solution you need. Key Features of Npm2Mvn Designed to automate the transformation of NPM packages into Maven-compatible jar files, Npm2Mvn makes NPM packages readily consumable by Java developers. This process involves several key steps: Standard Maven repository presentation: Utilizing another open-source project, uHTTPD, Npm2Mvn presents itself as a standard Maven repository. Automatic package conversion: When a request for a Maven artifact in the group npm is received, Npm2Mvn fetches the package metadata and tarball from NPM. It then enriches the package with additional metadata required for Maven, such as POM files and MANIFEST.MF. Inclusion of additional metadata: Besides standard Maven metadata, Npm2Mvn adds specific metadata for Graal native images, enhancing compatibility and performance for projects leveraging GraalVM. Seamless integration into the local Maven cache: The final jar file, enriched with the necessary metadata, is placed in the local Maven cache just like any other artifact, ensuring that using NPM packages in Java projects is as straightforward as adding a Maven dependency.
Benefits for Java Developers Npm2Mvn offers several compelling benefits for Java developers: Access to a vast repository of JavaScript libraries: By bridging NPM and Maven, Java developers can easily incorporate thousands of JavaScript libraries and frameworks into their projects. This access significantly expands the resources for enhancing Java applications, especially for UI/UX design, without leaving the familiar Maven ecosystem. Simplified dependency management: Managing dependencies across different ecosystems can be cumbersome. Npm2Mvn streamlines this process, allowing developers to handle NPM packages with the Maven commands they are accustomed to. Enhanced productivity: By automating the conversion of NPM packages to Maven artifacts, Npm2Mvn saves developers considerable time and effort. This efficiency boost enables developers to focus more on building their applications than wrestling with package management intricacies. Real-world applications: Projects like Fontawesome, Xterm, and Bootstrap, staples of front-end development, can seamlessly integrate into Java applications. How To Use Using Npm2Mvn is straightforward. Jadaptive, the project's developers, host a repository here. This repository is open and free to use. You can also download a copy of the server to host in a private build environment. To use this service, add the repository entry to your POM file.

XML

<repositories>
    <repository>
        <id>npm2mvn</id>
        <url>https://npm2mvn.jadaptive.com</url>
    </repository>
</repositories>

Now, declare your NPM packages. For example, I am including the jQuery NPM package here.

XML

<dependency>
    <groupId>npm</groupId>
    <artifactId>jquery</artifactId>
    <version>3.7.1</version>
</dependency>

That's all we need to include and version-manage NPM packages on the classpath.
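With the dependency resolved, the package's files become ordinary classpath resources. As the next section details, Npm2Mvn places them under a predictable prefix of the form /npm2mvn/&lt;groupId&gt;/&lt;artifactId&gt;/&lt;version&gt;/&lt;file&gt;. A tiny helper can construct these lookup paths; the class name Npm2MvnPaths is hypothetical, purely for illustration:

```java
public final class Npm2MvnPaths {

    private Npm2MvnPaths() {
    }

    // Classpath location of a resource packaged by Npm2Mvn:
    // /npm2mvn/<groupId>/<artifactId>/<version>/<file>
    // The groupId is "npm" for unscoped packages, "npm.<scope>" for scoped ones.
    public static String resourcePath(String groupId, String artifactId,
                                      String version, String file) {
        return "/npm2mvn/" + groupId + "/" + artifactId + "/" + version + "/" + file;
    }

    public static void main(String[] args) {
        System.out.println(resourcePath("npm", "jquery", "3.7.1", "dist/jquery.min.js"));
    }
}
```

For the jquery dependency above, this yields /npm2mvn/npm/jquery/3.7.1/dist/jquery.min.js, which can then be looked up with Class.getResource once the jar is on the classpath.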
Consuming the NPM Resources in Your Java Application The resources of the NPM package are placed in the jar under a fixed prefix, allowing multiple versions of multiple NPM packages to be available to the JVM via the classpath or module path. For example, if the NPM package bootstrap@v5.3.1 contains a resource with the path css/bootstrap.css, then the Npm2Mvn package will make that resource available at the resource path /npm2mvn/npm/bootstrap/5.3.1/css/bootstrap.css. Now that you know the path of the resources in your classpath, you can prepare to consume them in your Java web application by implementing a Servlet or another mechanism to serve the resources from the classpath. How you do this depends on your web application platform and any framework you use. In Spring Boot, we would add a resource handler as demonstrated below.

Java

@Configuration
@EnableWebMvc
public class MvcConfig implements WebMvcConfigurer {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry
                .addResourceHandler("/npm2mvn/**")
                .addResourceLocations("classpath:/npm2mvn/");
    }
}

With this configuration in a Spring Boot application, we can now reference NPM assets directly in the HTML files we use in the application.

HTML

<script type="text/javascript" src="/npm2mvn/npm/jquery/3.7.1/dist/jquery.min.js"></script>

But What About NPM Scopes? NPM version 2 supports scopes which, according to their website: ... allows you to create a package with the same name as a package created by another user or organization without conflict. In the examples above, we are not using scopes. If the package you require uses a scope, you must modify your pom.xml dependency and the resource path. Taking the FontAwesome project as an example, to include the @fortawesome/fontawesome-free module in our Maven build, we modify the groupId to include the scope as demonstrated below.
XML

<dependency>
    <groupId>npm.fortawesome</groupId>
    <artifactId>fontawesome-free</artifactId>
    <version>6.5.1</version>
</dependency>

Similarly, in the resource path, we change the second path value from 'npm' to the same groupId we used above.

HTML

<link rel="stylesheet" href="/npm2mvn/npm.fortawesome/fontawesome-free/6.5.1/css/all.css"/>

You can download a full working Spring Boot example that integrates the Xterm NPM module and add-ons from GitHub. Dependency Generator The website at the hosted version of Npm2Mvn provides a useful utility that developers can use to get the correct syntax for the dependencies needed to build the artifacts. Here, we have entered the scope, package, and version to get the correct dependency entry for the Maven build. If the project does not have a scope, simply leave the first field blank. Conclusion Npm2Mvn bridges the JavaScript and Java worlds, enhancing developers' capabilities and project possibilities. By simplifying the integration of NPM packages into Java workspaces, Npm2Mvn promotes a more interconnected and efficient development environment. It empowers developers to leverage the best of both ecosystems in their applications.
In modern web applications, integrating with external services is a common requirement. However, when interacting with these services, it's crucial to handle scenarios where responses might be delayed or fail to arrive. Spring Boot, with its extensive ecosystem, offers robust solutions to address such challenges. In this article, we'll explore how to implement timeouts using three popular approaches: RestClient, RestTemplate, and WebClient, all essential components in Spring Boot. 1. Timeout With RestTemplate First, let's demonstrate setting a timeout using RestTemplate, a synchronous HTTP client.

Java

import org.springframework.web.client.RestTemplate;

public class RestTemplateExample {

    public static void main(String[] args) {
        var restTemplate = new RestTemplate();
        var url = "https://api.example.com/data";

        var response = restTemplate.getForEntity(url, String.class);
        System.out.println(response.getBody());
    }
}

In this snippet, we're performing a GET request to `https://api.example.com/data`. However, we haven't set any timeout, which means the request might hang indefinitely in case of network issues or server unavailability. To set a timeout, we need to configure RestTemplate with an appropriate `ClientHttpRequestFactory`, such as `HttpComponentsClientHttpRequestFactory`.
Java import org.springframework.web.client.RestTemplate; import org.springframework.http.client.HttpComponentsClientHttpRequestFactory; public class RestTemplateTimeoutExample { public static void main(String[] args) { var url = "https://api.example.com/data"; var timeout = 5000; // Timeout in milliseconds var clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory(); clientHttpRequestFactory.setConnectTimeout(timeout); clientHttpRequestFactory.setConnectionRequestTimeout(timeout); var restTemplate = new RestTemplate(clientHttpRequestFactory); var response = restTemplate.getForEntity(url, String.class); System.out.println(response.getBody()); } } 2. Timeout With WebClient WebClient is a non-blocking, reactive HTTP client introduced in Spring WebFlux. Let's see how we can use it with a timeout: Java import org.springframework.web.reactive.function.client.WebClient; import java.time.Duration; public class WebClientTimeoutExample { public static void main(String[] args) { var client = WebClient.builder() .baseUrl("https://api.example.com") .build(); client.get() .uri("/data") .retrieve() .bodyToMono(String.class) .timeout(Duration.ofMillis(5000)) .subscribe(System.out::println); } } Here, we're using WebClient to make a GET request to the `/data` endpoint. The `timeout` operator specifies the maximum duration to wait for a response; if it elapses, the Mono terminates with a TimeoutException. 3. Timeout With RestClient RestClient is a synchronous HTTP client that offers a modern, fluent API, available since Spring Boot 3.2. New Spring Boot applications should prefer the RestClient API over RestTemplate. 
Now, let's implement a RestClient with a timeout using `HttpComponentsClientHttpRequestFactory`: Java import org.springframework.http.client.HttpComponentsClientHttpRequestFactory; import org.springframework.web.client.RestClient; public class RestClientTimeoutExample { public static void main(String[] args) { var factory = new HttpComponentsClientHttpRequestFactory(); factory.setConnectTimeout(5000); factory.setConnectionRequestTimeout(5000); var restClient = RestClient .builder() .requestFactory(factory) .build(); var response = restClient .get() .uri("https://api.example.com/data") .retrieve() .toEntity(String.class); System.out.println(response.getBody()); } } In this code, we configure the timeouts on HttpComponentsClientHttpRequestFactory and pass the factory to RestClient.builder(). By setting timeouts appropriately, we ensure that our application remains responsive even when external services are slow or unresponsive. This proactive approach enhances the overall reliability and resilience of our Spring Boot applications. Conclusion In summary, handling timeouts is essential for web applications to stay responsive and robust when interacting with external services. We explored three popular Spring Boot approaches for implementing timeouts: RestTemplate, WebClient, and RestClient. By setting appropriate timeouts, developers can ensure their applications gracefully handle delayed or failed responses, improving reliability and user experience under varying network conditions and service availability.
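As a dependency-free point of comparison, the same two timeout concepts, connect timeout and response timeout, also exist in the JDK's built-in java.net.http.HttpClient (Java 11+). This is a hedged sketch against the same hypothetical endpoint, not Spring-specific guidance; the class name is illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class JdkHttpClientTimeoutExample {
    public static void main(String[] args) {
        // Connect timeout: maximum time allowed to establish the connection.
        var client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        // Request timeout: maximum time to wait for the response once sent.
        var request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/data"))
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();

        // client.send(request, HttpResponse.BodyHandlers.ofString()) would throw
        // java.net.http.HttpTimeoutException if either limit is exceeded.
        System.out.println(client.connectTimeout().orElseThrow());
        System.out.println(request.timeout().orElseThrow());
    }
}
```

Unlike the Spring clients, exceeding either limit here surfaces as an HttpTimeoutException that the caller catches directly.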
Deciding on full-stack technology is daunting. There are a large number of frameworks to assess. I'll share my contrarian viewpoint: choose Django. The Django Python framework is old, but it gets the job done quickly and cheaply. Django has an "everything included and opinionated" philosophy, which makes it very fast to get started. As your project scales, you can separate Django into individual components and use it as a SQL database manager, an object-relational mapper, or an API server. In 2024, we stopped using Firebase and MongoDB on the backend and moved to Django. While many projects can benefit from more modern frameworks, choosing newer technology can also lead to higher costs as teams struggle to find developers with the required skills and later find the development budget expanding as unforeseen problems arise. With Django, the development path has been smoothly paved by the tens of thousands of developers who came before you. For the types of small projects we build for business workflows, Django is not only faster for bringing ideas to prototype, but also cheaper to build and maintain. Here are the top 5 reasons why Django should be considered in 2024. 1. SQL as a Service Django uses an SQL database. Like Django, SQL is considered old and not as glamorous as newer NoSQL databases like MongoDB. One way to view Django is as an SQL database with pre-built code that makes the database easier to use. If Django fails to meet a future requirement for a specific function, you can still use the SQL database alone or use portions of Django to make the database easier to access. In our case, part of our Django deployments consists of a hosted PostgreSQL database as a service. To start, we simply add the connection information to the Django app. This architecture also allows us to use Django as a temporary prototyping tool to set up a working interface and logic to data. 
As the project evolves, we can bypass Django and connect directly to the database with another application such as Flutter or React, or set up a stripped-down intermediary server to bridge the database and front end. The maturity of SQL is also an advantage when hiring talent. Many universities and technical schools include SQL in the mainstream curriculum, so hiring students is easier: they have recent academic experience and are eager to gain more real-world experience. Most business software projects can be completed without advanced SQL database administration skills. 2. API Service The ability to use SQL or the Django REST framework to expose API endpoints allows us to quickly build multiple interfaces to the database using different technologies. Although Django templates are easy to learn and powerful, they aren't as interactive as more reactive frameworks. Fortunately, Django can serve both its page templates and a REST API for the same data. This means you can test out different front-end architectures while reusing most of the core logic of the backend. The benefit of starting with Django instead of a bare API server is how easy it is to get started and prototype a concept. Django is simple to install and comes with everything needed for a web application: pip install django will get you going. It's very fast to build a functional prototype and deploy it. Django ships with authentication, a database, a development web server, and template syntax. This allows complex sites to be built quickly, with different security groups and a built-in admin panel for staff. 3. Serverless Architecture Django makes it easy to deploy to a serverless architecture, eliminating virtual server setup and maintenance. Starting in 2023, we moved our Django projects to a serverless architecture. Previously, we used Ubuntu Linux in virtual servers. 
We used to run PostgreSQL, NGINX, and Django in the same scalable virtual server. After moving to a set of cloud services for database, storage, CPU, and network, starting a new project took less than a day. During software development, our code is pushed to GitHub on a branch that we can preview as a web application in a browser. After a peer or manager reviews the pull request, it's merged into the main branch, which triggers automatic deployment of the code to the production service that runs the application. The GitHub deployment script for Django, as well as the settings, were provided by the application service; we just copied it and deployed. As our application service can't be used to store media and data, we use a different service for media such as images. Since Django has been around for a long time, there is going to be a solution for almost any connectivity problem you encounter. For example, the AWS SDK for Python, Boto3, provides easy access to S3, which is how we handle storage. 4. Python Python remains one of the most popular languages. It is widely taught at all levels of education in the US, including elementary, middle, and high school as well as college. This vast pool of people makes recruitment easier, and the integration of Python into a structured academic curriculum leads to better programming practices. Even with no prior Python experience, people familiar with another language can get productive quickly. With no compilation step, Python promotes a very fast development workflow. The lack of compilation does come with a performance penalty, but for most projects, hardware is cheaper than labor. With a scalable cloud-based architecture, the performance of Django is rarely an issue for projects with few simultaneous connections. Although Python is dynamically typed, you can use the typing module to take advantage of type hints. Perhaps more important than the language features is the toolchain. As Python does not require compilation, the toolchain is easier to set up and maintain. 
Lightweight tools such as pyenv and venv can keep the version of Python and packages consistent across the development team. As long as the PATH is set up correctly on developer workstations, development and deployment are usually free of drama. Our team uses a mix of Windows 10, Windows 11, macOS, and Ubuntu 22.04 for development. Although not required, everyone uses VS Code with a range of Python extensions. 5. Community After more than 20 years, almost all Django problems have been solved by other people. The huge size and long history of the community also help with staff changes. Most information on Django can be found with a search, either on a Q&A site or an AI site. For comparison, when I use other, newer frameworks, I find that I need to get my information by asking people questions, often on systems such as Discord or Slack. For Django, I usually don't need to wait for a person to answer, as the question has usually been asked and answered by the tens of thousands of Django developers before me. Although there are a number of free and inexpensive courses on both Django and Python, most people simply start coding and search for answers. While the Python language and Django are both under active development, most problems in small projects do not involve edge cases of the language or framework. Django Growth in the Future I'm not going to predict the future. Young people are going to create it. In my company, we have an active undergraduate intern program. Of course, the interns use and like Django; that's part of their job. What surprises me is that they use Python and Django in their own projects outside of work. I usually expect people under 24 to use JavaScript, and possibly TypeScript, for both the back end and the front end of a full-stack project. 
The young programmers are the future, and they often use Python on the backend with a combination of the Django admin panel, Django template syntax, and something like React on the front end. Ultimately, a more modern framework may work better for your project. If you like Python, then Flask or FastAPI may be better than Django. GraphQL with Django may be better than the tried-and-true REST approach. However, before you dismiss Django as old and monolithic, take another look at it from a componentized perspective. It may be the fastest and cheapest way to bring your creative ideas to life. After 20 years, Django still works great.
In Java programming, object creation, or instantiation of a class, is done with the "new" operator and a public constructor declared in the class, as below. Java Clazz clazz = new Clazz(); We can read this snippet as follows: Clazz() is the default public constructor, called with the "new" operator to instantiate an object of the Clazz class, which is assigned to the variable clazz of type Clazz. While creating a singleton, we have to ensure that only one object is created, i.e., that the class is instantiated only once. To ensure this, the following are common prerequisites. All constructors need to be declared private. This prevents the creation of objects with the "new" operator outside the class. A private constant/variable holder for the singleton object is needed; i.e., a private static or private static final class variable must be declared. It holds the singleton object and acts as the single source of reference for it. By convention, the variable is named INSTANCE or instance. A static method to allow other objects access to the singleton object is required. This static method is also called a static factory method, as it controls the creation of objects for the class. By convention, the method is named getInstance(). With this understanding, let us delve deeper into singletons. The following are six ways to create a singleton object for a class. 1. Static Eager Singleton Class When we have all the instance properties in hand and want only one object, with a class providing structure and behavior for a group of related properties, we can use the static eager singleton class. This is well-suited for application configuration and application properties. 
Java public class EagerSingleton { private static final EagerSingleton INSTANCE = new EagerSingleton(); private EagerSingleton() {} public static EagerSingleton getInstance() { return INSTANCE; } public static void main(String[] args) { EagerSingleton eagerSingleton = EagerSingleton.getInstance(); } } The singleton object is created while the class itself is loaded in the JVM and is assigned to the INSTANCE constant. getInstance() provides access to this constant. While compile-time initialization of properties is often enough, sometimes run-time initialization is required. In such a case, we can make use of a static block to instantiate the singleton. Java public class EagerSingleton { private static EagerSingleton instance; private EagerSingleton(){} // static block executed during class loading static { try { instance = new EagerSingleton(); } catch (Exception e) { throw new RuntimeException("Exception occurred in creating EagerSingleton instance"); } } public static EagerSingleton getInstance() { return instance; } } The singleton object is created while the class is loaded in the JVM, as all static blocks are executed at load time. Access to the instance variable is provided by the getInstance() static method. 2. Dynamic Lazy Singleton Class While the static eager singleton suits application configuration and properties, use cases such as heterogeneous container creation, object pool creation, layer creation, facade creation, flyweight object creation, and the preparation of per-request and per-session contexts all require dynamic construction of a singleton object for better separation of concerns. In such cases, dynamic lazy singletons are required. Java public class LazySingleton { private static LazySingleton instance; private LazySingleton(){} public static LazySingleton getInstance() { if (instance == null) { instance = new LazySingleton(); } return instance; } } The singleton object is created only when the getInstance() method is called. 
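A quick single-threaded harness (class names hypothetical, mirroring the LazySingleton above) confirms that repeated getInstance() calls hand back the same reference:

```java
public class LazySingletonDemo {
    // Same shape as the LazySingleton shown above, nested here to be self-contained.
    static class LazySingleton {
        private static LazySingleton instance;
        private LazySingleton() {}
        public static LazySingleton getInstance() {
            if (instance == null) {
                instance = new LazySingleton();
            }
            return instance;
        }
    }

    public static void main(String[] args) {
        // Both calls must return the very same object.
        LazySingleton first = LazySingleton.getInstance();
        LazySingleton second = LazySingleton.getInstance();
        System.out.println(first == second); // prints "true" in a single thread
    }
}
```

In a single thread this check always holds; under concurrency it can fail, which is exactly the problem the synchronized variants address.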
Unlike the static eager singleton class, this class is not thread-safe. Java public class LazySingleton { private static LazySingleton instance; private LazySingleton(){} public static synchronized LazySingleton getInstance() { if (instance == null) { instance = new LazySingleton(); } return instance; } } The getInstance() method needs to be synchronized to make singleton object instantiation thread-safe. 3. Dynamic Lazy Improved Singleton Class Java public class LazySingleton { private static volatile LazySingleton instance; private LazySingleton(){} public static LazySingleton getInstance() { if (instance == null) { synchronized (LazySingleton.class) { if (instance == null) { instance = new LazySingleton(); } } } return instance; } } Instead of locking the entire getInstance() method, we lock only a block using double-checked locking to improve performance and reduce thread contention; note that the instance field must be declared volatile for double-checked locking to be safe. Java public class EagerAndLazySingleton { private EagerAndLazySingleton(){} private static class SingletonHelper { private static final EagerAndLazySingleton INSTANCE = new EagerAndLazySingleton(); } public static EagerAndLazySingleton getInstance() { return SingletonHelper.INSTANCE; } } The singleton object is created only when the getInstance() method is called. This initialization-on-demand holder idiom is memory-safe, thread-safe, and lazily loaded, and it is the most widely used and recommended approach. Despite these performance and safety improvements, the core objective of creating just one object for a class is still challenged by memory reordering, reflection, and serialization in Java. Memory reordering: In a multithreaded environment, reads and writes by different threads can be reordered, and a dirty read of a partially constructed object can happen if the reference variable is not declared volatile. Reflection: With reflection, the private constructor can be made accessible and a new instance can be created. 
Serialization: A serialized instance can be deserialized to create another instance of the same class. These issues affect both static and dynamic singletons. Overcoming them requires declaring the instance holder as volatile, guarding the private constructor against reflective access, and implementing the readResolve() serialization hook so that deserialization returns the existing instance rather than a new one. 4. Singleton With Enum The issues with memory safety, reflection, and serialization can be avoided entirely if an enum is used for a static eager singleton. Java public enum EnumSingleton { INSTANCE; } An enum singleton is a static eager singleton in disguise, and it is thread-safe; the JVM guarantees a single instance and blocks reflective and serialization attacks by design. Prefer an enum wherever a statically, eagerly initialized singleton is required. 5. Singleton With Function and Libraries While the challenges and caveats of singletons are worth understanding, why worry about reflection, serialization, thread safety, and memory safety when one can leverage proven libraries? Guava is such a popular, proven library, encoding many best practices for writing effective Java programs. I have had the privilege of using the Guava library to explain supplier-based singleton instantiation, which avoids a lot of heavy-lifting lines of code. Passing a function as an argument is the key feature of functional programming. A supplier function is an object producer; in our case, the producer must produce only one object and keep returning that same object on every subsequent call. We can memoize (cache) the created object. Functions defined with lambdas are invoked lazily, so the memoization technique gives us lazily invoked, dynamic singleton creation. 
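Stripped of any library, memoization is just a cache-on-first-call wrapper around a supplier. Here is a plain-Java sketch of the idea using java.util.function.Supplier (class and method names hypothetical, not Guava's API):

```java
import java.util.function.Supplier;

public class MemoizedSingleton {
    private MemoizedSingleton() {}

    // Wraps a delegate supplier and caches its first result;
    // synchronization ensures the delegate runs exactly once.
    static <T> Supplier<T> memoize(Supplier<T> delegate) {
        return new Supplier<T>() {
            private T value;

            @Override
            public synchronized T get() {
                if (value == null) {
                    value = delegate.get(); // invoked lazily, only on first call
                }
                return value;
            }
        };
    }

    private static final Supplier<MemoizedSingleton> SUPPLIER =
            memoize(MemoizedSingleton::new);

    public static MemoizedSingleton getInstance() {
        return SUPPLIER.get();
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // prints "true"
    }
}
```

Guava's Suppliers.memoize packages this same caching behavior, with a more carefully optimized implementation.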
Java import com.google.common.base.Supplier; import com.google.common.base.Suppliers; public class SupplierSingleton { private SupplierSingleton() {} private static final Supplier<SupplierSingleton> singletonSupplier = Suppliers.memoize(() -> new SupplierSingleton()); public static SupplierSingleton getInstance() { return singletonSupplier.get(); } public static void main(String[] args) { SupplierSingleton supplierSingleton = SupplierSingleton.getInstance(); } } Functional programming, the supplier function, and memoization help prepare singletons with a caching mechanism. This is most useful when we don't want to deploy a heavy framework. 6. Singleton With Framework: Spring, Guice Why worry about even preparing an object via a supplier and maintaining a cache? Frameworks like Spring and Guice work on POJOs to provide and maintain singletons. This is heavily used in enterprise development, where many modules each require their own context with many layers; each context and each layer is a good candidate for the singleton pattern. Java import org.springframework.beans.factory.config.ConfigurableBeanFactory; import org.springframework.context.annotation.AnnotationConfigApplicationContext; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Scope; class SingletonBean { } @Configuration public class SingletonBeanConfig { @Bean @Scope(value = ConfigurableBeanFactory.SCOPE_SINGLETON) public SingletonBean singletonBean() { return new SingletonBean(); } public static void main(String[] args) { AnnotationConfigApplicationContext applicationContext = new AnnotationConfigApplicationContext(SingletonBeanConfig.class); SingletonBean singletonBean = applicationContext.getBean(SingletonBean.class); } } Spring is a very popular framework. Context and dependency injection are at the core of Spring. 
Java import com.google.inject.AbstractModule; import com.google.inject.Guice; import com.google.inject.Injector; import com.google.inject.Singleton; interface ISingletonBean {} class SingletonBean implements ISingletonBean { } public class SingletonBeanConfig extends AbstractModule { @Override protected void configure() { bind(ISingletonBean.class).to(SingletonBean.class).in(Singleton.class); } public static void main(String[] args) { Injector injector = Guice.createInjector(new SingletonBeanConfig()); ISingletonBean singletonBean = injector.getInstance(ISingletonBean.class); } } Guice, from Google, is another framework that prepares singleton objects and is an alternative to Spring. Note that the binding must be scoped with in(Singleton.class) (or annotated @Singleton); an unscoped binding creates a new instance per injection. Singleton objects are also leveraged by other creational patterns as a "factory of singletons": Factory Method, Abstract Factory, and Builder are associated with the creation and construction of specific objects in the JVM, and wherever we envision constructing an object for a specific need, we may discover a need for a singleton. Further places to look for and discover singletons are as follows. Prototype or Flyweight Object pools Facades Layering Context and class loaders Cache Cross-cutting concerns and aspect-oriented programming Conclusion Patterns appear when we solve use cases for our business problems and for non-functional constraints like performance, security, and CPU and memory limits. The singleton, a single object for a given class, is such a pattern, and the requirements for its use fall into place as you discover them. A class is by nature a blueprint for creating multiple objects, yet the need for dynamic heterogeneous containers to prepare contexts, layers, object pools, and strategic functional objects pushes us to declare globally or contextually accessible objects. Thanks for your valuable time, and I hope you found something useful to revisit and discover.