A Comparative Exploration of LLM and RAG Technologies: Shaping the Future of AI
Designing Developer-Friendly APIs and SDKs: Strategies for Platform Success
Modern API Management
When assessing prominent topics across DZone — and the software engineering space more broadly — it simply felt incomplete to conduct research on the larger impacts of data and the cloud without talking about such a crucial component of modern software architectures: APIs. Communication is key in an era when applications and data capabilities are growing increasingly complex. Therefore, we set our sights on investigating the emerging ways in which data that would otherwise be isolated can better integrate with and work alongside other app components and across systems. For DZone's 2024 Modern API Management Trend Report, we focused our research specifically on APIs' growing influence across domains, prevalent paradigms and implementation techniques, security strategies, AI, and automation. Alongside observations from our original research, practicing tech professionals from the DZone Community contributed articles addressing key topics in the API space, including automated API generation via no- and low-code tools; communication architecture design among systems, APIs, and microservices; GraphQL vs. REST; and the role of APIs in the modern cloud-native landscape.
Open Source Migration Practices and Patterns
MongoDB Essentials
I would like to introduce a Java class of less than 170 lines of code that facilitates work with SQL queries called via the JDBC API. What makes this solution interesting? The class can be embedded in a Java version 17 script. Using a Java Script The advantage of a Java script is easy portability in text format and the possibility of running it without prior compilation, while still having considerable resources from the language's standard library available at runtime. Scripts lend themselves to various prototypes, in which even more complicated data exports or data conversions can be solved (after connecting to the database). Scripts are useful wherever we don't want to (or can't) put the implementation into a standard Java project. However, using a script has some limitations. For example, the code must be written in a single file. We can include all the necessary libraries when we run the script, but these will likely have additional dependencies, and simply listing them on the command line can be frustrating. The complications associated with the distribution of such a script probably do not need to be emphasized. For the above reasons, I believe that external libraries in scripts are best avoided. If we still want to go the script route, the choice falls on pure JDBC. Multi-line text literals can be put to good use for writing SQL queries, and objects like PreparedStatement (which implements the AutoCloseable interface) are closed automatically. So what's the problem? Mapping SQL Parameter Values For security reasons, it is advisable to map SQL parameter values to question marks. I consider the main handicap of JDBC to be the mapping of parameters using the sequence number of the question mark (starting with one). The first version of the parameter mapping to the SQL script often turns out well, but the risk of error increases as the number of parameters grows and the SQL is modified further. Remember that inserting a new parameter in the first position means that all the following ones must be renumbered. Another complication is the IN operator, because for each value of the enumeration a question mark must be written in the SQL template and mapped to a separate parameter. If the parameter list is dynamic, the list of question marks in the SQL template must also be dynamic. Debugging a larger number of more complex SQL statements can start to take a significant amount of time. For inserting SQL parameters using String Templates, we will have to wait a little longer. However, inserting SQL parameters could be facilitated by a simple wrapper over the PreparedStatement interface, which would (before calling the SQL statement) apply the parameters using JPA-style named tags (alphanumeric text starting with a colon). A wrapper could also simplify reading data from the database (with a SELECT statement) if it allowed the necessary methods to be chained into a single statement, preferably with a return type of Stream<ResultSet>. SqlParamBuilder Class Visualization of the SQL command with attached parameters would sometimes be useful for debugging or logging the SQL query. I present to you the class SqlParamBuilder. The priority of the implementation was to cover the stated requirements with a single Java class with minimalistic code. The programming interface was inspired by the JDBI library. The samples use the H2 database in in-memory mode; however, connecting the database driver will still be necessary.
Java void mainStart(Connection dbConnection) throws Exception { try (var builder = new SqlParamBuilder(dbConnection)) { System.out.println("# CREATE TABLE"); builder.sql(""" CREATE TABLE employee ( id INTEGER PRIMARY KEY , name VARCHAR(256) DEFAULT 'test' , code VARCHAR(1) , created DATE NOT NULL ) """) .execute(); System.out.println("# SINGLE INSERT"); builder.sql(""" INSERT INTO employee ( id, code, created ) VALUES ( :id, :code, :created ) """) .bind("id", 1) .bind("code", "T") .bind("created", someDate) .execute(); System.out.println("# MULTI INSERT"); builder.sql(""" INSERT INTO employee (id,code,created) VALUES (:id1,:code,:created), (:id2,:code,:created) """) .bind("id1", 2) .bind("id2", 3) .bind("code", "T") .bind("created", someDate.plusDays(7)) .execute(); builder.bind("id1", 11) .bind("id2", 12) .bind("code", "V") .execute(); System.out.println("# SELECT"); List<Employee> employees = builder.sql(""" SELECT t.id, t.name, t.created FROM employee t WHERE t.id < :id AND t.code IN (:code) ORDER BY t.id """) .bind("id", 10) .bind("code", "T", "V") .streamMap(rs -> new Employee( rs.getInt("id"), rs.getString("name"), rs.getObject("created", LocalDate.class))) .toList(); System.out.printf("# PRINT RESULT OF: %s%n", builder.toStringLine()); employees.stream() .forEach((Employee employee) -> System.out.println(employee)); assertEquals(3, employees.size()); assertEquals(1, employees.get(0).id); assertEquals("test", employees.get(0).name); assertEquals(someDate, employees.get(0).created); } } record Employee (int id, String name, LocalDate created) {} static class SqlParamBuilder {…} Usage Notes and Final Thoughts An instance of the type SqlParamBuilder can be recycled for multiple SQL statements. After calling the command, the parameters can be changed and the command can be run again. The parameters are assigned to the last used PreparedStatement object. The method sql() automatically closes the internal PreparedStatement object (if there was one open before). If we change the group of parameters (typically for the IN operator), we need to send the same number of values to the same PreparedStatement. Otherwise, the method sql() will need to be called again. After the last command execution, the SqlParamBuilder object needs to be closed explicitly. However, since it implements the AutoCloseable interface, it is enough to enclose the entire block in a try-with-resources block. Closing does not affect the contained database connection. In the Bash shell, the sample can be run with the script SqlExecutor.sh, which can download the necessary JDBC driver (here, for the H2 database). If we prefer Kotlin, we can try the Bash script SqlExecutorKt.sh, which migrates the prepared Kotlin code to a script and runs it. Let's not get confused by the fact that the class is stored in a Maven-type project. One reason is the ease of running JUnit tests. The class is licensed under the Apache License, Version 2.0. Probably the fastest way to create your own implementation is to download the example script, rework the mainRun() method, and modify the connection parameters to your own database. Use your own JDBC driver to run.
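To make the named-parameter idea more concrete, here is a minimal, self-contained sketch (mine, not the author's SqlParamBuilder) of how such a wrapper might translate JPA-style tags into JDBC question marks. The class and method names are illustrative assumptions only, and the IN-operator expansion described above is deliberately omitted to keep it short.

Java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: converts ":name" tags to '?' placeholders and binds values
// in their order of appearance, so parameters never have to be renumbered by hand.
final class NamedSql {
    private static final Pattern TAG = Pattern.compile(":(\\w+)");
    private final Map<String, Object> params = new LinkedHashMap<>();
    private final String namedSql;

    NamedSql(String namedSql) { this.namedSql = namedSql; }

    NamedSql bind(String name, Object value) {
        params.put(name, value);
        return this;
    }

    PreparedStatement prepare(Connection connection) throws Exception {
        List<Object> ordered = new ArrayList<>();
        StringBuilder jdbcSql = new StringBuilder();
        Matcher matcher = TAG.matcher(namedSql);
        while (matcher.find()) {
            ordered.add(params.get(matcher.group(1))); // remember the value for this position
            matcher.appendReplacement(jdbcSql, "?");   // swap the named tag for a placeholder
        }
        matcher.appendTail(jdbcSql);
        PreparedStatement ps = connection.prepareStatement(jdbcSql.toString());
        for (int i = 0; i < ordered.size(); i++) {
            ps.setObject(i + 1, ordered.get(i));       // JDBC positions start at 1
        }
        return ps;
    }
}

A real implementation, like the one presented above, would additionally expand a multi-value bind into the matching number of question marks for the IN operator and reuse the PreparedStatement across executions.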
1. Use "&&" to Link Two or More Commands Use “&&” to link two or more commands when you want the previous command to be succeeded before the next command. If you use “;” then it would still run the next command after “;” even if the command before “;” failed. So you would have to wait and run each command one by one. However, using "&&" ensures that the next command will only run if the preceding command finishes successfully. This allows you to add commands without waiting, move on to the next task, and check later. If the last command ran, it indicates that all previous commands ran successfully. Example: Shell ls /path/to/file.txt && cp /path/to/file.txt /backup/ The above example ensures that the previous command runs successfully and that the file "file.txt" exists. If the file doesn't exist, the second command after "&&" won't run and won't attempt to copy it. 2. Use “grep” With -A and -B Options One common use of the "grep" command is to identify specific errors from log files. However, using it with the -A and -B options provides additional context within a single command, and it displays lines after and before the searched text, which enhances visibility into related content. Example: Shell % grep -A 2 "java.io.IOException" logfile.txt java.io.IOException: Permission denied (open /path/to/file.txt) at java.io.FileOutputStream.<init>(FileOutputStream.java:53) at com.pkg.TestClass.writeFile(TestClass.java:258) Using grep with -A here will also show 2 lines after the “java.io.IOException” was found from the logfile.txt. Similarly, Shell grep "Ramesh" -B 3 rank-file.txt Name: John Wright, Rank: 23 Name: David Ross, Rank: 45 Name: Peter Taylor, Rank: 68 Name Ramesh Kumar, Rank: 36 Here, grep with -B option will also show 3 lines before the “Ramesh” was found from the rank-file.txt 3. Use “>” to Create an Empty File Just write > and then the filename to create an empty file with the name provided after > Example: Shell >my-file.txt It will create an empty file with "my-file.txt" name in the current directory. 4. Use “rsync” for Backups "rsync" is a useful command for regular backups as it saves time by transferring only the differences between the source and destination. This feature is especially beneficial when creating backups over a network. Example: Shell rsync -avz /path/to/source_directory/ user@remotehost:/path/to/destination_directory/ 5. Use Tab Completion Using tab completion as a habit is faster than manually selecting filenames and pressing Enter. Typing the initial letters of filenames and utilizing Tab completion streamlines the process and is more efficient. 6. Use “man” Pages Instead of reaching the web to find the usage of a command, a quick way would be to use the “man” command to find out the manual of that command. This approach not only saves time but also ensures accuracy, as command options can vary based on the installed version. By accessing the manual directly, you get precise details tailored to your existing version. Example: Shell man ps It will get the manual page for the “ps” command 7. Create Scripts For repetitive tasks, create small shell scripts that chain commands and perform actions based on conditions. This saves time and reduces risks in complex operations. Conclusion In conclusion, becoming familiar with these Linux commands and tips can significantly boost productivity and streamline workflow on the command line. 
By using techniques like command chaining, context-aware searching, efficient file management, and automation through scripts, users can save time, reduce errors, and optimize their Linux experience.
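Tip 7 above recommends scripting, but the article stops short of showing one, so here is a small, hypothetical example that combines command chaining (tip 1) and rsync (tip 4); the paths and host are placeholders, not values from the article.

Shell
#!/bin/bash
# Hypothetical backup helper: sync a directory to a remote host, then record
# the time of the last successful run. Adjust the paths and host before use.
SRC="/path/to/source_directory/"
DEST="user@remotehost:/path/to/destination_directory/"

# "&&" ensures the timestamp is only written if rsync finished successfully.
rsync -avz "$SRC" "$DEST" && date > "$HOME/.last_backup_ok"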
Debugging Terraform providers is crucial for ensuring the reliability and functionality of infrastructure deployments. Terraform providers, written in languages like Go, can have complex logic that requires careful debugging when issues arise. One powerful tool for debugging Terraform providers is Delve, a debugger for the Go programming language. Delve allows developers to set breakpoints, inspect variables, and step through code, making it easier to identify and resolve bugs. In this blog, we will explore how to use Delve effectively for debugging Terraform providers. Setup Delve for Debugging Terraform Provider Shell # For Linux sudo apt-get install -y delve # For macOS brew install delve Refer here for more details on the installation. Debug Terraform Provider Using VS Code Follow the below steps to debug the provider: Download the provider code. We will use the IBM Cloud Terraform Provider for this debugging example. Update the provider’s main.go code to the below to support debugging: Go package main import ( "flag" "log" "github.com/IBM-Cloud/terraform-provider-ibm/ibm/provider" "github.com/IBM-Cloud/terraform-provider-ibm/version" "github.com/hashicorp/terraform-plugin-sdk/v2/plugin" ) func main() { var debug bool flag.BoolVar(&debug, "debug", true, "Set to true to enable debugging mode using delve") flag.Parse() opts := &plugin.ServeOpts{ Debug: debug, ProviderAddr: "registry.terraform.io/IBM-Cloud/ibm", ProviderFunc: provider.Provider, } log.Println("IBM Cloud Provider version", version.Version) plugin.Serve(opts) } Launch VS Code in debug mode. Refer here if you are new to debugging in VS Code. Create the launch.json using the below configuration: JSON { "version": "0.2.0", "configurations": [ { "name": "Debug Terraform Provider IBM with Delve", "type": "go", "request": "launch", "mode": "debug", "program": "${workspaceFolder}", "internalConsoleOptions": "openOnSessionStart", "args": [ "-debug" ] } ] } In VS Code, click “Start Debugging”. This starts the provider in debug mode. To attach the Terraform CLI to the debugger, the console prints the environment variable TF_REATTACH_PROVIDERS. Copy this from the console and set it as an environment variable in the terminal running the Terraform code. Now, in the VS Code instance where the provider code is in debug mode, open the Go code and set breakpoints. To learn more about breakpoints in VS Code, refer here. Execute 'terraform plan' followed by 'terraform apply', and notice the Terraform provider breakpoint being triggered as part of the terraform apply execution. This helps to debug the Terraform execution and comprehend the behavior of the provider code for the particular inputs supplied in Terraform. Debug Terraform Provider Using DLV Command Line Follow the below steps to debug the provider using the command line. To know more about the dlv command line commands, refer here. Follow steps 1 & 2 mentioned in "Debug Terraform Provider Using VS Code." In the terminal, navigate to the provider Go code and issue go build -gcflags="all=-N -l" to compile the code. To execute the precompiled Terraform provider binary and begin a debug session, run dlv exec --accept-multiclient --continue --headless <path to the binary> -- -debug where the build file is present. For the IBM Cloud Terraform provider, use dlv exec --accept-multiclient --continue --headless ./terraform-provider-ibm -- -debug In another terminal, where the Terraform code would be run, set TF_REATTACH_PROVIDERS as an environment variable.
Notice the “API server” details in the above command output. In another (third) terminal, connect to the DLV server and start issuing DLV client commands. Set a breakpoint using the break command (a short sample session follows after the best practices below). Now we are set to debug the Terraform provider when the Terraform scripts are executed. Issue continue in the DLV client terminal to continue execution until a breakpoint is hit. Now execute terraform plan and terraform apply to notice the client waiting on the breakpoint. Use DLV CLI commands to step in, step out, or continue the execution. This provides a way to debug the Terraform provider from the command line. Remote Debugging and CI/CD Pipeline Debugging The following are extensions to debugging with the dlv command line tool. Remote Debugging Remote debugging allows you to debug a Terraform provider running on a remote machine or environment. Debugging in CI/CD Pipelines Debugging in CI/CD pipelines involves setting up your pipeline to run Delve and attach to your Terraform provider for debugging. This can be challenging due to the ephemeral nature of CI/CD environments. One approach is to use conditional logic in your pipeline configuration to only enable debugging when a specific environment variable is set. For example, you can use the following script in your pipeline configuration to start Delve and attach to your Terraform provider – YAML - name: Debug Terraform Provider if: env(DEBUG) == 'true' run: | dlv debug --headless --listen=:2345 --api-version=2 & sleep 5 # Wait for Delve to start export TF_LOG=TRACE terraform init terraform apply Best Practices for Effective Debugging With Delve Here are some best practices for effective debugging with Delve, along with tips for improving efficiency and minimizing downtime: Use version control: Always work with version-controlled code. This allows you to easily revert changes if debugging introduces new issues. Start small: Begin debugging with a minimal, reproducible test case. This helps isolate the problem and reduces the complexity of debugging. Understand the code: Familiarize yourself with the codebase before debugging. Knowing the code structure and expected behavior can speed up the debugging process. Use logging: Add logging statements to your code to track the flow of execution and the values of important variables. This can provide valuable insights during debugging. Use breakpoints wisely: Set breakpoints strategically at critical points in your code. Too many breakpoints can slow down the debugging process. Inspect variables: Use the print (p) command in Delve to inspect the values of variables. This can help you understand the state of your program at different points in time. Use conditional breakpoints: Use conditional breakpoints to break execution only when certain conditions are met. This can help you focus on specific scenarios or issues. Use stack traces: Use the stack command in Delve to view the call stack. This can help you understand the sequence of function calls leading to an issue. Use goroutine debugging: If your code uses goroutines, use Delve's goroutine debugging features to track down issues related to concurrency. Automate debugging: If you're debugging in a CI/CD pipeline, automate the process as much as possible to minimize downtime and speed up resolution. By following these best practices, you can improve the efficiency of your debugging process and minimize downtime caused by issues in your code.
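For reference, a short DLV client session might look like the following. The port and the source file path are placeholders here — the real address comes from the "API server listening at" line printed by the headless server, and the file/line depends on the provider resource you are investigating — so treat this as an illustrative sketch rather than literal output.

Shell
# Connect to the headless DLV server started earlier (replace the address with
# the one printed after "API server listening at:").
dlv connect 127.0.0.1:38697

# Inside the DLV client (file and line are hypothetical):
(dlv) break ibm/service/resourcecontroller/resource_ibm_resource_instance.go:120
(dlv) continue            # run until terraform apply hits the breakpoint
(dlv) print instanceName  # inspect a local variable (hypothetical name)
(dlv) stack               # show the call stack leading to this point
(dlv) continue            # let the provider finish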
Conclusion In conclusion, mastering the art of debugging Terraform providers with Delve is a valuable skill that can significantly improve the reliability and performance of your infrastructure deployments. By setting up Delve for debugging, exploring advanced techniques like remote debugging and CI/CD pipeline debugging, and following best practices for effective debugging, you can effectively troubleshoot issues in your Terraform provider code. Debugging is not just about fixing bugs; it's also about understanding your code better and improving its overall quality. Dive deep into Terraform provider debugging with Delve, and empower yourself to build a more robust and efficient infrastructure with Terraform.
The amount of data generated by modern systems has become a double-edged sword for security teams. While it offers valuable insights, sifting through mountains of logs and alerts manually to identify malicious activity is no longer feasible. Here's where rule-based incident detection steps in, offering a way to automate the process by leveraging predefined rules to flag suspicious activity. However, the choice of tool for processing high-volume data for real-time insights is crucial. This article delves into the strengths and weaknesses of two popular options: Splunk, a leading batch search tool, and Flink, a powerful stream processing framework, specifically in the context of rule-based security incident detection. Splunk: Powerhouse Search and Reporting Splunk has become a go-to platform for making application and infrastructure logs readily available for ad-hoc search. Its core strength lies in its ability to ingest log data from various sources, centralize it, and enable users to explore it through powerful search queries. This empowers security teams to build comprehensive dashboards and reports, providing a holistic view of their security posture. Additionally, Splunk supports scheduled searches, allowing users to automate repetitive queries and receive regular updates on specific security metrics. This can be particularly valuable for configuring rule-based detections, monitoring key security indicators, and identifying trends over time. Flink: The Stream Processing Champion Apache Flink, on the other hand, takes a fundamentally different approach. It is a distributed processing engine designed to handle stateful computations over unbounded and bounded data streams. Unlike Splunk's batch processing, Flink excels at real-time processing, enabling it to analyze data as it arrives, offering near-instantaneous insights. This makes it ideal for scenarios where immediate detection and response are paramount, such as identifying ongoing security threats or preventing fraudulent transactions in real time. Flink's ability to scale horizontally across clusters makes it suitable for handling massive data volumes, a critical factor for organizations wrestling with ever-growing security data. Case Study: Detecting User Login Attacks Let's consider a practical example: a rule designed to detect potential brute-force login attempts. This rule aims to identify users who experience a high number of failed login attempts within a specific timeframe (e.g., an hour). Here's how the rule implementation would differ in Splunk and Flink: Splunk Implementation sourcetype=login_logs (result="failure" OR "failed") | stats count by user within 1h | search count > 5 | alert "Potential Brute Force Login Attempt for user: $user$" This Splunk search query filters login logs for failed attempts, calculates the count of failed attempts per user within an hour window, and then triggers an alert if the count exceeds a predefined threshold (5). While efficient for basic detection, it relies on batch processing, potentially introducing latency in identifying ongoing attacks. Flink Implementation SQL SELECT user, COUNT(*) AS failed_attempts FROM login_logs WHERE result = 'failure' OR result = 'failed' GROUP BY user, TUMBLE(event_time, INTERVAL '1' HOUR) HAVING COUNT(*) > 5; Flink takes a more real-time approach. As each login event arrives, Flink checks the user and result. If it's a failed attempt, a counter for that user's window (1 hour) is incremented.
If the count surpasses the threshold (5) within the window, Flink triggers an alert. This provides near-instantaneous detection of suspicious login activity. A Deep Dive: Splunk vs. Flink for Detecting User Login Attacks The underlying processing models of Splunk and Flink lead to fundamental differences in how they handle security incident detection. Here's a closer look at the key areas: Batch vs. Stream Processing Splunk Splunk operates on historical data. Security analysts write search queries that retrieve and analyze relevant logs. These queries can be configured to run automatically on a schedule. This is a batch processing approach, meaning Splunk needs to search through potentially a large volume of data to identify anomalies or trends. For the login attempt example, Splunk would need to query all login logs within the past hour every time the search is run to calculate the failed login count per user. This can introduce significant detection latency and increase compute costs, especially when dealing with large datasets. Flink Flink analyzes data streams in real-time. As each login event arrives, Flink processes it immediately. This stream-processing approach allows Flink to maintain a continuous state and update it with each incoming event. In the login attempt scenario, Flink keeps track of failed login attempts per user within a rolling one-hour window. With each new login event, Flink checks the user and result. If it's a failed attempt, the counter for that user's window is incremented. This eliminates the need to query a large amount of historical data every time a check is needed. Windowing Splunk Splunk performs windowing calculations after retrieving all relevant logs. In our example, the search stats count by user within 1h retrieves all login attempts within the past hour and then calculates the count for each user. This approach can be inefficient for real-time analysis, especially as data volume increases. Flink Flink maintains a rolling window and continuously updates the state based on incoming events. Flink uses a concept called "time windows" to partition the data stream into specific time intervals (e.g., one hour). For each window, Flink keeps track of relevant information, such as the number of failed login attempts per user. As new data arrives, Flink updates the state for the current window. This eliminates the need for a separate post-processing step to calculate windowed aggregations. Alerting Infrastructure Splunk Splunk relies on pre-configured alerting actions within the platform. Splunk allows users to define search queries that trigger alerts when specific conditions are met. These alerts can be delivered through various channels such as email, SMS, or integrations with other security tools. Flink Flink might require integration with external tools for alerts. While Flink can identify anomalies in real time, it may not have built-in alerting functionalities like Splunk. Security teams often integrate Flink with external Security Information and Event Management (SIEM) solutions for alert generation and management. In essence, Splunk operates like a detective sifting through historical evidence, while Flink functions as a security guard constantly monitoring activity. Splunk is a valuable tool for forensic analysis and identifying historical trends. However, for real-time threat detection and faster response times, Flink's stream processing capabilities offer a significant advantage.
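For readers who want to see what the streaming side can look like beyond SQL, below is a minimal, hypothetical sketch of the same rule using Flink's DataStream API in Java. The LoginEvent type, its field names, the inline test source, and the print() sink are my own assumptions for illustration; a production job would read from a real log stream, tune watermarking to that source, and forward alerts to a SIEM.

Java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class FailedLoginJob {

    // Hypothetical event type (Flink-style POJO); field names are assumptions.
    public static class LoginEvent {
        public String user;
        public String result;
        public long timestampMillis;
        public LoginEvent() {}
        public LoginEvent(String user, String result, long timestampMillis) {
            this.user = user; this.result = result; this.timestampMillis = timestampMillis;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder source; a real job would consume a log stream (e.g., Kafka).
        DataStream<LoginEvent> logins = env.fromElements(
                new LoginEvent("alice", "failure", 1_000L),
                new LoginEvent("alice", "failure", 2_000L))
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<LoginEvent>forMonotonousTimestamps()
                    .withTimestampAssigner((event, ts) -> event.timestampMillis));

        logins
            .filter(e -> "failure".equals(e.result) || "failed".equals(e.result))
            .keyBy(new KeySelector<LoginEvent, String>() {
                @Override
                public String getKey(LoginEvent e) { return e.user; }
            })
            .window(TumblingEventTimeWindows.of(Time.hours(1)))
            .process(new ProcessWindowFunction<LoginEvent, String, String, TimeWindow>() {
                @Override
                public void process(String user, Context ctx,
                                    Iterable<LoginEvent> events, Collector<String> out) {
                    long failedAttempts = 0;
                    for (LoginEvent ignored : events) failedAttempts++;
                    if (failedAttempts > 5) {
                        out.collect("Potential brute-force login for user " + user);
                    }
                }
            })
            .print(); // a real deployment would forward alerts to a SIEM instead

        env.execute("failed-login-detection");
    }
}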
Choosing the Right Tool: A Balancing Act While Splunk provides a user-friendly interface and simplifies rule creation, its batch processing introduces latency, which can be detrimental to real-time security needs. Flink excels in real-time processing and scalability, but it requires more technical expertise to set up and manage. Beyond Latency and Ease of Use: Additional Considerations The decision between Splunk and Flink goes beyond just real-time processing and ease of use. Here are some additional factors to consider: Data Volume and Variety Security teams are often overwhelmed by the sheer volume and variety of data they need to analyze. Splunk excels at handling structured data like logs but struggles with real-time ingestion and analysis of unstructured data like network traffic or social media feeds. Flink, with its distributed architecture, can handle diverse data types at scale. Alerting and Response Both Splunk and Flink can trigger alerts based on rule violations. However, Splunk integrates seamlessly with existing Security Information and Event Management (SIEM) systems, streamlining the incident response workflow. Flink might require additional development effort to integrate with external alerting and response tools. Cost Splunk's licensing costs are based on data ingestion volume, which can become expensive for organizations with massive security data sets. Flink, being open-source, eliminates licensing fees. However, the cost of technical expertise for setup, maintenance, and rule development for Flink needs to be factored in. The Evolving Security Landscape: A Hybrid Approach The security landscape is constantly evolving, demanding a multifaceted approach. Many organizations find value in a hybrid approach, leveraging the strengths of both Splunk and Flink. Splunk as the security hub: Splunk can serve as a central repository for security data, integrating logs from various sources, including real-time data feeds from Flink. Security analysts can utilize Splunk's powerful search capabilities for historical analysis, threat hunting, and investigation. Flink for real-time detection and response: Flink can be deployed for real-time processing of critical security data streams, focusing on identifying and responding to ongoing threats. This combination allows security teams to enjoy the benefits of both worlds: Comprehensive security visibility: Splunk provides a holistic view of historical and current security data. Real-time threat detection and response: Flink enables near-instantaneous identification and mitigation of ongoing security incidents. Conclusion: Choosing the Right Tool for the Job Neither Splunk nor Flink is a one-size-fits-all solution for rule-based incident detection. The optimal choice depends on your specific security needs, data volume, technical expertise, and budget. Security teams should carefully assess these factors and potentially consider a hybrid approach to leverage the strengths of both Splunk and Flink for a robust and comprehensive security posture. By understanding the strengths and weaknesses of each tool, security teams can make informed decisions about how to best utilize them to detect and respond to security threats in a timely and effective manner.
Java adoption has shifted from version 1.8 to at least Java 17. Concurrently, Spring Boot has advanced from version 2.x to 3.2.2. The springdoc project has transitioned from the older library 'springdoc-openapi-ui' to 'springdoc-openapi-starter-webmvc-ui' for its functionality. These updates mean that readers relying on older articles may find themselves years behind in these technologies. The author has updated this article so that readers are using the latest versions and don't struggle with outdated information during migration. This is part one of a three-part series. You can check out the other articles below. OpenAPI 3 Documentation With Spring Boot Doing More With Springdoc OpenAPI Extending Swagger and Springdoc Open API In this tutorial, we are going to try out a Spring Boot Open API 3-enabled REST project and explore some of its capabilities. The springdoc-openapi Java library has quickly become very compelling. We are going to refer to Building a RESTful Web Service and springdoc-openapi v2.5.0. Prerequisites Java 17.x Maven 3.x Steps Start by creating a Maven JAR project. Below, you will see the pom.xml to use: XML <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>3.2.2</version> <relativePath ></relativePath> <!-- lookup parent from repository --> </parent> <groupId>com.example</groupId> <artifactId>sample</artifactId> <version>0.0.1</version> <name>sample</name> <description>Demo project for Spring Boot with openapi 3 documentation</description> <properties> <java.version>17</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-validation</artifactId> </dependency> <dependency> <groupId>org.springdoc</groupId> <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId> <version>2.5.0</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> Note the "springdoc-openapi-starter-webmvc-ui" dependency. Now, let's create a small Java bean class. 
Java package sample; import org.hibernate.validator.constraints.CreditCardNumber; import jakarta.validation.constraints.Email; import jakarta.validation.constraints.Max; import jakarta.validation.constraints.Min; import jakarta.validation.constraints.NotBlank; import jakarta.validation.constraints.NotNull; import jakarta.validation.constraints.Pattern; import jakarta.validation.constraints.Size; public class Person { private long id; private String firstName; @NotNull @NotBlank private String lastName; @Pattern(regexp = ".+@.+\\..+", message = "Please provide a valid email address" ) private String email; @Email() private String email1; @Min(18) @Max(30) private int age; @CreditCardNumber private String creditCardNumber; public String getCreditCardNumber() { return creditCardNumber; } public void setCreditCardNumber(String creditCardNumber) { this.creditCardNumber = creditCardNumber; } public long getId() { return id; } public void setId(long id) { this.id = id; } public String getEmail1() { return email1; } public void setEmail1(String email1) { this.email1 = email1; } @Size(min = 2) public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } } - This is an example of a Java bean. Now, let's create a controller. Java package sample; import org.springframework.web.bind.annotation.RequestBody; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestMethod; import org.springframework.web.bind.annotation.RestController; import io.swagger.v3.oas.annotations.media.Content; import io.swagger.v3.oas.annotations.media.ExampleObject; import jakarta.validation.Valid; @RestController public class PersonController { @RequestMapping(path = "/person", method = RequestMethod.POST) @io.swagger.v3.oas.annotations.parameters.RequestBody(required = true, content = @Content(examples = { @ExampleObject(value = INVALID_REQUEST, name = "invalidRequest", description = "Invalid Request"), @ExampleObject(value = VALID_REQUEST, name = "validRequest", description = "Valid Request") })) public Person person(@Valid @RequestBody Person person) { return person; } private static final String VALID_REQUEST = """ { "id": 0, "firstName": "string", "lastName": "string", "email": "abc@abc.com", "email1": "abc@abc.com", "age": 20, "creditCardNumber": "4111111111111111" }"""; private static final String INVALID_REQUEST = """ { "id": 0, "firstName": "string", "lastName": "string", "email": "abcabc.com", "email1": "abcabc.com", "age": 17, "creditCardNumber": "411111111111111" }"""; } - Above is a sample REST Controller. Side Note: Normally I don't like to clutter already annotation-cluttered code with additional annotations, but I do think having ready-made examples like these can be useful. Another reason that forced me to do this was the default examples now generated from Swagger UI appear to be generating some confusing text when using @Pattern. It appears to be a Spring UI issue and not a Springdoc issue. Let's make some entries in src\main\resources\application.properties. 
Properties files application-description=@project.description@ application-version=@project.version@ logging.level.org.springframework.boot.autoconfigure=ERROR # server.error.include-binding-errors is now needed if we # want to display the errors as shown in this article # this can also be avoided in other ways as we will see # in later articles server.error.include-binding-errors=always The above entries will pass on Maven build-related information to the OpenAPI documentation and also include the new server.error.include-binding-errors property. Finally, let's write the Spring Boot application class: Java package sample; import org.springframework.beans.factory.annotation.Value; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.context.annotation.Bean; import io.swagger.v3.oas.models.OpenAPI; import io.swagger.v3.oas.models.info.Info; import io.swagger.v3.oas.models.info.License; @SpringBootApplication public class SampleApplication { public static void main(String[] args) { SpringApplication.run(SampleApplication.class, args); } @Bean public OpenAPI customOpenAPI(@Value("${application-description}") String appDescription, @Value("${application-version}") String appVersion) { return new OpenAPI() .info(new Info() .title("sample application API") .version(appVersion) .description(appDescription) .termsOfService("http://swagger.io/terms/") .license(new License().name("Apache 2.0").url("http://springdoc.org"))); } } - Also, note how the API version and description are being leveraged from application.properties. At this stage, this is what the project looks like in Eclipse: The project contents are above. Next, execute mvn clean package from the command prompt or terminal. Then, execute java -jar target\sample-0.0.1.jar. You can also launch the application by running the SampleApplication.java class from your IDE. Now, let's visit the Swagger UI — http://localhost:8080/swagger-ui.html. Click the green Post button and expand the > symbol on the right of Person under Schemas. Let's expand the last schemas section a bit more: The nice thing is how the contract is automatically detailed leveraging JSR-303 annotations on the model. It covers many of the important annotations out of the box and documents them. However, I did not see out-of-the-box support for @jakarta.validation.constraints.Email and @org.hibernate.validator.constraints.CreditCardNumber at this point. The issue is that they are not documented in the generated Swagger specs, but those constraints are functional. We will discuss more on this in the subsequent article. For completeness, let's post a request. Press the Try it out button. Press the blue Execute button. Let's feed in a valid input by copying the below or by selecting the valid Input drop-down. JSON { "id": 0, "firstName": "string", "lastName": "string", "email": "abc@abc.com", "email1": "abc@abc.com", "age": 20, "creditCardNumber": "4111111111111111" } Let's feed that valid input into the Request body section. (We can also select "validRequest" from the Examples dropdown as shown below.) Upon pressing the blue Execute button, we see the below: This was only a brief introduction to the capabilities of the dependency: XML <dependency> <groupId>org.springdoc</groupId> <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId> <version>2.5.0</version> </dependency> Troubleshooting Tips Ensure prerequisites.
If using the Eclipse IDE, we might need to do a Maven update on the project after creating all the files. In the Swagger UI, if you are unable to access the “Schema” definitions link, it might be because you need to come out of the “try it out” mode. Click on one or two Cancel buttons that might be visible. Source code Git Clone URL, Branch: springdoc-openapi-intro-update1.
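One last consumption tip that complements the Swagger UI walk-through above: springdoc also serves the raw OpenAPI document, by default under /v3/api-docs (configurable via the springdoc.api-docs.path property), which is handy for feeding code generators or API gateways. For example:

Shell
# JSON form of the generated OpenAPI 3 document (default springdoc path)
curl http://localhost:8080/v3/api-docs
# YAML form
curl http://localhost:8080/v3/api-docs.yaml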
As we delve into the dynamic world of Kubernetes, understanding its core components and functionalities becomes pivotal for anyone looking to make a mark in the cloud computing and containerization arena. Among these components, static pods hold a unique place, often overshadowed by more commonly discussed resources like deployments and services. In this comprehensive guide, we will unveil the power of static pods, elucidating their utility, operational principles, and how they can be an asset in your Kubernetes arsenal. Understanding Static Pods Static pods are Kubernetes pods that are managed directly by the kubelet daemon on a specific node, without the API server observing them. Unlike other pods that are controlled by the Kubernetes API server, static pods are defined by placing their configuration files directly on a node's filesystem, which the kubelet periodically scans and ensures that the pods defined in these configurations are running. Why Use Static Pods? Static pods serve several critical functions in a Kubernetes environment: Cluster Bootstrapping They are essential for bootstrapping a Kubernetes cluster before the API server is up and running. Since they do not depend on the API server, they can be used to deploy the control plane components as static pods. Node-Level System Pods Static pods are ideal for running node-level system components, ensuring that these essential services remain running, even if the Kubernetes API server is unreachable. Simplicity and Reliability For simpler deployments or edge environments where high availability is not a primary concern, static pods offer a straightforward and reliable deployment option. Creating Your First Static Pod Let’s walk through the process of creating a static pod. You'll need access to a Kubernetes node to follow along. 1. Access Your Kubernetes Node First, SSH into your Kubernetes node: ssh your_username@your_kubernetes_node 2. Create a Pod Definition File Create a simple pod definition file. Let’s deploy an Nginx static pod as an example. Save the following configuration in /etc/kubernetes/manifests/nginx-static-pod.yaml: apiVersion: v1 kind: Pod metadata: name: nginx-static-pod labels: role: myrole spec: containers: - name: nginx image: nginx ports: - containerPort: 80 3. Configure the kubelet to Use This Directory Ensure the kubelet is configured to monitor the /etc/kubernetes/manifests directory for pod manifests. This is typically set by the --pod-manifest-path kubelet command-line option. 4. Verify the Pod Is Running After a few moments, use the docker ps command (or crictl ps if you're using CRI-O or containerd) to check that the Nginx container is running: docker ps | grep nginx Or, if your cluster allows it, you can check from the Kubernetes API server with: kubectl get pods --all-namespaces | grep nginx-static-pod Note that while you can see the static pod through the API server, you cannot manage it (delete, scale, etc.) through the API server. Advantages of Static Pods Simplicity: Static pods are straightforward to set up and manage on a node-by-node basis. Self-sufficiency: They can operate independently of the Kubernetes API server, making them resilient in scenarios where the API server is unavailable. Control plane bootstrapping: Static pods are instrumental in the initial setup of a Kubernetes cluster, particularly for deploying control plane components. 
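As a side note to step 3 above: on many current clusters (for example, those bootstrapped with kubeadm), the manifest directory is set through the kubelet configuration file rather than the command-line flag. A representative snippet is shown below; the file location varies by distribution, with /var/lib/kubelet/config.yaml being a common default.

# Excerpt from the kubelet configuration file (often /var/lib/kubelet/config.yaml)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests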
Considerations and Best Practices While static pods offer simplicity and independence from the Kubernetes API server, they also come with considerations that should not be overlooked: Cluster management: Static pods are not managed by the API server, which means they do not benefit from some of the orchestration features like scaling, lifecycle management, and health checks. Deployment strategy: They are best used for node-specific tasks or cluster bootstrapping, rather than general application deployment. Monitoring and logging: Ensure that your node-level monitoring and logging tools are configured to include static pods. Conclusion Static pods, despite their simplicity, play a critical role in the Kubernetes ecosystem. They offer a reliable method for running system-level services directly on nodes, independent of the cluster's control plane. By understanding how to deploy and manage static pods, you can ensure your Kubernetes clusters are more robust and resilient. Whether you're bootstrapping a new cluster or managing node-specific services, static pods are a tool worth mastering. This beginner's guide aims to demystify static pods and highlight their importance within Kubernetes architectures. As you advance in your Kubernetes journey, remember that the power of Kubernetes lies in its flexibility and the diversity of options it offers for running containerized applications. Static pods are just one piece of the puzzle, offering a unique blend of simplicity and reliability for specific use cases. I encourage you to explore static pods further, experiment with deploying different applications as static pods, and integrate them into your Kubernetes strategy where appropriate. Happy Kubernetes-ing!
Readers of my publications are likely familiar with the idea of employing an API First approach to developing microservices. Countless times I have realized the benefits of describing the anticipated URIs and underlying object models before any development begins. In my 30+ years of navigating technology, however, I’ve come to expect the realities of alternate flows. In other words, I fully expect there to be situations where API First is just not possible. For this article, I wanted to walk through an example of how teams producing microservices can still be successful at providing an OpenAPI specification for others to consume without manually defining an openapi.json file. I also wanted to step outside my comfort zone and do this without using Java, .NET, or even JavaScript. Discovering FastAPI At the conclusion of most of my articles I often mention my personal mission statement: “Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else.” – J. Vester My point in this mission statement is to make myself accountable for making the best use of my time when trying to reach goals and objectives set at a higher level. Basically, if our focus is to sell more widgets, my time should be spent finding ways to make that possible – steering clear of challenges that have already been solved by existing frameworks, products, or services. I picked Python as the programming language for my new microservice. To date, 99% of the Python code I’ve written for my prior articles has been the result of either Stack Overflow Driven Development (SODD) or ChatGPT-driven answers. Clearly, Python falls outside my comfort zone. Now that I’ve level-set where things stand, I wanted to create a new Python-based RESTful microservice that adheres to my personal mission statement with minimal experience in the source language. That’s when I found FastAPI. FastAPI has been around since 2018 and is a framework focused on delivering RESTful APIs using Python-type hints. The best part about FastAPI is the ability to automatically generate OpenAPI 3 specifications without any additional effort from the developer’s perspective. The Article API Use Case For this article, the idea of an Article API came to mind, providing a RESTful API that allows consumers to retrieve a list of my recently published articles. 
To keep things simple, let’s assume a given Article contains the following properties: id : Simple, unique identifier property (number) title : The title of the article (string) url : The full URL to the article (string) year : The year the article was published (number) The Article API will include the following URIs: GET /articles : Will retrieve a list of articles GET /articles/{article_id} : Will retrieve a single article by the id property POST /articles : Adds a new article FastAPI in Action In my terminal, I created a new Python project called fast-api-demo and then executed the following commands: Shell $ pip install --upgrade pip $ pip install fastapi $ pip install uvicorn I created a new Python file called api.py and added some imports, plus established an app variable: Python from fastapi import FastAPI, HTTPException from pydantic import BaseModel app = FastAPI() if __name__ == "__main__": import uvicorn uvicorn.run(app, host="localhost", port=8000) Next, I defined an Article object to match the Article API use case: Python class Article(BaseModel): id: int title: str url: str year: int With the model established, I needed to add the URIs…which turned out to be quite easy: Python # Route to add a new article @app.post("/articles") def create_article(article: Article): articles.append(article) return article # Route to get all articles @app.get("/articles") def get_articles(): return articles # Route to get a specific article by ID @app.get("/articles/{article_id}") def get_article(article_id: int): for article in articles: if article.id == article_id: return article raise HTTPException(status_code=404, detail="Article not found") To save myself from involving an external data store, I decided to add some of my recently published articles programmatically: Python articles = [ Article(id=1, title="Distributed Cloud Architecture for Resilient Systems: Rethink Your Approach To Resilient Cloud Services", url="https://dzone.com/articles/distributed-cloud-architecture-for-resilient-syste", year=2023), Article(id=2, title="Using Unblocked to Fix a Service That Nobody Owns", url="https://dzone.com/articles/using-unblocked-to-fix-a-service-that-nobody-owns", year=2023), Article(id=3, title="Exploring the Horizon of Microservices With KubeMQ's New Control Center", url="https://dzone.com/articles/exploring-the-horizon-of-microservices-with-kubemq", year=2024), Article(id=4, title="Build a Digital Collectibles Portal Using Flow and Cadence (Part 1)", url="https://dzone.com/articles/build-a-digital-collectibles-portal-using-flow-and-1", year=2024), Article(id=5, title="Build a Flow Collectibles Portal Using Cadence (Part 2)", url="https://dzone.com/articles/build-a-flow-collectibles-portal-using-cadence-par-1", year=2024), Article(id=6, title="Eliminate Human-Based Actions With Automated Deployments: Improving Commit-to-Deploy Ratios Along the Way", url="https://dzone.com/articles/eliminate-human-based-actions-with-automated-deplo", year=2024), Article(id=7, title="Vector Tutorial: Conducting Similarity Search in Enterprise Data", url="https://dzone.com/articles/using-pgvector-to-locate-similarities-in-enterpris", year=2024), Article(id=8, title="DevSecOps: It's Time To Pay for Your Demand, Not Ingestion", url="https://dzone.com/articles/devsecops-its-time-to-pay-for-your-demand", year=2024), ] Believe it or not, that completes the development for the Article API microservice. 
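Before wiring anything up to the cloud, the same endpoints can also be exercised from an automated test. The snippet below is a minimal sketch using FastAPI's TestClient (which relies on the httpx package); the test file name and assertions are my own additions rather than part of the original project.

Python
# test_api.py -- hypothetical test module for the Article API
from fastapi.testclient import TestClient

from api import app  # assumes the service lives in api.py, as described above

client = TestClient(app)


def test_get_article_by_id():
    response = client.get("/articles/1")
    assert response.status_code == 200
    assert response.json()["id"] == 1


def test_unknown_article_returns_404():
    assert client.get("/articles/999").status_code == 404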
For a quick sanity check, I spun up my API service locally: Shell $ python api.py INFO: Started server process [320774] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://localhost:8000 (Press CTRL+C to quit) Then, in another terminal window, I sent a curl request (and piped it to json_pp): Shell $ curl localhost:8000/articles/1 | json_pp { "id": 1, "title": "Distributed Cloud Architecture for Resilient Systems: Rethink Your Approach To Resilient Cloud Services", "url": "https://dzone.com/articles/distributed-cloud-architecture-for-resilient-syste", "year": 2023 } Preparing To Deploy Rather than just run the Article API locally, I thought I would see how easily I could deploy the microservice. Since I had never deployed a Python microservice to Heroku before, I felt like now would be a great time to try. Before diving into Heroku, I needed to create a requirements.txt file to describe the dependencies for the service. To do this, I installed and executed pipreqs: Shell $ pip install pipreqs $ pipreqs This created a requirements.txt file for me, with the following information: Plain Text fastapi==0.110.1 pydantic==2.6.4 uvicorn==0.29.0 I also needed a file called Procfile, which tells Heroku how to spin up my microservice with uvicorn. Its contents looked like this: Shell web: uvicorn api:app --host=0.0.0.0 --port=${PORT} Let’s Deploy to Heroku For those of you who are new to Python (as I am), I used the Getting Started on Heroku with Python documentation as a helpful guide. Since I already had the Heroku CLI installed, I just needed to log in to the Heroku ecosystem from my terminal: Shell $ heroku login I made sure all of my updates were checked in to my repository on GitLab. Next, the creation of a new app in Heroku can be accomplished using the CLI via the following command: Shell $ heroku create The CLI responded with a unique app name, along with the URL for the app and the git-based repository associated with the app: Shell Creating app... done, powerful-bayou-23686 https://powerful-bayou-23686-2d5be7cf118b.herokuapp.com/ | https://git.heroku.com/powerful-bayou-23686.git Please note – by the time you read this article, my app will no longer be online. Check this out.
When I issue a git remote command, I can see that a remote was automatically added to the Heroku ecosystem: Shell $ git remote heroku origin To deploy the fast-api-demo app to Heroku, all I have to do is use the following command: Shell $ git push heroku main With everything set, I was able to validate that my new Python-based service is up and running in the Heroku dashboard: With the service running, it is possible to retrieve the Article with id = 1 from the Article API by issuing the following curl command: Shell $ curl --location 'https://powerful-bayou-23686-2d5be7cf118b.herokuapp.com/articles/1' The curl command returns a 200 OK response and the following JSON payload: JSON { "id": 1, "title": "Distributed Cloud Architecture for Resilient Systems: Rethink Your Approach To Resilient Cloud Services", "url": "https://dzone.com/articles/distributed-cloud-architecture-for-resilient-syste", "year": 2023 } Delivering OpenAPI 3 Specifications Automatically Leveraging FastAPI’s built-in OpenAPI functionality allows consumers to receive a fully functional v3 specification by navigating to the automatically generated /docs URI: Shell https://powerful-bayou-23686-2d5be7cf118b.herokuapp.com/docs Calling this URL returns the Article API microservice using the widely adopted Swagger UI: For those looking for an openapi.json file to generate clients to consume the Article API, the /openapi.json URI can be used: Shell https://powerful-bayou-23686-2d5be7cf118b.herokuapp.com/openapi.json For my example, the JSON-based OpenAPI v3 specification appears as shown below: JSON { "openapi": "3.1.0", "info": { "title": "FastAPI", "version": "0.1.0" }, "paths": { "/articles": { "get": { "summary": "Get Articles", "operationId": "get_articles_articles_get", "responses": { "200": { "description": "Successful Response", "content": { "application/json": { "schema": { } } } } } }, "post": { "summary": "Create Article", "operationId": "create_article_articles_post", "requestBody": { "content": { "application/json": { "schema": { "$ref": "#/components/schemas/Article" } } }, "required": true }, "responses": { "200": { "description": "Successful Response", "content": { "application/json": { "schema": { } } } }, "422": { "description": "Validation Error", "content": { "application/json": { "schema": { "$ref": "#/components/schemas/HTTPValidationError" } } } } } } }, "/articles/{article_id}": { "get": { "summary": "Get Article", "operationId": "get_article_articles__article_id__get", "parameters": [ { "name": "article_id", "in": "path", "required": true, "schema": { "type": "integer", "title": "Article Id" } } ], "responses": { "200": { "description": "Successful Response", "content": { "application/json": { "schema": { } } } }, "422": { "description": "Validation Error", "content": { "application/json": { "schema": { "$ref": "#/components/schemas/HTTPValidationError" } } } } } } } }, "components": { "schemas": { "Article": { "properties": { "id": { "type": "integer", "title": "Id" }, "title": { "type": "string", "title": "Title" }, "url": { "type": "string", "title": "Url" }, "year": { "type": "integer", "title": "Year" } }, "type": "object", "required": [ "id", "title", "url", "year" ], "title": "Article" }, "HTTPValidationError": { "properties": { "detail": { "items": { "$ref": "#/components/schemas/ValidationError" }, "type": "array", "title": "Detail" } }, "type": "object", "title": "HTTPValidationError" }, "ValidationError": { "properties": { "loc": { "items": { "anyOf": [ { "type": "string" }, { "type": "integer" 
} ] }, "type": "array", "title": "Location" }, "msg": { "type": "string", "title": "Message" }, "type": { "type": "string", "title": "Error Type" } }, "type": "object", "required": [ "loc", "msg", "type" ], "title": "ValidationError" } } } } As a result, the following specification can be used to generate clients in a number of different languages via OpenAPI Generator. Conclusion At the start of this article, I was ready to go to battle and face anyone not interested in using an API First approach. What I learned from this exercise is that a product like FastAPI can help define and produce a working RESTful microservice quickly while also including a fully consumable OpenAPI v3 specification…automatically. Turns out, FastAPI allows teams to stay focused on their goals and objectives by leveraging a framework that yields a standardized contract for others to rely on. As a result, another path has emerged to adhere to my personal mission statement. Along the way, I used Heroku for the first time to deploy a Python-based service. This turned out to require little effort on my part, other than reviewing some well-written documentation. So another mission statement bonus needs to be mentioned for the Heroku platform as well. If you are interested in the source code for this article you can find it on GitLab. Have a really great day!
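(A brief postscript to the OpenAPI Generator mention above: once the specification has been downloaded, a client can be produced from it with the OpenAPI Generator CLI. The generator name and output directory below are arbitrary choices of mine, and, as the author notes, the Heroku URL will no longer be live by the time you read this.)

Shell
# Download the generated specification and produce a TypeScript client from it
curl -o openapi.json https://powerful-bayou-23686-2d5be7cf118b.herokuapp.com/openapi.json
openapi-generator-cli generate -i openapi.json -g typescript-fetch -o ./article-api-client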
Bedrock is the Amazon service that democratizes access to up-to-date Foundation Models (FMs) made available by some of the leading AI vendors. The list is quite impressive and includes, but isn't limited to: Titan Claude Mistral AI Llama2 ... Depending on your AWS region, some of these FMs might not be available. For example, as of this writing, in my region, which is eu-west-3, the only available FMs are Titan and Mistral AI, but things are changing very fast. So, what's the point of using a service which, apparently, does nothing more than give you access to other FMs? Well, the added value of Amazon Bedrock is that it exposes all these FMs via APIs, giving you the opportunity to easily integrate generative AI into your applications through ubiquitous techniques like serverless functions or REST. This is what this post tries to demonstrate. So, let's go! A Generative AI Gateway The project chosen to illustrate this post is a Generative AI Gateway, where the user is given access to a number of FMs, each one specialized in a different type of use case: for example, text generation, conversational interfaces, text summarization, image generation, etc. The diagram below shows the general architecture of the sample application. The sample application architecture diagram As you can see, the sample application consists of the following components: A web front-end that allows the user to select an FM, configure its parameters (the temperature, the max tokens, etc.), and start a dialog with it, for example by asking questions. Our application being a Quarkus one, we are using the quarkus-primefaces extension here. An AWS REST Gateway that exposes dedicated endpoints, depending on the chosen FM. Here we're using the quarkus-amazon-lambda-rest extension which, as you'll see soon, is able to automatically generate the SAM (Serverless Application Model) template required to deploy the REST Gateway to AWS. Several REST endpoints processing POST requests and invoking the chosen FM via a Bedrock client. The FM responses are brought back to our web application through the REST Gateway. Let's now look at the implementation in greater detail. The REST Gateway The module bedrock-gateway-api of our Maven multi-module project implements this component. It consists of a Quarkus RESTEasy API exposing several endpoints that process POST requests, taking the user interaction as input parameters and returning the FM responses. The input parameters are strings and, in cases where the user requests involve a really large amount of text, input files. The endpoints process these POST requests by converting the associated input into an FM-specific syntax, including the following parameters: The temperature: a real number between 0 and 1 that influences the FM's predictability. A lower value results in a more predictable output, while a higher one generates a more random response. The top P: a real number between 0 and 1 that limits the response to the most likely tokens in the distribution. A lower value results in a more limited set of choices for the response. The max tokens: an integer value representing the maximum number of tokens that the FM will generate for any given request. The Bedrock documentation provides all the remaining details concerning the parameters above.
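To make this parameter mapping more concrete before diving into the actual implementation, here is a minimal, hypothetical sketch of how an endpoint could assemble the temperature, top P, and max tokens values into a Bedrock invocation with the AWS SDK for Java v2. The model ID and the Titan-style request body layout are illustrative assumptions, not the project's code: Java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeAsyncClient;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelRequest;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelResponse;

public class TitanInvocationSketch {

    private final BedrockRuntimeAsyncClient client =
        BedrockRuntimeAsyncClient.builder().region(Region.EU_WEST_3).build();

    // Translates the prompt and the three tuning parameters into the JSON syntax
    // expected by a Titan text model (assumed layout), then invokes the model.
    // Note: a real implementation should also JSON-escape the prompt.
    public String invoke(String prompt, double temperature, double topP, int maxTokens) {
        String body = """
            {"inputText": "%s",
             "textGenerationConfig": {"temperature": %s, "topP": %s, "maxTokenCount": %d}}
            """.formatted(prompt, temperature, topP, maxTokens);
        InvokeModelRequest request = InvokeModelRequest.builder()
            .modelId("amazon.titan-text-express-v1") // hypothetical model ID
            .contentType("application/json")
            .body(SdkBytes.fromUtf8String(body))
            .build();
        // Asynchronous client: join() is used here only to keep the sketch short.
        InvokeModelResponse response = client.invokeModel(request).join();
        return response.body().asUtf8String(); // raw JSON answer returned by the FM
    }
}
Each FM family expects a slightly different body layout, which is one reason the gateway exposes a dedicated endpoint per model.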
The Bedrock client used to interact with the FM service is instantiated as shown below: Java private final BedrockRuntimeAsyncClient client = BedrockRuntimeAsyncClient.builder().region(Region.EU_WEST_3).build(); This requires the following Maven artifact: XML <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>bedrockruntime</artifactId> </dependency> There is a synchronous and an asynchronous Bedrock client and, given the relatively high latency generally associated with an FM invocation, we have chosen the latter. The Web Front-End The web front-end is a simple Jakarta Faces application implemented using the PrimeFaces library, with the Facelets notation used to define the layouts. If this architecture choice surprises readers more accustomed to JavaScript/TypeScript-based front-ends, please have a look at this article. The only special thing to notice is the way it uses the Microprofile JAX-RS Client implementation provided by Quarkus to call the AWS REST Gateway. Java @RegisterRestClient @Path("/bedrock") @Produces(MediaType.TEXT_PLAIN) @Consumes(MediaType.APPLICATION_JSON) public interface BedrockAiEndpoint { @POST @Path("mistral2") Response callMistralFm (BedrockAiInputParam bedrockAiInputParam); @POST @Path("titan2") Response callTitanFm (BedrockAiInputParam bedrockAiInputParam); } This interface is all that's required; Quarkus will generate the associated client implementation class from it. Running the Sample Application The application can be run in two ways: executing the AWS REST Gateway and the associated AWS Lambda endpoints locally, or executing them in the cloud. Running Locally The shell script named run-local.sh runs the AWS REST Gateway together with the associated AWS Lambda endpoints locally. Here is the code: Shell #!/bin/bash mvn -Durl=http://localhost:3000 clean install sed -i 's/java11/java17/g' bedrock-gateway-api/target/sam.jvm.yaml sam local start-api -t ./bedrock-gateway-api/target/sam.jvm.yaml --log-file ./bedrock-gateway-api/sam.log & mvn -DskipTests=false failsafe:integration-test docker run --name bedrock -p 8082:8082 --rm --network host nicolasduminil/bedrock-gateway-web:1.0-SNAPSHOT ./cleanup-local.sh The first thing we need to do here is build the application by running the Maven command. Among other artifacts, this produces a Docker image named nicolasduminil/bedrock-gateway-web, dedicated to running the web front-end. It also results in Quarkus generating the SAM template (target/sam.jvm.yaml) that creates the AWS CloudFormation stack containing the AWS REST Gateway together with the AWS Lambda functions backing the endpoints. For some reason, the quarkus-amazon-lambda-rest extension used for this purpose configures the runtime as Java 11 and, even after contacting support, I didn't find any way to change that. Accordingly, the sed command in the script modifies the runtime to Java 17. Then, the SAM CLI start-api command runs the gateway with the required endpoints locally. Next, we are in a position to run the integration tests via the Maven failsafe plugin. We couldn't do this during the initial build because the local stack wasn't deployed yet. Last but not least, the script starts a Docker container running the nicolasduminil/bedrock-gateway-web image created previously by the quarkus-container-image-jib extension. This is our front end.
Now, in order to test it, you can jump to the next section, which explains how. Running in the Cloud The script named `deploy.sh`, shown below, deploys our application in the cloud: Shell #!/bin/bash mvn -pl bedrock-gateway-api -am clean install sed -i 's/java11/java17/g' bedrock-gateway-api/target/sam.jvm.yaml RANDOM=$$ BUCKET_NAME=bedrock-gateway-bucket-$RANDOM STACK_NAME=bedrock-gateway-stack echo $BUCKET_NAME > bucket-name.txt aws s3 mb s3://$BUCKET_NAME sam deploy -t bedrock-gateway-api/src/main/resources/template.yaml --s3-bucket $BUCKET_NAME --stack-name $STACK_NAME --capabilities CAPABILITY_IAM API_ENDPOINT=$(aws cloudformation describe-stacks --stack-name $STACK_NAME --query 'Stacks[0].Outputs[0].OutputValue' --output text) mvn -pl bedrock-gateway-web -Durl=$API_ENDPOINT clean install docker run --name bedrock -p 8082:8082 --rm --network host nicolasduminil/bedrock-gateway-web:1.0-SNAPSHOT This time things are a bit more complicated. The Maven build in the script's first line uses the -pl switch to select only the bedrock-gateway-api module. This is because, in this case, we don't know in advance the AWS REST Gateway URL, which the other module, bedrock-gateway-web, needs in order to configure its Microprofile JAX-RS client. Next, the sed command serves the same purpose as previously. In order to deploy our stack in the cloud, we need an S3 bucket and, since S3 bucket names have to be unique worldwide, we generate one randomly and store its name in a text file so that we can find it later, when it comes time to destroy the stack. Now, it's time to deploy our CloudFormation stack. Please notice the way we capture the associated URL using the --query and --output options. This is the moment to build the bedrock-gateway-web module, as we now have the AWS REST Gateway URL, which we pass as a system property via the -D option of Maven. At this point, we only have to start our Docker container and begin testing. Testing the Application In order to test the application, be it locally or in the cloud, proceed as follows: Clone the repository: Shell $ git clone https://github.com/nicolasduminil/bedrock-gateway.git cd into the root directory: Shell $ cd bedrock-gateway Run the start script (run-local.sh or deploy.sh). The execution might take a while, especially the first time you run it. Point your preferred browser to http://localhost:8082. You'll be presented with the screen below: Using the menu bar, select the Titan sandbox. A new screen will be presented to you, as shown below. Using the sliders, configure the parameters Temperature, Top P, and Max tokens as you wish. Then type your question for the chosen FM in the text area labeled Prompt. Its response will be displayed in the rightmost text area labeled Response. Please try different combinations of parameters to notice the differences between the two FM responses. And if you're testing in the cloud, don't forget to run the script cleanup.sh when finished, so as to avoid being billed. Have fun!
If you're eager to learn or understand decision trees, I invite you to explore this article. Alternatively, if decision trees aren't your current focus, you may opt to scroll through social media. About Decision Trees Figure 1: Simple Decision tree The image above shows an example of a simple decision tree. Decision trees are tree-shaped diagrams used for making decisions based on a series of logical conditions. In a decision tree, each node represents a decision statement, and the tree proceeds to make a decision based on whether the given statement is true or false. There are two main types of decision trees: Classification trees and Regression trees. A Classification tree assigns the output of the decision statements to discrete categories using if-else logical conditions, whereas a Regression tree predicts numeric values. In Figure 2, the topmost node of a decision tree is called the Root node, while the nodes following the root node are referred to as Internal nodes or branches. These branches are characterized by arrows pointing both towards and away from them. At the bottom of the tree are the Leaf nodes, which carry the final classification or decision of the tree. Leaf nodes are identifiable by arrows pointing to them, but not away from them. Figure 2: Nodes of a Decision tree Primary Objective of Decision Trees The primary objective of a decision tree is to partition the given data into subsets in a manner that maximizes the purity of the outcomes. Advantages of Decision Trees Simplicity: Decision trees are straightforward to understand, interpret, and visualize. Minimal data preparation: They require minimal effort for data preparation compared to other algorithms. Handling of data types: Decision trees can handle both numeric and categorical data efficiently. Robustness to non-linear parameters: Non-linear parameters have minimal impact on the performance of decision trees. Disadvantages of Decision Trees Overfitting: Decision trees may overfit the training data, capturing noise and leading to poor generalization on unseen data. High variance: The model may become unstable with small variations in the training data, resulting in high variance. Low bias, high complexity: Highly complex decision trees have low bias, which makes them prone to generalizing poorly to new data. Important Terms in Decision Trees Below are important terms that are also used for measuring impurity in decision trees: 1. Entropy Entropy is a measure of randomness or unpredictability in a dataset. It quantifies the impurity of the dataset. A dataset with high entropy contains a mix of different classes or categories, making predictions more uncertain. Example: Consider a dataset containing data from various animals, as in Figure 3. If the dataset includes a diverse range of animals with no clear patterns or distinctions, it has high entropy. Figure 3: Animal datasets 2. Information Gain Information gain is the measure of the decrease in entropy after splitting the dataset based on a particular attribute or condition. It quantifies the effectiveness of a split in reducing uncertainty. Example: When we split the data into subgroups based on specific conditions (e.g., features of the animals), as in Figure 3, we calculate information gain by subtracting the weighted entropy of the resulting subgroups from the entropy before the split. Higher information gain indicates a more effective split that results in greater homogeneity within subgroups. 3.
Gini Impurity Gini impurity is another measure of impurity or randomness in a dataset. It calculates the probability of misclassifying a randomly chosen element if it were randomly labeled according to the distribution of labels in the dataset. In decision trees, Gini impurity is often used as an alternative to entropy for evaluating splits. Example: Suppose we have a dataset with multiple classes or categories. The Gini impurity is high when the classes are evenly distributed or when there is no clear separation between classes. A low Gini impurity indicates that the dataset is relatively pure, with most elements belonging to the same class. (A small standalone computation of these three impurity measures is sketched at the end of this article.) Classifications and Variations Implementation in Python The following implementation is used to predict the LUNG_CANCER outcome for patients in a survey dataset. 1. Importing necessary libraries for data analysis and visualization in Python: Python import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # to ensure plots are displayed inline in Notebook %matplotlib inline # Set Seaborn style for plots sns.set_style("whitegrid") # Set default Matplotlib style plt.style.use("fivethirtyeight") 2. Loading the data from the CSV file: Python import pandas as pd # Load the data from the CSV file df = pd.read_csv('survey_lung_cancer.csv') Python df.head() # Displaying first five rows of the dataframe EDA (Exploratory Data Analysis): Python sns.countplot(x='LUNG_CANCER', data=df) # Count plot using Seaborn # to visualize the distribution of values in "LUNG_CANCER" column Python # title AGE from matplotlib import pyplot as plt df['AGE'].plot(kind='hist', bins=20, title='AGE') plt.gca().spines[['top', 'right',]].set_visible(False) 3. Iterating through the columns, identifying categorical columns, appending them to a list, and encoding the target column as numeric codes: Python categorical_col = [] for column in df.columns: if df[column].dtype == object and len(df[column].unique()) <= 50: categorical_col.append(column) df['LUNG_CANCER'] = df.LUNG_CANCER.astype("category").cat.codes 4. Removing "LUNG_CANCER" from the list of categorical columns before encoding the features: Python categorical_col.remove('LUNG_CANCER') 5. Encoding categorical variables using LabelEncoder: Python from sklearn.preprocessing import LabelEncoder # creating an instance of the LabelEncoder class # LabelEncoder will be used to transform categorical values into numerical labels label = LabelEncoder() for column in categorical_col: df[column] = label.fit_transform(df[column]) 6. Dataset splitting for machine learning with train_test_split: Python from sklearn.model_selection import train_test_split # X contains the features (all columns except 'LUNG_CANCER') # y contains the target variable ('LUNG_CANCER') from the DataFrame df X = df.drop('LUNG_CANCER', axis=1) y = df.LUNG_CANCER # performing the Split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) 7. Function for model evaluation and reporting: Overall, the function below serves as a convenient tool for assessing the performance of classification models and generating detailed reports, facilitating model evaluation and interpretation.
Python # import functions from scikit-learn for model evaluation from sklearn.metrics import accuracy_score, confusion_matrix, classification_report # clf: The classifier model to be evaluated # X_train, y_train: The features and target variable of the training set # X_test, y_test: The features and target variable of the testing set def print_score(clf, X_train, y_train, X_test, y_test, train=True): if train: pred = clf.predict(X_train) clf_report = pd.DataFrame(classification_report(y_train, pred, output_dict=True)) print("Train Result:\n_________________________") print(f"Accuracy Score: {accuracy_score(y_train, pred) * 100:.2f}%") print("_________________________") print(f"CLASSIFICATION REPORT:\n{clf_report}") print("_________________________________________________________________________") print(f"Confusion Matrix: \n {confusion_matrix(y_train, pred)}\n") elif train==False: pred = clf.predict(X_test) clf_report = pd.DataFrame(classification_report(y_test, pred, output_dict=True)) print("\nTest Result:\n_________________________") print(f"Accuracy Score: {accuracy_score(y_test, pred) * 100:.2f}%") print("_________________________") print(f"CLASSIFICATION REPORT:\n{clf_report}") print("_________________________________________________________________________") print(f"Confusion Matrix: \n {confusion_matrix(y_test, pred)}\n") Training and evaluation of decision tree classifier: Overall, this code provides a comprehensive evaluation of the decision tree classifier's performance on both the training and testing sets, including the accuracy score, classification report, and confusion matrix for each set. During the training process, the decision tree algorithm uses entropy and information gain to recursively split nodes and build a tree that maximizes information gain at each step. Python from sklearn.tree import DecisionTreeClassifier tree_clf = DecisionTreeClassifier(random_state=42) tree_clf.fit(X_train, y_train) print_score(tree_clf, X_train, y_train, X_test, y_test, train=True) print_score(tree_clf, X_train, y_train, X_test, y_test, train=False) The results above indicate that the decision tree classifier achieved high accuracy and performance on the training set, with some level of overfitting as evident from the difference in performance between the training and testing sets. While the classifier performed well on the testing set, there is room for improvement, particularly in terms of reducing false positives and false negatives. Further tuning of hyperparameters or exploring other algorithms may help improve generalization performance. 8. Visualization of decision tree classifier: Python # Importing Dependencies # Image is used to display images in the IPython environment # StringIO is used to create a file-like object in memory # export_graphviz is used to export the decision tree in Graphviz DOT format # pydot is used to interface with the Graphviz library from IPython.display import Image from six import StringIO from sklearn.tree import export_graphviz import pydot features = list(df.columns) features.remove("LUNG_CANCER") Python dot_data = StringIO() export_graphviz(tree_clf, out_file=dot_data, feature_names=features, filled=True) graph = pydot.graph_from_dot_data(dot_data.getvalue()) Image(graph[0].create_png()) 9. 
Training and evaluation of Random Forest classifier: Python from sklearn.ensemble import RandomForestClassifier # Creating an instance of the Random Forest classifier with n_estimators=100 # which specifies the number of decision trees in the forest rf_clf = RandomForestClassifier(n_estimators=100) rf_clf.fit(X_train, y_train) print_score(rf_clf, X_train, y_train, X_test, y_test, train=True) print_score(rf_clf, X_train, y_train, X_test, y_test, train=False) The code below generates heatmaps for the confusion matrices of both the training and testing sets. Note that cm_train and cm_test must be computed first; the snippet therefore starts by deriving them from the Random Forest predictions with scikit-learn's confusion_matrix (a step that was not shown in the original listing). The heatmaps use different shades to represent the counts in the confusion matrix. The diagonal elements (true positives and true negatives) will have higher values and appear lighter, while off-diagonal elements (false positives and false negatives) will have lower values and appear darker. Python import seaborn as sns import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix # Compute the confusion matrices from the Random Forest predictions cm_train = confusion_matrix(y_train, rf_clf.predict(X_train)) cm_test = confusion_matrix(y_test, rf_clf.predict(X_test)) # Create heatmap for training set plt.figure(figsize=(8, 6)) sns.heatmap(cm_train, annot=True, fmt='d', cmap='viridis', annot_kws={"size": 16}) plt.title('Confusion Matrix for Training Set') plt.xlabel('Predicted labels') plt.ylabel('True labels') plt.show() # Create heatmap for testing set plt.figure(figsize=(8, 6)) sns.heatmap(cm_test, annot=True, fmt='d', cmap='plasma', annot_kws={"size": 16}) plt.title('Confusion Matrix for Testing Set') plt.xlabel('Predicted labels') plt.ylabel('True labels') plt.show() XGBoost for Classification Python from xgboost import XGBClassifier from sklearn.metrics import accuracy_score # Instantiate XGBClassifier xgb_clf = XGBClassifier() # Train the classifier xgb_clf.fit(X_train, y_train) # Predict on the testing set y_pred = xgb_clf.predict(X_test) # Evaluate accuracy accuracy = accuracy_score(y_test, y_pred) print("Accuracy:", accuracy) The accuracy above indicates that the model's predictions align closely with the actual class labels, demonstrating its effectiveness in distinguishing between the classes. The code below generates a bar plot showing the relative importance of the top features in the XGBoost model. The importance is typically calculated based on metrics such as gain, cover, or frequency of feature usage across all trees in the ensemble. Python from xgboost import plot_importance import matplotlib.pyplot as plt # Plot feature importance plt.figure(figsize=(10, 6)) plot_importance(xgb_clf, max_num_features=10) # Specify the maximum number of features to show plt.show() 10. Plotting the first tree in the XGBoost model: Python from xgboost import plot_tree # Plot the first tree plt.figure(figsize=(10, 20)) plot_tree(xgb_clf, num_trees=0, rankdir='TB') # Specify the tree number to plot plt.show() Conclusion In conclusion, this article has shown how decision trees and their advanced variants, like Random Forest and XGBoost, offer powerful tools for classification and regression machine learning tasks. Through this journey, we've explored the fundamental concepts of decision trees, including entropy, information gain, and Gini impurity, which form the basis of their decision-making process. As we continue to delve deeper into the realm of machine learning, the versatility and effectiveness of decision trees and their variants underscore their significance in solving real-world problems across diverse domains.
Whether it's classifying medical conditions, predicting customer behavior, or optimizing business processes, decision trees remain a cornerstone in the arsenal of machine learning techniques, driving innovation and progress in the field.
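To make the impurity measures discussed earlier in this article (entropy, information gain, and Gini impurity) more tangible, here is a small, self-contained Python sketch. It is independent of the lung-cancer example above, and the toy label arrays are invented purely for illustration: Python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gini(labels):
    """Gini impurity of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())

def information_gain(parent, children):
    """Entropy of the parent minus the weighted entropy of its child subsets."""
    n = len(parent)
    weighted = sum(len(c) / n * entropy(c) for c in children)
    return entropy(parent) - weighted

# Toy example: a perfectly pure split removes all uncertainty.
parent = np.array(["cat", "cat", "dog", "dog"])
left, right = np.array(["cat", "cat"]), np.array(["dog", "dog"])

print(entropy(parent))                           # 1.0 bit (two evenly mixed classes)
print(gini(parent))                              # 0.5
print(information_gain(parent, [left, right]))   # 1.0 (maximum possible gain here)
A decision tree learner effectively evaluates many candidate splits this way and keeps the one with the highest gain (or lowest Gini impurity) at each node.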
Reactive programming has become increasingly popular in modern software development, especially for building scalable and resilient applications. Kotlin, with its expressive syntax and powerful features, has gained traction among developers for building reactive systems. In this article, we'll delve into reactive programming using Kotlin Coroutines with Spring Boot, comparing it with WebFlux, another, more complex option for reactive programming in the Spring ecosystem. Understanding Reactive Programming Reactive programming is a programming paradigm that deals with asynchronous data streams and the propagation of changes. It focuses on processing streams of data and reacting to changes as they occur. Reactive systems are inherently responsive, resilient, and scalable, making them well-suited for building modern applications that need to handle high concurrency and real-time data. Kotlin Coroutines Kotlin Coroutines provide a way to write asynchronous, non-blocking code in a sequential manner, making asynchronous programming easier to understand and maintain. Coroutines allow developers to write asynchronous code in a more imperative style, resembling synchronous code, which can lead to cleaner and more readable code. Kotlin Coroutines vs. WebFlux Spring Boot is a popular framework for building Java and Kotlin-based applications. It provides a powerful and flexible programming model for developing reactive applications. Spring Boot's support for reactive programming comes in the form of Spring WebFlux, which is built on top of Project Reactor, a reactive library for the JVM. Both Kotlin Coroutines and WebFlux offer solutions for building reactive applications, but they differ in their programming models and APIs. 1. Programming Model Kotlin Coroutines: Kotlin Coroutines use suspend functions and coroutine builders like launch and async to define asynchronous code. Coroutines provide a sequential, imperative style of writing asynchronous code, making it easier to understand and reason about. WebFlux: WebFlux uses a reactive programming model based on the Reactive Streams specification. It provides a set of APIs for working with asynchronous data streams, including Flux and Mono, which represent streams of multiple and single values, respectively. 2. Error Handling Kotlin Coroutines: Error handling in Kotlin Coroutines is done using standard try-catch blocks, making it similar to handling exceptions in synchronous code. WebFlux: WebFlux provides built-in support for error handling through operators like onErrorResume and onErrorReturn, allowing developers to handle errors in a reactive manner. 3. Integration With Spring Boot Kotlin Coroutines: Kotlin Coroutines can be seamlessly integrated with Spring Boot applications using the spring-boot-starter-web dependency and the kotlinx-coroutines-spring library. WebFlux: Spring Boot provides built-in support for WebFlux, allowing developers to easily create reactive RESTful APIs and integrate with other Spring components. Show Me the Code The Power of the Reactive Approach Over the Imperative Approach The code snippets that follow illustrate the implementation of a straightforward scenario using both the imperative and reactive paradigms. The scenario involves two stages, each taking one second to complete. In the imperative approach, the service responds in two seconds because it executes both stages sequentially. Conversely, in the reactive approach, the service responds in one second by executing the two stages in parallel.
However, even in this simple scenario, the reactive solution exhibits some complexity, which could escalate significantly in real-world business scenarios. Here's the Kotlin code for the base service: Kotlin @Service class HelloService { fun getGreetWord() : Mono<String> = Mono.fromCallable { Thread.sleep(1000) "Hello" } fun formatName(name:String) : Mono<String> = Mono.fromCallable { Thread.sleep(1000) name.replaceFirstChar { it.uppercase() } } } Imperative Solution Kotlin fun greet(name:String) :String { val greet = helloService.getGreetWord().block(); val formattedName = helloService.formatName(name).block(); return "$greet $formattedName" } Reactive Solution Kotlin fun greet(name:String) :Mono<String> { val greet = helloService.getGreetWord().subscribeOn(Schedulers.boundedElastic()) val formattedName = helloService.formatName(name).subscribeOn(Schedulers.boundedElastic()) return greet .zipWith(formattedName) .map { it -> "${it.t1} ${it.t2}" } } In the imperative solution, the greet function blocks on the completion of the getGreetWord and formatName methods sequentially before returning the concatenated result. In the reactive solution, on the other hand, the greet function uses reactive programming constructs to execute the tasks concurrently, using the zipWith operator to combine the results once both stages are complete. Simplifying Reactivity With Kotlin Coroutines To simplify the complexity inherent in reactive programming, Kotlin's coroutines provide an elegant solution. Below is a Kotlin coroutine example demonstrating the same scenario discussed earlier: Kotlin @Service class CoroutineHelloService() { suspend fun getGreetWord(): String { delay(1000) return "Hello" } suspend fun formatName(name: String): String { delay(1000) return name.replaceFirstChar { it.uppercase() } } fun greet(name:String) = runBlocking { val greet = async { getGreetWord() } val formattedName = async { formatName(name) } "${greet.await()} ${formattedName.await()}" } } In the provided code snippet, we leverage Kotlin coroutines to simplify reactive programming complexities. The CoroutineHelloService class defines the suspend functions getGreetWord and formatName, which simulate asynchronous operations using delay. The greet function reads like imperative code: within a runBlocking coroutine builder, it starts both suspend functions concurrently with async and awaits their results, finally combining them into a single greeting string. Conclusion In this exploration, we compared reactive programming using Kotlin Coroutines with Spring Boot to WebFlux. Kotlin Coroutines offer a simpler, more sequential approach, while WebFlux, based on Reactive Streams, provides a comprehensive set of APIs with a steeper learning curve. The code examples demonstrated how reactive solutions outperform imperative ones by leveraging parallel execution. Kotlin Coroutines emerged as a concise alternative, seamlessly integrated with Spring Boot, that simplifies reactive programming complexities. In summary, Kotlin Coroutines excel in simplicity and integration, making them a compelling choice for developers aiming to streamline reactive programming in Spring Boot applications.
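As a complement to the error-handling comparison in section 2 above, here is a minimal, hypothetical sketch contrasting the two styles. It reuses the HelloService and CoroutineHelloService classes from this article; the fallback greeting is an invented example value: Kotlin
import reactor.core.publisher.Mono

// Coroutine style: a plain try-catch around a suspend call, just like synchronous code.
suspend fun safeGreetWord(service: CoroutineHelloService): String =
    try {
        service.getGreetWord()
    } catch (e: Exception) {
        "Hello (fallback)" // invented fallback value
    }

// WebFlux style: the fallback is declared with a reactive operator instead of try-catch.
fun safeGreetWordReactive(service: HelloService): Mono<String> =
    service.getGreetWord().onErrorReturn("Hello (fallback)")
Both functions recover with the same fallback value; the coroutine version simply stays within the familiar exception model, which is one of the simplicity arguments made above.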