Also known as the build stage of the SDLC, coding focuses on the writing and programming of a system. The Zones in this category take a hands-on approach to equip developers with the knowledge about frameworks, tools, and languages that they can tailor to their own build needs.
A framework is a collection of code that is leveraged in the development process by providing ready-made components. Through the use of frameworks, architectural patterns and structures are created, which help speed up the development process. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring framework, Drupal, Angular, Eclipse, and more.
Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular because of its versatility and is preferred as the primary choice unless a specific function is needed. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.
Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.
Development and programming tools are used to build frameworks, and they can be used for creating, debugging, and maintaining programs — and much more. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools and can help ensure engineers are writing clean code.
As part of migrating Java enterprise applications to the cloud, it is beneficial to replace parts of the technology stack with equivalent cloud services. As an example, traditional messaging services are being replaced by Amazon SQS as part of the migration to AWS. Java Messaging Service (JMS) has been a mainstay in the technology stack of Java enterprise applications. The support provided by SQS for the JMS standard helps with an almost seamless technology replacement. In this article, I have described the steps to integrate a legacy Java application with SQS through JMS. Legacy Java Application For the purposes of this article, a legacy Java application has the following characteristics: Uses the JMS standard to interact with messaging services The application is not built using Maven/Gradle or any other package dependency managers as part of Java code build and packaging. Maven "Uber JAR" Approach The AWS Java SDK, which contains the Java Messaging Service library for SQS, requires the use of Maven or Gradle to set up the development environment and write code that uses various AWS services. As the legacy application does not use Maven/Gradle, we will use an approach called the "Uber JAR" approach. In this approach, a Maven project outside the source tree of the legacy application will be created. This Maven project will use the packaging capability of Maven to create an Uber JAR, which is basically a JAR file containing the Java messaging library for SQS and all its direct and transitive dependencies. For example, the SQS Java messaging library depends on the base SQS library which, in turn, has a dependency on multiple Apache libraries (among many other dependencies). All these libraries are packaged into one JAR file. This JAR file can then be added to the classpath of the legacy application and included in its deployment packages. An example pom file to create the Uber JAR is given below. 
XML <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.aws</groupId> <artifactId>sqssample</artifactId> <version>1.0-SNAPSHOT</version> <packaging>jar</packaging> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <maven.shade.plugin.version>3.2.1</maven.shade.plugin.version> <maven.compiler.plugin.version>3.6.1</maven.compiler.plugin.version> <exec-maven-plugin.version>1.6.0</exec-maven-plugin.version> </properties> <dependencies> <dependency> <groupId>com.amazonaws</groupId> <artifactId>amazon-sqs-java-messaging-lib</artifactId> <version>2.0.0</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>${maven.compiler.plugin.version}</version> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>${maven.shade.plugin.version}</version> <configuration> <createDependencyReducedPom>false</createDependencyReducedPom> <finalName>sqsjmssample</finalName> </configuration> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> Running "mvn package" for this pom will create the Uber JAR file, sqsjmssample.jar. This JAR file should then be added to the classpath of the legacy application just like any other third-party library. Maven Shading What happens if the SQS library and the legacy Java application depend on the same open-source library but different versions? This can cause subtle issues during the runtime that affect the functioning of both the application and SQS library. In the pom file above, you will notice that the Maven shade plugin is being used. As part of creating the Uber JAR, this plugin can rename packages of the dependencies. Assuming that both the SQS library and the Java application depend on different versions of Apache libraries, adding the shading configuration given below will avoid any issues during runtime. XML <relocations> <relocation> <pattern>org.apache</pattern> <shadedPattern>software.amazon.awssdk.thirdparty.org.apache</shadedPattern> </relocation> </relocations> This snippet has to be added under the <configuration> section of the shade plugin declaration in the pom file. You will notice that Uber JAR built with this configuration has all the Apache libraries moved to the software.amazon.awssdk.thirdparty.org.apache.** packages. The shade plugin also repoints any dependencies on the Apache libraries to these new code packages. Sample Java Code Now that we have the Uber JAR, let us look at a sample Java code to send a message to a queue destination. 
Java SqsClient sqsClient = SqsClient.builder().region(Region.US_EAST_1).build(); try { String queueName = "myqueue"; String message = "Test Message"; // Create the connection factory based on the config SQSConnectionFactory connectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), SqsClient.builder().region(Region.US_EAST_1).build()); // Create the connection SQSConnection connection = connectionFactory.createConnection(); // Create the session Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); MessageProducer producer = session.createProducer(session.createQueue(queueName)); TextMessage txtmessage = session.createTextMessage(message); producer.send(txtmessage); connection.close(); } catch (SqsException e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } catch (JMSException e) { e.printStackTrace(); } Note that apart from the SQSConnectionFactory and SQSConnection, the rest of the code uses standard JMS types. Ideally, the existing messaging service can be replaced by SQS without any code changes to the Java application. Just changing the Application Server configuration for the JMS resources involved (to point to SQS Queues and Connection Factories) should suffice. This approach would require a JCA (Java Connector Architecture)-compliant Resource Adapter implementation for SQS. Even though there are some open-source SQS Resource Adapter implementations, the SQS Java Messaging library does not include one. Of course, using the open-source Resource Adapters carries licensing and support risks. Design Considerations As we saw above with the Maven shading approach, the code packages of open-source libraries can be changed. This could trip up the vulnerability reports produced by OSS (Open Source Software) scanners. So, it is important to set up the Maven project also as another code stream that would go through the same security scans as the Java application codebase. The Uber JAR approach discussed above can be used for any AWS service. For example, the Java Application could be modernized to store documents in S3 buckets instead of in the application database. So, it is important to consider the range of AWS services to be used by the application while setting up the required Maven dependencies, to avoid creating numerous Uber JARs with duplicate code. The SQS Java Messaging Library does not support Distributed (XA) transactions. If the Java application being migrated requires the messaging provider to support distributed transactions, Amazon MQ (Active MQ) could be a better choice than SQS. Alternatively, the application may have to be refactored to use a pseudo-distributed transaction pattern like SAGA OR to avoid using distributed transactions altogether.
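The sample above only shows the producing side. For completeness, here is a minimal sketch of the receiving side using the same SQS JMS library and standard JMS types; the queue name and region simply reuse the values from the producer example, and in a real legacy application the consumer would normally be wired through the existing JMS configuration rather than created inline.
Java
// Hypothetical consumer-side sketch (queue name and region reuse the producer example)
SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
        new ProviderConfiguration(),
        SqsClient.builder().region(Region.US_EAST_1).build());

SQSConnection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("myqueue"));

// Start the connection so delivery begins, then poll synchronously
connection.start();
Message received = consumer.receive(5000); // wait up to 5 seconds
if (received instanceof TextMessage) {
    System.out.println("Received: " + ((TextMessage) received).getText());
}
connection.close();
As with the producer, everything other than SQSConnectionFactory and SQSConnection is plain JMS, which is what makes the swap from a traditional broker largely transparent to the application code.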
C# ranks fifth among programming languages in a Stack Overflow survey. It is widely used for creating various applications, ranging from desktop to mobile to cloud native. With so many language keywords and features, it can be taxing for developers to keep up to date with new feature releases. This article delves into the top 10 C# keywords and features every C# developer should know. 1. Async and Await Keywords: async, await The introduction of the async and await keywords makes it easy to handle asynchronous programming in C#. They allow you to write code that performs operations without blocking the main thread. This capability is particularly useful for I/O-bound and long-running tasks. By making use of these keywords, programmers can easily handle long-running operations like invoking external APIs to get data or reading from and writing to a network drive. This helps in developing responsive applications that can handle concurrent operations. Example C# public async Task<string> GetDataAsync() { using (HttpClient client = new HttpClient()) { string result = await client.GetStringAsync("http://bing.com"); return result; } } 2. LINQ Keywords: from, select, where, group, into, orderby, join LINQ (Language Integrated Query) provides an easy way to query various data sources, such as databases, collections, and XML, directly within C# without interacting with additional frameworks like ADO.NET. By using a query syntax that closely resembles SQL, LINQ enables developers to write queries in a readable way. Example C# var query = from student in students where student.Age > 18 orderby student.Name select student; 3. Properties Keywords: get, set, value Properties are members that provide a flexible mechanism to read, write, or compute the value of a private field. Generally, we hide the internal private backing field and expose it via a public property. This enables the data to be accessed easily by callers. In the example below, Name is the property that hides a backing field called name, which is marked as private to prevent outside callers from modifying the field directly. Example C# class Person { private string name; // backing field public string Name // property { get { return name; } set { name = value; } } } class Program { static void Main(string[] args) { Person P1 = new Person(); P1.Name = "Sunny"; Console.WriteLine(P1.Name); } } 4. Generics Keyword: <T> (type parameter) Generics allow you to write the code for a class or method without specifying the data type(s) it works on; classes and methods are defined with a type placeholder instead. The introduction of generics in C# 2.0 completely changed the landscape of creating modular, reusable code that would otherwise need to be duplicated in multiple places. Imagine you are handling the addition of two numbers of type int, and then a requirement comes along to add float or double values. Without generics, you end up duplicating the code because the existing method was defined with int parameters. Generics make it easy to define the placeholder once and handle the logic for different data types.
Example C# public class Print { // Generic method which can take any datatype as method parameter public void Display<T>(T value) { Console.WriteLine($"The value is: {value}"); } } public class Program { public static void Main(string[] args) { Print print = new Print(); // Call the generic method with different data types print.Display<int>(10); print.Display<string>("Hello World"); print.Display<double>(20.5); } } 5. Delegates and Events Keywords: delegate, event A delegate is an object that holds a reference to a method, allowing you to invoke that method through the delegate rather than calling it directly. Delegates are the C# equivalent of function pointers in C++, but they are type-safe. Delegates are mainly used to implement callback methods and to handle events. Func<T> and Action<T> are built-in delegates provided out of the box in C#. Events, on the other hand, enable a class or object to notify other classes or objects when something of interest occurs. For example, think of a scenario where a user clicks a button on your website: it generates an event (in this case, a button click) that is handled by the corresponding event handler code. Examples Example code for declaring and instantiating a delegate: C# public delegate void MyDelegate1(string msg); // declare a delegate // This method will be pointed to by the delegate public static void PrintMessage(string message) { Console.WriteLine(message); } public static void Main(string[] args) { // Instantiate the delegate MyDelegate1 del = PrintMessage; // Call the method through the delegate del("Hello World"); } Example code for initiating an event and handling it via an event handler: C# // Declare a delegate public delegate void Notify(); public class ProcessBusinessLogic { public event Notify ProcessCompleted; // Declare an event public void StartProcess() { Console.WriteLine("Process Started!"); // Some actual work here.. OnProcessCompleted(); } // Method to call when the process is completed protected virtual void OnProcessCompleted() { ProcessCompleted?.Invoke(); } } public class Program { public static void Main(string[] args) { ProcessBusinessLogic bl = new ProcessBusinessLogic(); bl.ProcessCompleted += bl_ProcessCompleted; // Register event handler bl.StartProcess(); } // Event handler public static void bl_ProcessCompleted() { Console.WriteLine("Process Completed!"); } } 6. Lambda Expressions Keyword: => Lambda expressions provide a concise way to represent methods and are particularly useful in LINQ queries and for defining short inline functions. This feature allows developers to write readable code by eliminating the need for traditional method definitions when performing simple operations. Lambda expressions improve code clarity and efficiency, making them an invaluable tool for developers working with C#. Example C# Func<int, int, int> add = (x, y) => x + y; int result = add(3, 4); // result is 7 7. Nullable Types Keyword: ? In C#, nullable types allow value types to have a null state, too. This comes in handy when you're working with databases or data sources that might contain null values. Adding a ? after a value type helps developers handle cases where data could be missing or not defined, which helps prevent errors at runtime. This feature makes applications more reliable by giving a clear and straightforward way to handle optional or missing data. Example: C# int?
num = null; if (num.HasValue) { Console.WriteLine($"Number: {num.Value}"); } else { Console.WriteLine("No value assigned."); } 8. Pattern Matching Keywords: is, switch, case Pattern matching is another useful feature, introduced in C# 7.0 and improved in successive versions of the language. Pattern matching takes an expression and tests whether it matches a given pattern. Instead of lengthy if-else statements, we can write compact code that is easy to read. In the example below, an object variable is assigned the value 5 (an int), and pattern matching is used to determine and print its data type. Example C# object obj = 5; if (obj is int i) { Console.WriteLine($"Integer: {i}"); } switch (obj) { case int j: Console.WriteLine($"Integer: {j}"); break; case string s: Console.WriteLine($"String: {s}"); break; default: Console.WriteLine("Unknown type."); break; } 9. Extension Methods Keyword: this (in method signature) Extension methods allow developers to add new methods to existing types without changing their original code. These methods are static but work like instance methods of the extended type, offering a smooth way to add new functionality. Extension methods make code more modular and reusable, giving developers the ability to extend types from outside libraries without touching the original code. Extension methods also support the "Open/Closed" principle, which means code is open to extension but closed to modification. Example C# public static class StringExtensions { public static bool IsNullOrEmpty(this string value) { return string.IsNullOrEmpty(value); } } // Usage string str = null; bool result = str.IsNullOrEmpty(); // result is true 10. Tuples Syntax: (T1, T2) Tuples let you group multiple values into a single unit. They help when you want to send back more than one value from a method without using out parameters or creating a new class just to transfer data between objects. With tuples, you can package and return a set of related values, which makes your code easier to read and understand. You can give names to the fields in a tuple or leave them unnamed; unnamed fields are referred to as Item1 and Item2, as shown below. Example C# public (int, string) GetPerson() { return (1, "John Doe"); } // Usage var person = GetPerson(); Console.WriteLine($"ID: {person.Item1}, Name: {person.Item2}"); Conclusion By using async/await to handle asynchronous work, LINQ to query data, properties to encapsulate data, generics to keep types flexible yet safe, delegates and events for programs that react to events, lambda expressions to write short functions, nullable types to deal with missing data, pattern matching to make code clearer and more expressive, extension methods to add new functionality, and tuples to group related values, you can write code that is easier to maintain and less likely to break. When you get good at using these features, you'll be able to build responsive, scalable, top-notch applications. Happy Coding!!!
In this article, I will discuss in a practical and objective way the integration of the Spring framework with the resources of the OpenAI API, one of the main artificial intelligence products on the market. The use of artificial intelligence resources is becoming increasingly necessary in several products, and therefore, presenting its application in a Java solution through the Spring framework allows a huge number of projects currently in production to benefit from this resource. All of the code used in this project is available via GitHub. To download it, simply run the following command: git clone https://github.com/felipecaparelli/openai-spring.git, or clone it via SSH. Note: Be aware that using this API incurs costs on your OpenAI account. Make sure you understand the pricing for each request (it varies with the number of tokens used in the request and returned in the response). Assembling the Project 1. Get API Access As defined in the official documentation, first, you will need an API key from OpenAI to use the GPT models. Sign up at OpenAI's website if you don’t have an account and create an API key from the API dashboard. On the API Keys page, select the option Create new secret key. Then, in the popup, set a name to identify your key (optional) and press Create secret key. Now copy the API key value that will be used in your project configuration. 2. Configure the Project Dependencies The easiest way to prepare your project structure is via the Spring tool called Spring Initializr. It will generate the basic skeleton of your project, add the necessary libraries, the configuration, and also the main class to start your application. You must select at least the Spring Web dependency. For the project type, I've selected Maven with Java 17. I've also included the library httpclient5 because it will be necessary to configure our SSL connector. Below is a snippet of the generated pom.xml: XML <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>3.3.2</version> <relativePath/> <!-- lookup parent from repository --> </parent> <groupId>br.com.erakles</groupId> <artifactId>spring-openai</artifactId> <version>0.0.1-SNAPSHOT</version> <name>spring-openai</name> <description>Demo project to explain the Spring and OpenAI integration</description> <properties> <java.version>17</java.version> <spring-ai.version>1.0.0-M1</spring-ai.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.apache.httpcomponents.client5</groupId> <artifactId>httpclient5</artifactId> <version>5.3.1</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> 3. Basic Configuration In your configuration file (application.properties), set the OpenAI secret key in the property openai.api.key.
You can also replace the model version on the properties file to use a different API version, like gpt-4o-mini. Properties files spring.application.name=spring-openai openai.api.url=https://api.openai.com/v1/chat/completions openai.api.key=YOUR-OPENAI-API-KEY-GOES-HERE openai.api.model=gpt-3.5-turbo A tricky part about connecting with this service via Java is that it will, by default, require your HTTP client to use a valid certificate while executing this request. To fix it we will skip this validation step. 3.1 Skip the SSL validation To disable the requirement for a security certificate required by the JDK for HTTPS requests you must include the following modifications in your RestTemplate bean, via a configuration class: Java import org.apache.hc.client5.http.classic.HttpClient; import org.apache.hc.client5.http.impl.classic.HttpClients; import org.apache.hc.client5.http.impl.io.BasicHttpClientConnectionManager; import org.apache.hc.client5.http.socket.ConnectionSocketFactory; import org.apache.hc.client5.http.socket.PlainConnectionSocketFactory; import org.apache.hc.client5.http.ssl.NoopHostnameVerifier; import org.apache.hc.client5.http.ssl.SSLConnectionSocketFactory; import org.apache.hc.core5.http.config.Registry; import org.apache.hc.core5.http.config.RegistryBuilder; import org.apache.hc.core5.ssl.SSLContexts; import org.apache.hc.core5.ssl.TrustStrategy; import org.springframework.boot.web.client.RestTemplateBuilder; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.http.client.HttpComponentsClientHttpRequestFactory; import org.springframework.web.client.RestTemplate; import javax.net.ssl.SSLContext; @Configuration public class SpringOpenAIConfig { @Bean public RestTemplate secureRestTemplate(RestTemplateBuilder builder) throws Exception { // This configuration allows your application to skip the SSL check final TrustStrategy acceptingTrustStrategy = (cert, authType) -> true; final SSLContext sslContext = SSLContexts.custom() .loadTrustMaterial(null, acceptingTrustStrategy) .build(); final SSLConnectionSocketFactory sslsf = new SSLConnectionSocketFactory(sslContext, NoopHostnameVerifier.INSTANCE); final Registry<ConnectionSocketFactory> socketFactoryRegistry = RegistryBuilder.<ConnectionSocketFactory> create() .register("https", sslsf) .register("http", new PlainConnectionSocketFactory()) .build(); final BasicHttpClientConnectionManager connectionManager = new BasicHttpClientConnectionManager(socketFactoryRegistry); HttpClient client = HttpClients.custom() .setConnectionManager(connectionManager) .build(); return builder .requestFactory(() -> new HttpComponentsClientHttpRequestFactory(client)) .build(); } } 4. Create a Service To Call the OpenAI API Now that we have all of the configuration ready, it is time to implement a service that will handle the communication with the ChatGPT API. I am using the Spring component RestTemplate, which allows the execution of the HTTP requests to the OpenAI endpoint. 
Java import org.springframework.beans.factory.annotation.Value; import org.springframework.http.HttpEntity; import org.springframework.http.HttpHeaders; import org.springframework.http.HttpMethod; import org.springframework.http.MediaType; import org.springframework.stereotype.Service; import org.springframework.web.client.RestTemplate; @Service public class JavaOpenAIService { @Value("${openai.api.url}") private String apiUrl; @Value("${openai.api.key}") private String apiKey; @Value("${openai.api.model}") private String modelVersion; private final RestTemplate restTemplate; public JavaOpenAIService(RestTemplate restTemplate) { this.restTemplate = restTemplate; } /** * @param prompt - the question you are expecting to ask ChatGPT * @return the response in JSON format */ public String ask(String prompt) { HttpEntity<String> entity = new HttpEntity<>(buildMessageBody(modelVersion, prompt), buildOpenAIHeaders()); return restTemplate .exchange(apiUrl, HttpMethod.POST, entity, String.class) .getBody(); } private HttpHeaders buildOpenAIHeaders() { HttpHeaders headers = new HttpHeaders(); headers.set("Authorization", "Bearer " + apiKey); headers.set("Content-Type", MediaType.APPLICATION_JSON_VALUE); return headers; } private String buildMessageBody(String modelVersion, String prompt) { return String.format("{ \"model\": \"%s\", \"messages\": [{\"role\": \"user\", \"content\": \"%s\"}]}", modelVersion, prompt); } } 5. Create Your REST API Then, you can create your own REST API to receive the questions and redirect them to your service. Java import org.springframework.http.ResponseEntity; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.RestController; import br.com.erakles.springopenai.service.JavaOpenAIService; @RestController public class SpringOpenAIController { private final JavaOpenAIService javaOpenAIService; SpringOpenAIController(JavaOpenAIService javaOpenAIService) { this.javaOpenAIService = javaOpenAIService; } @GetMapping("/chat") public ResponseEntity<String> sendMessage(@RequestParam String prompt) { return ResponseEntity.ok(javaOpenAIService.ask(prompt)); } } Conclusion These are the steps required to integrate your web application with the OpenAI service, so you can improve it later by adding more features like sending voice, images, and other files to their endpoints. After starting your Spring Boot application (./mvnw spring-boot:run), you can test your web service by calling the following URL: http://localhost:8080/chat?prompt={add-your-question}. If you did everything right, you will be able to read the result on your response body as follows: JSON { "id": "chatcmpl-9vSFbofMzGkLTQZeYwkseyhzbruXK", "object": "chat.completion", "created": 1723480319, "model": "gpt-3.5-turbo-0125", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Scuba stands for \"self-contained underwater breathing apparatus.\" It is a type of diving equipment that allows divers to breathe underwater while exploring the underwater world. Scuba diving involves using a tank of compressed air or other breathing gas, a regulator to control the flow of air, and various other accessories to facilitate diving, such as fins, masks, and wetsuits. 
Scuba diving allows divers to explore the underwater environment and observe marine life up close.", "refusal": null }, "logprobs": null, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 12, "completion_tokens": 90, "total_tokens": 102 }, "system_fingerprint": null } I hope this tutorial helped with your first interaction with the OpenAI API and makes your life easier as you dive deeper into your AI journey. If you have any questions or concerns, don't hesitate to send me a message.
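As a quick way to exercise the endpoint described above once the application is running, a request like the following should work; the port is the Spring Boot default and the /chat mapping comes from the controller shown earlier, while the question text is just an example.
Shell
curl "http://localhost:8080/chat?prompt=What%20is%20scuba%3F"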
Unit Tests Unit testing is a fundamental part of software development that ensures individual components of your code work as expected. In Go, unit tests are straightforward to write and execute, making them an essential tool for maintaining code quality. What Is a Unit Test? A unit test is a small, focused test that validates the behavior of a single function or method. The goal is to ensure that the function works correctly in isolation, without depending on external systems like databases, file systems, or network connections. By isolating the function, you can quickly identify and fix bugs within a specific area of your code. How Do Unit Tests Look in Go? Go has built-in support for testing with the testing package, which provides the necessary tools to write and run unit tests. A unit test in Go typically resides in a file with a _test.go suffix and includes one or more test functions that follow the naming convention TestXxx. Here’s an example: Go package math import "testing" func Add(a, b int) int { return a + b } func TestAdd(t *testing.T) { result := Add(2, 3) expected := 5 if result != expected { t.Errorf("Add(2, 3) = %d; want %d", result, expected) } } In this example, the TestAdd function tests the Add function. It checks if the output of Add(2, 3) matches the expected result, 5. If the results don’t match, the test will fail, and the error will be reported. How To Execute Unit Tests Running unit tests in Go is simple. You can execute all tests in a package using the go test command. From the command line, navigate to your package directory and run: go test This command will discover all files with the _test.go suffix, execute the test functions, and report the results. For more detailed output, including the names of passing tests, use the -v flag: go test -v If you want to run a specific test, you can use the -run flag followed by a regular expression that matches the test name: go test -run TestAdd When To Use Unit Tests Unit tests are most effective when: Isolating bugs: They help isolate and identify bugs early in the development process. Refactoring code: Unit tests provide a safety net that ensures your changes don’t break existing functionality. Ensuring correctness: They verify that individual functions behave as expected under various conditions. Documenting code: Well-written tests serve as documentation, demonstrating how the function is expected to be used and what outputs to expect. In summary, unit tests in Go are easy to write, execute, and maintain. They help ensure that your code behaves as expected, leading to more robust and reliable software. In the next section, we’ll delve into integration tests, which validate how different components of your application work together. Integration Tests While unit tests are crucial for verifying individual components of your code, integration tests play an equally important role by ensuring that different parts of your application work together as expected. Integration tests are particularly useful for detecting issues that may not be apparent when testing components in isolation. What Is an Integration Test? An integration test examines how multiple components of your application interact with each other. Unlike unit tests, which focus on a single function or method, integration tests validate the interaction between several components, such as functions, modules, or even external systems like databases, APIs, or file systems. 
The goal of integration testing is to ensure that the integrated components function correctly as a whole, detecting problems that can arise when different parts of the system come together. How Do Integration Tests Look in Go? Integration tests in Go are often structured similarly to unit tests but involve more setup and possibly external dependencies. These tests may require initializing a database, starting a server, or interacting with external services. They are typically placed in files with a _test.go suffix, just like unit tests, but may be organized into separate directories to distinguish them from unit tests. Here’s an example of a basic integration test: Go package main import ( "database/sql" "testing" _ "github.com/mattn/go-sqlite3" ) func TestDatabaseIntegration(t *testing.T) { db, err := sql.Open("sqlite3", ":memory:") if err != nil { t.Fatalf("failed to open database: %v", err) } defer db.Close() // Setup - Create a table _, err = db.Exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)") if err != nil { t.Fatalf("failed to create table: %v", err) } // Insert data _, err = db.Exec("INSERT INTO users (name) VALUES ('Alice')") if err != nil { t.Fatalf("failed to insert data: %v", err) } // Query data var name string err = db.QueryRow("SELECT name FROM users WHERE id = 1").Scan(&name) if err != nil { t.Fatalf("failed to query data: %v", err) } // Validate the result if name != "Alice" { t.Errorf("expected name to be 'Alice', got '%s'", name) } } In this example, the test interacts with an in-memory SQLite database to ensure that the operations (creating a table, inserting data, and querying data) work together as expected. This test checks the integration between the database and the code that interacts with it. How To Execute Integration Tests You can run integration tests in the same way as unit tests using the go test command: go test However, because integration tests might involve external dependencies, it’s common to organize them separately or use build tags to distinguish them from unit tests. For example, you can create an integration build tag and run your tests like this: Go // +build integration package main import "testing" func TestSomething(t *testing.T) { // Integration test logic } To execute only the integration tests, use: go test -tags=integration This approach helps keep unit and integration tests separate, allowing you to run only the tests that are relevant to your current development or CI/CD workflow. When To Use Integration Tests Integration tests are particularly useful in the following scenarios: Testing interactions: When you need to verify that different modules or services interact correctly End-to-end scenarios: For testing complete workflows that involve multiple parts of your application, such as user registration or transaction processing Validating external dependencies: To ensure that your application correctly interacts with external systems like databases, APIs, or third-party services Ensuring system stability: Integration tests help catch issues that may not be apparent in isolated unit tests, such as race conditions, incorrect data handling, or configuration problems. Summary In summary, integration tests in Go provide a powerful way to ensure that your application’s components work together correctly. While they are more complex and may require additional setup compared to unit tests, they are invaluable for maintaining the integrity of your software as it scales and becomes more interconnected. 
Together with unit tests, integration tests form a comprehensive testing strategy that helps you deliver robust, reliable applications.
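As a small extension of the unit-test example shown earlier, many Go codebases express related cases as a table-driven test; the sketch below applies that idiom to the same Add function, with the case names and inputs chosen purely for illustration.
Go
package math

import "testing"

// TestAddTableDriven exercises Add with several input pairs in one test.
func TestAddTableDriven(t *testing.T) {
	cases := []struct {
		name     string
		a, b     int
		expected int
	}{
		{"both positive", 2, 3, 5},
		{"with zero", 0, 7, 7},
		{"negative operand", -4, 9, 5},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := Add(tc.a, tc.b); got != tc.expected {
				t.Errorf("Add(%d, %d) = %d; want %d", tc.a, tc.b, got, tc.expected)
			}
		})
	}
}
Each case runs as a named subtest, so go test -v reports failures per case and go test -run TestAddTableDriven/with_zero can target a single one.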
The Meta of Design With several decades of experience, I love building enterprise applications for companies. Each solution requires a set of models: an SQL database, an API (Application Programming Interface), declarative rules, declarative security (role-based access control), test-driven scenarios, workflows, and user interfaces. The "meta" approach to design requires thinking of how each of these components interacts with the other. We also need to understand how changes in the scope of the project impact each of these meta-components. While I have worked in many different languages (APL, Revelation/PICK, BASIC, Smalltalk, Object/1, Java, JavaScript, Node.js, Python) these models are always the foundation that influences the final integrated solution. Models are meta abstractions that describe how the shape, content, and ability of the object will behave in the running environment regardless of language, platform, or operating system (OS). Model First Approach Starting with an existing SQL Schema and a good ORM allows the abstraction of the database and the generation of an API. I have been working with ApiLogicServer (a GenAI-powered Python open-source platform) which has a command line interface to connect the major SQL databases and create an SQLAlchemy ORM (Object-Relational Model). From this model, an Open API (aka Swagger) for JSON API is created, and a YAML file (model) drives a react-admin runtime. The YAML file is also used to build an Ontimize (Angular) user interface. Note that the GenAI part of ApiLogicServer lets me use a prompt-driven approach to get this entire running stack using just a few keywords. Command Line Tools The CLI (Command Line Interface) is used to create a new ApiLogicServer (ALS) Python project, connect to an SQL database, use KeyCloak for single sign-on authentication, rebuild the SQLAlchemy ORM if the database changes, generate an Angular application from the API, and much more. Most of the work of building an API is done by the CLI, mapping tables and columns, dealing with datatypes, defaults, column aliases, quoted identifiers, and relationships between parent/child tables. The real power of this tool is the things you cannot see. Command Line to build the Northwind Demo: Markdown als create --project-name=demo --db-url=nw+ Developer Perspective As a developer/consultant, I need more than one framework and set of tools to build and deliver a complete microservice solution. ApiLogicServer is a framework that works with the developer to enhance and extend these various models with low code and DSL (Domain Specific Language) services. VSCode with a debugger is an absolute requirement. Copilot for code completion and code generation Python (3.12) open-source framework and libraries Kafka integration (producer and consumer) KeyCloak framework for single sign-on LogicBank declarative rules engine integrated with the ORM model and all CRUD operations GitHub integration for source code management (VSCode extension) SQLAlchemy ORM/Flask and JSON API open-source libraries Declarative security for role-based access control Support both react-admin and Angular UI using a YAML model Docker tools to build and deploy containers Behave Test Driven tools Optimistic Locking (optional) on all API endpoints Open Source (no license issues) components Access to Python libraries for extensibility API Model Lifecycles Database First Every application will undergo change as stakeholders and end-users interact with the system. 
The earlier the feedback, the easier it will be to modify and test the results. The first source model is the SQL schema(s): missing attributes, foreign key lookups, datatype changes, default values, and constraints require a rebuild of the ORM. ApiLogicServer uses a command-line feature “rebuild-from-database” that rebuilds the SQLAlchemy ORM model and the YAML files used by the various UI tools. This approach requires knowledge of SQL to define tables, columns, keys, constraints, and insert data. The GenAI feature will allow an iterative and incremental approach to building the database, but in the end, an actual database developer is needed to complete the effort. Model First (GenAI) An interesting feature of SQLAlchemy is the ability to modify the ORM and rebuild the SQL database. This can be useful if it is a new application without existing data. This is how the GenAI works out of the box: it will ask ChatGPT to build an SQLAlchemy ORM model and then build a database from the model. This seems to work very well for prototypes and quick solutions. GenAI can create the model and populate a small SQLite database. If the system has existing data, adding columns or new tables for aggregations requires a bit more effort and SQL knowledge. Virtual Columns and Relationships There are many use cases that prevent the developer from "touching" the database. This requires that the framework have the ability to declare virtual columns (like check_sum for optimistic locking) and virtual relationships to define one-to-many and many-to-one relationships between entities. SQLAlchemy and ALS support both of these features. Custom API Definitions There are many use cases that require API endpoints that do not map directly to the SQLAlchemy model. ApiLogicServer provides an extensible framework to define and implement new API endpoints. Further, there are use cases that require a JSON response to be formatted in a manner suitable for the consumer (e.g., nested documents) or transforms on the results that simple JSON API cannot support. This is probably one of the best features of ALS: the extensible nature of custom user endpoints. LogicBank: Declarative Logic Rules are written in an easy-to-understand DSL to support derivations (formula, sums, counts, parent copy), constraints (reject when), and events. Rules can be extended with Python functions (e.g., a commit event calling a Kafka producer). Rules can be added or changed without knowledge of the order of operations (like a spreadsheet); rules operate on state change of dependent entities and fields. These LogicBank rules can be partially generated using Copilot for formulas, sums, counts, and constraints. Sometimes, the introduction of sums and counts requires the addition of parent tables and relationships to store the column aggregates. Python Rule.formula(derive=LineItem.Total, as_expression=lambda row: row.UnitPrice * row.Quantity) Rule.copy(derive=LineItem.UnitPrice, from_parent=Product.UnitPrice) Events This is the point where developers can integrate business and API transactions with external systems. Events are applied to an entity (early, row, commit, or flush) and the existing integration with a Kafka broker demonstrates how a triggering event can be used to produce a message. This can also be used to interface with a workflow system. 
For example, if the commit event is used on an Order, when all the rules and constraints are completed (and successful), the commit event is called and a Python function is used to send mail, produce a Kafka message, or call another microservice API to ship order. Python def send_order_to_shipping(row: models.Order, old_row: models.Order, logic_row: LogicRow): """ #als: Send Kafka message formatted by OrderShipping RowDictMapper Format row per shipping requirements, and send (e.g., a message) NB: the after_flush event makes Order.Id available. Args: row (models.Order): inserted Order old_row (models.Order): n/a logic_row (LogicRow): bundles curr/old row, with ins/upd/dlt logic """ if (logic_row.is_inserted() and row.Ready == True) or \ (logic_row.is_updated() and row.Ready == True and old_row.Ready == False): kafka_producer.send_kafka_message(logic_row=logic_row, row_dict_mapper=OrderShipping, kafka_topic="order_shipping", kafka_key=str(row.Id), msg="Sending Order to Shipping") Rule.after_flush_row_event(on_class=models.Order, calling=send_order_to_shipping) Declarative Security Model Using a single sign-on like KeyCloak will return authentication, but authorization can be declared based on a user-defined role. Each role can have read, insert, update, or delete permissions and roles can grant specific permission for a role to a specific Entity (API) and even apply row-level filter permissions. This fine-grained approach can be added and tested anytime in the development lifecycle. Python DefaultRolePermission(to_role = Roles.public, can_read=True, ... can_delete=False) DefaultRolePermission(to_role = Roles.Customer, can_read=True, ... can_delete=True) # customers can only see their own account Grant( on_entity = models.Customer, to_role = Roles.customer, filter = lambda : models.Customer.Id == Security.current_user().id) Summary ApiLogicServer (ALS) and GenAI-powered development change the deployment of microservice applications. ALS has the features and functionality for most developers and is based on open-source components. LogicBank requires a different way of thinking about data but the investment is an improvement in time spent writing code. ALS is well-suited for database transaction systems that need an API and the ability to build a custom front-end user interface. Model-driven development is the way to implement GenAI-powered applications and ALS is a platform for developers/consultants to deliver these solutions.
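To complement the formula and copy rules shown in the LogicBank section above, here is a hedged sketch of a sum and a constraint written in the same declarative style; the entity and attribute names are illustrative, and the exact keyword arguments may differ slightly between LogicBank versions.
Python
# Illustrative LogicBank rules (entity/attribute names are assumptions)

# Keep a parent aggregate up to date from its child rows
Rule.sum(derive=models.Order.AmountTotal,
         as_sum_of=models.OrderDetail.Amount)

# Reject the transaction when a business condition is violated
Rule.constraint(validate=models.Customer,
                as_condition=lambda row: row.Balance <= row.CreditLimit,
                error_msg="Balance exceeds credit limit")
As with the earlier rules, ordering does not matter: the engine fires the sum and the constraint whenever a dependent row changes, spreadsheet-style.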
This article will explore the critical topic of creating effective exceptions in your Java code. Exceptions are crucial in identifying when something goes wrong during code execution. They are instrumental in managing data inconsistency and business validation errors. We will outline three key steps for writing effective exceptions in your Java code. Writing and Defining an Exception Hierarchy Creating a Trackable Exception Message Avoiding Security Problems with Exceptions 1. Writing and Defining an Exception Hierarchy Define the exception hierarchy as the first step in your design process. By considering the domain, you can begin with more general exceptions and then move towards more specific ones. This approach enables you to trace issues using the hierarchy tree or exception names. As emphasized in Domain-Driven Design (DDD), meaningful names are also important for exception names. For example, in the domain of credit cards, you can start with a generic organization exception, then move to a domain-specific exception like credit card exceptions, and finally, to specific errors. Here’s an example: Java public class MyCompanyException extends RuntimeException { public MyCompanyException(String message) { super(message); } public MyCompanyException(String message, Throwable cause) { super(message, cause); } } public class CreditCardException extends MyCompanyException { public CreditCardException(String message) { super(message); } public CreditCardException(String message, Throwable cause) { super(message, cause); } } public class CreditCardNotFoundException extends CreditCardException { public CreditCardNotFoundException(String message) { super(message); } public CreditCardNotFoundException(String message, Throwable cause) { super(message, cause); } } With this hierarchy, if an exception is thrown, we know it belongs to the credit card domain, which is part of the organization’s exceptions. 2. Creating a Trackable Exception Message The second step is to make the exception message trackable. While creating a well-structured hierarchy is essential, it’s equally important to provide detailed messages that can help identify the exact issue. For instance, in a service that processes credit card payments, including the credit card ID in the exception message can be very helpful. Java public void pay(UUID creditCardId, Product product) { LOGGER.info("Paying with credit card: " + creditCardId); LOGGER.fine("Paying for product: " + product); findById(creditCardId).orElseThrow(() -> new CreditCardNotFoundException("Credit card not found with the id: " + creditCardId)); } In this example, if the credit card is not found, the exception message will include the ID, making it easier to understand why the error occurred and to check why this information is missing. 3. Avoiding Security Problems With Exceptions Security is a critical aspect of exception handling. Ensure that the exception messages do not contain sensitive information that could lead to data breaches. Like best logging practices, you should avoid exposing critical data in exception messages. 
For example, instead of including sensitive details in the exception, provide a generic message and log the details securely: Java public void pay(UUID creditCardId, Product product) { LOGGER.info("Paying with credit card: " + creditCardId); LOGGER.fine("Paying for product: " + product); findById(creditCardId).orElseThrow(() -> { LOGGER.severe("Credit card not found with id: " + creditCardId); // Detailed log return new CreditCardNotFoundException("Credit card not found"); // Generic message }); } Additional Resources For further learning, you can watch the complimentary video that accompanies this article. Access the source code here: GitHub - Java Exception Handling Code. Following these steps, you can create a robust exception-handling mechanism in your Java applications, making your code more maintainable, secure, and easier to debug.
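To see how the hierarchy from step 1 pays off at the call site, here is a small, hypothetical example: the caller catches the specific CreditCardNotFoundException first, falls back to the broader CreditCardException for anything else in the credit card domain, and lets unrelated failures propagate. The paymentService field is an assumption introduced only for this sketch.
Java
try {
    paymentService.pay(creditCardId, product); // hypothetical service call
} catch (CreditCardNotFoundException e) {
    // Specific recovery: prompt the user to select or register another card
    LOGGER.warning("Card not found, asking the user to pick another card");
} catch (CreditCardException e) {
    // Any other failure within the credit card domain
    LOGGER.severe("Credit card operation failed: " + e.getMessage());
}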
Previously, we examined the happens-before guarantee in Java. This guarantee gives us confidence about the re-ordering of statements that can happen when we write multithreaded programs. In this post, we shall focus on variable visibility between two threads and what happens when we change a variable that is shared. Code Examination Let’s examine the following code snippet: Java import java.util.Date; public class UnSynchronizedCountDown { private int number = Integer.MAX_VALUE; public Thread countDownUntilAsync(final int threshold) { return new Thread(() -> { while (number>threshold) { number--; System.out.println("Decreased "+number +" at "+ new Date()); } }); } private void waitUntilThresholdReached(int threshold) { while (number>threshold) { } } public static void main(String[] args) { int threshold = 2125840327; UnSynchronizedCountDown unSynchronizedCountDown = new UnSynchronizedCountDown(); unSynchronizedCountDown.countDownUntilAsync(threshold).start(); unSynchronizedCountDown.waitUntilThresholdReached(threshold); System.out.println("Threshold reached at "+new Date()); } } This is a bad piece of code: two threads operate on the same variable, number, without any synchronization. The code will likely run forever, because regardless of when the countdown thread reaches the goal, the main thread may never pick up the new value that is below the threshold. The changes made to the number variable are not guaranteed to become visible to the main thread. So it’s not only about synchronizing and issuing thread-safe operations, but also about ensuring that the changes a thread has made are visible to other threads. Visibility and Synchronized Intrinsic locking in Java guarantees that one thread can see the changes of another thread. So when we use synchronized, the changes made by one thread become visible to another thread once that thread enters a synchronized block on the same lock. Let’s change our example and showcase this: Java package com.gkatzioura.concurrency.visibility; public class SynchronizedCountDown { private int number = Integer.MAX_VALUE; private String message = "Nothing changed"; private static final Object lock = new Object(); private int getNumber() { synchronized (lock) { return number; } } public Thread countDownUntilAsync(final int threshold) { return new Thread(() -> { message = "Count down until "+threshold; while (number>threshold) { synchronized (lock) { number--; if(number<=threshold) { } } } }); } private void waitUntilThresholdReached(int threshold) { while (getNumber()>threshold) { } } public static void main(String[] args) { int threshold = 2147270516; SynchronizedCountDown synchronizedCountDown = new SynchronizedCountDown(); synchronizedCountDown.countDownUntilAsync(threshold).start(); System.out.println(synchronizedCountDown.message); synchronizedCountDown.waitUntilThresholdReached(threshold); System.out.println(synchronizedCountDown.message); } } Read access to the number variable is protected by a lock, and modifications to the variable are synchronized using the same lock. Eventually, the program will terminate as expected, since the threshold will be reached. Every time we enter the synchronized block, the changes made by the countdown thread become visible to the main thread. This applies not only to the variables accessed inside the synchronized block but also to everything that was visible to the countdown thread when it released the lock. Thus, although the message variable was never accessed inside a synchronized block, its altered value was published, and the correct value is printed at the end of the program.
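Synchronized blocks are not the only way to obtain visibility in this scenario. As a sketch of a lighter-weight alternative not covered above, the shared counter can be declared volatile, which guarantees that writes by the countdown thread become visible to the main thread; note, however, that volatile does not make compound operations such as number-- atomic, so it only fits this single-writer example.
Java
public class VolatileCountDown {

    // volatile guarantees visibility of writes across threads,
    // but not atomicity of read-modify-write operations.
    private volatile int number = Integer.MAX_VALUE;

    public Thread countDownUntilAsync(final int threshold) {
        return new Thread(() -> {
            while (number > threshold) {
                number--; // only this thread writes, so there is no lost-update race here
            }
        });
    }

    private void waitUntilThresholdReached(int threshold) {
        while (number > threshold) {
        }
    }

    public static void main(String[] args) {
        int threshold = 2147270516;
        VolatileCountDown countDown = new VolatileCountDown();
        countDown.countDownUntilAsync(threshold).start();
        countDown.waitUntilThresholdReached(threshold);
        System.out.println("Threshold reached");
    }
}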
In cyber resilience, handling and querying data effectively is crucial for detecting threats, responding to incidents, and maintaining strong security. Traditional data management methods often fall short in providing deep insights or handling complex data relationships. By integrating semantic web technologies and RDF (Resource Description Framework), we can significantly enhance our data management capabilities. This tutorial demonstrates how to build a web application using Flask, a popular Python framework, that leverages these technologies for advanced semantic search and RDF data management. Understanding the Semantic Web The Semantic Web Imagine the web as a huge library where every piece of data is like a book. On the traditional web, we can look at these books, but computers don't understand their content or how they relate to one another. The semantic web changes this by adding extra layers of meaning to the data. It helps computers understand not just what the data is but also what it means and how it connects with other data. This makes data more meaningful and enables smarter queries and analysis. For example, if we have data about various cybersecurity threats, the semantic web lets a computer understand not just the details of each threat but also how they relate to attack methods, vulnerabilities, and threat actors. This deeper understanding leads to more accurate and insightful analyses. Ontologies Think of ontologies as a system for organizing data, similar to the Dewey Decimal System in a library. They define a set of concepts and the relationships between them. In cybersecurity, an ontology might define concepts like "attack vectors," "vulnerabilities," and "threat actors," and explain how these concepts are interconnected. This structured approach helps in organizing data so that it’s easier to search and understand in context. For instance, an ontology could show that a "vulnerability" can be exploited by an "attack vector," and a "threat actor" might use multiple "attack vectors." This setup helps in understanding the intricate relationships within the data. Linked Data Linked data involves connecting pieces of information together. Imagine adding hyperlinks to books in a library, not just pointing to other books but to specific chapters or sections within them. Linked data uses standard web protocols and formats to link different pieces of information, creating a richer and more integrated view of the data. This approach allows data from various sources to be combined and queried seamlessly. For example, linked data might connect information about a specific cybersecurity vulnerability with related data on similar vulnerabilities, attack vectors that exploit them, and threat actors involved. RDF Basics RDF (Resource Description Framework) is a standard way to describe relationships between resources. It uses a simple structure called triples to represent data: (subject, predicate, object). For example, in the statement “John knows Mary,” RDF breaks it down into a triple where "John" is the subject, "knows" is the predicate, and "Mary" is the object. This model is powerful because it simplifies representing complex relationships between pieces of data. Graph-Based Representation RDF organizes data in a graph format, where each node represents a resource or piece of data, and each edge represents a relationship between these nodes. This visual format helps in understanding how different pieces of information are connected. 
For example, RDF can show how various vulnerabilities are linked to specific attack vectors and how these connections can help in identifying potential threats. SPARQL SPARQL is the language used to query RDF data. If RDF is the data model, SPARQL is the tool for querying and managing that data. It allows us to write queries to find specific information, filter results, and combine data from different sources. For example, we can use SPARQL to find all vulnerabilities linked to a particular type of attack or identify which threat actors are associated with specific attack methods. Why Use Flask? Flask Overview Flask is a lightweight Python web framework that's great for building web applications. Its simplicity and flexibility make it easy to create applications quickly with minimal code. Flask lets us define routes (URLs), handle user requests, and render web pages, making it ideal for developing a web application that works with semantic web technologies and RDF data. Advantages of Flask Simplicity: Flask’s minimalistic design helps us focus on building our application without dealing with complex configurations. Flexibility: It offers the flexibility to use various components and libraries based on our needs. Extensibility: We can easily add additional libraries or services to extend our application’s functionality. Application Architecture Our Flask-based application has several key components: 1. Flask Web Framework This is the heart of the application, managing how users interact with the server. Flask handles HTTP requests, routes them to the right functions, and generates responses. It provides the foundation for integrating semantic web technologies and RDF data. 2. RDF Data Store This is where the RDF data is stored. It's similar to a traditional database but designed specifically for RDF triples. It supports efficient querying and management of data, integrating seamlessly with the rest of the application. 3. Semantic Search Engine This component allows users to search the RDF data using SPARQL. It takes user queries, executes SPARQL commands against the RDF data store, and retrieves relevant results. This is crucial for providing meaningful search capabilities. 4. User Interface (UI) The UI is the part of the application where users interact with the system. It includes search forms and result displays, letting users input queries, view results, and navigate through the application. 5. API Integration This optional component connects to external data sources or services. For example, it might integrate threat intelligence feeds or additional security data, enhancing the application’s capabilities. Understanding these components and how they work together will help us build a Flask-based web application that effectively uses semantic web technologies and RDF data management to enhance cybersecurity. Building the Flask Application 1. Installing Required Libraries To get started, we need to install the necessary Python libraries. We can do this using pip: Shell pip install Flask RDFLib requests 2. Flask Application Setup Create a file named app.py in the project directory. This file will contain the core logic for our Flask application. 
app.py: Python from flask import Flask, request, render_template from rdflib import Graph, Namespace, RDF, RDFS from rdflib.plugins.sparql import prepareQuery app = Flask(__name__) # Initialize RDFLib graph and namespaces g = Graph() STIX = Namespace("http://stix.mitre.org/") EX = Namespace("http://example.org/") # Load RDF data g.parse("data.rdf", format="xml") @app.route('/') def index(): return render_template('index.html') @app.route('/search', methods=['POST']) def search(): query = request.form['query'] results = perform_search(query) return render_template('search_results.html', results=results) @app.route('/rdf', methods=['POST']) def rdf_query(): query = request.form['rdf_query'] results = perform_sparql_query(query) return render_template('rdf_results.html', results=results) def perform_search(query): # Mock function to simulate search results return [ {"title": "APT28 Threat Actor", "url": "http://example.org/threat_actor/apt28"}, {"title": "Malware Indicator", "url": "http://example.org/indicator/malware"}, {"title": "Phishing Attack Pattern", "url": "http://example.org/attack_pattern/phishing"} ] def perform_sparql_query(query): # Prepare the query with the rdf, rdfs, and stix prefixes bound, then run it against the graph q = prepareQuery(query, initNs={"rdf": RDF, "rdfs": RDFS, "stix": STIX}) qres = g.query(q) # Convert each item in every result row to a string so the template can render it return [tuple(str(item) for item in row) for row in qres] if __name__ == '__main__': app.run(debug=True) 3. Creating RDF Data RDF Data File To demonstrate the use of RDFLib in managing cybersecurity data, create an RDF file named data.rdf. This file will contain sample data relevant to cybersecurity. data.rdf: XML <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:stix="http://stix.mitre.org/"> <!-- Threat Actor --> <rdf:Description rdf:about="http://example.org/threat_actor/apt28"> <rdf:type rdf:resource="http://stix.mitre.org/ThreatActor"/> <rdfs:label>APT28</rdfs:label> <stix:description>APT28, also known as Fancy Bear, is a threat actor group associated with Russian intelligence.</stix:description> </rdf:Description> <!-- Indicator --> <rdf:Description rdf:about="http://example.org/indicator/malware"> <rdf:type rdf:resource="http://stix.mitre.org/Indicator"/> <rdfs:label>Malware Indicator</rdfs:label> <stix:description>Indicates the presence of malware identified through signature analysis.</stix:description> <stix:pattern>filemd5: 'e99a18c428cb38d5f260853678922e03'</stix:pattern> </rdf:Description> <!-- Attack Pattern --> <rdf:Description rdf:about="http://example.org/attack_pattern/phishing"> <rdf:type rdf:resource="http://stix.mitre.org/AttackPattern"/> <rdfs:label>Phishing</rdfs:label> <stix:description>Phishing is a social engineering attack used to trick individuals into divulging sensitive information.</stix:description> </rdf:Description> </rdf:RDF> Understanding RDF Data RDF (Resource Description Framework) is a standard model for data interchange on the web. It uses triples (subject-predicate-object) to represent data. In our RDF file: Threat actor: Represents a known threat actor; e.g., APT28 Indicator: Represents an indicator of compromise, such as a malware signature Attack pattern: Describes an attack pattern, such as phishing The stix namespace is used to denote specific cybersecurity-related terms. 4. Flask Routes and Functions Home Route The home route (/) renders the main page where users can input their search and SPARQL queries. 
Search Route The search route (/search) processes user search queries. For this demonstration, it returns mock search results. Mock Search Function The perform_search function simulates search results. Replace this function with actual search logic when integrating with real threat intelligence sources. RDF Query Route The RDF query route (/rdf) handles SPARQL queries submitted by users. It uses RDFLib to execute the queries and returns the results. SPARQL Query Function The perform_sparql_query function executes SPARQL queries against the RDFLib graph and returns the results. 5. Creating HTML Templates Index Page The index.html file provides a form for users to input search queries and SPARQL queries. index.html: HTML <!DOCTYPE html> <html> <head> <title>Cybersecurity Search and RDF Query</title> <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}"> </head> <body> <h1>Cybersecurity Search and RDF Query</h1> <form action="/search" method="post"> <label for="query">Search Threat Intelligence:</label> <input type="text" id="query" name="query" placeholder="Search for threat actors, indicators, etc."> <button type="submit">Search</button> </form> <form action="/rdf" method="post"> <label for="rdf_query">SPARQL Query:</label> <textarea id="rdf_query" name="rdf_query" placeholder="Enter your SPARQL query here"></textarea> <button type="submit">Run Query</button> </form> </body> </html> Search Results Page The search_results.html file displays the results of the search query. search_results.html: HTML <!DOCTYPE html> <html> <head> <title>Search Results</title> <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}"> </head> <body> <h1>Search Results</h1> <ul> {% for result in results %} <li><a href="{{ result.url }}">{{ result.title }}</a></li> {% endfor %} </ul> <a href="/">Back to Home</a> </body> </html> SPARQL Query Results Page The rdf_results.html file shows the results of SPARQL queries. rdf_results.html: HTML <!DOCTYPE html> <html> <head> <title>SPARQL Query Results</title> <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}"> </head> <body> <h1>SPARQL Query Results</h1> {% if results %} <table border="1" cellpadding="5" cellspacing="0"> <thead> <tr> <th>Subject</th> <th>Label</th> <th>Description</th> </tr> </thead> <tbody> {% for row in results %} <tr> <td>{{ row[0] }}</td> <td>{{ row[1] }}</td> <td>{{ row[2] }}</td> </tr> {% endfor %} </tbody> </table> {% else %} <p>No results found for your query.</p> {% endif %} </body> </html> 6. Application Home Page 7. SPARQL Query Example Query Attack Pattern To list all attack patterns described in the RDF data, the user can input: SPARQL SELECT ?subject ?label ?description WHERE { ?subject rdf:type <http://stix.mitre.org/AttackPattern> . ?subject rdfs:label ?label . ?subject <http://stix.mitre.org/description> ?description . } Result Practical Applications 1. Threat Intelligence The web application’s search functionality can be used to monitor and analyze emerging threats. By integrating real threat intelligence data, security professionals can use the application to track malware, detect phishing attempts, and stay updated on threat actor activities. 2. Data Analysis RDFLib’s SPARQL querying capabilities allow for sophisticated data analysis. Security researchers can use SPARQL queries to identify patterns, relationships, and trends within the RDF data, providing valuable insights for threat analysis and incident response. 
3. Integration With Security Systems The Flask application can be integrated with existing security systems to enhance its functionality: SIEM systems: Feed search results and RDF data into Security Information and Event Management (SIEM) systems for real-time threat detection and analysis. Automated decision-making: Use RDF data to support automated decision-making processes, such as alerting on suspicious activities based on predefined patterns. Conclusion This tutorial has demonstrated how to build a Flask-based web application that integrates semantic web search and RDF data management for a cybersecurity use case. By utilizing Flask, RDFLib, and SPARQL, the application provides a practical tool for managing and analyzing cybersecurity data. The provided code examples and explanations offer a foundation for developing more advanced features and integrating the application with real-world threat intelligence sources. As cyber threats continue to evolve, using semantic web technologies and RDF data will become increasingly important for effective threat detection and response.
Smart-Doc is a powerful documentation generation tool that helps developers easily create clear and detailed API documentation for Java projects. With the growing popularity of WebSocket technology, Smart-Doc has added support for WebSocket interfaces starting from version 3.0.7. This article will detail how to use Smart-Doc to generate Java WebSocket interface documentation and provide a complete example of a WebSocket server. Overview of WebSocket Technology First, let's briefly understand WebSocket technology. The WebSocket protocol provides a full-duplex communication channel, making data exchange between the client and server simpler and more efficient. In Java, developers can easily implement WebSocket servers and clients using JSR 356: Java API for WebSocket. WebSocket Annotations Overview In Java WebSocket, the @ServerEndpoint annotation is used to define a POJO class as a WebSocket server endpoint. Methods in the annotated class, marked with the event annotations below, are called automatically when WebSocket events (such as connection establishment or message reception) occur. Besides @ServerEndpoint, there are several other WebSocket-related annotations: @OnOpen: This method is triggered when a client establishes a WebSocket connection with the server. It is usually used to initialize resources or send a welcome message. @OnMessage: This method is triggered when the server receives a message from the client. It is responsible for processing the received message and performing the corresponding operations. @OnClose: This method is triggered when the client closes the WebSocket connection. It is usually used to release resources or perform cleanup work. @OnError: This method is triggered if an error occurs during WebSocket communication. It handles error situations, such as logging or notifying the user. Introduction to Smart-Doc Smart-Doc is a lightweight API documentation generation tool based on Java. It supports extracting interface information from source code and comments, automatically generating documentation in Markdown format. For WebSocket projects, this means you can directly extract documentation from your ServerEndpoint classes without manually writing tedious documentation descriptions. The project is hosted on GitHub. Configuring Smart-Doc to Generate WebSocket Interface Documentation Preparing the Environment Ensure your development environment has the following components installed: Java 17 or higher Maven or Gradle as the build tool Latest version of Smart-Doc plugin WebSocket server implementation library, such as jakarta.websocket Creating a WebSocket Server Adding Plugin Dependency Add the Smart-Doc plugin in the pom.xml file: <plugins> <plugin> <groupId>com.ly.smart-doc</groupId> <artifactId>smart-doc-maven-plugin</artifactId> <version>[Latest version]</version> <configuration> <!--smart-doc--> <configFile>./src/main/resources/smart-doc.json</configFile> <!--Exclude jars that fail to load third-party dependent source code--> </configuration> </plugin> </plugins> Creating a WebSocket Server Endpoint Define the message type (Message), a simple POJO representing the message received from the client. public class Message { private String content; // getter and setter methods } Define the response type (SampleResponse), a simple POJO representing the response message to be sent back to the client. 
public class SampleResponse { private String responseContent; // getter and setter methods } Implement the message decoder (MessageDecoder), responsible for converting the message sent by the client from JSON format to a Message object. public class MessageDecoder implements Decoder.Text<Message> { private static final ObjectMapper objectMapper = new ObjectMapper(); @Override public Message decode(String s) throws DecodeException { try { return objectMapper.readValue(s, Message.class); } catch (Exception e) { throw new DecodeException(s, "Unable to decode text to Message", e); } } @Override public boolean willDecode(String s) { return (s != null); } @Override public void init(EndpointConfig endpointConfig) { } @Override public void destroy() { } } Implement the response encoder (MessageResponseEncoder). public class MessageResponseEncoder implements Encoder.Text<SampleResponse> { private static final ObjectMapper objectMapper = new ObjectMapper(); @Override public String encode(SampleResponse response) { try { return objectMapper.writeValueAsString(response); } catch (Exception e) { throw new RuntimeException("Unable to encode SampleResponse", e); } } @Override public void init(EndpointConfig endpointConfig) { } @Override public void destroy() { } } Use the ServerEndpoint annotation to create a simple WebSocket server. /** * WebSocket server endpoint example. */ @Component @ServerEndpoint(value = "/ws/chat/{userId}", decoders = {MessageDecoder.class}, encoders = {MessageResponseEncoder.class}) public class ChatEndpoint { /** * Called when a new connection is established. * * @param session the client session * @param userId the user ID */ @OnOpen public void onOpen(Session session, @PathParam("userId") String userId) { System.out.println("Connected: " + session.getId() + ", User ID: " + userId); } /** * Called when a message is received from the client. * * @param message the message sent by the client * @param session the client session * @return the response message */ @OnMessage public SampleResponse receiveMessage(Message message, Session session) { System.out.println("Received message: " + message); return new SampleResponse(message.getContent()); } /** * Called when the connection is closed. * * @param session the client session */ @OnClose public void onClose(Session session) { System.out.println("Disconnected: " + session.getId()); } /** * Called when an error occurs. * * @param session the client session * @param throwable the error */ @OnError public void onError(Session session, Throwable throwable) { throwable.printStackTrace(); } } Configuring Smart-Doc Create a smart-doc.json configuration file to let Smart-Doc know how to generate documentation. { "serverUrl": "http://smart-doc-demo:8080", // Set the server address, not required "outPath": "src/main/resources/static/doc" // Specify the output path of the document } Generating Documentation Run the following command in the command line to generate documentation: mvn smart-doc:websocket-html Viewing the Documentation After the documentation is generated, you can find it in the src/main/resources/static/doc/websocket directory. Open the websocket-index.html file in a browser to view the WebSocket API documentation. Conclusion Automatically generating Java WebSocket interface documentation with Smart-Doc not only saves a lot of manual documentation writing time but also ensures the accuracy and timely updates of the documentation. 
A good documentation management strategy can significantly improve development efficiency and code quality. With tools like Smart-Doc, you can focus more on developing WebSocket applications without worrying about documentation maintenance.
When I was a child, I used to lie on the bed and gaze for a long time at the patterns on an old Soviet rug, seeing animals and fantastical figures within them. Now, I more often look at code, but similar images still emerge in my mind. Like on the rug, these images form repetitive patterns. They can be either pleasing or repulsive. Today, I want to tell you about one such unpleasant pattern that can be found in programming. Scenario Imagine a service that processes a client registration request and sends an event about it to Kafka. In this article, I will show an implementation example that I consider an antipattern and suggest an improved version. Option 1: Methodcentipede The Java code below shows the code of the RegistrationService class, which processes the request and sends the event. Java public class RegistrationService { private final ClientRepository clientRepository; private final KafkaTemplate<Object, Object> kafkaTemplate; private final ObjectMapper objectMapper; public void registerClient(RegistrationRequest request) { var client = clientRepository.save(Client.builder() .email(request.email()) .firstName(request.firstName()) .lastName(request.lastName()) .build()); sendEvent(client); } @SneakyThrows private void sendEvent(Client client) { var event = RegistrationEvent.builder() .clientId(client.getId()) .email(client.getEmail()) .firstName(client.getFirstName()) .lastName(client.getLastName()) .build(); Message message = MessageBuilder .withPayload(objectMapper.writeValueAsString(event)) .setHeader(KafkaHeaders.TOPIC, "topic-registration") .setHeader(KafkaHeaders.KEY, client.getEmail()) .build(); kafkaTemplate.send(message).get(); } @Builder public record RegistrationEvent(int clientId, String email, String firstName, String lastName) {} } The structure of the code can be simplified as follows: Here, you can see that the methods form an unbroken chain through which the data flows, like through a long, narrow intestine. The methods in the middle of this chain are responsible not only for the logic directly described in their body but also for the logic of the methods they call and their contracts (e.g., the need to handle specific errors). All methods preceding the invoked one inherit its entire complexity. For example, if kafkaTemplate.send has a side effect of sending an event, then the calling sendEvent method also acquires the same side effect. The sendEvent method also becomes responsible for serialization, including handling its errors. Testing individual parts of the code becomes more challenging because there is no way to test each part in isolation without using mocks. 
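To illustrate the last point, here is a rough sketch of what a unit test for Option 1 tends to look like: every collaborator has to be mocked before the logic under test can run at all. The sketch is not from the original article; it assumes JUnit 5, Mockito, and Spring Kafka 3.x (where KafkaTemplate.send returns a CompletableFuture), an all-args constructor on RegistrationService (for example, generated by Lombok), and that RegistrationRequest is a record with email, firstName, and lastName components. Java

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.concurrent.CompletableFuture;
import org.junit.jupiter.api.Test;
import org.mockito.ArgumentCaptor;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.messaging.Message;

@SuppressWarnings({"unchecked", "rawtypes"})
class RegistrationServiceTest {

    private final ClientRepository clientRepository = mock(ClientRepository.class);
    private final KafkaTemplate<Object, Object> kafkaTemplate = mock(KafkaTemplate.class);

    private final RegistrationService service =
            new RegistrationService(clientRepository, kafkaTemplate, new ObjectMapper());

    @Test
    void registerClientPersistsClientAndSendsEvent() {
        // Every collaborator has to be stubbed, otherwise the method under test fails
        when(clientRepository.save(any(Client.class)))
                .thenAnswer(invocation -> invocation.getArgument(0));
        when(kafkaTemplate.send(any(Message.class)))
                .thenReturn(CompletableFuture.completedFuture(null));

        service.registerClient(new RegistrationRequest("john@example.org", "John", "Doe"));

        // The only way to inspect the produced event is to capture the message handed to the mock
        ArgumentCaptor<Message> captor = ArgumentCaptor.forClass(Message.class);
        verify(kafkaTemplate).send(captor.capture());
        String payload = (String) captor.getValue().getPayload();
        assertTrue(payload.contains("john@example.org"));
    }
}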
Option 2: Improved Version Code: Java public class RegistrationService { private final ClientRepository clientRepository; private final KafkaTemplate<Object, Object> kafkaTemplate; private final ObjectMapper objectMapper; @SneakyThrows public void registerClient(RegistrationController.RegistrationRequest request) { var client = clientRepository.save(Client.builder() .email(request.email()) .firstName(request.firstName()) .lastName(request.lastName()) .build()); Message<String> message = mapToEventMessage(client); kafkaTemplate.send(message).get(); } private Message<String> mapToEventMessage(Client client) throws JsonProcessingException { var event = RegistrationEvent.builder() .clientId(client.getId()) .email(client.getEmail()) .firstName(client.getFirstName()) .lastName(client.getLastName()) .build(); return MessageBuilder .withPayload(objectMapper.writeValueAsString(event)) .setHeader(KafkaHeaders.TOPIC, "topic-registration") .setHeader(KafkaHeaders.KEY, event.email()) .build(); } @Builder public record RegistrationEvent(int clientId, String email, String firstName, String lastName) {} } The diagram is shown below: Here, you can see that the sendEvent method is completely absent, and kafkaTemplate.send is responsible for sending the message. The entire process of constructing the message for Kafka has been moved to a separate mapToEventMessage method. The mapToEventMessage method has no side effects, and its responsibility boundaries are clearly defined. Exceptions related to serialization and message sending are part of the individual methods' contracts and can be handled separately. The mapToEventMessage method is a pure function. When a function is deterministic and has no side effects, we call it a "pure" function. Pure functions are: Easier to read Easier to debug Easier to test Independent of the order in which they are called Simple to run in parallel Recommendations I would suggest the following techniques that can help avoid such antipatterns in the code: Testing Trophy Approach One Pile Technique Test-Driven Development (TDD) All these techniques are closely related and complement each other. Testing Trophy This is an approach to test coverage that emphasizes integration tests, which verify the service's contract as a whole. Unit tests are used for individual functions that are difficult or costly to test through integration tests. I have described tests with this approach in my articles: Ordering Chaos: Arranging HTTP Request Testing in Spring Enhancing the Visibility of Integration Tests Isolation in Testing with Kafka One Pile This technique is described in Kent Beck's book "Tidy First?". The main idea is that reading and understanding code is harder than writing it. If the code is broken into too many small parts, it may be helpful to first combine it into a whole to see the overall structure and logic, and then break it down again into more understandable pieces. In the context of this article, the suggestion is not to break the code into methods until it fulfills the required contract. Test-Driven Development This approach separates the effort of writing code that implements the contract from the effort of designing that code. We don't try to create a good design and write code that meets the requirements simultaneously; instead, we separate these tasks. The development process looks like this: Write tests for the service contract using the Testing Trophy approach. Write code in the One Pile style, ensuring that it fulfills the required contract. 
Don't worry about code design quality. Refactor the code. All the code is written, and we have a complete understanding of the implementation and potential bottlenecks. Conclusion The article discusses an example of an antipattern that can lead to difficulties in maintaining and testing code. Approaches like Testing Trophy, One Pile, and Test-Driven Development allow you to structure your work in a way that prevents code from turning into an impenetrable labyrinth. By investing time in the proper organization of code, we lay the foundation for the long-term sustainability and ease of maintenance of our software products. Thank you for your attention to the article, and good luck in your quest for writing simple code!