Integration refers to the process of combining software parts (or subsystems) into one system. An integration framework is a lightweight utility that provides libraries and standardized methods to coordinate messaging among different technologies. As software connects the world in increasingly complex ways, integration is what makes it all possible by facilitating app-to-app communication. Learn more about this cornerstone of modern software development by keeping a pulse on industry topics such as integrated development environments, API best practices, service-oriented architecture, enterprise service buses, communication architectures, integration testing, and more.
I'm originally a .NET developer, and the breadth and depth of this framework are impressive. You probably know the phrase, "When all you have is a hammer, everything looks like a nail." Nevertheless, although I have written a few articles about .NET-based Lambda functions, this time I decided to leave the hammer aside and look for another tool to serve the purpose.

Background

Choosing Python was easy; it is simple to program, has many prebuilt libraries, and is well supported by AWS Lambda. So, challenge accepted! Here's the background for the problem I wanted to solve: our product uses an external system that sends messages to our server, but sometimes a message is dropped. This inconsistency may reveal a bigger problem, so the goal was to raise an alert whenever a message hasn't arrived. Simple. So, I wrote a Python script that compares the outgoing messages (from the external system) to the incoming messages (our Elasticsearch DB). I tested this script locally, while the ultimate goal was to run it in our AWS environment. AWS Lambda is perfect for running Python-based scripts. This article describes the journey to a working AWS Lambda function based on Lambda layers.

First Hurdle: Connecting to Elasticsearch

Our data is stored in Elasticsearch, so I used the Python library to query our data and find this inconsistency. I thought it wouldn't be a problem, until I ran the script for the first time and received the following error:

"The client noticed that the server is not Elasticsearch and we do not support this unknown product"

If you face this problem too, you might not be using the traditional Elasticsearch engine but one of its variants. In my case, I was using OpenSearch, a fork of Elasticsearch. Since the official Elasticsearch Python client checks for product compatibility, it rejected the connection to non-Elasticsearch servers. It can be fixed by changing your library: instead of importing the Elasticsearch library, import OpenSearch.

Python
pip install opensearch-py
# and in the Python script - import the OpenSearch library instead of the Elasticsearch one
# from elasticsearch import Elasticsearch
from opensearchpy import OpenSearch

Besides the OpenSearch library, I also needed to install and import additional libraries for my sophisticated script:

requests
colorama (for colored text)
datetime (for datetime, timedelta, timezone)
pytz (timezone conversion)
math

To summarize, I wrote a Python script that worked perfectly fine on my workstation. The next step was running the same logic in the AWS environment.

Second Hurdle: Python Packages in an AWS Lambda Function

When I loaded the script to AWS, things went wrong. First, I copied the Python libraries used by my Lambda function; I downloaded them locally, for example:

Plain Text
pip install opensearch-py -t python/my-packages/

Then, I tried to run the Lambda function but received an error:

Plain Text
AttributeError: module 'requests' has no attribute 'get'

All the packages (e.g., requests, opensearch-py, etc.) were placed directly in the Lambda function's working folder, which led to namespace collisions and issues during import. When Python attempts to resolve imports, it may incorrectly reference your working folder as a module. Diving deeper into this problem, I understood that Python mistakenly treats the local folder as the library and tries to resolve the method get() from it. To isolate the problem, I disabled the code for my REST requests and tried to test the connection to Elasticsearch.
A similar error occurred when calling a method on the OpenSearch client:

Plain Text
ImportError: cannot import name 'OpenSearch' from 'opensearchpy'

To solve that problem, I could have changed the structure of the Python packages under the working directory. Nevertheless, I chose to move the Python libraries and place them elsewhere to better manage the packages, their versions, and their dependencies. In other words, I used Lambda layers.

The Solution: Using Layers

Why? First, you can write a perfectly working AWS Lambda function without using layers at all. If you use external Python libraries, you can package them into your Lambda function by zipping the packages as part of the code. However, layers with Python provide several advantages that enhance code modularity, reusability, and deployment efficiency. Layers are cross-function resources, managed separately under the Lambda service. Lambda layers in Python streamline development, promote best practices, and simplify dependency management, making them a powerful tool for serverless application design. Here's a summary of the key benefits:

1. Code Reusability: Sharing Across Multiple Functions
Advantage: Common Python libraries or custom utilities can be shared across numerous Lambda functions without duplicating the code.
Example: If several functions use requests or opensearch-py, you can package these dependencies in a layer and reuse them across all functions, reducing redundancy.

2. Reduced Deployment Package Size: Improved Simplicity and Cost Savings
Advantage: Offloading large dependencies to a layer reduces the main deployment package size. This simplifies versioning and keeps your Lambda codebase lightweight. In addition, you save on storage and deployment time by reducing deployment size and avoiding duplicated dependencies.
Example: A deployment package containing only your function code is smaller and easier to manage, while the heavy dependencies reside in a reusable layer. As for cost savings, smaller deployment packages lead to less storage usage in S3 for Lambda deployments and faster upload times.

3. Faster Iteration
Advantage: When only the code logic changes, you can update the function without repackaging or redeploying the dependencies in the layer.
Example: Modify and deploy just the Lambda code while keeping the dependencies in the unchanged layer.

4. Consistency Across Functions
Advantage: Layers ensure consistent library versions across multiple Lambda functions.
Example: A shared layer with a specific version of boto3 or pandas guarantees that all functions rely on identical library versions, avoiding compatibility issues.

5. Improved Development Workflow and Collaboration
Advantage: Developers can test shared libraries or modules locally, package them in a layer, and use them consistently in all Lambda functions. Layers can be shared within teams or across AWS accounts, facilitating collaboration.
Example: A common utility module (e.g., custom logging or data transformation functions) can be developed once and used by all functions.

6. Support for Complex Dependencies
Advantage: Layers simplify the inclusion of Python packages with native or compiled dependencies.
Example: Tools like numpy and scipy often require compilation. Layers allow these dependencies to be built in a compatible environment (e.g., using Docker) and shared.

7. Simplified Version Management
Advantage: Layers can be versioned, allowing you to update a specific layer without affecting other versions used by other functions.
Each version has a unique ARN (Amazon Resource Name), and you can choose different versions for different AWS Lambda functions.
Example: You can deploy a new version of a layer with updated libraries while keeping older Lambda functions tied to a previous version.

In general, the usage of layers increases when multiple Lambda functions use the same Python libraries or shared code. Nevertheless, I suggest using layers even if you have only a few Lambda functions, as you set the foundations for future growth. With that, you keep layers modular and can focus only on specific functionality.

Setting the Layers' Priority

After loading the layers, you can associate them with your Lambda function. The merge order is a valuable feature when building the Lambda function. Layers are applied in a stacked order of priority, which determines how files from multiple layers and the Lambda function's deployment package interact. Here's how Lambda resolves files and directories when using layers:

Function Deployment Package Takes the Highest Priority
Files in the Lambda function's deployment package override files from all layers. If a file exists both in the deployment package and in a layer, the file from the deployment package is used.
Example: The file utils.py exists in both the function package and a layer; the version in the function deployment package will be used.

The Last-Applied Layer Has a Higher Priority
The last-added layer (highest in the stack) takes precedence if multiple layers contain the same file.
Example: Layer 1 contains config.json, and Layer 2 (added after Layer 1) also contains config.json; the version from Layer 2 will override the version from Layer 1.

File System Union
When running the Lambda function, files from all layers and the deployment package are combined into a single file system. If there are no conflicts (i.e., duplicate filenames), all files from layers and the function are available. If there are conflicts, the two rules mentioned above apply.
Example: Layer 1 contains lib1.py; Layer 2 contains lib2.py. Both lib1.py and lib2.py will be available in the Lambda execution environment.

Practical Implications of Using Layers

1. Overriding Dependencies
If multiple layers provide the same Python library (e.g., requests), the version from the layer with the highest priority (last-added) will be used. Always ensure the correct versions of dependencies are in the right layers to avoid unintended overrides.

2. Custom Logic Overriding Layer Files
Use the Lambda deployment package to override any files provided by layers if you need to make adjustments without modifying the layer itself.

3. Debugging Conflicts
Conflicts between files in layers and the deployment package can lead to unexpected behavior. Carefully document and manage the contents of each layer.

Layer Priority in Execution

When the Lambda function runs:
AWS Lambda combines files from all layers and the deployment package into the /opt directory and the root file system.
The Lambda environment resolves file paths based on the priority stack mentioned above: files from the deployment package override files from /opt, and files from the last-added layer in /opt override earlier layers.

Best Practices for Managing Layer Priority

1. Separate Concerns
Use distinct layers for different purposes (e.g., one for shared libraries, another for custom utilities).

2. Version Control
Explicitly version layers to ensure reproducibility and avoid conflicts when updates are applied.
3. Minimize Overlap
Avoid including duplicate files or libraries in multiple layers to prevent unintentional overrides.

4. Test Layer Integration
Test Lambda functions thoroughly when adding or updating layers to identify conflicts early.

By understanding the layer priority and resolution process, you can effectively design and manage Lambda functions to ensure predictable behavior and seamless integration of dependencies.

Building Python Packages

If you want to use a pre-built Python package as a layer, download the package's files locally. You can specify the requested version for this package, but it's not mandatory.

Python
pip install <package-name>==<version> --target <local-folder>
## for example:
pip install requests==2.32 --target c:\temp

Once you have the files locally, place them in a folder hierarchy. The Python layer should have the following structure:

Plain Text
├── <package-folder-name>/
│   └── python/
│       └── lib/
│           └── python3.10/
│               └── site-packages/
│                   └── <package-content>

Do not overlook the Python version in the folder name (e.g., python3.10), which must align with your Lambda function's Python version. After creating this hierarchy with the Python package's files, you only need to zip it and load it as a Lambda layer. Here's how I copied the opensearch Python package locally:

Shell
mkdir opensearch_layer
cd opensearch_layer
mkdir -p python/lib/python3.10/site-packages
pip install opensearch-py -t python/lib/python3.10/site-packages/
zip -r opensearch_layer.zip python

Since my Lambda function was based on Python 3.10, I named the python folder accordingly. With that, you can create your own Python packages locally and load them as layers in your Lambda environment.

Wrapping Up

Lambda layers make managing shared code and dependencies across multiple Lambda functions easy, keeping things organized and straightforward. By separating common libraries or utilities, layers help reduce the size of your deployment packages and make updates quicker — no need to redeploy your entire function to change a dependency. This not only saves time but also keeps your functions consistent and reusable. With Lambda layers, building and maintaining serverless applications becomes smoother and more efficient, leaving you more time to focus on creating great solutions. I hope you find this content valuable. Keep on coding!
Human Capital Management (HCM) cloud systems, such as Oracle HCM and Workday, are vital for managing core HR operations. However, migrating to these systems and conducting the necessary testing can be complex. Robotic Process Automation (RPA) provides a practical solution to streamline these processes. Organizations can accelerate the implementation and operationalization of HCM cloud applications by using RPA for data migration, multi-factor authentication (MFA) handling, post-deployment role assignment, and User Acceptance Testing (UAT). This article offers practical guidance for finance and IT teams on leveraging RPA tools to enhance HCM cloud implementation. By sharing best practices and real-world examples, we aim to present a roadmap for effectively applying RPA across various HCM platforms to overcome common implementation challenges.

Introduction

Through our work with HCM cloud systems, we’ve witnessed their importance in managing employee data, payroll, recruitment, and compliance. However, transitioning from legacy systems presents challenges such as complex data migration, secure API integrations, and multi-factor authentication (MFA). Additionally, role-based access control (RBAC) adds compliance complexities. Robotic Process Automation (RPA) can automate these processes, reducing manual effort and errors while improving efficiency. This article explores how RPA tools, especially UiPath, can address these challenges, showcasing use cases and practical examples to help organizations streamline their HCM cloud implementations.

Role of RPA in HCM Cloud Implementation and Testing

RPA provides a powerful means to streamline repetitive processes, reduce manual effort, and enhance operational efficiency. Below is a list of areas where RPA plays a key role in HCM cloud implementation and testing.

1. Automating Data Migration and Validation

Migrating employee data from legacy systems to HCM cloud platforms can be overwhelming, especially with thousands of records to transfer. In several migration projects we managed, ensuring accuracy and consistency was critical to avoid payroll or compliance issues. Early on, we realized that manual efforts were prone to errors and delays, which is why we turned to RPA tools like UiPath to streamline these processes. In one project, we migrated employee data from a legacy payroll system to Oracle HCM. Our bot read records from Excel files, validated missing IDs and job titles, and flagged errors for quick resolution. This automation reduced a two-week manual effort to just a few hours, ensuring an accurate and smooth transition. Without automation, these discrepancies would have caused delays or disrupted payroll, but the bot gave our HR team confidence by logging and isolating issues for easy correction.

Lessons from Experience

Token refresh for API access: To prevent disruptions, we implemented automatic token refresh logic, ensuring smooth uploads.
Batch processing for efficiency: In high-volume migrations, batch processing avoided API rate limits and system timeouts.
Comprehensive error logging: Detailed logs allowed us to pinpoint and resolve issues without needing full reviews.
Validation at key stages: Bots validated data both before and after migration, ensuring compliance and data integrity.

Seeing firsthand how automation reduced errors, saved time, and gave HR teams peace of mind has been deeply rewarding.
These experiences have confirmed my belief that RPA isn’t just a tool — it’s essential for ensuring seamless, reliable HCM transitions.

2. Handling Multi-Factor Authentication (MFA) and Secure Login

Many cloud platforms require multi-factor authentication (MFA), which disrupts standard login routines for bots. However, we have addressed this by programmatically enabling RPA bots to handle MFA through integration with SMS- or email-based OTP services. This allows seamless automation of login processes, even with additional security layers.

Example: Automating Login to HCM Cloud With MFA Handling

In one of our projects, we automated the login process for an HCM cloud platform using UiPath, ensuring smooth OTP retrieval and submission. The bot launched the HCM portal, entered the username and password, retrieved the OTP from a connected SMS service, and completed the login process. This approach ensured that critical workflows were executed without manual intervention, even when MFA was enabled.

Best Practices from Experience

Secure credential management: Stored user credentials in vaults to protect sensitive data.
Seamless OTP integration: Integrated bots with external OTP services, ensuring secure and real-time code retrieval.
Validation and error handling: Bots were designed to log each login attempt for easy tracking and troubleshooting.

This method not only ensured secure access but also improved operational efficiency by eliminating the need for manual logins. Our collaborative efforts using RPA have enabled businesses to navigate MFA challenges smoothly, reducing downtime and maintaining continuity in critical processes.

3. Automating Role-Based Access Control (RBAC) Setup

It’s essential that users are assigned the correct authorizations in an HCM cloud, with ongoing maintenance of these permissions as individuals transition within the organization. Even with a well-defined scheme in place, it’s easy for someone to be shifted into a role that they shouldn’t hold. To address this challenge, we have leveraged RPA to automate the assignment of roles, ensuring adherence to least-privilege access models.

Example: Automating Role Assignment Using UiPath

In one of our initiatives, we automated the role assignment process by reading role assignments from an Excel file and executing API calls to update user roles in the HCM cloud. The bot efficiently processed the data and assigned the appropriate roles based on the entries in the spreadsheet. The automation workflow involved reading the role assignments, iterating through each entry, and sending HTTP requests to the HCM cloud API to assign roles. This streamlined approach not only improved accuracy but also minimized the risk of human error in role assignments.

Best Practices from Experience

Secure credential management: We utilized RPA vaults or secret managers, such as HashiCorp Vault, to securely manage bot credentials, ensuring sensitive information remains protected.
Audit logging: Implementing comprehensive audit logs allowed us to track role changes effectively, providing a clear history of modifications and enhancing accountability.

By automating role assignments, we ensured that users maintained the appropriate access levels throughout their career transitions, aligning with compliance requirements and enhancing overall security within the organization. Our collaborative efforts in implementing RPA have significantly improved the management of user roles, contributing to a more efficient and secure operational environment.
4. Automated User Acceptance Testing (UAT)

User Acceptance Testing (UAT) is a critical phase in ensuring that HCM cloud systems meet business requirements before going live. To streamline this process, we implemented RPA bots capable of executing predefined UAT scenarios, comparing expected and actual results, and automatically logging the test results. This automation not only accelerates the testing phase but also ensures that any issues are identified and resolved before the system goes live. In one of our initiatives, we developed a UiPath workflow that executed UAT scenarios from an Excel sheet, capturing the outcome of each test. By systematically verifying each functionality, we ensured that the system performed as intended, significantly reducing the risk of post-deployment issues.

Best Practices from Experience

Automate end-to-end scenarios: We ensured higher test coverage by automating comprehensive end-to-end scenarios, providing confidence that the system meets all functional requirements.
Report generation for UAT results: By implementing automated report generation for UAT results, we maintained clear documentation of test outcomes, facilitating transparency and accountability within the team.

Through our collaborative efforts in automating UAT, we significantly improved the testing process, allowing for a smooth and successful go-live experience.

5. API Rate Limits and Error Handling With Exponential Backoff

Integrating with HCM systems through APIs often involves navigating rate limits that can disrupt workflows. To address this challenge, we implemented robust retry logic within our RPA bots, utilizing exponential backoff to gracefully handle API rate limit errors (see the sketch at the end of this article). This approach not only minimizes disruptions but also ensures that critical operations continue smoothly. In our projects, we established a retry mechanism using UiPath that intelligently handled API requests. By incorporating an exponential backoff strategy, the bot could wait progressively longer between retries when encountering rate limit errors, thereby reducing the likelihood of being locked out.

Best Practices from Experience

Implement retry logic: We incorporated structured retry logic to handle API requests, allowing the bot to efficiently manage rate limits while ensuring successful execution.
Logging and monitoring: By logging attempts and outcomes during the retry process, we maintained clear visibility into the bot's activities, which facilitated troubleshooting and optimization.

By effectively managing API rate limits and implementing error-handling strategies, our collaborative efforts have enhanced the reliability of our automation initiatives, ensuring seamless integration with HCM systems and maintaining operational efficiency.

Conclusion

RPA tools significantly accelerate the implementation and testing of Human Capital Management (HCM) cloud systems by automating complex and repetitive tasks. This includes data migration, multi-factor authentication (MFA) handling, role-based access setup, User Acceptance Testing (UAT) execution, and error handling. By automating these processes, organizations can complete them more quickly, without the need for human intervention, resulting in fewer errors.
Organizations that adopt RPA for HCM cloud projects can achieve several key benefits:

Faster deployment timelines: Automation reduces the time required for implementation and testing, allowing organizations to go live more swiftly.
Improved data accuracy: Automated processes minimize the risk of human error during data migration and other critical tasks, ensuring that information remains accurate and reliable.
Better compliance: RPA helps organizations adhere to security protocols and regulations by consistently managing tasks that require strict compliance measures.

To fully realize the benefits of RPA in scaling HCM cloud implementations and maintaining operational efficiency over time, organizations should follow best practices. These include secure credential management, effective exception handling, and comprehensive reporting. By doing so, enterprises can leverage RPA to optimize their HCM cloud systems effectively.
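The exponential backoff approach described in section 5 is not tied to any particular RPA product. Below is a minimal sketch in plain Java, assuming a hypothetical ApiCall abstraction that stands in for a single HCM API request and a RateLimitException thrown on HTTP 429; none of these names come from the original project, and UiPath-specific wiring is omitted.

Java
import java.time.Duration;

public class BackoffRetry {

    // Hypothetical stand-in for one HCM API call.
    interface ApiCall<T> {
        T execute() throws Exception;
    }

    // Hypothetical exception representing an API rate-limit response.
    static class RateLimitException extends Exception { }

    // Retries the call with exponential backoff: 1s, 2s, 4s, ... capped at maxDelay.
    public static <T> T callWithBackoff(ApiCall<T> call, int maxAttempts) throws Exception {
        Duration delay = Duration.ofSeconds(1);
        Duration maxDelay = Duration.ofSeconds(30);
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.execute();
            } catch (RateLimitException e) {
                if (attempt == maxAttempts) {
                    throw e; // give up after the last attempt
                }
                System.out.printf("Rate limited on attempt %d, waiting %ds before retry%n",
                        attempt, delay.toSeconds());
                Thread.sleep(delay.toMillis());
                // Double the wait time, but never exceed the cap.
                delay = delay.multipliedBy(2).compareTo(maxDelay) > 0 ? maxDelay : delay.multipliedBy(2);
            }
        }
        throw new IllegalStateException("maxAttempts must be at least 1");
    }
}

In an RPA context the same pattern applies: the bot records each attempt and the wait time, which is what makes the retry behavior auditable.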
LLMs need to connect to the real world. LangChain4j tools, combined with Apache Camel, make this easy. Camel provides robust integration, connecting your LLM to any service or API. This lets your AI interact with databases, queues, and more, creating truly powerful applications. We'll explore this powerful combination and its potential.

Setting Up the Development Environment

Ollama: Provides a way to run large language models (LLMs) locally. You can run many models, such as Llama3, Mistral, CodeLlama, and many others on your machine, with full CPU and GPU support.
Visual Studio Code: With the Kaoto, Java, and Quarkus plugins installed.
OpenJDK 21
Maven
Quarkus 3.17
Quarkus Dev Services: A feature of Quarkus that simplifies the development and testing of applications that rely on external services such as databases, messaging systems, and other resources.

You can download the complete code from the following GitHub repo. The following instructions will be executed in Visual Studio Code:

1. Creating the Quarkus Project

Shell
mvn io.quarkus:quarkus-maven-plugin:3.17.6:create \
    -DprojectGroupId=dev.mikeintoch \
    -DprojectArtifactId=camel-agent-tools \
    -Dextensions="camel-quarkus-core,camel-quarkus-langchain4j-chat,camel-quarkus-langchain4j-tools,camel-quarkus-platform-http,camel-quarkus-yaml-dsl"

2. Adding langchain4j Quarkus Extensions

Shell
./mvnw quarkus:add-extension -Dextensions="io.quarkiverse.langchain4j:quarkus-langchain4j-core:0.22.0"
./mvnw quarkus:add-extension -Dextensions="io.quarkiverse.langchain4j:quarkus-langchain4j-ollama:0.22.0"

3. Configure Ollama to Run the LLM

Open the application.properties file and add the following lines:

Properties files
#Configure Ollama local model
quarkus.langchain4j.ollama.chat-model.model-id=qwen2.5:0.5b
quarkus.langchain4j.ollama.chat-model.temperature=0.0
quarkus.langchain4j.ollama.log-requests=true
quarkus.langchain4j.log-responses=true
quarkus.langchain4j.ollama.timeout=180s

Quarkus uses Ollama to run the LLM locally and also auto-wires the configuration for use in the Apache Camel components in the following steps.

4. Creating the Apache Camel Route Using Kaoto

Create a new folder named routes in the src/main/resources folder. Create a new file in the src/main/resources/routes folder, name it route-main.camel.yaml, and Visual Studio Code opens the Kaoto visual editor. Click on the +New button, and a new route will be created. Click on the circular arrows to replace the timer component. Search for and select the platform-http component from the catalog. Configure the required platform-http properties:

Set path with the value /camel/chat

By default, platform-http will serve on port 8080. Click on the Add Step icon in the arrow after the platform-http component. Search for and select the langchain4j-tools component in the catalog. Configure the required langchain4j-tools properties:

Set Tool Id with the value my-tools.
Set Tags with store (defining tags is for grouping the tools to use with the LLM).

You must process the user input message into a form the langchain4j-tools component can use, so click on the Add Step icon in the arrow after the platform-http component. Search for and select the Process component in the catalog. Configure the required properties:

Set Ref with the value createChatMessage.

The Process component will use the createChatMessage method you will create in the following step.

5. Create a Processor to Send User Input to the LLM

Create a new Java class in the src/main/java folder named Bindings.java.
Java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.HashMap;

import org.apache.camel.BindToRegistry;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;

public class Bindings extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Routes are loaded from YAML files.
    }

    @BindToRegistry(lazy = true)
    public static Processor createChatMessage() {
        return new Processor() {
            public void process(Exchange exchange) throws Exception {
                String payload = exchange.getMessage().getBody(String.class);
                List<ChatMessage> messages = new ArrayList<>();

                String systemMessage = """
                    You are an intelligent store assistant. Users will ask you questions about store products.
                    Your task is to provide accurate and concise answers.

                    In the store we have shirts, dresses, pants, and shoes with no specific category.

                    %s

                    If you are unable to access the tools to answer the user's query,
                    tell the user that the requested information is not available at this time and that they can try again later.
                    """;

                String tools = """
                    You have access to a collection of tools.
                    You can use multiple tools at the same time.
                    Complete your answer using data obtained from the tools.
                    """;

                messages.add(new SystemMessage(systemMessage.formatted(tools)));
                messages.add(new UserMessage(payload));
                exchange.getIn().setBody(messages);
            }
        };
    }
}

This class creates a Camel Processor that transforms the user input into an object the langchain4j component in the route can handle. It also gives the LLM context for using tools and explains the agent's task.

6. Creating Apache Camel Tools for Use With the LLM

Create a new file in the src/main/resources/routes folder, name it route-tool-products.camel.yaml, and open the Kaoto visual editor in Visual Studio Code. Click on the +New button, and a new route will be created. Click on the circular arrows to replace the timer component. Search for and select the langchain4j-tools component in the catalog. To configure langchain4j-tools, click on the All tab and search the Endpoint properties:

Set Tool Id with the value productsbycategoryandcolor.
Set Tags with store (the same as in the main route).
Set Description with the value Query database products by category and color (a brief description of the tool).

Add the parameters that will be used by the tool:

NAME: category, VALUE: string
NAME: color, VALUE: string

These parameters will be assigned by the LLM for use in the tool and are passed via headers. Add the SQL component to query the database: click on Add Step after the langchain4j-tools component, then search for and select the SQL component. Configure the required SQL property (Query) with the following value:

SQL
Select name, description, category, size, color, price, stock
from products
where Lower(category) = Lower(:#category) and Lower(color) = Lower(:#color)

To handle the parameters used in the query, add a Convert Header component to convert the parameters to the correct object type. Click on the Add Step button after langchain4j-tools, then search for and select the Convert Header To transformation in the catalog.
Configure the required properties for the component:

Name with the value category
Type with the value String

Repeat the steps with the following values:

Name with the value color
Type with the value String

As a result, this is what the route looks like:

Finally, you need to transform the query result into an object that the LLM can handle; in this example, you transform it into JSON. Click the Add Step button after the SQL component, and add the Marshal component. Configure the data format properties for the Marshal component and select JSON from the list.

7. Configure Quarkus Dev Services for PostgreSQL

Add the Quarkus extension that provides PostgreSQL for dev purposes by running the following command in a terminal:

Shell
./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-jdbc-postgresql"

Open application.properties and add the following lines:

Properties files
#Configuring devservices for Postgresql
quarkus.datasource.db-kind=postgresql
quarkus.datasource.devservices.port=5432
quarkus.datasource.devservices.init-script-path=db/schema-init.sql
quarkus.datasource.devservices.db-name=store

Finally, create the SQL script to load the database. Create a folder named db in src/main/resources, and in this folder create a file named schema-init.sql with the following content:

SQL
DROP TABLE IF EXISTS products;

CREATE TABLE IF NOT EXISTS products (
    id SERIAL NOT NULL,
    name VARCHAR(100) NOT NULL,
    description varchar(150),
    category VARCHAR(50),
    size VARCHAR(20),
    color VARCHAR(20),
    price DECIMAL(10,2) NOT NULL,
    stock INT NOT NULL,
    CONSTRAINT products_pk PRIMARY KEY (id)
);

INSERT INTO products (name, description, category, size, color, price, stock) VALUES
('Blue shirt', 'Cotton shirt, short-sleeved', 'Shirts', 'M', 'Blue', 29.99, 10),
('Black pants', 'Jeans, high waisted', 'Pants', '32', 'Black', 49.99, 5),
('White Sneakers', 'Sneakers', 'Shoes', '40', 'White', 69.99, 8),
('Floral Dress', 'Summer dress, floral print, thin straps.', 'Dress', 'M', 'Pink', 39.99, 12),
('Skinny Jeans', 'Dark denim jeans, high waist, skinny fit.', 'Pants', '28', 'Blue', 44.99, 18),
('White Sneakers', 'Casual sneakers, rubber sole, minimalist design.', 'Shoes', '40', 'White', 59.99, 10),
('Beige Chinos', 'Casual dress pants, straight cut, elastic waist.', 'Pants', '32', 'Beige', 39.99, 15),
('White Dress Shirt', 'Cotton shirt, long sleeves, classic collar.', 'Shirts', 'M', 'White', 29.99, 20),
('Brown Hiking Boots', 'Waterproof boots, rubber sole, perfect for hiking.', 'Shoes', '42', 'Brown', 89.99, 7),
('Distressed Jeans', 'Distressed denim jeans, mid-rise, regular fit.', 'Pants', '30', 'Blue', 49.99, 12);

8. Include the Routes to Be Loaded by the Quarkus Project

Camel Quarkus supports several domain-specific languages (DSLs) for defining Camel routes. It is also possible to include YAML DSL routes by adding the following line to the application.properties file:

Properties files
# routes to load
camel.main.routes-include-pattern = routes/*.yaml

This will load all routes in the src/main/resources/routes folder.

9. Test the App

Run the application using Maven: open a terminal in Visual Studio Code and run the following command.

Shell
mvn quarkus:dev

Once it has started, Quarkus calls Ollama and runs your LLM locally. Open a terminal and verify with the following command:

Shell
ollama ps
NAME            ID              SIZE      PROCESSOR    UNTIL
qwen2.5:0.5b    a8b0c5157701    1.4 GB    100% GPU     4 minutes from now

Also, Quarkus creates a container running PostgreSQL and creates a database and schema. You can connect using the psql command.
Shell
psql -h localhost -p 5432 -U quarkus -d store

And query the products table:

Shell
store=# select * from products;
 id |        name        |                    description                     | category | size | color | price | stock
----+--------------------+----------------------------------------------------+----------+------+-------+-------+-------
  1 | Blue shirt         | Cotton shirt, short-sleeved                        | Shirts   | M    | Blue  | 29.99 |    10
  2 | Black pants        | Jeans, high waisted                                | Pants    | 32   | Black | 49.99 |     5
  3 | White Sneakers     | Sneakers                                           | Shoes    | 40   | White | 69.99 |     8
  4 | Floral Dress       | Summer dress, floral print, thin straps.           | Dress    | M    | Pink  | 39.99 |    12
  5 | Skinny Jeans       | Dark denim jeans, high waist, skinny fit.          | Pants    | 28   | Blue  | 44.99 |    18
  6 | White Sneakers     | Casual sneakers, rubber sole, minimalist design.   | Shoes    | 40   | White | 59.99 |    10
  7 | Beige Chinos       | Casual dress pants, straight cut, elastic waist.   | Pants    | 32   | Beige | 39.99 |    15
  8 | White Dress Shirt  | Cotton shirt, long sleeves, classic collar.        | Shirts   | M    | White | 29.99 |    20
  9 | Brown Hiking Boots | Waterproof boots, rubber sole, perfect for hiking. | Shoes    | 42   | Brown | 89.99 |     7
 10 | Distressed Jeans   | Distressed denim jeans, mid-rise, regular fit.     | Pants    | 30   | Blue  | 49.99 |    12
(10 rows)

To test the app, send a POST request to localhost:8080/camel/chat with a plain-text body requesting some product (a minimal client sketch follows the conclusion). If the LLM hallucinates, try again, modifying your request slightly. You can see how the LLM uses the tool and gets information from the database using the natural language request provided. The LLM identifies the parameters and sends them to the tool. If you look in the request log, you can find the tools and parameters the LLM is using to create the answer.

Conclusion

You've explored how to leverage the power of LLMs within your integration flows using Apache Camel and the LangChain4j component. We've seen how this combination allows you to seamlessly integrate powerful language models into your existing Camel routes, enabling you to build sophisticated applications that can understand, generate, and interact with human language.
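For completeness, here is a minimal sketch of the test request from step 9, using the JDK's built-in HTTP client. The endpoint and port come from the route configured above; the question text is only an illustrative example, not taken from the article.

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ChatClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Plain-text question sent to the Camel platform-http endpoint defined in route-main.camel.yaml.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/camel/chat"))
                .header("Content-Type", "text/plain")
                .POST(HttpRequest.BodyPublishers.ofString("Do you have blue pants in the store?"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // The LLM's answer, built from the SQL tool results.
    }
}

Any HTTP client (curl, Postman, etc.) works equally well; the point is simply that the body is plain text and the LLM turns it into tool parameters.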
While working on the series of tutorial blogs for GET, POST, PUT, PATCH, and DELETE requests for API automation using Playwright Java, I noticed that there is no logging method provided by the Playwright Java framework to log the requests and responses. In the REST Assured framework, we have the log().all() method, which is used for logging the request as well as the response; Playwright does not provide any such method. However, Playwright offers a text() method in the APIResponse interface that can be used to extract the response text. Playwright currently does not have a feature to access the request body and request headers while performing API testing. An issue has already been raised on GitHub for this feature; please add an upvote to it so this feature gets implemented soon in the framework. In this blog, we will learn how to extract the response and create a custom logger to log the response of the API tests using Playwright Java.

How to Log Response Details in Playwright Java

Before we begin with the actual coding and implementation of the logger, let’s discuss the dependencies, configuration, and setup required for logging the response details.

Getting Started

As we are working with Playwright Java using Maven, we will use the Log4j 2 Maven dependency to log the response details. The Jackson Databind dependency will be used for parsing the JSON response.

XML
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>${log4j-api-version}</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>${log4j-core-version}</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>${jackson-databind-version}</version>
</dependency>

As a best practice, the versions of these dependencies are added in the properties block, as it allows users to easily check and update to newer versions of the dependencies in the project.

XML
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <log4j-core-version>2.24.1</log4j-core-version>
    <log4j-api-version>2.24.1</log4j-api-version>
    <jackson-databind-version>2.18.0</jackson-databind-version>
</properties>

The next step is to create a log4j2.xml file in the src/main/resources folder. This file stores the configuration for logs, such as the log level, where the logs should be printed (to the console or to a file), the pattern of the log layout, etc.

XML
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
    <Appenders>
        <Console name="LogToConsole" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Logger name="io.github.mfaisalkhatri" level="info" additivity="false">
            <AppenderRef ref="LogToConsole"/>
        </Logger>
        <Root level="error">
            <AppenderRef ref="LogToConsole"/>
        </Root>
    </Loggers>
</Configuration>

The <Appenders> section contains the information related to log printing and its pattern format. The <Loggers> section contains the log-level details and how they should be printed. There can be multiple <Logger> entries in this section, each with a different log level, such as “info,” “debug,” “trace,” etc.
Implementing the Custom Logger

A new Java class, Logger, is created to implement the methods for logging the response details.

Java
public class Logger {

    private final APIResponse response;
    private final org.apache.logging.log4j.Logger log;

    public Logger (final APIResponse response) {
        this.response = response;
        this.log = LogManager.getLogger (getClass ());
    }

    //...
}

This class has the APIResponse interface of Playwright and the Logger interface from Log4j declared at the class level to ensure that we can reuse them in further methods in the same class and avoid duplicate lines of code. The constructor of the Logger class is used for creating its objects. The APIResponse interface is added as a parameter, as we need the response object to be supplied to this class for logging the respective details. The logResponseDetails() method implements the function to log all the response details.

Java
public void logResponseDetails () {
    String responseBody = this.response.text ();
    this.log.info ("Logging Response Details....\n responseHeaders: {}, \nstatusCode: {},",
        this.response.headers (), this.response.status ());
    this.log.info ("\n Response body: {}", prettyPrintJson (responseBody));
    this.log.info ("End of Logs!");
}

The responseBody variable stores the response received after executing the API request. The next lines of code print the response details: headers and status code. The response returned is not pretty printed, meaning the JSON is shown as a String wrapped over multiple lines, which makes the logs look untidy. Hence, we have created a prettyPrintJson() method that consumes the response in String format and returns it in a pretty format.

Java
private String prettyPrintJson (final String text) {
    if (StringUtils.isNotBlank (text) && StringUtils.isNotEmpty (text)) {
        try {
            final ObjectMapper objectMapper = new ObjectMapper ();
            final Object jsonObject = objectMapper.readValue (text, Object.class);
            return objectMapper.writerWithDefaultPrettyPrinter ()
                .writeValueAsString (jsonObject);
        } catch (final JsonProcessingException e) {
            this.log.error ("Failed to pretty print JSON: {}", e.getMessage (), e);
        }
    }
    return "No response body found!";
}

This method accepts a String parameter, to which the response text is supplied. A check is performed using the if() condition to verify that the supplied text is not blank, null, or empty. If the condition is satisfied, the ObjectMapper class from the Jackson Databind dependency is instantiated. Next, the text value of the response is read, converted, and returned in pretty-printed JSON format using the writerWithDefaultPrettyPrinter() and writeValueAsString() methods of the ObjectMapper class. If the response is null, empty, or blank, the method returns the message “No response body found!” and exits.

How to Use the Logger in the API Automation Tests

The Logger class needs to be instantiated, and its respective methods need to be called, in order to get the response details printed while the tests are executed. We need to make sure that we don’t write duplicate code everywhere in the tests to get the response details logged. To handle this, we use the BaseTest class and create a new method, logResponse(APIResponse response). This method accepts the APIResponse as a parameter, and the logResponseDetails() method is called after instantiating the Logger class.

Java
public class BaseTest {

    //...
    protected void logResponse (final APIResponse response) {
        final Logger logger = new Logger (response);
        logger.logResponseDetails ();
    }
}

As the BaseTest class is extended by all the test classes, it becomes easier to call the method directly in a test class. The HappyPathTests class that we have used in previous blogs for adding happy-path scenario tests for the GET, POST, PUT, PATCH, and DELETE requests already extends the BaseTest class. Let’s print the response logs for the POST and GET API request tests. The testShouldCreateNewOrders() test verifies that new orders are created successfully. Let’s add the logResponse() method to this test and get the response printed in the logs.

Java
public class HappyPathTests extends BaseTest {

    @Test
    public void testShouldCreateNewOrders() {
        final int totalOrders = 4;
        for (int i = 0; i < totalOrders; i++) {
            this.orderList.add(getNewOrder());
        }

        final APIResponse response = this.request.post("/addOrder", RequestOptions.create()
            .setData(this.orderList));

        logResponse (response);

        //...
        // Assertion Statements...
    }
}

The logResponse() method is called after the POST request is sent. This lets us know what response was received before we start performing assertions. The testShouldGetAllOrders() test verifies the GET /getAllOrders API request. Let’s add the logResponse() method to this test and check the response logs getting printed.

Java
public class HappyPathTests extends BaseTest {

    @Test
    public void testShouldGetAllOrders() {
        final APIResponse response = this.request.get("/getAllOrders");
        logResponse (response);

        final JSONObject responseObject = new JSONObject(response.text());
        final JSONArray ordersArray = responseObject.getJSONArray("orders");

        assertEquals(response.status(), 200);
        assertEquals(responseObject.get("message"), "Orders fetched successfully!");
        assertEquals(this.orderList.get(0).getUserId(), ordersArray.getJSONObject(0).get("user_id"));
        assertEquals(this.orderList.get(0).getProductId(), ordersArray.getJSONObject(0).get("product_id"));
        assertEquals(this.orderList.get(0).getTotalAmt(), ordersArray.getJSONObject(0).get("total_amt"));
    }
}

The logResponse() method is called after the GET request is sent and will print the response logs in the console.

Test Execution

The tests will be executed in order: the POST request will be executed first so new orders are created, and then the GET request will be executed. This is done using the testng-restfulecommerce-postandgetorder.xml file.

XML
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Restful ECommerce Test Suite">
    <test name="Testing Happy Path Scenarios of Creating and Updating Orders">
        <classes>
            <class name="io.github.mfaisalkhatri.api.restfulecommerce.HappyPathTests">
                <methods>
                    <include name="testShouldCreateNewOrders"/>
                    <include name="testShouldGetAllOrders"/>
                </methods>
            </class>
        </classes>
    </test>
</suite>

On executing the above testng-restfulecommerce-postandgetorder.xml file, the POST and GET API requests are executed, and the responses are printed in the console, as seen in the screenshots below.

POST API Response Logs

GET API Response Logs

It can be seen from the screenshots that the response logs are printed correctly in the console and can now help us know the exact results of the test execution.

Summary

Adding a custom logger to the project can help in multiple ways.
It provides us with the details of the test data that was processed along with the final output, giving us better control over the tests. It also helps in debugging failed tests and finding a fix quickly. If the response data is readily available, we can quickly spot the pattern of an issue and work toward a quick fix. As Playwright does not provide any method for logging the response details, we can add our own custom logger to fetch the required details. Happy testing!
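As a side note, while Playwright cannot yet expose the request body for logging, the test itself builds that body, so it can be logged before the request is sent. Below is a minimal, hypothetical sketch of such a helper; the RequestLogger class and its use alongside BaseTest are assumptions for illustration, not part of the original code.

Java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.logging.log4j.LogManager;

public class RequestLogger {

    private static final org.apache.logging.log4j.Logger LOG = LogManager.getLogger (RequestLogger.class);
    private static final ObjectMapper MAPPER = new ObjectMapper ();

    // Logs the payload object before it is passed to RequestOptions.create().setData(...).
    public static void logRequest (final Object requestData) {
        try {
            LOG.info ("Request body: {}", MAPPER.writerWithDefaultPrettyPrinter ().writeValueAsString (requestData));
        } catch (final JsonProcessingException e) {
            LOG.error ("Failed to serialize request body: {}", e.getMessage (), e);
        }
    }
}

A test could call RequestLogger.logRequest(this.orderList) just before the post() call, giving request-side visibility until Playwright exposes request details natively.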
Event-driven architecture enables systems to respond to real-life events, such as when a user's profile is updated. This post illustrates building reactive event-driven applications that handle data loss by combining Spring WebFlux, Apache Kafka, and a Dead Letter Queue. When used together, these provide the framework for creating fault-tolerant, resilient, and high-performance systems, which are important for large applications that need to handle massive volumes of data efficiently.

Features Used in This Article

Spring WebFlux: Provides a reactive paradigm that relies on non-blocking backpressure for the simultaneous processing of events.
Apache Kafka: Reactive Kafka producers and consumers help in building competent and adaptable processing pipelines.
Reactive Streams: They do not block the execution of Kafka producer and consumer streams.
Dead Letter Queue (DLQ): A DLQ temporarily stores messages that could not be processed for various reasons. DLQ messages can later be reprocessed to prevent data loss and make event processing resilient.

Reactive Kafka Producer

A reactive Kafka producer pushes messages in parallel and does not block other threads while publishing. It is beneficial where large volumes of data have to be processed. It blends well with Spring WebFlux and handles backpressure within microservices architectures. This integration helps not only in processing large messages but also in managing cloud resources well. The reactive Kafka producer referenced here can be found on GitHub.

Reactive Kafka Consumer

A reactive Kafka consumer pulls Kafka messages without blocking and maintains high throughput. It also supports backpressure handling and integrates perfectly with WebFlux for real-time data processing. The reactive consumer pipeline manages resources well and is highly suited for applications deployed in the cloud. The reactive Kafka consumer referenced here can be found on GitHub.

Dead Letter Queue (DLQ)

A DLQ is a simple Kafka topic that stores messages that were sent by producers but failed to be processed. In real time, we need systems to be functional without blockages and failures, and this can be achieved by redirecting such messages to the Dead Letter Queue in an event-driven architecture.

Benefits of Dead Letter Queue Integration

It provides a fallback mechanism to prevent interruption in the flow of messages.
It allows the retention of unprocessed data and helps to prevent data loss.
It stores metadata about the failure, which eventually aids in analyzing the root cause.
It allows unprocessed messages to be retried as many times as needed.
It decouples error handling and makes the system resilient.

Failed messages can be pushed to the DLQ from the producer code, and a DLQ handler needs to be created in the reactive consumer (a sketch of both is shown at the end of this section).

Conclusion

The incorporation of a DLQ with a reactive producer and consumer helps build resilient, fault-tolerant, and efficient event-driven applications. Reactive producers ensure non-blocking message publication; on the other hand, reactive consumers process messages with backpressure, improving responsiveness. The DLQ provides a fallback mechanism that prevents disruptions and data loss. The above architecture ensures isolation of system failures and helps in debugging, which can further be addressed to improve applications. The reference code can be found in the GitHub producer and GitHub consumer repositories. More details regarding the reactive producer and consumer can be found at ReactiveEventDriven.
The Spring for Apache Kafka documentation provides more information about DLQs.
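The article's original producer and consumer listings live on GitHub and are not reproduced above. As a minimal sketch of the two pieces referenced in the DLQ section (publishing a failed message to a DLQ topic and handling records in the consumer), the following assumes Reactor Kafka with String keys and values; topic names such as orders and orders-dlq and the group id are placeholders, not the repository's actual names.

Java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.sender.KafkaSender;
import reactor.kafka.sender.SenderOptions;
import reactor.kafka.sender.SenderRecord;

public class DlqSketch {

    private final KafkaSender<String, String> sender;

    public DlqSketch(String bootstrapServers) {
        Map<String, Object> producerProps = new HashMap<>();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        this.sender = KafkaSender.create(SenderOptions.<String, String>create(producerProps));
    }

    // Producer side: push a message that failed processing to the DLQ topic ("orders-dlq" is a placeholder name).
    public Mono<Void> sendToDlq(String key, String failedPayload) {
        SenderRecord<String, String, String> failedRecord =
                SenderRecord.create(new ProducerRecord<>("orders-dlq", key, failedPayload), key);
        return sender.send(Mono.just(failedRecord))
                .doOnNext(result -> System.out.println("Sent to DLQ, offset=" + result.recordMetadata().offset()))
                .then();
    }

    // Consumer side: process the main topic reactively and redirect failed records to the DLQ.
    public Flux<Void> consume(String bootstrapServers) {
        Map<String, Object> consumerProps = new HashMap<>();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-consumer");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ReceiverOptions<String, String> receiverOptions = ReceiverOptions.<String, String>create(consumerProps)
                .subscription(Collections.singleton("orders"));

        return KafkaReceiver.create(receiverOptions).receive()
                .concatMap(rec -> process(rec.value())
                        // On failure, keep the payload in the DLQ instead of losing it.
                        .onErrorResume(error -> sendToDlq(rec.key(), rec.value()))
                        .doFinally(signal -> rec.receiverOffset().acknowledge()));
    }

    // Placeholder business logic; replace with the real message handler.
    private Mono<Void> process(String payload) {
        return Mono.fromRunnable(() -> System.out.println("Processing " + payload));
    }
}

The key design point is that the consumer acknowledges the offset in both the success and failure paths, so the pipeline keeps moving while the failed payload is preserved for later reprocessing.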
AWS Lambda is enhancing the local IDE experience to make developing Lambda-based applications more efficient. These new features enable developers to author, build, debug, test, and deploy Lambda applications seamlessly within their local IDE using Visual Studio Code (VS Code).

Overview

The improved IDE experience is part of the AWS Toolkit for Visual Studio Code. It includes a guided setup walkthrough that helps developers configure their local environment and install the necessary tools. The toolkit also includes sample applications that demonstrate how to iterate on your code both locally and in the cloud. Developers can save and configure build settings to accelerate application builds and generate configuration files for setting up a debugging environment. With these enhancements, you can sync local code changes quickly to the cloud or perform full application deployments, enabling faster iteration. You can test functions locally or in the cloud and create reusable test events to streamline the testing process. The toolkit also provides quick action buttons for building, deploying, and invoking functions locally or remotely. Additionally, it integrates with AWS Infrastructure Composer, allowing for a visual application-building experience directly within the IDE.

Anyone who has worked with AWS Lambda knows that the built-in console editor is not developer-friendly and has a poor UI. It is hard to make code changes and test them from that editor. On top of that, if you don't want to use AWS-based CI/CD services, automated deployment can be a bit challenging for a developer. You can use Terraform or GitHub Actions, but AWS has now come up with another, better option to deploy and test AWS Lambda code. Considering these challenges, AWS Lambda recently announced the Visual Studio Code integration feature, which is part of the AWS Toolkit. It makes it easier for developers to push, build, test, and deploy code. Although it still has a 50 MB code-size restriction, it now provides an IDE experience similar to working in Visual Studio Code on your local host. This includes dependency installation with the extension, a split-screen layout, writing code and running test events without opening new windows, and live logs from CloudWatch for efficient debugging. In addition, Amazon Q in the console can be used as a coding assistant, similar to a copilot. This provides a better developer experience.

To start using Visual Studio Code for AWS Lambda:

1. You should have Visual Studio Code installed locally. After that, install the AWS Toolkit from the marketplace. You will see that the webpage redirects to Visual Studio Code and opens this tab. You can go ahead and install it.

2. After installing the AWS Toolkit, you will see the AWS logo in the left sidebar under extensions. Click on that.

3. Now, select the option to connect with your AWS account.

4. After a successful connection, you will get a tab to invoke the Lambda function locally. As you can see below, this option requires AWS SAM to be installed to invoke Lambda locally. After login, it will also pull all the Lambda functions from your AWS account. If you want to update those, you can right-click on a Lambda function and select Upload Lambda. It will ask you for the zip file of the Lambda function. Alternatively, you can select samples from the explorer option in the left sidebar. If you want to go with remote invoke, you can click on any Lambda function visible to you in the sidebar.
5. If you want to create your own Lambda function and test the integration, you can click on the Application Builder option and select AWS CLI or SAM. If you want the Lambda code deployed to the AWS account, you can select the last option, as shown in the above screenshot. After that, you will be asked to log in to your AWS account if you are not already logged in. Then, it will let you deploy the Lambda code. This way, you can easily deploy code from your IDE, which can be convenient for developer testing.

Conclusion

Lambda is enhancing the local development experience for Lambda-based applications by integrating with the VS Code IDE and AWS Toolkit. This upgrade simplifies the code-test-deploy-debug workflow. A step-by-step walkthrough helps you set up your local environment and explore Lambda functionality through sample applications. With intuitive icon shortcuts and the Command Palette, you can build, debug, test, and deploy Lambda applications seamlessly, enabling faster iteration without the need to switch between multiple tools.
Multi-tenancy has become an important feature for modern enterprise applications that need to serve multiple clients (tenants) from a single application instance. While earlier versions of Hibernate supported multi-tenancy, the implementation required significant manual configuration and custom strategies to handle tenant isolation, which resulted in higher complexity and slower processes, especially for applications with a large number of tenants. Hibernate 6.3.0, which was released on December 15, 2024, addressed these limitations with enhanced multi-tenancy support through better tools for tenant identification, schema resolution, and improved performance for handling tenant-specific operations. This article looks at how Hibernate 6.3.0 significantly improves on the traditional multi-tenancy implementation.

Traditional Multi-Tenancy Implementation

Before Hibernate 6.3.0 was released, multi-tenancy required developers to set up tenant strategies manually. For example, developers needed to implement custom logic for schema or database resolution and use the Hibernate-provided CurrentTenantIdentifierResolver interface to identify the current tenant, which was not only error-prone but also added significant operational complexity and performance overhead. Below is an example of how multi-tenancy was traditionally configured:

Java
public class CurrentTenantIdentifierResolverImpl implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        return TenantContext.getCurrentTenant(); // Custom logic for tenant resolution
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}

SessionFactory sessionFactory = new Configuration()
    .setProperty("hibernate.multiTenancy", "SCHEMA")
    .setProperty("hibernate.tenant_identifier_resolver", CurrentTenantIdentifierResolverImpl.class.getName())
    .buildSessionFactory();

Output:

VB.NET
INFO: Resolving tenant identifier
INFO: Current tenant resolved to: tenant_1
INFO: Setting schema for tenant: tenant_1

Improved Multi-Tenancy in Hibernate 6.3.0

Hibernate 6.3.0 added significant improvements to simplify and enhance multi-tenancy management, and the framework now offers:

1. Configurable Tenant Strategies

Developers can use built-in strategies or extend them to meet specific application needs. For example, a schema-based multi-tenancy strategy can be implemented without excessive boilerplate code. Example of the new configuration:

Java
@Configuration
public class HibernateConfig {

    @Bean
    public MultiTenantConnectionProvider multiTenantConnectionProvider() {
        return new SchemaBasedMultiTenantConnectionProvider(); // Built-in schema-based provider
    }

    @Bean
    public CurrentTenantIdentifierResolver tenantIdentifierResolver() {
        return new CurrentTenantIdentifierResolverImpl();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder builder) {
        return builder
            .dataSource(dataSource())
            .properties(hibernateProperties())
            .packages("com.example.app")
            .persistenceUnit("default")
            .build();
    }
}

Log output:

VB.NET
INFO: Multi-tenant connection provider initialized
INFO: Tenant resolved: tenant_2
INFO: Schema switched to: tenant_2

2. Performance Optimization

In earlier versions, switching between tenant schemas could result in latency, especially for frequent tenant-specific queries.
2. Performance Optimization

In earlier versions, switching between tenant schemas could introduce latency, especially for frequent tenant-specific queries. Hibernate 6.3.0 optimized schema switching at the database connection level, which results in faster query execution and improved performance in multi-tenant environments. Example output:

Plain Text

DEBUG: Connection switched to tenant schema: tenant_3
DEBUG: Query executed in 15ms on schema: tenant_3

3. Improved API Support

Hibernate 6.3.0 introduces new APIs that allow developers to manage tenant-specific sessions and transactions more effectively. For example, developers can programmatically open a session for a specific tenant using short API calls.

Java

Session session = sessionFactory.withOptions()
    .tenantIdentifier("tenant_4")
    .openSession();

Transaction transaction = session.beginTransaction();
// Perform tenant-specific operations
transaction.commit();
session.close();

The above snippet makes it easy to handle multi-tenant operations dynamically, as the framework ensures proper schema management behind the scenes.

Conclusion

The improvements in Hibernate 6.3.0 address many of the challenges that developers faced with earlier implementations. By simplifying tenant identification and schema resolution, the framework reduces the development effort required for a scalable multi-tenancy setup. Additionally, the performance optimizations ensure that tenant-specific operations such as schema switching and query execution are faster, more reliable, and more efficient.
"Event-driven business architecture" is the idea of a software architecture that centers business applications on events and processes. In the following, I will explain its core concepts and describe how building business applications around events can enhance architectural flexibility. What Is an Event? When we talk about "event-driven business architecture," one of the first questions that naturally arises is: what exactly is an event? Let’s examine the characteristics of an event in detail. First of all, an event is something that happens, either unexpectedly or as a planned occurrence. Events are omnipresent in our business world — from a new customer request to the sending of an invoice or the receipt of a payment. When these events occur regularly and follow a defined sequence, we speak of a process. For example, the three events mentioned could be part of an order process. From a technical perspective, an event has several properties that describe its nature. The most fundamental property is the timestamp indicating when the event occurred. Another crucial property is the actor who initiated the event — this could be a customer submitting a new request, an employee creating an invoice, or a technical system such as a transport system’s control unit sending tracking information. Last but not least, we have the kind of event that describes the business context in which it occurred. This typically relates to what we call a "business process" and defines the formal framework the event is bound to. Architectural Impact of Event-Driven Thinking What does an event mean in the context of software architecture? Modern software architectures are already knowing various kinds of events. The most common use of events in software systems is monitoring a system state. Logging and monitoring systems are fundamentally based on events. Even message broker systems like Kafka operate on an event-driven paradigm. However, in business architecture, an event is tightly coupled to a business process that defines specific business goals. This represents the key distinction between technical events and events in a business context. Consider an outgoing invoice process in a company: you have an event when the invoice is created, another when it is sent to the customer, and potentially several events when payments are received. All these events are bound to the context of the "Invoice Process" — the business process that defines and orchestrates these activities. In contrast to the typical data-centric approach to building business applications, an event-driven business architecture focuses on business processes and their events. This does not mean that we ignore data — events are inherently bound to data. The key difference is that the primary focus lies on the event itself — something that has happened as an immutable fact. We cannot change these facts; we can only react to them. This paradigm leads to a fundamentally different approach to software design, where we focus on managing and responding to events rather than merely modeling real-world objects. This shift in perspective fundamentally changes how we design our systems. In a traditional data-centric architecture, we typically model our system around data entities that can be created, updated, and deleted (CRUD). However, in an event-driven architecture, we capture a sequence of immutable facts — events that have occurred and cannot be changed or deleted. 
For example, instead of updating a customer's address in a database, we record an AddressChanged event with the new address details. If we need to know a customer's current address, we look at the most recent address change event. Another illustrative example is the budget approval process: instead of simply storing the final approval status of a budget request, we capture each approval step from different stakeholders. This shows us not only whether a budget was approved, but also who approved it, in which sequence, and how long each approval step took. This approach allows us to understand not just the current state but also the significant business events that led to it. We gain insights not just into "what is" but also into "how did we get here?"

This immutability principle has profound implications for system design. We move from thinking about state management to thinking about state derivation through event streams. Our systems become more resilient, easier to audit, and better at capturing the true nature of business processes.
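The state-derivation idea can be sketched in a few lines. Assuming the illustrative BusinessEvent record from earlier, a customer's current address is simply read from the most recent AddressChanged event rather than from a mutable database column (the event kind and payload key are, again, hypothetical):

Java

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class AddressProjection {

    // Derives the current address from an immutable event stream instead of updating a record in place
    public Optional<String> currentAddress(List<BusinessEvent> events) {
        return events.stream()
                .filter(event -> "AddressChanged".equals(event.kind()))
                .max(Comparator.comparing(BusinessEvent::occurredAt))
                .map(event -> (String) event.payload().get("address"));
    }
}

The same pattern applies to the budget approval example: every approval step remains available in the stream, so the full history can be replayed at any time.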
Real-World Applications of Event-Driven Architecture

The principles of event-driven business architecture can be found in various modern business applications. One prominent example is workflow engines based on BPMN (Business Process Model and Notation). These systems naturally implement the event-driven paradigm through their handling of BPMN catch-and-throw events, making them an excellent demonstration of event-based principles in practice. However, event-driven thinking extends beyond workflow engines. Consider customer relationship management systems that track customer interactions, supply chain management systems responding to inventory events, or financial systems processing payment events. Each of these domains benefits from treating business events as the core architectural concept.

Let's look more closely at workflow engines, as they provide a particularly clear illustration of event-driven principles. In a BPMN engine, each task represents a specific business state, while events trigger transitions between these states. These events capture not just the state change itself, but also essential business context such as the initiator, the timestamp, and the business process context. For instance, in an invoice processing workflow, such a system captures events like invoice creation, submission for approval, and payment receipt. Each event not only moves the process forward but also maintains a clear record of who did what and when. This aligns perfectly with an event-driven architecture, where business events serve as the primary drivers of process flow and system behavior. One example that demonstrates how business processes can be modeled entirely through events is the Imixs-Workflow engine, an event-driven engine focused on human-centric workflows.

Conclusion

Event-driven business architecture represents a paradigm shift in how we design and implement business applications. By focusing on events rather than just data, we create systems that better reflect the dynamic nature of business processes. This approach not only provides greater flexibility in handling business workflows but also enables better traceability and understanding of business operations. As demonstrated by workflow engines and other business applications, event-driven thinking is already proving its value in real-world scenarios. The key benefits — improved process transparency, better auditability, and more natural alignment with business operations — make it a compelling architectural choice for modern business applications. As businesses continue to digitalize and automate their processes, the importance of event-driven architectures will likely grow. The ability to capture, process, and react to business events in real time becomes increasingly crucial for maintaining a competitive advantage in today's dynamic business environment.
For years, developers have dreamed of having a coding buddy who would understand their projects well enough to automatically create intelligent code, not just pieces of it. We've all struggled with inconsistent variable naming across files, tried to recall exactly what function signature was defined months ago, and wasted valuable hours manually stitching pieces of our codebase together. This is where large language models (LLMs) come in — not as chatbots, but as powerful engines in our IDEs, changing how we produce code by finally grasping the context of our work.

Traditional code generation tools, and even basic IDE auto-completion features, usually fall short because they lack a deep understanding of the broader context; they operate on a very limited view, such as only the current file or a small window of code. The result is syntactically correct but semantically inappropriate suggestions, which the developer must constantly correct and integrate by hand. Think of a tool suggesting a variable name that is already used in another crucial module with a different meaning — a frustrating experience we've all encountered. LLMs change this game entirely by bringing a much deeper understanding to the table: analyzing your whole project, from variable declarations in several files down to function call hierarchies and even your coding style. Think of an IDE that truly understands not just the what of your code but also the why and how in the bigger scheme of things. That is the promise of LLM-powered IDEs, and it's real.

Take, for example, a state-of-the-art IDE using LLMs, like Cursor. It's not simply looking at the line you're typing; it knows what function you are in, what variables you have defined in this and related files, and the general structure of your application. That deep understanding is achieved through a few architectural components. The first is the Abstract Syntax Tree, or AST: the IDE parses your code into a tree-like representation of its grammatical constructs, which gives the LLM a structural understanding of the code, far superior to plain text. The second is a knowledge graph generated to capture semantics across files: it interlinks the class, function, and variable relationships throughout your whole project and builds an understanding of these dependencies and relationships.

Consider a simplified JavaScript example of how context is modeled:

JavaScript

/* Context model based on a single edited document and its external imports */
function Context(codeText, lineInfo, importedDocs) {
  this.current_line_code = codeText; // Line with active text selection
  this.lineInfo = lineInfo;          // Line number, location, code document structure, etc.
  this.relatedContext = {
    importedDocs: importedDocs,      // All info about imported documents and dependencies
  };
  // ... additional code details ...
}

The following flowchart shows how information flows when a developer changes their code.

Mermaid

graph LR
    A["Editor (User Code Modification)"] --> B(Context Extractor);
    B --> C{AST Structure Generation};
    C --> D[Code Graph Definition Creation];
    D --> E(LLM Context API Input);
    E --> F[LLM API Call];
    F --> G(Generated Output);
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style F fill:#aaf,stroke:#333,stroke-width:2px

The Workflow of LLM-Powered IDEs

1. Editor

The process starts with a change that you, as the developer, make in the code editor.
Perhaps you typed some new code, deleted some lines, or edited some statements. This is represented by node A.

2. Context Extractor

The change you have just made triggers the Context Extractor. This module collects all the information around your modification within the code — somewhat like an IDE detective looking for clues in the surroundings. This is represented by node B.

3. AST Structure Generation

The code snippet is then fed to the AST Structure Generation module. AST is the abbreviation for Abstract Syntax Tree. This module parses your code, much like a compiler would, and creates a tree-like representation of its grammatical structure. For LLMs, such a structured view is important for understanding the meaning of, and the relationships among, the various parts of the code. This is represented by node C, drawn with curly braces.

4. Code Graph Definition Creation

Next, the Code Graph Definition is created. This module takes the structured information from the AST and builds an even broader understanding of how your code fits in with the rest of your project. It infers dependencies between files, functions, classes, and variables and extends the knowledge graph, creating a big picture of the overall context of your codebase. This is represented by node D.

5. LLM Context API Input

All the context gathered and structured — the current code, the AST, and the code graph — is finally transformed into an input structure suitable for the large language model. This input is then sent to the LLM through a request, asking for code generation or completion. This is represented by node E.

6. LLM API Call

It is now time to actually call the LLM. At this point, the well-structured context is passed to the LLM's API. This is where the magic happens: based on its training data and the given context, the LLM produces code suggestions. This is represented by node F, colored blue to indicate that it is an important node.

7. Generated Output

The LLM returns its suggestions, and the user sees them inside the code editor. These could be code completions, code block suggestions, or even refactoring options, depending on how well the IDE understands the current context of your project. This is represented by node G.
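To make steps 2 through 5 a bit more tangible, here is a small, purely illustrative Java sketch of how an IDE plugin might bundle the extracted context into a single prompt before the LLM API call; the class, fields, and method names are hypothetical and greatly simplified compared to what real IDEs do:

Java

// Hypothetical, simplified container for the context an IDE could send to an LLM
public class LlmPromptContext {

    private final String currentCode;      // code around the developer's latest edit
    private final String astSummary;       // condensed description of the AST structure
    private final String codeGraphSummary; // related symbols pulled from the project-wide code graph

    public LlmPromptContext(String currentCode, String astSummary, String codeGraphSummary) {
        this.currentCode = currentCode;
        this.astSummary = astSummary;
        this.codeGraphSummary = codeGraphSummary;
    }

    // Flattens the gathered context into a single prompt string for the LLM request
    public String toPrompt() {
        return "### Current code\n" + currentCode
                + "\n### AST summary\n" + astSummary
                + "\n### Related symbols\n" + codeGraphSummary
                + "\n### Task\nSuggest a completion for the current code.";
    }
}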
So, how does this translate to real-world improvements? We've run benchmarks comparing traditional code completion methods with those powered by LLMs in context-aware IDEs. The results are compelling:

| Metric | Baseline (Traditional Methods) | LLM-Powered IDE (Context Aware) | Improvement |
|---|---|---|---|
| Accuracy of Suggestions (Score 0-1) | 0.55 | 0.91 | 65% higher |
| Average Latency (ms) | 20 | 250 | Acceptable for the benefit |
| Token Count in Prompt | Baseline | ~30% less (optimized context) | Optimized prompt size |

Graph: Comparison of suggestion accuracy scores across 10 different code generation tasks. A higher score indicates better accuracy.

Mermaid

graph LR
    A[Test Case 1] -->|Baseline: 0.5| B(0.9);
    A -->|LLM IDE: 0.9| B;
    C[Test Case 2] -->|Baseline: 0.6| D(0.88);
    C -->|LLM IDE: 0.88| D;
    E[Test Case 3] -->|Baseline: 0.7| F(0.91);
    E -->|LLM IDE: 0.91| F;
    G[Test Case 4] -->|Baseline: 0.52| H(0.94);
    G -->|LLM IDE: 0.94| H;
    I[Test Case 5] -->|Baseline: 0.65| J(0.88);
    I -->|LLM IDE: 0.88| J;
    K[Test Case 6] -->|Baseline: 0.48| L(0.97);
    K -->|LLM IDE: 0.97| L;
    M[Test Case 7] -->|Baseline: 0.58| N(0.85);
    M -->|LLM IDE: 0.85| N;
    O[Test Case 8] -->|Baseline: 0.71| P(0.90);
    O -->|LLM IDE: 0.90| P;
    Q[Test Case 9] -->|Baseline: 0.55| R(0.87);
    Q -->|LLM IDE: 0.87| R;
    S[Test Case 10] -->|Baseline: 0.62| T(0.96);
    S -->|LLM IDE: 0.96| T;
    style B fill:#ccf,stroke:#333,stroke-width:2px
    style D fill:#ccf,stroke:#333,stroke-width:2px

Let's break down how these coding tools performed, like watching a head-to-head competition. Imagine each entry in the comparison above as a different coding challenge (we called them "Test Case 1" through "Test Case 10"). For each challenge, we pitted two approaches against each other:

The Baseline: Think of this as the "old-school" method, either standard code suggestions or a basic AI that doesn't really "know" the project inside and out. You'll see an arrow pointing from the test case (like "Test Case 1", which we labeled node A) to its score — that's how well the baseline did.
The LLM IDE: This is the "smart" IDE we've built, the one with a deep understanding of the entire project, as if it had been studying it for weeks. Another arrow points from the same test case to the same score, but this time it tells you how the intelligent IDE performed. Notice how the result itself (like node B) is highlighted in light blue? That's our visual cue to show where the smart IDE really shined.

Take Test Case 1 (that's node A) as an example:

The arrow marked "Baseline: 0.5" means the traditional method got it right about half the time for that task.
But look at the arrow marked "LLM IDE: 0.9"! The smart IDE, because it understands the bigger picture of the project, nailed it almost every time.

If you scan through each test case, you'll quickly see a pattern: the LLM-powered IDE consistently and significantly outperforms the traditional approach. It's like having a super-knowledgeable teammate who always seems to know the right way to do things because they understand the entire project. The big takeaway here is the massive leap in accuracy when the AI truly grasps the context of your project. Yes, there's a bit more waiting time involved as the IDE does its deeper analysis, but honestly, the huge jump in accuracy and the fact that you'll spend way less time fixing errors make it a no-brainer for developers.

But it's about more than just the numbers. Think about the actual experience of coding. Engineers who've used these smarter IDEs say it feels like a weight has been lifted. They're not constantly having to keep every tiny detail of the project in their heads. They can focus on the bigger, more interesting problems, trusting that their IDE has their back on the details. Even tricky tasks like reorganizing code become less of a headache, and getting up to speed on a new project becomes much smoother because the AI acts like a built-in expert, helping you connect the dots. These LLM-powered IDEs aren't just about spitting out code; they're about making developers more powerful. By truly understanding the intricate connections within a project, these tools are poised to change how software is built.
They'll make us faster and more accurate and, ultimately, allow us to focus on building truly innovative things. The future of coding assistance is here, and it's all about having that deep contextual understanding.
The Component Object Model (COM) is a design pattern that helps you structure tests in test automation projects. Inspired by the popular Page Object Model (POM), COM goes beyond handling entire pages and focuses on specific UI components, such as buttons, text fields, dropdown menus, or other reusable elements. In this tutorial, we will explain how to implement COM to test a web application with Selenium WebDriver and Cucumber, and how this approach can make your tests more flexible, modular, and easier to maintain.

What Is the Component Object Model?

The Component Object Model (COM) is an evolution of the POM model. Instead of modeling an entire page as an object with methods interacting with all the page elements, COM breaks the user interface into individual components, such as:

Buttons
Text fields
Dropdown menus
Search bars
Tables

Each component is encapsulated in a Java class, and its interactions are managed by specific methods, allowing each element to be maintained and tested independently. This improves code reusability, maintenance, and flexibility in tests.

Why Use COM Instead of POM?

1. Increased Modularity

With COM, each component (like a button or text field) is an independent entity. If a component changes (for example, a button's text changes), you only need to modify the component class without affecting other components or pages. This allows for high modularity and prevents the need to modify entire pages for minor adjustments.

2. Reusability and Flexibility

Components can be reused across multiple pages of the application, reducing code duplication. For example, a ButtonComponent can be used on any page where the button appears, reducing redundancy and increasing test flexibility.

3. Simplified Maintenance

Maintenance is easier with COM. If a component changes (for example, a button or text field), you only need to modify the class representing that component. All tests using that component will be updated automatically without having to revisit each test scenario.

4. Adapted to Dynamic Interfaces

Modern applications are often dynamic, with interfaces that change frequently. COM is ideal for such applications because it allows components to be tested independently of the interface. You can modify or add components without affecting tests for other parts of the application.

5. Faster Test Automation

COM speeds up test automation. By centralizing interactions with components into reusable classes, testers don't need to redefine actions for every page or component. For instance, a single step definition for the action "click a button" can be used across all tests in the project, significantly reducing the time needed to automate tests.

6. Avoiding Repetition of Work Between Testers

With COM, testers no longer need to repeat the same work for every test. Centralized step definitions for common actions, such as "click a button" or "enter text," can be used by all testers in the project. This ensures consistency in tests while avoiding unnecessary repetition, improving efficiency and collaboration among testers.

COM Project Architecture

The architecture of a COM-based project is structured around three main elements: components, step definitions, and the runner.

1. Components

Each component represents a UI element, such as a button, text field, or dropdown menu. These classes encapsulate all possible interactions with the element.
Here's an example of the ButtonComponent class:

Java

public class ButtonComponent {

    private WebDriver driver;

    public ButtonComponent(WebDriver driver) {
        this.driver = driver;
    }

    public void clickButton(String buttonText) {
        WebElement button = driver.findElement(By.xpath("//button[text()='" + buttonText + "']"));
        button.click();
    }
}

2. Step Definitions

Step definitions link the steps defined in the Gherkin files to the methods in the components. They are responsible for interacting with the components and implementing the actions specified in the test scenario. Here's an example of ButtonStepDefinition:

Java

public class ButtonStepDefinition {

    private ButtonComponent buttonComponent;

    public ButtonStepDefinition(WebDriver driver) {
        this.buttonComponent = new ButtonComponent(driver);
    }

    @When("I click on the button {string}")
    public void iClickOnTheButton(String buttonText) {
        buttonComponent.clickButton(buttonText);
    }
}

3. Runner

The runner class is responsible for running the tests with JUnit or TestNG. It configures Cucumber to load the test scenarios defined in the .feature files and run them using the step definitions. Here's an example of TestRunner:

Java

@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/features",
    glue = "com.componentObjectModel.stepDefinitions",
    plugin = {"pretty", "html:target/cucumber-reports.html"}
)
public class TestRunner {
}

Writing and Explaining a Gherkin Scenario

One of the essential elements of automation with Cucumber is using the Gherkin language to write test scenarios. Gherkin allows you to describe tests in a readable and understandable way, even for non-technical people. Let's consider a scenario where we want to test the interaction with a button using the ButtonComponent we defined earlier. Here's how it might be written in Gherkin:

Gherkin

Scenario: User clicks on the "Submit" button
  Given I am on the login page
  When I click on the button "Submit"
  Then I should be redirected to the homepage

Explanation of the Scenario

This scenario describes the action where a user clicks the "Submit" button on the login page and verifies that they are redirected to the homepage after clicking.

Given I am on the login page: The initial state of the test is that the user is on the login page.
When I click on the button "Submit": The action performed in the test is clicking the "Submit" button.
Then I should be redirected to the homepage: The expected verification is that the user is redirected to the homepage after clicking the button.

Link With COM

Each step in this scenario is mapped to a step definition in ButtonStepDefinition, where the click action is handled by the ButtonComponent, making the test modular and easy to maintain.

Additional Explanation

Notice that the steps take the text displayed on the buttons (or placeholders in input fields, etc.) as parameters. This makes the scenarios more readable and generic. For example, in the scenario above, the button text "Submit" is used directly in the step "When I click on the button 'Submit'." In this way, the same step definition could be reused to test another button, such as "Login," by simply changing the text in the Gherkin scenario. This improves the reusability of the test code while making the scenarios more intuitive and flexible.

Reusability of Steps With COM

One of the key advantages of COM is the reusability of step definitions for different buttons.
For example, the same step When I click on the button {string} can be used for all buttons, regardless of the text displayed on the button. The COM approach lets you dynamically parameterize the click action based on the button text. Let's consider another scenario with a different button:

Gherkin

Scenario: User clicks on the "Login" button
  Given I am on the login page
  When I click on the button "Login"
  Then I should be redirected to the dashboard

In both cases, the same clickButton method in the ButtonComponent will be used, but the button text changes depending on the scenario. This demonstrates how COM allows reusing the same component and step definition, making tests flexible and modular.

How COM Improves Test Automation

COM improves test automation in several ways:

Reduction of code duplication: By using reusable components, you reduce code duplication in tests. For example, a ButtonComponent used on multiple pages of the application eliminates the need to write repetitive tests.
More readable and modifiable tests: Tests are clearer and easier to understand because they are decoupled from page-specific implementation details. You can focus on interacting with components without worrying about the underlying page structure.
Ease of maintenance: Any modification to a component (e.g., a button text change) only affects the component class, not the tests. This makes maintenance much simpler and faster.
More flexible tests: Tests can easily be adapted to changing UIs, as components are independent of each other. You can test new components or replace existing ones without affecting other tests.

When Is the COM Design Pattern Recommended?

This design pattern is recommended if the web application being tested uses a unified design system for all components, for example, if all buttons are declared in the same way and there is a consistent structure for all UI elements. In such cases, using COM allows you to centralize interactions with each type of component (such as buttons, text fields, etc.), making tests more modular, reusable, and easier to maintain.

Reusing Step Definitions Across Multiple Products

If you are developing and testing multiple products that use the same design system, you can create a library that encompasses all (or almost all) of the step definitions and actions for each component. This allows testers to focus solely on writing Gherkin scenarios, and the tests will be automated. Since all web elements are written in the same way (HTML), components like buttons, text fields, and other UI elements will behave consistently across all projects. As a result, testers no longer need to repeat the same tasks — defining step definitions and actions for identical components across multiple projects. This approach saves time and improves maintainability. If a web element is modified (for example, if the XPath changes), you only need to update that element in the shared library, and the modification will automatically be applied to all automation projects. This reduces the risk of errors and makes updates more efficient across different products.
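To illustrate how such a shared library extends beyond buttons, here is a hypothetical TextFieldComponent with its step definition, sketched along the same lines as the ButtonComponent above; the locator strategy and names are illustrative assumptions rather than part of the original project:

Java

// TextFieldComponent.java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class TextFieldComponent {

    private WebDriver driver;

    public TextFieldComponent(WebDriver driver) {
        this.driver = driver;
    }

    // Locates an input field by its placeholder text and types a value into it
    public void enterText(String placeholder, String value) {
        WebElement field = driver.findElement(By.xpath("//input[@placeholder='" + placeholder + "']"));
        field.clear();
        field.sendKeys(value);
    }
}

// TextFieldStepDefinition.java (separate file)
import io.cucumber.java.en.When;
import org.openqa.selenium.WebDriver;

public class TextFieldStepDefinition {

    private TextFieldComponent textFieldComponent;

    public TextFieldStepDefinition(WebDriver driver) {
        this.textFieldComponent = new TextFieldComponent(driver);
    }

    @When("I enter {string} in the field {string}")
    public void iEnterInTheField(String value, String placeholder) {
        textFieldComponent.enterText(placeholder, value);
    }
}

A Gherkin step such as When I enter "john@example.com" in the field "Email" could then be reused on any page, in any product that shares the same design system.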
Conclusion

The Component Object Model (COM) is a powerful design pattern for organizing test automation. By focusing on reusable components, COM allows you to create more modular, maintainable, and scalable tests. This pattern is particularly useful for modern applications, where the user interface changes quickly and where interacting with independent components is essential. With reusable tests, simplified maintenance, and a flexible architecture, COM is an ideal solution for test automation in Selenium and Cucumber projects.
John Vester
Senior Staff Engineer,
Marqeta
Alexey Shepelev
Senior Full-stack Developer,
BetterUp
Thomas Jardinet
IT Architect,
Rhapsodies Conseil