Kubernetes has become the go-to platform for container orchestration. However, the flexibility and scalability that make Kubernetes attractive also present substantial security challenges, and perimeter-based security has proven inadequate for these environments, which is why many organizations are switching to the zero-trust security paradigm. In this article, we will explore how to implement Zero Trust Security in Kubernetes and give DevOps teams actionable best practices for hardening their environments against emerging threats.

Understanding Zero Trust Security

Zero Trust Security is a strategic security framework built on a simple principle: never trust, always verify. Unlike traditional security models that rely on a well-defined perimeter, zero trust assumes that threats can come from both inside and outside the network. As a result, it focuses on rigorous identity verification, contextual and fine-grained access controls to resources, and continuous auditing and monitoring of all activity within the system.

The Imperative for Zero Trust in Kubernetes

Kubernetes environments are dynamic by nature: containers are constantly created, scaled, and terminated. This dynamism, combined with the highly interconnected nature of microservices, expands the attack surface and complicates security management. Traditional, perimeter-focused security measures are not enough in such environments. Zero trust's strict access controls provide a robust framework that fits Kubernetes' needs by requiring every component (user, device, or service) to be authenticated and authorized before it accesses resources.

Best Practices for Implementing Zero Trust in Kubernetes

1. Embrace Micro-Segmentation

With micro-segmentation, we divide the Kubernetes cluster into smaller, isolated segments. Kubernetes namespaces and Network Policies let DevOps teams dictate how traffic flows between pods, so that only inbound traffic from explicitly allowed pods is accepted. This restricts the lateral movement of potential attackers, confining any compromise to a small segment and limiting overall risk.
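As a minimal sketch of such a policy, the NetworkPolicy below only admits traffic to an API workload from pods carrying a specific label; the namespace, labels, and port are assumptions for illustration, not values from a real cluster.

YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api        # hypothetical policy name
  namespace: payments                # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: payments-api              # assumed workload label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend          # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 8080

Once a pod is selected by any NetworkPolicy, all ingress not explicitly allowed by some policy is denied to it, which is what confines lateral movement to the segments you define.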
2. Strengthen Identity and Access Management (IAM)

Robust IAM is the cornerstone of zero trust. Implement role-based access control (RBAC) to grant users and service accounts only the permissions they need. Avoid the default accounts, and integrate external identity providers such as OAuth or LDAP to centralize identity management. This ensures that each actor receives only minimal trust, reducing the potential for privilege escalation. (A minimal RBAC sketch follows this list.)

3. Implement Continuous Monitoring and Logging

Visibility into cluster activity is critical for detecting and responding to threats in real time. Use centralized logging solutions such as the ELK stack (Elasticsearch, Logstash, and Kibana) or Fluentd, and monitoring solutions such as Prometheus or Grafana, to track performance and security events. Enabling Kubernetes audit logs additionally supports tracing and analyzing suspicious activity, so incidents can be handled quickly.

4. Ensure Comprehensive Encryption and Data Protection

Data needs to be protected both at rest and in transit. Enforce TLS for in-cluster communications to prevent unauthorized access and tampering. Sensitive data can be managed in Kubernetes Secrets or in external tools like HashiCorp Vault. Furthermore, make sure that persistent storage volumes are encrypted, both to comply with data protection regulations and to protect against data breaches.

5. Automate Security Policies

Automation enforces consistent security policies across the Kubernetes environment. With tools such as Open Policy Agent (OPA), define policies as code and integrate them with Kubernetes admission controllers. Automated remediation can then address policy violations in real time without manual intervention or human error.

6. Adopt the Principle of Least Privilege

Restricting users and services to the bare minimum access they need greatly reduces the damage a compromised account can cause. Combine fine-grained RBAC roles with Pod Security Policies (PSPs), or their successor, Pod Security Admission, to restrict the capabilities and resources that pods can access. Avoid overly broad privileges, and review access controls regularly to stay secure.

7. Secure the Software Supply Chain

The integrity of the software supply chain must be protected. Implement image scanning with tools such as Clair or Trivy before deployment to detect vulnerabilities, and use immutable infrastructure practices along with private, trusted, and tightly controlled container registries to keep unauthorized changes out of running containers.

8. Integrate Security into CI/CD Pipelines

Embedding security into the continuous integration and continuous deployment (CI/CD) pipeline means vulnerabilities can be fixed as soon as they are detected. Use static code analysis, automated security testing, and deployment gates that enforce security checks before anything is promoted to production. Streamlining secure deployments this way keeps security proactive without slowing down development velocity.

9. Leverage Kubernetes Security Tools

Increase Kubernetes security by leveraging specialized tools: service meshes (e.g., Istio or Linkerd) to handle secure service-to-service communication, runtime security tools (e.g., Falco) to detect threats in real time, and configuration management tools (e.g., Helm) to produce consistent and secure deployments. Together, these tools form a complete defense strategy that extends Kubernetes' native security capabilities.
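To make the RBAC and least-privilege points above concrete, here is a minimal sketch of a namespaced Role and RoleBinding that grants a single service account read-only access to pods and config maps; the names and namespace are illustrative assumptions, not part of the original setup.

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-read-only
  namespace: payments                # assumed namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps"]
    verbs: ["get", "list", "watch"]  # read-only verbs only
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-read-only-binding
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: payments-api               # assumed service account
    namespace: payments
roleRef:
  kind: Role
  name: app-read-only
  apiGroup: rbac.authorization.k8s.io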
Addressing Dynamic Policy Enforcement

Dynamic policy enforcement is one of the most complex challenges when implementing zero trust in Kubernetes. Given the massively dynamic nature of Kubernetes, where workloads and configurations are continually changing, security policies need to evolve in real time without constant administrator intervention.

Solution: Policy-Driven Automation Framework

Adopting a policy-driven automation framework is pivotal in addressing this challenge. Here's how to implement it effectively:

1. Policy as Code With OPA

Integrate Open Policy Agent (OPA) with Kubernetes to define and enforce policies programmatically. Develop dynamic policies that consider contextual data such as pod labels, namespaces, and resource usage, allowing policies to adapt to the changing environment.

2. Real-Time Monitoring and Feedback Loops

Utilize Kubernetes' event-driven architecture to trigger policy evaluations whenever resources change. Implement feedback mechanisms that provide real-time alerts and automate remediation actions when policy violations occur.

3. Service Mesh Integration

Incorporate service meshes like Istio or Linkerd to manage and enforce network policies dynamically. These meshes facilitate secure communication between services, adjusting automatically to the evolving state of the cluster.

4. Continuous Validation and Testing

Embed continuous validation of policies within CI/CD pipelines to ensure their effectiveness against emerging threats. Regularly perform simulated attacks to test the resilience and adaptability of the dynamic policy enforcement mechanisms.

Implementation Steps

Define comprehensive policies: Outline security requirements and translate them into OPA policies, covering aspects like access control, resource usage, and network segmentation.
Integrate OPA with Kubernetes: Deploy OPA as an admission controller to intercept and evaluate requests against defined policies, ensuring dynamic policy decisions based on real-time data.
Set up real-time monitoring: Deploy monitoring tools such as Prometheus to track Kubernetes events and resource states, configuring alerts for policy violations and integrating them with incident response systems.
Automate remediation: Develop scripts or use Kubernetes Operators to automatically address policy violations, such as scaling down compromised pods or revoking access tokens.
Continuous improvement: Regularly review and update policies to address new threats, incorporate feedback from monitoring and audits, and provide ongoing training so DevOps teams stay current with best practices.

Benefits

Scalability: Automatically adapts policies to the dynamic Kubernetes environment, ensuring consistent security without manual overhead.
Consistency: Uniformly enforces policies across all cluster components and services, maintaining a secure environment.
Resilience: Enhances the cluster's ability to detect and respond to security threats in real time, minimizing potential damage from breaches.

Conclusion

Zero Trust Security in Kubernetes shifts the security model from a perimeter focus to an identity-aware one. For DevOps teams, implementing zero trust means committing to robust identity and access management, continuous monitoring, and automated policy enforcement, supported by the right security tools. Following these best practices goes a long way toward making an organization's Kubernetes environments more secure and resilient against advanced threats.

Kubernetes is a dynamic and interconnected environment that requires a forward-looking and responsive approach to security. Zero trust not only mitigates current risk but also lays groundwork that scales to meet future challenges. As Kubernetes continues to grow as the underlying platform for modern application deployment, integrating Zero Trust Security will allow organizations to safely harness its full promise for innovation and business continuity.

Adopting zero trust is not just a technical evolution but a cultural change, embracing a security-first mentality across development and operations teams. With continuous verification, minimal access privileges, and automated security controls, DevOps teams can make their Kubernetes environments secure, reliable, and efficient, setting the organization up for success.
Sluggish build times and bloated node_modules folders are issues that many developers encounter but often overlook. Why does this happen? The answer lies in the intricate web of npm dependencies. With every npm install, your project inherits not only the packages you need but also their dependencies, leading to exponential growth in your codebase. The result can be a slower, less effective daily workflow and a larger surface for security vulnerabilities. In this piece, we'll examine practical methods for auditing and refining your npm packages. By the end, you'll have a clearer understanding of how to keep your project efficient and secure.

Understand Dependencies vs. DevDependencies

In the context of a project, "dependencies" refer to the third-party libraries and other utilities necessary for your project to run in a production or testing environment. Listing each library in the right section is important for optimizing your production build. Let's dig deeper into dependencies and devDependencies.

Dependencies: The Core of Your Project

Dependencies are the packages or external libraries that your project relies on to operate successfully in a production environment. When you install a package or library as a dependency, you are saying that your project requires it to function effectively, not only during development but also when it is deployed and used by others. For instance, if you're working on a React project, you would include React and ReactDOM as core dependencies because they are essential for your React components to render in the browser. Every library you list in the dependencies section will be included in the production build, so these entries affect everything from the size of your application to its performance and security.

JavaScript
"dependencies": {
  "react": "^18.3.1",
  "react-dom": "^18.3.1"
},

DevDependencies: Tools for Development, But Not for Production

Unlike regular dependencies, which are required to run your application in production, devDependencies are used for tasks like testing, code analysis, local development, and building. They are not included when your project is deployed to a production environment. The development workflow often involves compiling source code, running tests, and linting to ensure code quality. For example, if you're developing a React application, you might use Babel to transpile JSX into browser-readable JavaScript, and you will often use Jest or Storybook to build unit tests around your code. These tools are vital for your development process but unnecessary once your application is in production.

JavaScript
"devDependencies": {
  "storybook": "^8.3.5",
  "eslint-plugin-storybook": "^0.9.0",
  "webpack": "^5.94.0"
}

Reduce Production Load by Isolating DevDependencies

By isolating devDependencies, you can reduce the load on your production environment. When you install packages for production using the npm install --production command, npm will not install packages listed under devDependencies. This results in a smaller footprint for your application, which can lead to faster deployment times and reduced bandwidth usage, both of which are important in a production setting.
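For example, a build server might produce a production-only install like this; on recent npm versions the same behavior is spelled with the --omit flag (a minor assumption about your npm version):

Shell
# installs only "dependencies"; everything under "devDependencies" is skipped
npm install --production

# equivalent on newer npm releases
npm install --omit=dev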
Optimizing Third-Party Library Inclusions

One of the simplest ways to reduce build times is to optimize third-party libraries and include only what you need. Prefer smaller Node modules over larger ones where possible. For instance, in a recent project, I used date-fns to format dates in a specific style. While the library offers over 200 date-related utility functions and includes around 5,000 files totaling 22MB, I avoided importing the entire library. Instead, I included only the specific module required for my date formatting utility, significantly reducing unnecessary bloat.

Similarly, lodash is a widely used utility library offering hundreds of functions. In one project, I needed lodash's compact function. Rather than importing the entire library, I imported just that module, like this: import compact from 'lodash/compact'. This approach helped streamline the build and keep the dependency footprint minimal. As projects grow, keeping a close eye on third-party dependencies and their impact on the build is essential. Whenever feasible, favor native JavaScript operations over third-party libraries to further minimize overhead and maintain a leaner codebase.

Tree Shaking

Tree shaking is an aptly named technique that works much like shaking a tree to remove dead leaves: it identifies and eliminates unused code by leveraging the static structure of ES2015 module syntax. Enabling tree shaking is straightforward if you use Webpack, as it is activated by setting the build mode to 'production.' Before deploying, however, it is worth verifying the results in the development environment. To do this, set the mode to 'development' and enable optimization.usedExports by setting it to true in your Webpack configuration. When you run the build, Webpack will annotate the output with comments highlighting unused code (e.g., "/* unused harmony export square */"). Once you have reviewed the unused exports, switch the mode back to 'production' in your Webpack configuration. This enables usedExports: true and minimize: true, effectively marking and removing unused code from the production bundle. Structuring your code into modular units with clear export and import statements allows bundlers like Webpack to analyze the dependency graph efficiently and exclude unnecessary exports, resulting in a leaner and faster build.
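A minimal sketch of that development-time check could look like this (file name and structure assume a standard Webpack 5 setup):

JavaScript
// webpack.config.js
module.exports = {
  mode: 'development',
  optimization: {
    // marks unused exports with comments in the emitted bundle instead of removing them
    usedExports: true,
  },
};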
Removing Unused Dependencies

Unused dependencies are the equivalent of unused code, and they present another easy opportunity to reduce the npm build size in larger projects. As a project evolves, many libraries become obsolete or vulnerable, and unless they are flagged by a security tool or the build starts failing, these dependencies often go unnoticed. As the saying goes, "If it ain't broke, don't fix it." Depcheck is a great tool for analyzing the dependencies in a Node project and finding unused ones. It is simple to run once you have npm installed, using npx, the package runner bundled with npm: npx depcheck. ESLint is another commonly used linter that comes with plugins and rules for catching unused imports. Both @typescript-eslint and eslint-plugin-unused-imports will highlight unused variables in your codebase. ESLint can be integrated into your CI/CD pipeline to ensure that every new commit is checked. This reduces the long-term overhead of managing unused dependencies and helps maintain a clean and efficient project structure.

Minification and Compression

Minification and compression are key techniques for reducing the size of your build, and Webpack offers excellent tools for both. Together, they can shrink the size of text-based assets by up to 70%. Minification removes unnecessary characters from code files without altering their functionality. In Webpack's production mode, minification is applied automatically using the TerserPlugin for JavaScript. For CSS, you can use plugins like css-minimizer-webpack-plugin to achieve similar results. These practices can drastically reduce bundle sizes, especially when combined with thoughtful coding strategies. I have included a separate section later in this article to highlight coding practices that help you further reduce your minified build size.
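As a sketch of that setup, the configuration below keeps Webpack 5's default JavaScript minimizer and adds CSS minification; it assumes css-minimizer-webpack-plugin has been installed as a devDependency.

JavaScript
// webpack.config.js
const CssMinimizerPlugin = require('css-minimizer-webpack-plugin');

module.exports = {
  mode: 'production',
  optimization: {
    minimize: true,
    minimizer: [
      '...', // keep the defaults (TerserPlugin for JavaScript)
      new CssMinimizerPlugin(),
    ],
  },
};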
Analyzing Module Size With Cost-of-Modules

Understanding the size impact of your installed modules is critical to keeping your project efficient. The CLI tool cost-of-modules provides a straightforward way to analyze the size of the libraries listed in your package.json. By running this tool within your project, you can identify large dependencies that might be contributing unnecessary bloat. To use cost-of-modules, simply run the following command: npx cost-of-modules. This generates a report that breaks down the size of each module, helping you pinpoint oversized packages. With this information, you can make informed decisions about replacing heavy dependencies with lighter alternatives or refactoring parts of your code.

Coding Practices

The following coding practices can significantly impact the size of your minified npm build. Incorporating them into your daily workflow helps ensure a lean and efficient codebase.

Remove Any Code Repetition

This is a general practice that applies to any programming language. If you notice repeated functionality, extract it into a reusable function. This not only reduces the overall size of your code but also improves readability and maintainability.

Example Before Optimization

JavaScript
export const greetUser1 = () => { console.log("Hello, Jack!"); }
export const greetUser2 = () => { console.log("Hello, Jill!"); }
export const greetUser3 = () => { console.log("Hello, Bob!"); }

greetUser1();
greetUser2();
greetUser3();

Minified (Output: 256 bytes):

JavaScript
export const greetUser1=()=>{console.log("Hello, Jack!")};export const greetUser2=()=>{console.log("Hello, Jill!")};export const greetUser3=()=>{console.log("Hello, Bob!")};console.log("Hello, Jack!"),console.log("Hello, Jill!"),console.log("Hello, Bob!");

Example After Optimization

JavaScript
export const greetUser = (name) => { console.log("Hello, " + name + "!"); }

greetUser("Jack");
greetUser("Jill");
greetUser("Bob");

Minified (Output: 110 bytes):

JavaScript
export const greetUser=e=>{console.log("Hello, "+e+"!")};greetUser("Jack"),greetUser("Jill"),greetUser("Bob");

Optimize Object Properties

When you frequently access the same nested object field, destructure it or assign it to a local variable. This reduces redundancy and helps minifiers compress the code more effectively.

Example Before Optimization

JavaScript
import { Acme } from 'acme';

export const func = () => {
  const obj = new Acme();
  console.log(obj.subObject.field1);
  console.log(obj.subObject.field2);
  console.log(obj.subObject.field3);
};

Minified (Output: 160 bytes):

JavaScript
import{Acme}from"acme";export const func=()=>{const e=new Acme;console.log(e.subObject.field1),console.log(e.subObject.field2),console.log(e.subObject.field3)};

Example After Optimization

JavaScript
import { Acme } from 'acme-lib';

export const func = () => {
  const obj = new Acme();
  const subObj = obj.subObject;
  console.log(subObj.field1);
  console.log(subObj.field2);
  console.log(subObj.field3);
};

Minified (Output: 146 bytes):

JavaScript
import{Acme}from"acme-lib";export const func=()=>{const o=(new Acme).subObject;console.log(o.field1),console.log(o.field2),console.log(o.field3)};

Leverage Arrow Functions With ES6

Arrow functions allow for concise syntax and can help reduce the overall code size when minified. When several arrow functions are declared in a row via const or let, the minifier can merge the declarations so the keyword is emitted only once after the first. Arrow functions can also return values without the return keyword.

Example Before Optimization

JavaScript
export function function1() {
  return 1;
}

export function function2() {
  console.log(2);
  return 2;
}

Minified (Output: 89 bytes):

JavaScript
export function function1(){return 1}export function function2(){return console.log(2),2}

Example After Optimization

JavaScript
export const function1 = () => 1;

export const function2 = () => {
  console.log(2);
  return 2;
}

Minified (Output: 75 bytes):

JavaScript
export const function1=()=>1;export const function2=()=>(console.log(2),2);

Avoid Creating Unnecessary Variables in Functions

Although the minifier can inline code, reducing the number of intermediate variables is still a reasonable optimization.

Example Before Optimization

JavaScript
export const SomeFunction = (x, y) => {
  const z = x + y;
  console.log(z);
  return z;
};

Minified (Output: 71 bytes):

JavaScript
export const SomeFunction=(o,n)=>{const t=o+n;return console.log(t),t};

Example With Optimization

JavaScript
export const SomeFunction = (x, y, z = x + y) => {
  console.log(z);
  return z;
};

Minified (Output: 58 bytes):

JavaScript
export const SomeFunction=(o,n,c=o+n)=>(console.log(c),c);

Conclusion

Efficiently managing npm dependencies is essential for keeping your projects maintainable, secure, and performant. By adopting the best practices above, you can significantly optimize your build process. Tools like cost-of-modules and depcheck, paired with thoughtful coding practices, provide actionable ways to reduce bloat and improve code quality. The strategies outlined in this guide, whether minimizing unused dependencies, optimizing object properties, or leveraging modern JavaScript features, contribute to a leaner codebase and faster build times. As you integrate these techniques into your workflow, you will not only enhance your project's efficiency but also create a scalable foundation for future development. The journey toward streamlined npm builds begins with consistent and mindful optimization.
Over the past one and a half years, I was involved in designing and developing a multi-tenant treasury management system. In this article, I will share our approach to the data isolation aspect of our multi-tenant solution and the lessons learned from it.

Background and Problem Regarding Data Isolation

Before going into the problem I will focus on today, I must first give some background on our architecture for storage and data. When it comes to data partitioning for SaaS systems, at one end of the spectrum we have the approach of using a dedicated database for each tenant (the silo model), and at the other end is the shared database model (the pool model).

Shared database model (pool model)
Dedicated database model (silo model)

For obvious reasons, such as reduced management overhead and lower cost, it was decided that our solution would use the shared database model, and with that comes the drawback of weaker inherent data isolation. In most cases with the shared database model, data isolation depends on developers writing the correct WHERE clauses in every SQL statement, which is, of course, error-prone. We wanted an approach that enforces data isolation from a separate layer, and that is where the concept of Row Level Security (RLS) comes in.

Kotlin
interface TransactionRepository : JpaRepository<Transaction, Long> {
    // trivial approach
    fun findAllByTenantId(): List<Transaction>

    // what we want to achieve
    override fun findAll(): List<Transaction>
}

Solution: Row Level Security (RLS)

In general terms, Row Level Security (in an RDBMS) refers to mechanisms that control access to rows in a database table based on some context (for example, a tenant_id). The feature has been available in PostgreSQL for years (since version 9.5), and at the time of writing it is also available in SQL Server, Amazon Aurora for PostgreSQL, and RDS for PostgreSQL.

SQL
ALTER TABLE gtms_payment.TRANSACTION ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation_policy ON gtms_payment.TRANSACTION
    USING (business_group_code = current_setting('app.current_tenant')::VARCHAR);

Other Solutions: Spring PostFilter, Hibernate Filters With Spring Aspects

To give more detail from my analysis of the possible solutions, let me briefly cover the other approaches as well. One option was Spring's PostFilter, but that would mean all data (across multiple tenants) is returned from the database to the backend before being filtered, which was not ideal for our use case. Another option (specific to our backend tech stack of Spring, Hibernate, and Java/Kotlin) was to use Hibernate filters together with a Spring aspect. This option was quite good, but not as strong as having the enforcement done at the DB layer. (We did use this approach to solve a similar use case, and I hope to cover it in a later blog.)

Going Ahead With RLS

Our standard database throughout the organization was PostgreSQL (for use cases like ours), which helped in our decision to go ahead with RLS. If you are trying to solve a similar problem with RLS, do take note that not all relational databases support Row Level Security yet, so you may be locking yourself into the subset of databases that do. However, switching to another relational DB that supports RLS should not be too difficult, as there is little dependency on the DB type at the code level when using RLS.
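Before wiring this into the application, it helps to see the policy's effect directly in psql. A quick sketch, assuming you connect as a non-owner application role (the tenant values shown are illustrative, not real codes):

SQL
-- scope the session to one tenant
SET app.current_tenant = 'TENANT_A';

-- the policy filters rows transparently; only TENANT_A rows come back
SELECT * FROM gtms_payment.transaction;

-- switching the setting changes what the same query returns
SET app.current_tenant = 'TENANT_B';
SELECT * FROM gtms_payment.transaction;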
Implementation

Going into the implementation details: once RLS is enabled on the DB, we need to set the parameter the policy reads (in our case, app.current_tenant) on each connection so the DB can apply the policy to every query. In our case, we do that in the DataSource implementation, as follows. In our flow, we keep the tenant information in the Spring Security context (again, something I hope to cover in a separate blog), and the getTenantId() call abstracted out here retrieves the tenant id from the Spring Security context.

Kotlin
class TenantAwareHikariDataSource : HikariDataSource() {

    override fun getConnection(): Connection? {
        val connection = super.getConnection()
        return createConnection(connection)
    }

    override fun getConnection(username: String, password: String): Connection? {
        val connection = super.getConnection(username, password)
        return createConnection(connection)
    }

    private fun createConnection(connection: Connection): Connection? {
        connection.createStatement().use { sql ->
            sql.execute("SET app.current_tenant = '${getTenantId()}'")
        }
        return connection
    }
}

Note that if the parameter required by the RLS policy is not set, we will run into errors such as:

Plain Text
[2024-08-25 22:53:50] [42704] ERROR: unrecognized configuration parameter "app.current_tenant"

Supporting Use Cases That Do Not Require a Tenant Id

In any multi-tenant system, there will always be some use cases, such as tenant onboarding, where you have to run DB queries without a tenant id. So, can we support such use cases? The answer is yes! In PostgreSQL, the table owner (and any role with BYPASSRLS) bypasses RLS policies by default. Hence, in our solution we maintain two sets of DB configs (a tenant DB config and a non-tenant DB config) and two sets of Spring repositories tied to each config, with the tenant config as the primary one. In the few use cases that need to run queries without a tenant id, we use the non-tenant configs. We also use the PostgreSQL schema owner user for the database change management tool (in our case, Liquibase).

Pros and Cons of the RLS Approach to Sum Things Up

Pros

RLS provides data isolation at the DB layer, so it gives stronger isolation than the common WHERE-clause approach.
Most of the data isolation logic (the tenant-aware data source, the tenant and non-tenant DB configs, etc.) can live in a separate library and be shared across different microservices, so developers can worry less about tenant data isolation while developing application-level business logic.

Cons

Since not all relational databases support RLS yet, there is a degree of DB lock-in, as explained in the previous sections.

I hope you all have fun with RLS! :-)

References

Multi-tenant data isolation with PostgreSQL Row Level Security
Architectural approaches for storage and data in multitenant solutions
Let’s face it: frontend security often gets overlooked. With so much focus on UI/UX and performance, it’s easy to assume that back-end APIs and firewalls are taking care of all the heavy lifting. But the reality is that your beautiful React or Vue app could be a ticking time bomb if you’re not paying attention to security. Having spent years building front-end applications and learning (sometimes the hard way), I’ve picked up a few essential practices that every developer should follow to keep their apps secure. Here are some practical, battle-tested tips to secure your frontend and sleep better at night.

1. Sanitize User Inputs (No, Seriously)

Let’s start with an oldie but goodie. User input is one of the most common attack vectors, and it’s your responsibility to sanitize everything coming into your app. Whether it’s a form submission or a query parameter, assume it’s malicious until proven otherwise.

What to Do

Use libraries like DOMPurify to sanitize inputs before rendering them in the DOM.
Validate inputs on both the frontend and backend. Think of front-end validation as a convenience for users and back-end validation as your safety net.

Example

JavaScript
import DOMPurify from 'dompurify';

const sanitizedInput = DOMPurify.sanitize(userInput);
document.getElementById('output').innerHTML = sanitizedInput;

2. Use Content Security Policy (CSP)

A well-configured Content Security Policy can be your best friend against Cross-Site Scripting (XSS) attacks. CSP allows you to specify which sources are trusted for loading scripts, styles, and other resources.

What to Do

Configure your CSP headers in your server or CDN.
Use tools like CSP Evaluator to test your policy.

Example CSP Header

Pro tip: Start with report-only mode to see what’s breaking before enforcing it.
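As a rough illustration (the CDN host and reporting endpoint below are placeholders, not values from a real deployment), a starter policy and its report-only variant might look like this:

Plain Text
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; style-src 'self'; object-src 'none'; base-uri 'self'

Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-violation-endpoint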
3. Secure Your API Calls

Your frontend likely interacts with a bunch of APIs. Securing these calls is crucial, especially if sensitive data is being exchanged.

What to Do

Always use HTTPS. It’s non-negotiable.
Store API keys securely (hint: not in your frontend code).
Implement token-based authentication (e.g., JWTs) and refresh tokens securely.

Example

JavaScript
fetch('https://api.example.com/data', {
  headers: {
    Authorization: `Bearer ${accessToken}`,
  },
});

Never hardcode sensitive tokens in your frontend. Use environment variables and keep them safe.

4. Avoid Storing Sensitive Data in the Frontend

Your frontend is not a secure place to store sensitive data. Anything in localStorage, sessionStorage, or cookies can potentially be accessed or tampered with by malicious actors.

What to Do

Use cookies with the HttpOnly and Secure flags for storing authentication tokens.
Minimize what you store on the client side.

Example Cookie Config

Plain Text
Set-Cookie: token=abc123; HttpOnly; Secure; SameSite=Strict;

5. Keep Your Dependencies Up to Date

Old dependencies can introduce vulnerabilities to your app. Attackers often exploit known vulnerabilities in outdated libraries.

What to Do

Regularly audit your dependencies with tools like npm audit or Snyk.
Lock your dependencies to avoid unexpected changes.

Example Audit Command

Shell
npm audit fix

6. Don’t Trust the Frontend (Yes, Your Own Code)

Even if you’re confident in your code, don’t trust it entirely. Assume attackers can bypass any validation you’ve added.

What to Do

Perform all critical validations and authorization checks on the backend.
Limit what data your APIs expose to the frontend.

Example

JavaScript
// Backend check
if (user.role !== 'admin') {
  throw new Error('Unauthorized');
}

7. Protect Against Clickjacking

Clickjacking tricks users into clicking something they didn’t intend to. Imagine someone loading your app in an invisible iframe and hijacking your users’ clicks.

What to Do

Add X-Frame-Options headers or Content-Security-Policy frame directives to prevent your app from being embedded in iframes.

Example Header

Plain Text
X-Frame-Options: DENY

8. Test for Vulnerabilities Regularly

Even with best practices, security is an ongoing effort. Regular testing is crucial to catch vulnerabilities before attackers do.

What to Do

Use tools like OWASP ZAP for security testing.
Perform penetration tests on critical applications.

Conclusion

Securing your frontend doesn’t have to be overwhelming. By following these practices, you’re already ahead of the curve. Remember, security isn’t a one-time task; it’s a continuous process. So, keep learning, keep testing, and keep your users safe.
Configuration files control how applications, systems, and security policies work, making them crucial for keeping systems reliable and secure. If these files are changed accidentally or without permission, the result can be system failures, security risks, or compliance issues. Manually checking configuration files takes a lot of time, is prone to mistakes, and isn’t reliable, especially in complex IT systems. Event-driven Ansible offers a way to automatically monitor and manage configuration files: it reacts to changes as they happen, detects them quickly, takes automated actions, and works seamlessly with the tools and systems you already use. In this article, I will demonstrate how to use Ansible to monitor the Nginx configuration file and trigger specific actions when the file is modified. In the example below, I use the Ansible debug module to print a message to the host; however, this setup can be integrated with various Ansible modules depending on the organization's requirements.

About the Module

The ansible.eda.file_watch module is part of event-driven Ansible and is used to monitor changes in specified files or directories. It can detect events such as file creation, modification, or deletion and trigger automated workflows based on predefined rules. This module is particularly useful for tasks like configuration file monitoring and ensuring real-time responses to critical file changes.

Step 1

To install Nginx on macOS using Homebrew, run the command brew install nginx, which will automatically download and install Nginx along with its dependencies. By default, Homebrew places Nginx in the directory /usr/local/Cellar/nginx/ and configures it for use on macOS systems. After installation, edit the configuration file at /usr/local/etc/nginx/nginx.conf to set the listen directive to listen 8080;, then start the Nginx service with brew services start nginx. To confirm that Nginx is running, execute the command curl http://localhost:8080/ in the terminal. If Nginx is properly configured, you will receive an HTTP response indicating that it is successfully serving content on port 8080.

Step 2

In the example below, the configwatch.yml rulebook is used to monitor the Nginx configuration file at /usr/local/etc/nginx/nginx.conf. It continuously observes the file for any changes. When a modification is detected, the rule triggers an event that executes the print-console-message.yml playbook.

YAML
---
- name: Check if the nginx config file is modified
  hosts: localhost
  sources:
    - name: file_watch
      ansible.eda.file_watch:
        path: /usr/local/etc/nginx/nginx.conf
        recursive: true
  rules:
    - name: Run the action if the /usr/local/etc/nginx/nginx.conf is modified
      condition: event.change == "modified"
      action:
        run_playbook:
          name: print-console-message.yml

This second playbook performs a single task that prints a debug message to the console. Together, the rulebook and playbook provide automated monitoring and instant feedback whenever the configuration file is altered.

YAML
---
- name: Playbook for printing the message in console
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Error message in the console
      debug:
        msg: "Server config altered"
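Instead of only printing a message, the same rule could point at a remediation playbook. The sketch below restores the file from a known-good backup and restarts Nginx; the backup path and the Homebrew restart command are assumptions for illustration, not part of the original setup.

YAML
---
- name: Restore the nginx config from a known-good backup
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Copy the backup over the modified file
      ansible.builtin.copy:
        src: /usr/local/etc/nginx/nginx.conf.bak   # assumed backup location
        dest: /usr/local/etc/nginx/nginx.conf

    - name: Restart nginx so the restored config takes effect
      ansible.builtin.command: brew services restart nginx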
Demo

To monitor the Nginx configuration file for changes, execute the command ansible-rulebook -i localhost -r configwatch.yml, where -i localhost specifies the inventory as the local system and -r configwatch.yml points to the rulebook file that defines the monitoring rules and actions. This command starts the monitoring process, enabling Ansible to continuously watch the specified Nginx configuration file for any modifications. When changes are detected, the rules in the configwatch.yml file trigger the action to run the print-console-message.yml playbook.

Check the last modified time of /usr/local/etc/nginx/nginx.conf by running the ls command. Then use the touch command to update the last modified timestamp, followed by ls again to confirm the change. In the output of the ansible-rulebook -i localhost -r configwatch.yml command, you can see that the timestamp modification was detected and the corresponding action was triggered.

Benefits of Event-Driven Ansible for Configuration Monitoring

Event-driven Ansible simplifies configuration monitoring by instantly detecting changes and responding immediately. Organizations can extend the functionality to automatically fix issues without manual intervention, enhancing security by preventing unauthorized modifications. It also supports compliance by maintaining records and adhering to regulations, while efficiently managing large and complex environments.

Use Cases

The event-driven Ansible file_watch module can serve as a security compliance tool by monitoring critical configuration files, such as SSH or firewall settings, to ensure they align with organizational policies. It can also act as a disaster recovery solution, automatically restoring corrupted or deleted configuration files from predefined backups. Additionally, it can be used as a multi-environment management tool, ensuring consistency across deployments by synchronizing configurations.

Conclusion

Event-driven Ansible is a reliable and flexible tool for monitoring configuration files in real time. It automatically detects changes and responds to them, helping organizations keep systems secure and compliant. As systems become more complex, it offers a modern and easy-to-adapt way to manage configurations effectively.

Note: The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.
In the software development lifecycle (SDLC), testing is one of the important stages where we ensure that the application works as expected and meets end-user requirements. Among the various techniques we use for testing, mocking plays a crucial role in testing different components of a system, especially when the external services that the application depends on are not yet ready or deployed. With that said, let’s try to understand what mocking is and how it helps in integration testing and end-to-end (E2E) testing.

What Is Mocking?

Mocking is the process of simulating the behavior of real objects or services that an application interacts with. In other words, when you mock something, you are creating a fake version of the real-world entity that behaves like the real thing, but in a controlled way. For example, imagine you are building an e-commerce application. The application might depend on a payment gateway to process payments. However, during testing, it might not be feasible to use the actual payment gateway service due to factors like cost, service unavailability, or not being able to control the response. Here comes the concept of mocking, which we can use to test our application in a controllable way. Mocks can replace dependencies (APIs, databases, etc.) so we can test our application in isolation.

The Importance of Mocking

Faster tests: When we interact with external services, tests tend to be flaky or long-running because the external service is either unavailable or slow to respond. Mocks, by contrast, are fast and reliable, which helps tests execute quickly.
Ability to test edge cases: When we use mocks, we have complete control over the response a service returns. This is helpful when we want to test edge cases like exception scenarios, timeouts, and errors.
Isolation: With mocking, we can test specific functionality in isolation. For instance, if the application relies on a database, we can mock the database response when setting up specific test data is a challenge.
Eliminate dependencies: If the application depends on many external services that can make our tests unreliable and flaky, we can use mocks to make the tests reliable.

How to Mock an API?

Now, let’s look at an example of how to mock an API call. For illustration purposes, we will use Java, Maven, JUnit 4, and WireMock.

1. Add WireMock as a dependency to your project:

XML
<dependency>
    <groupId>org.wiremock</groupId>
    <artifactId>wiremock</artifactId>
    <version>3.10.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <version>3.26.3</version>
    <scope>test</scope>
</dependency>

2. Add the WireMock static imports:

Java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

3. Set up the WireMock rule:

Java
@Rule
public WireMockRule wireMockRule = new WireMockRule(8089); // binds the WireMock server to port 8089 for this test

4. Mock an API response:

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
...
@Test
public void exampleTest() throws Exception {
    // Set up the WireMock stub mapping for the test
    stubFor(post("/my/resource")
        .withHeader("Content-Type", containing("xml"))
        .willReturn(ok()
            .withHeader("Content-Type", "text/xml")
            .withBody("<response>SUCCESS</response>")));

    // Set up the HTTP POST request (with the HTTP client built into Java 11+)
    final HttpClient client = HttpClient.newBuilder().build();
    final HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(wireMockRule.url("/my/resource")))
        .header("Content-Type", "text/xml")
        .POST(HttpRequest.BodyPublishers.noBody())
        .build();

    // Send the request and receive the response
    final HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

    // Verify the response (with AssertJ)
    assertThat(response.statusCode()).as("Wrong response status code").isEqualTo(200);
    assertThat(response.body()).as("Wrong response body").contains("<response>SUCCESS</response>");
}

Best Practices

Use mocks only when required: Mocks help isolate external services and test the application in a controlled way. However, overusing mocks can let bugs slip into production if the application is never tested against real services in staging environments.
Mock external services only: Only external services should be mocked, never the business logic itself.
Always update mocks to match the latest service contracts: Whenever the real service contract or response changes, make sure the mock is updated accordingly; otherwise, we end up testing against an inaccurate contract.

Conclusion

Mocking comes in very handy for integration and end-to-end testing. In particular, under tight deadlines, when external service code changes are not yet ready for testing in the staging environments, mocking helps teams start testing early and discover potential bugs in the application. However, we always need to ensure that the application is tested against the real services before deploying to production.
LLMs need to connect to the real world. LangChain4j tools, combined with Apache Camel, make this easy. Camel provides robust integration, connecting your LLM to any service or API. This lets your AI interact with databases, queues, and more, creating truly powerful applications. We'll explore this powerful combination and its potential.

Setting Up the Development Environment

Ollama: Provides a way to run large language models (LLMs) locally. You can run many models, such as Llama 3, Mistral, CodeLlama, and many others on your machine, with full CPU and GPU support.
Visual Studio Code: With the Kaoto, Java, and Quarkus plugins installed.
OpenJDK 21
Maven
Quarkus 3.17
Quarkus Dev Services: A feature of Quarkus that simplifies the development and testing of applications that rely on external services such as databases, messaging systems, and other resources.

You can download the complete code at the following GitHub repo. The following instructions will be executed in Visual Studio Code.

1. Creating the Quarkus Project

Shell
mvn io.quarkus:quarkus-maven-plugin:3.17.6:create \
    -DprojectGroupId=dev.mikeintoch \
    -DprojectArtifactId=camel-agent-tools \
    -Dextensions="camel-quarkus-core,camel-quarkus-langchain4j-chat,camel-quarkus-langchain4j-tools,camel-quarkus-platform-http,camel-quarkus-yaml-dsl"

2. Adding langchain4j Quarkus Extensions

Shell
./mvnw quarkus:add-extension -Dextensions="io.quarkiverse.langchain4j:quarkus-langchain4j-core:0.22.0"
./mvnw quarkus:add-extension -Dextensions="io.quarkiverse.langchain4j:quarkus-langchain4j-ollama:0.22.0"

3. Configure Ollama to Run the Local LLM

Open the application.properties file and add the following lines:

Properties files
#Configure Ollama local model
quarkus.langchain4j.ollama.chat-model.model-id=qwen2.5:0.5b
quarkus.langchain4j.ollama.chat-model.temperature=0.0
quarkus.langchain4j.ollama.log-requests=true
quarkus.langchain4j.log-responses=true
quarkus.langchain4j.ollama.timeout=180s

Quarkus uses Ollama to run the LLM locally and also auto-wires the configuration for use by the Apache Camel components in the following steps.

4. Creating the Apache Camel Route Using Kaoto

Create a new folder named routes in the src/main/resources folder. Create a new file in the src/main/resources/routes folder, name it route-main.camel.yaml, and Visual Studio Code opens the Kaoto visual editor. Click on the +New button, and a new route will be created. Click on the circular arrows to replace the timer component. Search for and select the platform-http component from the catalog. Configure the required platform-http properties:

Set path with the value /camel/chat

By default, platform-http will serve on port 8080. Click on the Add Step icon in the arrow after the platform-http component. Search for and select the langchain4j-tools component in the catalog. Configure the required langchain4j-tools properties:

Set Tool Id with value my-tools.
Set Tags with store (defining tags is for grouping the tools to use with the LLM).

You must process the user input message into a form the langchain4j-tools component is able to use, so click on the Add Step icon in the arrow after the platform-http component. Search for and select the Process component in the catalog. Configure the required properties:

Set Ref with the value createChatMessage.

The Process component will use the createChatMessage method you will create in the following step.

5. Create a Process to Send User Input to the LLM

Create a new Java class in the src/main/java folder named Bindings.java.
Java import java.util.ArrayList; import java.util.List; import java.util.Map; import java.util.HashMap; import org.apache.camel.BindToRegistry; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.builder.RouteBuilder; import dev.langchain4j.data.message.ChatMessage; import dev.langchain4j.data.message.SystemMessage; import dev.langchain4j.data.message.UserMessage; public class Bindings extends RouteBuilder{ @Override public void configure() throws Exception { // Routes are loading in yaml files. } @BindToRegistry(lazy=true) public static Processor createChatMessage(){ return new Processor() { public void process(Exchange exchange) throws Exception{ String payload = exchange.getMessage().getBody(String.class); List<ChatMessage> messages = new ArrayList<>(); String systemMessage = """ You are an intelligent store assistant. Users will ask you questions about store product. Your task is to provide accurate and concise answers. In the store have shirts, dresses, pants, shoes with no specific category %s If you are unable to access the tools to answer the user's query, Tell the user that the requested information is not available at this time and that they can try again later. """; String tools = """ You have access to a collection of tools You can use multiple tools at the same time Complete your answer using data obtained from the tools """; messages.add(new SystemMessage(systemMessage.formatted(tools))); messages.add(new UserMessage(payload)); exchange.getIn().setBody(messages); } }; } } This class helps create a Camel Processor to transform the user input into an object that can handle the langchain4j component in the route. It also gives the LLM context for using tools and explains the Agent's task. 6. Creating Apache Camel Tools for Using With LLM Create a new file in the src/main/resources/routes folder and name it route-tool-products.camel.yaml, and in Visual Studio Code, open the Kaoto visual editor. Click on the +New button, and a new route will be created. Click on the circular arrows to replace the timer component. Search and select the langchain4j-tools component in the catalog. Configure langchain4j-tools, click on the All tab and search Endpoint properties. Set Tool Id with value productsbycategoryandcolor.Set Tags with store (The same as in the main route).Set Description with value Query database products by category and color (a brief description of the tool). Add parameters that will be used by the tool: NAME: category, VALUE: stringNAME: color, VALUE: string These parameters will be assigned by the LLM for use in the tool and are passed via header. Add SQL Component to query database, then click on Add Step after the langchain4j-tools component. Search and select SQL component. Configure required SQL properties: Query with the following value. SQL Select name, description, category, size, color, price, stock from products where Lower(category)= Lower (:#category) and Lower(color) = Lower(:#color) Handle parameters to use in the query, then add a Convert Header component to convert parameters to a correct object type. Click on the Add Step button after langchain4j-tools, search, and select Convert Header To transformation in the catalog. 
Configure the required properties for the component:

Name with the value category
Type with the value String

Repeat the steps with the following values:

Name with the value color
Type with the value String

As a result, this is how the route looks.

Finally, you need to transform the query result into an object that the LLM can handle; in this example, you transform it into JSON. Click the Add Step button after the SQL component, and add the Marshal component. Configure the data format properties for the Marshal component and select JSON from the list.

7. Configure Quarkus Dev Services for PostgreSQL

Add the Quarkus extension that provides PostgreSQL for dev purposes by running the following command in the terminal:

Shell
./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-jdbc-postgresql"

Open application.properties and add the following lines:

Properties files
#Configuring devservices for Postgresql
quarkus.datasource.db-kind=postgresql
quarkus.datasource.devservices.port=5432
quarkus.datasource.devservices.init-script-path=db/schema-init.sql
quarkus.datasource.devservices.db-name=store

Finally, create the SQL script to load the database. Create a folder named db in src/main/resources, and in this folder, create a file named schema-init.sql with the following content.

SQL
DROP TABLE IF EXISTS products;

CREATE TABLE IF NOT EXISTS products (
    id SERIAL NOT NULL,
    name VARCHAR(100) NOT NULL,
    description varchar(150),
    category VARCHAR(50),
    size VARCHAR(20),
    color VARCHAR(20),
    price DECIMAL(10,2) NOT NULL,
    stock INT NOT NULL,
    CONSTRAINT products_pk PRIMARY KEY (id)
);

INSERT INTO products (name, description, category, size, color, price, stock) VALUES
('Blue shirt', 'Cotton shirt, short-sleeved', 'Shirts', 'M', 'Blue', 29.99, 10),
('Black pants', 'Jeans, high waisted', 'Pants', '32', 'Black', 49.99, 5),
('White Sneakers', 'Sneakers', 'Shoes', '40', 'White', 69.99, 8),
('Floral Dress', 'Summer dress, floral print, thin straps.', 'Dress', 'M', 'Pink', 39.99, 12),
('Skinny Jeans', 'Dark denim jeans, high waist, skinny fit.', 'Pants', '28', 'Blue', 44.99, 18),
('White Sneakers', 'Casual sneakers, rubber sole, minimalist design.', 'Shoes', '40', 'White', 59.99, 10),
('Beige Chinos', 'Casual dress pants, straight cut, elastic waist.', 'Pants', '32', 'Beige', 39.99, 15),
('White Dress Shirt', 'Cotton shirt, long sleeves, classic collar.', 'Shirts', 'M', 'White', 29.99, 20),
('Brown Hiking Boots', 'Waterproof boots, rubber sole, perfect for hiking.', 'Shoes', '42', 'Brown', 89.99, 7),
('Distressed Jeans', 'Distressed denim jeans, mid-rise, regular fit.', 'Pants', '30', 'Blue', 49.99, 12);

8. Include Our Routes to Be Loaded by the Quarkus Project

Camel Quarkus supports several domain-specific languages (DSLs) for defining Camel routes. It is also possible to include YAML DSL routes by adding the following line to the application.properties file.

Properties files
# routes to load
camel.main.routes-include-pattern = routes/*.yaml

This will load all routes in the src/main/resources/routes folder.

9. Test the App

Run the application using Maven: open a terminal in Visual Studio Code and run the following command.

Shell
mvn quarkus:dev

Once it has started, Quarkus calls Ollama and runs your LLM locally. Open a terminal and verify with the following command.

Shell
ollama ps
NAME           ID              SIZE      PROCESSOR    UNTIL
qwen2.5:0.5b   a8b0c5157701    1.4 GB    100% GPU     4 minutes from now

Also, Quarkus creates a container running PostgreSQL and creates the database and schema. You can connect using the psql command.
Shell

psql -h localhost -p 5432 -U quarkus -d store

And query the products table:

Shell

store=# select * from products;
 id |        name        |                    description                     | category | size | color | price | stock
----+--------------------+----------------------------------------------------+----------+------+-------+-------+-------
  1 | Blue shirt         | Cotton shirt, short-sleeved                        | Shirts   | M    | Blue  | 29.99 |    10
  2 | Black pants        | Jeans, high waisted                                | Pants    | 32   | Black | 49.99 |     5
  3 | White Sneakers     | Sneakers                                           | Shoes    | 40   | White | 69.99 |     8
  4 | Floral Dress       | Summer dress, floral print, thin straps.           | Dress    | M    | Pink  | 39.99 |    12
  5 | Skinny Jeans       | Dark denim jeans, high waist, skinny fit.          | Pants    | 28   | Blue  | 44.99 |    18
  6 | White Sneakers     | Casual sneakers, rubber sole, minimalist design.   | Shoes    | 40   | White | 59.99 |    10
  7 | Beige Chinos       | Casual dress pants, straight cut, elastic waist.   | Pants    | 32   | Beige | 39.99 |    15
  8 | White Dress Shirt  | Cotton shirt, long sleeves, classic collar.        | Shirts   | M    | White | 29.99 |    20
  9 | Brown Hiking Boots | Waterproof boots, rubber sole, perfect for hiking. | Shoes    | 42   | Brown | 89.99 |     7
 10 | Distressed Jeans   | Distressed denim jeans, mid-rise, regular fit.     | Pants    | 30   | Blue  | 49.99 |    12
(10 rows)

To test the app, send a POST request to localhost:8080/camel/chat with a plain-text body asking about a product, for example, "Do you have blue pants in stock?" The LLM may occasionally hallucinate; if the answer looks off, modify your request slightly and try again.

You can see how the LLM uses the tool and gets information from the database based on the natural-language request you provide: the LLM identifies the parameters and sends them to the tool. If you look at the request log, you can find the tools and parameters the LLM used to create the answer.

Conclusion

You've explored how to leverage the power of LLMs within your integration flows using Apache Camel and the LangChain4j component. We've seen how this combination allows you to seamlessly integrate powerful language models into your existing Camel routes, enabling you to build sophisticated applications that can understand, generate, and interact with human language.
Multi-tenancy has become an important feature for modern enterprise applications that need to serve multiple clients (tenants) from a single application instance. While earlier versions of Hibernate supported multi-tenancy, the implementation required significant manual configuration and custom strategies to handle tenant isolation, which resulted in higher complexity and slower processes, especially for applications with a large number of tenants. Hibernate 6.3.0 addressed these limitations with enhanced multi-tenancy support: better tools for tenant identification and schema resolution, and improved performance for tenant-specific operations. This article discusses how Hibernate 6.3.0 significantly enhanced the traditional multi-tenancy implementation.

Traditional Multi-Tenancy Implementation

Before Hibernate 6.3.0 was released, multi-tenancy required developers to set up tenant strategies manually. For example, developers needed to implement custom logic for schema or database resolution and use the Hibernate-provided CurrentTenantIdentifierResolver interface to identify the current tenant, which was not only error-prone but also added significant operational complexity and performance overhead. Below is an example of how multi-tenancy was traditionally configured:

Java

public class CurrentTenantIdentifierResolverImpl implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        return TenantContext.getCurrentTenant(); // Custom logic for tenant resolution
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}

SessionFactory sessionFactory = new Configuration()
    .setProperty("hibernate.multiTenancy", "SCHEMA")
    .setProperty("hibernate.tenant_identifier_resolver", CurrentTenantIdentifierResolverImpl.class.getName())
    .buildSessionFactory();

Output:

INFO: Resolving tenant identifier
INFO: Current tenant resolved to: tenant_1
INFO: Setting schema for tenant: tenant_1

Improved Multi-Tenancy in Hibernate 6.3.0

Hibernate 6.3.0 added significant improvements that simplify and enhance multi-tenancy management. The framework now offers:

1. Configurable Tenant Strategies

Developers can use built-in strategies or extend them to meet specific application needs. For example, a schema-based multi-tenancy strategy can be implemented without excessive boilerplate code. Example of the new configuration:

Java

@Configuration
public class HibernateConfig {

    @Bean
    public MultiTenantConnectionProvider multiTenantConnectionProvider() {
        return new SchemaBasedMultiTenantConnectionProvider(); // Built-in schema-based provider
    }

    @Bean
    public CurrentTenantIdentifierResolver tenantIdentifierResolver() {
        return new CurrentTenantIdentifierResolverImpl();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder builder) {
        return builder
            .dataSource(dataSource())
            .properties(hibernateProperties())
            .packages("com.example.app")
            .persistenceUnit("default")
            .build();
    }
}

Log output:

INFO: Multi-tenant connection provider initialized
INFO: Tenant resolved: tenant_2
INFO: Schema switched to: tenant_2
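Both snippets above delegate to a TenantContext helper to obtain the current tenant ID, but the article does not show that class. A ThreadLocal-based holder is a common way to implement it; the following minimal sketch is illustrative, and the names are assumptions rather than part of Hibernate's API:

Java

public final class TenantContext {

    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    private TenantContext() {
    }

    // Typically set by a web filter or interceptor once the tenant is known for the request
    public static void setCurrentTenant(String tenantId) {
        CURRENT_TENANT.set(tenantId);
    }

    public static String getCurrentTenant() {
        return CURRENT_TENANT.get();
    }

    // Clear after the request to avoid leaking tenant IDs across pooled threads
    public static void clear() {
        CURRENT_TENANT.remove();
    }
}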
2. Performance Optimization

In earlier versions, switching between tenant schemas could introduce noticeable latency, especially for frequent tenant-specific queries. Hibernate 6.3.0 optimized schema switching at the database connection level, resulting in faster query execution and better performance in multi-tenant environments. Example output:

DEBUG: Connection switched to tenant schema: tenant_3
DEBUG: Query executed in 15ms on schema: tenant_3

3. Improved API Support

Hibernate 6.3.0 introduced new APIs that let developers manage tenant-specific sessions and transactions more effectively. For example, developers can programmatically switch tenants using short API calls:

Java

Session session = sessionFactory.withOptions()
    .tenantIdentifier("tenant_4")
    .openSession();

Transaction transaction = session.beginTransaction();
// Perform tenant-specific operations
transaction.commit();
session.close();

The snippet above makes it easy to handle multi-tenant operations dynamically, as the framework ensures proper schema management behind the scenes.

Conclusion

The improvements in Hibernate 6.3.0 address many of the challenges developers faced with earlier implementations. By simplifying tenant identification and schema resolution, the framework reduces the development effort required for a scalable multi-tenancy setup. Additionally, the performance optimizations ensure that tenant-specific operations such as schema switching and query execution are faster, more reliable, and more efficient.
Human Capital Management (HCM) cloud systems, such as Oracle HCM and Workday, are vital for managing core HR operations. However, migrating to these systems and conducting the necessary testing can be complex. Robotic Process Automation (RPA) provides a practical way to streamline these processes. Organizations can use RPA to accelerate the implementation and operationalization of HCM cloud applications by automating data migration, multi-factor authentication (MFA) handling, post-deployment role assignment, and User Acceptance Testing (UAT). This article offers practical guidance for finance and IT teams on leveraging RPA tools to enhance HCM cloud implementation. By sharing best practices and real-world examples, we aim to present a roadmap for effectively applying RPA across various HCM platforms to overcome common implementation challenges.

Introduction

Through our work with HCM cloud systems, we've witnessed their importance in managing employee data, payroll, recruitment, and compliance. However, transitioning from legacy systems presents challenges such as complex data migration, secure API integrations, and multi-factor authentication (MFA). Additionally, role-based access control (RBAC) adds compliance complexities. Robotic Process Automation (RPA) can automate these processes, reducing manual effort and errors while improving efficiency. This white paper explores how RPA tools, especially UiPath, can address these challenges, showcasing use cases and practical examples to help organizations streamline their HCM cloud implementations.

Role of RPA in HCM Cloud Implementation and Testing

RPA provides a powerful means to streamline repetitive processes, reduce manual effort, and enhance operational efficiency. Below are the areas where RPA plays a key role in HCM cloud implementation and testing.

1. Automating Data Migration and Validation

Migrating employee data from legacy systems to HCM cloud platforms can be overwhelming, especially with thousands of records to transfer. In several migration projects we managed, ensuring accuracy and consistency was critical to avoid payroll or compliance issues. Early on, we realized that manual efforts were prone to errors and delays, which is why we turned to RPA tools like UiPath to streamline these processes.

In one project, we migrated employee data from a legacy payroll system to Oracle HCM. Our bot read records from Excel files, validated missing IDs and job titles, and flagged errors for quick resolution. This automation reduced a two-week manual effort to just a few hours, ensuring an accurate and smooth transition. Without automation, these discrepancies would have caused delays or disrupted payroll, but the bot gave our HR team confidence by logging and isolating issues for easy correction. A simplified sketch of this validation logic follows the lessons below.

Lessons from Experience

- Token refresh for API access: To prevent disruptions, we implemented automatic token refresh logic, ensuring smooth uploads.
- Batch processing for efficiency: In high-volume migrations, batch processing avoided API rate limits and system timeouts.
- Comprehensive error logging: Detailed logs allowed us to pinpoint and resolve issues without needing full reviews.
- Validation at key stages: Bots validated data both before and after migration, ensuring compliance and data integrity.
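The real automation ran as a UiPath workflow against Excel files, but the validation step itself is simple enough to show in plain code. The following Java sketch is illustrative only and assumes a CSV export with id, name, and jobTitle columns; file name and layout are hypothetical.

Java

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class EmployeeRecordValidator {

    public static void main(String[] args) throws Exception {
        // CSV export of the legacy payroll data: id,name,jobTitle
        List<String> rows = Files.readAllLines(Path.of("legacy-employees.csv"));
        List<String> errors = new ArrayList<>();

        for (int i = 1; i < rows.size(); i++) { // skip the header row
            String line = rows.get(i);
            if (line.isBlank()) {
                continue;
            }
            String[] cols = line.split(",", -1);
            String id = cols[0].trim();
            String jobTitle = cols.length > 2 ? cols[2].trim() : "";

            // Flag records that would break payroll after migration
            if (id.isEmpty()) {
                errors.add("Row " + (i + 1) + ": missing employee ID");
            }
            if (jobTitle.isEmpty()) {
                errors.add("Row " + (i + 1) + ": missing job title");
            }
        }

        // The bot logged and isolated these issues for the HR team to correct
        errors.forEach(System.out::println);
        System.out.printf("Validated %d records, found %d issues%n", rows.size() - 1, errors.size());
    }
}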
Seeing firsthand how automation reduced errors, saved time, and gave HR teams peace of mind has been deeply rewarding. These experiences have confirmed my belief that RPA isn't just a tool; it's essential for ensuring seamless, reliable HCM transitions.

2. Handling Multi-Factor Authentication (MFA) and Secure Login

Many cloud platforms require multi-factor authentication (MFA), which disrupts standard login routines for bots. We have addressed this by programmatically enabling RPA bots to handle MFA through integration with SMS- or email-based OTP services. This allows seamless automation of login processes, even with additional security layers.

Example: Automating Login to HCM Cloud With MFA Handling

In one of our projects, we automated the login process for an HCM cloud platform using UiPath, ensuring smooth OTP retrieval and submission. The bot launched the HCM portal, entered the username and password, retrieved the OTP from a connected SMS service, and completed the login process. This approach ensured that critical workflows were executed without manual intervention, even when MFA was enabled.

Best Practices from Experience

- Secure credential management: Stored user credentials in vaults to protect sensitive data.
- Seamless OTP integration: Integrated bots with external OTP services, ensuring secure and real-time code retrieval.
- Validation and error handling: Bots were designed to log each login attempt for easy tracking and troubleshooting.

This method not only ensured secure access but also improved operational efficiency by eliminating the need for manual logins. Our collaborative efforts using RPA have enabled businesses to navigate MFA challenges smoothly, reducing downtime and maintaining continuity in critical processes.

3. Automating Role-Based Access Control (RBAC) Setup

It's essential that users are assigned the correct authorizations in an HCM cloud, with ongoing maintenance of these permissions as individuals transition within the organization. Even with a well-defined scheme in place, it's easy for someone to be shifted into a role that they shouldn't hold. To address this challenge, we have leveraged RPA to automate the assignment of roles, ensuring adherence to least-privilege access models.

Example: Automating Role Assignment Using UiPath

In one of our initiatives, we automated the role assignment process by reading role assignments from an Excel file and executing API calls to update user roles in the HCM cloud. The bot efficiently processed the data and assigned the appropriate roles based on the entries in the spreadsheet. The automation workflow involved reading the role assignments, iterating through each entry, and sending HTTP requests to the HCM cloud API to assign roles. This streamlined approach not only improved accuracy but also minimized the risk of human error in role assignments. A simplified code sketch of this pattern appears at the end of this section.

Best Practices from Experience

- Secure credential management: We utilized RPA vaults or secret managers, such as HashiCorp Vault, to securely manage bot credentials, ensuring sensitive information remains protected.
- Audit logging: Implementing comprehensive audit logs allowed us to track role changes effectively, providing a clear history of modifications and enhancing accountability.

By automating role assignments, we ensured that users maintained the appropriate access levels throughout their career transitions, aligning with compliance requirements and enhancing overall security within the organization. Our collaborative efforts in implementing RPA have significantly improved the management of user roles, contributing to a more efficient and secure operational environment.
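Our automation ran as a UiPath workflow, but the underlying pattern (iterate over a spreadsheet export and call the HCM API once per row) is straightforward. The following Java sketch is purely illustrative: the endpoint URL, payload fields, token variable, and file format are hypothetical and will differ for Oracle HCM, Workday, or any other vendor.

Java

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class RoleAssignmentJob {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Each line: userId,roleName (a CSV export of the role-assignment spreadsheet)
        List<String> lines = Files.readAllLines(Path.of("role-assignments.csv"));

        for (String line : lines) {
            if (line.isBlank()) {
                continue;
            }
            String[] parts = line.split(",");
            String userId = parts[0].trim();
            String role = parts[1].trim();

            // Hypothetical REST endpoint and payload; real HCM APIs differ per vendor
            String body = String.format("{\"userId\": \"%s\", \"role\": \"%s\"}", userId, role);

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://hcm.example.com/api/roles/assign"))
                    .header("Authorization", "Bearer " + System.getenv("HCM_API_TOKEN"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            // Audit log of every role change, as recommended above
            System.out.printf("Assigned %s to %s -> HTTP %d%n", role, userId, response.statusCode());
        }
    }
}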
4. Automated User Acceptance Testing (UAT)

User Acceptance Testing (UAT) is a critical phase in ensuring that HCM cloud systems meet business requirements before going live. To streamline this process, we implemented RPA bots capable of executing predefined UAT scenarios, comparing expected and actual results, and automatically logging the test results. This automation not only accelerates the testing phase but also ensures that any issues are identified and resolved before the system goes live.

In one of our initiatives, we developed a UiPath workflow that executed UAT scenarios from an Excel sheet, capturing the outcomes of each test. By systematically verifying each functionality, we ensured that the system performed as intended, significantly reducing the risk of post-deployment issues.

Best Practices from Experience

- Automate end-to-end scenarios: We ensured higher test coverage by automating comprehensive end-to-end scenarios, providing confidence that the system meets all functional requirements.
- Report generation for UAT results: By implementing automated report generation for UAT results, we maintained clear documentation of test outcomes, facilitating transparency and accountability within the team.

Through our collaborative efforts in automating UAT, we significantly improved the testing process, allowing for a smooth and successful go-live experience.

5. API Rate Limits and Error Handling With Exponential Backoff

Integrating with HCM systems through APIs often involves navigating rate limits that can disrupt workflows. To address this challenge, we implemented robust retry logic within our RPA bots, using exponential backoff to gracefully handle API rate-limit errors. This approach not only minimizes disruptions but also ensures that critical operations continue smoothly.

In our projects, we established a retry mechanism in UiPath that intelligently handled API requests. By incorporating an exponential backoff strategy, the bot waited progressively longer between retries when encountering rate-limit errors, reducing the likelihood of being locked out. A minimal sketch of the pattern follows the best practices below.

Best Practices from Experience

- Implement retry logic: We incorporated structured retry logic to handle API requests, allowing the bot to efficiently manage rate limits while ensuring successful execution.
- Logging and monitoring: By logging attempts and outcomes during the retry process, we maintained clear visibility into the bot's activities, which facilitated troubleshooting and optimization.

By effectively managing API rate limits and implementing error-handling strategies, our collaborative efforts have enhanced the reliability of our automation initiatives, ensuring seamless integration with HCM systems and maintaining operational efficiency.
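The backoff logic itself is simple and not specific to any RPA product. Here is a minimal Java sketch of the pattern described above (retry with exponentially increasing waits on rate-limit failures); the exception type, attempt limit, and base delay are illustrative assumptions, not part of any vendor API.

Java

import java.util.concurrent.Callable;

public final class ExponentialBackoff {

    /**
     * Retries the given call, doubling the wait after every rate-limit failure.
     * maxAttempts and baseDelayMillis are illustrative defaults chosen by the caller.
     */
    public static <T> T callWithBackoff(Callable<T> call, int maxAttempts, long baseDelayMillis)
            throws Exception {
        long delay = baseDelayMillis;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (RateLimitException e) {     // thrown by the caller on an HTTP 429 response
                if (attempt == maxAttempts) {
                    throw e;                     // give up after the last attempt
                }
                System.out.printf("Rate limited (attempt %d), waiting %d ms%n", attempt, delay);
                Thread.sleep(delay);
                delay *= 2;                      // exponential backoff
            }
        }
        throw new IllegalStateException("unreachable");
    }

    /** Simple marker exception representing a rate-limit (HTTP 429) response. */
    public static class RateLimitException extends Exception {
        public RateLimitException(String message) {
            super(message);
        }
    }
}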
Conclusion

RPA tools significantly accelerate the implementation and testing of Human Capital Management (HCM) cloud systems by automating complex and repetitive tasks, including data migration, multi-factor authentication (MFA) handling, role-based access setup, User Acceptance Testing (UAT) execution, and error handling. By automating these processes, organizations can complete them more quickly and with fewer errors, without the need for manual intervention.

Organizations that adopt RPA for HCM cloud projects can achieve several key benefits:

- Faster deployment timelines: Automation reduces the time required for implementation and testing, allowing organizations to go live more swiftly.
- Improved data accuracy: Automated processes minimize the risk of human error during data migration and other critical tasks, ensuring that information remains accurate and reliable.
- Better compliance: RPA helps organizations adhere to security protocols and regulations by consistently managing tasks that require strict compliance measures.

To fully realize the benefits of RPA in scaling HCM cloud implementations and maintaining operational efficiency over time, organizations should follow best practices such as secure credential management, effective exception handling, and comprehensive reporting. By doing so, enterprises can leverage RPA to optimize their HCM cloud systems effectively.
For years, developers have dreamed of having a coding buddy who would understand their projects well enough to automatically create intelligent code, not just pieces of it. We've all struggled with inconsistent variable naming across files, with trying to recall exactly what function signature was defined months ago, and with hours wasted manually stitching pieces of our codebase together. This is where large language models (LLMs) come in, not as chatbots, but as strong engines in our IDEs, changing how we produce code by finally grasping the context of our work.

Traditional code generation tools, and even basic IDE auto-completion features, usually fall short because they lack a deep understanding of the broader context; they operate on a very limited view, such as only the current file or a small window of code. The result is syntactically correct but semantically inappropriate suggestions, which the developer must constantly correct and integrate by hand. Think about a suggested variable name that is already used in another crucial module with a different meaning, a frustrating experience we've all encountered.

LLMs change this game entirely by bringing a much deeper understanding to the table: analyzing your whole project, from variable declarations in several files down to function call hierarchies and even your coding style. Think of an IDE that truly understands not just the what of your code but also the why and how in the bigger scheme of things. That is the promise of LLM-powered IDEs, and it's real.

Take, for example, a state-of-the-art IDE using LLMs, like Cursor. It's not simply looking at the line you're typing; it knows what function you are in, what variables you have defined in this and related files, and the general structure of your application. That deep understanding is achieved by a couple of architectural components. The first is the Abstract Syntax Tree, or AST: the IDE parses your code into a tree-like representation of its grammatical constructs, giving the LLM an elementary understanding of the code that is far superior to plain text. Second, to properly capture semantics across files, a knowledge graph is generated. It interlinks the class, function, and variable relationships throughout your whole project and builds an understanding of these dependencies and relationships.

Consider a simplified JavaScript example of how context is modeled:

JavaScript

/* Context model based on a single edited document and its external imports */
function Context(codeText, lineInfo, importedDocs) {
    this.current_line_code = codeText; // Line with active text selection
    this.lineInfo = lineInfo;          // Line number, location, code document structure, etc.
    this.relatedContext = {
        importedDocs: importedDocs,    // All info on imports or dependencies within the text
    };
    // ... additional code details ...
}

This flowchart shows how information flows when a developer changes their code:

Mermaid

graph LR
    A["Editor (User Code Modification)"] --> B(Context Extractor);
    B --> C{AST Structure Generation};
    C --> D[Code Graph Definition Creation];
    D --> E(LLM Context API Input);
    E --> F[LLM API Call];
    F --> G(Generated Output);
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style F fill:#aaf,stroke:#333,stroke-width:2px

The Workflow of LLM-Powered IDEs

1. Editor

The process starts with a change that you, as the developer, make in the code using the code editor.
Perhaps you typed some new code, deleted some lines, or edited some statements. This is represented by node A.

2. Context Extractor

The change you have just made triggers the Context Extractor. This module collects all the information surrounding your modification, somewhat like an IDE detective looking for clues in the nearby code. This is represented by node B.

3. AST Structure Generation

The code snippet is fed to the AST Structure Generation module. AST stands for Abstract Syntax Tree. This module parses your code, much like a compiler would, and builds a tree-like representation of its grammatical structure. For LLMs, such a structured view is important for understanding the meaning of, and relationships among, the various parts of the code. This is represented by node C.

4. Code Graph Definition Creation

Next, the Code Graph Definition is created. This module takes the structured information from the AST and builds an even deeper understanding of how your code fits in with the rest of your project. It infers dependencies between files, functions, classes, and variables and extends the knowledge graph, creating a big picture of the overall context of your codebase. This is represented by node D.

5. LLM Context API Input

All the gathered and structured context (the current code, the AST, and the code graph) is transformed into an input structure suitable for the large language model. This input is then sent to the LLM in a request asking for code generation or completion. This is represented by node E.

6. LLM API Call

It is now time to actually call the LLM. The well-structured context is passed to the LLM's API. This is where the magic happens: based on its training data and the given context, the LLM produces code suggestions. This is represented by node F, colored blue to indicate that it is an important node.

7. Generated Output

The LLM returns its suggestions, and the user sees them inside the code editor. These could be code completions, code block suggestions, or even refactoring options, depending on how well the IDE understands the current context of your project. This is represented by node G.
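To make the seven steps more tangible, here is a hypothetical Java skeleton of the flow. None of these types or method names come from a real IDE or vendor API; they simply mirror the stages described above.

Java

// Hypothetical skeleton of the seven-step flow; all types and names are illustrative.
public class CompletionPipeline {

    record EditorChange(String filePath, int line, String newText) {}

    record Context(String currentCode, String astSummary, String codeGraphSummary) {}

    public String onEdit(EditorChange change) {
        Context context = extractContext(change);      // steps 2-4: extractor, AST, code graph
        String prompt = buildPrompt(context, change);  // step 5: LLM context API input
        return callLlm(prompt);                        // steps 6-7: API call, generated output
    }

    private Context extractContext(EditorChange change) {
        // A real IDE would parse the file into an AST and consult the project-wide code graph here.
        return new Context("...", "...", "...");
    }

    private String buildPrompt(Context ctx, EditorChange change) {
        return "Project context:\n" + ctx.codeGraphSummary()
                + "\nCurrent code:\n" + ctx.currentCode()
                + "\nComplete the edit at line " + change.line();
    }

    private String callLlm(String prompt) {
        // Call your LLM provider's API here; a placeholder keeps the sketch self-contained.
        return "// suggested code";
    }
}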
So, how does this translate to real-world improvements? We've run benchmarks comparing traditional code completion methods with those powered by LLMs in context-aware IDEs. The results are compelling:

| Metric | Baseline (Traditional Methods) | LLM-Powered IDE (Context Aware) | Improvement |
| --- | --- | --- | --- |
| Accuracy of suggestions (score 0-1) | 0.55 | 0.91 | 65% higher |
| Average latency (ms) | 20 | 250 | Acceptable for the benefit |
| Token count in prompt | Baseline | ~30% less (optimized context) | Optimized prompt size |

Graph: Comparison of suggestion accuracy scores across 10 different code generation tasks. A higher score indicates better accuracy.

Mermaid

graph LR
    A[Test Case 1] -->|Baseline: 0.5| B(0.9);
    A -->|LLM IDE: 0.9| B;
    C[Test Case 2] -->|Baseline: 0.6| D(0.88);
    C -->|LLM IDE: 0.88| D;
    E[Test Case 3] -->|Baseline: 0.7| F(0.91);
    E -->|LLM IDE: 0.91| F;
    G[Test Case 4] -->|Baseline: 0.52| H(0.94);
    G -->|LLM IDE: 0.94| H;
    I[Test Case 5] -->|Baseline: 0.65| J(0.88);
    I -->|LLM IDE: 0.88| J;
    K[Test Case 6] -->|Baseline: 0.48| L(0.97);
    K -->|LLM IDE: 0.97| L;
    M[Test Case 7] -->|Baseline: 0.58| N(0.85);
    M -->|LLM IDE: 0.85| N;
    O[Test Case 8] -->|Baseline: 0.71| P(0.90);
    O -->|LLM IDE: 0.90| P;
    Q[Test Case 9] -->|Baseline: 0.55| R(0.87);
    Q -->|LLM IDE: 0.87| R;
    S[Test Case 10] -->|Baseline: 0.62| T(0.96);
    S -->|LLM IDE: 0.96| T;
    style B fill:#ccf,stroke:#333,stroke-width:2px
    style D fill:#ccf,stroke:#333,stroke-width:2px

Let's break down how these coding tools performed, like watching a head-to-head competition. Think of each test case as a different coding challenge (we called them "Test Case 1" through "Test Case 10"). For each challenge, we pitted two approaches against each other:

- The baseline: Think of this as the "old-school" method, either standard code suggestions or a basic AI that doesn't really "know" the project inside and out. You'll see an arrow pointing from the test case (like 'Test Case 1', which we labeled node A) to its score; that's how well the baseline did.
- The LLM IDE: This is the "smart" IDE, the one with a deep understanding of the entire project, as if it had been studying it for weeks. Another arrow points from the same test case to the same score node, but this time it tells you how the intelligent IDE performed.

Notice how the result node (like node B) is highlighted in light blue? That's our visual cue to show where the smart IDE really shined. Take Test Case 1 (node A) as an example:

- The arrow marked 'Baseline: 0.5' means the traditional method got it right about half the time for that task.
- But look at the arrow marked 'LLM IDE: 0.9'! The smart IDE, because it understands the bigger picture of the project, nailed it almost every time.

If you scan through each test case, you'll quickly see a pattern: the LLM-powered IDE consistently and significantly outperforms the traditional approach. It's like having a super-knowledgeable teammate who always seems to know the right way to do things because they understand the entire project.

The big takeaway here is the massive leap in accuracy when the AI truly grasps the context of your project. Yes, there's a bit more waiting time as the IDE does its deeper analysis, but the huge jump in accuracy, and the fact that you'll spend far less time fixing errors, makes it a clear win for developers.

But it's more than just the numbers. Think about the actual experience of coding. Engineers who've used these smarter IDEs say it feels like a weight has been lifted. They're not constantly having to keep every tiny detail of the project in their heads. They can focus on the bigger, more interesting problems, trusting that their IDE has their back on the details. Even tricky work like reorganizing code becomes less of a headache, and getting up to speed on a new project becomes much smoother because the AI acts like a built-in expert, helping you connect the dots.

These LLM-powered IDEs aren't just about spitting out code; they're about making developers more powerful. By truly understanding the intricate connections within a project, these tools are poised to change how software is built.
They'll make us faster and more accurate, and ultimately allow us to focus on building truly innovative things. The future of coding assistance is here, and it's all about deep contextual understanding.