For Java apps, containerization helps solve the majority of challenges related to portability and consistency. See how.
Far too many vulnerabilities have been introduced into software products. Don't treat your supply chain security as an afterthought.
Dev Home and Dev Boxes: Revolutionizing Developer Environments
Formulating a Robust Strategy for Storage in Amazon Relational Database Service PostgreSQL Deployments
Observability and Performance
The dawn of observability across the software ecosystem has fully disrupted standard performance monitoring and management. Enhancing these approaches with sophisticated, data-driven, and automated insights allows your organization to better identify anomalies and incidents across applications and wider systems. While monitoring and standard performance practices are still necessary, they now serve to complement organizations' comprehensive observability strategies. This year's Observability and Performance Trend Report moves beyond metrics, logs, and traces — we dive into essential topics around full-stack observability, like security considerations, AIOps, the future of hybrid and cloud-native observability, and much more.
Java Application Containerization and Deployment
Software Supply Chain Security
Think back to those days when you met the love of your life. The feeling was mutual. The world seemed like a better place, and you were on an exciting journey with your significant other. You were both “all-in” as you made plans for a life together. Life was amazing... until it wasn’t. When things don’t work out as planned, then you’ve got to do the hard work of unwinding the relationship. Communicating with each other and with others. Sorting out shared purchases. Moving on. Bleh. Believe it or not, our relationship with technology isn’t all that different. Breaking Up With a Service There was a time when you decided to adopt a service — maybe it was a SaaS, or a PaaS, or something more generic. Back in the day, did you make the decision while also considering the time when you would no longer plan to use the service anymore? Probably not. You were just thinking of all the wonderful possibilities for the future. But what happens when going with that service is no longer in your best interest? Now, you’re in for a challenge, and it’s called service abdication. While services can be shut down with a reasonable amount of effort, getting the underlying data can be problematic. This often depends on the kind of service and the volume of data owned by that service provider. Sometimes, the ideal unwinding looks like this: Stop paying for the service, but retain access to the data source for some period of time. Is this even a possibility? Yes, it is! The Power of VPC Peering Leading cloud providers have embraced the virtual private cloud (VPC) network as the de facto approach to establishing connectivity between resources. For example, an EC2 instance on AWS can access a data source using VPCs and VPC end-point services. Think of it as a point-to-point connection. VPCs allow us to grant access to other resources in the same cloud provider, but we can also use them to grant access to external services. Consider a service that was recently abdicated but with the original data source left in place. Here’s how it might look: This concept is called VPC peering, and it allows for a private connection to be established from another network. A Service Migration Example Let’s consider a more concrete example. In your organization, a business decision was made to streamline how it operates in the cloud. While continuing to leverage some AWS services, your organization wanted to optimize how it builds, deploys, and manages its applications by terminating a third-party, cloud-based service running on AWS. They ran the numbers and concluded that internal software engineers could stand up and support a new auto-scaled service on Heroku for a fraction of the cost that they had been paying the third-party provider. However, because of a long tenure with the service provider, migrating the data source is not an option anytime soon. You don’t want the service, and you can’t move the data, but you still want access to the data. Fortunately, the provider has agreed to a new contract to continue hosting the data and provide access — via VPC peering. Here’s how the new arrangement would look: VPC Peering With Heroku In order for your new service (a Heroku app) to access the original data source in AWS, you’ll first need to run your app within a Private Space. For more information, you can read about secure cloud adoption and my discovery of Heroku Private Spaces. 
Next, you’ll need to meet the following simple network requirements: The VPC must use a compatible IPv4 CIDR block in its network configuration.The VPC must use an RFC1918 CIDR block (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16).The VPC’s CIDR block must not overlap with the CIDR ranges for your Private Space. The default ranges are 10.0.0.0/16, 10.1.0.0/16, and 172.17.0.0/16. With your Private Space up and running, you’ll need to retrieve its peering information: Shell $ heroku spaces:peering:info our-new-app === our-new-app Peering Info AWS Account ID: 647xxxxxx317 AWS Region: us-east-1 AWS VPC ID: vpc-e285ab73 AWS VPC CIDR: 10.0.0.0/16 Space CIDRs: 10.0.128.0/20, 10.0.144.0/20, 10.0.0.0/20, 10.0.16.0/20 Unavailable CIDRs: 10.1.0.0/16 Copy down the AWS Account ID (647xxxxxx317) and AWS VPC ID (vpc-e285ab73). You’ll need to give that information to the third-party provider who controls the AWS data source. From there, they can use either the AWS Console or CLI to create a peering connection. Their operation would look something like this: Shell $ aws ec2 create-vpc-peering-connection \ --vpc-id vpc-e527bb17 \ --peer-vpc-id vpc-e285ab73 \ --peer-owner-id 647xxxxxx317 { "VpcPeeringConnection": { "Status": { "Message": "Initiating Request to 647xxxxxx317", "Code": "initiating-request" }, "Tags": [], "RequesterVpcInfo": { "OwnerId": "714xxxxxx214", "VpcId": "vpc-e527bb17", "CidrBlock": "10.100.0.0/16" }, "VpcPeeringConnectionId": "pcx-123abc456", "ExpirationTime": "2025-04-23T22:05:27.000Z", "AccepterVpcInfo": { "OwnerId": "647xxxxxx317", "VpcId": "vpc-e285ab73" } } } This creates a request to peer. Once the provider has done this, you can view the pending request on the Heroku side: Shell $ heroku spaces:peerings our-new-app In the screenshot below, we can see the pending-acceptance status for the peering connection. From here, you can accept the peering connection request: Shell $ heroku spaces:peerings:accept pcx-123abc456 --space our-new-app Accepting and configuring peering connection pcx-123abc456 We check the request status a second time: Shell $ heroku spaces:peerings our-new-app We see that the peer connection is active. At this point, the app running in our Heroku Private Space will be able to access the AWS data source without any issues. Conclusion An unfortunate truth in life is that relationships can be unsuccessful just as often as they can be long-lasting. This applies to people, and it applies to technology. When it comes to technology decisions, sometimes changing situations and needs drive us to move in different directions. Sometimes, things just don’t work out. And in these situations, the biggest challenge is often unwinding an existing implementation — without losing access to persistent data. Fortunately, Heroku provides a solution for slowly migrating away from existing cloud-based solutions while retaining access to externally hosted data. Its easy integration for VPC peering with AWS lets you access resources that still need to live in the legacy implementation, even if the rest of you have moved on. Taking this approach will allow your new service to thrive without an interruption in service to the consumer.
Organizations adopting Infrastructure as Code (IaC) on AWS often struggle with ensuring that their infrastructure is not only correctly provisioned but also functioning as intended once deployed. Even minor misconfigurations can lead to costly downtime, security vulnerabilities, or performance issues. Traditional testing methods — such as manually inspecting resources or relying solely on static code analysis — do not provide sufficient confidence for production environments. There is a pressing need for an automated, reliable way to validate AWS infrastructure changes before they go live. Solution Terratest provides an automated testing framework written in Go, designed specifically to test infrastructure code in real-world cloud environments like AWS. By programmatically deploying, verifying, and destroying resources, Terratest bridges the gap between writing IaC (e.g., Terraform) and confidently shipping changes. Here’s how it works: Below is a detailed guide on how to achieve AWS infrastructure testing using Terratest with Terraform, along with sample code snippets in Go. This workflow will help you provision AWS resources, run tests against them to ensure they work as intended, and then tear everything down automatically. Prerequisites Install Terraform Download and install Terraform from the official site. Install Go Terratest is written in Go, so you’ll need Go installed. Download Go from the official site. Set Up AWS Credentials Ensure your AWS credentials are configured (e.g., via ~/.aws/credentials or environment variables like AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). Initialize a Go Module In your project directory, run: Shell go mod init github.com/yourusername/yourproject go mod tidy Add Terratest to Your go.mod In your project/repo directory, run: Shell go get github.com/gruntwork-io/terratest/modules/terraform go get github.com/stretchr/testify/assert Sample Terraform Configuration Create a simple Terraform configuration that launches an AWS EC2 instance. Put the following files in a directory named aws_ec2_example (or any name you prefer). Save it as main.tf for reference. Shell terraform { required_providers { aws = { source = "hashicorp/aws" version = "~> 4.0" } } required_version = ">= 1.3.0" } provider "aws" { region = var.aws_region } resource "aws_instance" "example" { ami = var.ami_id instance_type = "t2.micro" tags = { Name = "Terratest-Example" } } output "instance_id" { value = aws_instance.example.id } Next, variables.tf: Shell variable "aws_region" { type = string default = "us-east-1" } variable "ami_id" { type = string default = "ami-0c55b159cbfafe1f0" # Example Amazon Linux AMI (update as needed) } Terratest Code Snippet Create a Go test file in a directory named test (or you can name it anything, but test is conventional). 
For example, aws_ec2_test.go: Shell package test import ( "testing" "github.com/gruntwork-io/terratest/modules/terraform" "github.com/stretchr/testify/assert" ) func TestAwsEC2Instance(t *testing.T) { // Define Terraform options to point to the Terraform folder terraformOptions := &terraform.Options{ TerraformDir: "../aws_ec2_example", // Optional: pass variables if you want to override defaults Vars: map[string]interface{}{ "aws_region": "us-east-1", "ami_id": "ami-0c55b159cbfafe1f0", }, } // At the end of the test, destroy the resources defer terraform.Destroy(t, terraformOptions) // Init and apply the Terraform configuration terraform.InitAndApply(t, terraformOptions) // Fetch the output variable instanceID := terraform.Output(t, terraformOptions, "instance_id") // Run a simple assertion to ensure the instance ID is not empty assert.NotEmpty(t, instanceID, "Instance ID should not be empty") } What This Test Does Initializes and applies the Terraform configuration in ../aws_ec2_example.Deploys an EC2 instance with the specified AMI in us-east-1.Captures the instance_id Terraform output.Verifies that the instance ID is not empty using Testify’s assert library.Destroys the resources at the end of the test to avoid incurring ongoing costs. Running the Tests Navigate to the directory containing your Go test file (e.g., test directory).Run the following command: Shell go test -v Observe the output: You’ll see Terraform initializing and applying your AWS infrastructure.After the test assertions pass, Terraform will destroy the resources. Conclusion By following these steps, you can integrate Terratest into your AWS IaC workflow to: Provision AWS resources using Terraform.Test them programmatically with Go-based tests.Validate that your infrastructure is configured properly and functioning as expected.Tear down automatically, ensuring that you’re not incurring unnecessary AWS costs and maintaining a clean environment for repeated test runs.
In the world of distributed systems, few things are more frustrating to users than making a change and then not seeing it immediately. Change your status on your favorite social network and reload the page, only to be greeted by your previous status. This is where Read Your Own Writes (RYW) consistency becomes quite important; it is not just a technical requirement but a core expectation from the user's perspective. What Is Read Your Own Writes Consistency? Read Your Own Writes consistency is an assurance that once a process, usually a user, has updated a piece of data, all subsequent reads by that same process will return the updated value. It is a specific form of session consistency that concerns how a user interacts with their own data modifications. Let's look at some real-world scenarios where RYW consistency is important: 1. Social Media Updates When you post a tweet or update your status on social media, you expect to see that update as soon as the feed reloads. Without RYW consistency, content may seem to "vanish" for a brief period and then appear multiple times, confusing your audience and creating apparent duplicates. 2. Document Editing In systems that involve collaborative document editing, such as Google Docs, users must see their own changes immediately, even though there might be a slight delay in seeing updates from other users. 3. E-commerce Inventory Management If a seller updates their product inventory, they must immediately see the correct numbers in order to make informed business decisions. Common Challenges in Implementing RYW 1. Caching Complexities One of the biggest challenges comes from caching layers. When data is cached at different levels (browser, CDN, application server), a suitable cache invalidation or update strategy is needed to ensure the latest write is delivered to the client, i.e., the user. 2. Load Balancing In systems with multiple replicas and load balancers, requests from the same user may be routed to different servers. This can break RYW consistency if not handled properly. 3. Replication Lag In primary-secondary databases, writes are directed to the primary while reads may be served from the secondaries. Replication lag creates a window during which recent writes are not yet visible. Implementation Strategies 1. Sticky Sessions Python
# Example load balancer configuration
class LoadBalancer:
    def route_request(self, user_id, request):
        # Route to the same server for a given user session
        server = self.session_mapping.get(user_id)
        if not server:
            server = self.select_server()
            self.session_mapping[user_id] = server
        return server
2. Write-Through Caching Python
class CacheLayer:
    def update_data(self, key, value):
        # Update database first
        self.database.write(key, value)
        # Immediately update cache
        self.cache.set(key, value)
        # Attach version information
        self.cache.set_version(key, self.get_timestamp())
3. Version Tracking Python
class SessionManager:
    def track_write(self, user_id, resource_id):
        # Record the latest write version for this user
        timestamp = self.get_timestamp()
        self.write_versions[user_id][resource_id] = timestamp

    def validate_read(self, user_id, resource_id, data):
        # Ensure read data is at least as fresh as user's last write
        last_write = self.write_versions[user_id].get(resource_id)
        return data.version >= last_write if last_write else True
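To see how these strategies fit together, here is a minimal sketch of a version-aware read path: it records a version on every write and falls back to the primary database whenever the cached or replica copy is older than the user's last write. The cache, replica, and primary clients are hypothetical stand-ins, not part of any specific framework.
Python
import time

class RywReader:
    """Sketch of a per-user read-your-own-writes read path (assumes hypothetical clients)."""

    def __init__(self, cache, replica, primary):
        self.cache = cache        # hypothetical key-value cache client
        self.replica = replica    # hypothetical read-replica client
        self.primary = primary    # hypothetical primary database client
        self.last_write = {}      # (user_id, key) -> version of the user's latest write

    def write(self, user_id, key, value):
        version = time.time()
        self.primary.write(key, value, version)        # writes always go to the primary
        self.cache.set(key, (value, version))          # write-through cache update
        self.last_write[(user_id, key)] = version      # remember this user's write version
        return version

    def read(self, user_id, key):
        required = self.last_write.get((user_id, key), 0)
        cached = self.cache.get(key)                   # expected to return (value, version) or None
        if cached and cached[1] >= required:
            return cached[0]                           # cache is fresh enough for this user
        value, version = self.replica.read(key)        # try a replica next
        if version >= required:
            return value
        # Replica is lagging behind the user's own write: fall back to the primary
        value, version = self.primary.read(key)
        self.cache.set(key, (value, version))
        return value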
Best Practices 1. Use Timestamps or Versions Attach version information to all writes. Compare versions during reads to ensure consistency. Consider using logical clocks for better ordering. 2. Implement Smart Caching Strategies Use a cache-aside pattern with careful invalidation. Consider write-through caching for critical updates. Implement cache versioning. 3. Monitor and Alert Track consistency violations. Measure read-write latencies. Alert on abnormal patterns. Conclusion Read Your Own Writes consistency may appear to be a modest requirement. However, implementing it properly in a distributed system requires careful consideration of caching, routing, and data replication design. By being aware of the challenges involved and implementing adequate solutions, we can design systems that make the experience smooth and intuitive for users. Keep in mind that among the many consistency models in distributed systems, RYW consistency is one of the most visible to users: people will tolerate eventual consistency when observing updates from other users, but they expect their own changes to be reflected immediately.
Security comes down to trust. In DevOps and our applications, it really is a question of "should this entity be allowed to do that action?" In an earlier time in IT, we assumed that if something was inside a trusted perimeter, be it our private network or a specific machine, then it was trustworthy and naturally should be able to access resources and data. However, as applications became more complex, spanning not just machines but also different data centers and continents, and reliance on third-party services via APIs became the norm, we could no longer rely on trusted perimeters. We replaced the trusted perimeter with a model based on "never trust, always verify" and "the principle of least privilege." We have come to call that model of security "zero trust," and the type of infrastructure we create using this principle "zero trust architecture." Much of the focus in zero trust discussions centers on human identities, which do need to be considered, but the challenges around securing non-human identities (NHIs) must be addressed as well. The full scope of the NHI trust issue becomes very concerning when you consider the sheer volume involved. According to research from CyberArk, in 2022, NHIs outnumbered human identities at an enterprise by a factor of 45 to 1. Some estimates put the ratio as high as 100 to 1 in 2024, and it is predicted to keep increasing well into the future. Implementing security for all our identities and leaning into zero trust has never been more important. Fortunately, you are not alone in this fight to make your applications more secure and adopt a zero-trust posture. One governing body that has put out a lot of guidance on this issue is the National Institute of Standards and Technology (NIST). In this article, we will take a closer look at achieving zero trust architecture for our NHIs based on NIST's advice. Defining Zero Trust Architecture and NHIs Starting with an agreed-upon definition is always a good idea when contemplating any new approach or term. NIST Special Publication 800-207 gives us a formal definition of zero trust: "Zero trust (ZT) provides a collection of concepts and ideas designed to minimize uncertainty in enforcing accurate, least privilege per-request access decisions in information systems and services in the face of a network viewed as compromised. Zero trust architecture (ZTA) is an enterprise’s cybersecurity plan that utilizes zero trust concepts and encompasses component relationships, workflow planning, and access policies. Therefore, a zero trust enterprise is the network infrastructure (physical and virtual) and operational policies that are in place for an enterprise as a product of a zero trust architecture plan." Non-human identities are machine-based credentials that allow API integrations and automated workflows, which require machine-to-machine communication. These include API keys, service accounts, certificates, tokens, and roles, which collectively enable the scalability and efficiency required in modern cloud-native and hybrid environments. However, their mismanagement introduces significant security risks, making them an important component of any robust zero-trust strategy. What Can Go Wrong? Poorly managed NHIs pose significant security challenges. Secrets sprawl, the leaking of hardcoded API keys and tokens into codebases or logs, exposes sensitive credentials and creates an easy target for attackers.
Given the staggering number of hardcoded credentials added to public repos on GitHub alone, over 12.7 million in 2023, the majority of which were for machine identities, the full scope of this problem starts to come into focus. Adding to the issue are over-permissioned NHIs, which utilize only a fraction of their granted access yet greatly expand the attack surface and heighten the risk of privilege escalation. When an attacker does find a leaked secret, they are often able to use it to move laterally throughout your systems and escalate privileges. Inadequate lifecycle management leaves stale credentials like unused service accounts and outdated certificates vulnerable. This is how the problem of "zombie leaks," when a secret is exposed but not revoked, happens in so many codebases, project management systems, and communication platforms. For example, a commit author may believe that deleting the commit or repository is sufficient, overlooking the crucial revocation step and, therefore, not completing the needed end-of-life step for managing an NHI. What NIST Has to Say About Securing NHIs NIST publishes many documents with guidance on properly securing credentials, but most of their publications focus on human identities, such as user accounts. They use the term non-person entities (NPE) in some of their work, but across the current enterprise landscape, these are much more commonly called NHIs. We will stick with that current naming convention for this article. Non-human identity security consists of multiple strategies. The following points should be seen as a partial list of recommendations. Eliminate Long-Lived Credentials NIST SP 800-207: Zero Trust Architecture covers ZTA policies and emphasizes the equal treatment of NHIs and human users when it comes to authentication, authorization, and access control. One of the significant recommendations is the elimination of all long-lived credentials. By automatically expiring after a short duration, short-lived credentials reduce the risk of unauthorized access and force regular re-authentication. This ensures that any stolen or exposed credential has limited utility for attackers. Keep An Eye Out for Anomalous Activity SP 800-207 also calls for continuous monitoring of NHI activities. Teams should strive to collect and analyze logs to detect unusual or unauthorized behavior around API calls, service account usage, or token operations. According to NIST, ZTA is especially critical in highly automated environments, such as DevOps pipelines and cloud-native architectures, where machine-to-machine interactions outnumber human actions by an increasing factor. Don't Trust for Very Long NIST SP 800-207A: "A Zero Trust Architecture Model for Access Control in Cloud-Native Applications in Multi-Cloud Environments" gives even more pointed advice. When discussing service authentication, it says, "Each service should present a short-lived cryptographically verifiable identity credential to other services that are authenticated per connection and reauthenticated regularly." Mature teams can also consider routes for replacing credentials with automatically rotated certificates. Teams already embracing service meshes can easily adopt systems like SPIFFE/SPIRE. For teams that have not already looked at PKI for machine identities, there are a lot of benefits in investigating this route. The sketch below shows one way this can look in practice.
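To make the idea of short-lived, automatically expiring credentials concrete, here is a minimal sketch using AWS STS to assume a narrowly scoped role for a time-boxed session. The role ARN and session name are placeholders, and this is only one of many ways to issue ephemeral credentials; it is not prescribed by the NIST documents themselves.
Python
import boto3

def get_short_lived_credentials(role_arn: str, session_name: str):
    """Request temporary credentials that expire automatically (15 minutes here)."""
    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn=role_arn,              # placeholder: the narrowly scoped role for this workload
        RoleSessionName=session_name,  # shows up in CloudTrail for auditing
        DurationSeconds=900,           # shortest allowed duration; forces frequent re-authentication
    )
    return response["Credentials"]     # AccessKeyId, SecretAccessKey, SessionToken, Expiration

if __name__ == "__main__":
    creds = get_short_lived_credentials(
        "arn:aws:iam::123456789012:role/example-workload-role",  # hypothetical role ARN
        "nhi-demo-session",
    )
    print("Credentials expire at:", creds["Expiration"])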
Least Privilege for NHI SP 800-207A also encourages embracing the "principle of least privilege." This ensures that NHIs operate with only the permissions necessary for their specific tasks. By minimizing access scope, organizations can significantly reduce the attack surface, limiting potential damage if an account is compromised. This requires regular audits of permissions to identify unused or excessive privileges and a continuous effort to enforce access restrictions in alignment with actual operational needs. Least privilege is particularly critical for service accounts, which often have elevated permissions by default, creating unnecessary risks in automated environments. Centralized Secrets Management Both NIST publications include a clear call for managing secrets in a centralized secrets management platform. Enterprise secret management tools such as HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault offer secure storage, rotation, and access control for sensitive credentials. These platforms ensure secrets are encrypted, accessed only by authorized entities, and logged for auditing purposes. By centralizing secrets management, organizations reduce the risks of secrets sprawl and mismanagement while enabling streamlined rotation policies that maintain system integrity. Securing NHIs Together NIST has provided invaluable guidance for organizations striving to adopt a zero-trust architecture and secure NHIs. Their meticulous research and recommendations, such as eliminating long-lived credentials, enforcing least privilege, and advocating for centralized secrets management, have set a strong foundation for tackling the growing complexities of securing NHIs in modern infrastructures.
We’re all familiar with the principles of DevOps: building small, well-tested increments, deploying frequently, and automating pipelines to eliminate the need for manual steps. We monitor our applications closely, set up alerts, roll back problematic changes, and receive notifications when issues arise. However, when it comes to databases, we often lack the same level of control and visibility. Debugging performance issues can be challenging, and we might struggle to understand why databases slow down. Schema migrations and modifications can spiral out of control, leading to significant challenges. Overcoming these obstacles requires strategies that streamline schema migration and adaptation, enabling efficient database structure changes with minimal downtime or performance impact. It’s essential to test all changes cohesively throughout the pipeline. Let’s explore how this can be achieved. Automate Your Tests Databases are prone to many types of failures, yet they often don’t receive the same rigorous testing as applications. While developers typically test whether applications can read and write the correct data, they often overlook how this is achieved. Key aspects like ensuring the proper use of indexes, avoiding unnecessary lazy loading, or verifying query efficiency often go unchecked. For example, we focus on how many rows the database returns but neglect to analyze how many rows it had to read. Similarly, rollback procedures are rarely tested, leaving us vulnerable to potential data loss with every change. To address these gaps, we need comprehensive automated tests that detect issues proactively, minimizing the need for manual intervention. We often rely on load tests to identify performance issues, and while they can reveal whether our queries are fast enough for production, they come with significant drawbacks. First, load tests are expensive to build and maintain, requiring careful handling of GDPR compliance, data anonymization, and stateful applications. Moreover, they occur too late in the development pipeline. When load tests uncover issues, the changes are already implemented, reviewed, and merged, forcing us to go back to the drawing board and potentially start over. Finally, load tests are time-consuming, often requiring hours to fill caches and validate application reliability, making them less practical for catching issues early. Schema migrations often fall outside the scope of our tests. Typically, we only run test suites after migrations are completed, meaning we don’t evaluate how long they took, whether they triggered table rewrites, or whether they caused performance bottlenecks. These issues often go unnoticed during testing and only become apparent when deployed to production. Another challenge is that we test with databases that are too small to uncover performance problems early. This reliance on inadequate testing can lead to wasted time on load tests and leaves critical aspects, like schema migrations, entirely untested. This lack of coverage reduces our development velocity, introduces application-breaking issues, and hinders agility. The solution to these challenges lies in implementing database guardrails. Database guardrails evaluate queries, schema migrations, configurations, and database designs as we write code. Instead of relying on pipeline runs or lengthy load tests, these checks can be performed directly in the IDE or developer environment. 
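As an illustration of the kind of check a guardrail can run long before a load test, here is a minimal sketch of an automated test that inspects a PostgreSQL execution plan and fails if a lookup falls back to a sequential scan. The connection string, table, and query are hypothetical and would be adapted to your own schema and tooling.
Python
import json
import psycopg2

# Hypothetical connection string and query; adjust to your environment.
DSN = "postgresql://app:app@localhost:5432/appdb"
QUERY = "SELECT * FROM orders WHERE customer_id = %s"

def explain(cursor, query, params):
    """Return the JSON execution plan for a query without serving real traffic."""
    cursor.execute("EXPLAIN (FORMAT JSON) " + query, params)
    return cursor.fetchone()[0][0]["Plan"]

def test_orders_lookup_uses_index():
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            plan = explain(cur, QUERY, (42,))
    # Fail fast if the planner chose a full table scan for this lookup.
    assert plan["Node Type"] != "Seq Scan", (
        "Expected an index scan, got: " + json.dumps(plan, indent=2)
    )

if __name__ == "__main__":
    test_orders_lookup_uses_index()
    print("orders lookup uses an index")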
By leveraging observability and projections of the production database, guardrails assess execution plans, statistics, and configurations, ensuring everything will function smoothly post-deployment. Build Observability Around Databases When we deploy to production, system dynamics can change over time. CPU load may spike, memory usage might grow, data volumes could expand, and data distribution patterns may shift. Identifying these issues quickly is essential, but it's not enough. Current monitoring tools overwhelm us with raw signals, leaving us to piece together the reasoning. For example, they might indicate an increase in CPU load but fail to explain why it happened. The burden of investigating and identifying root causes falls entirely on us. This approach is outdated and inefficient. To truly move fast, we need to shift from traditional monitoring to full observability. Instead of being inundated with raw data, we need actionable insights that help us understand the root cause of issues. Database guardrails offer this transformation. They connect the dots, showing how various factors interrelate, pinpointing the problem, and suggesting solutions. Instead of simply observing a spike in CPU usage, guardrails help us understand that a recent deployment altered a query, causing an index to be bypassed, which led to the increased CPU load. With this clarity, we can act decisively, fixing the query or index to resolve the issue. This shift from "seeing" to "understanding" is key to maintaining speed and reliability. The next evolution in database management is transitioning from automated issue investigation to automated resolution. Many problems can be fixed automatically with well-integrated systems. Observability tools can analyze performance and reliability issues and generate the necessary code or configuration changes to resolve them. These fixes can either be applied automatically or require explicit approval, ensuring that issues are addressed immediately with minimal effort on your part. Beyond fixing problems quickly, the ultimate goal is to prevent issues from occurring in the first place. Frequent rollbacks or failures hinder progress and agility. True agility is achieved not by rapidly resolving issues but by designing systems where issues rarely arise. While this vision may require incremental steps to reach, it represents the ultimate direction for innovation. Metis empowers you to overcome these challenges. It evaluates your changes before they’re even committed to the repository, analyzing queries, schema migrations, execution plans, performance, and correctness throughout your pipelines. Metis integrates seamlessly with CI/CD workflows, preventing flawed changes from reaching production. But it goes further — offering deep observability into your production database by analyzing metrics and tracking deployments, extensions, and configurations. It automatically fixes issues when possible and alerts you when manual intervention is required. With Metis, you can move faster and automate every aspect of your CI/CD pipeline, ensuring smoother and more reliable database management. Everyone Needs to Participate Database observability is about proactively preventing issues, advancing toward automated understanding and resolution, and incorporating database-specific checks throughout the development process. Relying on outdated tools and workflows is no longer sufficient; we need modern solutions that adapt to today’s complexities. Database guardrails provide this support. 
They help developers avoid creating inefficient code, analyze schemas and configurations, and validate every step of the software development lifecycle within our pipelines. Guardrails also transform raw monitoring data into actionable insights, explaining not just what went wrong but how to fix it. This capability is essential across all industries, as the complexity of systems will only continue to grow. To stay ahead, we must embrace innovative tools and processes that enable us to move faster and more efficiently.
API testing has gained a lot of momentum these days. As no UI is involved, it is a lot easier and quicker to test. This is the reason why API testing is considered the first choice for performing end-to-end testing of the system. Integrating automated API tests with CI/CD pipelines allows teams to get faster feedback on their builds. In this blog, we'll discuss and learn about DELETE API requests and how to handle them using Playwright Java for automation testing, covering the following points: What is a DELETE request? How do you test DELETE APIs using Playwright Java? Getting Started It is recommended that you check out the earlier tutorial blog to learn about the details related to prerequisites, setup, and configuration. Application Under Test We will be using the free-to-use RESTful e-commerce APIs that offer multiple APIs related to order management functionality, allowing us to create, retrieve, update, and delete orders. This application can be set up locally using Docker or NodeJS. What Is a DELETE Request? A DELETE API request deletes the specified resource from the server. Generally, there is no response body for DELETE requests. The resource is specified by a URI, and the server permanently deletes it. DELETE requests are neither considered safe nor idempotent, as they may cause side effects on the server, like removing data from a database. The following are some of the limitations of DELETE requests: The data deleted using a DELETE request is not reversible, so it should be handled carefully. It is not considered to be a safe method, as it can directly delete the resource from the database, causing conflicts in the system. It is not an idempotent method, meaning calling it multiple times for the same resource may result in different states. For example, the first time DELETE is called, it will return Status Code 204, stating that the resource has been deleted; if DELETE is called again on the same resource, it may return 404 NOT FOUND, as the given resource has already been deleted. The following is an example of the DELETE API endpoint from the RESTful e-commerce project. DELETE /deleteOrder/{id}: Deletes an Order By ID This API requires the order_id to be supplied as a path parameter in order to delete the respective order from the system. No request body needs to be provided in this DELETE API request. However, as a security measure, a token is required to be provided as a header to delete the order. Once the API is executed, it deletes the specified order from the system and returns Status Code 204. In cases where the order is not found, or the token is not valid or not provided, it will return one of the following responses:
Status Code: Description
400: Failed to authenticate the token
404: No order with the given order_id is found in the system
403: Token is missing in the request
How to Test DELETE APIs Using Playwright Java Testing DELETE APIs is an important step in ensuring the stability and reliability of the application. Correct implementation of DELETE APIs is essential to prevent unintended data loss and inconsistencies, as DELETE APIs are in charge of removing resources from the system. In this demonstration of testing DELETE APIs using Playwright Java, we'll be using /deleteOrder/{id} to delete an existing order from the system.
Test Scenario 1: Delete a Valid Order Start the RESTful e-commerce service.Using a POST request, create some orders in the system.Delete the order with order_id “1” using DELETE request.Check that the Status Code 204 is returned in the response. Test Implementation The following steps are required to be performed to implement the test scenario: Add new orders using the POST request.Hit the /auth API to generate token.Hit the /deleteOrder/ API endpoint with the token and the order_id to delete the order.Check that the Status Code 204 is returned in the response. A new test method, testShouldDeleteTheOrder(), is created in the existing test class HappyPathTests. This test method implements the above three steps to test the DELETE API. Java @Test public void testShouldDeleteTheOrder() { final APIResponse authResponse = this.request.post("/auth", RequestOptions.create().setData(getCredentials())); final JSONObject authResponseObject = new JSONObject(authResponse.text()); final String token = authResponseObject.get("token").toString(); final int orderId = 1; final APIResponse response = this.request.delete("/deleteOrder/" + orderId, RequestOptions.create() .setHeader("Authorization", token)); assertEquals(response.status(), 204); } The POST /auth API endpoint will be hit first to generate the token. The token received in response is stored in the token variable to be used further in the DELETE API request. Next, new orders will be generated using the testShouldCreateNewOrders() method, which is already discussed in the previous tutorial, where we talked about testing POST requests using Playwright Java. After the orders are generated, the next step is to hit the DELETE request with the valid order_id that would delete the specific order. We'll be deleting the order with the order_id “1” using the delete() method provided by Playwright framework. After the order is deleted, the Status Code 204 is returned in response. An assertion will be performed on the Status Code to verify that the Delete action was successful. Since no request body is returned in the response, this is the only thing that can be verified. Test Execution We'll be creating a new testng.xml named testng-restfulecommerce-deleteorders.xml to execute the tests in the order of the steps that we discussed in the test implementation. XML <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd"> <suite name="Restful ECommerce Test Suite"> <test name="Testing Happy Path Scenarios of Creating and Updating Orders"> <classes> <class name="io.github.mfaisalkhatri.api.restfulecommerce.HappyPathTests"> <methods> <include name="testShouldCreateNewOrders"/> <include name="testShouldDeleteTheOrder"/> </methods> </class> </classes> </test> </suite> First, the testShouldCreateNewOrders() test method will be executed, and it will create new orders. Next, the testShouldDeleteTheOrder() test method order will be executed to test the delete order API. The following screenshot of the test execution performed using IntelliJ IDE shows that the tests were executed successfully. Now, let’s verify that the order was correctly deleted by writing a new test that will call the GET /getOrder API endpoint with the deleted order_id. Test Scenario 2: Retrieve the Deleted Order Delete a valid order with order_id “1.”Using GET /getOrder API, try retrieving the order with order_id “1.”Check that the Status Code 404 is returned with the message “No Order found with the given parameters!” in the response. 
Test Implementation Let’s create a new test method, testShouldNotRetrieveDeletedOrder(), in the existing class HappyPathTests. Java @Test public void testShouldNotRetrieveDeletedOrder() { final int orderId = 1; final APIResponse response = this.request.get("/getOrder", RequestOptions.create().setQueryParam("id", orderId)); assertEquals(response.status(), 404); final JSONObject jsonObject = new JSONObject(response.text()); assertEquals(jsonObject.get("message"), "No Order found with the given parameters!"); } The test implementation of this scenario is pretty simple. We will execute the GET /getOrder API to fetch the deleted order with order_id “1.” An assertion is applied next to verify that the GET API returns Status Code 404 in the response with the message “No Order found with the given parameters!” This test ensures that the delete order API worked fine and the order was deleted from the system. Test Execution Let’s update the testng.xml file and add this test scenario at the end after the delete test. XML <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd"> <suite name="Restful ECommerce Test Suite"> <test name="Testing Happy Path Scenarios of Creating and Updating Orders"> <classes> <class name="io.github.mfaisalkhatri.api.restfulecommerce.HappyPathTests"> <methods> <include name="testShouldCreateNewOrders"/> <include name="testShouldDeleteTheOrder"/> <include name="testShouldNotRetrieveDeletedOrder"/> </methods> </class> </classes> </test> </suite> Now, all three tests should run in sequence. The first one will create orders; the second one will delete the order with order_id “1”; and the last test will hit the GET API to fetch the order with order_id “1,” returning Status Code 404. The screenshot above shows that all three tests were executed successfully, and the DELETE API worked fine as expected. Summary DELETE API requests allow the deletion of a resource from the system. As delete is an important CRUD function, it is important to test it and verify that the system is working as expected. However, it should be noted that DELETE is an irreversible process, so it should always be used with caution. In my experience, it is a good approach to hit the GET API after executing the DELETE request to check that the specified resource was deleted from the system successfully. Happy testing!
TL; DR: Three Data Points Pointing to the Decline of the Scrum Master’s Role If you hang out in the “Agile” bubble on LinkedIn, the dice have already been cast: Scrum is out (and the Scrum Master), and the new kid on the block is [insert your preferred successor framework choice here.] I’m not entirely certain about that, but several data points on my side suggest a decline in the role of the Scrum Master. Read on and learn more about whether the Scrum Master is a role at risk. My Data Points: Downloads, Survey Participants, Scrum Master Class Students Here are my three data points regarding the development: Decline in Download Numbers of the Scrum Master Interview Questions Guide Years ago, I created the Scrum Master Interviews Question Guide on behalf of a client to identify suitable candidates for open Scrum Master positions. It has since grown to 83 questions and has been downloaded over 28,000 times. Interestingly, the number of downloads in 2022 (2,428) and 2024 (1,236) practically halved. I would have expected the opposite, with newly unemployed Scrum Masters preparing for new rounds of job interviews. Unless, of course, the number of open positions also drops significantly, and fewer candidates need to brush up their Scrum knowledge before an interview. Decline in the Number of Participants in the Scrum Master Salary Report Since 2017, I have published the Scrum Master Salary Report more or less regularly. The statistical model behind the survey is built on a threshold of 1,000 participants, as the survey addresses a global audience. It has never been easy to convince so many people to spend 10 minutes supporting a community effort, but I have managed so far. For the 2024 edition, we had 1,114 participants. In 2023, we had 1,146 participants; in 2022, there were 1,113. But this time, it is different. Before an emergency newsletter on December 26, 2024, there were fewer than 400 valid data sets; today, there are still fewer than 650. (There likely won’t be a 2025 edition.) Decline in Scrum Master Class Students As a Professional Scrum Trainer, I run an educational business that offers Scrum.org-affiliated classes, such as those for Scrum Masters. In 2020, the entry-level Scrum Master classes — public and private — represented 49% of my students. In 2021, that number dropped to 26%, but I also offered more different classes. In 2022, the number was stable at 24%, and it fell to 17% in 2023. In 2024, however, that number was less than 5%, and I decided to stop offering these classes as public offerings altogether in 2025. Are those student numbers representative? Of course not. However, they still point to the declining perception of how valuable these classes are from the career perspectives of individuals and corporate training departments. (By the way, the corresponding Product Owner classes fare much better.) Conclusion Of course, in addition to those mentioned above, there are other indicators: Google trends for the search term “Scrum Master,” the number of certifications passed, or job openings on large job sites. Nevertheless, while the jury is still out, it seems that many organizations' love affair with the Scrum Master role has cooled significantly. What is your take: is the Scrum Master a role in decline? Please share your observations with us via the comments.
As a developer learning Rust, I wanted to build a practical project to apply my new skills. With the rise of large language models like Meta's Llama 3.2, I thought it would be interesting to create a Rust command line interface (CLI) to interact with the model. In just a couple of minutes, I was able to put together a working CLI using the Ollama Rust library. The CLI, which I call "Jarvis," allows you to chat with Llama 3.2, as well as perform some basic commands like checking the time, date, and listing directory contents. In this post, I'll walk through the key components of the Jarvis CLI and explain how you can use Rust to interface with Llama 3.2 or other large language models. By the end, you'll see how Rust's performance and expressiveness make it a great choice for AI applications. The Jarvis CLI Structure The main components of the Jarvis CLI include: 1. JarvisConfig Struct Defines the available commands. Methods to validate commands and print help text. 2. Command Handling Logic in main() Parses command line arguments. Invokes the appropriate function based on the command. 3. Functions for Each Command time - Gets current time; date - Gets today's date; hello - Prints a customizable greeting; ls - Lists directory contents; chat - Interacts with Llama 3.2 using the Ollama lib. Here's a condensed version of the code: Rust struct JarvisConfig { commands: Vec<&'static str>, } impl JarvisConfig { fn new() -> Self {...} fn print_help(&self) {...} fn is_valid_command(&self, command: &str) -> bool {...} } #[tokio::main] async fn main() { let config = JarvisConfig::new(); let args: Vec<String> = env::args().collect(); match args[1].as_str() { "time" => {...} "date" => {...} "hello" => {...} "ls" => {...} "chat" => { let ollama = Ollama::default(); match ollama .generate(GenerationRequest::new( "llama3.2".to_string(), args[2].to_string(), )) .await { Ok(res) => println!("{}", res.response), Err(e) => println!("Failed to generate response: {}", e), } } _ => { println!("Unknown command: {}", args[1]); config.print_help(); } } } Using Ollama to Chat with Llama 3.2 The most interesting part is the "chat" command, which interfaces with Llama 3.2 using the Ollama Rust library. After adding the Ollama dependency to Cargo.toml, using it is fairly straightforward: 1. Create an Ollama instance with default settings: Rust let ollama = Ollama::default(); 2. Prepare a GenerationRequest with the model name and prompt: Rust GenerationRequest::new( "llama3.2".to_string(), args[2].to_string() ) 3. Asynchronously send the request using ollama.generate(): Rust match ollama.generate(...).await { Ok(res) => println!("{}", res.response), Err(e) => println!("Failed to generate response: {}", e), } That's it! With just a few lines of code, we can send prompts to Llama 3.2 and receive generated responses. Example Usage Here are some sample interactions with the Jarvis CLI: Shell $ jarvis hello Hello, World! $ jarvis hello Alice Hello, Alice! $ jarvis time Current time in format (HH:mm:ss): 14:30:15 $ jarvis ls /documents /documents/report.pdf: file /documents/images: directory $ jarvis chat "What is the capital of France?" Paris is the capital and most populous city of France. While Python remains the go-to for AI/ML, Rust is a compelling alternative where maximum performance, concurrency, and/or safety are needed. It's exciting to see Rust increasingly adopted in this space. Conclusion In this post, we learned how to build a Rust CLI to interact with Llama 3.2 using the Ollama library.
With basic Rust knowledge, we could put together a useful AI-powered tool in just a couple of minutes. Rust's unique advantages make it well-suited for AI/ML systems development. As the ecosystem matures, I expect we'll see even more adoption. I encourage you to try out Rust for your next AI project, whether it's a simple CLI like this or a more complex system. The performance, safety, and expressiveness may surprise you.
A data fabric is a system that links and arranges data from many sources so that it is simple to locate, utilize, and distribute. It connects everything like a network, guaranteeing that our data is constantly available, safe, and prepared for use. Assume that our data is spread across several "containers" (such as databases, cloud storage, or applications). A data fabric acts like a network of roads and pathways that connects all these containers so we can get what we need quickly, no matter where it is. On the other hand, stream processing is a method of managing data as it comes in, such as monitoring sensor updates or evaluating a live video feed. It processes data instantaneously rather than waiting to gather all of it, which enables prompt decision-making and insights. In this article, we explore how leveraging data fabric can supercharge stream processing by offering a unified, intelligent solution to manage, process, and analyze real-time data streams effectively. Access to Streaming Data in One Place Streaming data comes from many sources like IoT devices, social media, logs, or transactions, which can be a major challenge to manage. Data fabric plays an important role by connecting these sources and providing a single platform to access data, regardless of its origin. An open-source distributed event-streaming platform like Apache Kafka supports data fabric by handling real-time data streaming across various systems. It also acts as a backbone for data pipelines, enabling smooth data movement between different components of the data fabric. Several commercial platforms, such as Cloudera Data Platform (CDP), Microsoft Azure Data Factory, and Google Cloud Dataplex, are designed for end-to-end data integration and management. These platforms also offer additional features, such as data governance and machine learning capabilities. Real-Time Data Integration Streaming data often needs to be combined with historical data or data from other streams to gain meaningful insights. Data fabric integrates real-time streams with existing data in a seamless and scalable way, providing a complete picture instantly. Commercial platforms like Informatica Intelligent Data Management Cloud (IDMC) simplify complex data environments with scalable and automated data integration. They also enable the integration and management of data across diverse environments. Intelligent Processing When working with streamed data, it often arrives unstructured and raw, which reduces its initial usefulness. To make it actionable, it must undergo specific processing steps such as filtering, aggregating, or enriching. Streaming data often contains noise or irrelevant details that don’t serve the intended purpose. Filtering involves selecting only the relevant data from the stream and discarding unnecessary information. Similarly, aggregating combines multiple data points into a single summary value, which helps reduce the volume of data while retaining essential insights. Additionally, enriching adds extra information to the streamed data, making it more meaningful and useful. Data fabric plays an important role here by applying built-in intelligence (like AI/ML algorithms) to process streams on the fly, identifying patterns, anomalies, or trends in real time. Consistent Governance It is difficult to manage security, privacy, and data quality for streaming data because of the constant flow of data from various sources, frequently at fast speeds and in enormous volumes. 
Sensitive data, such as financial or personal information, may be included in streaming data; these must be safeguarded instantly without affecting functionality. Because streaming data is unstructured or semi-structured, it might be difficult to validate and clean, which could result in quality problems. By offering a common framework for managing data regulations, access restrictions, and quality standards across various and dispersed contexts, data fabric contributes to consistent governance in stream processing. As streaming data moves through the system, it ensures compliance with security and privacy laws like the CCPA and GDPR by enforcing governance rules in real time. Data fabric uses cognitive techniques, such as AI/ML, to monitor compliance, identify anomalies, and automate data classification. Additionally, it incorporates metadata management to give streaming data a clear context and lineage, assisting companies in tracking its usage, changes, and source. Data fabric guarantees that data is safe, consistent, and dependable even in intricate and dynamic processing settings by centralizing governance controls and implementing them uniformly across all data streams. The commercial Google Cloud Dataplex can be used as a data fabric tool for organizing and governing data across a distributed environment. Scalable Analytics By offering a uniform and adaptable architecture that smoothly integrates and processes data from many sources in real time, data fabric allows scalable analytics in stream processing. Through the use of distributed computing and elastic scaling, which dynamically modifies resources in response to demand, it enables enterprises to effectively manage massive volumes of streaming data. By adding historical and contextual information to streaming data, data fabric also improves analytics by allowing for deeper insights without requiring data duplication or movement. In order to ensure fast and actionable insights, data fabric's advanced AI and machine learning capabilities assist in instantly identifying patterns, trends, and irregularities. Conclusion In conclusion, a data fabric facilitates the smooth and effective management of real-time data streams, enabling organizations to make quick and informed decisions. For example, in a smart city, data streams from traffic sensors, weather stations, and public transport can be integrated in real time using a data fabric. It can process and analyze traffic patterns alongside weather conditions, providing actionable insights to traffic management systems or commuters, such as suggesting alternative routes to avoid congestion.
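Before moving on, here is a small, self-contained sketch that grounds the filtering, aggregating, and enriching steps described in the Intelligent Processing section above. It processes a window of hypothetical traffic-sensor readings, drops malformed events, sums vehicle counts per sensor, and enriches each result with reference data. A real deployment would run these steps inside a streaming platform or the data fabric's processing layer rather than in plain Python.
Python
from collections import defaultdict

# Hypothetical reference data used for enrichment.
SENSOR_LOCATIONS = {"s-101": "Main St & 5th Ave", "s-102": "Harbor Tunnel"}

def filter_events(events):
    # Filtering: keep only well-formed readings from known sensors.
    return (e for e in events if e.get("sensor_id") in SENSOR_LOCATIONS and "vehicle_count" in e)

def aggregate(events):
    # Aggregating: sum vehicle counts per sensor for the current window.
    totals = defaultdict(int)
    for e in events:
        totals[e["sensor_id"]] += e["vehicle_count"]
    return totals

def enrich(totals):
    # Enriching: attach a human-readable location to each aggregate.
    return [
        {"sensor_id": sid, "location": SENSOR_LOCATIONS[sid], "vehicles_in_window": count}
        for sid, count in totals.items()
    ]

if __name__ == "__main__":
    window = [
        {"sensor_id": "s-101", "vehicle_count": 12},
        {"sensor_id": "s-999", "vehicle_count": 3},   # unknown sensor, filtered out
        {"sensor_id": "s-101", "vehicle_count": 7},
        {"sensor_id": "s-102", "vehicle_count": 4},
    ]
    print(enrich(aggregate(filter_events(window))))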
Large language models (LLMs) have drastically advanced natural language processing (NLP) by learning complex language patterns from vast datasets. Yet, when these models are combined with structured knowledge graphs — databases designed to represent relationships between entities — challenges arise. Knowledge graphs can be incredibly useful in providing structured knowledge that enhances an LLM's understanding of specific domains. However, as these graphs grow larger, they often become cumbersome, reducing their efficiency when queried. For example, an LLM tasked with answering questions or making decisions based on knowledge from a graph may take longer to retrieve relevant information if the graph is too large or cluttered with unnecessary details. This can increase computation times and limit the model’s scalability. A promising approach to address this issue is pruning, a method of selectively reducing the size of knowledge graphs while preserving their most relevant and important connections. Pruning graph databases can improve the knowledge representation in LLMs by removing irrelevant data, thus enabling faster and more focused knowledge retrieval. This article discusses the benefits and strategies for pruning knowledge graphs and how they can enhance LLM performance, particularly in domain-specific applications. The Role of Graph Databases in Knowledge Representation Graph databases are designed to store and query data in graph structures consisting of nodes (representing entities) and edges (representing relationships between entities). Knowledge graphs leverage this structure to represent complex relationships, such as those found in eCommerce systems, healthcare, finance, and many other domains. These graphs allow LLMs to access structured, domain-specific knowledge that supports more accurate predictions and responses. However, as the scope and size of these knowledge graphs grow, retrieving relevant information becomes more difficult. Inefficient traversal of large graphs can slow down model inference and increase the computational resources required. As LLMs scale, integrating knowledge graphs becomes a challenge unless methods are employed to optimize their size and structure. Pruning provides a solution to this challenge by focusing on the most relevant nodes and relationships and discarding the irrelevant ones. Pruning Strategies for Graph Databases To improve the efficiency and performance of LLMs that rely on knowledge graphs, several pruning strategies can be applied: Relevance-Based Pruning Relevance-based pruning focuses on identifying and retaining only the most important entities and relationships relevant to a specific application. In an eCommerce knowledge graph, for example, entities such as "product," "category," and "customer" might be essential for tasks like recommendation systems, while more generic entities like "region" or "time of day" might be less relevant in certain contexts and can be pruned. Similarly, edges that represent relationships like "has discount" or "related to" may be removed if they don't directly impact key processes like product recommendations or personalized marketing strategies. By pruning less important nodes and edges, the knowledge graph becomes more focused, improving both the efficiency and accuracy of the LLM in handling specific tasks like generating product recommendations or optimizing dynamic pricing. 
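As a rough illustration of relevance-based pruning (and a simple form of the edge and node pruning discussed next), the sketch below uses the networkx library to drop edges whose relevance weight falls below a threshold and then remove nodes left with too few connections. The graph, weights, and thresholds are invented for the example; a production knowledge graph would apply domain-specific relevance scores inside the graph database itself.
Python
import networkx as nx

def prune_graph(graph: nx.Graph, min_edge_weight: float = 0.3, min_degree: int = 1) -> nx.Graph:
    """Remove low-relevance edges, then nodes that end up with too few connections."""
    pruned = graph.copy()
    weak_edges = [
        (u, v) for u, v, data in pruned.edges(data=True)
        if data.get("weight", 0.0) < min_edge_weight
    ]
    pruned.remove_edges_from(weak_edges)
    isolated = [n for n, degree in pruned.degree() if degree < min_degree]
    pruned.remove_nodes_from(isolated)
    return pruned

if __name__ == "__main__":
    g = nx.Graph()
    # Toy eCommerce-style relationships with made-up relevance weights.
    g.add_edge("customer:42", "product:laptop", weight=0.9)   # strong purchase signal
    g.add_edge("product:laptop", "category:electronics", weight=0.8)
    g.add_edge("product:laptop", "region:EMEA", weight=0.1)   # weak, will be pruned
    g.add_edge("customer:42", "time_of_day:evening", weight=0.05)

    pruned = prune_graph(g)
    print("Nodes kept:", sorted(pruned.nodes()))
    print("Edges kept:", sorted(pruned.edges()))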
Edge and Node Pruning

Edge and node pruning removes entire nodes or edges based on criteria such as a node having few connections or an edge having minimal relevance to the task at hand. For example, if a node has low importance, such as a product that rarely receives customer interest, it might be pruned along with its associated edges. Similarly, edges that connect less important nodes or represent weak relationships may be discarded. This method aims to maintain the essential structure of the graph while simplifying it, removing redundant or irrelevant elements to improve processing speed and reduce computation time.

Subgraph Pruning

Subgraph pruning removes entire subgraphs from the knowledge graph if they are not relevant to the task at hand. For instance, in an eCommerce scenario, subgraphs related to "customer support" might be irrelevant for a model tasked with product recommendations, so they can be pruned without affecting the quality of the primary tasks. This targeted pruning helps reduce the size of the graph while ensuring that only pertinent data remains for knowledge retrieval. A short code sketch illustrating both of these passes appears after the conclusion below.

Impact on LLM Performance

Speed and Computational Efficiency

One of the most significant advantages of pruning is its impact on the speed and efficiency of LLMs. A pruned knowledge graph is easier to traverse and query, which results in faster knowledge retrieval and translates directly to reduced inference times for LLM-based applications. For example, if a graph contains thousands of irrelevant relationships, pruning them out lets the model focus on the most relevant data, speeding up decision making in real-time applications like personalized product recommendations.

Accuracy in Domain-Specific Tasks

Pruning irrelevant information from a graph also improves the accuracy of LLMs in domain-specific tasks. By focusing on the most pertinent knowledge, LLMs can generate more accurate responses. In an eCommerce setting, this means better product recommendations, more effective search results, and an overall more optimized customer experience. Moreover, pruning keeps the model's attention on high-quality, relevant data, reducing the chances of confusion or misinterpretation of less relevant details.

Conclusion

Pruning techniques offer a practical and effective approach to optimizing the integration of graph databases with large language models. By selectively reducing the complexity and size of knowledge graphs, pruning improves the retrieval speed, accuracy, and overall efficiency of LLMs. In domain-specific applications such as eCommerce, healthcare, or finance, pruning can significantly enhance performance by allowing LLMs to focus on the data most relevant to their tasks. As LLMs continue to evolve, the ability to integrate vast amounts of structured knowledge while maintaining computational efficiency will be crucial, and pruning is a valuable tool for scaling without sacrificing performance.
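As referenced above, here is a minimal Python sketch of edge/node pruning and subgraph pruning using networkx. The weight cutoff, the degree-based cleanup, and the "support" subgraph label are assumptions made for illustration; real systems would derive these criteria from usage statistics or task requirements.

# Hypothetical sketch of edge/node pruning and subgraph pruning with networkx.
# Thresholds, weights, and the "support" domain label are illustrative assumptions.
import networkx as nx

G = nx.Graph()
# Weighted edges: weight approximates the strength/importance of a relationship.
G.add_edge("product_A", "category_X", weight=0.9)
G.add_edge("product_A", "customer_1", weight=0.8)
G.add_edge("product_B", "category_X", weight=0.1)   # weak relationship
G.add_edge("product_C", "customer_2", weight=0.05)  # rarely interacted-with product
# A "customer support" subgraph, tagged via a node attribute.
G.add_edge("ticket_1", "agent_7", weight=0.7)
nx.set_node_attributes(G, {"ticket_1": "support", "agent_7": "support"}, name="domain")

# 1. Subgraph pruning: drop every node tagged as part of the support domain.
support_nodes = [n for n, d in G.nodes(data=True) if d.get("domain") == "support"]
G.remove_nodes_from(support_nodes)

# 2. Edge pruning: drop weak relationships below a weight cutoff.
WEIGHT_CUTOFF = 0.2
weak_edges = [(u, v) for u, v, w in G.edges(data="weight") if w < WEIGHT_CUTOFF]
G.remove_edges_from(weak_edges)

# 3. Node pruning: drop nodes left with no connections (degree 0).
isolated = [n for n in G.nodes() if G.degree(n) == 0]
G.remove_nodes_from(isolated)

print(sorted(G.nodes()))
# ['category_X', 'customer_1', 'product_A']

The ordering matters: removing the irrelevant subgraph first keeps its edges from influencing the later weight and degree checks, and the final degree-0 sweep cleans up nodes stranded by the edge pruning.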