In the most recent updates to Java, the String class has gained a series of significant new methods. Some of these methods return instances of the Stream class, while others are higher-order functions. The intention behind these additions is to offer a streamlined approach for handling strings in a stream-oriented manner, which simplifies code and enhances expressiveness, making it easier to apply operations like filtering, mapping, and reduction. Another advantage is that the Stream API enables parallel processing: the methods that return streams can be combined with parallel streams, making it possible to leverage multicore processors for the efficient handling of large strings. This article delves into a few methods within the String class that enable processing to be conducted in a functional programming manner.

# chars()

The chars() method facilitates effective character management in Java by returning an IntStream: a sequence of int values, each corresponding to a character of the string. (Strictly speaking, chars() streams the string's char values; for characters in the Basic Multilingual Plane these match the Unicode code points, while codePoints() should be used when supplementary characters matter.) A code point is a numeric identifier assigned to a character in the Unicode standard, serving the purpose of character encoding.

Let's understand chars() by example: write a program that removes a given character from a string. First, let's tackle this challenge through an imperative, non-functional approach, avoiding chars() and the stream methodology:

```java
private static String removeChar(String input, char c) {
    StringBuilder sb = new StringBuilder();
    for (char ch : input.toCharArray()) {
        if (ch != c) {
            sb.append(ch);
        }
    }
    return sb.toString();
}
```

Let's compare this with the functional approach:

```java
// requires: import java.util.stream.Collectors;
private static String removeChar(String str, char c) {
    return str.chars()
              .filter(ch -> ch != c)
              .mapToObj(ch -> String.valueOf((char) ch))
              .collect(Collectors.joining());
}
```

The imperative approach involves traditional iteration over the characters, using a StringBuilder to build the modified string. The functional approach leverages the chars() method and the Stream API, providing a more concise and expressive solution.

# transform()

The transform() method is a higher-order function that accepts a Function as an argument, offering a more concise and functional way to apply transformations to a string. It is especially useful for chaining transformations. For example, consider a scenario where you want to clean and format user input entered in a form: users might input their names with extra whitespace, mixed capitalization, and unnecessary characters.

```java
String userInput = " JoHN-dOe ";
String cleanedInput = userInput
        .transform(String::trim)
        .transform(String::toLowerCase)
        .transform(user -> user.replaceAll("-", ""));
// cleanedInput: "johndoe"
```

# lines()

The lines() method returns a stream of lines extracted from the given string, separated by line terminators such as \n, \r, and \r\n. It proves advantageous over the split() method due to its lazy element supply and faster detection of line terminators. If the string is empty, lines() returns zero lines.
```java
String text = "The lines function returns a stream of lines extracted,\n"
        + "The Java String lines() method proves advantageous;\n"
        + "In cases where the string is empty,\n"
        + "the lines function returns zero lines.";

text.lines()
    .map(String::toUpperCase)
    .filter(line -> line.contains("I"))
    .forEach(System.out::println);
```

The text string contains multiple lines of text. We use the lines() method to obtain a stream of lines, the map operation to convert each line to uppercase, and the filter operation to keep only the lines containing the letter 'I'; the forEach operation then prints the modified lines.

The functions explained above provide a powerful and concise way to work with strings. They offer a functional approach by leveraging streams for efficient manipulation and filtering, promoting immutability. Chaining these functions with other stream operations allows for complex yet concise transformations, promoting a cleaner and more functional style.
How do you streamline the complex process of managing modern cloud infrastructure? The answer lies in Infrastructure as Code (IaC) tools, particularly Terraform and Pulumi. These tools shift cloud infrastructure management from a traditional, manual process to a code-based methodology. This shift is not just a trend; it is a fundamental change in how cloud architecture is managed, offering efficiency, consistency, and scalability. By automating infrastructure provisioning and management, IaC tools like Terraform and Pulumi have become essential in modern cloud environments. They enable rapid deployment, version control, and seamless scalability, all while minimizing human error. In this article, we'll discuss what Terraform and Pulumi are, their advantages, their cloud provider support and community engagement, and their approaches to modularity and infrastructure reuse.

Terraform: A Pioneer in IaC

Created by HashiCorp in 2014, Terraform is a major force in the IaC field. It uses a declarative approach: you describe your desired infrastructure state, and Terraform does the rest.

Core Features of Terraform

- Open-source and platform-agnostic: Terraform is an open-source tool, making it accessible for widespread use and community contributions. Its platform-agnostic nature allows DevOps teams to manage infrastructure across various cloud providers like Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and more using a unified, consistent approach. The tool is written in Go, which is known for its high performance.
- Declarative syntax: You define the desired end state of your infrastructure without specifying the steps to achieve it. Terraform figures out the optimal path to reach that state, simplifying the user's task significantly.
- Agentless architecture: Unlike many IaC tools that require agent software on servers, Terraform operates without any software installed on the managed infrastructure, which streamlines setup and reduces potential complications.
- Modular structure: Terraform encourages the use of modules, akin to functions in programming, to create reusable components. Modules help organize, encapsulate, and reuse configurations across different parts of the infrastructure, promoting efficiency and reducing errors.
- Large and active community: Choosing Terraform also means tapping into a vibrant community, which is crucial for learning and troubleshooting. The community contributes to the project, extends its capabilities, and offers support on forums like Stack Overflow. HashiCorp further supports larger teams through paid plans with enterprise-grade support and features.

Use Cases of Terraform

- Adobe: Adopted Terraform to manage its rapidly growing AWS infrastructure, maintaining consistent workflows and significantly reducing the effort of managing complex infrastructure.
- Slack: Used Terraform for its AWS infrastructure, benefiting from its modular approach to ensure faster delivery and improved service reliability.
- OpenAI: Chose Terraform for managing Azure resources, automating infrastructure provisioning, and enhancing visibility and manageability.

These examples from Adobe, Slack, and OpenAI demonstrate Terraform's versatility and effectiveness in managing complex, scalable cloud infrastructure. Its ability to automate, standardize, and simplify infrastructure management makes it an invaluable tool for modern organizations navigating the dynamic landscape of cloud computing.

Pulumi: The Modern Challenger

Launched in 2017, Pulumi is rapidly becoming a game-changer in the IaC landscape. It stands out by allowing infrastructure to be defined using popular programming languages like JavaScript, TypeScript, Python, and Go. This approach breaks the mold of traditional IaC tools by integrating closely with the skill sets and tools developers are already familiar with.

Key Features of Pulumi

- Polyglot nature: Pulumi's support for multiple programming languages enhances its accessibility and attractiveness. It allows developers to use languages they are already proficient in, leading to more readable code and more efficient error checking.
- Broad support for cloud providers: Much like Terraform, Pulumi provides extensive support for major cloud platforms, including AWS, Azure, Google Cloud, and Kubernetes, making it a versatile choice for a variety of cloud environments.
- Developer-friendly interface: Pulumi is particularly appealing to those with a programming background, offering an intuitive transition to IaC through familiar programming concepts and structures.

Advantages of Pulumi

- Flexibility and integration: Pulumi's use of common programming languages allows for greater flexibility in defining infrastructure. Developers can keep IaC code directly alongside applications, enabling a more integrated approach to infrastructure and application development, better dependency management, and consistent deployments.
- Immediate cloud service support: Pulumi's "native" providers for major cloud services like AWS and Azure offer same-day support for new releases, ensuring Pulumi users can always access the latest capabilities of these cloud providers.
- Conversion capability: For teams already using Terraform, Pulumi simplifies the transition by offering the ability to convert existing Terraform HCL configurations into Pulumi programs. This eases migration, allowing teams to leverage their existing IaC investments while moving to Pulumi's more flexible programming model.
- Improved developer experience: Pulumi places a strong emphasis on the developer experience, integrating with popular IDEs and providing features like autocompletion and syntax highlighting. Built-in testing and debugging capabilities further enhance the development process, allowing for more robust and reliable infrastructure code.
- Infrastructure and application code integration: Pulumi promotes a unified approach to infrastructure and application development. Defining infrastructure resources alongside application code creates a more cohesive development environment, fosters collaboration, and enables more streamlined workflows.
- Community and ecosystem support: Pulumi is supported by a vibrant community and an expanding ecosystem. This community-driven approach contributes to the continuous growth and improvement of Pulumi, providing users with a wealth of resources, shared knowledge, and reusable code.

Pulumi is carving out a unique niche in the IaC domain with an approach that blends traditional infrastructure management with modern programming practices. Its flexibility, broad cloud support, and developer-friendly nature make it a compelling option for teams looking to streamline cloud infrastructure management using familiar programming languages. (A brief sketch of what Pulumi code looks like follows the use cases below.)

Pulumi Use Cases

- Multi-cloud application deployment: Deploying a complex application across AWS, Azure, and Google Cloud. Pulumi manages the infrastructure for each microservice across different clouds using a single codebase, lets teams write infrastructure code in familiar languages like TypeScript, and integrates application and infrastructure code for seamless scaling and deployment in DevOps.
- Automated infrastructure for DevOps pipelines: Automating the creation of dynamic testing and staging environments in a CI/CD pipeline. Pulumi automates the provisioning of cloud resources (VMs, databases, etc.) using JavaScript, integrates with CI/CD pipelines for automatic setup and teardown of testing environments, and improves resource utilization and cost efficiency. DZone has previously covered how to set up a CI/CD pipeline.
- Secure infrastructure management: Managing cloud infrastructure with strict security and compliance needs in the financial sector. Pulumi facilitates infrastructure management in familiar languages like Python, provides secure state management in the Pulumi Cloud with features like history tracking, and ensures secure handling and encryption of sensitive data and secrets, adhering to compliance standards.
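To give a feel for Pulumi's programming-language approach, here is a minimal sketch of a Pulumi program in TypeScript. It assumes the classic @pulumi/aws provider; the resource name and tags are illustrative rather than taken from the case studies above.

```typescript
import * as aws from "@pulumi/aws";

// Desired state is expressed as ordinary TypeScript objects; on `pulumi up`,
// Pulumi diffs this program against the stack's current state and applies
// only the changes needed to reach the declared state.
const bucket = new aws.s3.Bucket("app-assets", {
    acl: "private",
    tags: { environment: "staging" },
});

// Stack outputs expose provisioned values to CI jobs or other stacks.
export const bucketName = bucket.id;
```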
Terraform vs. Pulumi: A Comparative Analysis

Understanding the differences between Pulumi and Terraform is crucial for anyone venturing into Infrastructure as Code. Here's a comparative analysis of Terraform and Pulumi across various aspects:

| Aspect | Terraform | Pulumi |
|---|---|---|
| Language used | HashiCorp Configuration Language (HCL) | General-purpose languages like JavaScript, TypeScript, Python, Go |
| Learning curve | Steeper, due to its unique syntax and style | Easier for those already familiar with the supported languages |
| Development environment | Integration revolves around HCL, suitable for infrastructure tasks | Seamless IDE integration with features like code completion and debugging |
| Cloud provider support | Comprehensive support for a wide range of cloud services | Supports over 60 major cloud services, including native providers |
| Community | Larger community with an extensive array of custom providers | Smaller but growing community |
| Provider availability | New cloud services may not be immediately available | Able to adapt Terraform's custom providers |
| State management | Uses local or remote files for state, with a state-locking feature | Stores state in Pulumi Cloud, allowing versioning and 'diff' comparisons |
| Secrets management | Manages secrets through Vault but stores them in plain text in state files | Ensures secure storage and transmission of secrets, supports encryption |
| Modularity | Uses reusable modules to streamline infrastructure setup | Leverages common programming constructs for flexible code reuse |
| Library/registry | Public online registry for sharing and accessing modules | Searchable registry for finding and using packages |

Conclusion: Terraform vs. Pulumi

Choosing between Terraform and Pulumi depends on your team's expertise, project requirements, and desired infrastructure control. Terraform stands out for its user-friendly nature and extensive community support, making it a go-to for beginners and large-scale deployments. Pulumi, on the other hand, appeals more to developers familiar with its supported languages, offering greater flexibility and integration possibilities within application development. Ultimately, both Terraform and Pulumi are competent and effective IaC tools capable of meeting diverse infrastructure management needs in the cloud era.
In software development, concurrency refers to the ability of a system to execute multiple tasks simultaneously, enhancing efficiency and responsiveness. There are two main mechanisms for concurrency control.

1. Optimistic Concurrency Control (OCC)

OCC is a concurrency control mechanism that allows concurrent execution of transactions without acquiring locks upfront. It assumes that conflicts between transactions are infrequent, so transactions proceed optimistically. Conflicts are detected during the commit phase, and if they occur, appropriate actions such as aborting and retrying the transaction are taken. In a distributed system, OCC can be implemented by maintaining version information for each data item: each transaction reads a consistent snapshot of the database at the beginning, and during the commit phase it checks whether any other transaction has modified the data items it read. If a conflict is detected, the transaction is rolled back and retried with a new snapshot.

2. Pessimistic Concurrency Control (PCC)

PCC is a concurrency control mechanism that assumes conflicts are likely to occur and takes a pessimistic approach, acquiring locks on resources upfront to prevent conflicts. Transactions acquire exclusive access to resources, preventing other transactions from modifying or accessing them until the locks are released. In a distributed system, PCC can be implemented using distributed locks or lock managers: when a transaction wants to access a resource, it requests a lock on that resource from the lock manager. If the lock is available, it is granted and the transaction proceeds; if not, the transaction waits until the lock is released.

In an e-commerce website like Amazon or Flipkart, a common approach to handling inventory availability during order placement without blocking other users is Optimistic Concurrency Control. With many concurrent HTTP requests, it's important to ensure a seamless experience for users. The order placement workflow with OCC:

1. Check inventory availability: The system checks if the desired items are in stock.
2. Allocate inventory: If available, the system marks the items as allocated for the specific order without immediately decrementing the stock quantity.
3. Set expiration time: A temporary expiration time is set for the reservation. If the order isn't completed within this time, the allocated inventory is released back to the available stock.
4. Pay and confirm the order: Once inventory is allocated, order processing continues, including payment and confirmation.
5. Update inventory quantity: After payment, the reserved inventory is decremented from the available stock, ensuring accurate tracking.
6. Release expired reservations: If an order isn't completed within the specified time frame, the allocated inventory is automatically released for others to purchase.

Here's the beauty of optimistic concurrency control: there is no explicit lock during order placement. Instead of blocking others, the approach relies on allocating inventory and setting an expiration time. The inventory isn't exclusively locked until paid for and confirmed; others can still view and attempt to purchase the items, while the reservation prevents overselling. If an order isn't completed within the expiration window, the allocated inventory is released and becomes available for others to purchase.
Optimistic concurrency control strikes a balance, ensuring inventory availability for a specific order while allowing concurrent access without explicit locks. There are various OCC and PCC techniques available; the choice between OCC and PCC depends on factors such as workload characteristics, contention level, and the desired level of concurrency and performance. OCC is often favored when conflicts are expected to be infrequent, allowing for greater concurrency, while PCC is preferred when conflicts are anticipated to be frequent, at the cost of potentially more locking and blocking. For instance, an e-commerce solution may opt for OCC under normal conditions and switch to PCC when there is a burst in demand for an item on sale (a "hot SKU"), or use PCC only when inventory for an item reaches a certain low threshold.
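To make the read-validate-write idea concrete, here is a minimal TypeScript sketch of version-based OCC over an in-memory store. The names are hypothetical, and in a real system the validate-and-write step would be a single atomic operation (for example, a conditional UPDATE in the database); the sketch only illustrates the protocol.

```typescript
// Each record carries a version; a write commits only if the version is
// unchanged since the transaction first read the record.
interface InventoryRecord { sku: string; available: number; version: number; }

const inventory = new Map<string, InventoryRecord>();
inventory.set("sku-1", { sku: "sku-1", available: 5, version: 1 });

// Returns true if the reservation commits; false signals a conflict
// (another transaction got there first) and the caller should retry.
function tryReserve(sku: string, qty: number): boolean {
  const snapshot = inventory.get(sku);                     // 1. read phase
  if (!snapshot || snapshot.available < qty) return false;

  const current = inventory.get(sku);                      // 2. validation phase
  if (!current || current.version !== snapshot.version) return false;

  inventory.set(sku, {                                     // 3. write phase
    sku,
    available: current.available - qty,
    version: current.version + 1,                          // bump the version
  });
  return true;
}

console.log(tryReserve("sku-1", 2)); // true: committed, version is now 2
```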
In software development, where innovation and efficiency intertwine, the unexpected often takes center stage. Picture this: a bustling team, tirelessly working on a cutting-edge software release, with expectations soaring high. The buzz of excitement turns into a tense hush as news arrives: an important bug has been unearthed, and it's not just any bug, it's a showstopper. The gravity of the situation intensifies as the source of this revelation is our most crucial customer, the foundation of our success and one of the main reasons why we've made it so far as a software development company.

The Situation

The Critical Revelation

The email arrives: an urgent message from the major client whose operations hinge on the seamless functioning of our software. A showstopper bug has reared its head, casting a shadow over their operations and demanding immediate attention. Panic ensues as the team grapples with the realization that a flaw threatens not just the software's integrity but also the trust of our most pivotal partner.

The Urgent Call

A hastily convened virtual meeting brings together developers, testers, and project managers in a virtual war room. The urgency in the customer's voice resonates through the call, emphasizing the magnitude of the situation. It's clear: this is not a bug that can wait for the next release; it demands an immediate remedy.

The Grim Reality

As the team delves into the details of the reported bug, a grim reality unfolds. Despite the urgency, reproducing the issue becomes an elusive challenge. Attempts to replicate the bug in the existing testing environments yield no success. Frustration mounts as time passes and the pressure to deliver a solution intensifies.

The Reflection

In the midst of the chaos, a reflective moment emerges. What if our testing environments mirrored the intricacies of the production landscape more closely? Could this gap in reproduction have been bridged if our testing environments faithfully emulated the conditions under which the bug manifested for our customer?

The Hypothetical Solution

The contemplation leads to a hypothetical scenario: a world where production-like testing environments are seamlessly integrated into the development workflow. In this parallel reality, the bug, though elusive in the controlled testing environments, would have been unmasked, dissected, and addressed with urgency.

The Lesson Learned

As the team races against time to find a workaround for the showstopper, the lesson becomes perfectly clear. The importance of production-like testing environments transcends theoretical discussion; it becomes a mandate for resilience and responsiveness. In a landscape where the unexpected is the only constant, the ability to replicate real-world conditions in testing environments emerges as a basis for averting crises and fortifying the reliability of our software releases.

Why Production-Like Environments Matter

Production-like testing environments play a pivotal role in identifying potential issues before software reaches the production stage, contributing to a more robust and efficient development process. Designed to closely mimic the conditions of the actual production environment, they replicate the hardware, software, network configurations, and other parameters that characterize the production setting. By creating an environment that mirrors production, development and testing teams can uncover issues that may not be apparent in isolated or artificial testing setups.
Here is a snapshot of what makes such environments important:

- Improved software quality: By mirroring the production environment, testing teams can uncover and resolve environment-specific issues that might have gone undetected in other testing phases. This leads to enhanced software quality and a reduced risk of production downtime or performance bottlenecks.
- Enhanced user experience: Production-like testing environments allow for thorough user acceptance testing, ensuring that the software meets user expectations and functions seamlessly in real-world scenarios. This translates into a positive user experience and increased customer satisfaction.
- Early issue detection: By testing in an environment that closely resembles production, teams can catch potential problems early in the development lifecycle. This reduces the likelihood of disruptive deployments in which critical issues surface only in production, helping organizations maintain a smooth and reliable software release process.
- Accurate performance testing: Performance testing is more meaningful when conducted in an environment that replicates the conditions of actual use, including factors such as the number of concurrent users, data volume, and network conditions.

Implementing Production-Like Testing Environments

We can use IaC, containerization and orchestration, effective data management, and monitoring and logging to implement production-like testing environments. In what follows, we explore each of these in more detail.

Infrastructure as Code (IaC)

IaC is an approach to managing and provisioning computing infrastructure through machine-readable script files. In the context of testing environments, IaC plays a crucial role in automating the setup and configuration of infrastructure, ensuring consistency and repeatability. It involves expressing infrastructure configurations, such as servers, networks, and databases, in code files that describe the desired state of the infrastructure.

Advantages of IaC in testing environments:

- Consistency: IaC ensures consistent deployment and configuration of testing environments, reducing the risk of environment-related issues caused by manual errors or discrepancies.
- Scalability: Automated provisioning allows for the quick and scalable creation of multiple testing environments, accommodating the needs of diverse testing scenarios.
- Reproducibility: IaC makes it possible to reproduce identical environments at different stages of the development lifecycle, from development to testing to production.
- Collaboration and traceability: IaC scripts, stored in version control, facilitate collaboration among team members. Changes can be tracked, reviewed, and rolled back if necessary.

IaC fits seamlessly into continuous integration/continuous deployment (CI/CD) workflows. As code changes are made, IaC scripts can be automatically triggered to provision or update testing environments, ensuring that testing environments are always aligned with the latest codebase. Embracing concepts like immutable infrastructure, where environments are treated as disposable and replaced rather than modified, may further enhance consistency and reliability. A small sketch of what this can look like follows below.
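As an illustration, here is a hypothetical sketch of a disposable, per-branch test environment defined with Pulumi in TypeScript (one IaC option among several; all names, sizes, and config keys are placeholders). CI would provision a stack like this before the test run and destroy it afterwards.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// The environment name (e.g., "pr-1234") arrives as stack configuration,
// so every branch gets its own isolated copy of the same definition.
const config = new pulumi.Config();
const envName = config.require("envName");

// Same code path as production; only the size parameters differ for tests.
const db = new aws.rds.Instance(`db-${envName}`, {
    engine: "postgres",
    instanceClass: "db.t3.micro",   // small instance for a throwaway stack
    allocatedStorage: 20,
    username: "test",
    password: config.requireSecret("dbPassword"),
    skipFinalSnapshot: true,        // safe to drop without a final snapshot
});

export const dbEndpoint = db.endpoint; // handed to the test suite by CI
```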
Containerization and Orchestration

Containerization and orchestration have emerged as pivotal strategies for software development and testing, revolutionizing the way applications are deployed, managed, and tested. Containerization allows developers to encapsulate applications and their dependencies into a standardized unit, the container, which ensures that the application runs consistently across different computing environments. Orchestration involves the coordination and management of multiple containers to ensure they work together seamlessly. Kubernetes is a powerful open-source orchestration platform that automates the deployment, scaling, and management of containerized applications.

Benefits for testing environments:

- Consistency: Containers ensure consistency between development, testing, and production environments, reducing the "it works on my machine" problem.
- Isolation: Each container runs in isolation, preventing conflicts between dependencies and ensuring that testing environments are not affected by changes in other parts of the system.
- Quick deployment: Containers can be spun up and torn down rapidly, facilitating quick and efficient testing cycles.
- Scalability: Orchestration platforms facilitate the easy scaling of testing environments to accommodate varying workloads.
- Efficiency and resource utilization: Containers are lightweight and share the host OS kernel, making them more resource-efficient than traditional virtual machines.
- Improved collaboration: Containers and orchestration enhance collaboration between development and testing teams by providing a standardized, reproducible environment.

Proper security practices should be implemented to secure containers, especially when dealing with sensitive data or in production environments.

Data Management

Data management is a critical aspect of implementing production-like testing environments; it involves carefully handling and manipulating data to replicate real-world scenarios. Whether replicating production data or generating synthetic data, this strategy is essential for testing how applications interact with and handle different data volumes while ensuring the integrity of the data.

Replicating production data: This involves using a copy of the actual data from the production environment in the testing environment. A major benefit of this approach is realism: production data provides a realistic representation of the data the application will process, offering insights into how the system behaves under authentic conditions. Such data often reflects the complexity and diversity of actual usage, helping to uncover issues related to data relationships, structure, and distribution. A major challenge is privacy and security: handling sensitive or personally identifiable information requires careful consideration to comply with privacy and security regulations.

Synthetic data generation: Generating synthetic data involves creating artificial datasets that closely resemble real-world scenarios. This approach is particularly useful when replicating production data is impractical or poses privacy concerns. One benefit is control: synthetic data provides control over the characteristics of the dataset, allowing for the creation of specific scenarios, edge cases, and data variations. A challenge of synthetic data is realism: creating synthetic data that accurately represents the complexity and distribution of real-world data can be difficult. For example, ensuring that synthetic data preserves relationships between different data elements may be crucial for meaningful testing.
Data Masking and Anonymization

When using real production data, especially in testing environments where privacy is a concern, data masking and anonymization techniques can be applied to protect sensitive information. By obscuring or replacing sensitive information with masked or anonymized equivalents, organizations can meet legal requirements and privacy standards. One of the challenges associated with data masking and anonymization is the need for consistent application to maintain the integrity of, and relationships within, the dataset: in scenarios where databases have intricate relationships, ensuring that masked or anonymized data retains these connections is crucial for meaningful testing. Striking the right balance between obscuring sensitive details and preserving the authenticity of data for realistic testing is another challenge. The sketch below shows one way to keep masking deterministic so that relationships survive.
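For example, one common approach is deterministic masking: derive the mask from a hash of the original value, so the same input always produces the same output and cross-table relationships survive. A minimal TypeScript sketch (field and function names are hypothetical):

```typescript
import { createHash } from "node:crypto";

// Deterministic masking: the same input always yields the same masked value,
// so foreign-key-style relationships between tables remain intact.
function maskEmail(email: string): string {
  const [local = "", domain = "example.com"] = email.split("@");
  const digest = createHash("sha256").update(local).digest("hex").slice(0, 8);
  return `user-${digest}@${domain}`;
}

// Every occurrence of the same address masks to the same value.
console.log(maskEmail("jane.doe@corp.com") === maskEmail("jane.doe@corp.com")); // true
```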
Monitoring and Logging

The objective here is not only to observe metrics for the system under test but also to maintain a dynamic equilibrium that mirrors the real-world conditions of production. Ongoing monitoring and validation of data in the testing environment may ensure that it remains consistent, relevant, and representative of real-world conditions. These practices involve the systematic collection, analysis, and visualization of data related to an application's performance, behavior, and issues, helping to capture performance metrics, identify bottlenecks, and gain insights into the application's behavior.

Monitoring proximity: To keep the testing environment in close proximity to the production environment, we may:

- Align performance metrics: Regularly compare key performance indicators (KPIs) between testing and production, adjusting testing conditions to replicate the expected production behavior and maintain a consistent benchmark.
- Simulate realistic user loads: Simulate user loads in testing that closely mimic the patterns and volumes observed in the production environment. Utilize load-testing tools to replicate varying levels of user activity, allowing application performance to be assessed under conditions reflective of production.
- Consistently utilize resources: Maintain consistency in resource utilization patterns between testing and production environments. Monitor CPU usage, memory consumption, and other resource metrics during testing to ensure they align with the metrics observed in the production setting.

Logging proximity: To align testing environments with production realities, we may strive for:

- Event logging harmony: Check that event logging in testing environments harmonizes with the diversity and significance of events recorded in production. Develop a comprehensive event-logging strategy that mirrors the types of events deemed crucial in production scenarios, capturing events related to user interactions, system processes, and critical transactions.
- Error logging fidelity: Align error logging in testing with the diversity and fidelity required for effective troubleshooting in production scenarios. Rigorously log errors encountered during testing, capturing not only error messages but also contextual information such as stack traces and data points, mirroring the thorough error logging essential for root cause analysis in production.
- Audit logging consistency: Check that audit logging in testing environments is consistent with the recording of actions and transactions in production. Capture and record user actions, system modifications, and other relevant activities during testing, so that the audit trail in testing aligns with the stringent compliance and accountability requirements observed in production.

Wrapping Up

Starting with a personal experience from my early years in software development, this article showcases why production-like testing environments can be a critical component of delivering high-quality, reliable software. By closely simulating real-world conditions, teams can detect and address issues early in the development process, leading to more robust and resilient applications. Embracing strategies like IaC, containerization, effective data management, monitoring, and logging can enhance the effectiveness of production-like testing environments, ultimately contributing to a smoother and more reliable software release cycle.
HAProxy is one of the cornerstones of complex distributed systems, essential for achieving efficient load balancing and high availability. This open-source software, lauded for its reliability and high performance, is a vital tool in the arsenal of network administrators, adept at managing web traffic across diverse server environments. At its core, HAProxy excels at evenly distributing the workload among servers, preventing any single server from becoming a bottleneck. This enhances web applications' overall performance and responsiveness and ensures a seamless user experience. More importantly, HAProxy is critical to upholding high availability, a fundamental requirement in today's digital landscape, where downtime can have significant implications. Its ability to intelligently direct traffic and handle failovers makes it indispensable for maintaining uninterrupted service. This article focuses on implementing a safe and optimized health check configuration to ensure a robust way to remove unhealthy servers from the rotation and add healthy servers back to it.

Dynamic Server Management in HAProxy

One of the standout features of HAProxy is its ability to dynamically manage servers, meaning it can add or remove servers from the network as needed. This flexibility is a game-changer for many businesses. When traffic to a website or application increases, HAProxy can seamlessly bring more servers online to handle the load. Conversely, during quieter periods, it can reduce the number of servers so resources aren't wasted.

This dynamic server management is crucial for two main reasons: scalability and fault tolerance. Scalability refers to the ability of a system to handle increased load without sacrificing performance. By adding more servers as demand grows, HAProxy scales up the system's capacity, ensuring that a sudden spike in users doesn't crash the system. This is vital for businesses that experience fluctuating traffic levels or are growing quickly. Fault tolerance is another critical benefit. In any system, servers can fail for various reasons. HAProxy's dynamic server management means it can quickly remove problematic servers from the rotation and reroute traffic to healthy ones. This immediate response to server issues minimizes downtime and keeps the application running smoothly, which is crucial for maintaining a reliable online presence. In short, HAProxy's dynamic server management offers a flexible and efficient way to handle varying traffic loads and unexpected server failures, making it an indispensable tool for modern web infrastructure.

Figure: Sample architecture depicting HAProxy routing requests. In this typical request-and-response setup, HAProxy is installed and configured on all the servers sending requests, all the response servers are kept in rotation, and HAProxy handles routing and load balancing each request to a healthy response server.
Practical Scenarios and Use Cases

HAProxy's dynamic server management proves its worth in various real-world scenarios, demonstrating its versatility and necessity in modern web infrastructures. Let's explore some instances where this feature becomes crucial:

- Handling traffic spikes: Imagine an online retail website during a Black Friday sale. Traffic can surge unexpectedly, demanding more resources to handle the influx of users. With HAProxy, the website can automatically scale up by adding more servers to the rotation, ensuring it remains responsive under the increased load and provides a seamless shopping experience.
- Scheduled maintenance periods: For websites requiring regular maintenance, servers can be taken down for updates or repairs while HAProxy reroutes traffic to the other operational servers, keeping the site live and users unaffected by the maintenance activities.
- Unexpected server failures: When a server unexpectedly fails, HAProxy's health check mechanisms quickly detect the issue and remove the faulty server from the pool. Traffic is then redistributed among the remaining servers, preventing potential service disruptions and maintaining uptime.
- Media streaming services during major events: For services streaming live events like sports or concerts, viewer numbers can skyrocket unexpectedly. HAProxy helps these services scale their server capacity in real time, ensuring uninterrupted streaming even under heavy load.

Optimizing Health Checks for Effective Server Rotation

This section explores implementing a safe and optimized health check configuration to act against the unexpected server failures described above. Such failures are inevitable in network systems, but with HAProxy their impact can be significantly mitigated by implementing and optimizing health checks. Health checks are automated tests HAProxy performs to continually evaluate the status of the servers in its pool. When a server fails or becomes unresponsive, these checks quickly identify the issue, allowing HAProxy to instantly remove the problematic server from the rotation and reroute traffic to healthy ones. This process is essential for maintaining uninterrupted service and high availability. The snippet below shows one approach to implementing robust health checks; for more details about syntax and keywords in the haproxy.cfg file, please refer to the manual page.
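A minimal haproxy.cfg sketch along these lines (the backend name, server names, addresses, and health-check path are placeholders):

```
backend response_servers
    # Probe each server's health endpoint over HTTP.
    option httpchk GET /health
    # inter 2s: check every 2 seconds; fall 2: remove after 2 failed checks;
    # rise 10: re-add only after 10 consecutive passing checks.
    default-server inter 2s fall 2 rise 10
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```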
In this configuration:

- inter: the time interval between health checks
- fall: the number of failed checks before a server is removed from rotation
- rise: the number of passing checks before a server is added back to rotation

With inter 2s fall 2 rise 10, HAProxy performs a health check every 2 seconds against the provided URI path. If HAProxy encounters two (fall 2) consecutive failing checks on a server, that server is removed from rotation and stops taking traffic; this is an aggressive stance that keeps the failure threshold very low. Conversely, rise 10 takes a conservative approach to putting a server back into rotation, waiting for ten consecutive passing health checks before re-adding it. This combination provides the right balance when dealing with unexpected server failures.

Conclusion

In conclusion, HAProxy's dynamic server management, along with its sophisticated health check mechanisms, plays a vital role in the modern distributed systems infrastructure stack. By enabling real-time responsiveness to traffic demands and unexpected server issues, HAProxy ensures high availability, a seamless user experience, and operational efficiency. The exploration of real-world scenarios and the emphasis on optimizing health checks for server rotation underscore the adaptability and resilience of HAProxy in various challenging environments. This capability not only enhances system reliability but also empowers businesses to maintain continuous service quality, a critical factor in today's digital landscape. Ultimately, HAProxy emerges not just as a tool for load balancing but as a comprehensive solution for building robust, resilient systems, pivotal for any organization striving for excellence in online service delivery.
As React Native applications evolve, the need for efficient state management becomes increasingly evident. While Async Storage serves its purpose for local data persistence, transitioning to the Context API with TypeScript brings a more organized and scalable approach. This guide walks you through the migration process step by step, leveraging the power of TypeScript.

Understanding Async Storage and Context API

Async Storage in React Native offers asynchronous, persistent storage for key-value data on the device. As the application scales, managing state solely through Async Storage can become cumbersome. The Context API, in conjunction with TypeScript, provides a structured means of sharing state across components without prop drilling; it ensures type safety and enhances development efficiency.

Why Replace Async Storage With Context API in TypeScript?

- Type safety: TypeScript's strong typing system ensures better code integrity and reduces potential runtime errors.
- Scalability and maintainability: The Context API simplifies state management and promotes scalability by facilitating a more organized codebase.
- Enhanced development experience: TypeScript's static typing aids in catching errors during development, leading to more robust and maintainable code.

Step-By-Step Replacement Process

1. Identify Async Storage Usage

Review the codebase to locate sections using Async Storage for reading or writing data.

2. Create a Context With TypeScript

```typescript
import React, { createContext, useContext, useReducer, Dispatch } from 'react';

interface AppState {
  // Define your application state interface here
  exampleData: string;
}

interface AppAction {
  // Define action types and payload structure here
  type: string;
  payload?: any;
}

const initialState: AppState = {
  exampleData: '',
};

const AppContext = createContext<{
  state: AppState;
  dispatch: Dispatch<AppAction>;
}>({
  state: initialState,
  dispatch: () => null,
});

const appReducer = (state: AppState, action: AppAction): AppState => {
  // Implement your reducer logic here based on action types
  switch (action.type) {
    case 'UPDATE_DATA':
      return {
        ...state,
        exampleData: action.payload,
      };
    // Add other cases as needed
    default:
      return state;
  }
};

const AppProvider: React.FC<{ children: React.ReactNode }> = ({ children }) => {
  const [state, dispatch] = useReducer(appReducer, initialState);

  return (
    <AppContext.Provider value={{ state, dispatch }}>
      {children}
    </AppContext.Provider>
  );
};

const useAppContext = () => {
  return useContext(AppContext);
};

export { AppProvider, useAppContext };
```

3. Refactor Components To Use Context

Update components to consume data from the newly created context:

```typescript
import React from 'react';
import { useAppContext } from './AppContext';

const ExampleComponent: React.FC = () => {
  const { state, dispatch } = useAppContext();

  const updateData = () => {
    const newData = 'Updated Data';
    dispatch({ type: 'UPDATE_DATA', payload: newData });
  };

  return (
    <div>
      <p>{state.exampleData}</p>
      <button onClick={updateData}>Update Data</button>
    </div>
  );
};

export default ExampleComponent;
```

4. Implement Context Provider

Wrap your application's root component with the AppProvider:

```typescript
import React from 'react';
import { AppProvider } from './AppContext';
import ExampleComponent from './ExampleComponent';

const App: React.FC = () => {
  return (
    <AppProvider>
      <ExampleComponent />
      {/* Other components using the context */}
    </AppProvider>
  );
};

export default App;
```
5. Test and Debug

Thoroughly test the application to ensure proper functionality, and handle any issues encountered during the migration process.
As the year 2023 winds down, there is time for reflection and looking back. I've done that every year on this blog with year-in-review articles. This year I thought I might take all the new learnings around cloud native observability, add in some insights from all the observability-related events I attended, and try to predict the biggest changes coming in 2024. In this article, I lay out three top predictions based on my experiences over 2023 in the cloud native domain, with a big focus on the observability arena. This has been my first complete year focused on these topics, and the change in technologies I've been focusing on up to now meant I could approach this with zero bias. I just researched, went hands-on with open source projects mostly found in the Cloud Native Computing Foundation (CNCF) domain, and went on the road to events to put an ear to the ground. While many predictions you find out in the wild tend to be about the next big technology breakthrough or the expanded use of an emerging technology in larger organizations, this time around I found myself thinking about this in a slightly different way. Inside, I think everyone cringes a bit when asked to produce their thoughts and predictions on the new year. You just can't win with these things, and committing to them ensures you will be told long into the future how wrong you were! Now on to my top three predictions for 2024.

1. Focus on Cloud Native Burnout

The number one topic of conversation in the cloud native observability domain in 2023 has been, without a doubt, burnout. This spans every role: Site Reliability Engineers (SREs), DevOps, engineers, developers, and anyone managing part of the cloud native engineering experience within an organization. All of them resonated with this being the number one theme out there. Where does this come from, you might ask? A 2023 Cloud Native Observability Report surveyed over 500 engineers and developers; here are a few of the results:

- They spend 10 hours per week, on average, trying to triage and understand incidents: a quarter of their 40-hour workweek.
- 88% reported that the amount of time spent on issues negatively impacts them and their careers.
- 39% admit they are frequently stressed out.
- 22% said they want to quit.

It looks like the issues surrounding the use of cloud native solutions and the managing and maintaining of that infrastructure will continue to add stress and pressure, with a resulting impact on cloud native practitioners across the globe and in all kinds of organizations. My prediction is that the attention this topic got in 2023, which was primarily focused on the on-call roles, will expand and deepen into all areas where organizations are trying to grow their cloud native footprints. In 2024 we will hear more about burnout-related stress, hear more ideas on how to solve it, and see it become one of the biggest topics of conversation at events, online, and at the coffee machine.

2. More Career Movement

The points of contention in the first prediction make this second prediction less staggering. A quick look at IT roles and retention rates across organizations shows a rather high number of developers, engineers, DevOps, SREs, and more changing employers every year. This is not to say that 2024 will bring massive layoffs.
It's more about the levels of stress, burnout, and pressure that come with cloud native organizations. According to research posted by Sterling in late 2022, the tech turnover rate was 13.2%, exceeding all other industries, which averaged 10.5%. LinkedIn research cited other sources in its survey that pushed the turnover rate in tech to 18.3%. Whichever number you choose, this is about cloud native technical staff having roles filled with days of frustration, stress, and problems. They will hit that final wall in 2024 and determine that there is no way to be happy and engaged in their current organizations. My prediction is a surge of career movement: over 25% of those in current tech roles will take the plunge and try to find fulfillment in new roles, new organizations, and new opportunities.

3. Cloud Native Cost Focus

From its initial kickoff in early 2019, to its entry into the Linux Foundation in 2020, and into the future, the FinOps Foundation has become vital to all cloud-native and cloud-using organizations. All through 2022 and 2023, we've seen organizations beginning to realize that they need to get value for every dollar spent on cloud-native services. Along these lines, the FinOps Foundation has become the central gathering place for practitioners in the FinOps role across all kinds of organizations. It supports them with training and certifications and is close to releasing the FinOps Cost and Usage Specification (FOCUS) v1.0, which aligns with the open-source community approach to this fairly new space. My prediction is that the continued growth seen in the field of FinOps in 2023 will evolve in 2024 into a permanent value-add for more and more organizations. CIOs, CFOs, and CTOs are going to lean more on FinOps roles, processes, and education to manage their cloud-native spend and ensure that the value per dollar spent continues to have an impact on their cloud-native investments.

On to 2024

There you have my thoughts, or predictions, for the impressions that 2023 left on me during my travels, conversations, and research into the cloud native and observability domains. Now it's time to roll on into the new year and see what 2024 brings for you and your organizations.
The advancement of technologies such as artificial intelligence (AI) has enabled modern chatbots to respond to user requests using text, audio, and video, eliminating the need for manual research. Chatbots and assistants are now applicable in a wide range of use cases, from ordering a pizza to navigating complex B2B sales processes. As a result, chatbots have become an essential part of almost every industry today. This article delves into the world of chatbots and AI assistants and provides a step-by-step guide to creating a chatbot with Natural Language Processing (NLP) and chatbot frameworks.

Understanding the Power of Chatbots and AI Assistants

The first ever chatbot was created by MIT professor Joseph Weizenbaum in 1966. It was called ELIZA, and it simulated a conversation using pattern matching and substitution methodologies: the bot searched for keywords in the user input, then used scripts to apply values to the keywords and transform them into an output. Weizenbaum did not expect ELIZA to amuse users as much as it did, with many people attributing human-like feelings to the program and experts predicting that conversational technologies would take over the world. In the decades that followed, chatbots continued to evolve, with new ones like Jabberwacky, ALICE, and SmarterChild employing increasingly sophisticated technologies like heuristic pattern matching. These emerging generations of chatbots gradually gained the ability to support more conversation modes and provide access to additional services such as weather updates, news alerts, and even simple games. The next revolution occurred in 2011, when Apple introduced Siri, a voice-activated AI assistant integrated into the iPhone 4S. It used advanced machine learning techniques to answer questions, make recommendations, and perform actions by delegating requests to a set of internet services, becoming one of the first mainstream AI assistants. Later, all major tech companies debuted their own AI assistants, such as Google Now (2012), Microsoft's Cortana (2014), Amazon's Alexa (2014), and Google Assistant (2016).

The Nuts and Bolts of AI Assistants

Today's AI assistant is a sophisticated chatbot with AI capabilities that frequently employs machine learning to improve over time. AI assistants are more advanced than traditional chatbots at naturally understanding and responding to human language. They can learn from interactions and perform a wide variety of tasks rather than being restricted to predefined scripts.

Capabilities of AI assistants:

- Voice recognition
- Natural language processing
- Task automation (e.g., setting reminders, playing music)
- Personalized recommendations
- Integration with various apps and IoT devices

Business adoption of AI:

- Customer support
- Routine inquiries
- Sales
- Marketing
- Data analysis

The Magic Behind NLP: Unraveling the Basics

Natural language processing (NLP) is a subset of artificial intelligence that includes technologies enabling computers to understand, interpret, and respond to human language. Beyond chatbots, it's used in sentiment analysis to gauge public opinion and in language translation to bridge communication gaps. To better understand NLP, it is necessary to investigate its fundamental concepts:

- Tokenization: the process of dividing text into smaller parts, such as individual words or phrases, known as tokens, to assist machines in analyzing human speech.
Part-of-speech tagging: the process of identifying each word's grammatical role in the phrase, which improves a chatbot's understanding of sentence structure.
Named entity recognition: the process of detecting names of people, places, and things, which is essential for chatbots to understand context.

These ideas are critical for making chatbots smarter and more responsive.

Choosing the Right Chatbot Framework

In today's technology landscape, even non-programmers can create a chatbot. The market is brimming with tools and frameworks that make this genuinely simple. Among the most popular frameworks are:

Dialogflow: Dialogflow integrates with Google services and has an easy-to-use interface as well as strong NLP capabilities. However, it can be expensive to use at a large scale.
Rasa: Rasa is open source, highly customizable, and suitable for complex bots. It has two main components, Rasa NLU and Rasa Core, which aid in the development of bots capable of handling complex user inquiries. More technical knowledge is required.
Microsoft Bot Framework: Microsoft Bot Framework is a platform for developing, connecting, publishing, and managing intelligent, interactive chatbots. It works well with Microsoft products and has a robust set of features, but the learning curve can be quite steep.

Consider the following factors when choosing the best platform for your needs:

Complexity: Simple tasks may only require basic platforms such as Dialogflow, whereas Rasa caters to complex, customizable requirements.
Scalability: Make your decision based on expected user volume. Dialogflow and the Microsoft Bot Framework are both scalable.
Integration capabilities: Match the platform to your existing tech stack.

Case Study

T-Mobile, the second largest wireless carrier in the United States with 100 million customers, used Rasa to create an effective AI assistant that helped the company with customer support during the COVID-19 pandemic. It reduced wait times and improved customer experience at a time when over 20,000 people could be waiting to speak with an expert at once. T-Mobile's virtual assistant reached 10% of messaging customers within months of its launch.

Building Your Chatbot: Step-By-Step Guide

Step 1: Preparing the Groundwork
Set up a development environment, select a framework (such as Dialogflow or Rasa), and understand the needs and language patterns of the target audience. Gather relevant datasets for training the chatbot, making sure they are representative of actual user interactions.

Step 2: Crafting Conversational Design
Create natural, engaging dialogues that are in line with user expectations. Plan out user flows to cover various conversation paths. To ensure smooth conversations, use simple, clear language and anticipate user queries.

Step 3: Developing the Brain
Construct the chatbot using the chosen framework. This includes setting up intents, entities, and responses. Provide snippets for basic functions such as greeting users and answering frequently asked questions (a minimal sketch of this idea follows the guide). Emphasize best practices such as modular coding and keeping a clean codebase.

Step 4: Testing and Iteration
Conduct extensive testing, including user testing, to ensure the chatbot works as expected in various scenarios. Use feedback to develop the chatbot iteratively, refining it continuously based on user interactions and new data.
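Framework-specific implementations differ - Dialogflow is configured through its console, while Rasa uses training data files - but the underlying idea of mapping intents to responses can be sketched in plain Java. The following is a minimal, framework-agnostic illustration; the intent names, keywords, and responses are invented for the example and are not part of any framework's API.

Java
import java.util.List;
import java.util.Map;

public class TinyBot {

    // Each intent is represented by trigger keywords and a canned response.
    private static final Map<String, List<String>> INTENT_KEYWORDS = Map.of(
        "greeting", List.of("hello", "hi", "hey"),
        "faq_hours", List.of("hours", "open", "close"));

    private static final Map<String, String> INTENT_RESPONSES = Map.of(
        "greeting", "Hello! How can I help you today?",
        "faq_hours", "We are open from 9 AM to 5 PM, Monday to Friday.");

    // Return the response for the first intent whose keyword appears in the input.
    public static String reply(String userInput) {
        String normalized = userInput.toLowerCase();
        return INTENT_KEYWORDS.entrySet().stream()
            .filter(e -> e.getValue().stream().anyMatch(normalized::contains))
            .map(e -> INTENT_RESPONSES.get(e.getKey()))
            .findFirst()
            .orElse("Sorry, I didn't catch that. Could you rephrase?");
    }

    public static void main(String[] args) {
        System.out.println(reply("Hi there!"));            // greeting
        System.out.println(reply("What are your hours?")); // faq_hours
    }
}

A real framework replaces this naive keyword matching with a trained NLU model, but the intent-to-response structure it manages looks much the same.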
Enhancing Your Chatbot With Advanced NLP Techniques

Beyond basic responses, NLP provides a set of advanced features that allow chatbots to respond more appropriately, handle complex queries, and deliver personalized experiences:

Sentiment analysis to gauge user emotions
Intent recognition to accurately understand user requests
Entity extraction to identify and use key information from user inputs

NLP techniques can also give a chatbot multilingual and multimodal support.

Benefits and Drawbacks
Adding multilingual support requires understanding the nuances of various languages, which can be difficult but broadens user reach. Multimodal support (such as voice, text, and images) improves user interaction but requires the sophisticated integration of multiple AI technologies.

Implementation Guidance
For language support, use robust NLP libraries and APIs. Integrate technologies such as speech recognition and image processing for multimodal capabilities, and test extensively across languages and modes.

Deploying and Scaling Your AI Assistant

Deployment options: Chatbots can be integrated into websites, embedded in messaging platforms such as Facebook Messenger, or built into mobile apps. Each platform offers distinct advantages in reach and user engagement.
Security concerns: It is critical to ensure data privacy and security during and after deployment, especially for bots that handle sensitive information. Use encryption and secure authentication methods.
Scaling strategies: Prepare the chatbot to handle increased traffic by increasing server capacity and refining AI algorithms for efficiency. Update the bot regularly with new data and features.

Ethical Considerations and Future Trends

Ethical AI and Privacy
The critical issues that arise during AI chatbot development include ensuring unbiased AI algorithms, transparent data usage, and respect for user privacy. A growing emphasis is being placed on preventing AI from perpetuating stereotypes or prejudices. Responsible development requires the following:

Apply ethical AI principles (transparency in data collection, personal information security, obtaining user consent, and providing clear data usage policies).
Conduct bias and accuracy audits on a regular basis.

Future Trends in Conversational AI
The rapid advancement of AI technologies is shaping where conversational interfaces go next. Some of the trends are as follows:

Voice-activated AI: The rise of voice-activated AI, such as smart home devices, points towards more natural, conversational interactions with technology.
AI-human collaboration: AI is expected to augment rather than replace human capabilities, resulting in improved customer service and more personalized user experiences.

Conclusion

In this article, we examined how AI drives the development of chatbots, which are gaining new capabilities that improve our daily lives on many levels and reshape businesses for the better. We also looked at how to create a powerful chatbot using the best tools and platforms the market has to offer.
When building a large, production-ready, stateless microservices architecture, we always come across a common challenge: preserving request context across services and threads, including propagating the context to child threads.

What Is Context Propagation?

Context propagation means passing contextual information or state across different components or services in a distributed system, where applications are often composed of multiple services running on different machines or containers. These services need to communicate and collaborate to fulfill a user request or perform a business process. Context propagation becomes crucial in such distributed systems to ensure that relevant information about a particular transaction or operation is carried along as it traverses different services. This context may include data such as:

User authentication details
Request identifiers
Distributed tracing information
Other metadata (that helps in understanding the state and origin of a request)

Key aspects of context propagation include:

Request context: When a user initiates a request, it often triggers a chain of interactions across multiple services. The context of the initial request, including relevant information like user identity, request timestamp, and unique identifiers, needs to be propagated to ensure consistent behavior and tracking.
Distributed tracing and logging: Context propagation is closely tied to distributed tracing and logging mechanisms. By propagating context information, it becomes easier to trace the flow of a request through various services, aiding in debugging, performance analysis, and monitoring.
Consistency: Maintaining a consistent context across services is essential for ensuring that each service involved in handling a request has the necessary information to perform its tasks correctly. This helps avoid inconsistencies and ensures coherent behavior across the distributed system.
Middleware and framework support: Many middleware products and frameworks provide built-in support for context propagation. For example, in microservices architectures, frameworks like Spring Cloud, Istio, or Zipkin offer tools for managing and propagating context seamlessly.
Statelessness: Context propagation is especially important in stateless architectures, where each service should operate independently without relying on shared state. The context provides the information a service needs to process a request without storing persistent state.

Effective context propagation contributes to the overall reliability, observability, and maintainability of distributed systems by providing a unified view of the state of a transaction as it moves through different services. It also helps reduce boilerplate code.

The Use Case

Let's say you are building Spring Boot WebFlux-based microservices, and you need to ensure that the state of the user (session identifier, request identifier, logged-in status, etc.) and the client (device type, client IP, etc.) passed in the originating request is carried between the services.

The Challenges

Service-to-service calls: For internal service-to-service calls, context propagation does not happen automatically.
Propagating context within classes: To refer to the context within service and/or helper classes, you would otherwise need to pass it explicitly via method arguments. This can be handled by creating a class with a static method that stores the context in a ThreadLocal object.
Java stream operations: Since Java stream functions may run in separate executor threads, context propagation via ThreadLocal to child threads needs to be done explicitly.
WebFlux: Similar to Java stream functions, context propagation in WebFlux needs to be handled via Reactor hooks.

The idea here is to ensure that context propagation happens automatically to child threads and to internally called services via a reactive WebClient. A similar pattern can be implemented for non-reactive code as well.

Solution

Core Java provides two classes, ThreadLocal and InheritableThreadLocal, to store thread-scoped values. ThreadLocal allows the creation of variables that are local to a thread, ensuring each thread has its own copy of the variable. A limitation of ThreadLocal is that if a new thread is spawned within the scope of another thread, the child thread does not inherit the values of the parent's ThreadLocal variables.

Java
public class ExampleThreadLocal {

    private static ThreadLocal<String> threadLocal = new ThreadLocal<>();

    public static void main(String[] args) {
        threadLocal.set("Main Thread Value");

        new Thread(() -> {
            // The child thread does not see the parent's value.
            System.out.println("Child Thread: " + threadLocal.get()); // Outputs: Child Thread: null
        }).start();

        System.out.println("Main Thread: " + threadLocal.get()); // Outputs: Main Thread: Main Thread Value
    }
}

On the other hand, InheritableThreadLocal extends ThreadLocal and gives child threads the ability to inherit values from their parent threads.

Java
public class ExampleInheritableThreadLocal {

    private static InheritableThreadLocal<String> inheritableThreadLocal = new InheritableThreadLocal<>();

    public static void main(String[] args) {
        inheritableThreadLocal.set("Main Thread Value");

        new Thread(() -> {
            // The value is copied to the child thread when it is created.
            System.out.println("Child Thread: " + inheritableThreadLocal.get()); // Outputs: Child Thread: Main Thread Value
        }).start();

        System.out.println("Main Thread: " + inheritableThreadLocal.get()); // Outputs: Main Thread: Main Thread Value
    }
}

Hence, in scenarios where context must be propagated from parent to child threads, we can use an application-scoped static InheritableThreadLocal variable to hold the context and fetch it wherever needed.

Java
@Getter
@ToString
@Builder
public class RequestContext {
    private String sessionId;
    private String correlationId;
    private String userStatus;
    private String channel;
}

Java
public class ContextAdapter {

    // Backed by InheritableThreadLocal so child threads inherit the context.
    final ThreadLocal<RequestContext> threadLocal = new InheritableThreadLocal<>();

    public RequestContext getCurrentContext() {
        return threadLocal.get();
    }

    public void setContext(RequestContext requestContext) {
        threadLocal.set(requestContext);
    }

    public void clear() {
        threadLocal.remove();
    }
}

Java
public final class Context {

    static ContextAdapter contextAdapter;

    private Context() {}

    static {
        contextAdapter = new ContextAdapter();
    }

    public static void clear() {
        if (contextAdapter == null) {
            throw new IllegalStateException();
        }
        contextAdapter.clear();
    }

    public static RequestContext getContext() {
        if (contextAdapter == null) {
            throw new IllegalStateException();
        }
        return contextAdapter.getCurrentContext();
    }

    public static void setContext(RequestContext requestContext) {
        if (contextAdapter == null) {
            throw new IllegalStateException();
        }
        contextAdapter.setContext(requestContext);
    }

    public static ContextAdapter getContextAdapter() {
        return contextAdapter;
    }
}

We can then refer to the context by calling the static method wherever required in the code.
Java
Context.getContext()

This solves for:

Propagating context within classes
Java stream operations
WebFlux

To ensure that context is automatically propagated to external calls made via WebClient, we can create a custom ExchangeFilterFunction that reads the context from Context.getContext() and adds it to the headers or query parameters as required.

Java
public class HeaderExchange implements ExchangeFilterFunction {

    @Override
    public Mono<ClientResponse> filter(ClientRequest clientRequest, ExchangeFunction exchangeFunction) {
        return Mono.deferContextual(Mono::just)
            .flatMap(context -> {
                RequestContext currentContext = Context.getContext();
                // Copy the context values onto the outbound request as headers.
                ClientRequest newRequest = ClientRequest.from(clientRequest)
                    .headers(httpHeaders -> {
                        httpHeaders.add("context-session-id", currentContext.getSessionId());
                        httpHeaders.add("context-correlation-id", currentContext.getCorrelationId());
                    })
                    .build();
                return exchangeFunction.exchange(newRequest);
            });
    }
}

Initializing the context as part of a WebFilter:

Java
@Slf4j
@Component
public class RequestContextFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        // Read the context values from the incoming request headers.
        String sessionId = exchange.getRequest().getHeaders().getFirst("context-session-id");
        String correlationId = exchange.getRequest().getHeaders().getFirst("context-correlation-id");
        RequestContext requestContext = RequestContext.builder()
            .sessionId(sessionId)
            .correlationId(correlationId)
            .build();
        Context.setContext(requestContext);
        return chain.filter(exchange);
    }
}
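For the filter to take effect, it must be registered when the WebClient is built. Here is a minimal sketch of that wiring, assuming the HeaderExchange class above is on the classpath; the configuration class, bean name, and base URL are illustrative placeholders, not part of the original solution.

Java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
public class WebClientConfig {

    // Register the custom filter so every outbound request carries the context headers.
    @Bean
    public WebClient contextAwareWebClient() {
        return WebClient.builder()
            .filter(new HeaderExchange())
            .baseUrl("http://downstream-service") // illustrative placeholder
            .build();
    }
}

With this in place, any call made through this WebClient automatically includes the context-session-id and context-correlation-id headers, and the downstream service's RequestContextFilter re-populates its own context from them.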
Hello DZone Community! Recently, you might have seen our announcement about the updates to the Core program. We’ve received a lot of great feedback about the new program, and we’re very excited to continue growing and expanding it to more members! But that was just the beginning. We’ve been working hard on improvements across the entire DZone community, and today, we are thrilled to announce some big improvements to your DZone profiles!

There’s a lot to unpack with these new profiles, but the overall gist is that they get a fresh new look (ooh shiny!!) and some new features for you. Among other things, we’ve added:

A section for your education, training, and credentials earned
Sections for any Trend Reports and Refcards you’ve contributed to
A section for any DZone events you’ve been a part of

While all members will receive the above updates to their profiles, we’ve built some additional features for our Core members. They truly go above and beyond for the DZone community by being highly engaged and regularly contributing expert content to the site. These additional changes will help continue to elevate them as thought leaders both within the DZone community and across the industry at large. Core member profiles will now have:

An optimized profile
A place to add open-source projects they're working on or support
A section recognizing when they're highlighted as a Featured Expert on DZone
A new, exclusive banner showcasing their Core membership

We could not be more excited to roll out these new profiles to you all. Every single one of our contributors is essential to what we do at DZone, and these new profiles will help show our community and the rest of our audience just how knowledgeable and important you are to DZone. We literally would not be here without you!

If you haven't already and would like to begin your contributor journey, you can start by creating your own article! Our team of editors is here to help along the way. You can reach out to editors@dzone.com with any of your content questions. Please spend some time poking around your new profile, and let us know what you think. We’re always open to feedback and new ideas! Drop us a line at community@dzone.com with your thoughts. We are so incredibly grateful for all you do for DZone!

Sincerely,
The DZone Team