Kubernetes' CronJob API is a pivotal feature for automating regular tasks in a cloud-native environment. This guide not only walks you through the steps to use this API but also illustrates practical use cases where it can be highly beneficial.

Prerequisites

- A running Kubernetes Cluster (version 1.21 or later)
- kubectl Command Line Tool
- Basic Kubernetes knowledge (Pods, Jobs, CronJobs)

Understanding the CronJob API

The CronJob resource in Kubernetes is designed for time-based job execution. The new API (batch/v1) brings enhancements in reliability and scalability.

Use Cases

Database Backup

Regular database backups are crucial for data integrity. A cron job can be configured to perform database backups at regular intervals, say, daily at midnight. See the following YAML example:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          volumes:
            - name: backup-volume
              hostPath:
                path: /mnt/backup
          containers:
            - name: db-backup
              image: mysql:5.7
              args:
                - mysqldump
                - --host=<database host>
                - --user=root
                - --password=<database password>
                - --result-file=/mnt/backup/all-databases.sql
                - <database name>
              volumeMounts:
                - name: backup-volume
                  mountPath: /mnt/backup
```

Explanation of Key Components

- apiVersion: batch/v1: Specifies the API version.
- kind: CronJob: Defines the resource type.
- metadata: Contains the name of the cron job.
- spec.schedule: Cron format string, here set to run daily at midnight.
- jobTemplate: Template for the job to be created.
- containers:
  - name: Name of the container.
  - image: Docker image to use (MySQL 5.7 in this case).
  - args: Commands to execute in the container. Here, it runs mysqldump to back up all databases.
  - --result-file=/mnt/backup/all-databases.sql: Redirects the output to a file.
- restartPolicy: OnFailure: Restart strategy for the container.
- volumes and volumeMounts: Configure a volume for storing the backup file.

Automated License Plate Recognition

A large commercial parking area requires an efficient system to track vehicles entering and exiting by recognizing their license plates. This scenario outlines a Kubernetes CronJob setup for processing images captured by parking area cameras, using an Automated License Plate Recognition (ALPR) system. See the following YAML snippet:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: alpr-job
spec:
  schedule: "*/5 * * * *" # Every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: alpr-processor
              image: mycompany/alpr-processor:latest
              env:
                - name: IMAGE_SOURCE_DIR
                  value: "/data/camera-feeds"
                - name: PROCESSED_IMAGE_DIR
                  value: "/data/processed"
              volumeMounts:
                - name: camera-data
                  mountPath: "/data"
          restartPolicy: OnFailure
          volumes:
            - name: camera-data
              persistentVolumeClaim:
                claimName: camera-data-pvc
```

Explanation of Key Components

- schedule: "*/5 * * * *": The cron job runs every 5 minutes to process recent images.
- containers:
  - image: mycompany/alpr-processor:latest: A custom Docker image containing the ALPR software. You can search Docker Hub and replace this with the appropriate container image.
  - env: Environment variables set the paths for the source and processed images.
- volumeMounts and volumes: A Persistent Volume Claim (PVC) is used to store images from cameras and processed data.

Some of the benefits of the above use case are as follows:

- Entry and Exit Tracking: The system processes images to extract license plate data, providing real-time information on vehicles entering or exiting.
- Security and Surveillance: Enhanced monitoring of vehicle movement for security purposes.
- Data Analytics: Accumulate data over time for traffic pattern analysis and parking management optimization.

Other Use Cases

- Report Generation: Generate and email system performance reports or business analytics daily or weekly.
- Cleanup Operations: Automatically purge temporary files, logs, or unused resources from your system every night to maintain a clean and efficient environment.
- Data Synchronization: Synchronize data between different environments or systems, like syncing the staging database with production every weekend.
- Certificate Renewal: Automate the renewal of SSL/TLS certificates before they expire.

Deploy Cron Jobs

You can deploy a cron job as shown below:

```shell
kubectl apply -f db-backup-cronjob.yaml
```

To list the jobs fired by the cron jobs, the following command can be used:

```shell
kubectl get jobs
```

Conclusion

Leveraging Kubernetes' new CronJob API allows for efficient and automated management of routine tasks, enhancing both operational efficiency and system reliability. These practical use cases demonstrate how cron jobs can be pivotal in various scenarios, from data management to system maintenance.

Disclaimer: This guide is intended for users with a basic understanding of Kubernetes concepts.
In a few words, the idea of canary releases is to deliver a new software version to only a fraction of the users, analyze the results, and decide whether to proceed further or not. If results are not aligned with expectations, roll back; if they are, increase the number of users exposed until all users benefit from the new version. In this post, I'd like to expand on this brief introduction, explain different ways to define the fraction, and show how to execute it with Apache APISIX.

Introduction to Canary Releases

The term "canary" originates from the coal mining industry. When mining, it's not uncommon to release toxic gases. In a small enclosed space, it can mean quick death. Worse, the gas may be odorless, so miners would breathe it until it was too late to leave. Carbon monoxide is quite common in coal mines and is not detectable by human senses. For this reason, miners brought canaries with them underground. If the canary suddenly dropped dead, chances were high that such a gas pocket had been breached, and it was high time to leave the place.

Years ago, we brought this approach to releasing a new software version. The analogy goes like this: miners are the Ops team deploying the version, the canary consists of all tools to measure the impact of the release, and the gas is a (critical) bug. The most crucial part is that you need to measure the impact of the release, including failure rates, HTTP status codes, etc., and compare them with those of the previous version. It's outside the scope of this post, but again, it's critical if you want to benefit from canary releases. The second most important part is the ability to roll back fast if the new version is buggy.

Canary Releases vs. Feature Flags

Note that canary releases are not the only way to manage the risk of releasing new code. For example, feature flags are another popular way:

- The canary approach delivers the complete set of features in the new component version.
- Feature flags deploy the component as well, but dedicated configuration parameters allow activating and deactivating each feature individually.

Feature flags represent a more agile approach (in the true sense of the word) toward rollbacks. If one feature out of 10 is buggy, you don't need to undeploy the new version; you only deactivate the buggy feature. However, this superpower comes at the cost of additional codebase complexity, regardless of whether you rely on third-party products or implement it yourself. On the other hand, canary releases require a mature deployment pipeline to be able to deploy and undeploy at will.

Approaches to Canary Releases

The idea behind canary releases is to allow only a fraction of users to access the new version. Most canary definitions only define "fraction" as a percentage of users. However, there's more to it. The first step may be to allow only vetted users to check that the deployment in the production environment works as expected. In this case, you may forward only a specific set of internal users, e.g., testers, to the new version. If you know the people in advance, and the system authenticates users, you can configure it by identity; if not, you need to fall back to some generic mechanism, e.g., an HTTP header: X-Canary: Let-Me-Go-To-v2. Remember that we must monitor the old and the new systems to look for discrepancies. If nothing shows up, it's an excellent time to increase the pool of users forwarded to the new version. I assume you eat your own dog food, i.e., team members use the software they're developing.
If you don't (say you build an e-commerce site for luxury cars), you're welcome to skip this section. To enlarge the fraction of users while limiting the risks, we can now indiscriminately provide the new version to internal users. To do this, we can configure the system to forward to the new version based on the client IP. At a time when people were working on-site, it was easy, as their IPs were in a specific range. Remote work doesn't change much, since users probably access the company's network via a VPN. Again, monitor and compare at this point.

The Whole Nine Yards

At this point, everything should work as expected for internal users, either a few or all. But just as no plan survives contact with the enemy, no internal usage can mimic the whole diversity of a production workload. In short, we need to let regular users access the new version, but in a controlled way, just as we gradually increased the number of users so far: start with a small fraction, monitor it, and if everything is fine, increase the fraction. Here's how to do it with Apache APISIX.

Apache APISIX offers a plugin-based architecture and provides a plugin that caters to our needs, namely the traffic-split plugin.

The traffic-split Plugin can be used to dynamically direct portions of traffic to various Upstream services. This is done by configuring match, which are custom rules for splitting traffic, and weighted_upstreams which is a set of Upstreams to direct traffic to. — traffic-split

Let's start with some basic upstreams, one for each version:

```yaml
upstreams:
  - id: v1
    nodes:
      "v1:8080": 1
  - id: v2
    nodes:
      "v2:8080": 1
```

We can use the traffic-split plugin to forward most of the traffic to v1 and a fraction to v2:

```yaml
routes:
  - id: 1
    uri: "*"                     #1
    upstream_id: v1
    plugins:
      traffic-split:
        rules:
          - weighted_upstreams:  #2
              - upstream_id: v2  #3
                weight: 1        #3
              - weight: 99       #3
```

1. Define a catch-all route
2. Configure how to split traffic; here, weights
3. Forward 99% of the traffic to v1 and 1% to v2

Note that the weights are relative to each other. To achieve 50/50, you can set weights 1 and 1, 3 and 3, 50 and 50, etc.

Again, we monitor everything and make sure results are as expected. Then, we can increase the fraction of the traffic forwarded to v2, e.g.:

```yaml
routes:
  - id: 1
    uri: "*"
    upstream_id: v1
    plugins:
      traffic-split:
        rules:
          - weighted_upstreams:
              - upstream_id: v2
                weight: 5        #1
              - weight: 95       #1
```

1. Increase the traffic to v2 to 5%

Note that Apache APISIX reloads changes to the file above every second. Hence, you split traffic in near-real time. Alternatively, you can use the Admin API to achieve the same.

More Controlled Releases

In the above, I moved from internal users to a fraction of external users. Perhaps releasing to every internal user is too big a risk in your organization, and you need even more control. In this case, you can further configure the traffic-split plugin:

```yaml
routes:
  - id: 1
    uri: /*
    upstream_id: v1
    plugins:
      traffic-split:
        rules:
          - match:
              - vars: [["http_X-Canary", "~=", "Let-Me-Go-To-v2"]]  #1
            weighted_upstreams:
              - upstream_id: v2
                weight: 5
              - weight: 95
```

1. Only split traffic if the X-Canary HTTP header has the configured value.
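As noted earlier, the Admin API is an alternative to editing the standalone configuration file. Here is a rough sketch of pushing the same header-gated rule through it; the address (port 9180) and the X-API-KEY value are the documented defaults and will differ depending on your installation, so treat this as illustrative only:

```shell
# Sketch only: adjust the Admin API address and key to your APISIX deployment.
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -X PUT -d '
{
  "uri": "/*",
  "upstream_id": "v1",
  "plugins": {
    "traffic-split": {
      "rules": [{
        "match": [
          { "vars": [["http_X-Canary", "~=", "Let-Me-Go-To-v2"]] }
        ],
        "weighted_upstreams": [
          { "upstream_id": "v2", "weight": 5 },
          { "weight": 95 }
        ]
      }]
    }
  }
}'
```

Whichever way the route is configured, we can then verify the behavior.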
The following command always forwards to v1:

```shell
curl http://localhost:9080
```

The following command also always forwards to v1:

```shell
curl -H 'X-Canary: Let-Me-Go-To-v1' http://localhost:9080
```

The following command splits the traffic according to the configured weights, i.e., 95/5:

```shell
curl -H 'X-Canary: Let-Me-Go-To-v2' http://localhost:9080
```

Conclusion

This post explained canary releases and how you can configure one via Apache APISIX. You can start with several routes with different priorities and move on to the traffic-split plugin. The latter can even be configured further to allow more complex use cases. The complete source code for this post can be found on GitHub.

To Go Further

- CanaryRelease on Martin Fowler's bliki
- traffic-split
- Implementation of canary release solution based on Apache APISIX
- Canary Release in Kubernetes With Apache APISIX Ingress
- Smooth Canary Release Using APISIX Ingress Controller with Flagger
- Apache APISIX Canary Deployments
AI holds significant promise for the IoT, but running these models on IoT semiconductors is challenging. These devices’ limited hardware makes running intelligent software locally difficult. Recent breakthroughs in neuromorphic computing (NC) could change that. Even outside the IoT, AI faces a scalability problem. Running larger, more complex algorithms with conventional computing consumes a lot of energy. The strain on power management semiconductors aside, this energy usage leads to sustainability and cost complications. For AI to sustain its current growth, tech companies must rethink their approach to computing itself. What Is Neuromorphic Computing? Neuromorphic computing models computer systems after the human brain. As neural networks teach software to think like humans, NC designs circuits to imitate human synapses and neurons. These biological systems are far more versatile and efficient than artificial “thinking” machines, so taking inspiration from them could lead to significant computing advancements. NC has been around as a concept for decades but has struggled to come to fruition. That may not be the case for long. Leading computing companies have come out with and refined several neuromorphic chips over the past few years. Another breakthrough came in August 2022, when researchers revealed a neuromorphic chip twice as energy efficient than previous models. These circuits typically store memory on the chip — or neuron — instead of connecting separate systems. Many also utilize analog memory to store more data in less space. NC is also parallel by design, letting all components operate simultaneously instead of processes moving from one point to another. How Neuromorphic Computing Could Change AI and IoT As this technology becomes more reliable and accessible, it could forever change the IoT semiconductor. This increased functionality would enable further improvements in AI, too. Here are a few of the most significant of these benefits. More Powerful AI Neuromorphic computing’s most obvious advantage is that it can handle much more complex tasks on smaller hardware. Conventional computing struggles to overcome the Von Neumann bottleneck — moving data between memory and processing locations slows it down. Since NC collocates memory and processing, it avoids this bottleneck. Recent neuromorphic chips are 4,000 times faster than the previous generation and have lower latencies than any conventional system. Consequently, they enable much more responsive AI. Near-real-time decision-making in applications like driverless vehicles and industrial robots would become viable. These AI systems could be as responsive and versatile as the human brain. The same hardware could process real-time responses in power management semiconductors and monitor for cyber threats in a connected energy grid. Robots could fill multiple roles as needed instead of being highly specialized. Lower Power Consumption NC also poses a solution to AI’s power problem. Like the human brain, NC is event-driven. Each specific neuron wakes in response to signals from others and can function independently. As a result, the only components using energy at any given point are those actually processing data. This segmentation, alongside the removal of the Von Neumann bottleneck, means NCs use far less energy while accomplishing more. On a large scale, that means computing giants can minimize their greenhouse gas emissions. On a smaller scale, it makes local AI computation possible on IoT semiconductors. 
Extensive Edge Networks The combination of higher processing power and lower power consumption is particularly beneficial for edge computing applications. Experts predict 75% of enterprise data processing will occur at the edge by 2025, but edge computing still faces several roadblocks. Neuromorphic computing promises a solution. Conventional IoT devices lack the processing capacity to run advanced applications in near-real-time locally. Network constraints further restrain that functionality. By making AI more accessible on smaller, less energy-hungry devices, NC overcomes that barrier. NC also supports the scalability the edge needs. Adding more neuromorphic chips increases these systems’ computing capacity without introducing energy or speed bottlenecks. As a result, it’s easier to implement a wider, more complex device network that can effectively function as a cohesive system. Increased Reliability NC could also make AI and IoT systems more reliable. These systems store information in multiple places instead of a centralized memory unit. If one neuron fails, the rest of the system can still function normally. This resilience complements other IoT hardware innovations to enable hardier edge computing networks. Thermoset composite plastics could prevent corrosion in the semiconductor, protecting the hardware, while NC ensures the software runs smoothly even if one component fails. These combined benefits expand the IoT’s potential use cases, bringing complex AI processes to even the most extreme environments. Edge computing systems in heavy industrial settings like construction sites or mines would become viable. Remaining Challenges in NC NC’s potential for IoT semiconductors and AI applications is impressive, but several obstacles remain. High costs and complexity are the most obvious. These brain-mimicking semiconductors are only effective with more recent, expensive memory and processing components. On top of introducing higher costs, these technologies’ newness means limited data on their efficacy in real-world applications. Additional testing and research will inevitably lead to breakthroughs past these obstacles, but that will take time. Most AI models today are also designed with conventional computing architectures in mind. Converting them for optimized use on a neuromorphic system could lower model accuracy and introduce additional costs. AI companies must develop NC-specific models to use this technology to its full potential. As with any AI application, neuromorphic computing may heighten ethical concerns. AI poses serious ethical challenges regarding bias, employment, cybersecurity, and privacy. If NC makes IoT semiconductors capable of running much more advanced AI, those risks become all the more threatening. Regulators and tech leaders must learn to navigate this moral landscape before deploying this new technology. Neuromorphic Computing Will Change the IoT Semiconductor Neuromorphic computing could alter the future of technology, from power management semiconductors to large-scale cloud data centers. It’d spur a wave of more accurate, versatile, reliable, and accessible AI, but those benefits come with equal challenges. NC will take more research and development before it’s ready for viable real-world use. However, its potential is undeniable. This technology will define the future of AI and the IoT. The question is when that will happen and how positive that impact will be.
The pursuit of speed and agility in software development has given rise to methodologies and practices that transcend traditional boundaries. Continuous testing, a cornerstone of modern DevOps practices, has evolved to meet the demands of accelerated software delivery. In this article, we'll explore the latest advancements in continuous testing, focusing on how it intersects with microservices and serverless architectures.

I. The Foundation of Continuous Testing

Continuous testing is a practice that emphasizes the need for testing at every stage of the software development lifecycle. From unit tests to integration tests and beyond, this approach aims to detect and rectify defects as early as possible, ensuring a high level of software quality. It extends beyond mere bug detection; it encapsulates a holistic approach. While unit tests scrutinize individual components, integration tests evaluate the collaboration between diverse modules. The practice not only minimizes defects but also strengthens the robustness of the entire system. Its significance lies in fostering a continuous loop of refinement, where feedback from tests informs and enhances subsequent development cycles, creating a culture of continual improvement.

II. Microservices: Decoding the Complexity

Microservices architecture has become a dominant force in modern application development, breaking down monolithic applications into smaller, independent services. This signifies a departure from monolithic applications, introducing a paradigm shift in how software is developed and deployed. While this architecture offers scalability and flexibility, it comes with the challenge of managing and testing a multitude of distributed services. Microservices' complexity demands a nuanced testing strategy that acknowledges their independent functionalities and interconnected nature.

Decomposed Testing Strategies

Decomposed testing strategies are key to effective microservices testing. This approach advocates for the examination of each microservice in isolation. It involves a rigorous process of testing individual services to ensure their functionality meets specifications, followed by comprehensive integration testing. This methodical approach not only identifies defects at an early stage but also guarantees seamless communication between services, aligning with the modular nature of microservices. It fosters a testing ecosystem where each microservice is considered an independent unit, contributing to the overall reliability of the system. A sample of testing strategies that fall into this category includes, but is not limited to:

Unit Testing for Microservices

Unit testing may be used to verify the correctness of individual microservices. If you have a microservice responsible for user authentication, for example, unit tests would check whether the authentication logic works correctly, handles different inputs, and responds appropriately to valid and invalid authentication attempts.

Component Testing for Microservices

Component testing may be used to test the functionality of a group of related microservices or components. In an e-commerce system, for example, you might have microservices for product cataloging, inventory management, and order processing. Component testing would involve verifying that these microservices work together seamlessly to enable processes like placing an order, checking inventory availability, and updating the product catalog.
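To make the unit level above more concrete, here is a minimal pytest sketch for the user-authentication example. It is hedged: the auth_service module, its authenticate function, and the ValueError behavior are assumptions made for illustration, not details from a real codebase.

```python
# test_auth_service.py -- illustrative sketch; auth_service.authenticate is hypothetical
import pytest

from auth_service import authenticate  # hypothetical module under test


def test_valid_credentials_return_a_token():
    # A known-good username/password pair should yield a non-empty token
    assert authenticate("alice", "correct-horse-battery-staple")


def test_wrong_password_is_rejected():
    # Invalid credentials should fail loudly instead of returning a token
    with pytest.raises(ValueError):
        authenticate("alice", "not-the-password")


@pytest.mark.parametrize("username, password", [("", "secret"), ("alice", ""), (None, None)])
def test_malformed_input_is_rejected(username, password):
    # Missing or empty credentials are treated as invalid input
    with pytest.raises(ValueError):
        authenticate(username, password)
```

Component and contract tests then build on this foundation by exercising several services together rather than a single unit in isolation.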
Contract Testing This is used to ensure that the contracts between microservices are honored. If microservice A relies on data from microservice B, contract tests would verify that microservice A can correctly consume the data provided by microservice B. This may ensure that changes to microservice B don't inadvertently break the expectations of microservice A. Performance Testing for Microservices Performance tests on a microservice could involve evaluating its response time, scalability, and resource utilization under various loads. This helps identify potential performance bottlenecks early in the development process. Security Testing for Microservices Security testing for a microservice might involve checking for vulnerabilities, ensuring proper authentication and authorization mechanisms are in place, and verifying that sensitive data is handled securely. Fault Injection Testing This is to assess the resilience of each microservice to failures. You could intentionally inject faults, such as network latency or service unavailability, into a microservice and observe how it responds. This helps ensure that microservices can gracefully handle unexpected failures. Isolation Testing Isolation testing verifies that a microservice operates independently of others. Isolation tests may involve testing a microservice with its dependencies mocked or stubbed. This ensures that the microservice can function in isolation and doesn't have hidden dependencies that could cause issues in a real-world environment. Service Virtualization Service virtualization is indispensable to microservices. It addresses the challenge of isolating and testing microservices by allowing teams to simulate their behavior in controlled environments. Service virtualization empowers development and testing teams to create replicas of microservices, facilitating isolated testing without dependencies on the entire system. This approach not only accelerates testing cycles but also enhances the accuracy of results by replicating real-world scenarios. It may become an enabler, ensuring thorough testing without compromising the agility required in the microservices ecosystem. API Testing Microservices heavily rely on APIs for seamless communication. Robust API testing becomes paramount in validating the reliability and functionality of these crucial interfaces. An approach to API testing involves scrutinizing each API endpoint's response to various inputs and edge cases. This examination may ensure that microservices can effectively communicate and exchange data as intended. API testing is not merely a validation of endpoints; it is a verification of the entire communication framework, forming a foundational layer of confidence in the microservices architecture. III. Serverless Computing: Revolutionizing Deployment Serverless computing takes the abstraction of infrastructure to unprecedented levels, allowing developers to focus solely on code without managing underlying servers. While promising unparalleled scalability and cost efficiency, it introduces a paradigm shift in testing methodologies that demands a new approach to ensure the reliability of serverless applications. Event-Driven Testing Serverless architectures are often event-driven, responding to triggers and stimuli. Event-driven testing becomes a cornerstone in validating the flawless execution of functions triggered by events. 
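As an illustration, an event-driven test can drive the function directly with a synthetic event. The sketch below is hedged: it assumes a hypothetical Python handler handler(event, context) in a module named thumbnailer, and it uses the general shape of an S3 object-created notification.

```python
# test_thumbnailer_events.py -- illustrative sketch; thumbnailer.handler is hypothetical
from thumbnailer import handler  # hypothetical serverless function under test


def make_s3_put_event(bucket, key):
    # Build a synthetic S3 "ObjectCreated:Put" notification payload
    return {
        "Records": [
            {
                "eventSource": "aws:s3",
                "eventName": "ObjectCreated:Put",
                "s3": {
                    "bucket": {"name": bucket},
                    "object": {"key": key},
                },
            }
        ]
    }


def test_handler_processes_a_new_image_event():
    event = make_s3_put_event("uploads", "photos/cat.jpg")
    # Many handlers ignore the context argument, so None keeps the test self-contained
    response = handler(event, None)
    assert response["statusCode"] == 200


def test_handler_tolerates_an_empty_event():
    # An event with no records should not crash the function
    response = handler({"Records": []}, None)
    assert response["statusCode"] in (200, 204)
```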
One approach involves not only scrutinizing the function's response to specific events but also assessing its adaptability to dynamic and unforeseen triggers. Event-driven testing ensures that serverless applications respond accurately and reliably to diverse events, fortifying the application against potential discrepancies. This approach could be pivotal in maintaining the responsiveness and integrity of serverless functions in an event-centric environment. Cold Start Challenges Testing the performance of serverless functions, especially during cold starts, emerges as a critical consideration in serverless computing. One approach to addressing cold start challenges involves continuous performance testing. This may help serverless functions perform optimally even when initiated from a dormant state, identifying and addressing latency issues promptly. By proactively tackling cold start challenges, development teams may confidently allow for a seamless user experience, regardless of the serverless function's initialization state. Third-Party Services Integration Serverless applications often rely on seamless integration with third-party services. Ensuring compatibility and robustness in these integrations becomes a crucial aspect of continuous testing for serverless architectures. One approach involves rigorous testing of the interactions between serverless functions and third-party services, verifying that data exchanges occur flawlessly. By addressing potential compatibility issues and ensuring the resilience of these integrations, development teams may fortify the serverless application's reliability and stability. IV. Tools and Technologies The evolution of continuous testing can be complemented by a suite of tools and technologies designed to streamline testing processes in microservices and serverless architectures. These tools not only facilitate testing but also enhance the overall efficiency and effectiveness of the testing lifecycle. Testing Frameworks for Microservices Tools like JUnit, TestNG, Spock, Pytest, and Behave are a sample of tools that can be useful in the comprehensive testing of microservices. These frameworks support unit tests, integration tests, and end-to-end tests. Contract tests may further validate that each microservice adheres to specified interfaces and communication protocols. Serverless Testing Tools Frameworks such as AWS SAM (Serverless Application Model), Serverless Framework, AWS Lambda Test, Azure Functions Core Tools, and Serverless Offline are all tools that help you develop, test, and deploy serverless applications. However, they have different features and purposes. AWS SAM is a tool that makes it easier to develop and deploy serverless applications on AWS. It provides a YAML-based syntax for defining your serverless applications, and it integrates with AWS CloudFormation to deploy your applications. Additionally, AWS SAM provides a local development environment that lets you test your applications before deploying them to AWS. Serverless Framework is a tool that supports serverless deployments on multiple cloud providers, including AWS, Azure, and Google Cloud Platform (GCP). It provides a CLI interface for creating, updating, and deploying serverless applications. Additionally, Serverless Framework provides a plugin system that lets you extend its functionality with third-party extensions. AWS Lambda Test is a tool that lets you test your AWS Lambda functions locally. 
It provides a simulated AWS Lambda environment that you can use to run your functions and debug errors. Additionally, AWS Lambda Test can generate test cases for your Lambda functions, which can help you improve your code coverage.

Azure Functions Core Tools is a tool that lets you develop and test Azure Functions locally. It provides a CLI interface for creating, updating, and running Azure Functions. Additionally, Azure Functions Core Tools can generate test cases for your Azure Functions, which can help you improve your code coverage.

Serverless Offline is a tool that lets you test serverless applications locally, regardless of the cloud provider that you are using. It provides a simulated cloud environment that you can use to run your serverless applications and debug errors. Additionally, Serverless Offline can generate test cases for your serverless applications, which can help you improve your code coverage.

Here is a table that summarizes the key differences between the five tools:

| Feature | AWS SAM | Serverless Framework | AWS Lambda Test | Azure Functions Core Tools | Serverless Offline |
| --- | --- | --- | --- | --- | --- |
| Cloud provider support | AWS | AWS, Azure, GCP | AWS | Azure | Multi-cloud |
| Deployment | YAML-based syntax; integrates with AWS CloudFormation | CLI interface | Not supported | CLI interface | Not supported |
| Local development environment | Yes | Yes | Yes | Yes | Yes |
| Plugin system | No | Yes | No | No | No |
| Test case generation | Yes | No | Yes | Yes | Yes |

CI/CD Integration

Continuous testing seamlessly integrates with CI/CD pipelines, forming a robust and automated testing process. Tools such as Jenkins, GitLab CI, and Travis CI orchestrate the entire testing workflow, ensuring that each code change undergoes rigorous testing before deployment. The integration of continuous testing with CI/CD pipelines provides a mechanism for maintaining software quality while achieving the speed demanded by today's digital economy.

V. Wrapping Up

Continuous testing is a central element in the process of delivering software quickly and reliably. It's an essential part that holds everything together, since it involves consistently checking the software for issues and bugs throughout its development. As microservices and serverless architectures continue to reshape the software landscape, the role of continuous testing becomes even more pronounced. Embracing the challenges posed by these innovative architectures and leveraging the latest tools and methodologies may empower development teams to deliver high-quality software at the speed demanded by today's digital economy.
In the dynamic landscape of modern application development, the synthesis of Streamlit, OpenAI, and Elasticsearch presents an exciting opportunity to craft intelligent chatbot applications that transcend conventional interactions. This article guides developers through the process of building a sophisticated chatbot that seamlessly integrates the simplicity of Streamlit, the natural language processing prowess of OpenAI, and the robust search capabilities of Elasticsearch. As we navigate through each component, from setting up the development environment to optimizing performance and deployment, readers will gain invaluable insights into harnessing the power of these technologies. Join us in exploring how this potent trio can elevate user engagement, foster more intuitive conversations, and redefine the possibilities of interactive, AI-driven applications. What Is Streamlit? Streamlit is a powerful and user-friendly Python library designed to simplify the creation of web applications, particularly for data science and machine learning projects. It stands out for its ability to transform data scripts into interactive and shareable web apps with minimal code, making it accessible to both beginners and experienced developers. Streamlit's emphasis on simplicity and rapid prototyping significantly reduces the learning curve associated with web development, allowing developers to focus on the functionality and user experience of their applications. Why Choose Streamlit for Building Chatbot Applications When it comes to constructing chatbot applications, Streamlit offers a compelling set of advantages. Its simplicity enables developers to create dynamic chat interfaces with ease, streamlining the development process. The library's real-time feedback feature allows for instant adjustments, facilitating quick iterations during the development of conversational interfaces. Streamlit's integration capabilities with data processing libraries and machine learning models make it well-suited for chatbots that require data interaction and AI-driven functionalities. Additionally, the platform's commitment to rapid prototyping aligns seamlessly with the iterative nature of refining chatbot interactions based on user feedback. Overview of Streamlit’s Features and Benefits Streamlit boasts a rich set of features that enhance the development of chatbot applications. Its diverse widgets, including sliders, buttons, and text inputs, empower developers to create interactive interfaces without delving into complex front-end coding. The platform supports easy integration of data visualization tools, making it convenient for chatbots to present information graphically. Streamlit's customization options allow developers to tailor the look and feel of their applications, ensuring a polished and brand-aligned user experience. Furthermore, Streamlit simplifies the deployment process, enabling developers to share their chatbot applications effortlessly through URLs, contributing to wider accessibility and user engagement. In essence, Streamlit offers a potent combination of simplicity, flexibility, and deployment convenience, making it an optimal choice for developers seeking an efficient framework for building intelligent chatbot applications. Overview of Chatbots Chatbots, driven by advancements in natural language processing (NLP) and artificial intelligence, have become integral components of digital interactions across various industries. 
These intelligent conversational agents are designed to simulate human-like interactions, providing users with a seamless and responsive experience. Deployed on websites, messaging platforms, and mobile apps, chatbots serve diverse purposes, from customer support and information retrieval to transaction processing and entertainment. One key driver behind the rise of chatbots is their ability to enhance customer engagement and satisfaction. By leveraging NLP algorithms, chatbots can understand and interpret user queries, allowing for more natural and context-aware conversations. This capability not only improves the efficiency of customer interactions but also provides a personalized touch, creating a more engaging user experience. Chatbots are particularly valuable in scenarios where instant responses and round-the-clock availability are essential, such as in customer service applications. Beyond customer-facing interactions, chatbots also find utility in streamlining business processes. They can automate repetitive tasks, answer frequently asked questions, and assist users in navigating through services or products. Moreover, chatbots contribute to data collection and analysis, as they can gather valuable insights from user interactions, helping organizations refine their products and services. As technology continues to evolve, chatbots are poised to play an increasingly pivotal role in shaping the future of human-computer interactions, offering a versatile and efficient means of communication across a wide array of domains. Introduction to OpenAI OpenAI stands as a trailblazer in the realm of artificial intelligence, known for pushing the boundaries of what machines can achieve in terms of understanding and generating human-like language. Established with a mission to ensure that artificial general intelligence (AGI) benefits all of humanity, OpenAI has been at the forefront of cutting-edge research and development. The organization's commitment to openness and responsible AI practices is reflected in its pioneering work, which includes the creation of advanced language models like GPT (Generative Pre-trained Transformer). OpenAI's contributions have reshaped the landscape of natural language processing, empowering applications ranging from chatbots and language translation to content generation. As a driving force in the AI community, OpenAI continues to pave the way for innovations that not only enhance machine capabilities but also address ethical considerations and the broader societal impact of artificial intelligence. Setting up the Development Environment Below are key steps to set up the development environment for building a Streamlit Chatbot Application with OpenAI and Elasticsearch: Install Streamlit: Begin by installing Streamlit using pip install streamlit in your Python environment. Streamlit simplifies the creation of interactive web applications and serves as the foundation for your chatbot interface. OpenAI API Access: Obtain access to the OpenAI API by signing up on the OpenAI platform. Retrieve your API key, which will enable your application to leverage OpenAI's natural language processing capabilities for intelligent chatbot responses. Set up Elasticsearch: Install and configure Elasticsearch, a powerful search engine, to enhance your chatbot's capabilities. You can download Elasticsearch from the official website and follow the setup instructions to get it running locally. 
Dependencies: Ensure you have the necessary Python libraries installed, including those required for interfacing with OpenAI (e.g., the openai library) and connecting to Elasticsearch (e.g., the elasticsearch library).

How To Build a Chatbot

Building a chatbot that integrates Elasticsearch for information retrieval and OpenAI for advanced natural language processing involves several steps. Below is a simplified example using Python, Streamlit for the interface, and the elasticsearch and openai libraries.

Step 1: Install Required Libraries

```shell
pip install streamlit openai elasticsearch
```

Step 2: Set Up Elasticsearch Connection

Make sure the local Elasticsearch instance from the previous section is running; the connection itself is created in the application code below with es = Elasticsearch().

Step 3: Update OpenAI API Key

Update your_openai_api_key in the code with your OpenAI API key from the OpenAI platform.

Step 4: Create a Streamlit App

```python
import streamlit as st
import openai
from elasticsearch import Elasticsearch

# Set up OpenAI API key
openai.api_key = 'your_openai_api_key'

# Set up Elasticsearch connection
es = Elasticsearch()

# Streamlit App
def main():
    st.title("Chatbot using OpenAI and Elasticsearch")

    # User input
    user_input = st.text_input("Question:")

    if st.button("Answer"):
        # Call OpenAI API for generating response
        response = get_openai_response(user_input)

        # Display response
        st.text("Response: " + response)

        # Store the conversation in Elasticsearch
        index_conversation(user_input, response)

# OpenAI API call function
def get_openai_response(user_input):
    prompt = f"User: {user_input}\nChatbot:"
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        temperature=0.7,
        max_tokens=150,
        n=1,
    )
    return response['choices'][0]['text'].strip()

# Store conversation in Elasticsearch
def index_conversation(user_input, chatbot_response):
    doc = {
        'user_input': user_input,
        'chatbot_response': chatbot_response
    }
    es.index(index='chat_data', body=doc)

if __name__ == "__main__":
    main()
```

Step 5: Run the Streamlit App

```shell
streamlit run your_script_name.py
```

Enhancements and Efficiency Suggestions

When integrating OpenAI with Elasticsearch using Streamlit, there are several enhancements and optimization techniques you can implement to improve the performance, user experience, and overall functionality of your chatbot application. Here are some suggestions: Context tracking for multi-turn conversations: Enhance the chatbot to handle multi-turn conversations by maintaining context between user interactions. Error handling: Implement robust error handling to gracefully manage situations where Elasticsearch queries return no results or when there are issues with the OpenAI API. User authentication and personalization: Consider implementing user authentication to personalize the chatbot experience. Optimize Elasticsearch queries: Fine-tune your Elasticsearch queries for optimal performance. Caching responses: Implement a caching mechanism to store and retrieve frequently used responses from both Elasticsearch and OpenAI. Implement throttling and rate limiting: To prevent abuse and control costs, consider implementing throttling and rate limiting for both Elasticsearch and OpenAI API requests. Integration with additional data sources: Expand the chatbot's capabilities by integrating it with other data sources or APIs. Natural Language Understanding (NLU) enhancements: Improve the natural language understanding of your chatbot by incorporating NLU models or techniques.
User interface enhancements: Enhance the Streamlit user interface by incorporating features like interactive buttons, sliders, or dropdowns for user input. Monitoring and analytics: Implement monitoring and analytics tools to track user interactions, performance metrics, and potential issues. A/B testing: Conduct A/B testing to experiment with different variations of your chatbot's responses, Elasticsearch queries, or user interface elements. Security considerations: Ensure that your application follows best practices for security, especially when handling user data or sensitive information. Documentation and user guidance: Provide clear documentation and user guidance within the application to help users understand the capabilities of the chatbot. By incorporating these enhancements and optimization techniques, you can create a more robust, efficient, and user-friendly OpenAI and Elasticsearch integration using Streamlit. Use Cases Integrating OpenAI with Elasticsearch using Streamlit can offer a versatile solution for various use cases where natural language understanding, information retrieval, and user interaction are crucial. Here are a few use cases for such an integration: Customer support chatbots: Deploy an OpenAI-powered chatbot integrated with Elasticsearch for quick and accurate responses to customer queries. Knowledge base access: Enable users to access and search through a knowledge base using natural language queries. Interactive educational platforms: Develop interactive educational platforms where students can engage in natural language conversations with an OpenAI-based tutor. Technical troubleshooting: Build a technical support chatbot that assists users in troubleshooting issues. Interactive data exploration: Develop a chatbot that assists users in exploring and analyzing data stored in Elasticsearch indices. Personalized content recommendations: Implement a content recommendation chatbot that uses OpenAI to understand user preferences. Legal document assistance: Build a chatbot to assist legal professionals in retrieving information from legal documents stored in Elasticsearch. These use cases highlight the versatility of integrating OpenAI with Elasticsearch using Streamlit, offering solutions across various domains where natural language understanding and effective information retrieval are paramount. Conclusion Integration of OpenAI with Elasticsearch through the Streamlit framework offers a dynamic and sophisticated solution for building intelligent chatbot applications. This synergy harnesses the natural language processing capabilities of OpenAI, the efficient data retrieval of Elasticsearch, and the streamlined interface of Streamlit to create a responsive and user-friendly conversational experience. The outlined enhancements, from context tracking and error handling to user authentication and personalized responses, contribute to a versatile chatbot capable of addressing diverse user needs. This guide provides a comprehensive blueprint, emphasizing optimization techniques, security considerations, and the importance of continuous improvement through monitoring and A/B testing. Ultimately, the resulting application not only interprets user queries accurately but also delivers a seamless, engaging, and efficient interaction, marking a significant stride in the evolution of intelligent chatbot development.
In this blog, you will learn how to monitor a Spring Boot application using Ostara. Ostara is a desktop application that monitors and manages your application. Enjoy! Introduction When an application runs in production (but also your other environments), it is wise to monitor its health. You want to make sure that everything is running without any problems, and the only way to know this is to measure the health of your application. When something goes wrong, you hopefully will be notified before your customer notices the problem, and maybe you can solve the problem before your customer notices anything. In a previous post, it was explained how to monitor your application using Spring Actuator, Prometheus, and Grafana. In this post, you will take a look at an alternative approach using Spring Actuator in combination with Ostara. The setup with Ostara is a bit easier; therefore, it looks like a valid alternative. The proof of the pudding is in the eating, so let’s try Ostara! The sources used in this blog are available on GitHub. Prerequisites The prerequisites needed for this blog are: Basic Spring Boot 3 knowledge; Basic Linux knowledge; Java 17 is used. Create an Application Under Test First, you need to create an application that you can monitor. Navigate to Spring Initializr and add the Spring Web and Spring Boot Actuator dependencies. Spring Web will be used to create two dummy Rest endpoints, and Spring Boot Actuator will be used to enable the monitor endpoints. See a previous post in order to get more acquainted with Spring Boot Actuator. The post is written for Spring Boot 2, but the contents are still applicable for Spring Boot 3. Add the git-commit-id-plugin to the pom file in order to be able to generate build information. Also, add the build-info goal to the executions of the spring-boot-maven-plugin in order to generate the information automatically during a build. See a previous post if you want to know more about the git-commit-id-plugin. XML <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <executions> <execution> <goals> <goal>build-info</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>pl.project13.maven</groupId> <artifactId>git-commit-id-plugin</artifactId> <version>4.9.10</version> <executions> <execution> <id>get-the-git-infos</id> <goals> <goal>revision</goal> </goals> </execution> </executions> <configuration> <dotGitDirectory>${project.basedir}/.git</dotGitDirectory> <prefix>git</prefix> <verbose>false</verbose> <generateGitPropertiesFile>true</generateGitPropertiesFile> <generateGitPropertiesFilename>${project.build.outputDirectory}/git.properties</generateGitPropertiesFilename> <format>properties</format> <gitDescribe> <skip>false</skip> <always>false</always> <dirty>-dirty</dirty> </gitDescribe> </configuration> </plugin> </plugins> </build> Enable the full git information to the actuator endpoint in the application.properties. Properties files management.info.git.mode=full Add a Rest controller with two dummy endpoints. Java @RestController public class MetricsController { @GetMapping("/endPoint1") public String endPoint1() { return "Metrics for endPoint1"; } @GetMapping("/endPoint2") public String endPoint2() { return "Metrics for endPoint2"; } } Build the application. Shell $ mvn clean verify Run the application. Shell $ java -jar target/myostaraplanet-0.0.1-SNAPSHOT.jar Verify the endpoints. 
Shell $ curl http://localhost:8080/endPoint1 Metrics for endPoint1 $ curl http://localhost:8080/endPoint2 Metrics for endPoint2 Verify the actuator endpoint. Shell $ curl http://localhost:8080/actuator | python3 -mjson.tool ... { "_links": { "self": { "href": "http://localhost:8080/actuator", "templated": false }, "health": { "href": "http://localhost:8080/actuator/health", "templated": false }, "health-path": { "href": "http://localhost:8080/actuator/health/{*path}", "templated": true } } } Add Security The basics are in place now. However, it is not very secure. Let’s add authorization to the actuator endpoint. Beware that the setup in this paragraph is not intended for production usage. Add the Spring Security dependency to the pom. XML <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency> Add the credentials and role to the application.properties file. Again, do not use this for production purposes. Properties files spring.security.user.name=admin spring.security.user.password=admin123 spring.security.user.roles=ADMIN Add a WebSecurity class, which adds the security layer to the actuator endpoint. Java @Configuration @EnableWebSecurity public class WebSecurity { @Bean public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception { http.authorizeHttpRequests(authz -> authz .requestMatchers("/actuator/**").hasRole("ADMIN") .anyRequest().permitAll()) .httpBasic(Customizer.withDefaults()); return http.build(); } } Build and start the application. Verify whether the actuator endpoint can be accessed using the credentials as specified. Shell $ curl http://localhost:8080/actuator -u "admin:admin123" | python3 -mjson.tool ... { "_links": { "self": { "href": "http://localhost:8080/actuator", "templated": false }, "health": { "href": "http://localhost:8080/actuator/health", "templated": false }, "health-path": { "href": "http://localhost:8080/actuator/health/{*path}", "templated": true } } } Install Ostara Navigate to the Ostara website and click the Download Ostara button. Choose the platform you are using (Linux 64bit, in my case), and the file Ostara-0.12.0.AppImage is downloaded. Double-click the file, and Ostara is started. That’s all! Monitor Application By default, only a limited set of actuator endpoints are enabled. Ostara will function with this limited set, but less information will be visible as a consequence. In order to see the full set of capabilities of Ostara, you enable all actuator endpoints. Again, beware of how much you expose in production. Properties files management.endpoints.web.exposure.include=* management.endpoint.health.show-details=always Before you continue using Ostara, you are advised to disable sending usage statistics and error information. Navigate to the settings (right top corner), choose Privacy, and disable the tracking options. In the left menu, choose Create Instance and fill in the fields as follows: Actuator URL Alias: MyFirstInstance Application Name: MyFirstApp Disable SSL Verification: Yes (for this demo, no SSL connection is used) Authentication Type: Basic Username and Password: the admin credentials Click the Test Connection button. This returns an unauthorized error, which appears to be a bug in Ostara because the credential information is correct. Ignore the error and click the Save button. Ostara can connect to the application, and the dashboard shows some basic status information. You can explore all the available information for yourself. 
Some of them are highlighted below. Info The Info page shows you the information which you made available with the help of the git-commit-id-plugin. App Properties The App Properties page shows you the application properties. However, as you can see in the below screenshot, all values are masked. This is the default Spring Boot 3 behavior. This behavior can be changed in application.properties of the Spring Boot Application. You can choose between always (not recommended), when-authorized or never. Properties files management.endpoint.configprops.show-values=when-authorized management.endpoint.env.show-values=when-authorized Build and start the application again. The values are visible. Metrics The Metrics page allows you to enable notifications for predefined or custom metrics. Open the http.server.requests metric and click the Add Metric Notification. Fill in the following in order to create a notification when EndPoint1 is invoked more than ten times, and click the Save button: Name: EndPoint 1 invoked > 10 times Type: Simple Tags: /endPoint1 Operation: Greater Than Value: 10 Invoke EndPoint1 more than ten times in a row. Wait for a minute, and the notification appears at the top of your main screen. Loggers The Loggers page shows you the available loggers, and you are able to change the desired log level. This is an interesting feature when you need to analyze a bug. Click the DEBUG button for the com.mydeveloperplanet.myostaraplanet.MetricsController. A message is shown that this operation is forbidden. The solution is to disable the csrf protection for the actuator endpoints. For more information about csrf attacks, see this blog. Add the following line to the WebSecurity class. Java public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception { http.authorizeHttpRequests(authz -> authz .requestMatchers("/actuator/**").hasRole("ADMIN") .anyRequest().permitAll()) .csrf(csrf -> csrf .ignoringRequestMatchers("/actuator/**") ) .httpBasic(Customizer.withDefaults()); return http.build(); } Also, add some logging statements to the EndPoint1 code in order to verify the result. Java @RequestMapping("/endPoint1") public String endPoint1() { logger.debug("This is DEBUG message"); logger.trace("This is a TRACE message"); return "Metrics for endPoint1"; } Build and restart the application. Enable the DEBUG logging again for the MetricsController and invoke EndPoint1. The DEBUG statement is shown in the logs. Shell 2023-09-10T15:06:04.511+02:00 DEBUG 30167 --- [nio-8080-exec-8] c.m.myostaraplanet.MetricsController : This is DEBUG message Multiple Instances When you have multiple instances of your application, you can create another instance to monitor. Start another instance of the application on port 8081. Shell $ java -jar -Dserver.port=8081 target/myostaraplanet-0.0.1-SNAPSHOT.jar Hover over MyFirstApp and click the three dots menu. Choose Add Instance and fill in the following: Actuator URL Alias: MySecondInstance Clicking the Test Connection button is successful this time. Click the Save button. The second instance is added, and the application dashboard view shows the summary information. Conclusion Ostara is a good alternative for monitoring Spring Boot applications. The installation only requires you to download a file and start it. Ostara gives you a clear visual view, and the notifications notify you when something is wrong. It is also capable of starting thread profiling on your instance and downloading heap dumps. 
Compared to Ostara, Grafana offers fancier graphs. However, Ostara is more than a visualization tool: you can also interact with your application and receive notifications when something goes wrong.
I’ve noticed two danger zones that organizations run into. Today, I’ll describe these two danger zones, and give some advice for navigating them. I’ll talk about this for engineering organizations. But I suspect it’s applicable to any group of humans working at these scales. Why Are These Danger Zones Important? I’ve seen several companies waste years getting trapped in these danger zones. That time is precious for a startup and can result in the business failing. I’ve seen people who are smarter than me get into these traps over and over. I believe the reason for that is that these are structural problems. Solving them requires some deep refactoring of your organization, and most people haven’t done this type of work before. So, they fail, and their company and employees suffer. The growth traps in these danger zones interact with other leadership and organizational problems in harmful ways. For example, if you have a leadership team that tends to micromanage, these growth traps will make it worse. Or if you have a lack of leadership alignment, that will be more prominent. So the traps in these danger zones have a disproportionate impact. Why Listen to Me on This? I have both breadth and depth of experience in these danger zones. At New Relic, we had an experienced leader who helped us navigate the first danger zone. However, I saw areas where we struggled and was involved throughout the process. I spent years grappling with the second danger zone at New Relic. It was the most critical deficiency in our engineering organization for years. It was something we continuously grappled with. We worked with Jim Shore, author of The Art of Agile Development, to successfully address it. I was part of the team that fixed it, and it shaped my thinking about the patterns behind how groups of humans can operate at increasingly larger scales. It has become a major theme in my leadership work ever since. I’ve since worked at almost twenty-five startups, and any of them that have grown through the danger zones have run into these same growth traps. I’ve focused a lot of my consulting practice on helping organizations get through these traps and avoid much of the suffering you would expect. Danger Zone #1: The Team Trap The first of these two danger zones is what I call the “team trap." It generally happens sometime between ten and twenty people. You start with a co-founder or leader who is in charge of engineering. The engineers all report to this person. There are lots of projects going on, and they get busier and busier as the team grows. Often, you’ll have each individual focusing on their own project! Because the team is small enough, everyone has a pretty decent sense of what is going on. You know what projects are being worked on, and how things are progressing. Priorities are fairly clear, and communication is uncomplicated. Usually, it all happens in one place, with everyone reading everything or in the same room. Communication is many to many. The future is bright. What Happens With the Team Trap As the team grows, however, things start to break down. The leader works longer and longer hours. Yet, it seems harder and harder to get work done, and execution and quality seem to be slipping. Communication seems muddled, and you hear more people wondering about priorities or what the strategy is. This is the Team Trap. You’re heading towards failure, and unless you reshape things, it will get worse and worse! Why It’s Hard To Fix the Team Trap You will face a few obstacles to make the right changes. 
First of all, the founders and early people are often really smart, incredibly dedicated people. They will work harder and harder, and try to brute force themselves through the Team Trap. That won’t work – it will only delay the solution. Second, some of the solutions to the Team Trap will feel like bureaucracy to the founders and early people in the company. They’ll resist the changes because they want to preserve the way the company felt early on – everything could happen quickly and effortlessly. They will often have an aversion to the changes they might need to make. What To Do About the Team Trap The changes you typically need to make at this phase are structural changes: You need to set up cross-functional teams with ownership. This is a hard thing to do well, and if done incorrectly, can actually make everything worse! I advise you to read Team Topologies, and to get help with this. You can also try an alternative approach: FAST agile collectives. You’ll need to start thinking about your organization’s design. You may need managers. But you may need a different type of manager than you think. You’ll need to think about how to design your meetings. You may need some lightweight role definition. Ultimately, someone has to start thinking about the way your organization is structured, and how all the pieces will fit together. Part of this organizational design is to also think through your communication design. You probably want to start segmenting communication, so that people know what they need to, but aren’t flooded with a lot of information they don’t need. There is a balance to this. You don’t want to over tilt towards structure, but you also don’t want to avoid necessary structure. All of this is pretty hard, and I’ve built a business helping engineering organizations with this (so definitely reach out if you need help with it, or find someone else to help you). Why Does the Team Trap Happen? Incidentally, I believe the reason this seems to happen at between ten and twenty engineers is because that’s when one person can no longer reasonably manage everyone in engineering. You have to start to split the world. And once you do that, it forces a lot of other changes to happen at the same time. It’s a little like when you have a web server that is delivering content over the internet. As soon as you want a second server for redundancy or scaling reasons, all of a sudden, you need a lot more in place to make things work. You may need a load balancer. You may need to think about state and caching since it is done independently for each server. All of these concerns happen at the same time. If you’re successful in your design, you’ll have a structure that will take you pretty far. Your teams will be autonomously creating value for the company. And things should go pretty well until you hit the second danger zone. Danger Zone #2: The Cross-Team Project Trap The next trap is with how teams work with each other. You reach a level of complexity where the primary challenge for your organization is how to ensure that anything that crosses team boundaries can be successful. As the number of teams grows, each of them delivers value. But they aren’t perfect encapsulations of delivery. Teams need things from each other. And as your product grows, you’ll need things from multiple teams. The themes for this stage are coordination and dependencies. How do you get teams to coordinate, to deliver something bigger than themselves? 
And how do you deal with the fact that dependencies often aren’t reasonable? How do you sort through those dependencies, and minimize them? The cross-team project danger zone occurs somewhere after about forty people. I often see it happen between forty and sixty people in an organization. At New Relic, we tried valiantly to fix cross-team projects, but we didn’t really succeed until we worked with Jim Shore, and at that time there were probably 200 engineers in the organization. It was long overdue. As an aside: It’s plausible that our failure to address this earlier was instrumental in Datadog’s ascendance. Why? We were much less effective in engineering at the time, and this slowed our ability to succeed in the enterprise market. Most of the bigger projects were enterprise features. Our focus on growing into the enterprise distracted us from Datadog’s rise and prevented us from addressing shifts in the way developers were working with microservices. It’s possible that handling this earlier could have resulted in a completely different outcome, though there were a lot of factors involved. What Happens With the Cross-Team Project Trap To understand the cross-team project trap, consider a couple of examples. First, you may want to do some work that affects many teams. For example, let’s say your customers are asking for role-based access controls. This is work that many teams will need to focus on. Yet the enabling work might be done by one team. This can require coordination. Another need you’ll see increasingly is that multiple teams need similar functionality. They might both want a similar table user interface. Or they might both require a similar API. Or they might depend on similar data. This type of work tends to require teams to depend on other teams to do work for them. This is a growth in dependencies. At some point, coordination and dependencies grow to become your most serious obstacle to delivery. You’ll know you’re in this second danger zone if you see some of these symptoms: There is a lack of confidence from the rest of the company that engineering can deliver large, important initiatives. The general track record is that engineering ships late, if at all. You see lots of heroic effort to deliver anything that crosses team boundaries. You may have a few people who are unusually good program managers, but even they have failures. You might see opposing instincts to add more structure, or operate more “like a startup." A few really experienced engineers who can get things done are held up as saviors. But the general default is that things don’t ship well. You have areas of the organization that are such hot spots that they go through waves of failure - often because they are the hot spot for dependencies. Why It’s Hard To Fix the Cross-Team Project Trap When I was at New Relic, I was leading up the engineering side of a new analytics product. It was an ambitious product. We had widespread agreement that it was a top priority. However, we needed things from many parts of engineering in order to deliver on the complete vision for the product. Those dependencies didn’t seem optional. So the way I attempted to handle this was by acting as a program manager. I tried to organize my dependencies as projects. Each of them would be updated on progress and risks. Fairly standard stuff for program management. But what I found is that the structure of the organization didn’t make the execution on this type of project possible. It was mathematically impossible. 
Every team had their own priorities. Even if they thought we were the top priority, that was subject to change. If they had a reliability problem, they had to do work to address that problem. Sometimes something new would come up and bump our priority back. I was essentially making plans that weren't based on anything structurally sound. It was difficult to fix at that point. I couldn't tell all of engineering how to operate! At the time, I didn't know what would fix the situation. We had tried for years at New Relic to crack the "cross-team project" problem. We rewarded people who were good at project and program management. We hired people who were good at it. We even made delivery of cross-team projects part of our promotion criteria! But ultimately, we didn't make the structural changes we needed. The challenges you face in fixing this at this stage mostly come down to organizational inertia. Changes can feel threatening. People can feel demotivated to make changes when they are under so much stress. Working within the existing system can take all of your bandwidth, so people will be reluctant to work extra to extricate themselves from the mess. And structural fixes can intrude on leadership turf, so you need a high level of support from the very top of the organization to make your changes. This is not something you can just ignore. It will continue to get worse and worse until any project that crosses team boundaries ends up being impossible to complete. What To Do About the Cross-Team Project Trap These are the types of changes I recommend if you run into the cross-team project trap: Centralize cross-team priorities (possibly with a product council) and teach teams how to work with those central priorities. Define organizational and team coordination models. For example, move platform teams from a service model to self-service. Make your product teams act as independent executors, assuming no dependencies in their projects. Carefully design which teams act in an embedded model. Use program managers for cross-team initiatives. Limit the number of cross-team initiatives. Reorganize your teams mostly along cross-functional lines. Reduce dependencies between teams. Or, you might experiment with FAST agile teams. Reduce the size of projects (using milestones or increments). Ideally, get help! Why Does the Cross-Team Project Trap Happen? I have a hunch these growth traps are the result of complexity jumps that occur when you add a layer of management. The first danger zone corresponds to when you add managers; the second danger zone is often when you add directors. When you have this jump in complexity, you have to shift the structure of the organization. Otherwise, you have a mismatch between what is necessary for that structure to be successful and the way it really is. This is not to say that adding managers or directors is what causes the problem. You can run into these growth traps even if you have managers or directors. It's just that the complexity jump and the new management layer tend to arrive at the same time. Incidentally, this is why I am bullish on FAST Agile. I think it may allow you to have a simpler organizational structure for a longer period of time. Combined with some of these other structures, I think the potential benefits outweigh the fact that it is a new, less well-developed practice. Thank You I try to credit people who have influenced my thinking or directly affected my approach. Much of the first danger zone is through my own observation.
I’m not sure I’ve seen anyone else articulate it. But I would guess others have seen the same thing – when I talk with other leaders or venture capitalists, they have a look of “yeah, that sounds familiar." For the second danger zone, there isn’t a place I can point to (that I remember) that highlights this as a scaling barrier for organizations. I’m pretty sure something must exist! For how to address it, my biggest source of credit goes to Jim Shore. His work at New Relic was quite effective, and it was a career highlight to work with him and the Upscale team to design and implement the solutions. While the coordination models have been my own pattern language for organizational and cross-team work, you’ll notice he is credited on many of them.
Working in technology for over three decades, I have found myself in a position of getting up to speed on something new at least 100 times. For about half of my career, I have worked in a consulting role, which is where I most often faced the challenge of understanding a new project. Over time, I established a personal goal to be productive on a new project in half the time it took the average team member. I often called this the time to first commit, or TTFC. The problem with my approach to setting a TTFC record was the unexpected level of stress that I endured during those periods. Family members and friends always knew when I was in the early stages of a brand-new project. At the time, however, since I always wanted to provide my clients with the best value for the rate they agreed to pay for my services, there really wasn't any other option. Recently, I discovered Unblocked … which makes it possible to crush the TTFCs I had set on past projects. About Unblocked Unblocked, which is still in beta at the time of this writing, is focused on removing the mysteries in your code. The AI platform trains on all the information about your project and then allows you to ask questions and get answers about your project and codebase. It absorbs threads stored within instant messaging, pull requests, source code, and bugs/stories/tasks within project management software. Even project information stored in content collaboration solutions can be consumed by Unblocked. Information from these various sources is then cataloged into a secured repository owned and maintained by Unblocked. From there, a simple user interface allows you to ask questions … and get answers fast … in a human-readable format. Use Case: The Service Owned By No One The idea of taking ownership of a service or solution that is owned by no one has become quite common as API adoption has skyrocketed. Services can be initialized to meet a shared need by contributors from various groups within the organization. This can be an effective approach to solving short-term problems; however, when there is no true service owner, the following long-term challenges can occur: Vulnerability mitigation – who is going to address vulnerabilities as they surface? Bug fixes and enhancements – who is going to fix or further extend the service? Tooling updates – who will handle larger scale migrations, like a change in CI/CD tooling? Supportability – who is responsible for answering general questions posed by service consumers? I ran into these exact issues recently because my team inherited a service that was effectively owned by no one. In fact, there were features within the service that had very little documentation, except the source code itself. The challenge for our team was that a bug existed within the original source code, and we weren't sure what the service was supposed to be doing. Efforts to scan completed tickets in Jira or even Confluence pages would result in incomplete and incorrect information. I attempted to perform searches against the Slack instant messaging service, but it appeared that the chat history around these concepts had long since been removed as a part of corporate retention policies.
Getting Started with Unblocked The Unblocked platform can be used to reduce an engineer's TTFC by simply selecting the source code management system they wish to use: After selecting which source code repositories you wish to use, you have the opportunity to add integrations with Slack and Jira as shown below: Additional integrations can be configured from the Unblocked dashboard: Confluence Linear Notion Stack Overflow After setup, Unblocked begins the data ingestion and processing phase. The amount of time required to complete this step is largely dependent on the amount of data that needs to be analyzed. At this point, one of the following client platforms can be prepared for use: Unblocked Client for macOS Unblocked IDE Plug-in for Visual Studio Code Unblocked IDE Plug-in for any JetBrains IDE (IntelliJ, PyCharm, and so on) There is also a web dashboard that can be accessed via a standard web browser. Where Unblocked Provides Value I decided to use the web dashboard. After completing the data ingestion and processing phase, I decided to see what would happen if I asked Unblocked "How does the front end communicate with the back end?" Below is how the interaction appeared: When I clicked the block-patterns.php file, I was taken directly to the file within the connected GitHub repository. Diving a little deeper, I wanted to understand what endpoints are available in the backend. This time, I was provided the answer to a question that had been asked 11 days earlier. What is really nice is that the /docs URI was also provided, saving me more time in getting up to speed. I also wanted to understand what changes had been made to the backend recently. I was impressed by the response Unblocked provided: For this answer, there were five total references included in the response. Let's take a look at several of these references. Clicking the first reference provided information from GitHub: The second reference provided the ability to download Markdown files from the Git source code management system: The experience was quite impressive. By asking a few simple questions, I was able to make huge progress in understanding a service that was completely new to me in a matter of minutes. Conclusion The "service owned by no one" scenario is more common now than at any other point in my 30+ year career in technology. The stress of having issues to understand and fix – without any documentation or service owner expertise – does not promote a healthy and productive work environment. My readers may recall that I have been focused on the following mission statement, which I feel can apply to any IT professional: "Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else." - J. Vester Unblocked supports my personal mission statement by giving software engineers an opportunity to be productive quickly. The platform relies on a simple interface and AI-based process to do the hard work for you, allowing you to remain focused on meeting your current objectives. By asking a few simple questions, I was able to gain valuable information about the solutions connected to Unblocked. In a world where it can be difficult to find a subject matter expert, this is a game changer – especially from a TTFC perspective. While updating my IntelliJ IDEA client, I realized there is even an Unblocked plug-in that I could have utilized just as easily! The same good news applies to users of Visual Studio Code.
This functionality allows engineers to pose questions to Unblocked without leaving their IDE. The best part is that Unblocked is currently in an open beta, which means it is 100% free to use. You can get started by clicking here. Take Unblocked for a test drive and see how it holds up for your use case. I am super interested in hearing about your results in the comments section. Have a really great day!
Do you excel in the art of setting unattainable, imposed, or plain non-existing Sprint Goals? In other words, are you good at missing Sprint Goals with regularity? If not, don’t worry; help is on the way! In this article, we’ll explore how to consistently miss the mark. For example, enjoy the thrill of cherry-picking unrelated backlog items and defining success by sheer output, not outcome. Countless Scrum Teams have thoroughly tested all suggestions. They are ideally suited for teams who love the challenge of aimlessly wandering through Sprints! The Essence and Inherent Importance of the Sprint Goal Before we indulge ourselves in missing Sprint Goals and, thus, failing core responsibilities as a Scrum Team, let’s revisit the original ideas behind Sprint Goals: The Sprint Goal is a Scrum team’s single objective for Sprint, delivering the most valuable result from the customers’ and the organization’s perspective. It informs the composition of the Sprint Backlog and becomes a part of it, thus acting as a beacon that guides the Developers during the Sprint. Moreover, it is instrumental to creating the Sprint plan, having a successful Daily Scrum, and collaborating and supporting each other as a Scrum team. Also, the Sprint Goal helps the Scrum team to identify whether their work was successful: did we accomplish the goal at the end of the Sprint? In that respect, it separates a few weeks of working on “stuff” from experiencing the satisfaction and joy of being a successful Scrum team, delivering value to customers and the organization. The Sprint Goal thus supports a Scrum team — and its organization — to move from an industrial paradigm-driven output orientation, the proverbial feature factory, to an outcome-based approach to solving your customers’ most valuable problem every Sprint. This change of perspective has a far-reaching consequence: every Sprint, the Scrum team strives to accomplish the Sprint Goal, which is different from maximizing the output in the form of work hours or the number of work items. The process of forming a Sprint Goal begins with Sprint Planning, when the Developers, the Product Owner, and the Scrum Master come together to decide on the next steps for building, ensuring the delivery of maximum value to customers in the forthcoming Sprint. How to Create Sprint Goals Initially, the Product Owner highlights the overarching Product Goal and outlines the business aim for the new Sprint. Using this as a foundation, the Scrum team collaboratively establishes the Sprint Goal, considering various factors such as: Team availability during the Sprint. Any changes in team composition, including new members joining or existing members departing. The desired quality level as specified in the Definition of Done. The team’s proficiency with the necessary technology. The availability of required tools. Dependencies to other teams or suppliers. Specific governance requirements that need to be met. The necessity to manage daily operations, like maintaining the product’s functionality, and how this impacts team capacity. Following this, the Developers pledge their commitment to the Sprint Goal. It’s important to understand that this commitment isn’t to a fixed amount of work, such as the tasks listed in the Sprint Backlog after Sprint Planning. Scrum focuses on outcomes rather than outputs. In response, the Developers then project the work needed to reach the Sprint Goal. They do this by selecting items from the Product Backlog to include in the Sprint Backlog. 
If additional, previously unidentified tasks are necessary to achieve the Sprint Goal, they add these to the Sprint Backlog. Moreover, the Developers form an initial plan for accomplishing their projection. Doing so for the first two or three days is advisable, as the team will begin gathering insights once the work commences. Detailed planning for the entire Sprint at this stage would be counterproductive. 10 Sure-Fire Ways to Miss Your Sprint Goals Here are my top ten approaches to missing Sprint Goals to ensure you will fail your stakeholders every single Sprint: No Visualization of Progress: The Developers cannot promptly assess whether they are on track to achieve the Sprint Goal. This lack of clarity often stems from inadequate tracking and visualization of progress. The Daily Scrum addresses this by ensuring the team is aligned and on track, with adjustments made as needed to the plan or Sprint Backlog. Without a clear understanding of their progress, Developers are less likely to meet the Sprint Goal, as success in Sprints builds from growing confidence over time, not last-minute efforts. Kanban through the Backdoor: The Scrum team consistently takes on too many tasks, leading to a regular overflow of unfinished work into the next Sprint — without further consideration or inspection. This practice, especially when 30 to 40 percent of tasks routinely spill over, indicates a shift towards a ‘time-boxed Kanban’ style rather than adhering to Scrum principles. This habitual spillover suggests a need to reassess and realign the team’s approach to fit the Scrum framework better. Scope Stretching or Gold-Plating: The Developers expand the scope of the Sprint beyond the agreed-upon Sprint Goal by adding extra, unnecessary work to the Product Backlog items in the Sprint Backlog. This issue arises when Developers disregard the original scope agreement with the Product Owner and unilaterally decide to enhance tasks without consultation. This behavior can lead to questionable allocation of development time, as it shifts focus away from the agreed priorities and goals, potentially impacting the team’s ability to deliver value effectively. This anti-pattern may reflect a disconnect between the Developers and the Product Owner, undermining the collaborative spirit essential for proper Scrum implementation. Cherry-Picking Product Backlog Items: The Developers select Product Backlog items unrelated to the Sprint Goal, resulting in a disorganized assortment of tasks. This issue often arises from a lack of a clear Sprint Goal or a goal that is too vague or simply a task list. Factors contributing to this pattern may include the need to address urgent technical issues, a desire to pursue new learning opportunities or disagreement with the product direction. If these scenarios don’t apply, it raises concerns about the team’s unity and effectiveness, suggesting they might operate more as individuals than as a cohesive Scrum team. The Imposed Sprint Goal: In this case, the Sprint Goal is not a collective decision of the Scrum team but rather dictated by an individual, often a dominant Product Owner or lead engineer. This scenario often unfolds in environments lacking psychological safety, where team members, despite foreseeing potential failure, remain silent and unopposed to the imposition. This pattern reflects a deeper issue within the team, signaling a departure from the core Scrum Values. 
Some team members may have resigned themselves to the status quo, losing interest in continuous improvement and collaboration. In such cases, the team might be more accurately described as a group of individuals working in parallel, more focused on their paychecks than genuine teamwork and shared success. The Overly Ambitious Sprint Goal: In this scenario, Scrum teams, often new ones, set unattainably high Sprint Goals, leading to an oversized Sprint Backlog and inevitable underdelivery at Sprint's end. This issue typically decreases as the team gains experience and better understands their capacity and customer problems. Mature Scrum teams learn to align their capabilities with their aspirations, ensuring they deliver the best possible value to customers and the organization. Lack of Focus: The organization treats the Scrum team as a jack-of-all-trades unit, burdening them with various unrelated tasks that hamper the team's ability to formulate a cohesive Sprint Goal. Such a scenario is counterproductive to Scrum's essence, which is about tackling complex problems through self-managing, autonomous teams and minimizing development risks. While Scrum excels at achieving specific objectives, its effectiveness diminishes when external stakeholders dictate the team's workload in detail. This approach undermines Scrum's core principle of focused, goal-oriented work and risks turning the team into a reactive rather than proactive unit. No Space for Non-Sprint Goal-Related Work: The Scrum team focuses solely on the Sprint Goal, overlooking other critical tasks such as customer support and organizational demands. Effective Scrum practice requires balancing the Sprint Goal with responding to unexpected, yet crucial, issues. Ignoring significant problems, like a critical bug or a malfunctioning payment system, just because they fall outside the Sprint Goal can quickly erode stakeholder trust. Scrum is about adaptability and responding to new challenges, not rigidly adhering to an initial plan and turning the Sprint into a Waterfall-ish time box. Regularly Not Delivering the Sprint Goal: Some Scrum teams fail to meet their Sprint Goals with the precision of Swiss clockwork. This ongoing issue undermines Scrum's core objective: solving customer problems effectively and aiding organizational sustainability. Scrum's usefulness relies on meeting the Sprint Goal, which should be the norm, not the exception. Continual failures, whether due to technical issues, skill shortages, or unforeseen complexities, question the validity of using Scrum. A successful application of Scrum involves a commitment to goals in return for decision-making autonomy and self-organization, not merely mimicking Kanban under the guise of Scrum. No Sprint Goal: Here, the Product Owner presents a disparate collection of tasks, lacking a cohesive objective, which leaves the Scrum team without clear direction. This situation indicates a potential misapplication of Scrum principles, suggesting that shifting to a more flow-based system like Kanban might better suit the team's needs. Typically, this pattern arises when a Product Owner is either overwhelmed by stakeholder demands or lacks the experience to align tasks effectively with the team's overall Product Goal. Food for Thought — Missing Sprint Goals Consider the following questions to help your teams and your organization avoid missing Sprint Goals and embrace agility fully: Are there other underlying team dynamics or organizational practices contributing to these anti-patterns?
What are the long-term impacts of these anti-patterns on the overall health and productivity of the Scrum team and its standing within the organization? How can the Scrum framework be adapted or reinforced to mitigate these anti-patterns, especially in diverse or rapidly changing work environments? Conclusion These ten Sprint Goal anti-patterns highlight various challenges that Scrum teams may face, from minor inefficiencies to major dysfunctions that can significantly undermine Scrum principles and, thus, the team’s effectiveness. Addressing these issues requires a nuanced understanding of team dynamics, organizational culture, and commitment to continuous improvement and adherence to Scrum values. By recognizing and proactively addressing these anti-patterns, Scrum teams can enhance their ability to deliver value effectively and sustainably. What anti-patterns have you encountered, and how did you counter missing Sprint Goals? Please share your experience in the comments.
According to several sources we queried, more than 33 percent of the world's web servers are running Apache Tomcat, while other sources show that it's 48 percent of application servers. Some of these instances have been containerized over the years, but many still run in the traditional setup of a virtual machine with Linux. Red Hat JBoss Web Server (JWS) combines a web server (Apache HTTPD), a servlet engine (Apache Tomcat), and modules for load balancing (mod_jk and mod_cluster). Ansible is an automation engine that provides a suite of tools for managing an enterprise at scale. In this article, we'll show how 1+1 becomes 11 by using Ansible to completely automate the deployment of a JBoss Web Server instance on a Red Hat Enterprise Linux 8 server. A prior article covered this subject, but now you can use the Red Hat-certified content collection for the JBoss Web Server, which has been available since the 5.7 release. In this article, you will automate a JBoss Web Server deployment through the following tasks: Retrieve the archive containing the JBoss Web Server from a repository and install the files on the system. Configure the Red Hat Enterprise Linux operating system, creating users, groups, and the required setup files to enable JBoss Web Server as a systemd service. Fine-tune the configuration of the JBoss Web Server server, such as binding it to the appropriate interface and port. Deploy a web application and start the systemd service. Perform a health check to ensure that the deployed application is accessible. Ansible fully automates all those operations, so no manual steps are required. Preparing the Target Environment Before you start the automation, you need to specify your target environment. In this case, you'll be using Red Hat Enterprise Linux 8 with Python 3.6. You'll use this setup on both the Ansible control node (where Ansible is executed) and the Ansible target (the system being configured). On the control node, confirm the following requirements: Shell $ cat /etc/redhat-release Red Hat Enterprise Linux release 8.7 (Ootpa) $ ansible --version ansible [core 2.13.3] config file = /etc/ansible/ansible.cfg configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.9/site-packages/ansible ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.9.13 (main, Nov 9 2022, 13:16:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-15)] jinja version = 3.1.2 libyaml = True Note: The procedure in this article might not execute successfully if you use a different Python version or target operating system. Installing the Red Hat Ansible Certified Content Collection Once you have Red Hat Enterprise Linux 8 set up and Ansible ready to go, you need to install the Red Hat Ansible Certified Content Collection 1.2 for the Red Hat JBoss Web Server. Ansible uses the collection to perform the following tasks on the JBoss Web Server: Ensure that the required system dependencies (e.g., unzip) are installed. Install Java (if it is missing and requested). Install the web server binaries and integrate the software into the system (setting the user, group, etc.). Deploy the configuration files. Start and enable JBoss Web Server as a systemd service. To install the certified collection for the JBoss Web Server, you'll have to configure Ansible to use Red Hat Automation Hub as a Galaxy server. 
Follow the instructions on Automation Hub to retrieve your token and update the ansible.cfg configuration file in your project directory. Update the token field with the token obtained from Automation Hub: INI [galaxy] server_list = automation_hub, galaxy [galaxy_server.galaxy] url=https://galaxy.ansible.com/ [galaxy_server.automation_hub] url=https://cloud.redhat.com/api/automation-hub/api/galaxy/ auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token token=<your-token> Install the certified collection: Shell $ ansible-galaxy collection install redhat.jws Starting galaxy collection install process Process install dependency map Starting collection install process Downloading https://console.redhat.com/api/automation-hub/v3/plugin/ansible/content/published/collections/artifacts/redhat-jws-1.2.2.tar.gz to /root/.ansible/tmp/ansible-local-475cum49011/tmptmiuep63/redhat-jws-1.2.2-299_snr4 Installing 'redhat.jws:1.2.2' to '/root/.ansible/collections/ansible_collections/redhat/jws' Downloading https://console.redhat.com/api/automation-hub/v3/plugin/ansible/content/published/collections/artifacts/redhat-redhat_csp_download-1.2.2.tar.gz to /root/.ansible/tmp/ansible-local-475cum49011/tmptmiuep63/redhat-redhat_csp_download-1.2.2-tb4zjzut redhat.jws:1.2.2 was installed successfully Installing 'redhat.redhat_csp_download:1.2.2' to '/root/.ansible/collections/ansible_collections/redhat/redhat_csp_download' redhat.redhat_csp_download:1.2.2 was installed successfully Ansible Galaxy fetches and downloads the collection's dependencies. These dependencies include the redhat_csp_download collection, which helps facilitate the retrieval of the archive containing the JBoss Web Server from either the Red Hat customer portal or a specified local or remote location. For more information about this step, please refer to the official Red Hat documentation. Installing the Red Hat JBoss Web Server The configuration steps in this section include downloading JBoss Web Server, installing Java, and enabling JBoss Web Server as a system service (systemd). Downloading the Archive First, you need to download the archive for the JBoss Web Server from the Red Hat Customer Portal. By default, the collection expects the archive to be in the root folder of the Ansible project. The only remaining requirement is to specify the version of the JBoss Web Server being used (5.7) in the playbooks. Based on this information, the collection determines the path and the full name of the archive. Therefore, update the value of jws_version in the jws-article.yml playbook: YAML --- - name: "Red Hat JBoss Web Server installation and configuration" hosts: all vars: jws_setup: True jws_version: 5.7.0 jws_home: /opt/jws-5.7/tomcat … Installing Java JBoss Web Server is a Java-based server, so the target system must have a Java Virtual Machine (JVM) installed. Although Ansible primitives can perform such tasks natively, the redhat.jws collection can also take care of this task, provided that the jws_java_version variable is defined: YAML jws_home: /opt/jws-5.7/tomcat jws_java_version: 1.8.0 … Note: This feature works only if the target system's distribution belongs to the Red Hat family. Enabling JBoss Web Server as a System Service (systemd) The JBoss Web Server on the target system should run as a system service.
The collection can also take care of this task if the jws_systemd_enabled variable is defined as True: YAML jws_java_version: 1.8.0 jws_systemd_enabled: True jws_service_name: jws Note: This configuration works only when systemd is installed, and the system belongs to the Red Hat family. Now that you have defined all the required variables to deploy the JBoss Web Server, finish the playbook: YAML ... jws_service_name: jws collections: - redhat.jws roles: - role: jws Running the Playbook Run the playbook to see whether it works as expected: Shell $ ansible-playbook -i inventory jws-article.yml PLAY [Red Hat JBoss Web Server installation and configuration] ******************************************************************************************************************************************************************************* TASK [Gathering Facts] *********************************************************************************************************************************************************************************************************************** ok: [localhost] TASK [redhat.jws.jws : Validating arguments against arg spec 'main'] ************************************************************************************************************************************************************************* ok: [localhost] TASK [redhat.jws.jws : Set default values] *************************************************************************************************************************************************************************************************** skipping: [localhost] TASK [redhat.jws.jws : Set default values (jws)] ********************************************************************************************************************************************************************************************* ok: [localhost] TASK [redhat.jws.jws : Set jws_home to /opt/jws-5.7/tomcat if not already defined] *********************************************************************************************************************************************************** skipping: [localhost] TASK [redhat.jws.jws : Check that jws_home has been defined.] 
******************************************************************************************************************************************************************************** ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [redhat.jws.jws : Install required dependencies] **************************************************************************************************************************************************************************************** included: /root/.ansible/collections/ansible_collections/redhat/jws/roles/jws/tasks/fastpackage.yml for localhost => (item=zip) included: /root/.ansible/collections/ansible_collections/redhat/jws/roles/jws/tasks/fastpackage.yml for localhost => (item=unzip) TASK [redhat.jws.jws : Check arguments] ****************************************************************************************************************************************************************************************************** ok: [localhost] … TASK [redhat.jws.jws : Remove apps] ********************************************************************************************************************************************************************************************************** changed: [localhost] => (item=ROOT) ok: [localhost] => (item=examples) TASK [redhat.jws.jws : Create vault configuration (if enabled)] ****************************************************************************************************************************************************************************** skipping: [localhost] RUNNING HANDLER [redhat.jws.jws : Reload Systemd] ******************************************************************************************************************************************************************************************** ok: [localhost] RUNNING HANDLER [redhat.jws.jws : Ensure Jboss Web Server runs under systemd] **************************************************************************************************************************************************************** included: /root/.ansible/collections/ansible_collections/redhat/jws/roles/jws/tasks/systemd/service.yml for localhost RUNNING HANDLER [redhat.jws.jws : Check arguments] ******************************************************************************************************************************************************************************************* ok: [localhost] RUNNING HANDLER [redhat.jws.jws : Enable jws.service] **************************************************************************************************************************************************************************************** changed: [localhost] RUNNING HANDLER [redhat.jws.jws : Start jws.service] ***************************************************************************************************************************************************************************************** changed: [localhost] RUNNING HANDLER [redhat.jws.jws : Restart Jboss Web Server service] ************************************************************************************************************************************************************************** changed: [localhost] PLAY RECAP *********************************************************************************************************************************************************************************************************************************** localhost : ok=64 changed=15 unreachable=0 failed=0 skipped=19 rescued=2 ignored=0 As you can 
see, quite a lot happened during this execution. Indeed, the redhat.jws role took care of the entire setup: Deploying a base configuration Removing unused applications Starting the web server Deploying a Web Application Now that JBoss Web Server is running, modify the playbook to facilitate the deployment of a web application: YAML roles: - role: jws tasks: - name: "Check that the server is running" ansible.builtin.uri: url: "http://localhost:8080/" status_code: 404 return_content: no - name: "Deploy demo webapp" ansible.builtin.get_url: url: 'https://people.redhat.com/~rpelisse/info-1.0.war' dest: "{{ jws_home }}/webapps/info.war" notify: - "Restart Jboss Web Server service" The uri check expects a 404 because the default ROOT application was removed during setup. The configuration uses a handler provided by the redhat.jws collection to ensure that the JBoss Web Server is restarted once the application is downloaded. Automation Saves Time and Reduces the Chance of Error The Red Hat Ansible Certified Content Collection encapsulates, as much as possible, the complexities and the inner workings of Red Hat JBoss Web Server deployment. With the help of the collection, you can focus on your business use case, such as deploying applications, instead of setting up the underlying application server. The result is reduced complexity and faster time to value. The automated process is also repeatable and can be used to set up as many systems as needed.
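One last note: the ansible-playbook commands above pass an inventory file via the -i flag, which is never shown. A minimal sketch is given below; the localhost entry matches the playbook output shown earlier, while the group name and connection setting are assumptions (any group name works because the playbook targets hosts: all). INI
# inventory (sketch): the playbook targets hosts: all, so a single
# local host is enough for this walkthrough.
# Add further hosts to this group to configure additional systems.
[jws]
localhost ansible_connection=local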