Terraform is an Infrastructure as Code (IaC) tool that allows DevOps engineers to automate the provisioning and management of infrastructure resources. It uses configuration files written in HashiCorp Configuration Language (HCL) to define the desired state of the infrastructure and provides various commands to configure and apply infrastructure resources. GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows developers to automate build, test, and deployment pipelines. As part of the deployment configuration, we define steps: a step is an individual action that GitHub Actions performs.

Current State

When deploying an infrastructure resource through Terraform, the Terraform plan output shows all the execution logs and only displays the final plan summary at the very end. If many infrastructure changes are happening at the same time, all of them get dumped into a single plan output, and the reviewer needs to scroll down to find the final summary. This is distracting and makes it easy to miss the final plan output, which can result in resources being destroyed by accident after execution.

Proposed Solution

In this article, I describe a simple way to overcome this problem by splitting the Terraform output into three steps.

Prerequisite

For this mock pipeline execution, I used Google Cloud for the resource deployment. Before running the code, set up the Google credentials as required (highlighted in the code snippet below).

Step 1

Introduce a new step in GitHub Actions to collect all Terraform stdout log output.

Step 2

Save this output into a GitHub output variable.

Step 3

Use the output variable in the next step, filter out only the final plan line to display in that step's execution log, and add text and a background color to draw attention during pull request reviews.

```yaml
# comment: introduce a new step to capture the terraform stdout output and dump the logs into a GitHub output variable
# comment: used Google Cloud for deployment and set up Google credentials for this execution
- name: terraform_plan_output
  id: terraform_plan_output
  run: |
    {
      echo 'tfplan_output<<EOF'
      terraform plan -input=false 2>&1
      echo EOF
    } >> "$GITHUB_OUTPUT"
  env:
    GOOGLE_CREDENTIALS: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}

# comment: with the help of the above step's output variable, filter the logs and keep only the final plan output line
# comment: color the line with a background shade to get the reviewer's attention before pull request approval
- name: terraform_plan_output_final_review
  run: |
    echo -e "\033[44m REVIEW THE BELOW TERRAFORM FINAL OUTPUT THOROUGHLY BEFORE RAISING PULL REQUEST"
    echo -e "\033[31m -----"
    echo "${{ steps.terraform_plan_output.outputs.tfplan_output }}" | grep 'Plan:'
    echo -e "\033[31m -----"
```

Mock Execution Screenshots

Introduce a separate step for the Terraform plan output:

Terraform final review plan with a comment:

Benefits

- Separates the Terraform stdout logs from the final plan output. This helps the GitHub reviewer focus on the dedicated plan output step and see the infra changes clearly.
- The background color helps draw more attention during the review.
- Infra changes through Terraform, especially updates and deletes, need extra attention. This individual step may help avoid environment outages at later stages.
Conclusion

By following these tips, as a code reviewer or pull request approver you can easily identify and confirm the exact cloud resource changes for your GitHub pipeline approvals. For sample code, visit the repository on GitHub (maintainer: Karthigayan Devan).
The Trivial Answer

Most engineers know that we must have green builds because a red build indicates some kind of issue. Either a test did not pass, or some kind of tool found a vulnerability, or we managed to push our code when it couldn't even compile. Either way, it is bad.

You might have noticed that this article is far from over, so there must be more to this. You are right!

What Does Green Mean Exactly?

We have already discussed that red means something is wrong, but can we say that green is the opposite? Does it guarantee that everything is working great, meets the requirements, and is ready to deploy? As usual, it depends. When your build turns green, we can say that:

- The code compiled (assuming you are using a language with a compiler).
- The existing (and executed) tests passed.
- The analyzers found no critical issues that needed to be fixed right away.
- We were able to push our binaries to an artifact storage or image registry.
- Depending on our setup, we might be ready to deploy our code at the moment.

Why am I still not saying anything definite about the state of the software even when the tests passed? It is because I am simply not sure whether a couple of important things are addressed by our theoretical CI system in this thought experiment. Let me list a couple of the factors I am worried about. Please find these in the following sections!

Test Quality

I won't go deep into details, as testing and good quality tests are bigger topics deserving way more focus than what I could squeeze in here. Still, when talking about test quality, I think we should at least mention the following thoughts as bullet points:

- Do we have sufficient test coverage?
- Are our test cases making strict assertions that can discover the issues we want to discover?
- Are we testing the things we should? Meaning: are we focusing on the most important requirements first instead of testing the easy parts?
- Are our tests reliable and in general following the F.I.R.S.T. principles?
- Are we running our tests with each build of the code they are testing?
- Are we aware of the test pyramid and following the related recommendations?

Augmenting these generic ideas, I would like to mention a few additional thoughts in a bit more detail.

What Kinds of Dependencies Are We Using in Our Tests?

In the lower layers of the test pyramid, we should prefer using test doubles instead of the real dependencies to help us focus on the test case and be able to generate the exceptional scenarios we need to cover in our code.

Do We Know What We Should Focus on for Each Layer of the Test Pyramid?

The test pyramid is not only about the number of tests we should have on each layer; it also gives us an idea about their intent. For example, the unit tests should test only a small unit (i.e., a single class) to prod and poke our code and see how it behaves in a wide variety of situations, assuming that everything else is working well. As we go higher, the focus moves onto how our classes behave when they are integrated into a component, still relying on test doubles to eliminate any dependency (and any unknowns) related to the third-party components used by our code. Then in the integration tests, we should focus on the integration of our components with their true dependencies to avoid any issues caused by the imperfections of the test doubles we have been using in our lower-layer tests. In the end, the system tests can use an end-to-end mindset to observe how the whole system behaves from the end user's point of view.
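To make the "run the tests with every build" point concrete, here is a minimal sketch of a CI workflow that executes the lower layers of the pyramid on every push. It assumes GitHub Actions and a Maven project with unit tests bound to the test phase and integration tests bound to verify; neither tool choice is prescribed by the article.

```yaml
# Hypothetical CI workflow: every push builds the code and runs the tests that belong to it.
name: build-and-test
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      # Unit tests (lowest pyramid layer) run on every build of the code they test.
      - name: Unit tests
        run: mvn -B test
      # Integration tests exercise the components together with their real collaborators.
      - name: Integration tests
        run: mvn -B verify
```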
Are Our Code Dependencies Following Similar Practices?

Hopefully, the dependency selection process considers the maturity and reliability of the dependencies as well as their functionality. This is very important because we must be able to trust our dependencies to do what they say they do. Thorough testing of a dependency can help us build this trust, while the lack of tests can do the opposite. My personal opinion on this is that I cannot expect my users to test my code when they pick my components as dependencies, because not only can they not possibly do it well, but I won't know either when their tests fail because my code contains a bug — a bug that I was supposed to find and fix when I released my component. For the same reason, when I am using a dependency, I think my expectation that I should not have to test that dependency is reasonable.

Having Repeatable Builds

It can be a great feeling when our build turns green after a hard day's work. It can give us pride, a feeling of accomplishment, or even closure depending on the context. Yet it can be an empty promise, a lie that does very little good (other than generating a bunch of happy chemicals for our brain) if we cannot repeat it when we need to. Fortunately, there is a way to avoid these issues if we consider the following factors.

Using Reliable Tags

It is almost a no-brainer that we need to tag our builds to be able to get back to the exact version we used to build our software. This is a great start, at least for our code, but we should keep in mind that nowadays it is almost impossible to imagine a project where we start from an empty directory and do everything on our own without using any dependencies. When using dependencies, we can make a choice between convenience and doing the right thing. On one hand, the convenient option lets us use the latest dependencies without doing anything: we just need to use the wildcard or special version constant supported by our build tool to let it resolve the latest stable version during the build process. On the other hand, we can pin down our dependencies; maybe we can even vendor them if we want to avoid some nasty surprises and have a decent security posture. If we decide to do the right thing, we will be able to repeat the build process using the exact same dependencies as before, giving us a better chance of producing the exact same artifact if needed. In the other case, we would be hard-pressed to do the same a month or two after the original build. In my opinion, this seriously undermines the usability of our tags and makes me trust the process less.

Using the Same Configuration

Being able to produce the same artifact when we rebuild the code is only half of the battle. We must also be able to repeat the same steps during the build and use the same application configuration for the deployments, so that we have the same code and use the same configuration and input to run our tests.

It Shouldn't Start With the Main Branch

Although we are doing this work in order to have repeatable builds on the main branch, the process should not start there. If we want to be sure that the thing we are about to merge won't break the main build, we should at least try building it using the same tools and tests before we click merge. Luckily, Git branch protection rules are very good at this.
To avoid broken builds, we should make sure that:

- The PRs cannot be merged without both the necessary approvals and a successful build validating everything the main build will validate as well.*
- The branch is up to date, meaning that it contains all changes from the main branch as well. Good code can still cause failures if the main branch contains incompatible changes.

*Note: Of course, this is not trivial to achieve, because how can we test, for example, that the artifact will be successfully published to the registry containing the final, ready-to-deploy artifacts? Or how could we verify that we will be able to push the Git tag when we release using the other workflow? Still, we should do our best to minimize the number of differences, just like we do when we are testing our code. Using this approach, we can discover the slight incompatibilities of otherwise well-working changes before we merge them into the main branch.

Why Do We Need Green Builds Then?

To be honest, green builds are not what we need. They are only the closest we have to the thing we need: a reliable indicator of working software. We need this indicator because we must be able to go there and develop the next feature or fix a production bug when it is discovered. Without being 100% sure that the main branch contains working software, we cannot do either of those, because first we need to see whether it is still working and fix the build if it is broken. In many cases, broken builds are not due to our own changes, but to external factors. For example, without pinning down all dependencies, we cannot guarantee the same input for the build, so the green builds cannot be considered reliable indicators either. This is not only true for code dependencies, but for any dependency we are using for our tests as well. Of course, we cannot avoid every potential cause for failure. For example, we can't do anything against security issues that are noticed after our initial build. Quite naturally, these can still cause build failures. My point is that we should do our best in the area where we have control over things, like the tests where we can rely on test doubles for the lower layers of the test pyramid.

What Can You Do When Facing These Issues?

Work on improving build repeatability. You can:

- Consider pinning down all your dependencies to use the same components in your tests (see the sketch after this list). This can be achieved by:
  - Using fixed versions instead of ranges in Maven or Gradle
  - Making sure the dependencies of your dependencies will remain pinned, too, by checking whether their build files contain any ranges
  - Using SHA-256 manifest digests for Docker images instead of the tag names
- Make sure that you are performing the same test cases as before by:
  - Following general testing best practices like the F.I.R.S.T. principles
  - Starting from the same initial state in case of every other dependency (cleaning up database content, user accounts, etc.)
  - Performing the same steps (with similar or equivalent data)
- Make sure you always tag:
  - Your releases
  - Your application configuration
  - The steps of the build pipeline you have used for the build
- Apply strict branch protection rules.
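As a sketch of what pinning can look like in practice, the snippet below pins a CI action to a full commit SHA and a container image to a manifest digest instead of a floating tag. It assumes a GitHub Actions workflow; the SHA and digest values are placeholders, not real ones.

```yaml
# Hypothetical job illustrating pinned dependencies; the SHA and digest below are placeholders.
jobs:
  test:
    runs-on: ubuntu-latest
    container:
      # Pin the image by manifest digest rather than by a mutable tag.
      image: eclipse-temurin:17-jdk@sha256:<image-digest>
    steps:
      # Pin the action to a full commit SHA instead of a movable tag like v4.
      - uses: actions/checkout@<full-commit-sha>
      # Fixed dependency versions (not ranges) belong in the build file itself, e.g. Maven's pom.xml.
      - name: Run tests
        run: mvn -B verify
```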
What Should We Not Try to Fix?

We should keep in mind that this exercise is not about zealously working until we can just push a build button repeatedly and expect the exact same workflow to do the exact same thing every time like clockwork. This could be an asymptotic goal, but in my opinion, it shouldn't be.

The goal is not to do the same thing and produce the exact same output, because we don't need that. We have already built the project, published all versioned binary artifacts, and saved all test results the first time around. Rebuilding and overwriting these can be harmful, because it can become a way to rewrite history, and we could never trust our versioning or artifacts again. When a build step produces an artifact that is saved somewhere (be it a binary, a test report, some code scan findings, etc.), that artifact should be handled as a read-only archive and should never change once saved. Therefore, if someone kicks off a build from a previously successfully built tag, it is allowed (or even expected) to fail when the artifact uploads are attempted.

In Conclusion

I hope this article helped you realize that focusing on the letter of the law is less important than the spirit of the law. It does not matter that you had a green build if you are not able to demonstrate that your tagged software remained ready for deployment. At the end of the day, if you have a P1 issue in production, nobody will care about the fact that your software was ready to deploy in the past if you cannot show that it is still ready to deploy now, so that we can start working on the next increment without additional unexpected problems.

What do you think about this? Let me know in the comments!
Cross-Origin Resource Sharing (CORS) is an essential security mechanism used by web browsers that allows regulated access to server resources from origins that differ in domain, protocol, or port. In the realm of APIs, especially when using AWS API Gateway, configuring CORS is crucial to allow access for web applications originating from various domains while mitigating potential security risks. This article provides a guide to CORS and AWS API Gateway integration through CloudFormation. It emphasizes the significance of CORS, the development of authorization including bearer tokens, and the advantages of selecting optional methods in place of standard GET requests.

Why CORS Matters

In the development of APIs intended for access across various domains, CORS is essential in preventing unauthorized access. By delineating the specific domains permitted to interact with your API, you can protect your resources from Cross-Site Request Forgery (CSRF) attacks while allowing valid cross-origin requests.

Benefits of CORS

- Security: CORS plays a crucial role in regulating which external domains can access your resources, thereby safeguarding your API against harmful cross-origin requests.
- Flexibility: CORS allows you to define varying levels of access (such as methods like GET, POST, DELETE, etc.) for different origins, offering adaptability based on your specific requirements.
- User experience: Implementing CORS enhances user experience by allowing users to seamlessly access resources from multiple domains without encountering access-related problems.

Before we proceed with setting up CORS, we need to understand why optional methods are preferred over GET. The following table compares the aspects of using GET versus optional methods (PUT, POST, OPTIONS) in API requests.

| Reason | GET | Optional Methods (POST, PUT, OPTIONS) |
| --- | --- | --- |
| Security | GET requests are visible in the browser's address bar and can be cached, making them less secure for sensitive information. | Optional methods like POST and PUT are not visible in the address bar and are not cached, providing more security for sensitive data. |
| Flexibility | GET requests are limited to sending data via the URL, which restricts the complexity and size of data that can be sent. | Optional methods allow sending complex data structures in the request body, providing more flexibility. |
| Idempotency and Safety | GET is idempotent and considered safe, meaning it does not modify the state of the resource. | POST and PUT are used for actions that modify data, and OPTIONS is used for checking available methods. |
| CORS Preflight | GET requests are not typically used for CORS preflight checks. | OPTIONS requests are crucial for CORS preflight checks, ensuring that the actual request can be made. |

The next table compares the POST and PUT methods, their purposes, and their behavior:

| Aspect | POST | PUT |
| --- | --- | --- |
| Purpose | Used to create a new resource. | Used to update an existing resource or create it if it doesn't exist. |
| Idempotency | Not idempotent; multiple identical requests may create multiple resources. | Idempotent; multiple identical requests will not change the outcome beyond the initial change. |
| Resource Location | The server decides the resource's URI, typically returning it in the response. | The client specifies the resource's URI. |
| Data Handling | Typically used when the client does not know the URI of the resource in advance. | Typically used when the client knows the URI of the resource and wants to update it. |
| Common Use Case | Creating new records, such as submitting a form to create a new user. | Updating existing records, such as editing user information. |
| Caching | Responses to POST requests are generally not cached. | Responses to PUT requests can be cached, as the request should result in the same outcome. |
| Response | Usually returns a status code of 201 (Created) with a Location header pointing to the newly created resource. | Usually returns a status code of 200 (OK) or 204 (No Content) if the update is successful. |

Setting Up CORS in AWS API Gateway Using CloudFormation

Configuring CORS in AWS API Gateway can be accomplished manually via the AWS Management Console; however, automating this process with CloudFormation enhances both scalability and consistency. Below is a detailed step-by-step guide.

1. Define the API Gateway in CloudFormation

Start by defining the API Gateway in your CloudFormation template:

```yaml
Resources:
  MyApi:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: MyApi
```

2. Create Resources and Methods

Define the resources and methods for your API. For example, create a resource for /items and a GET method:

```yaml
  ItemsResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: !GetAtt MyApi.RootResourceId
      PathPart: items
      RestApiId: !Ref MyApi

  GetItemsMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: NONE
      HttpMethod: GET
      ResourceId: !Ref ItemsResource
      RestApiId: !Ref MyApi
      Integration:
        Type: MOCK
        IntegrationResponses:
          - StatusCode: 200
      MethodResponses:
        - StatusCode: 200
```

3. Configure CORS

Next, configure CORS for your API method by specifying the necessary headers:

```yaml
  OptionsMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: NONE
      HttpMethod: OPTIONS
      ResourceId: !Ref ItemsResource
      RestApiId: !Ref MyApi
      Integration:
        Type: MOCK
        RequestTemplates:
          application/json: '{"statusCode": 200}'
        IntegrationResponses:
          - StatusCode: 200
            SelectionPattern: '2..'
            ResponseParameters:
              method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
              method.response.header.Access-Control-Allow-Methods: "'*'"
              method.response.header.Access-Control-Allow-Origin: "'*'"
      MethodResponses:
        - StatusCode: 200
          ResponseModels:
            application/json: "Empty"
          ResponseParameters:
            method.response.header.Access-Control-Allow-Headers: false
            method.response.header.Access-Control-Allow-Methods: false
            method.response.header.Access-Control-Allow-Origin: false
```

Incorporating Authorization

Implementing authorization within your API methods guarantees that access to specific resources is restricted to authenticated and authorized users. AWS API Gateway offers various authorization options, including AWS Lambda authorizers, Cognito user pools, and IAM roles.

```yaml
  MyAuthorizer:
    Type: AWS::ApiGateway::Authorizer
    Properties:
      Name: MyLambdaAuthorizer
      RestApiId: !Ref MyApi
      Type: TOKEN
      AuthorizerUri: arn:aws:apigateway:<region>:lambda:path/2015-03-31/functions/<lambda_arn>/invocations

  GetItemsMethodWithAuth:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: CUSTOM
      AuthorizerId: !Ref MyAuthorizer
      HttpMethod: GET
      ResourceId: !Ref ItemsResource
      RestApiId: !Ref MyApi
      Integration:
        Type: AWS_PROXY
        IntegrationHttpMethod: POST
        Uri: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${MyFunction.Arn}/invocations
      MethodResponses:
        - StatusCode: 200
```

After implementation, here's how the API looks in AWS (integration request view). API Gateway documentation can be found here: Amazon API.
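Since the whole point of CORS is to name the origins you trust, a locked-down variant of the OPTIONS integration response above would return a specific origin and method list rather than the '*' wildcard. This is a sketch rather than part of the original template; the https://app.example.com origin is a placeholder.

```yaml
# Hypothetical tightening of the OPTIONS integration response: allow one known origin
# and only the methods this API actually exposes, instead of the '*' wildcard.
        IntegrationResponses:
          - StatusCode: 200
            ResponseParameters:
              method.response.header.Access-Control-Allow-Origin: "'https://app.example.com'"
              method.response.header.Access-Control-Allow-Methods: "'GET,POST,OPTIONS'"
              method.response.header.Access-Control-Allow-Headers: "'Content-Type,Authorization'"
```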
Conclusion

Establishing CORS and integrating AWS API Gateway through CloudFormation offers an efficient and reproducible method for managing API access. By meticulously setting up CORS, you guarantee that your APIs remain secure and are accessible solely to permitted origins. Incorporating authorization adds a layer of security by limiting access to only those users who are authorized. Moreover, evaluating the advantages of utilizing optional methods instead of GET requests ensures that your API maintains both security and the flexibility necessary for managing intricate operations. The implementation of these configurations not only bolsters the security and performance of your API but also enhances the overall experience for end users, facilitating seamless cross-origin interactions and the appropriate management of sensitive information.
Digital transformation remains a clear focus for many organizations, with proactive leaders seeing opportunities worth navigating the complexities of technology and process disruption. According to Kristy Ellmer, a managing director for Boston Consulting Group, "thirty percent of any given sector is in transformation at any given time. Companies that go into a transformation mindset when they're in a place of strength will be more successful than if they do it reactively. And then how you execute it and move the organization behind it is what actually creates competitive advantage."

While the business case for digital transformation is clear, many enterprises still struggle to successfully transform all aspects of their organization. Part of the technical challenge is building and deploying the apps and services that deliver the experiences users (both internal and external) expect as standard. Enterprises are leaning on platforms like ServiceNow and Salesforce to enable their digital transformation, helping them to take on some of the heavy lifting and manual tasks associated with building vast enterprise-specific applications. These platforms have proven their value in multiple business functions, so it's no surprise that businesses are demanding more functionality from their platform teams.

The results are impressive. Take ServiceNow, for example. The company saw a 31.5% jump in Q2 RPO to $18.6 billion, and CEO Bill McDermott is betting big: "Well, we're going to reinvent the whole industry, and we've got to put it on the ServiceNow platform. And we've got to take the data, and we're going to connect all the disparate parts that are suffocating companies, and we're going to move it into the Now platform, and we're going to reimagine the way workflows."

However, while the platform's go-big-or-go-home aim is admirable, the reality is that the demands large enterprises put on platforms still exceed their capacity to supply new apps and services. From banks to insurance companies to energy and utilities, platforms are creating project backlogs across all sectors and hurting software developers' job satisfaction because of bottlenecks. Digital transformation needs more work.

What's Going to Fix These Platform Shortcomings?

The solution lies in taking a dual approach. First, companies using these platforms must automate where appropriate to cut errors, improve quality consistency, and alleviate the burden on overstretched teams. At the same time, they also need to synchronize all the environments used in development lifecycles to catch inconsistencies before they snowball into complex errors whose troubleshooting impacts delivery timelines.

Automation's Role in Efficiency, Consistency, and Compliance

Automation should enhance operational efficiency and consistency across platform environments, ensuring that all environments remain production-like without manual intervention, which significantly reduces the risk of inconsistencies and errors. The goal should be to put in place environment synchronization, automated deployment flows, and release payload bundling capabilities to streamline the process of introducing updates, applications, and configurations into production-like environments. By minimizing manual intervention and synchronizing environments, automation not only speeds up delivery but also reduces the likelihood of human error and code errors during deployments.
This automation extends to release packaging and management, allowing teams to efficiently bundle and deploy changes as cohesive and auditable units. Ultimately, automation should also enhance governance and compliance by facilitating the enforcement of policies and standards throughout the platform ecosystems, ensuring innovations and changes are not only rapidly deployed but also authorized, secure, and compliant with regulatory requirements.

Imagine flying a commercial airliner without automation. You may not be aware, but autopilot systems make continuous course corrections, many per minute. Without this automation, flying from New York to Los Angeles would be impossible unless the plane were flown closer to the ground and the pilots followed landmarks, something that was done in the early days of flight. If you travel to the desert West, you can still find the cement foundations of navigation towers whose sole purpose was to help pilots navigate from point A to point B. Automation changed everything, and it can enable the paradigm shift from digital transformation to business transformation.

Catching Issues Early Through Seamless Propagation and Multi-Environment Visibility

When it comes to enterprise development, platforms alone can't address the critical challenge of maintaining consistency between development, test, staging, and production environments. What teams really need to strive for is seamless propagation of changes between environments made production-like through synchronization, with full control over the process. This control enables the integration of crucial safety steps such as approvals, scans, and automated testing, ensuring that issues are caught and addressed early in the development cycle.

Many enterprises are implementing real-time visualization capabilities to provide administrators and developers with immediate insight into differences between instances, including scoped apps, store apps, plugins, update sets, and even versions across the entire landscape. This extended visibility is invaluable for quickly identifying and resolving discrepancies before they can cause problems in production environments. A lack of focus on achieving real-time multi-environment visibility is akin to performing a medical procedure without an X-ray, CT, or MRI of the patient. Without knowing where the problem is, and the nature of the problem, doctors would be left to make diagnoses and treatments in the dark. This challenge is something psychiatrists know too well, as the human brain is the only organ that is treated without imagery.

Automation, Visibility, and Synchronization Are Key to Better Digital Transformation

The benefits of automation and synchronization don't stop there: with better visibility, developers can open the lines of communication and work together more collaboratively. Teams that communicate better are seeing increased opportunities to innovate beyond their typical day-to-day tasks and to fully realize the benefits of their digital transformation efforts. Remember, when everyone sees clearly, teamwork flows freely. When team members work against a shared vision of their platform landscape, productive collaboration ensues. Real-time multi-environment visibility empowers the embodiment of Linus's Law, which states, "Given enough eyeballs, all bugs are shallow." However, the actual statement the law is based on comes from Eric S.
Raymond’s book, The Cathedral and the Bazaar, and states, “given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.” The key part of the statement is “the fix [will be] obvious to someone.” Having a shared view of the facts will put your smart IT people in the best position to be the most successful.
In the dynamic world of integration, MuleSoft stands out as a powerful platform that enables organizations not only to unlock data across legacy systems, cloud apps, and devices, but also to make smarter and faster decisions and offer highly connected experiences to end users. As organizations strive for faster and more reliable deployments, the adoption of GitOps is transforming how we manage and automate MuleSoft deployments. In this blog post, we'll explore how we can apply the principles of GitOps to our MuleSoft deployment process.

What Is GitOps?

GitOps is a method of managing infrastructure and application deployments that relies on a Git repository as the primary source of truth. Teams gain more oversight, transparency, and traceability of their deployment processes by storing configuration files in a Git repository. GitOps principles prioritize declarative configurations and automated workflows to achieve consistent and reliable deployments.

The Power of MuleSoft

MuleSoft, as a leading integration platform, provides tools and services to connect applications, data, and devices across on-premises and cloud environments. MuleSoft provides numerous enterprise solutions that enable businesses to make the most of automation and integrations. With its robust API-led connectivity approach, MuleSoft enables organizations to build scalable and flexible integration solutions, and, as businesses increasingly adopt modern technologies and the required delivery pace for IT keeps increasing, the need for efficient deployment strategies becomes critical.

Why GitOps for MuleSoft?

Implementing the GitOps approach in your MuleSoft program offers several compelling advantages:

- Consistency: GitOps ensures that your deployment configurations are consistent across all environments. By maintaining a single source of truth in Git, you can avoid discrepancies and ensure uniformity.
- Automation: GitOps leverages automation to streamline deployment processes. Automated pipelines can trigger deployments based on changes in the Git repository, reducing manual intervention and minimizing errors.
- Visibility and traceability: Every change to your deployment configurations is versioned in Git, providing a complete history of modifications. This visibility enhances collaboration and accountability within your team.
- Faster deployments: By automating repetitive tasks and eliminating manual steps, GitOps accelerates the deployment process, enabling faster delivery of new features and updates.
- Improved collaboration: By using Git as the single source of truth, teams can collaborate more effectively, with clear visibility into who made changes and why.
- Enhanced security: Versioning and automating deployments reduce the risk of manual errors and unauthorized changes, enhancing the overall security of your deployment process.
- Scalability: GitOps enables you to manage deployments and teams across multiple environments and applications, making it easier to scale your integration solutions.
- Resilience: Automated rollbacks and recovery processes ensure that you can quickly revert to a previous state if something goes wrong, improving the resilience of your deployments.

Implementing GitOps With MuleSoft

Here's a step-by-step guide to implementing the GitOps approach for your MuleSoft deployment.
The proposed solution is based on gbartolonifcg/mule-deployer-cli, a command-line tool packaged as a Docker image and designed to simplify the deployment of MuleSoft applications to the Anypoint Platform Runtime Plane, including CloudHub 2.0. It leverages the mule-maven-plugin and the DataWeave language to automate and orchestrate the deployment process, enabling developers to deploy their applications effortlessly. The basic steps to implement the solution follow.

1. Define Your Configuration

Create a YAML manifest file that specifies the configuration for your MuleSoft deployment. This file must include details such as artifact coordinates, deployment type, and environment-specific parameters. Here is an example manifest for a CloudHub 2.0 deployment:

```yaml
artifact:
  artifactId: example-mulesoft-app
  groupId: "com.example"
  version: 1.0.0
deploymentType: cloudhub2Deployment
configuration:
  uri: https://eu1.anypoint.mulesoft.com/
  muleVersion: "4.5.1"
  applicationName: example-mulesoft-app
  target: "your-target"
  provider: "your-provider"
  environment: Dev
  replicas: "1"
  vCores: "0.2"
  businessGroupId: "your-business-group-id"
  properties:
    env: dev
    anypoint.platform.base_uri: https://eu1.anypoint.mulesoft.com/
    anypoint.platform.client_id: "your-client-id"
  secureProperties:
    anypoint.platform.client_secret: "your-client-secret"
  connectedAppClientId: "your-app-client-id"
  connectedAppClientSecret: "your-app-client-secret"
  connectedAppGrantType: "client_credentials"
  integrations:
    services:
      objectStoreV2:
        enabled: true
  deploymentSettings:
    generateDefaultPublicUrl: true
    http:
      inbound:
        publicURL: https://api-dev.example.com/example-mulesoft-app
```

2. Version Your Configuration in Git

Commit your YAML manifest file to a Git repository. This repository will serve as the single source of truth for your deployment configurations.

```shell
git add example-mulesoft-app.yaml
git commit -m "Add deployment manifest for example-mulesoft-app"
git push origin main
```

3. Automate Your Deployment

Set up an automated pipeline to trigger deployments based on changes in the Git repository. Tools like Jenkins, GitLab CI/CD, or GitHub Actions can be used to create workflows that deploy your MuleSoft application whenever a change is detected. Below is an example of how you can configure a GitHub Actions workflow to trigger a deployment.

```yaml
# Example GitHub Actions workflow
name: Deploy MuleSoft Application

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Deploy to CloudHub 2.0
        run: |
          docker run --rm -v $(pwd)/example-mulesoft-app.yaml:/deployment.yaml gbartolonifcg/mule-deployer-cli runtime-deploy
```

4. Monitor and Validate

After the deployment, monitor your application in the Anypoint Platform to ensure it is running as expected. Validate that the configurations are correctly applied and that the application is functioning properly.

Conclusion

Managing MuleSoft deployments with a GitOps approach represents a great improvement in operational efficiency, consistency, and security. By leveraging the power of Git for version control and the automation capabilities of modern CI/CD tools, you can achieve faster, more reliable, and more secure deployments. Embrace this innovative methodology to revolutionize your MuleSoft deployments and stay ahead in the rapidly evolving integration landscape. For any suggestions, comments, or inquiries, please feel free to reach out. Happy integrating!
Before containerization made it so easy to prepare images for virtualization, it was quite an art to prepare custom ISO images to boot from CD. Later, these images were used to boot virtual machines from. In other words, ISO images were precursors of container images.

It so happens that I had a couple of unfortunate run-ins with the Windows Docker client. Even when not running any containers, the Windows memory manager would hand it as much memory as possible, slowing down whatever I was busy with. I hence banned the Windows Docker client from my machine. Please do not get me wrong. I do not hate Docker — just its Windows client. This step forced me to move back in time. I started running virtual machines directly on Hyper-V, the Windows hypervisor. Forming Kubernetes clusters on Windows then became a happy hobby for me, as can be seen from my past posts published here at DZone.

Shoemaker, Why Do You Go Barefoot?

After following the same mouse clicks to create virtual machines in Hyper-V Manager for many an hour, I realized that I am like a shoemaker who goes barefoot: I build DevOps pipelines for an hourly rate, but waste time on mouse clicks? Challenge accepted. I duckduckgo'd and read that it is possible to create virtual machines using PowerShell. It did not take a week to have a script that creates a new virtual machine, as can be seen here. A sister script can start a virtual machine that is turned off.

An Old Art Rediscovered

This was great, but I realized I was still doing mouse clicks when installing Ubuntu. Automating this looked like a tougher nut to crack. One has to unpack an ISO image, manipulate it some way or another, and then package it again, taking care to leave intact whatever instructs a computer how to boot. Fortunately, I found an excellent guide on how to do just this. The process consists of three steps:

1. Unpack the Ubuntu ISO boot image.
2. Manipulate the content:
   - Move the master boot record (MBR) out.
   - Specify what users normally do on the GUI and customize what is installed and run during installation. This is done using a subset of Ubuntu's Cloud-init language (a minimal sketch of such a file follows this list). See here for the instructions I created.
   - Instruct the bootloader (Grub, in this case) where to find the custom boot instructions and not to wait for user input. Here is the Grub config I settled on.
3. Package it all using an application called Xorriso.

For the wizards of this ancient craft, Xorriso serves as their magic wand. It has pages of documentation in something that resembles a spell book. I will have to dirty my hands to understand it fully, but my current (and most likely faulty) understanding is that it creates boot partitions, loads the MBR that was copied out, and does something with the Cloud-init-like instructions to create an amended ISO image.
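For orientation, here is a minimal sketch of what such an autoinstall user-data file can look like. It is not the author's actual file (that one is linked above); the hostname, username, package list, and password hash are placeholders.

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: ubuntu-vm          # placeholder hostname
    username: ubuntu             # placeholder user
    password: "<crypted-hash>"   # generate with: openssl passwd -6
  ssh:
    install-server: true         # install and enable OpenSSH during installation
  packages:
    - htop                       # extra packages to bake into the image
  late-commands:
    - echo 'installed by autoinstall' > /target/etc/motd
```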
Ansible for the Finishing Touches

Although it was with great satisfaction that I managed to boot Ubuntu 22 from PowerShell without any further input from me, what about the next time, when Ubuntu brings out a new version? True DevOps mandates documenting the process not in ASCII, but in some script ready to run when needed. Ansible shows its versatility in that I managed to do just this in an afternoon. The secret is to instruct Ansible that it is a local action. In other words, do not use SSH to target a machine to receive instruction; the Ansible controller is also the student:

```yaml
- hosts: localhost
  connection: local
```

The full play is given next and provides another view of what was explained above:

```yaml
# stamp_images.yml
- hosts: localhost
  connection: local
  become: true

  vars_prompt:
    - name: "base_iso_location"
      prompt: "Enter the path to the base image"
      private: no
      default: /tmp/ubuntu-22.04.4-live-server-amd64.iso

  tasks:
    - name: Install 7Zip
      ansible.builtin.apt:
        name: p7zip-full
        state: present

    - name: Install Xorriso
      ansible.builtin.apt:
        name: xorriso
        state: present

    - name: Unpack ISO
      ansible.builtin.command:
        cmd: "7z -y x {{ base_iso_location }} -o/tmp/source-files"

    - name: Copy boot partitions
      ansible.builtin.copy:
        src: /tmp/source-files/[BOOT]/
        dest: /tmp/BOOT

    - name: Delete working boot partitions
      ansible.builtin.file:
        path: /tmp/source-files/[BOOT]
        state: absent

    - name: Copy files for Ubuntu Bare
      ansible.builtin.copy:
        src: bare/source-files/bare_ubuntu
        dest: /tmp/source-files/

    - name: Copy boot config for Ubuntu Bare
      ansible.builtin.copy:
        src: bare/source-files/boot/grub/grub.cfg
        dest: /tmp/source-files/boot/grub/grub.cfg

    - name: Stamp bare image
      ansible.builtin.command:
        cmd: xorriso -as mkisofs -r -V 'Ubuntu 22.04 LTS AUTO (EFIBIOS)' -o ../ubuntu-22.04-wormhole-autoinstall-bare_V5_1.iso --grub2-mbr ../BOOT/1-Boot-NoEmul.img -partition_offset 16 --mbr-force-bootable -append_partition 2 28732ac11ff8d211ba4b00a0c93ec93b ../BOOT/2-Boot-NoEmul.img -appended_part_as_gpt -iso_mbr_part_type a2a0d0ebe5b9334487c068b6b72699c7 -c '/boot.catalog' -b '/boot/grub/i386-pc/eltorito.img' -no-emul-boot -boot-load-size 4 -boot-info-table --grub2-boot-info -eltorito-alt-boot -e '--interval:appended_partition_2:::' -no-emul-boot .
        chdir: /tmp/source-files

    - name: Copy files for Ubuntu Atomika
      ansible.builtin.copy:
        src: atomika/source-files/atomika_ubuntu
        dest: /tmp/source-files/

    - name: Copy boot config for Ubuntu Atomika
      ansible.builtin.copy:
        src: atomika/source-files/boot/grub/grub.cfg
        dest: /tmp/source-files/boot/grub/grub.cfg

    - name: Stamp Atomika image
      ansible.builtin.command:
        cmd: xorriso -as mkisofs -r -V 'Ubuntu 22.04 LTS AUTO (EFIBIOS)' -o ../ubuntu-22.04-wormhole-autoinstall-atomika_V5_1.iso --grub2-mbr ../BOOT/1-Boot-NoEmul.img -partition_offset 16 --mbr-force-bootable -append_partition 2 28732ac11ff8d211ba4b00a0c93ec93b ../BOOT/2-Boot-NoEmul.img -appended_part_as_gpt -iso_mbr_part_type a2a0d0ebe5b9334487c068b6b72699c7 -c '/boot.catalog' -b '/boot/grub/i386-pc/eltorito.img' -no-emul-boot -boot-load-size 4 -boot-info-table --grub2-boot-info -eltorito-alt-boot -e '--interval:appended_partition_2:::' -no-emul-boot .
        chdir: /tmp/source-files
```

Note the magic of the Xorriso command used here to prepare two images: one with and one without support for Kubernetes. The only caveat is to have a machine with Ansible installed to run this play from. The images produced by the above play can be downloaded from here and pre-install a very recent version of Ansible.

Conclusion

This post went retro, but it is important to revisit where things started to gain an understanding of why things are the way they are. Windows and containers, furthermore, do not mix that well, and any investigation into ways to make the days of developers better should be welcomed. I referred to parts of the code above, but the full project can be viewed on GitHub.
For decades now, software projects have relied on messaging APIs to exchange data. In the Java/Java EE ecosystem, this method of asynchronous communication has been standardized by the JMS specification. In many cases, individuals and organizations leverage the Red Hat JBoss Enterprise Application Platform (JBoss EAP) to act as message-oriented middleware (MOM), which facilitates the management of message queues and topics. Messaging ensures that no messages are lost as they are transmitted from the client and delivered to interested parties. On top of that, JBoss EAP provides authentication and other security-focused capabilities alongside the management functions. In this article, we'll show how to fully automate the setup of JBoss EAP and a JMS queue using Ansible so that we can easily make this service available.

Prerequisites and Installation

Install Ansible

First, we'll set up our Ansible control machine, which is where the automation will be executed. On this system, we need to install Ansible as the first step:

```shell
$ sudo dnf install -y ansible-core
```

Note that the package name has changed recently from ansible to ansible-core.

Configure Ansible To Use Red Hat Automation Hub

An extension to Ansible dedicated to Red Hat JBoss EAP, an Ansible collection, is available from Automation Hub. Red Hat customers need to add credentials and the location for Red Hat Automation Hub to their Ansible configuration file (ansible.cfg) to be able to install the content using the ansible-galaxy command-line tool. Be sure to replace the token placeholder with the API token you retrieved from Automation Hub. For more information about using Red Hat Automation Hub, please refer to the associated documentation.

```ini
# ansible.cfg
[defaults]
host_key_checking = False
retry_files_enabled = False
nocows = 1

[inventory]
# fail more helpfully when the inventory file does not parse (Ansible 2.4+)
unparsed_is_failed=true

[galaxy]
server_list = automation_hub, galaxy

[galaxy_server.galaxy]
url=https://galaxy.ansible.com/

[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=<paste-your-token-here>
```

Install the Ansible Collection for JBoss EAP

With this configuration, we can now install the Ansible collection for JBoss EAP (redhat.eap) available on Red Hat Ansible Automation Hub:

```shell
$ ansible-galaxy collection install redhat.eap
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Downloading https://console.redhat.com/api/automation-hub/v3/plugin/ansible/content/published/collections/artifacts/redhat-eap-1.3.4.tar.gz to /root/.ansible/tmp/ansible-local-2529rs7zh7/tmps_4n2eyj/redhat-eap-1.3.4-lr8dvcxo
Installing 'redhat.eap:1.3.4' to '/root/.ansible/collections/ansible_collections/redhat/eap'
Downloading https://console.redhat.com/api/automation-hub/v3/plugin/ansible/content/published/collections/artifacts/redhat-runtimes_common-1.1.0.tar.gz to /root/.ansible/tmp/ansible-local-2529rs7zh7/tmps_4n2eyj/redhat-runtimes_common-1.1.0-o6qfkgju
redhat.eap:1.3.4 was installed successfully
Installing 'redhat.runtimes_common:1.1.0' to '/root/.ansible/collections/ansible_collections/redhat/runtimes_common'
Downloading https://console.redhat.com/api/automation-hub/v3/plugin/ansible/content/published/collections/artifacts/ansible-posix-1.5.4.tar.gz to /root/.ansible/tmp/ansible-local-2529rs7zh7/tmps_4n2eyj/ansible-posix-1.5.4-4pgukpuo
redhat.runtimes_common:1.1.0 was installed successfully
Installing 'ansible.posix:1.5.4' to '/root/.ansible/collections/ansible_collections/ansible/posix'
ansible.posix:1.5.4 was installed successfully
```

As we will describe a little later on, this extension for Ansible will manage the entire installation and configuration of the Java application server on the target systems.

Inventory File

Before we can start using our collection, we need to provide the inventory of targets to Ansible. There are several ways to provide this information to the automation tool, but for the purposes of this article, we elected to use a simple ini-formatted inventory file. To easily reproduce this article's demonstration, you can use the same control node as the target. This also removes the need to deploy the required SSH key on all the systems involved. To do so, simply use the following inventory file by creating a file called inventory:

```ini
[all]
localhost ansible_connection=local

[messaging_servers]
localhost ansible_connection=local
```

Deploying JBoss EAP

JBoss EAP Installation

Before we configure the JMS queues that will be configured by Ansible, we'll first deploy JBoss EAP. Once the server is successfully running on the target system, we'll adjust the automation to add the required configuration to set up the messaging layer. This is purely for didactic purposes. Since we can leverage the content of the redhat.eap collection, the playbook to install EAP and set it up as a systemd service on the target system is minimal. Create a file called eap_jms.yml with the following content:

```yaml
---
- name: "Deploy a JBoss EAP"
  hosts: messaging_servers
  vars:
    eap_apply_cp: true
    eap_version: 7.4.0
    eap_offline_install: false
    eap_config_base: 'standalone-full.xml'
  collections:
    - redhat.eap
  roles:
    - eap_install
    - eap_systemd
```

Note that the Ansible collection for JBoss EAP will also take care of downloading the required assets from the Red Hat Customer Portal (the archive containing the Java app server files). However, one does need to provide the credentials associated with a service account. A Red Hat customer can manage service accounts using the hybrid cloud console. Within this portal, on the service accounts tab, you can create a new service account if one does not already exist.

Note: The values obtained from the hybrid cloud console are sensitive and should be managed accordingly. For the purpose of this article, the value is passed to the ansible-playbook command line. Alternatively, ansible-vault could be used to enforce additional defense mechanisms:

```shell
$ ansible-playbook -i inventory -e rhn_username=<client_id> -e rhn_password=<client_secret> eap_jms.yml

PLAY [Deploy a JBoss EAP] ******************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [redhat.eap.eap_install : Validating arguments against arg spec 'main'] ***
ok: [localhost]

TASK [redhat.eap.eap_install : Ensure prerequirements are fullfilled.] *********
included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/prereqs.yml for localhost

TASK [redhat.eap.eap_install : Validate credentials] ***************************
ok: [localhost]

TASK [redhat.eap.eap_install : Validate existing zipfiles for offline installs] ***
skipping: [localhost]

TASK [redhat.eap.eap_install : Validate existing zipfiles for offline installs] ***
skipping: [localhost]

TASK [redhat.eap.eap_install : Check that required packages list has been provided.] ***
ok: [localhost]

TASK [redhat.eap.eap_install : Prepare packages list] **************************
skipping: [localhost]

TASK [redhat.eap.eap_install : Add JDK package java-11-openjdk-headless to packages list] ***
ok: [localhost]

TASK [redhat.eap.eap_install : Install required packages (4)] ******************
ok: [localhost]

TASK [redhat.eap.eap_install : Ensure required local user exists.] *************
included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/user.yml for localhost

TASK [redhat.eap.eap_install : Check arguments] ********************************
ok: [localhost]

TASK [redhat.eap.eap_install : Set eap group] **********************************
ok: [localhost]

TASK [redhat.eap.eap_install : Ensure group eap exists.] ***********************
ok: [localhost]

TASK [redhat.eap.eap_install : Ensure user eap exists.] ************************
ok: [localhost]

TASK [redhat.eap.eap_install : Ensure workdir /opt/jboss_eap/ exists.] *********
ok: [localhost]

TASK [redhat.eap.eap_install : Ensure archive_dir /opt/jboss_eap/ exists.] *****
ok: [localhost]

TASK [redhat.eap.eap_install : Ensure server is installed] *********************
included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/install.yml for localhost

TASK [redhat.eap.eap_install : Check arguments] ********************************
ok: [localhost]

TASK [redhat.eap.eap_install : Check local download archive path] **************
ok: [localhost]

TASK [redhat.eap.eap_install : Set download paths] *****************************
ok: [localhost]

TASK [redhat.eap.eap_install : Check target archive: /opt/jboss_eap//jboss-eap-7.4.0.zip] ***
ok: [localhost]

TASK [redhat.eap.eap_install : Retrieve archive from website: https://github.com/eap/eap/releases/download] ***
skipping: [localhost]

TASK [redhat.eap.eap_install : Retrieve archive from RHN] **********************
included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/install/rhn.yml for localhost

TASK [redhat.eap.eap_install : Check arguments] ********************************
ok: [localhost]

TASK [Download JBoss EAP from CSP] *********************************************

TASK [redhat.eap.eap_utils : Check arguments] **********************************
ok: [localhost]

TASK [redhat.eap.eap_utils : Retrieve product download using JBoss Network API] ***
ok: [localhost]

TASK [redhat.eap.eap_utils : Determine install zipfile from search results] ****
ok: [localhost]

TASK [redhat.eap.eap_utils : Download Red Hat Single Sign-On] ******************
ok: [localhost]

TASK [redhat.eap.eap_install : Install server using RPM] ***********************
skipping: [localhost]

TASK [redhat.eap.eap_install : Check downloaded archive] ***********************
ok: [localhost]

TASK [redhat.eap.eap_install : Copy archive to target nodes] *******************
changed: [localhost]

TASK [redhat.eap.eap_install : Check target archive: /opt/jboss_eap//jboss-eap-7.4.0.zip] ***
ok: [localhost]

TASK [redhat.eap.eap_install : Verify target archive state: /opt/jboss_eap//jboss-eap-7.4.0.zip] ***
ok: [localhost]

TASK [redhat.eap.eap_install : Read target directory information: /opt/jboss_eap/jboss-eap-7.4/] ***
ok: [localhost]

TASK [redhat.eap.eap_install : Extract files from /opt/jboss_eap//jboss-eap-7.4.0.zip into /opt/jboss_eap/.] ***
changed: [localhost]

TASK [redhat.eap.eap_install : Note: decompression was not executed] ***********
skipping: [localhost]

TASK [redhat.eap.eap_install : Read information on server home directory: /opt/jboss_eap/jboss-eap-7.4/] ***
ok: [localhost]

TASK [redhat.eap.eap_install : Check state of server home directory: /opt/jboss_eap/jboss-eap-7.4/] ***
ok: [localhost]

TASK [redhat.eap.eap_install : Set instance name] ******************************
ok: [localhost]

TASK [redhat.eap.eap_install : Deploy custom configuration] ********************
skipping: [localhost]

TASK [redhat.eap.eap_install : Deploy configuration] ***************************
changed: [localhost]

TASK [redhat.eap.eap_install : Ensure required parameters for cumulative patch application are provided.] ***
skipping: [localhost]

TASK [Apply latest cumulative patch] *******************************************
skipping: [localhost]

TASK [redhat.eap.eap_install : Ensure required parameters for elytron adapter are provided.] ***
skipping: [localhost]

TASK [Install elytron adapter] *************************************************
skipping: [localhost]

TASK [redhat.eap.eap_install : Install server using Prospero] ******************
skipping: [localhost]

TASK [redhat.eap.eap_install : Check eap install directory state] **************
ok: [localhost]

TASK [redhat.eap.eap_install : Validate conditions] ****************************
ok: [localhost]

TASK [Ensure firewalld configuration allows server port (if enabled).] *********
skipping: [localhost]

TASK [redhat.eap.eap_systemd : Validating arguments against arg spec 'main'] ***
ok: [localhost]

TASK [redhat.eap.eap_systemd : Check arguments] ********************************
ok: [localhost]

TASK [redhat.eap.eap_systemd : Check current EAP patch installed] **************
skipping: [localhost]

TASK [redhat.eap.eap_systemd : Check arguments for yaml configuration] *********
skipping: [localhost]

TASK [Ensure required local user and group exists.] ****************************

TASK [redhat.eap.eap_install : Check arguments] ********************************
ok: [localhost]

TASK [redhat.eap.eap_install : Set eap group] **********************************
ok: [localhost]

TASK [redhat.eap.eap_install : Ensure group eap exists.] ***********************
ok: [localhost]

TASK [redhat.eap.eap_install : Ensure user eap exists.] ************************
************************ ok: [localhost] TASK [redhat.eap.eap_systemd : Set destination directory for configuration] **** ok: [localhost] TASK [redhat.eap.eap_systemd : Set instance destination directory for configuration] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Check arguments] ******************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set base directory for instance] **************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Check arguments] ******************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set instance name] ****************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set instance name] ****************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set bind address] ******************************* ok: [localhost] TASK [redhat.eap.eap_systemd : Create basedir /opt/jboss_eap/jboss-eap-7.4//standalone for instance: eap] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Create deployment directories for instance: eap] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Deploy custom configuration] ******************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Deploy configuration] *************************** ok: [localhost] TASK [redhat.eap.eap_systemd : Include YAML configuration extension] *********** skipping: [localhost] TASK [redhat.eap.eap_systemd : Check YAML configuration is disabled] *********** ok: [localhost] TASK [redhat.eap.eap_systemd : Set systemd envfile destination] **************** ok: [localhost] TASK [redhat.eap.eap_systemd : Determine JAVA_HOME for selected JVM RPM] ******* ok: [localhost] TASK [redhat.eap.eap_systemd : Set systemd unit file destination] ************** ok: [localhost] TASK [redhat.eap.eap_systemd : Deploy service instance configuration: /etc//eap.conf] *** changed: [localhost] TASK [redhat.eap.eap_systemd : Deploy Systemd configuration for service: /usr/lib/systemd/system/eap.service] *** changed: [localhost] TASK [redhat.eap.eap_systemd : Perform daemon-reload to ensure the changes are picked up] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Ensure service is started] ********************** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_systemd/tasks/service.yml for localhost TASK [redhat.eap.eap_systemd : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_systemd : Set instance eap state to started] ************** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=59 changed=6 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0 Validating the Installation Before going any further with our automation, we will be thorough and add a validation step to double-check that the application server is not only running but also functional. This will ensure, down the road, that any JMS-related issue only affects this subsystem. 
The Ansible collection for JBoss EAP comes with a handy role, called eap_validation, for this purpose, so it's fairly easy to add this step to our playbook: --- - name: "Deploy a JBoss EAP" hosts: messaging_servers vars: eap_apply_cp: true eap_version: 7.4.0 eap_offline_install: false eap_config_base: 'standalone-full.xml' collections: - redhat.eap roles: - eap_install - eap_systemd - eap_validation Let's execute our playbook once again and observe the execution of this validation step: $ ansible-playbook -i inventory -e rhn_username=<client_id> -e rhn_password=<client_secret> eap_jms.yml PLAY [Deploy a JBoss EAP] ****************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [redhat.eap.eap_install : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [redhat.eap.eap_install : Ensure prerequirements are fullfilled.] ********* included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/prereqs.yml for localhost TASK [redhat.eap.eap_install : Validate credentials] *************************** ok: [localhost] TASK [redhat.eap.eap_install : Validate existing zipfiles for offline installs] *** skipping: [localhost] TASK [redhat.eap.eap_install : Validate existing zipfiles for offline installs] *** skipping: [localhost] TASK [redhat.eap.eap_install : Check that required packages list has been provided.] *** ok: [localhost] TASK [redhat.eap.eap_install : Prepare packages list] ************************** skipping: [localhost] TASK [redhat.eap.eap_install : Add JDK package java-11-openjdk-headless to packages list] *** ok: [localhost] TASK [redhat.eap.eap_install : Install required packages (4)] ****************** ok: [localhost] TASK [redhat.eap.eap_install : Ensure required local user exists.] ************* included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/user.yml for localhost TASK [redhat.eap.eap_install : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_install : Set eap group] ********************************** ok: [localhost] TASK [redhat.eap.eap_install : Ensure group eap exists.] *********************** changed: [localhost] TASK [redhat.eap.eap_install : Ensure user eap exists.] ************************ changed: [localhost] TASK [redhat.eap.eap_install : Ensure workdir /opt/jboss_eap/ exists.] ********* changed: [localhost] TASK [redhat.eap.eap_install : Ensure archive_dir /opt/jboss_eap/ exists.] 
***** ok: [localhost] TASK [redhat.eap.eap_install : Ensure server is installed] ********************* included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/install.yml for localhost TASK [redhat.eap.eap_install : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_install : Check local download archive path] ************** ok: [localhost] TASK [redhat.eap.eap_install : Set download paths] ***************************** ok: [localhost] TASK [redhat.eap.eap_install : Check target archive: /opt/jboss_eap//jboss-eap-7.4.0.zip] *** ok: [localhost] TASK [redhat.eap.eap_install : Retrieve archive from website: https://github.com/eap/eap/releases/download] *** skipping: [localhost] TASK [redhat.eap.eap_install : Retrieve archive from RHN] ********************** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/install/rhn.yml for localhost TASK [redhat.eap.eap_install : Check arguments] ******************************** ok: [localhost] TASK [Download JBoss EAP from CSP] ********************************************* TASK [redhat.eap.eap_utils : Check arguments] ********************************** ok: [localhost] TASK [redhat.eap.eap_utils : Retrieve product download using JBoss Network API] *** ok: [localhost] TASK [redhat.eap.eap_utils : Determine install zipfile from search results] **** ok: [localhost] TASK [redhat.eap.eap_utils : Download Red Hat Single Sign-On] ****************** ok: [localhost] TASK [redhat.eap.eap_install : Install server using RPM] *********************** skipping: [localhost] TASK [redhat.eap.eap_install : Check downloaded archive] *********************** ok: [localhost] TASK [redhat.eap.eap_install : Copy archive to target nodes] ******************* changed: [localhost] TASK [redhat.eap.eap_install : Check target archive: /opt/jboss_eap//jboss-eap-7.4.0.zip] *** ok: [localhost] TASK [redhat.eap.eap_install : Verify target archive state: /opt/jboss_eap//jboss-eap-7.4.0.zip] *** ok: [localhost] TASK [redhat.eap.eap_install : Read target directory information: /opt/jboss_eap/jboss-eap-7.4/] *** ok: [localhost] TASK [redhat.eap.eap_install : Extract files from /opt/jboss_eap//jboss-eap-7.4.0.zip into /opt/jboss_eap/.] *** changed: [localhost] TASK [redhat.eap.eap_install : Note: decompression was not executed] *********** skipping: [localhost] TASK [redhat.eap.eap_install : Read information on server home directory: /opt/jboss_eap/jboss-eap-7.4/] *** ok: [localhost] TASK [redhat.eap.eap_install : Check state of server home directory: /opt/jboss_eap/jboss-eap-7.4/] *** ok: [localhost] TASK [redhat.eap.eap_install : Set instance name] ****************************** ok: [localhost] TASK [redhat.eap.eap_install : Deploy custom configuration] ******************** skipping: [localhost] TASK [redhat.eap.eap_install : Deploy configuration] *************************** changed: [localhost] TASK [redhat.eap.eap_install : Ensure required parameters for cumulative patch application are provided.] *** skipping: [localhost] TASK [Apply latest cumulative patch] ******************************************* skipping: [localhost] TASK [redhat.eap.eap_install : Ensure required parameters for elytron adapter are provided.] 
*** skipping: [localhost] TASK [Install elytron adapter] ************************************************* skipping: [localhost] TASK [redhat.eap.eap_install : Install server using Prospero] ****************** skipping: [localhost] TASK [redhat.eap.eap_install : Check eap install directory state] ************** ok: [localhost] TASK [redhat.eap.eap_install : Validate conditions] **************************** ok: [localhost] TASK [Ensure firewalld configuration allows server port (if enabled).] ********* skipping: [localhost] TASK [redhat.eap.eap_systemd : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_systemd : Check current EAP patch installed] ************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Check arguments for yaml configuration] ********* skipping: [localhost] TASK [Ensure required local user and group exists.] **************************** TASK [redhat.eap.eap_install : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_install : Set eap group] ********************************** ok: [localhost] TASK [redhat.eap.eap_install : Ensure group eap exists.] *********************** ok: [localhost] TASK [redhat.eap.eap_install : Ensure user eap exists.] ************************ ok: [localhost] TASK [redhat.eap.eap_systemd : Set destination directory for configuration] **** ok: [localhost] TASK [redhat.eap.eap_systemd : Set instance destination directory for configuration] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Check arguments] ******************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set base directory for instance] **************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Check arguments] ******************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set instance name] ****************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set instance name] ****************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set bind address] ******************************* ok: [localhost] TASK [redhat.eap.eap_systemd : Create basedir /opt/jboss_eap/jboss-eap-7.4//standalone for instance: eap] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Create deployment directories for instance: eap] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Deploy custom configuration] ******************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Deploy configuration] *************************** ok: [localhost] TASK [redhat.eap.eap_systemd : Include YAML configuration extension] *********** skipping: [localhost] TASK [redhat.eap.eap_systemd : Check YAML configuration is disabled] *********** ok: [localhost] TASK [redhat.eap.eap_systemd : Set systemd envfile destination] **************** ok: [localhost] TASK [redhat.eap.eap_systemd : Determine JAVA_HOME for selected JVM RPM] ******* ok: [localhost] TASK [redhat.eap.eap_systemd : Set systemd unit file destination] ************** ok: [localhost] TASK [redhat.eap.eap_systemd : Deploy service instance configuration: /etc//eap.conf] *** changed: [localhost] TASK [redhat.eap.eap_systemd : Deploy Systemd configuration for service: /usr/lib/systemd/system/eap.service] *** changed: [localhost] TASK [redhat.eap.eap_systemd : Perform daemon-reload to ensure the changes are picked up] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Ensure service is started] 
********************** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_systemd/tasks/service.yml for localhost TASK [redhat.eap.eap_systemd : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_systemd : Set instance eap state to started] ************** changed: [localhost] TASK [redhat.eap.eap_validation : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [redhat.eap.eap_validation : Ensure required parameters are provided.] **** ok: [localhost] TASK [redhat.eap.eap_validation : Ensure user eap were created.] *************** ok: [localhost] TASK [redhat.eap.eap_validation : Validate state of user: eap] ***************** ok: [localhost] TASK [redhat.eap.eap_validation : Ensure user eap were created.] *************** ok: [localhost] TASK [redhat.eap.eap_validation : Validate state of group: eap.] *************** ok: [localhost] TASK [redhat.eap.eap_validation : Wait for HTTP port 8080 to become available.] *** ok: [localhost] TASK [redhat.eap.eap_validation : Check if web connector is accessible] ******** ok: [localhost] TASK [redhat.eap.eap_validation : Populate service facts] ********************** ok: [localhost] TASK [redhat.eap.eap_validation : Check if service is running] ***************** ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [redhat.eap.eap_validation : Verify server's internal configuration] ****** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_validation/tasks/verify_with_cli_queries.yml for localhost => (item={'query': '/core-service=server-environment:read-attribute(name=start-gracefully)'}) included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_validation/tasks/verify_with_cli_queries.yml for localhost => (item={'query': '/subsystem=undertow/server=default-server/http-listener=default:read-attribute(name=enabled)'}) TASK [redhat.eap.eap_validation : Ensure required parameters are provided.] **** ok: [localhost] TASK [Use CLI query to validate service state: /core-service=server-environment:read-attribute(name=start-gracefully)] *** TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query '/core-service=server-environment:read-attribute(name=start-gracefully)'] *** ok: [localhost] TASK [redhat.eap.eap_validation : Validate CLI query was successful] *********** ok: [localhost] TASK [redhat.eap.eap_validation : Transform output to JSON] ******************** ok: [localhost] TASK [redhat.eap.eap_validation : Display transformed result] ****************** skipping: [localhost] TASK [redhat.eap.eap_validation : Check that query was successfully performed.] *** ok: [localhost] TASK [redhat.eap.eap_validation : Ensure required parameters are provided.] 
**** ok: [localhost] TASK [Use CLI query to validate service state: /subsystem=undertow/server=default-server/http-listener=default:read-attribute(name=enabled)] *** TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query '/subsystem=undertow/server=default-server/http-listener=default:read-attribute(name=enabled)'] *** ok: [localhost] TASK [redhat.eap.eap_validation : Validate CLI query was successful] *********** ok: [localhost] TASK [redhat.eap.eap_validation : Transform output to JSON] ******************** ok: [localhost] TASK [redhat.eap.eap_validation : Display transformed result] ****************** skipping: [localhost] TASK [redhat.eap.eap_validation : Check that query was successfully performed.] *** ok: [localhost] TASK [redhat.eap.eap_validation : Ensure yaml setup] *************************** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_validation/tasks/yaml_setup.yml for localhost TASK [Check standard-sockets configuration settings] *************************** TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=mail-smtp:read-attribute(name=host)] *** ok: [localhost] TASK [redhat.eap.eap_validation : Display result of standard-sockets configuration settings] *** ok: [localhost] TASK [Check ejb configuration settings] **************************************** TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query /subsystem=ejb3:read-attribute(name=default-resource-adapter-name)] *** ok: [localhost] TASK [redhat.eap.eap_validation : Display result of ejb configuration settings] *** ok: [localhost] TASK [Check ee configuration settings] ***************************************** TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query /subsystem=ee/service=default-bindings:read-attribute(name=jms-connection-factory)] *** ok: [localhost] TASK [redhat.eap.eap_validation : Display result of ee configuration settings] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=98 changed=9 unreachable=0 failed=0 skipped=24 rescued=0 ignored=0 If the execution of the playbook completed without error, validation of the application server passed successfully. Deploying JMS Queues on JBoss EAP Using Ansible Changing EAP Configuration Because the JMS subsystem is not used in the default JBoss EAP server configuration (standalone.xml), we also need to use a different profile (standalone-full.xml). 
This is why, in the playbook above, we are specifying the required configuration profile: --- - name: "Deploy a JBoss EAP" hosts: messaging_servers vars: eap_apply_cp: true eap_version: 7.4.0 eap_offline_install: false eap_config_base: 'standalone-full.xml' collections: - redhat.eap roles: - eap_install - eap_systemd - eap_validation Leveraging the YAML Config Feature of EAP Using Ansible In the previous section, JBoss EAP was installed and configured as a systemd service on the target systems. Now, we will update this automation to change the configuration of the app server to ensure a JMS queue is deployed and made available. In order to accomplish this goal, we just need to provide a YAML definition with the appropriate configuration for the JMS subsystem of JBoss EAP. This configuration file is used by the app server, on boot, to update its configuration. To achieve this, we need to add another file to our project that we named jms_configuration.yml.j2. While the content of the file itself is YAML, the extension is .j2 because it's a Jinja2 template, which allows us to take advantage of the advanced, dynamic capabilities provided by Ansible. jms_configuration.yml.j2: wildfly-configuration: subsystem: messaging-activemq: server: default: jms-queue: {{ queue.name }}: entries: - '{{ queue.entry }}' Below, you'll see the playbook updated with all the required parameters to deploy the JMS queue on JBoss EAP: --- - name: "Deploy a Red Hat JBoss EAP server and set up a JMS Queue" hosts: messaging_servers vars: eap_apply_cp: true eap_version: 7.4.0 eap_offline_install: false eap_config_base: 'standalone-full.xml' eap_enable_yml_config: True queue: name: MyQueue entry: 'java:/jms/queue/MyQueue' eap_yml_configs: - jms_configuration.yml.j2 collections: - redhat.eap roles: - eap_install - eap_systemd - eap_validation Let's execute this playbook again: $ ansible-playbook -i inventory -e rhn_username=<client_id> -e rhn_password=<client_secret> eap_jms.yml PLAY [Deploy a Red Hat JBoss EAP server and set up a JMS Queue] **************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [redhat.eap.eap_install : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [redhat.eap.eap_install : Ensure prerequirements are fullfilled.] ********* included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/prereqs.yml for localhost TASK [redhat.eap.eap_install : Validate credentials] *************************** ok: [localhost] TASK [redhat.eap.eap_install : Validate existing zipfiles for offline installs] *** skipping: [localhost] TASK [redhat.eap.eap_install : Validate existing zipfiles for offline installs] *** skipping: [localhost] TASK [redhat.eap.eap_install : Check that required packages list has been provided.] *** ok: [localhost] TASK [redhat.eap.eap_install : Prepare packages list] ************************** skipping: [localhost] TASK [redhat.eap.eap_install : Add JDK package java-11-openjdk-headless to packages list] *** ok: [localhost] TASK [redhat.eap.eap_install : Install required packages (4)] ****************** ok: [localhost] TASK [redhat.eap.eap_install : Ensure required local user exists.]
************* included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/user.yml for localhost TASK [redhat.eap.eap_install : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_install : Set eap group] ********************************** ok: [localhost] TASK [redhat.eap.eap_install : Ensure group eap exists.] *********************** ok: [localhost] TASK [redhat.eap.eap_install : Ensure user eap exists.] ************************ ok: [localhost] TASK [redhat.eap.eap_install : Ensure workdir /opt/jboss_eap/ exists.] ********* ok: [localhost] TASK [redhat.eap.eap_install : Ensure archive_dir /opt/jboss_eap/ exists.] ***** ok: [localhost] TASK [redhat.eap.eap_install : Ensure server is installed] ********************* included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_install/tasks/install.yml for localhost TASK [redhat.eap.eap_install : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_install : Check local download archive path] ************** ok: [localhost] TASK [redhat.eap.eap_install : Set download paths] ***************************** ok: [localhost] TASK [redhat.eap.eap_install : Check target archive: /opt/jboss_eap//jboss-eap-7.4.0.zip] *** ok: [localhost] TASK [redhat.eap.eap_install : Retrieve archive from website: https://github.com/eap/eap/releases/download] *** skipping: [localhost] TASK [redhat.eap.eap_install : Retrieve archive from RHN] ********************** skipping: [localhost] TASK [redhat.eap.eap_install : Install server using RPM] *********************** skipping: [localhost] TASK [redhat.eap.eap_install : Check downloaded archive] *********************** ok: [localhost] TASK [redhat.eap.eap_install : Copy archive to target nodes] ******************* skipping: [localhost] TASK [redhat.eap.eap_install : Check target archive: /opt/jboss_eap//jboss-eap-7.4.0.zip] *** ok: [localhost] TASK [redhat.eap.eap_install : Verify target archive state: /opt/jboss_eap//jboss-eap-7.4.0.zip] *** ok: [localhost] TASK [redhat.eap.eap_install : Read target directory information: /opt/jboss_eap/jboss-eap-7.4/] *** ok: [localhost] TASK [redhat.eap.eap_install : Extract files from /opt/jboss_eap//jboss-eap-7.4.0.zip into /opt/jboss_eap/.] *** skipping: [localhost] TASK [redhat.eap.eap_install : Note: decompression was not executed] *********** ok: [localhost] => { "msg": "/opt/jboss_eap/jboss-eap-7.4/ already exists and version unchanged, skipping decompression" } TASK [redhat.eap.eap_install : Read information on server home directory: /opt/jboss_eap/jboss-eap-7.4/] *** ok: [localhost] TASK [redhat.eap.eap_install : Check state of server home directory: /opt/jboss_eap/jboss-eap-7.4/] *** ok: [localhost] TASK [redhat.eap.eap_install : Set instance name] ****************************** ok: [localhost] TASK [redhat.eap.eap_install : Deploy custom configuration] ******************** skipping: [localhost] TASK [redhat.eap.eap_install : Deploy configuration] *************************** ok: [localhost] TASK [redhat.eap.eap_install : Ensure required parameters for cumulative patch application are provided.] 
*** ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [Apply latest cumulative patch] ******************************************* TASK [redhat.eap.eap_utils : Check installation] ******************************* ok: [localhost] TASK [redhat.eap.eap_utils : Set patch directory] ****************************** ok: [localhost] TASK [redhat.eap.eap_utils : Set download patch archive path] ****************** ok: [localhost] TASK [redhat.eap.eap_utils : Set patch destination directory] ****************** ok: [localhost] TASK [redhat.eap.eap_utils : Check download patch archive path] **************** ok: [localhost] TASK [redhat.eap.eap_utils : Check local download archive path] **************** ok: [localhost] TASK [redhat.eap.eap_utils : Check local downloaded archive: jboss-eap-7.4.9-patch.zip] *** ok: [localhost] TASK [redhat.eap.eap_utils : Retrieve product download using JBossNetwork API] *** skipping: [localhost] TASK [redhat.eap.eap_utils : Determine patch versions list] ******************** skipping: [localhost] TASK [redhat.eap.eap_utils : Determine latest version] ************************* skipping: [localhost] TASK [redhat.eap.eap_utils : Determine install zipfile from search results] **** skipping: [localhost] TASK [redhat.eap.eap_utils : Determine selected patch from supplied version: 7.4.9] *** skipping: [localhost] TASK [redhat.eap.eap_utils : Check remote downloaded archive: /opt/jboss-eap-7.4.9-patch.zip] *** skipping: [localhost] TASK [redhat.eap.eap_utils : Download Red Hat EAP patch] *********************** skipping: [localhost] TASK [redhat.eap.eap_utils : Set download patch archive path] ****************** ok: [localhost] TASK [redhat.eap.eap_utils : Check remote download patch archive path] ********* ok: [localhost] TASK [redhat.eap.eap_utils : Copy patch archive to target nodes] *************** changed: [localhost] TASK [redhat.eap.eap_utils : Check patch state] ******************************** ok: [localhost] TASK [redhat.eap.eap_utils : Set checksum file path for patch] ***************** ok: [localhost] TASK [redhat.eap.eap_utils : Check /opt/jboss_eap/jboss-eap-7.4//.applied_patch_checksum_f641b6de2807fac18d2a56de7a27c1ea3611e5f3.txt state] *** ok: [localhost] TASK [redhat.eap.eap_utils : Print when patch has been applied already] ******** skipping: [localhost] TASK [redhat.eap.eap_utils : Check if management interface is reachable] ******* ok: [localhost] TASK [redhat.eap.eap_utils : Set apply CP conflict default strategy to default (if not defined): --override-all] *** ok: [localhost] TASK [redhat.eap.eap_utils : Apply patch /opt/jboss-eap-7.4.9-patch.zip to server installed in /opt/jboss_eap/jboss-eap-7.4/] *** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_utils/tasks/jboss_cli.yml for localhost TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query 'patch apply --override-all /opt/jboss-eap-7.4.9-patch.zip'] *** ok: [localhost] TASK [redhat.eap.eap_utils : Display patching result] ************************** ok: [localhost] => { "msg": "Apply patch operation result: {\n \"outcome\" : \"success\",\n \"response-headers\" : {\n \"operation-requires-restart\" : true,\n \"process-state\" : \"restart-required\"\n }\n}" } TASK [redhat.eap.eap_utils : Set checksum file] ******************************** changed: [localhost] 
TASK [redhat.eap.eap_utils : Set latest patch file] **************************** changed: [localhost] TASK [redhat.eap.eap_utils : Restart server to ensure patch content is running] *** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_utils/tasks/jboss_cli.yml for localhost TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query 'shutdown --restart'] *********** ok: [localhost] TASK [redhat.eap.eap_utils : Wait for management interface is reachable] ******* ok: [localhost] TASK [redhat.eap.eap_utils : Stop service if it was started for patching] ****** skipping: [localhost] TASK [redhat.eap.eap_utils : Display resulting output] ************************* skipping: [localhost] TASK [redhat.eap.eap_install : Ensure required parameters for elytron adapter are provided.] *** skipping: [localhost] TASK [Install elytron adapter] ************************************************* skipping: [localhost] TASK [redhat.eap.eap_install : Install server using Prospero] ****************** skipping: [localhost] TASK [redhat.eap.eap_install : Check eap install directory state] ************** ok: [localhost] TASK [redhat.eap.eap_install : Validate conditions] **************************** ok: [localhost] TASK [Ensure firewalld configuration allows server port (if enabled).] ********* skipping: [localhost] TASK [redhat.eap.eap_systemd : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_systemd : Check current EAP patch installed] ************** ok: [localhost] TASK [redhat.eap.eap_systemd : Check arguments for yaml configuration] ********* ok: [localhost] TASK [Ensure required local user and group exists.] **************************** TASK [redhat.eap.eap_install : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_install : Set eap group] ********************************** ok: [localhost] TASK [redhat.eap.eap_install : Ensure group eap exists.] *********************** ok: [localhost] TASK [redhat.eap.eap_install : Ensure user eap exists.] 
************************ ok: [localhost] TASK [redhat.eap.eap_systemd : Set destination directory for configuration] **** ok: [localhost] TASK [redhat.eap.eap_systemd : Set instance destination directory for configuration] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Check arguments] ******************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set base directory for instance] **************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Check arguments] ******************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set instance name] ****************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set instance name] ****************************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set bind address] ******************************* ok: [localhost] TASK [redhat.eap.eap_systemd : Create basedir /opt/jboss_eap/jboss-eap-7.4//standalone for instance: eap] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Create deployment directories for instance: eap] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Deploy custom configuration] ******************** skipping: [localhost] TASK [redhat.eap.eap_systemd : Deploy configuration] *************************** ok: [localhost] TASK [redhat.eap.eap_systemd : Include YAML configuration extension] *********** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_systemd/tasks/yml_config.yml for localhost TASK [redhat.eap.eap_systemd : Create YAML configuration directory] ************ skipping: [localhost] TASK [redhat.eap.eap_systemd : Enable YAML configuration extension] ************ skipping: [localhost] TASK [redhat.eap.eap_systemd : Create YAML configuration directory] ************ changed: [localhost] TASK [redhat.eap.eap_systemd : Enable YAML configuration extension] ************ changed: [localhost] TASK [redhat.eap.eap_systemd : Deploy YAML configuration files] **************** changed: [localhost] => (item=jms_configuration.yml.j2) TASK [redhat.eap.eap_systemd : Check YAML configuration is disabled] *********** skipping: [localhost] TASK [redhat.eap.eap_systemd : Set systemd envfile destination] **************** ok: [localhost] TASK [redhat.eap.eap_systemd : Determine JAVA_HOME for selected JVM RPM] ******* ok: [localhost] TASK [redhat.eap.eap_systemd : Set systemd unit file destination] ************** ok: [localhost] TASK [redhat.eap.eap_systemd : Deploy service instance configuration: /etc//eap.conf] *** changed: [localhost] TASK [redhat.eap.eap_systemd : Deploy Systemd configuration for service: /usr/lib/systemd/system/eap.service] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Perform daemon-reload to ensure the changes are picked up] *** ok: [localhost] TASK [redhat.eap.eap_systemd : Ensure service is started] ********************** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_systemd/tasks/service.yml for localhost TASK [redhat.eap.eap_systemd : Check arguments] ******************************** ok: [localhost] TASK [redhat.eap.eap_systemd : Set instance eap state to started] ************** ok: [localhost] TASK [redhat.eap.eap_validation : Validating arguments against arg spec 'main'] *** ok: [localhost] TASK [redhat.eap.eap_validation : Ensure required parameters are provided.] **** ok: [localhost] TASK [redhat.eap.eap_validation : Ensure user eap were created.] 
*************** ok: [localhost] TASK [redhat.eap.eap_validation : Validate state of user: eap] ***************** ok: [localhost] TASK [redhat.eap.eap_validation : Ensure user eap were created.] *************** ok: [localhost] TASK [redhat.eap.eap_validation : Validate state of group: eap.] *************** ok: [localhost] TASK [redhat.eap.eap_validation : Wait for HTTP port 8080 to become available.] *** ok: [localhost] TASK [redhat.eap.eap_validation : Check if web connector is accessible] ******** ok: [localhost] TASK [redhat.eap.eap_validation : Populate service facts] ********************** ok: [localhost] TASK [redhat.eap.eap_validation : Check if service is running] ***************** ok: [localhost] => { "changed": false, "msg": "All assertions passed" } TASK [redhat.eap.eap_validation : Verify server's internal configuration] ****** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_validation/tasks/verify_with_cli_queries.yml for localhost => (item={'query': '/core-service=server-environment:read-attribute(name=start-gracefully)'}) included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_validation/tasks/verify_with_cli_queries.yml for localhost => (item={'query': '/subsystem=undertow/server=default-server/http-listener=default:read-attribute(name=enabled)'}) TASK [redhat.eap.eap_validation : Ensure required parameters are provided.] **** ok: [localhost] TASK [Use CLI query to validate service state: /core-service=server-environment:read-attribute(name=start-gracefully)] *** TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query '/core-service=server-environment:read-attribute(name=start-gracefully)'] *** ok: [localhost] TASK [redhat.eap.eap_validation : Validate CLI query was successful] *********** ok: [localhost] TASK [redhat.eap.eap_validation : Transform output to JSON] ******************** ok: [localhost] TASK [redhat.eap.eap_validation : Display transformed result] ****************** skipping: [localhost] TASK [redhat.eap.eap_validation : Check that query was successfully performed.] *** ok: [localhost] TASK [redhat.eap.eap_validation : Ensure required parameters are provided.] **** ok: [localhost] TASK [Use CLI query to validate service state: /subsystem=undertow/server=default-server/http-listener=default:read-attribute(name=enabled)] *** TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query '/subsystem=undertow/server=default-server/http-listener=default:read-attribute(name=enabled)'] *** ok: [localhost] TASK [redhat.eap.eap_validation : Validate CLI query was successful] *********** ok: [localhost] TASK [redhat.eap.eap_validation : Transform output to JSON] ******************** ok: [localhost] TASK [redhat.eap.eap_validation : Display transformed result] ****************** skipping: [localhost] TASK [redhat.eap.eap_validation : Check that query was successfully performed.] 
*** ok: [localhost] TASK [redhat.eap.eap_validation : Ensure yaml setup] *************************** included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_validation/tasks/yaml_setup.yml for localhost TASK [Check standard-sockets configuration settings] *************************** TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=mail-smtp:read-attribute(name=host)] *** ok: [localhost] TASK [redhat.eap.eap_validation : Display result of standard-sockets configuration settings] *** ok: [localhost] TASK [Check ejb configuration settings] **************************************** TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query /subsystem=ejb3:read-attribute(name=default-resource-adapter-name)] *** ok: [localhost] TASK [redhat.eap.eap_validation : Display result of ejb configuration settings] *** ok: [localhost] TASK [Check ee configuration settings] ***************************************** TASK [redhat.eap.eap_utils : Ensure required params for JBoss CLI have been provided] *** ok: [localhost] TASK [redhat.eap.eap_utils : Ensure server's management interface is reachable] *** ok: [localhost] TASK [redhat.eap.eap_utils : Execute CLI query /subsystem=ee/service=default-bindings:read-attribute(name=jms-connection-factory)] *** ok: [localhost] TASK [redhat.eap.eap_validation : Display result of ee configuration settings] *** ok: [localhost] RUNNING HANDLER [redhat.eap.eap_systemd : Restart Wildfly] ********************* included: /root/.ansible/collections/ansible_collections/redhat/eap/roles/eap_systemd/tasks/service.yml for localhost RUNNING HANDLER [redhat.eap.eap_systemd : Check arguments] ********************* ok: [localhost] RUNNING HANDLER [redhat.eap.eap_systemd : Set instance eap state to restarted] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=127 changed=8 unreachable=0 failed=0 skipped=34 rescued=0 ignored=0 As illustrated in the output above, the YAML definition is now enabled and the configuration of the JBoss EAP running on the target host has been updated. Validate the JMS Queue Deployment As always, we are going to be thorough and verify that the playbook execution has, indeed, properly set up a JMS queue. To do so, we can simply use the JBoss CLI provided with JBoss EAP to confirm: $ /opt/jboss_eap/jboss-eap-7.4/bin/jboss-cli.sh --connect --command="/subsystem=messaging-activemq/server=default/jms-queue=MyQueue:read-resource" { "outcome" => "success", "result" => { "durable" => true, "entries" => ["queues/MyQueue"], "legacy-entries" => undefined, "selector" => undefined } } The output, as shown above, confirms that the server configuration has indeed been updated and that a brand new JMS queue is now available. Since this verification is fairly easy to automate, we will also add it to our playbook. The Ansible collection for JBoss EAP comes with a handy wrapper allowing for the execution of the JBoss CLI within a playbook. 
So, all that is needed is the inclusion of the task and the desired command, as shown below: post_tasks: - name: "Check that Queue {{ queue.name }} is available." ansible.builtin.include_role: name: eap_utils tasks_from: jboss_cli.yml vars: jboss_home: "{{ eap_home }}" jboss_cli_query: "/subsystem=messaging-activemq/server=default/jms-queue={{ queue.name }}:read-resource" Conclusion Thanks to the Ansible collection for JBoss EAP, we have a minimalistic playbook, spared all the heavy lifting of managing the Java application server that fulfills the role of MOM (message-oriented middleware). All the configuration required by the automation concerns only the use case we tried to implement, not the inner workings of the solution (JBoss EAP). The resulting playbook is safely repeatable and can be used to install the software on any number of target systems. Using the collection for JBoss EAP also makes it easy to keep the deployment up to date.
Infrastructure as Code (IaC), as the name implies, is a practice that consists of defining infrastructure elements with code. This is opposed to doing it through a GUI (Graphical User Interface) like, for example, the AWS Console. The idea is that, in order to be deterministic and repeatable, the cloud infrastructure must be captured in an abstract description, based on models expressed in programming languages, to allow the automation of operations that would otherwise be performed manually. AWS makes several IaC tools available, as follows: CloudFormation: A provisioning tool able to create and manage cloud resources, based on templates expressed in JSON or YAML notation AWS Amplify: An open-source framework that provides developers with anything they need to deliver applications connecting AWS infrastructure elements, together with web and mobile components AWS SAM (Serverless Application Model): A tool that facilitates the integration of AWS Lambda functions with services like API Gateway, REST API, AWS SNS/SQS, DynamoDB, etc. AWS SDK (Software Development Kit): An API that provides management support for all AWS services using programming languages like Java, Python, TypeScript, and others AWS CDK (Cloud Development Kit): This is another API like the SDK, but richer, allowing not only the management of AWS services but also the ability to programmatically create, modify, and remove CloudFormation stacks containing infrastructure elements. It supports many programming languages, including but not limited to Java, Python, TypeScript, etc. Other non-Amazon IaC tools exist, like Pulumi and Terraform, and they provide very interesting multi-cloud support, including but not limited to AWS. For example, exactly like AWS CDK, Pulumi lets you define cloud infrastructure using common programming languages and, like CloudFormation, Terraform uses a dedicated declarative notation, called HCL (HashiCorp Configuration Language). This post is the first part of a series that aims to examine the CDK in depth as a high-level, object-oriented abstraction used to define cloud infrastructure by leveraging the power of programming languages. Introduction to AWS CDK In AWS's own definition, the CDK is an open-source software development framework that defines AWS cloud resources using common programming languages. Here, we'll be using Java. It's interesting to observe from the beginning that, as opposed to other IaC tools like CloudFormation or Terraform, the CDK isn't defined as being just an infrastructure provisioning framework. As a matter of fact, in AWS's meaning of the term, the CDK is more than that: an extremely versatile IaC framework that unleashes the power of programming languages and compilers to manage highly complex AWS cloud infrastructure with code that is, compared to HCL or any other JSON/YAML-based notation, much more readable and extensible. As opposed to these other IaC tools, with the CDK one can loop, map, reference, write conditions, and use helper functions; in a word, one can take full advantage of the power of a programming language. But perhaps the most important advantage of the CDK is its Domain Specific Language (DSL)-like style, thanks to the extensive implementation of the builder design pattern, which allows the developer to easily interact with AWS services without having to learn convoluted APIs and other cloud provisioning syntaxes. The short, hedged sketch below illustrates this style.
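As an illustration only (none of this is taken from the article's code), here is a minimal Java sketch of that builder-based style: an ordinary loop that declares several S3 buckets inside a stack. The class name, environment list, and bucket names are invented for the example, and real bucket names would have to be globally unique.
Java
import java.util.List;

import software.amazon.awscdk.App;
import software.amazon.awscdk.RemovalPolicy;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.services.s3.Bucket;
import software.constructs.Construct;

// Hypothetical stack showing the DSL-like builder API combined with plain
// Java control flow: one loop declares one bucket per environment name.
public class SampleBucketsStack extends Stack {

    public SampleBucketsStack(final Construct scope, final String id) {
        super(scope, id);
        for (String env : List.of("dev", "test", "prod")) {
            Bucket.Builder.create(this, "sample-bucket-" + env)
                .bucketName("sample-bucket-" + env) // bucket names must be globally unique
                .removalPolicy(RemovalPolicy.DESTROY)
                .build();
        }
    }

    public static void main(final String[] args) {
        App app = new App();
        new SampleBucketsStack(app, "SampleBucketsStack");
        app.synth(); // emit the CloudFormation template for the stack
    }
}
Nothing in this sketch is specific to S3: the same loop could just as well declare queues, functions, or any other group of constructs, which is precisely what a static JSON or YAML template cannot express without duplication.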
Additionally, it makes possible powerful management and customization of reusable components, security groups, certificates, load balancers, VPCs (Virtual Private Cloud), and others. The CDK is based on the concept of the Construct as its basic building block. This is a powerful notion that allows us to abstract away details of common cloud infrastructure patterns. A construct corresponds to one or more synthesized resources, which could be a small CloudFormation stack containing just an S3 bucket, or a large one containing a set of EC2 machines with the associated AWS Systems Manager Parameter Store configuration, security groups, certificates, and load balancers. It may be initialized and reused as many times as required. The Stack is a logical group of Construct objects. It can be viewed as a chart of the components to be deployed. It produces a declarative CloudFormation template, a Terraform configuration, or a Kubernetes manifest file. Last but not least, the App is a CDK concept that corresponds to a tree of Construct objects. There is a root App, which may contain one or more Stack objects, containing in turn one or more Construct objects, which might encompass other Construct objects, etc. The figure below depicts this structure. There are several examples here accompanying this post and illustrating it. They go from the simplest ones, creating a basic infrastructure, to the most complex ones, dealing with multi-region database clusters and bastion hosts. Let's look at some of them. A CDK Starter Let's begin with a starter project and build a CDK application that creates a simple stack containing only an S3 bucket. Installing the CDK is straightforward, as explained here. Once the CDK is installed and bootstrapped according to the above document, you may use its scaffolding functions in order to quickly create a project skeleton. Run the following command: Shell $ cdk init app --language java A bunch of text will be displayed while the CDK scaffolder generates your Maven project and, once finished, you may examine its structure as shown below: $ tree -I target . ├── cdk.json ├── pom.xml ├── README.md └── src ├── main │ └── java │ └── com │ └── myorg │ ├── TestApp.java │ └── TestStack.java └── test └── java └── com └── myorg └── TestTest.java 9 directories, 6 files This is your project skeleton created by the CDK scaffold. As you can see, there are a couple of Java classes, as well as a test one. They aren't very interesting, and you can remove them right away, together with the package com.myorg, which probably won't fit your naming convention. But the real advantage of using the CDK scaffolding function is the generation of the pom.xml and cdk.json files. The first one drives your application build process and defines the required dependencies and plugins. Open it and you'll see: XML ... <dependency> <groupId>software.amazon.awscdk</groupId> <artifactId>aws-cdk-lib</artifactId> </dependency> ... <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <configuration> <mainClass>fr.simplex_software.aws.iac.cdk.starter.CdkStarterApp</mainClass> </configuration> </plugin> ... In order to develop CDK applications, you need the aws-cdk-lib Maven artifact. This is the CDK library containing all the required resources. The exec-maven-plugin is also required in order to run your application, once built and deployed. If you look in the cdk.json file that the cdk init command has generated for you, you'll see this: JSON ... "app": "mvn -e -q compile exec:java" ...
This is the command that the CDK will use in order to build your application. Of course, you don't have to use the scaffolding function if you don't want to and, if you prefer to start from scratch, you can provide your own pom.xml since, after all, as a developer, you must be used to it. However, when it comes to the cdk.json file, you'd better let the CDK generate it. So, fine: you just got your project skeleton, and now you need to customize it to adapt it to your needs. Have a look at the cdk-starter project in the code repository. As you can see, there are two Java classes, CdkStarterApp and CdkStarterStack. The first one creates a CDK application by instantiating the software.amazon.awscdk.App class, which abstracts the most basic CDK concept: the application. It's a recommended practice to tag the application, once instantiated, so that different automated tools are able to manipulate it for different purposes. For example, we can imagine an automated tool that removes all the test applications and, to do that, scans them looking for the tag environment:development. The goal of an application is to define at least one stack, and this is what our application does by instantiating the CdkStarterStack class. This class is a stack, as it extends the software.amazon.awscdk.Stack one. And it's in its constructor that we create an S3 bucket, as shown by the code snippet below: Java Bucket bucket = Bucket.Builder.create(this, "my-bucket-id") .bucketName("my-bucket-" + System.getenv("CDK_DEFAULT_ACCOUNT")) .autoDeleteObjects(true).removalPolicy(RemovalPolicy.DESTROY).build(); Here we create an S3 bucket with the ID my-bucket-id and the name my-bucket, to which we've appended the current user's default account ID. The reason is that S3 bucket names must be globally unique. As you can see, the class software.amazon.awscdk.services.s3.Bucket, used here to abstract the Amazon Simple Storage Service, implements the builder design pattern, which allows us to define, in a DSL-like manner, properties like the bucket name, the auto-delete behavior, the removal policy, etc. So this is our first simple CDK application. The following line in the CdkStarterApp class: Java app.synth(); ... is absolutely essential because it produces ("synthesizes," in the CDK parlance) the associated AWS CloudFormation stack template. Once "synthesized," it may be deployed and used. So here is how: Shell $ git clone https://github.com/nicolasduminil/cdk.git $ cd cdk/cdk-starter $ mvn package $ cdk deploy --requireApproval=never A bunch of text will be displayed again and, after a while, if everything is okay, you should see a confirmation of your stack's successful deployment. Now, in order to check that everything worked as expected, you can list your deployed stacks as follows: Shell $ aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE It is important to filter the list of existing stacks by their current status (in this case, CREATE_COMPLETE) to avoid retrieving dozens of irrelevant entries. So, you should see something like: JSON { "StackSummaries": [ ... { "StackId": "arn:aws:cloudformation:eu-west-3:...:stack/CdkStarterStack/83ceb390-3232-11ef-960b-0aa19373e2a7", "StackName": "CdkStarterStack", "CreationTime": "2024-06-24T14:03:21.519000+00:00", "LastUpdatedTime": "2024-06-24T14:03:27.020000+00:00", "StackStatus": "CREATE_COMPLETE", "DriftInformation": { "StackDriftStatus": "NOT_CHECKED" } } ...
] } Now, you can get more detailed information about your specific stack: Shell $ aws cloudformation describe-stacks --stack-name CdkStarterStack The output will be very verbose, and we'll not reproduce it here, but you should see interesting information like: JSON ... "RoleARN": "arn:aws:iam::...:role/cdk-hnb659fds-cfn-exec-role-...-eu-west-3", "Tags": [ { "Key": "environment", "Value": "development" }, { "Key": "application", "Value": "CdkApiGatewayApp" }, { "Key": "project", "Value": "API Gateway with Quarkus" } ], ... And of course, you may check that your S3 bucket has been successfully created: Shell $ aws s3api list-buckets --query "Buckets[].Name" Here, using the option --query "Buckets[].Name", you filter the output so that only the bucket names are displayed, and you'll see: JSON [ ... "my-bucket-...", ... ] If you want to see some properties (for example, the associated tags): Shell $ aws s3api get-bucket-tagging --bucket my-bucket-... { "TagSet": [ { "Key": "aws:cloudformation:stack-name", "Value": "CdkStarterStack" }, { "Key": "environment", "Value": "development" }, { "Key": "application", "Value": "CdkStarterApp" }, { "Key": "project", "Value": "The CDK Starter projet" }, { "Key": "aws-cdk:auto-delete-objects", "Value": "true" } ] } Everything seems to be okay, and you may conclude that your first test with the CDK is successful. And since you have now deployed a stack with an S3 bucket, you should be able to use this bucket, for example, to upload files to it, download them, etc. You can do that by using the AWS CLI, as shown here. But if you want to do it with the CDK, you need to wait for the next episode. While waiting for that, don't forget to clean up your AWS workspace to avoid being billed! Shell $ cdk destroy --all aws s3 rm s3://my-bucket-... --recursive aws s3 rb s3://my-bucket-... Have fun and stay tuned!
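Before moving on to the next topic, here is a hedged recap sketch of what the CdkStarterApp class described above might look like, assembled from the snippets already shown: the package name comes from the pom.xml excerpt and the tag keys from the CLI output, while the (scope, id) constructor of CdkStarterStack and the exact tag values are assumptions, so the class in the actual repository may differ.
Java
package fr.simplex_software.aws.iac.cdk.starter;

import software.amazon.awscdk.App;
import software.amazon.awscdk.Tags;

// Sketch of the application class: create the App, tag it, instantiate the
// stack that declares the S3 bucket, and synthesize the CloudFormation template.
public class CdkStarterApp {

    public static void main(final String[] args) {
        App app = new App();
        // Tags let automated tooling locate and manage the application later on
        Tags.of(app).add("environment", "development");
        Tags.of(app).add("application", "CdkStarterApp");
        Tags.of(app).add("project", "The CDK Starter project");
        // Assumed (scope, id) constructor for the stack discussed earlier
        new CdkStarterStack(app, "CdkStarterStack");
        app.synth(); // produce the CloudFormation stack template
    }
}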
So, I’ve always thought about Heroku as just a place to run my code. They have a CLI. I can connect it to my GitHub repo, push my code to a Heroku remote, and bam…it’s deployed. No fuss. No mess. But I had always run my test suite…somewhere else: locally, or with CircleCI, or in GitHub Actions. How did I not know that Heroku has CI capabilities? Do you mean I can run my tests there? Where have I been for the last few years? So that’s why I didn’t know about Heroku CI… CI is pretty awesome. You can build, test, and integrate new code changes. You get fast feedback on those code changes so that you can identify and fix issues early. Ultimately, you deliver higher-quality software. By doing it in Heroku, I get my test suite running in an environment much closer to my staging and production deployments. If I piece together a pipeline, I can automate the progression from passing tests to a staging deployment and then promote that staged build to production. So, how do we get our application test suite up and running in Heroku CI? It will take you 5 steps: Write your tests Deploy your Heroku app Push your code to Heroku Create a Heroku Pipeline to use Heroku CI Run your tests with Heroku CI We’ll walk through these steps by testing a simple Python application. If you want to follow along, you can clone my GitHub repo. Our Python App: Is It Prime? We’ve built an API in Python that listens for GET requests on a single endpoint:/prime/{number}. It expects a number as a path parameter and then returns true or false based on whether that number is a prime number. Pretty simple. We have a modularized function in is_prime.py: Python def is_prime(num): if num <= 1: return False if num <= 3: return True if num % 2 == 0 or num % 3 == 0: return False i = 5 while i * i <= num: if num % i == 0 or num % (i + 2) == 0: return False i += 6 return True Then, our main.py file looks like this: Python from fastapi import FastAPI, HTTPException from is_prime import is_prime app = FastAPI() # Route to check if a number is a prime number @app.get("/prime/{number}") def check_if_prime(number: int): return is_prime(number) raise HTTPException(status_code=400, detail="Input invalid") if __name__ == "__main__": import uvicorn uvicorn.run(app, host="localhost", port=8000) That’s all there is to it. We can start our API locally (python main.py) and send some requests to try it out: Plain Text ~$ curl http://localhost:8000/prime/91 false ~$ curl http://localhost:8000/prime/97 true That looks pretty good. But we’d feel better with a unit test for the is_prime function. Let’s get to it. Step #1: Write Your Tests With pytest added to our Python dependencies, we’ll write a file called test_is_prime.py and put it in a subfolder called tests. We have a set of numbers that we’ll test to make sure our function determines correctly if they are prime or not. 
Here's our test file:

Python
from is_prime import is_prime

def test_1_is_not_prime():
    assert not is_prime(1)

def test_2_is_prime():
    assert is_prime(2)

def test_3_is_prime():
    assert is_prime(3)

def test_4_is_not_prime():
    assert not is_prime(4)

def test_5_is_prime():
    assert is_prime(5)

def test_991_is_prime():
    assert is_prime(991)

def test_993_is_not_prime():
    assert not is_prime(993)

def test_7873_is_prime():
    assert is_prime(7873)

def test_7802143_is_not_prime():
    assert not is_prime(7802143)

When we run pytest from the command line, here's what we see:

Plain Text
~/project$ pytest
=========================== test session starts ===========================
platform linux -- Python 3.8.10, pytest-8.0.2, pluggy-1.4.0
rootdir: /home/michael/project/tests
plugins: anyio-4.3.0
collected 9 items

test_is_prime.py .........                                          [100%]

============================ 9 passed in 0.02s ============================

Our tests pass! It looks like is_prime is doing what it's supposed to. (If you'd prefer a more compact, parametrized version of this suite, there's a short sketch at the end of this article.)

Step #2: Deploy Your Heroku App

It's time to wire up Heroku. Assuming you have a Heroku account and you've installed the CLI, creating your app is going to go pretty quickly.

Heroku will look in our project root folder for a file called requirements.txt, listing the Python dependencies our project has. This is what the file should look like:

Plain Text
fastapi==0.110.1
pydantic==2.7.0
uvicorn==0.29.0
pytest==8.0.2

Next, Heroku will look for a file called Procfile to determine how to start our Python application. Procfile should look like this:

Plain Text
web: uvicorn main:app --host=0.0.0.0 --port=${PORT}

With those files in place, let's create our app.

Plain Text
~/project$ heroku login
~/project$ heroku apps:create is-it-prime

That's it.

Step #3: Push Your Code to Heroku

Next, we push our project code to the git remote that the Heroku CLI set up when we created our app.

Plain Text
~/project$ git push heroku main
…
remote: -----> Launching...
remote:        Released v3
remote:        https://is-it-prime-2f2e4fe7adc1.herokuapp.com/ deployed to Heroku

So, that's done. Let's check our API.

Plain Text
$ curl https://is-it-prime-2f2e4fe7adc1.herokuapp.com/prime/91
false
$ curl https://is-it-prime-2f2e4fe7adc1.herokuapp.com/prime/7873
true
$ curl https://is-it-prime-2f2e4fe7adc1.herokuapp.com/prime/7802143
false

It works!

Step #4: Create a Heroku Pipeline To Use Heroku CI

Now, we want to create a Heroku Pipeline with CI enabled so that we can run our tests. We create the pipeline (called is-it-prime-pipeline), adding the app we created above to the staging phase of the pipeline.

Plain Text
$ heroku pipelines:create \
    --app=is-it-prime \
    --stage=staging \
    is-it-prime-pipeline
Creating is-it-prime-pipeline pipeline... done
Adding ⬢ is-it-prime to is-it-prime-pipeline pipeline as staging... done

With our pipeline created, we want to connect it to a GitHub repo so that our actions on the repo (such as new pull requests or merges) can trigger events in our pipeline (like automatically running the test suite).

Plain Text
$ heroku pipelines:connect is-it-prime-pipeline -r capnMB/heroku-ci-demo
Linking to repo... done

As you can see, I'm connecting my pipeline to my GitHub repo. When something like a pull request or a merge occurs in my repo, it will trigger Heroku CI to run the test suite.

Next, we need to configure our test environment in an app.json manifest.
Our file contents should look like this:

JSON
{
  "environments": {
    "test": {
      "formation": {
        "test": {
          "quantity": 1,
          "size": "standard-1x"
        }
      },
      "scripts": {
        "test": "pytest"
      }
    }
  }
}

This manifest contains the script we'll use to run our test suite. It also specifies the dyno size (standard-1x) we want to use for our test environment. We commit this file to our repo.

Finally, in the web UI for Heroku, we navigate to the Tests page of our pipeline, and we click the Enable Heroku CI button. After enabling Heroku CI, here's what we see:

Step #5: Run Your Tests With Heroku CI

Just to demonstrate it, we can manually trigger a run of our test suite using the CLI:

Plain Text
$ heroku ci:run --pipeline is-it-prime-pipeline
…
-----> Running test command `pytest`...
========================= test session starts ============================
platform linux -- Python 3.12.3, pytest-8.0.2, pluggy-1.4.0
rootdir: /app
plugins: anyio-4.3.0
collected 9 items

tests/test_is_prime.py .........                                     [100%]

============================ 9 passed in 0.03s ============================

How does the test run look in our browser? We navigate to our pipeline and click Tests. There, we see our first test run in the left-side nav. A closer inspection of our tests shows this:

Awesome. Now, let's push some new code to a branch in our repo and watch the tests run! We create a new branch (called new-test), adding another test case to test_is_prime.py. As soon as we push our branch to GitHub, here's what we see: Heroku CI detects the pushed code and automates a new run of the test suite. Not too long after, we see the successful results:

Heroku CI for the Win

If you're already using Heroku for your production environment — and you're ready to go all in with DevOps — then using pipelines and Heroku CI may be the way to go. Rather than using different tools and platforms for building, testing, reviewing, staging, and releasing to production… I can consolidate all these pieces in a single Heroku Pipeline and get automated testing with every push to my repo.
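As promised back in Step #1, here's a more compact way to express the same nine checks using pytest's parametrize marker. This is only a sketch and is not part of the repo the article walks through; it assumes the same tests/ layout, where is_prime is importable from the test module.

Python
# Sketch only: a parametrized equivalent of the nine test functions from Step #1.
import pytest
from is_prime import is_prime

@pytest.mark.parametrize(
    "number, expected",
    [
        (1, False),
        (2, True),
        (3, True),
        (4, False),
        (5, True),
        (991, True),
        (993, False),
        (7873, True),
        (7802143, False),
    ],
)
def test_is_prime(number, expected):
    assert is_prime(number) == expected

pytest still reports nine collected items (one per parameter set), so the Heroku CI output looks essentially the same.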
Feature flags offer an excellent way to quickly turn product changes off and on by letting you remove and add code in the software quickly. Marketers or product managers can choose the time and moment to make a feature or function live to win that aha moment. Feature flags are helpful to various departments, including marketing, product, testing, CROs, and development.

The number of feature flags can rise quickly as the team realizes their usefulness and begins to utilize them. To avoid the mismanagement this may create, you need a feature flag platform: a comprehensive space where you can place all your feature flags and manage, modify, and delete them. Finding a tool that fits the exact needs and requirements of developers, marketers, and product managers can be challenging. But don't worry; we have done the heavy lifting for you. In this article, we have curated a list of 10 feature flag tools and their best features. We've also covered the common functionalities you should look for when selecting tools for your team.

What Are Feature Flag Tools?

A feature flag tool, also known as a feature management or feature toggle tool, is a software platform designed to facilitate the implementation, management, and control of feature flags in software applications. These tools provide a centralized interface or API that allows developers and teams to easily create, deploy, and monitor feature flags without directly modifying the underlying codebase.

To understand feature flag tools, let's summarize what feature flags are first. Feature flags, also known as feature toggles or feature switches, are a software development technique used to enable or disable certain features or functionalities in an application or system. They allow developers to control the release and availability of specific features to different user segments or environments without the need for code deployments or separate branches.

Do Feature Flag Platforms Help?

Yes. A feature flag platform comes with a range of features, including centralized flag management, an easy-to-use interface, user segmentation, traffic allocation, and integration with other tools to simplify the process of using feature flags in software development. A feature flag platform enables you to (a bare-bones sketch of the first few of these appears just before the tool list below):

Gradually roll out new features: Release features to a small percentage of users and gradually increase the rollout for feedback and risk mitigation.
Perform A/B testing: Run experiments exposing different feature variations to user segments to determine optimal performance.
Enable feature toggling: Dynamically enable or disable features without code changes for flexible control over feature availability.
Roll back problematic features: Quickly deactivate features causing issues and revert to a stable state to maintain system stability.
Support trunk-based development: Merge code to the main branch without releasing it to production.
Personalize user experiences: Customize user experiences based on attributes, roles, or preferences to enhance satisfaction and engagement.

For a non-technical person, doing all of this with a CLI and code could be confusing and challenging. Plus, as you keep creating flags, you will end up with many of them, which could lead to mismanagement. Having a feature flag tool helps you there.

Popular Feature Flag Tools

InfraCloud's DevOps, platform engineering, and software development teams use feature flags extensively. So, we asked them which tools they preferred and why. We uncovered many feature flag tools, both open-source and commercial.
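Before the list, here is a bare-bones illustration of the core mechanics these platforms manage for you: a boolean toggle plus a deterministic percentage rollout. This is a hand-rolled sketch for intuition only; it is not how any specific tool on the list implements flags, and the flag names and rollout numbers are invented.

Python
# Sketch only: a boolean toggle plus a deterministic percentage rollout,
# the two mechanics most feature flag platforms build on. Flag names and
# percentages are made up for illustration.
import hashlib

FLAGS = {
    "new-checkout": {"enabled": True, "rollout_percent": 25},
    "dark-mode": {"enabled": False, "rollout_percent": 100},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Hash the flag/user pair into a stable bucket from 0 to 99 so each user gets
    # a consistent answer; the feature is on for buckets below the rollout percentage.
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < flag["rollout_percent"]

print(is_enabled("new-checkout", "user-42"))  # stable per-user True/False at 25% rollout
print(is_enabled("dark-mode", "user-42"))     # always False: the flag is toggled off

A real platform layers targeting rules, audit trails, SDKs, and a management UI on top of this core idea, which is exactly the gap the tools below fill.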
The 'best' depends on the project requirements and engineers' preferences. However, there are still basic features that feature flag software must have. Here, we have shortlisted feature flag software covering fundamental features and advanced capabilities for specific use cases. For now, let's see the best feature flag tools:

FeatureHub
Unleash
Flipt
GrowthBook
Flagsmith
Flagd
LaunchDarkly
Split
ConfigCat
CloudBees

Let's discuss each of them in detail.

1. FeatureHub

FeatureHub is a cloud-native feature flag platform that allows you to run experiments across services in your environment with a user-friendly interface — the FeatureHub Admin Console. It comes with a variety of SDKs so you can connect FeatureHub with your software. Whether you are a tester, developer, or marketer, you can control all the feature flags and their visibility in any environment. If you are looking for a tool that focuses more on feature and configuration management, FeatureHub may be the better choice. Its microservices architecture allows for greater scalability and extensibility, and it provides advanced features such as versioning, templates, and the ability to roll back changes.

Features of FeatureHub:

Open-source version available
SaaS offering in beta
Google Analytics/RBAC/A/B testing supported
SDKs include Python, Ruby, and Go
OpenFeature support in progress
SSO support
Community support and documentation
Dedicated support for SaaS users

2. Unleash

With 10M+ Docker downloads, Unleash is a popular and widely used open-source feature flag platform. As it is available as a Docker image, you can scale it horizontally by deploying it on Kubernetes. The platform's intuitive interface and robust API make it accessible and flexible for developers, testers, and product managers alike. However, the open-source version lacks several critical functions, such as SSO, RBAC, network traffic overview, and notifications; you can integrate these features using other open-source solutions. If you are looking for a tool that focuses more on feature flagging and targeting, then Unleash might be the better choice for you. Unleash provides more advanced capabilities for user targeting, including the ability to target users based on custom attributes and to use percentage rollouts. Additionally, it has a wide range of integrations with popular development tools, including Datadog, Quarkus, Jira, and Vue.

Features of Unleash:

Open-source version available
A/B testing/RBAC/targeted releases/canary releases
SDK support for Go, Java, Node.js, PHP, Python, etc.
OpenFeature supported
Community support and documentation
Premium support for paid users
Observability with Prometheus

3. Flipt

Flipt is a 100% open-source, self-hosted feature flag application that helps product teams manage all their features smoothly from a dashboard. You can also integrate Flipt with your GitOps workflow and manage feature flags as code. With Flipt, you get all the necessary features, including flag management and segment-wise rollout. The platform is built in Go and is optimized for performance. The project is under active development with a public roadmap.

Features of Flipt:

Open-source version only; no SaaS
Support for REST and gRPC APIs
Native client SDKs available in Go, Ruby, Java, Python, etc.
OpenFeature supported
SSO with OIDC and static tokens
Observability out of the box with Prometheus and OpenTelemetry
4. GrowthBook

GrowthBook is primarily a product testing platform for checking users' responses to features. It is relatively new, and the SaaS version is much more affordable than other SaaS-based feature flag platforms. SDKs from GrowthBook are available in all major languages and are designed not to interfere with feature flag rendering. You can easily create experiments using GrowthBook's drag-and-drop interface. Integrations with popular analytics tools, such as Google Analytics and Mixpanel, make tracking experiments easier for better results. If you run many A/B experiments and do not want to share your data with third-party apps, GrowthBook could be an amazing option, as it pulls the data directly from the source.

Features of GrowthBook:

Open-source version available
SaaS version available
A/B testing/unlimited projects
SDK support for React, PHP, Ruby, Python, Go, etc.
Observability via audit log
Community support and documentation

5. Flagsmith

Flagsmith is another open-source solution for creating and managing feature flags easily across web, mobile, and server-side applications. You can wrap a section of code with a flag and then use the Flagsmith dashboard to toggle that feature on or off for different environments, users, or user segments. Flagsmith offers segments, A/B testing, and analytics engine integrations out of the box. However, if you want real-time updates on the front end, you have to build your own real-time infrastructure. One of the best parts of Flagsmith is Remote Config, which lets you change the application in real time, saving you from the approval process for new features.

Features of Flagsmith:

Open-source version available
SaaS product available
A/B testing/RBAC/integrations with tools
SDK support for Ruby, .NET, PHP, Go, Rust, etc.
OpenFeature supported
HelpDesk for community support
Docker/Kubernetes/OpenShift/on-premise (paid)

6. Flagd

Flagd is a unique feature flag platform. It does not have a UI, management console, or persistence layer and is completely configurable via a POSIX-style CLI. Because of this, Flagd is extremely flexible and can fit into various infrastructures and run on various architectures. It supports multiple feature flag sources, called syncs (such as file, HTTP, gRPC, and Kubernetes custom resources), and can merge flags from those sources.

Features of Flagd:

Only an open-source version is available
Progressive rollouts
Works with the OpenFeature SDK
Technical documentation
Lightweight and flexible

7. LaunchDarkly

LaunchDarkly is a good entry point for premium feature management tools, as it is comparatively inexpensive but offers many useful features. It enables you to easily create, manage, and organize your feature flags at scale. You can also schedule approved feature flags to build a custom workflow. One notable LaunchDarkly feature is Prerequisites, which lets you create feature flag hierarchies in which triggering one flag unlocks other flags that control the user experience. This way, you can execute multiple feature flags with one toggle. With multiple integration options available, including an API, SDK support, and Git tools, you can automate various tasks in LaunchDarkly. If you are looking for paid software with quality support and a comprehensive set of features, LaunchDarkly could be your option.
Features of LaunchDarkly:

No open-source version available
SaaS product only
A/B testing/multiple-variant testing
SDK support for Go, Gatsby, Flutter, Java, PHP, etc.
OpenFeature supported
Academy, blogs, tutorials, guides, and documentation
Live chat support

8. Split

Split brings an impressive set of features and a cost-effective solution for feature flag management. It connects each feature with engineering and customer data and sends alerts when a new feature misbehaves. With Split, you can easily define percentage rollouts to measure the impact of features. There is no community support, but the documentation is detailed and well organized. Once you get past the slight learning curve, you can easily organize all your feature flags at scale with Split.

Features of Split:

No open-source version
SaaS-based platform
A/B testing/multi-variant testing/dimension analysis
SDK support for Go, Python, Java, PHP, etc.
OpenFeature supported
Blogs, guides, and documentation
No on-prem solution
Free plan available

9. ConfigCat

ConfigCat enables product teams to run experiments (without involving developer resources) to measure user interactions and release new features to their products. You can turn features on and off via a user-friendly dashboard even after your code is deployed. ConfigCat can be integrated with many tools and services, including Datadog, Slack, Zapier, and Trello. It provides open-source SDKs to support easy integration with your mobile or desktop application, website, or any backend system. One fantastic feature of this software is Zombie Flags, which identifies flags that are no longer functional or have not been used for a long time and should be removed.

Features of ConfigCat:

No open-source version available
SaaS product
Percentage rollouts, A/B testing/variations
SDK support for Go, Java, Python, PHP, Ruby, etc.
OpenFeature supported
Blogs, documentation, and Slack community support

10. CloudBees

CloudBees is not a dedicated feature flag management platform, but it allows you to manage feature flag permissions and automate cleanup easily. Beyond the dashboard, CloudBees also offers bidirectional configuration as code with GitHub so you can edit flags in your preferred environments. The dashboard's sleek and intuitive design makes it easy for developers and DevOps teams to use and leverage its functionalities. However, the software has so many features that learning all of them can be a slight challenge.

Features of CloudBees:

No open-source version available
SaaS product
A/B testing/multiple-variant testing
SDK support for Java, Python, C++, Ruby, etc.
OpenFeature supported
Blogs, video tutorials, and documentation

Quick Comparison of the Feature Flag Tools

Open the sheet for a comparison of the feature flag tools at a glance.

What Should You Look for in a Feature Flag Tool?

There are many feature flag tools, but these are the features you must look for when picking a platform.

1. Community Support

Proper support is crucial to overcoming the initial onboarding challenges, whether for an open-source or proprietary product. Some open-source projects have an extensive community, documentation, blogs, and user-generated content to help and educate the next generation of users. The creators, maintainers, and experts behind an OSS product often offer commercial support as well. For example, at InfraCloud, we offer Linkerd support, Prometheus support, and Istio support because our engineers are proficient in these technologies.
For closed-source products, you can get video tutorials, blogs, documentation, and live chat, and, most importantly, you can raise a ticket and solve your problem quickly. Not having a proper support channel can leave you stranded during an emergency. So, analyze your requirements to see what kind of support your team needs: whether they can manage with documentation alone or need more hands-on help.

2. Integration

It is critical for a successful feature flag process that the programming languages used to develop your products are well supported by the feature flag platform. If a language is not supported, enough resources should be available to connect your product and the feature flag platform. Going with platforms that support OpenFeature could be a good solution. OpenFeature provides a vendor-agnostic, community-driven API for feature flagging that works with your favorite feature flag management tool (a tiny sketch of what that looks like in code appears at the end of this section). You would not have to change much application code if you decide to switch tools later. In the list, we mentioned the feature flag platforms that support the most common and popular development languages and are OpenFeature friendly. When selecting a feature flag platform, don't forget to analyze your tech stack to check whether the platform is compatible. Otherwise, a major chunk of time might go into developing integrations between the technology you use and the feature flag platform.

3. 3rd Party Apps

What if you could view and monitor feature flags and approval requests from your team's Slack workspace, or use Terraform to configure and control the feature flags? All this and more is possible if the feature flag platform offers integrations. You could build integrations yourself by wrangling scripts into an automation process that works on triggers, but here we picked software with native integration abilities to further streamline and automate feature flag operations.

4. Easy-To-Use UI

Feature flags are not always used by developers. Often, product marketers like to have control over the lever that launches features to the public. In case of any issue, marketers and product managers can quickly kill a feature that makes the product unstable, right from the platform, without waiting for a developer. So, having an easy-to-use user interface is a key characteristic when selecting a feature flag tool. Some open-source feature flag platforms have a rudimentary design covering the basics, and some are fully fledged platforms with incredible UX and tutorials at every corner. In the list, we covered software that has a usable UI.

5. Testing and Reporting

New features can be tested using feature flags. Sophisticated feature flag tools come with various testing methods, including A/B/n testing and blue-green deployment strategies. Functions like setting up variable and control factors, allocating traffic, and drawing insights from the results are extremely helpful for delivering a product feature confidently. With feature flag tools, you can segment users and roll out features accordingly to test initial responses. The software also comes with dashboards to see the results of the experiments. You can view all the requests and how users spend time using the software with newly released features. These tools include testing and reporting features, making it easy to run experiments and make data-backed decisions.
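As mentioned under Integration, OpenFeature gives you a vendor-neutral API so application code doesn't have to change when you swap flag providers. Here's a tiny sketch of what that API shape looks like with the OpenFeature Python SDK; the package name and flag key are assumptions on my part, and without a vendor provider registered, the default no-op provider simply returns the fallback value.

Python
# Sketch of the vendor-agnostic OpenFeature API shape (Python SDK, assumed to be
# installed as the "openfeature-sdk" package). The flag key is made up. With no
# provider registered, the built-in no-op provider returns the default value;
# registering a vendor's provider via api.set_provider(...) is what switches
# backends (LaunchDarkly, Flagsmith, flagd, etc.) without touching this code.
from openfeature import api

client = api.get_client()
enabled = client.get_boolean_value("new-checkout", False)
print(enabled)  # False until a real provider and flag definition are wired in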
FAQs Related to Feature Flag Tools

What Are the Different Types of Feature Flags?

There are several types of feature flags commonly used in software development:

Boolean flags: These are the simplest feature flags, based on a true/false value. They enable or disable a feature globally across all users or environments.
Percentage rollouts: Also known as "gradual rollouts" or "canary releases," these flags allow features to be gradually released to a percentage of users. For example, a feature can be enabled for 10% of users initially, then gradually increased to 25%, 50%, and so on.
User segmentation flags: These flags enable features for specific user segments based on predefined criteria such as user attributes, roles, or subscription levels. They allow targeted feature releases to specific groups of users.
Feature toggle flags: These flags provide more granular control over the behavior of a feature. They allow different variations or configurations of a feature to be activated or deactivated dynamically.

Who Uses Feature Flags?

Software development teams, including developers, product managers, and DevOps engineers, widely use feature flags. They are particularly beneficial in agile and continuous delivery environments, where iterative development, experimentation, and frequent releases are essential.

What Are Feature Flags' Limitations?

While feature flags offer numerous advantages, they also have some limitations to consider:

Increased complexity: Introducing feature flags adds complexity to the codebase and requires careful management to avoid technical debt and maintainability issues.
Performance overhead: Feature flags introduce conditional checks that can impact performance, especially when numerous flags are evaluated at runtime.
Flag proliferation: Over time, the number of feature flags may grow, leading to potential confusion, maintenance challenges, and increased technical debt.
Testing effort: Feature flags require additional testing to ensure the functionality of different flag combinations and variations.

What Is the Difference Between a Feature Gate and a Feature Flag?

The terms "feature gate" and "feature flag" are often used interchangeably, but they can have slightly different connotations. A feature gate typically refers to a more granular control mechanism that checks whether a specific user has access to a particular feature, usually based on permissions or user roles. A feature flag, on the other hand, is a broader concept encompassing various flags used to control feature availability, behavior, or rollout.

What Is a Feature Flag Rollback?

A feature flag rollback refers to deactivating a feature flag and reverting the system's behavior to a previous state. It is typically used when a feature causes unexpected issues, performance problems, or undesirable outcomes. By rolling back a feature flag, the system can return to a stable state until the underlying issues are addressed.

What Is Feature Flag Hygiene?

Feature flag hygiene refers to best practices and guidelines for managing feature flags effectively. It involves maintaining a clean and manageable set of flags by periodically reviewing and removing obsolete or unused flags.

Final Words

Finding the best feature flag platform isn't easy, especially when you have many great options. While all these tools are great, you must factor in your requirements to find the best fit. We hope this list helps you find the best platform to manage your feature flags. This article was developed with contributions from Faizan, Sagar, Bhavin, and Sudhanshu.
You can reach out to any of them if you have questions.