The Testing, Tools, and Frameworks Zone encapsulates one of the final stages of the SDLC as it ensures that your application and/or environment is ready for deployment. From walking you through the tools and frameworks tailored to your specific development needs to leveraging testing practices to evaluate and verify that your product or application does what it is required to do, this Zone covers everything you need to set yourself up for success.
In the realm of front-end development, ensuring that your application is thoroughly tested and maintains high quality is paramount. One strategy that can significantly enhance both the development and testing processes is the use of the data-testid attribute. This attribute, designed specifically for testing purposes, offers numerous advantages, particularly from a QA perspective.

Benefits of Using data-testid

Stable and Reliable Locators

Benefit: One of the primary challenges in automated testing is ensuring that test scripts remain stable as the UI evolves. Typically, selectors like classes and IDs are used to locate elements in the DOM, but these can change frequently as the design or structure of the UI is updated. data-testid provides a stable and reliable way to locate elements, as it is intended solely for testing purposes and is less likely to be altered.

Impact on automation: Automated tests become more resilient and less prone to failure due to changes in the UI. This reduces the maintenance burden on the QA team, allowing them to focus on expanding test coverage rather than constantly updating selectors.

Clear Separation of Concerns

Benefit: data-testid ensures that testing selectors are decoupled from the visual and functional aspects of the UI. Unlike classes and IDs, which are tied to styling and functionality, data-testid is dedicated solely to testing, meaning that changes to the UI's look or behavior won't impact the test scripts.

Impact on automation: This separation promotes a cleaner codebase and prevents tests from becoming fragile due to design changes. Developers can refactor UI components without worrying about breaking the test automation, as long as the data-testid values remain unchanged.

Encourages a Test-First Approach

Benefit: The use of data-testid encourages developers to think about testability from the outset. By including data-testid attributes during development, teams can ensure that their UI components are easily testable and that the testing process is considered throughout the development lifecycle.

Impact on automation: This test-first approach can lead to more robust and comprehensive test coverage. When testability is a priority from the beginning, automated tests can be created more quickly and with greater confidence in their effectiveness.

How Can I Implement This Approach?

I've created a separate step-by-step guide to implement this approach, "Mastering Test Automation: How data-testid Can Revolutionize UI Testing."

Impact on Automation Development

Simplified Locator Strategy

By using data-testid attributes, test automation engineers can adopt a simplified and consistent locator strategy across the entire test suite. This reduces the complexity of writing and maintaining test scripts and minimizes the time spent dealing with flaky tests due to changing locators.

Reduced Test Maintenance

The stability provided by data-testid attributes means that automated tests require less frequent updates, even as the UI evolves. This leads to lower maintenance costs and allows the QA team to invest their time in creating new tests or enhancing existing ones.

Improved Collaboration Between Developers and QA

By using data-testid, developers and QA engineers can work more closely together. Developers can ensure that the elements they create are easily identifiable in tests, while QA engineers can provide feedback on which elements need data-testid attributes. This collaboration fosters a more cohesive development process and helps ensure that the application is thoroughly tested.
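As a quick sketch of what this locator strategy looks like in practice, here is an example in Python with Selenium; the page URL, element, and test id are illustrative, not taken from the original guide:

Python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")  # illustrative page

# The selector targets the dedicated test hook rather than styling classes or
# layout, so it survives redesigns as long as the data-testid value is stable.
submit_button = driver.find_element(By.CSS_SELECTOR, "[data-testid='submit-order']")
submit_button.click()

driver.quit()

Playwright, for instance, ships a getByTestId locator that targets data-testid by default; see the references at the end of this article.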
Scalability of the Automation Suite

Consistent use of data-testid makes the automation suite more scalable. As the application grows, the test suite can expand with it, confident that the locators will remain stable and that tests will continue to provide reliable results.

Impact on Overall QA Process and Product Delivery

Implementing data-testid attributes in front-end development has a profound impact on the overall QA process and product delivery:

Increased Test Reliability

Automated tests that rely on data-testid attributes are less likely to break, leading to more reliable test results. This reliability ensures that the QA team can quickly identify and address issues, reducing the likelihood of bugs making it into production.

Faster Development and Testing Cycles

With data-testid, both development and testing processes become more efficient. Developers can refactor code without fear of breaking tests, and QA engineers can write tests more quickly and with greater confidence. This efficiency leads to faster development and testing cycles, allowing the team to deliver high-quality products more rapidly.

Reduced Technical Debt

The stability and maintainability provided by data-testid attributes help reduce technical debt related to testing. With less time spent on test maintenance and more time available for enhancing test coverage, the QA team can focus on preventing bugs rather than constantly fixing them.

Better Stakeholder Confidence

Reliable, consistent test results build confidence among stakeholders, including product managers, developers, and end users. Knowing that critical functionalities are thoroughly tested before release can provide peace of mind and support smoother product rollouts.

Potential for Misuse

While data-testid is a powerful tool, it should be used judiciously. Overuse of data-testid attributes on every element can clutter the HTML and lead to unnecessary complexity. It's important to apply data-testid selectively, focusing on elements that are critical for testing, to avoid introducing unnecessary overhead.

Conclusion

Using data-testid attributes in front-end development is highly beneficial from a QA standpoint. It provides reliable locators, promotes best practices, and improves collaboration between development and QA teams. The impact on automation development is overwhelmingly positive, resulting in more robust, maintainable, and scalable automated test suites. However, it's essential to use this approach judiciously to avoid unnecessary overhead.

References

- Playwright: locate by test id
- Cypress: locate by test id
- Selenium: locate by test id
Unit testing is the first line of defense against bugs. This level of protection is essential, as it lays the foundation for the testing processes that follow: integration tests, acceptance testing, and finally manual testing, including exploratory testing. In this article, I will shed some light on what differentiates unit testing from other methods and give examples of when we can or cannot do without it. We'll also touch upon automation testing, which plays an important role in ensuring code reliability and quality.

Unit Testing

The idea of unit testing is to write tests for every non-trivial function or method. This makes it possible to quickly check whether recent code changes have caused regressions (errors in parts of the program that were already tested) and makes such errors easier to detect and fix.

When Unit Testing Is Excessive

Any long-term project without proper test coverage is destined to be rewritten from scratch sooner or later. Unit testing is a must-have for the majority of projects, yet there are cases when one might omit this step. For example, you are creating a project for demonstration purposes and the timeline is very tight. Your system is a combination of hardware and software, and at the beginning of the project it's not entirely clear what the final product will look like. The software will operate for one or two days during an exhibition or presentation. In this case, there is no need to implement unit testing.

Another case is when you are working on an advertising website, simple Flash games, or banners, which involve complex layouts, animations, and a large amount of static content. All of the above serve presentation purposes. If you are building a simple business-card website with a set of static HTML pages and a single email submission form, no unit tests are required. The client will most likely be satisfied with this and won't need anything more, and it will probably be faster to check and test everything manually.

Unit Testing Implementation

When planning your unit testing, bear in mind that the aim is for unit test code coverage to exceed 80%, meaning at least 80% of your codebase is executed when running your unit tests. For measuring this, I recommend tools like JaCoCo for Java or Istanbul for JavaScript. To start incorporating unit testing into your development process, try going through the steps below.

1. Choose an Appropriate Testing Framework

Select a framework that fits your needs rather than reinventing the wheel. For instance, many .NET developers use MsTest because it comes with Visual Studio, but NUnit or xUnit might offer better features for your project.

2. Decide What to Test

Not all code needs testing. Simple, dependency-free code might not require tests, whereas complex code with many dependencies might benefit from refactoring before testing. Focus on testing complex, algorithmic code and interdependent components to ensure clear interaction and integration.

3. Maintain Consistent Test Structure

Use the Arrange, Act, Assert (AAA) pattern for clarity and maintainability (see the sketch after this list).

4. Test One Thing at a Time

Each test should verify only one aspect of the code. For complex processes, break them into smaller parts and test them individually.

5. Handle Dependencies With Fakes

Replace real dependencies with fake implementations to avoid testing unnecessary components. Use stubs for predefined responses and mocks for verifying interactions.
6. Use Isolation Frameworks

Use existing frameworks like Moq or Rhino Mocks to create mocks and stubs instead of writing your own. This reduces errors and maintenance overhead.

7. Design for Testability

Write code with testability in mind from the start. Use dependency injection, avoid direct instantiation of objects within methods, and minimize the use of static methods and constructors with logic.

8. Refactor Legacy Code

If dealing with untestable legacy code, start by refactoring small, manageable parts and cover them with integration and acceptance tests before writing unit tests. Gradually expand this process to larger parts of the codebase.
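As a minimal sketch of steps 3 through 5, here is an AAA-structured test using Python's unittest.mock for brevity; the same shape applies with NUnit or xUnit plus Moq. The PaymentService and its rate provider are hypothetical examples, not code from a real project:

Python
from unittest.mock import Mock

class PaymentService:
    """Toy service with an injected dependency, so tests can substitute a fake."""
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider

    def convert(self, amount, currency):
        return amount * self.rate_provider.get_rate(currency)

def test_convert_applies_current_rate():
    # Arrange: stub the dependency with a predefined response
    rate_provider = Mock()
    rate_provider.get_rate.return_value = 2.0
    service = PaymentService(rate_provider)

    # Act
    result = service.convert(100, "EUR")

    # Assert: one behavior per test, plus a mock-style interaction check
    assert result == 200.0
    rate_provider.get_rate.assert_called_once_with("EUR")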
Automation Testing

The name of this method is self-explanatory: in automation testing, the test cases are executed automatically. This happens much faster than manual testing and can even be carried out overnight, as the whole process requires minimal human interference. This approach is an absolute game changer when you need quick feedback. However, as with any automation, it may need substantial time and financial resources during the initial setup stage. Even so, it is well worth using, as it makes the whole process more efficient and the code more reliable.

Automation Testing Implementation

The first step here is to understand whether the project incorporates test automation. You need to ensure that the project has a robust test automation framework in place. In turn, automation engineers should be proficient with the tool stack (e.g., Selenium, Appium, Cypress) and follow established automation guidelines.

1. Automation Coverage Compared to Manual Tests

Strive for a high percentage of test cases to be automated, ideally over 90%, to maximize efficiency and reduce the reliance on manual testing.

2. Project Overview and Automation Implementation

Automation testing here assumes a large project involving multiple teams developing a shared product, with manual QA testers present in each team. Testing focuses on both frontend and backend aspects.

3. Understanding the Project

First, we need to understand the product's purpose and its users. This helps prioritize automation efforts. For instance, if the product serves businesses, focus on testing legal compliance and payment transactions. For consumer-facing products, prioritize key operations like card-to-card transfers and service payments. Automation should be applied comprehensively across the entire product rather than just individual teams.

4. Identifying Key Stakeholders

It's crucial to be familiar with all stakeholders, since interaction with them will be necessary. Key people include:

- Product owners: They are the clients of the automation and define its requirements.
- QA engineers: They are the end users of the automation tools, and their satisfaction is a measure of success.
- Manual testing leads: They help organize the process and coordinate with manual testing.
- Frontend development leads: They influence the stability and quality of automated tests.
- Procurement specialists: They handle hardware allocation, mainly for server equipment.

5. Understanding Teams

Gather information about each team's project scope, whether it covers frontend, backend, or both. Understand how QA teams test their sections and their familiarity with automation. Identify testing challenges and prioritize areas for automation.

6. Formulating Automation Requirements

In the majority of cases, we aim for a classic approach without innovative solutions:

- Programming language: Java, to make hiring specialists easier
- Frontend testing: Selenium
- Backend testing: REST-assured for REST interactions
- Database testing: standard Java libraries
- Test authoring: Cucumber, for both training manual QA testers and reducing costs
- Reporting: Allure, for attractive and informative reports

7. Demo and Onboarding

Conduct a demo for all stakeholders, including product owners, QA engineers, developers, and analysts, focusing on clarity. Begin with a front-end team to create visible results. Develop 5-10 automated tests, record them, and show the results using Allure for graphical reports. Illustrate the automation infrastructure, main goals, and effects, and compare manual and automated testing.

8. Preparing the UI for Automation

To ensure reliable and stable automated tests, add "data-test-id" attributes to UI elements centrally, with the cooperation of front-end leads and product owners. This practice greatly enhances test reliability by insulating tests from changes in UI element positions or content.

9. Developing Automated Tests

Distribute tasks among automation testers. Create a project framework for automation using templates. Prepare Cucumber steps for frontend testing, make these steps reusable across projects, and set up Selenoid and Jenkins. Integrate teams into automation by setting up repositories, creating Jenkins jobs, and training QA in Cucumber, Git, and development environments. QA manual testers will then write their automated tests, which will be reviewed and integrated by automation engineers. The final Cucumber steps development will occur during spare time in the sprint. At the end of each sprint, showcase results and announce new features in the product demo.

Conclusion

As you can see, unit testing and automation testing are complementary approaches. By using them to identify defects daily, you can reduce regression testing time at each stage. This will also gradually lead to faster releases into production, saving time and resources.
A/B testing is the gold standard for online experimentation, used by most companies to test their product features. While A/B testing works well in most settings, it is particularly susceptible to interference bias in online marketplaces and social networks. In this article, we look at situations involving interference bias and some potential ways to mitigate its effect on evaluation.

SUTVA, the Fundamental Assumption of A/B Testing, and Its Violations

One of the fundamental assumptions of A/B testing is SUTVA, the Stable Unit Treatment Value Assumption: the potential outcome of treatment for a randomization unit depends only on the treatment it receives, not on the treatments assigned to other subjects. This is often violated in experiments on marketplaces and social networks. Some examples of potential violations:

- A/B test experiments on social networks: For example, let's say we want to understand the effect of adding a "Stories" feature on Instagram. A feature that increases engagement for people in the treatment arm can affect people connected to them in the control arm. The people in the control arm respond to their stories, and this can increase their engagement. This is an example where the real treatment effect is less than what we see in the experiment.
- A/B test experiments on rideshare marketplaces: Let's say a rideshare marketplace introduces a discount for the rider and wants to test it against the control of no discount, with the number of rides as the metric of interest. If treatment riders start requesting more rides, fewer drivers will be available to the control riders. The treatment effect in this case is exaggerated.
- A similar example is an ads marketplace where multiple campaigns compete for an ad but the advertiser budget is fixed and shared across treatment and control. Imagine our proposed feature increases the click-through rate. If treatment starts spending more of the budget, less budget is available for the control group. The treatment effect is again inflated.

Mitigating Interference Effects

We can mitigate the impact of interference in A/B tests through a combination of modified experiment setups and causal inference techniques. I focus on the intuition behind the techniques rather than technical details, and I share references so you can dig deeper later.

Budget Split Testing

This is generally used when a common resource, like an advertiser budget, is shared between treatment and control. The budget is split in the ratio of the experiment traffic so that treatment and control have their own budgets and there is no cannibalization. This method can be costly, as it can lead to underutilization of the budget. More details can be found in the paper by Min Liu et al., 2021. Below is skeleton code for a budget-split experimentation system.
Python
import random

class BudgetSplitTest:
    def __init__(self, total_budget, control_traffic_ratio):
        self.total_budget = total_budget
        self.control_traffic_ratio = control_traffic_ratio
        self.treatment_traffic_ratio = 1 - control_traffic_ratio
        # Split budget based on traffic ratio
        self.control_budget = total_budget * control_traffic_ratio
        self.treatment_budget = total_budget * self.treatment_traffic_ratio
        # Track spent budget and conversions
        self.control_spent = 0
        self.treatment_spent = 0
        self.control_conversions = 0
        self.treatment_conversions = 0

    def run_experiment(self, total_impressions):
        for _ in range(total_impressions):
            if random.random() < self.control_traffic_ratio:
                self._serve_control_ad()
            else:
                self._serve_treatment_ad()

    def _serve_control_ad(self):
        if self.control_spent < self.control_budget:
            spend = min(random.uniform(0.1, 1.0), self.control_budget - self.control_spent)
            self.control_spent += spend
            if random.random() < 0.1:  # 10% conversion rate for control
                self.control_conversions += 1

    def _serve_treatment_ad(self):
        if self.treatment_spent < self.treatment_budget:
            spend = min(random.uniform(0.1, 1.0), self.treatment_budget - self.treatment_spent)
            self.treatment_spent += spend
            if random.random() < 0.15:  # 15% conversion rate for treatment
                self.treatment_conversions += 1

    def get_results(self):
        return {
            "Control": {
                "Budget": round(self.control_budget, 2),
                "Spent": round(self.control_spent, 2),
                "Conversions": self.control_conversions,
                "CPA": round(self.control_spent / self.control_conversions, 2) if self.control_conversions else 0
            },
            "Treatment": {
                "Budget": round(self.treatment_budget, 2),
                "Spent": round(self.treatment_spent, 2),
                "Conversions": self.treatment_conversions,
                "CPA": round(self.treatment_spent / self.treatment_conversions, 2) if self.treatment_conversions else 0
            }
        }

# Run the experiment
total_budget = 10000
control_traffic_ratio = 0.5  # 50% traffic to control, 50% to treatment
total_impressions = 100000

experiment = BudgetSplitTest(total_budget, control_traffic_ratio)
experiment.run_experiment(total_impressions)
results = experiment.get_results()

Switchback Experiments

Switchbacks are more common in two-sided marketplaces like Lyft, Uber, and DoorDash, where all users switch between treatment and control together. The randomization unit is not the user but the time interval. This method can still have spillover from treatment to control if the time intervals are too short; if they are too long, the experiment can be underpowered. We can increase the power by using methods like regression analysis.
Python
import random
from datetime import datetime, timedelta

class SwitchbackExperiment:
    def __init__(self, experiment_name, start_time, end_time, interval_hours=1):
        self.name = experiment_name
        self.start_time = start_time
        self.end_time = end_time
        self.interval_hours = interval_hours
        self.schedule = self._create_schedule()
        self.data = []

    def _create_schedule(self):
        schedule = []
        current_time = self.start_time
        while current_time < self.end_time:
            schedule.append({
                'start': current_time,
                'end': current_time + timedelta(hours=self.interval_hours),
                'variant': random.choice(['control', 'treatment'])
            })
            current_time += timedelta(hours=self.interval_hours)
        return schedule

    def get_active_variant(self, timestamp):
        for interval in self.schedule:
            if interval['start'] <= timestamp < interval['end']:
                return interval['variant']
        return None  # Outside experiment time range

    def record_event(self, timestamp, metric_value):
        variant = self.get_active_variant(timestamp)
        if variant:
            self.data.append({
                'timestamp': timestamp,
                'variant': variant,
                'metric_value': metric_value
            })

    def get_results(self):
        control_data = [event['metric_value'] for event in self.data if event['variant'] == 'control']
        treatment_data = [event['metric_value'] for event in self.data if event['variant'] == 'treatment']
        return {
            'control': {
                'count': len(control_data),
                'total': sum(control_data),
                'average': sum(control_data) / len(control_data) if control_data else 0
            },
            'treatment': {
                'count': len(treatment_data),
                'total': sum(treatment_data),
                'average': sum(treatment_data) / len(treatment_data) if treatment_data else 0
            }
        }

# Example usage
if __name__ == "__main__":
    # Set up the experiment
    start = datetime(2023, 5, 1, 0, 0)
    end = datetime(2023, 5, 8, 0, 0)  # One week experiment
    exp = SwitchbackExperiment("New Pricing Algorithm", start, end, interval_hours=4)

    # Simulate events (e.g., rides in a rideshare app)
    current_time = start
    while current_time < end:
        # Simulate more rides during peak hours
        num_rides = random.randint(5, 20)
        if 7 <= current_time.hour <= 9 or 16 <= current_time.hour <= 18:
            num_rides *= 2
        for _ in range(num_rides):
            # Simulate a ride
            ride_time = current_time + timedelta(minutes=random.randint(0, 59))
            ride_value = random.uniform(10, 50)  # Ride value between $10 and $50
            exp.record_event(ride_time, ride_value)
        current_time += timedelta(hours=1)

    # Analyze results
    results = exp.get_results()

Graph Cluster Randomization (GCR)

In social network experiments, graph cluster randomization is a technique used to further reduce interference bias. This method takes the network structure into account when forming clusters, helping to isolate treatment effects within network communities. Clusters are then randomly assigned to treatment and control. Because the clusters are relatively isolated, interference is reduced (a sketch follows).
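A minimal sketch of cluster-level assignment, assuming clusters have already been formed (e.g., by a community-detection algorithm); the cluster data and names are illustrative:

Python
import random

def graph_cluster_randomize(clusters, treatment_share=0.5, seed=42):
    """Assign whole clusters, not individual users, to treatment or control."""
    rng = random.Random(seed)
    assignment = {}
    for cluster_id, members in clusters.items():
        # All members of a cluster get the same arm, so most of a user's
        # neighbors share their treatment status and spillover is limited.
        arm = "treatment" if rng.random() < treatment_share else "control"
        for user in members:
            assignment[user] = arm
    return assignment

# Illustrative clusters, e.g., output of a community-detection step:
clusters = {"c1": ["u1", "u2", "u3"], "c2": ["u4", "u5"], "c3": ["u6"]}
print(graph_cluster_randomize(clusters))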
Resource-Adjusted Metrics

Rather than solely focusing on absolute outcomes, we can use metrics that account for resource allocation. For instance, in an ad campaign, instead of just measuring clicks, we might track cost per click or return on ad spend, which normalizes the results across varying budget levels.

Synthetic Control

In cases of interference, synthetic control groups can be constructed to model the effect of treatment on a metric for a unit based on the metrics of other units. For example, take the country as the unit: in a pretest period, the metrics of a country are modeled with respect to the metrics of other countries. After we promote the feature in a country, we can estimate the effect of the intervention by comparing the observed metric with the metric predicted by the model. The variance of the results may be too high to measure small effects.

ITSA

Interrupted time series analysis: define an intervention point, such as a feature promotion, and then use the pre-intervention time series to predict the post-intervention observations. Compare the predictions with the actual observations to see whether the intervention had an effect on the time series.

Staggered Rollouts

Gradually introduce changes to a small subset of users and monitor the results before expanding the rollout. This allows you to detect potential issues early on and mitigate the impact of interference.

In reality, all these methods should be used in conjunction with A/B testing. For example, metrics can be defined to detect whether interference is present in the ads marketplace. If it is not a problem, the A/B test results can be trusted; otherwise, we can go for a budget-split test.
While working on an open-source GitHub project created to showcase the Selenium WebDriver framework with Java, I found that, as the project grew, I needed multiple testng.xml files for running different tests. These files were created to segregate the tests, placing all the tests for a given website in a single testng.xml (I have used different demo websites to demonstrate the actions that can be automated using Selenium WebDriver). I thought of throwing some light on the usage of multiple testng.xml files and how to execute the tests. Since Maven is the build tool being used, a single testng.xml file is required to run all the tests in the project. There were also cases where test failures had to be debugged by running a single testng.xml file.

In this project, I have created 9 different testng.xml files containing multiple tests, and I run all the tests in these 9 files using a single testng.xml file. Yes, that is possible! So, join my journey where I demonstrate how to execute multiple testng.xml files using a single testng.xml file. I will also shed some light on executing a single testng.xml file out of the 9 available ones and running it from the command line using Maven.

Running Multiple testng.xml Files Using a Single testng.xml File

Let's first focus on running all the tests from all 9 testng.xml files. The solution is to use the <suite-files> </suite-files> tag in your testng.xml file and provide the other testng.xml files' paths between the tags. Here is an example file to demonstrate what I am talking about:

XML
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Selenium 4 POC Tests ">
    <suite-files>
        <suite-file path="testng-saucedemo.xml"/>
        <suite-file path="testng-automationpractice.xml"/>
        <suite-file path="testng-theinternet.xml"/>
        <suite-file path="testng-juice-shop.xml"/>
        <suite-file path="testng-lambdatestecommerce.xml"/>
        <suite-file path="testng-seleniumgrid-theinternet.xml"/>
        <suite-file path="testng-lambdatest-selenium-playground.xml"/>
        <!-- <suite-file path="testng-seleniumgrid-juiceshop.xml"/>-->
    </suite-files>
</suite>

Once we execute this file, it will execute the respective testng.xml files in the order listed between the <suite-files> tags. So, testng-saucedemo.xml will be executed first, then testng-automationpractice.xml, and so on. All the testng.xml files in the above example contain multiple tests, so all the tests within the respective testng.xml will be executed, and after completion, the next XML file will be picked up for execution.
The following are the contents of the testng-saucedemo.xml file:

XML
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Sauce Demo Website Tests" parallel="tests" thread-count="4" verbose="2">
    <test name="selenium 4 Tests with Chrome Browser">
        <parameter name="browser" value="chrome"/>
        <classes>
            <class name="io.github.mfaisalkhatri.tests.saucedemo.SauceDemoTests">
                <methods>
                    <include name="loginSauceDemoTest"/>
                    <include name="logOutSauceDemoTest"/>
                </methods>
            </class>
        </classes>
    </test> <!-- Test -->
    <test name="selenium 4 Tests with Firefox Browser">
        <parameter name="browser" value="firefox"/>
        <classes>
            <class name="io.github.mfaisalkhatri.tests.saucedemo.SauceDemoTests">
                <methods>
                    <include name="loginSauceDemoTest"/>
                    <include name="logOutSauceDemoTest"/>
                </methods>
            </class>
        </classes>
    </test> <!-- Test -->
    <test name="selenium 4 Tests with Edge Browser" enabled="false">
        <parameter name="browser" value="edge"/>
        <classes>
            <class name="io.github.mfaisalkhatri.tests.saucedemo.SauceDemoTests">
                <methods>
                    <include name="loginSauceDemoTest"/>
                    <include name="logOutSauceDemoTest"/>
                </methods>
            </class>
        </classes>
    </test> <!-- Test -->
    <test name="selenium 4 Tests with Opera Browser" enabled="false">
        <parameter name="browser" value="opera"/>
        <classes>
            <class name="io.github.mfaisalkhatri.tests.saucedemo.SauceDemoTests">
                <methods>
                    <include name="loginSauceDemoTest"/>
                    <include name="logOutSauceDemoTest"/>
                </methods>
            </class>
        </classes>
    </test> <!-- Test -->
</suite> <!-- Suite -->

Once all the tests in this XML file have been executed, regardless of whether they pass or fail, the next file will be picked up to run another set of tests. Note that running the suite files in parallel is not supported by TestNG.

Running a Single testng.xml File Using Maven

Your IDE gives you the option to run tests from a testng.xml file by right-clicking on it and selecting the run option. However, when it comes to executing tests in a CI/CD pipeline, that option does not hold, as you need to run the tests using commands in the automated pipeline.

Configuring Your Project To Run a suite-xml File From the Command Line

We need the following configuration to be able to run a testng.xml file using Maven. Update the Maven Surefire plugin in your pom.xml and note the <suiteXmlFile> tag: its value is set as ${suite-xml}. We will set the default value for this declaration in the properties block of the pom.xml file, as follows:
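A minimal sketch of the equivalent pom.xml sections (the plugin version and the default suite path here are illustrative and may differ from the project's actual file):

XML
<properties>
    <!-- Default suite file; can be overridden on the command line with -Dsuite-xml -->
    <suite-xml>test-suite/testng.xml</suite-xml>
</properties>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>3.2.5</version>
            <configuration>
                <suiteXmlFiles>
                    <suiteXmlFile>${suite-xml}</suiteXmlFile>
                </suiteXmlFiles>
            </configuration>
        </plugin>
    </plugins>
</build>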
The default path points to the testng.xml file we used in the section above, the one with the suite-files paths in it. So now, if we run mvn clean install or mvn clean test, Maven will pick up the default testng.xml file based on the path set in the properties block and execute all the tests.

The question that comes to mind now is: "What should I do if I want to execute any other testng.xml file? Is it possible?" The answer is yes: we can run any testng.xml file in the project by adding -Dsuite-xml=<testng.xml file path> to the mvn command. Remember, we set up this configuration earlier in the Maven Surefire plugin block in pom.xml. We just need to pass a value for the suite-xml property on the command line, which can be done using the -D option of the mvn command:

Plain Text
mvn clean test -Dsuite-xml=<testng.xml file path>

Let's now try our hands at the command line and run a different testng.xml file using Maven, as we just learned. We will run the testng-theinternet.xml file and check that it overrides the existing default testng.xml, so that only the file we pass in the command is run. We need to pass the full path where the testng.xml is saved; in our case, it is available in the test-suite folder, so the full path is test-suite\testng-theinternet.xml. Here is the command we will run (make sure you are in the project's root folder in the command-line window before you execute the Maven command):

Plain Text
mvn clean test -Dsuite-xml=test-suite\testng-theinternet.xml

The -Dsuite-xml option can also be used with other Maven commands, such as mvn clean install or mvn clean verify.

The tests ran successfully, and the results were printed on the console: 32 tests ran and passed. To confirm that the correct XML file was picked up and executed, let's run the tests for the testng-theinternet.xml file using the IDE and check the number of tests executed. We can see that 32 tests were executed and passed, which confirms that the tests we executed using the mvn command were correctly run for the testng.xml file we passed.

Conclusion

We can have multiple testng.xml files to segregate tests based on the different modules/websites in our project, and these multiple testng.xml files can be executed using a single testng.xml file. Likewise, we can execute a testng.xml file from the command line using the Maven Surefire plugin.
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code.

When software development teams face pressure to deliver high-quality applications rapidly, low-code platforms offer the needed support for rapidly evolving business requirements and complex integrations. Integrating intelligent automated testing (IAT), intelligent process automation (IPA), and robotic process automation (RPA) solutions, which can adapt to changes more readily, ensures that testing and automation keep pace with the evolving applications and processes. In a low-code development environment, as shown in Figure 1, IAT, IPA, and RPA can reduce manual effort and improve test coverage, accuracy, and efficiency in the SDLC and process automation.

Figure 1. Low-code development environment

Using IAT, IPA, and RPA with low-code platforms can also achieve faster time to market, reduced costs, and increased productivity. The intersection of IAT, IPA, RPA, and low code is a paradigm shift in modern software development and process automation, and the impact extends to industries like professional services, consumer goods, banking, and beyond. This article explores all three integrations. For each integration, we will highlight advantages and disadvantages, explore factors to consider when deciding whether to integrate, present a use case, and highlight key implementation points. The use cases presented are popular examples of how these technologies can be applied in specific scenarios. These use cases do not imply that each integration is limited to the mentioned domains, nor do they suggest that the integrations cannot be used differently within the same domains. The flexibility and versatility of the three integrations explored in this article allow for a wide range of applications across different industries and processes.

IAT With Low-Code Development

AI-driven test case generation in intelligent automated testing can explore more scenarios, edge cases, and application states, leading to better test coverage and higher application quality. This is particularly beneficial in low-code environments, where complex integrations and rapidly evolving requirements can make comprehensive testing challenging. By automating testing tasks, such as test case generation, execution, and maintenance, IAT can significantly reduce the manual effort required, leading to increased efficiency and cost savings. This is advantageous in low-code development, where citizen developers with limited testing expertise are involved, minimizing the need for dedicated testing resources.

Low-code platforms enable rapid application development, but testing can become a bottleneck. Automated testing and IAT can provide rapid feedback on application quality and potential issues, enabling quicker identification and resolution of defects. This may accelerate the overall development and delivery cycle and allow organizations to leverage the speed of low code while maintaining quality standards. We need to keep in mind, though, that not all low-code platforms may integrate with all IAT solutions. IAT solutions may require access to sensitive application data, logs, and other information for training AI/ML models and generating test cases.
In cases where training and software engineering skill development is necessary for AI/ML in IAT, we also need to consider costs like maintenance and support as well as customization and infrastructure. The decision on whether to integrate IAT with a low-code platform involves a number of factors that are highlighted in the table below.

Table 1. Integrating IAT with low-code development

When to integrate:
- Rapid development is critical, but only citizen developers with limited testing experience are available
- Applications built on low-code platforms have good options for IAT integration
- Complex applications need comprehensive test coverage, requiring extensive testing
- Frequent release cycles have well-established CI/CD pipelines
- Enhanced decision-making for the testing process is needed

When not to integrate:
- Simple applications have limited functionality, and the low-code platform already provides sufficient testing capabilities
- Complexity and learning curve are high, and a deep understanding of AI/ML is required
- There are compatibility, interoperability, and data silo issues
- Data security and regulatory compliance are challenges
- There are budget constraints

Use Case: Professional Services

A low-code platform will be used to develop custom audit applications. Since IAT tools can be integrated to automate the testing of these applications, a professional services company will leverage IAT to enhance the accuracy, speed, efficiency, and effectiveness of its audit and assurance services. Implementation main points are summarized in Figure 2 below:

Figure 2. IAT with low-code development for a custom audit app

In this professional services use case for integrating IAT with low code, custom audit applications could also be developed for industries such as healthcare or finance, where automated testing can improve compliance and risk management.

IPA With Low-Code Development

Intelligent process automation may significantly enhance efficiency by automating various aspects of the software development and testing lifecycle. Low-code environments can benefit from IPA's advanced AI technologies, such as machine learning, natural language processing (NLP), and cognitive computing. These enhancements allow low-code platforms to automate more complex and data-intensive tasks that go beyond simple rule-based processes.

IPA is not limited to simple rule-based tasks; it incorporates cognitive automation capabilities, which make it able to handle more complex scenarios involving unstructured data and decision-making. IPA can learn from data patterns and make decisions based on historical data and trends. This is particularly useful for testing scenarios that involve complex logic and variable outcomes. For example, IPA can handle unstructured data like text documents, images, and emails by using NLP and optical character recognition.

IPA may be used to automate complex workflows and decision-making processes, reducing the need for manual intervention. End-to-end workflows and business processes can be automated, including approvals, notifications, and escalations. Automated decision-making can handle tasks such as credit scoring, risk assessment, and eligibility verification without human involvement, based on predefined criteria and real-time data analysis. With IPA, low-code testing can go beyond testing applications, since we can test entire processes across different verticals of an organization.
As IPA can support a wide range of integration scenarios across verticals, security and regulatory compliance may be an issue. If the low-code platform does not fully support the wide range of integrations available through IPA, then we need to consider alternatives. Infrastructure setup, data migration, data integration, licensing, and customization are examples of the costs involved. The following table summarizes the factors to consider before integrating IPA.

Table 2. Integrating IPA with low-code development

When to integrate:
- Stringent compliance and regulatory requirements exist that change in an adaptable, detailed, and easy-to-automate fashion
- Repetitive processes exist across verticals where efficiency and accuracy can be enhanced
- Rapid development and deployment of scalable automation solutions is necessary
- End-to-end business processes can be streamlined
- Decision-making for complex process optimization is necessary

When not to integrate:
- Regulatory and security compliance frameworks are too rigid, with security/compliance gaps and potential legal issues, leading to challenges and uncertainties
- There are no clear optimization goals; manual processes are sufficient
- The low-code platform has limited customization for IPA
- There is limited IT expertise
- There are high initial implementation costs

Use Case: Consumer Goods

A leading consumer goods company wants to utilize IPA to enhance its supply chain management and business operations. They will use a low-code platform to develop supply chain applications, and the platform will have the option to integrate IPA tools to automate and optimize supply chain processes. Such an integration will allow the company to improve supply chain efficiency, reduce operational costs, and enhance product delivery times. Implementation main points are summarized in Figure 3 below:

Figure 3. IPA with low-code development for a consumer goods company

This example of integrating IPA with low code in the consumer goods sector could be adapted for industries like retail or manufacturing, where inventory management, demand forecasting, and production scheduling can be optimized.

RPA With Low-Code Development

Robotic process automation and low-code development have a complementary relationship, as they can be combined to enhance the overall automation and application development capabilities within an organization. For example, RPA can be used to automate repetitive tasks and integrate with various systems, while low-code platforms can be leveraged to build custom applications and workflows quickly, which may result in faster time to market. The rapid development capabilities of low-code platforms, combined with the automation power of RPA, may enable organizations to quickly build and deploy applications.

By automating repetitive tasks with RPA and rapidly building custom applications with low-code platforms, organizations can significantly improve their overall operational efficiency and productivity. RPA in a low-code environment can lead to cost savings by minimizing manual effort, reducing development time, and enabling citizen developers to contribute to application development. Both RPA and low-code platforms offer scalability and flexibility, allowing organizations to adapt to changing business requirements and scale their applications and automated processes as needed. RPA bots can dynamically scale to handle varying volumes of customer queries; during peak times, additional bots can be deployed to manage the increased workload, ensuring consistent service levels.
RPA tools often come with cross-platform compatibility, allowing them to interact with various applications and systems and enhancing the flexibility of low-code platforms. Data sensitivity may be an issue here, as RPA bots may directly access proprietary or sensitive data. For processes that are unstable, difficult to automate, or unpredictable, RPA may not provide the expected gains. RPA relies on structured data and predefined rules to execute tasks; frequently changing, unstable, and unstructured processes that lack clear and consistent repetitive patterns may pose significant challenges for RPA bots. Processes that are complex to automate often involve multiple decision points, exceptions, and dependencies. While RPA can handle some level of complexity, it is not designed for tasks requiring deep context understanding or sophisticated decision-making capabilities. The following table summarizes the factors to consider before integrating RPA.

Table 3. Integrating RPA with low-code development

When to integrate:
- Existing system integrations can be further enhanced via automation
- Repetitive tasks and processes exist where manual processing is inefficient
- Cost savings are expected by automating heavy loads of structured and repetitive tasks
- Scalability and flexibility of RPA can be leveraged by the low-code platform
- Time to market is important

When not to integrate:
- Tasks to be automated involve unstructured data and complex decision-making
- Rapidly changing and complex processes must be automated
- Implementation and maintenance costs of the integration are high
- There is a lack of technical expertise
- RPA bots would operate on sensitive data without safeguards

Use Case: Banking

A banking organization aims to streamline its data entry processes by integrating RPA with low-code development platforms to automate repetitive and time-consuming tasks, such as form filling, data extraction, and data transfer between legacy and new systems. The integration is expected to enhance operational efficiency, reduce manual errors, ensure data accuracy, and increase customer satisfaction. Additionally, it will allow the bank to handle increased volumes of customer data with greater speed and reliability. The low-code platform will provide the flexibility to rapidly develop and deploy custom applications tailored to the bank's specific needs, while RPA will handle the automation of back-end processes, ensuring seamless and secure data management. Implementation main points are summarized in Figure 4 below:

Figure 4. RPA with low-code development for a banking organization

In this banking example for integrating RPA with low code, while RPA is used to automate back-end processes such as data entry and transfer, it can also automate front-end processes like customer service interactions and loan processing. Additionally, low code with RPA can be applied in domains such as insurance or telecommunications to automate claims processing and customer onboarding, respectively.

Conclusion

The value of technological integration lies in its ability to empower society and organizations to evolve, stay competitive, and thrive in a changing landscape — a landscape that calls for innovation and productivity to address market needs and societal changes. By embracing IAT, IPA, RPA, and low-code development, businesses can unlock new levels of agility, efficiency, and innovation. This will enable them to deliver exceptional customer experiences while driving sustainable growth and success.
As the digital transformation journey continues to unfold, the integration of IAT, IPA, and RPA with low-code development will play a pivotal role and shape the future of software development, process automation, and business operations across industries.

This is an excerpt from DZone's 2024 Trend Report, Low-Code Development: Elevating the Engineering Experience With Low and No Code.
In today's security landscape, OAuth2 has become a standard for securing APIs, providing a more robust and flexible approach than basic authentication. My journey into this domain began with a critical solution architecture decision: migrating from basic authentication to OAuth2 client credentials for obtaining access tokens. While Spring Security offers strong support for both authentication methods, I encountered a significant challenge: I could not find a declarative approach that seamlessly integrated basic authentication and JWT authentication within the same application. This gap in functionality motivated me to explore and develop a solution that not only meets the authentication requirements but also supports comprehensive integration testing.

This article shares my findings and provides a detailed guide on setting up Keycloak, integrating it with Spring Security and Spring Boot, and utilizing the Spock Framework for repeatable integration tests. By the end of this article, you will clearly understand how to configure and test your authentication mechanisms effectively with Keycloak as an identity provider, ensuring a smooth transition to OAuth2 while maintaining the flexibility to support basic authentication where necessary.

Prerequisites

Before you begin, ensure you have met the following requirements:

- You have installed Java 21.
- You have a basic understanding of Maven and Java.

taptech-code-accelerator is the parent project for its modules; it manages common dependencies and configurations for all the child modules. You can get it from here: taptech-code-accelerator.

Building taptech-code-accelerator

To build the taptech-code-accelerator project, follow these steps:

1. Clone the project from the repository:
git clone https://github.com/glawson6/taptech-code-accelerator.git
2. Open a terminal and change the current directory to the root directory of the taptech-code-accelerator project:
cd path/to/taptech-code-accelerator
3. Run the following command to build the project:
./build.sh

This command cleans the project, compiles the source code, runs any tests, packages the compiled code into a JAR or WAR file, and installs the packaged code in your local Maven repository. It also builds the local Docker image that will be used later. Please ensure you have the necessary permissions to execute these commands.

Keycloak Initial Setup

Setting up Keycloak for integration testing involves several steps. This guide will walk you through creating a local environment configuration, starting Keycloak with Docker, configuring realms and clients, verifying the setup, and preparing a PostgreSQL dump for your integration tests.

Step 1: Create a local.env File

First, navigate to the taptech-common/src/test/resources/docker directory and create a local.env file to store the environment variables needed for the Keycloak service. Here's an example of what the local.env file might look like:

POSTGRES_DB=keycloak
POSTGRES_USER=keycloak
POSTGRES_PASSWORD=admin
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=admin
KC_DB_USERNAME=keycloak
KC_DB_PASSWORD=keycloak
SPRING_PROFILES_ACTIVE=secure-jwk
KEYCLOAK_ADMIN_CLIENT_SECRET=DCRkkqpUv3XlQnosjtf8jHleP7tuduTa
IDP_PROVIDER_JWKSET_URI=http://172.28.1.90:8080/realms/offices/protocol/openid-connect/certs

Step 2: Start the Keycloak Service

Next, start the Keycloak service using the provided docker-compose.yml file and the ./start-services.sh script. The docker-compose.yml file should define the Keycloak and PostgreSQL services.
version: '3.8'
services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
      #- ./dump:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: ${KC_DB_USERNAME}
      POSTGRES_PASSWORD: ${KC_DB_PASSWORD}
    networks:
      node_net:
        ipv4_address: 172.28.1.31

  keycloak:
    image: quay.io/keycloak/keycloak:23.0.6
    command: start #--import-realm
    environment:
      KC_HOSTNAME: localhost
      KC_HOSTNAME_PORT: 8080
      KC_HOSTNAME_STRICT_BACKCHANNEL: false
      KC_HTTP_ENABLED: true
      KC_HOSTNAME_STRICT_HTTPS: false
      KC_HEALTH_ENABLED: true
      KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
      KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://172.28.1.31/keycloak
      KC_DB_USERNAME: ${KC_DB_USERNAME}
      KC_DB_PASSWORD: ${KC_DB_PASSWORD}
    ports:
      - 8080:8080
    volumes:
      - ./realms:/opt/keycloak/data/import
    restart: always
    depends_on:
      - postgres
    networks:
      node_net:
        ipv4_address: 172.28.1.90

volumes:
  postgres_data:
    driver: local

networks:
  node_net:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16

Then, use the ./start-services.sh script to start the services.

Step 3: Access the Keycloak Admin Console

Once Keycloak has started, log in to the admin console at http://localhost:8080 using the configured admin username and password (the default is admin/admin).

Step 4: Create a Realm and Client

Create a realm:
1. Log in to the Keycloak admin console.
2. In the left-hand menu, click on "Add Realm".
3. Enter the name of the realm (e.g., offices) and click "Create".

Create a client:
1. Select your newly created realm from the left-hand menu.
2. Click on "Clients" in the left-hand menu.
3. Click on "Create" in the right-hand corner.
4. Enter the client ID (e.g., offices), choose openid-connect as the client protocol, and click "Save".

Extract the admin-cli client secret: follow the directions in EXTRACTING-ADMIN-CLI-CLIENT-SECRET.md to extract the admin-cli client secret, and save it for later use.

Step 5: Verify the Setup With HTTP Requests

To verify the setup, you can use HTTP requests to obtain tokens. Get an access token:

http -a admin-cli:[client secret] --form POST http://localhost:8080/realms/master/protocol/openid-connect/token grant_type=password username=admin password=Pa55w0rd

Step 6: Create a PostgreSQL Dump

After verifying the setup, create a PostgreSQL dump of the Keycloak database to use for seeding the database during integration tests:

docker exec -i docker-postgres-1 /bin/bash -c "PGPASSWORD=keycloak pg_dump --username keycloak keycloak" > dump/keycloak-dump.sql

Save the keycloak-dump.sql file locally. This file will be used to seed the database for integration tests. Following these steps, you will have a Keycloak instance configured and ready for integration testing with Spring Security and the Spock Framework.

Spring Security and Keycloak Integration Tests

This section sets up integration tests for Spring Security and Keycloak using Spock and Testcontainers. This involves configuring dependencies, setting up Testcontainers for Keycloak and PostgreSQL, and creating a base class to hold the necessary configurations.

Step 1: Add Dependencies

First, add the necessary dependencies to your pom.xml file. Ensure that Spock, Testcontainers for Keycloak and PostgreSQL, and other required libraries are included (check here).

Step 2: Create the Base Test Class

Create a base class to hold the configuration for your integration tests.
Groovy
package com.taptech.common.security.keycloak

import com.taptech.common.security.user.InMemoryUserContextPermissionsService
import com.fasterxml.jackson.databind.ObjectMapper
import dasniko.testcontainers.keycloak.KeycloakContainer
import org.keycloak.admin.client.Keycloak
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.testcontainers.containers.Network
import org.testcontainers.containers.PostgreSQLContainer
import org.testcontainers.containers.output.Slf4jLogConsumer
import org.testcontainers.containers.wait.strategy.ShellStrategy
import org.testcontainers.utility.DockerImageName
import org.testcontainers.utility.MountableFile
import spock.lang.Shared
import spock.lang.Specification
import spock.mock.DetachedMockFactory

import java.time.Duration
import java.time.temporal.ChronoUnit

class BaseKeyCloakInfraStructure extends Specification {

    private static final Logger logger = LoggerFactory.getLogger(BaseKeyCloakInfraStructure.class);

    static String jdbcUrlFormat = "jdbc:postgresql://%s:%s/%s"
    static String keycloakBaseUrlFormat = "http://%s:%s"

    public static final String OFFICES = "offices";
    public static final String POSTGRES_NETWORK_ALIAS = "postgres";

    @Shared
    static Network network = Network.newNetwork();

    @Shared
    static PostgreSQLContainer<?> postgres = createPostgresqlContainer()

    protected static PostgreSQLContainer createPostgresqlContainer() {
        PostgreSQLContainer container = new PostgreSQLContainer<>("postgres")
                .withNetwork(network)
                .withNetworkAliases(POSTGRES_NETWORK_ALIAS)
                .withCopyFileToContainer(MountableFile.forClasspathResource("postgres/keycloak-dump.sql"), "/docker-entrypoint-initdb.d/keycloak-dump.sql")
                .withUsername("keycloak")
                .withPassword("keycloak")
                .withDatabaseName("keycloak")
                .withLogConsumer(new Slf4jLogConsumer(logger))
                .waitingFor(new ShellStrategy()
                        .withCommand("psql -q -o /dev/null -c \"SELECT 1\" -d keycloak -U keycloak")
                        .withStartupTimeout(Duration.of(60, ChronoUnit.SECONDS)))
        return container
    }

    public static final DockerImageName KEYCLOAK_IMAGE = DockerImageName.parse("bitnami/keycloak:23.0.5");

    @Shared
    public static KeycloakContainer keycloakContainer;

    @Shared
    static String adminCC = "admin@cc.com"

    def setup() { }    // run before every feature method

    def cleanup() { }  // run after every feature method

    def setupSpec() {
        postgres.start()
        String jdbcUrl = String.format(jdbcUrlFormat, POSTGRES_NETWORK_ALIAS, 5432, postgres.getDatabaseName());
        keycloakContainer = new KeycloakContainer("quay.io/keycloak/keycloak:23.0.6")
                .withNetwork(network)
                .withExposedPorts(8080)
                .withEnv("KC_HOSTNAME", "localhost")
                .withEnv("KC_HOSTNAME_PORT", "8080")
                .withEnv("KC_HOSTNAME_STRICT_BACKCHANNEL", "false")
                .withEnv("KC_HTTP_ENABLED", "true")
                .withEnv("KC_HOSTNAME_STRICT_HTTPS", "false")
                .withEnv("KC_HEALTH_ENABLED", "true")
                .withEnv("KEYCLOAK_ADMIN", "admin")
                .withEnv("KEYCLOAK_ADMIN_PASSWORD", "admin")
                .withEnv("KC_DB", "postgres")
                .withEnv("KC_DB_URL", jdbcUrl)
                .withEnv("KC_DB_USERNAME", "keycloak")
                .withEnv("KC_DB_PASSWORD", "keycloak")
        keycloakContainer.start()

        String authServerUrl = keycloakContainer.getAuthServerUrl();
        String adminUsername = keycloakContainer.getAdminUsername();
        String adminPassword = keycloakContainer.getAdminPassword();
        logger.info("Keycloak getExposedPorts: {}", keycloakContainer.getExposedPorts())
        String keycloakBaseUrl = String.format(keycloakBaseUrlFormat, keycloakContainer.getHost(), keycloakContainer.getMappedPort(8080));
        //String keycloakBaseUrl = "http://localhost:8080"
        logger.info("Keycloak authServerUrl: {}", authServerUrl)
        logger.info("Keycloak URL: {}", keycloakBaseUrl)
        logger.info("Keycloak adminUsername: {}", adminUsername)
        logger.info("Keycloak adminPassword: {}", adminPassword)
        logger.info("JDBC URL: {}", jdbcUrl)

        System.setProperty("spring.datasource.url", jdbcUrl)
        System.setProperty("spring.datasource.username", postgres.getUsername())
        System.setProperty("spring.datasource.password", postgres.getPassword())
        System.setProperty("spring.datasource.driverClassName", "org.postgresql.Driver");
        System.setProperty("POSTGRES_URL", jdbcUrl)
        System.setProperty("POSRGRES_USER", postgres.getUsername())
        System.setProperty("POSRGRES_PASSWORD", postgres.getPassword());
        System.setProperty("idp.provider.keycloak.base-url", authServerUrl)
        System.setProperty("idp.provider.keycloak.admin-client-secret", "DCRkkqpUv3XlQnosjtf8jHleP7tuduTa")
        System.setProperty("idp.provider.keycloak.admin-client-id", KeyCloakConstants.ADMIN_CLI)
        System.setProperty("idp.provider.keycloak.admin-username", adminUsername)
        System.setProperty("idp.provider.keycloak.admin-password", adminPassword)
        System.setProperty("idp.provider.keycloak.default-context-id", OFFICES)
        System.setProperty("idp.provider.keycloak.client-secret", "x9RIGyc7rh8A4w4sMl8U5rF3HuNm2wOC3WOD")
        System.setProperty("idp.provider.keycloak.client-id", OFFICES)
        System.setProperty("idp.provider.keycloak.token-uri", "/realms/offices/protocol/openid-connect/token")
        System.setProperty("idp.provider.keycloak.jwkset-uri", authServerUrl + "/realms/offices/protocol/openid-connect/certs")
        System.setProperty("idp.provider.keycloak.issuer-url", authServerUrl + "/realms/offices")
        System.setProperty("idp.provider.keycloak.admin-token-uri", "/realms/master/protocol/openid-connect/token")
        System.setProperty("idp.provider.keycloak.user-uri", "/admin/realms/{realm}/users")
        System.setProperty("idp.provider.keycloak.use-strict-jwt-validators", "false")
    } // run before the first feature method

    def cleanupSpec() {
        keycloakContainer.stop()
        postgres.stop()
    } // run after

    @Autowired
    Keycloak keycloak

    @Autowired
    KeyCloakAuthenticationManager keyCloakAuthenticationManager

    @Autowired
    InMemoryUserContextPermissionsService userContextPermissionsService

    @Autowired
    KeyCloakManagementService keyCloakService

    @Autowired
    KeyCloakIdpProperties keyCloakIdpProperties

    @Autowired
    KeyCloakJwtDecoderFactory keyCloakJwtDecoderFactory

    def test_config() {
        expect:
        keycloak != null
        keyCloakAuthenticationManager != null
        keyCloakService != null
    }

    static String basicAuthCredsFrom(String s1, String s2) {
        return "Basic " + toBasicAuthCreds(s1, s2);
    }

    static toBasicAuthCreds(String s1, String s2) {
        return Base64.getEncoder().encodeToString((s1 + ":" + s2).getBytes());
    }

    @Configuration
    @EnableKeyCloak
    public static class TestConfig {

        @Bean
        ObjectMapper objectMapper() {
            return new ObjectMapper();
        }

        DetachedMockFactory mockFactory = new DetachedMockFactory()
    }
}

In the BaseKeyCloakInfraStructure class, a method named createPostgresqlContainer() is used to set up a PostgreSQL test container. This method configures the container with various settings, including network settings, username, password, and database name; the class as a whole sets up the entire PostgreSQL and Keycloak environment. One of the key steps in this method is the use of a PostgreSQL dump file to populate the database with initial data.
This is done using the withCopyFileToContainer() method, which copies a file from the classpath to a specified location within the container. If you have problems starting the containers, you might need to restart the Docker Compose file and extract the client secret again, as explained in EXTRACTING-ADMIN-CLI-CLIENT-SECRET. The code snippet for this is:

.withCopyFileToContainer(MountableFile.forClasspathResource("postgres/keycloak-dump.sql"),
        "/docker-entrypoint-initdb.d/keycloak-dump.sql")

Step 3: Extend the Base Class and Run Your Tests

package com.taptech.common.security.token

import com.taptech.common.EnableCommonConfig
import com.taptech.common.security.keycloak.BaseKeyCloakInfraStructure
import com.taptech.common.security.keycloak.EnableKeyCloak
import com.taptech.common.security.keycloak.KeyCloakAuthenticationManager
import com.taptech.common.security.user.UserContextPermissions
import com.taptech.common.security.utils.SecurityUtils
import com.fasterxml.jackson.databind.ObjectMapper
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.web.reactive.WebFluxTest
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.security.oauth2.client.registration.InMemoryReactiveClientRegistrationRepository
import org.springframework.test.context.ContextConfiguration
import org.springframework.test.web.reactive.server.EntityExchangeResult
import org.springframework.test.web.reactive.server.WebTestClient
import spock.mock.DetachedMockFactory
import org.springframework.boot.autoconfigure.security.reactive.ReactiveSecurityAutoConfiguration

@ContextConfiguration(classes = [TestApiControllerConfig.class])
@WebFluxTest(/*controllers = [TokenApiController.class],*/
        properties = [
                "spring.main.allow-bean-definition-overriding=true",
                "openapi.token.base-path=/",
                "idp.provider.keycloak.initialize-on-startup=true",
                "idp.provider.keycloak.initialize-realms-on-startup=false",
                "idp.provider.keycloak.initialize-users-on-startup=true",
                "spring.test.webtestclient.base-url=http://localhost:8888"
        ],
        excludeAutoConfiguration = ReactiveSecurityAutoConfiguration.class)
class TokenApiControllerTest extends BaseKeyCloakInfraStructure {

    private static final Logger logger = LoggerFactory.getLogger(TokenApiControllerTest.class);

    /*
    ./mvnw clean test -Dtest=TokenApiControllerTest
    ./mvnw clean test -Dtest=TokenApiControllerTest#test_public_validate
    */

    @Autowired
    TokenApiApiDelegate tokenApiDelegate

    @Autowired
    KeyCloakAuthenticationManager keyCloakAuthenticationManager

    @Autowired
    private WebTestClient webTestClient

    @Autowired
    TokenApiController tokenApiController

    InMemoryReactiveClientRegistrationRepository clientRegistrationRepository

    def test_configureToken() {
        expect:
        tokenApiDelegate
    }

    def test_public_jwkkeys() {
        expect:
        webTestClient.get().uri("/public/jwkKeys")
                .exchange()
                .expectStatus().isOk()
                .expectBody()
    }

    def test_public_login() {
        expect:
        webTestClient.get().uri("/public/login")
                .headers(headers -> {
                    headers.setBasicAuth(BaseKeyCloakInfraStructure.adminCC, "admin")
                })
                .exchange()
                .expectStatus().isOk()
                .expectBody()
                .jsonPath(".access_token").isNotEmpty()
                .jsonPath(".refresh_token").isNotEmpty()
    }

    def test_public_login_401() {
        expect:
        webTestClient.get().uri("/public/login")
                .headers(headers -> {
                    headers.setBasicAuth(BaseKeyCloakInfraStructure.adminCC, "bad")
                })
                .exchange()
                .expectStatus().isUnauthorized()
    }
    def test_public_refresh_token() {
        given:
        def results = keyCloakAuthenticationManager
                .passwordGrantLoginMap(BaseKeyCloakInfraStructure.adminCC, "admin", OFFICES)
                .toFuture().join()
        def refreshToken = results.get("refresh_token")

        expect:
        webTestClient.get().uri("/public/refresh")
                .headers(headers -> {
                    headers.set("Authorization", SecurityUtils.toBearerHeaderFromToken(refreshToken))
                    headers.set("contextId", OFFICES)
                })
                .exchange()
                .expectStatus().isOk()
                .expectBody()
                .jsonPath(".access_token").isNotEmpty()
                .jsonPath(".refresh_token").isNotEmpty()
    }

    def test_public_validate() {
        given:
        def results = keyCloakAuthenticationManager
                .passwordGrantLoginMap(BaseKeyCloakInfraStructure.adminCC, "admin", OFFICES)
                .toFuture().join()
        def accessToken = results.get("access_token")

        expect:
        EntityExchangeResult<UserContextPermissions> entityExchangeResult = webTestClient.get().uri("/public/validate")
                .headers(headers -> {
                    headers.set("Authorization", SecurityUtils.toBearerHeaderFromToken(accessToken))
                })
                .exchange()
                .expectStatus().isOk()
                .expectBody(UserContextPermissions.class)
                .returnResult()
        logger.info("entityExchangeResult: {}", entityExchangeResult.getResponseBody())
    }

    @Configuration
    @EnableCommonConfig
    @EnableKeyCloak
    @EnableTokenApi
    public static class TestApiControllerConfig {

        @Bean
        ObjectMapper objectMapper() {
            return new ObjectMapper();
        }

        DetachedMockFactory mockFactory = new DetachedMockFactory()
    }
}

Conclusion

With this setup, you have configured Testcontainers to run Keycloak and PostgreSQL within a Docker network, seeded the PostgreSQL database with a dump file, and created a base test class to manage the lifecycle of these containers. You can now write integration tests extending this base class to ensure your Spring Security configuration works correctly with Keycloak.
In software development, maintaining high code quality and reliability is crucial for building robust applications. A key metric for gauging testing effectiveness is code coverage, which measures the percentage of code executed during automated tests. While traditional code coverage offers valuable insights, it has limitations. Code change coverage (or code diff coverage) addresses these challenges by focusing testing efforts on recent changes in the codebase. This targeted approach not only optimizes testing but also enhances the reliability and quality of software.

Challenges of Traditional Code Coverage

Quantity over quality: Traditional code coverage often prioritizes achieving high percentages across the entire codebase, potentially overlooking critical functionalities and edge cases.
Maintenance overhead: Maintaining high coverage requires continuous effort in writing, updating, and maintaining tests, which can be overwhelming in rapidly evolving projects.
False security: High coverage can give a false sense of security, masking under-tested areas where critical bugs may lurk.
Legacy code: Achieving high code coverage in legacy systems is challenging due to their complexity and lack of modern testing infrastructure.
Requirement coverage: Ensuring that newly introduced tests adequately cover all aspects of new requirements is difficult.

How Code Change Coverage Helps

Code change (or code diff) coverage addresses these challenges by targeting recently modified or added code. This approach enhances efficiency and ensures thorough validation before deployment:

Focused testing: Prioritizes testing where it's most needed, minimizing redundant tests and optimizing resource use.
Early issue detection: Promotes proactive bug detection, reducing post-deployment issues.
CI/CD integration: Integrates seamlessly into pipelines for rigorous testing pre-deployment.
Requirement coverage: Helps identify coverage gaps related to specific requirements, enabling targeted test additions to enhance overall quality.

Implementing Code Change Coverage

To integrate code change coverage effectively, you need to transform traditional code coverage reports into code change coverage reports by building a tool that incorporates the following steps:

Step 1: Calculate Differences Between Git Branches

Calculate code changes by comparing the target branch with its parent branch, identifying the modified line numbers that constitute actual code changes.
Example using the JGit library:

Java

DiffFormatter diffFormatter = new DiffFormatter(out);
diffFormatter.setRepository(repository);
diffFormatter.setContext(0);
List<DiffEntry> entries = null;
if (branchMode) {
    entries = getBranchDifference(repository, diffFormatter, branchForComparison);
} else {
    entries = getCommitDifference(repository, diffFormatter);
}
for (DiffEntry entry : entries) {
    diffFormatter.format(diffFormatter.toFileHeader(entry));
    String[] diffs = out.toString().split("\\n");
    String fileName = "";
    LinkedHashMap<String, String> lines = new LinkedHashMap<>();
    for (int i = 0; i < diffs.length; i++) {
        String s = diffs[i];
        if (s.startsWith("+++")) {
            fileName = s.replace("+++", "").replace("b/", "").trim();
        } else if (s.startsWith("@@") && s.endsWith("@@") && s.indexOf("+") > -1) {
            String index = s.substring(s.indexOf("+") + 1).replace("@@", "").trim();
            String[] ind = index.split(",", 2);
            int n = 1;
            if (ind.length == 2) {
                n = Integer.parseInt(ind[1]);
            }
            if (n == 0) {
                continue;
            }
            int startLine = Integer.parseInt(ind[0]);
            for (int j = i + 1; j < diffs.length; j++) {
                s = diffs[j];
                if (s.startsWith("@@") && s.endsWith("@@")) {
                    i = j - 1;
                    break;
                }
                if (s.startsWith("+")) {
                    String t = s.replaceFirst("\\+", "");
                    lines.put(String.valueOf(startLine), t);
                    startLine++;
                }
            }
        }
    }
    differences.put(fileName.replace("src/", ""), lines);
    ((ByteArrayOutputStream) out).reset();
}

An additional example of calculating differences between Git branches can be found on Stack Overflow.

Step 2: Process the Output of Traditional Code Coverage

Utilize the XML or JSON reports from tools like JaCoCo, Cobertura, or pytest-cov. These reports contain covered and uncovered line numbers. Implement a mapping of changed line numbers to determine which lines are covered or not covered.

Example using a JaCoCo XML report:

Java

List<Element> packages = xml.getChilds();
for (Element p : packages) {
    if (p instanceof Element && p.getName().equalsIgnoreCase("package")) {
        String packageName = p.getAttributeValue("name");
        List<Element> sourceFiles = xml.getChilds(p);
        for (Element c : sourceFiles) {
            if (c instanceof Element && c.getName().equalsIgnoreCase("sourceFile")) {
                String className = c.getAttributeValue("name");
                List<Element> lines = xml.getChilds(c);
                ArrayList<String> missed = new ArrayList<>();
                ArrayList<Branch> branchList = new ArrayList<>();
                for (Element l : lines) {
                    if (l instanceof Element && l.getName().equalsIgnoreCase("line")) {
                        if (!l.getAttributeValue("mi").equalsIgnoreCase("0")) {
                            missed.add(l.getAttributeValue("nr"));
                        }
                        if (!l.getAttributeValue("mb").equalsIgnoreCase("0")
                                || !l.getAttributeValue("cb").equalsIgnoreCase("0")) {
                            Branch branch = new Branch();
                            branch.lineNumber = l.getAttributeValue("nr");
                            branch.coveredBranches = Integer.parseInt(l.getAttributeValue("cb"));
                            branch.missedBranches = Integer.parseInt(l.getAttributeValue("mb"));
                            if (branch.missedBranches > 0) {
                                branch.missed = true;
                            }
                            branchList.add(branch);
                        }
                    }
                }
                missedLines.put(packageName + "/" + className, missed);
                branchSummary.put(packageName + "/" + className, branchList);
            }
        }
    }
}

Step 3: Create a Detailed Report

Aggregate and analyze the data by mapping the Git diff from Step 1 onto the coverage data from Step 2, generating a human-readable report that accurately reflects code change coverage percentages. A minimal sketch of this aggregation step is shown below.
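This sketch is not from the original article; it assumes the differences map built in Step 1 (file to changed line numbers and contents) and the missedLines map built in Step 2, and it assumes the file keys have already been normalized so that diff paths and JaCoCo package/class names match.

Java

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ChangeCoverageReport {

    // differences: file -> (changed line number -> line content), from Step 1
    // missedLines: file -> uncovered line numbers from the coverage report, from Step 2
    public static void printReport(Map<String, LinkedHashMap<String, String>> differences,
                                   Map<String, List<String>> missedLines) {
        int changedTotal = 0;
        int coveredTotal = 0;
        for (Map.Entry<String, LinkedHashMap<String, String>> entry : differences.entrySet()) {
            String file = entry.getKey();
            List<String> missed = missedLines.getOrDefault(file, new ArrayList<>());
            List<String> uncoveredChanges = new ArrayList<>();
            for (String changedLine : entry.getValue().keySet()) {
                changedTotal++;
                if (missed.contains(changedLine)) {
                    uncoveredChanges.add(changedLine);
                } else {
                    coveredTotal++;
                }
            }
            System.out.printf("%s -> changed lines not covered: %s%n", file, uncoveredChanges);
        }
        if (changedTotal > 0) {
            // Percentage of changed lines exercised by the test suite
            System.out.printf("Code change coverage: %.1f%%%n", 100.0 * coveredTotal / changedTotal);
        }
    }
}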
Conclusion

Code change or code diff coverage focuses on validating new or modified code, ensuring high quality and reliability. While traditional coverage remains useful, code change coverage improves on it by validating critical changes promptly. Additionally, the tool provides insights into coverage related to specific requirements, epics, or entire releases, aiding in assessing release quality effectively. Adopting code change coverage is a strategic move toward enhancing software quality in a dynamic development environment. By focusing on recent changes, teams can ensure that new features and modifications are thoroughly tested before deployment. Integrating code change coverage into CI/CD pipelines not only improves efficiency but also provides deeper insights into the quality of specific releases. As the software development landscape continues to evolve, embracing such approaches will be key to maintaining a competitive edge and delivering high-quality software solutions.
In practical terms, knowing how not to write code might be as important as knowing how to write it. This goes for test code, too, and today we're going to look at common mistakes that happen when writing unit tests. Although writing unit tests is common practice for programmers, tests are still often treated as second-class code. Writing good tests isn't easy: just as in any programming field, there are patterns and anti-patterns. There are some really helpful chapters on test smells in Gerard Meszaros's book about xUnit patterns, and more great stuff around the internet; however, it's always helpful to have practical examples. Here, we're going to write one unit test quick and dirty, and then improve it like we would in our work. The full example is available on GitHub.

One Test's Evolution

To begin with, what are we testing? A primitive function:

Java

public String hello(String name) {
    return "Hello " + name + "!";
}

We begin writing a unit test for it:

Java

@Test
void test() {
}

And just like that, our code already smells.

1. Uninformative Name

Naturally, it's much simpler to just write test, test1, test2 than to write an informative name. Also, it's shorter! But having code that is easy to write is much less important than having code that is easy to read: we spend much more time reading code, and bad readability wastes a lot of time. A name should communicate intent; it should tell us what is being tested.

So maybe we could name the test testHello, since it's testing the hello function? Nope, because we're not testing a method, we're testing behavior. So a good name would be shouldReturnHelloPhrase:

Java

@Test
void shouldReturnHelloPhrase() {
    assert(hello("John")).matches("Hello John!");
}

Nobody (apart from the framework) is going to call the test method directly, so it's not a problem if the name seems too long. It should be a descriptive and meaningful phrase (DAMP).

2. No Arrange-Act-Assert

The name is okay, but now there is too much code stuffed into one line. It's a good idea to separate the preparation, the behavior we're testing, and the assertion about that behavior (arrange-act-assert). Like this:

Java

@Test
void shouldReturnHelloPhrase() {
    String a = "John";
    String b = hello("John");
    assert(b).matches("Hello John!");
}

In BDD, it's customary to use the Given-When-Then pattern, and in this case, it's the same thing.

3. Bad Variable Names and No Variable Re-Usage

But it still looks like it's been written in a hurry. What's "a"? What's "b"? You can sort of infer that, but imagine that this is just one test among several dozen others that have failed in a test run (perfectly possible in a test suite of several thousand tests). That's a lot of inferring you have to do when sorting test results! So we need proper variable names. Something else we've done in a hurry: all our strings are hard-coded. It's okay to hard-code some stuff, as long as it's not related to other hard-coded stuff! Meaning that when you're reading your test, the relationships between data should be obvious. Is "John" in 'a' the same as "John" in the assertion? This is not a question we should be wasting time on when reading or fixing the test. So we rewrite the test like this:

Java

@Test
void shouldReturnHelloPhrase() {
    String name = "John";
    String result = hello(name);
    String expectedResult = "Hello " + name + "!";
    assert(result).contains(expectedResult);
}
4. The Pesticide Effect

Here's another thing to think about: automated tests are nice because you can repeat them at very little cost, but that also means their effectiveness falls over time, because you're just testing the exact same thing over and over. This is called the pesticide paradox (a term coined by Boris Beizer back in the 1980s): bugs build resistance to the thing you're killing them with. It's probably not possible to overcome the pesticide paradox completely, but there are tools that reduce its effect by introducing more variability into our tests, for instance, Java Faker. Let's use it to create a random name:

Java

@Test
void shouldReturnHelloPhrase() {
    Faker faker = new Faker();
    String name = faker.name().firstName();
    String result = hello(name);
    String expectedResult = "Hello " + name + "!";
    assert(result).contains(expectedResult);
}

Good thing we changed the name to a variable in the previous step; now we don't have to look over the test and fish out all the "Johns."

5. Uninformative Error Messages

Another thing we've probably not thought about while writing the test in a hurry is the error message. You need as much data as possible when sorting test results, and the error message is the most important source of information. However, the default one is pretty uninformative:

java.lang.AssertionError
    at org.example.UnitTests.shouldReturnHelloPhrase(UnitTests.java:58)

Great. Literally the only thing we know is that the assertion hasn't passed. Thankfully, we can use assertions from JUnit's `Assertions` class (note that the expected value comes first):

Java

@Test
void shouldReturnHelloPhrase4() {
    Faker faker = new Faker();
    String name = faker.name().firstName();
    String result = hello(name);
    String expectedResult = "Hello " + name + "";
    Assertions.assertEquals(expectedResult, result);
}

And here's the new error message:

Expected :Hello Tanja
Actual :Hello Tanja!

...which immediately tells us what went wrong: we've forgotten the exclamation mark in the expected string! If even that isn't descriptive enough, JUnit also accepts a custom failure message, as the short sketch below shows.
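This addition is not from the original walkthrough, but worth knowing: the JUnit 5 Assertions overloads take an optional failure message as the last parameter. A minimal, self-contained sketch, assuming the javafaker dependency used above and folding the hello function into the test class:

Java

import com.github.javafaker.Faker;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

class HelloMessageTest {

    @Test
    void shouldReturnHelloPhrase() {
        Faker faker = new Faker();
        String name = faker.name().firstName();
        String expectedResult = "Hello " + name + "!";
        // Expected value first, actual second; the optional message is last in JUnit 5
        Assertions.assertEquals(expectedResult, hello(name),
                "hello() should greet the user by name");
    }

    // The function under test, inlined to keep the sketch self-contained
    private String hello(String name) {
        return "Hello " + name + "!";
    }
}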
Lessons Learned

And with that, we've got ourselves a good unit test. What lessons can we glean from the process? A lot of the problems were caused by us being a bit lazy. Not the good kind of lazy, where you think hard about how to do less work; the bad kind, where you follow the path of least resistance, just to "get it over with." Hard-coding test data, copy-pasting, and using "test" + method name (or "test1", "test2", "test3") as the name of the test are marginally easier in the short run but make the test base much harder to maintain. On the one hand, it is a bit ironic that we've been talking about readability and making tests easier on the eyes, and at the same time turned a one-line test into nine lines. However, as the number of tests you're running grows, the practices we're proposing here will save you a lot of time and effort.

Approximately one-fourth of all downloaded applications (25.3%) are used only once. The primary reason for this is their failure to meet user expectations. Issues such as technical glitches, excessive file size, and confusing user interfaces often lead to app removal. It is discouraging to realize that two-thirds of users may never open your app again after just one use. Those who do return are likely to be highly critical. Your aim should not just be to avoid falling into the category of quickly uninstalled apps; you should also strive to exceed user expectations.

The Importance of Performance Testing

Testing is a vital phase in the development process of any mobile application before its market release. There are many types of application testing, including performance testing, integration testing, security testing, compatibility testing, and usability testing. Today, I want to focus on performance testing. Performance testing is often overlooked, with features prioritized over system speed and efficiency, especially in API-driven architectures. Agile teams usually delay it, waiting for feature stability and separating it from main development workflows. However, integrating performance testing early, alongside new code development, provides instant feedback and allows for immediate fixes, aligning with evolving software practices. Performance and load testing are vital steps in delivering a stable and robust application that meets user expectations.

Performance testing checks how the system behaves under various loads, focusing on indicators such as speed, reliability, and system availability. It identifies potential bottlenecks and weaknesses, which is essential for refining the app. This involves analyzing:

Resource usage levels under varying loads.
Errors that occur during the application's operation.
The maximum number of users the application can support before it becomes unstable.
The performance of the subsystem responsible for managing load distribution.
Potential weaknesses in the software's architecture.

Investing in thorough testing might seem costly, but it prevents the need for time-consuming and expensive fixes or modifications late in the development process. By ensuring your product is tested properly from the start of the secure SDLC, you save time and money in the long term and accelerate its entry into the market. Adopting automated performance testing can further reduce the cost of developing mobile applications.

Core App Performance Testing Areas

For any mobile application, performance testing should be conducted across three critical categories: device, server/API, and network. Device testing is all about making sure the app works smoothly on different devices, paying close attention to startup time, memory usage, and battery drain. Server/API testing emphasizes efficient data management and smooth interactions with the server, including API responsiveness and data exchange. Network performance tests assess the app's behavior across different network types, measuring speed, packet loss, and connectivity issues. As a rough illustration of the server/API dimension, see the sketch below.
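The following toy sketch is not from the original article and is no substitute for dedicated load-testing tools; it simply fires a number of concurrent requests at a hypothetical backend endpoint (api.example.com is a placeholder) and records response times, to make the server/API category concrete.

Java

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SimpleLoadProbe {

    public static void main(String[] args) throws Exception {
        String endpoint = "https://api.example.com/health"; // hypothetical endpoint
        int users = 50; // simulated concurrent users
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<Long>> timings = new ArrayList<>();

        // Each task issues one GET and returns its wall-clock duration in ms
        for (int i = 0; i < users; i++) {
            timings.add(pool.submit(() -> {
                long start = System.nanoTime();
                HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint)).GET().build();
                client.send(request, HttpResponse.BodyHandlers.discarding());
                return (System.nanoTime() - start) / 1_000_000;
            }));
        }

        long worst = 0;
        long sum = 0;
        for (Future<Long> f : timings) {
            long ms = f.get();
            sum += ms;
            worst = Math.max(worst, ms);
        }
        pool.shutdown();
        System.out.printf("avg=%dms max=%dms over %d requests%n", sum / users, worst, users);
    }
}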
Types of Performance Testing

Performance testing encompasses several types, each targeting a different aspect of application performance:

Load testing: Evaluates an application's performance under expected user loads to identify and address performance bottlenecks.
Endurance testing: Applies a consistent load over an extended period to check for issues that could slow the application down over time, ensuring long-term performance stability.
Stress testing: Tests an application under extreme conditions to determine its breaking point and how it handles massive traffic and data processing, aiming to identify at what load the application fails.
Scalability testing: Determines the application's ability to scale up in response to increased user demand, ensuring it can grow to accommodate more users smoothly.
Volume testing: Assesses how the application copes with a large volume of data in the database, ensuring performance is not compromised by data size.
Spike testing: Looks at the application's response to sudden spikes in traffic, which is crucial for understanding how unexpected surges in usage are handled.

While it may be tempting to use as many types of performance testing as possible, the goal should be to select and prioritize performance tests based on the application's specific needs, usage scenarios, and the resources available for testing.

Important Considerations When Doing Performance Tests

Testing mobile apps presents more challenges and can be more labor-intensive than testing PC software due to several important factors. The vast number and variety of mobile devices, the increasing mobility of users, and the unique features specific to each device make comprehensive testing a complex task. This diversity requires developers to test on as wide a range of hardware as possible, which can be time-consuming and resource-intensive. There are various strategies for conducting mobile app testing, including lab testing, guerrilla testing, and unmoderated remote testing. While performance testing often relies on emulators for initial assessments, this method does not guarantee complete test coverage, for example, in cases like voice and gesture interface testing. Testing on real devices and with real users is more accurate. Many services and companies provide access to a vast array of real devices for testing purposes, allowing developers to select and test on the devices most relevant to their target audience's preferences and their clients' specific requirements.

Remember to always prioritize the user experience in performance testing. Beyond traditional performance metrics, focus on factors such as app startup time, responsiveness to user inputs, and smoothness of animations and transitions. Do not forget to test your app under various network conditions, including different speeds (Wi-Fi, 3G, 4G, 5G) and qualities (high latency, low bandwidth), to ensure it performs well for all users. Consider geographic variations, too. Regularly review and adhere to the performance guidelines and best practices provided by Android, iOS, and other platforms, including those for application deployment, to ensure compliance and optimization. After launching your app, continue monitoring its performance in the live environment. Real user monitoring (RUM) tools can help track actual user experiences and highlight issues that may not have been evident during testing. Note that third-party services (like analytics, ad platforms, or payment gateways) may change their rules and affect app performance, so monitor them regularly over time.
In addition, security is a major concern in mobile app testing. Malicious actors can exploit vulnerabilities in mobile devices, networks, and applications to gain unauthorized access to data or compromise user privacy. For testing the mobile app itself, organizations should adhere to DevSecOps best practices and employ security processes like OWASP Mobile Security Testing. To ensure backend security, they can rely on solutions like Dynamic Application Security Testing (DAST) or External Attack Surface Management (EASM) to discover, prioritize, and remediate vulnerabilities.

Improving Mobile Application Performance

Here are the top 15 tips for improving mobile application performance:

1. Keep the application's file size small. Users are reluctant to install apps that take up a lot of space; the smaller your app's footprint, the better.
2. Implement lazy loading for content and images, ensuring that items are only loaded when needed.
3. Optimize app images by using scalable vector graphics, implementing caching for faster loads, and simplifying color palettes for efficiency.
4. Minimize and optimize the use of animations. Although animations can enhance the user experience, they can also impact performance. Choose lightweight formats and time them carefully to avoid unnecessary consumption of resources.
5. Implement efficient data-fetching strategies. Use techniques like pagination, infinite scrolling, or data prefetching to manage data loading efficiently.
6. Improve your application's memory efficiency by using memory-conscious coding practices and minimizing reliance on external libraries.
7. Minimize duplicate network requests, as they can degrade the app's performance.
8. Compress data for network transmission to reduce the amount of data sent over the network.
9. Use efficient queries and indexes in your database. Additionally, consider caching the results of frequently accessed data to reduce database load.
10. Perform intensive tasks in the background using multi-threading or asynchronous programming. This prevents the UI thread from being blocked, ensuring the app remains responsive to user interactions.
11. Use the latest programming frameworks. They are designed with performance and efficiency in mind; migrate to them where possible to take advantage of their optimizations.
12. Optimize your app's energy usage by minimizing wake locks and using battery-efficient location services.
13. Implement efficient error handling to ensure that your app can recover from unexpected conditions without crashing.
14. Regularly profile your app's performance to identify and optimize slow or inefficient code paths. Android Studio and Apple Xcode can help identify performance bottlenecks.
15. Implement feature flags to toggle functionality. This allows for easier rollback of features that may introduce performance issues and enables A/B testing of performance optimizations.

Endnote

Testing, particularly performance testing, is crucial for app development, ensuring apps are robust, fast, and user-friendly. Covering aspects like device compatibility, server/API performance, and network behavior, performance testing identifies potential bottlenecks and guides improvements. Automated performance testing strategies can save time and costs, enhancing market readiness and user retention.
I bet you have come across a scenario while automating API, web, or mobile applications where, after registering a user, you set the address for checking out a product in an end-to-end user journey. So, how do you do that? Normally, we create a POJO class in Java with the fields required to register a user or to set the checkout address, and then set the values in the test using the constructor of the POJO class. Let's take a look at an example of registering a user where the following mandatory fields are required to fill in the registration form:

First Name
Last Name
Address
City
State
Country
Mobile Number

As we need to handle these fields in automation testing, we will have to pass the respective values into the fields at the time of executing the tests.

Before Using the Builder Pattern

A POJO class with the above-mentioned mandatory fields is created with Getter and Setter methods, and values are set in the respective fields using a constructor. Check out the code example of the RegisterUser class below:

Java

public class RegisterUser {

    private String firstName;
    private String lastName;
    private String address;
    private String city;
    private String state;
    private String country;
    private String mobileNumber;

    public RegisterUser(final String firstName, final String lastName, final String address,
                        final String city, final String state, final String country,
                        final String mobileNumber) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.address = address;
        this.city = city;
        this.state = state;
        this.country = country;
        this.mobileNumber = mobileNumber;
    }

    public String getFirstName() { return firstName; }
    public void setFirstName(final String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(final String lastName) { this.lastName = lastName; }

    public String getAddress() { return address; }
    public void setAddress(final String address) { this.address = address; }

    public String getCity() { return city; }
    public void setCity(final String city) { this.city = city; }

    public String getState() { return state; }
    public void setState(final String state) { this.state = state; }

    public String getCountry() { return country; }
    public void setCountry(final String country) { this.country = country; }

    public String getMobileNumber() { return mobileNumber; }
    public void setMobileNumber(final String mobileNumber) { this.mobileNumber = mobileNumber; }
}

Now, if we want to use this POJO, we have to create an instance of the RegisterUser class and pass the values as constructor parameters to set the data in the respective fields, as in the Register User test below:

Java

public class RegistrationTest {

    @Test
    public void testRegisterUser() {
        RegisterUser registerUser = new RegisterUser("John", "Doe", "302, Adam Street, 1st Lane",
                "New Orleans", "New Jersey", "US", "52145364");
        assertEquals(registerUser.getFirstName(), "John");
        assertEquals(registerUser.getCountry(), "US");
    }
}

There were just seven fields in the example we took for registering the user. However, this would not be the case with every application.
There would often be additional fields required, and as the fields keep increasing, we would need to update the POJO class with the respective Getter and Setter methods and update the constructor parameters every time. Finally, we would need to add the values for those fields so the data could be passed into the actual fields required. Long story short, we would need to update the code even for a single new field, and passing values as constructor parameters in the tests doesn't look clean either. Luckily, the Builder design pattern in Java comes to the rescue here.

What Is the Builder Design Pattern in Java?

The Builder design pattern is a creational design pattern that lets you construct complex objects step by step. The pattern allows you to produce different types and representations of an object using the same construction code. The Builder pattern helps us solve the issue of setting the parameters by building objects step by step and providing a method that returns the final object, which can then be used in the actual tests.

What Is Lombok?

Project Lombok is a Java library that automatically plugs into your editor and build tools, spicing up your Java. It is an annotation-based library that reduces boilerplate code, helping us write short and crisp code. By passing the @Getter annotation over the class, it automatically generates Getter methods. Similarly, you don't have to write the code for Setter methods: the @Setter annotation over the class automatically generates them. It also supports the Builder design pattern, so we just need to put the @Builder annotation above the class and the rest is taken care of by the Lombok library. To use Lombok annotations in the project, we need to add the following Maven dependency:

XML

<!-- https://mvnrepository.com/artifact/org.projectlombok/lombok -->
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.32</version>
    <scope>provided</scope>
</dependency>

Using the Builder Design Pattern With Lombok

Before we start refactoring the code we have written, let me tell you about the DataFaker library and how it helps in generating fake data for testing. Ideally, in our example, every newly registered user's data should be unique; otherwise, we may get a duplicate-data error and the test will fail. The DataFaker library provides unique data on each test execution, letting us register a new user with unique data every time the registration test is run. To use DataFaker, we need to add the following Maven dependency to our project:

XML

<!-- https://mvnrepository.com/artifact/net.datafaker/datafaker -->
<dependency>
    <groupId>net.datafaker</groupId>
    <artifactId>datafaker</artifactId>
    <version>2.2.2</version>
</dependency>

Now, let's start refactoring the code. First, we will make the changes to the RegisterUser class: we remove all the Getter and Setter methods as well as the constructor, and add the @Getter and @Builder annotations on top of the class.
Here is how the RegisterUser class looks after the refactoring:

Java

@Getter
@Builder
public class RegisterUserWithBuilder {

    private String firstName;
    private String lastName;
    private String address;
    private String city;
    private String state;
    private String country;
    private String mobileNumber;
}

How clean and crisp it looks with that refactoring done. Multiple lines of code are gone, yet the class still works the same way it used to, thanks to Lombok. Next, we add a new Java class for generating fake data at runtime using the Builder design pattern; we'll call it the DataBuilder class.

Java

public class DataBuilder {

    private static final Faker FAKER = new Faker();

    public static RegisterUserWithBuilder getUserData() {
        return RegisterUserWithBuilder.builder()
                .firstName(FAKER.name().firstName())
                .lastName(FAKER.name().lastName())
                .address(FAKER.address().streetAddress())
                .state(FAKER.address().state())
                .city(FAKER.address().city())
                .country(FAKER.address().country())
                .mobileNumber(String.valueOf(FAKER.number().numberBetween(9990000000L, 9999999999L)))
                .build();
    }
}

The getUserData() method returns the test data required for registering the user, generated with the DataFaker library. Notice the builder() method after the class name RegisterUserWithBuilder; it is available because of the @Builder annotation we placed on top of the RegisterUserWithBuilder class. After the builder() method, we set the variables declared in the RegisterUserWithBuilder class, passing the fake data we want to generate for each one.

Java

RegisterUserWithBuilder.builder()
        .firstName(FAKER.name().firstName());

The above piece of code generates a fake first name and sets it in the firstName field. Likewise, we set fake data in all the other fields. Now, let's see how we use this data in the tests. It's very simple; the code snippet below explains it all.

Java

@Test
public void testRegisterUserWithBuilder() {
    RegisterUserWithBuilder registerUserWithBuilder = getUserData();

    System.out.println(registerUserWithBuilder.getFirstName());
    System.out.println(registerUserWithBuilder.getLastName());
    System.out.println(registerUserWithBuilder.getAddress());
    System.out.println(registerUserWithBuilder.getCity());
    System.out.println(registerUserWithBuilder.getState());
    System.out.println(registerUserWithBuilder.getCountry());
    System.out.println(registerUserWithBuilder.getMobileNumber());
}

We just need to call the getUserData() method to get an instance of the RegisterUserWithBuilder class, and then call the Getter methods for the respective fields. Remember the @Getter annotation on top of the RegisterUserWithBuilder class? That is what makes these Getter methods available. Also, we no longer need to pass multiple values as constructor parameters; we just call the getUserData() method. See how easy it is to generate unique data and pass it into the tests without writing multiple lines of boilerplate code, thanks to the Builder design pattern and Lombok!

Running the Test

Let's run the test and check whether the user details get printed to the console. The following screenshot of the test execution shows that the fake data is generated successfully. A slightly stricter variant of this test, using assertions instead of print statements, is sketched below.
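This variant is not from the original post; it assumes TestNG (which the assertEquals(actual, expected) ordering in the earlier RegistrationTest suggests) and the RegisterUserWithBuilder and DataBuilder classes defined above.

Java

import static org.testng.Assert.assertNotNull;

import org.testng.annotations.Test;

public class RegistrationWithBuilderTest {

    @Test
    public void testRegisterUserWithBuilder() {
        RegisterUserWithBuilder user = DataBuilder.getUserData();

        // Faker should have populated every field with a non-null value
        assertNotNull(user.getFirstName());
        assertNotNull(user.getLastName());
        assertNotNull(user.getAddress());
        assertNotNull(user.getMobileNumber());
    }
}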
Conclusion

In this blog, we discussed using the Builder design pattern in Java with Lombok and the DataFaker library to generate fake test data at runtime and use it in automated tests. This eases test data generation by eliminating the need to update test data before running the tests. I hope it helps you reduce lines of code and write much cleaner tests. Happy testing!