Testing, Tools, and Frameworks

The Testing, Tools, and Frameworks Zone encapsulates one of the final stages of the SDLC as it ensures that your application and/or environment is ready for deployment. From walking you through the tools and frameworks tailored to your specific development needs to leveraging testing practices to evaluate and verify that your product or application does what it is required to do, this Zone covers everything you need to set yourself up for success.

Latest Refcards and Trend Reports
Trend Report
Low Code and No Code
Refcard #376
Cloud-Based Automated Testing Essentials
Refcard #363
JavaScript Test Automation Frameworks

DZone's Featured Testing, Tools, and Frameworks Resources

Refcard #361
Test Data Management

Refcard #326
End-to-End Testing Automation Essentials

Along Came a Bug
By Stelios Manioudakis
Integrating AWS Secrets Manager With Spring Boot
By Kushagra Shandilya
Readability in the Test: Exploring the JUnitParams
By Otavio Santana
Jira Best Practices From Experts

Railsware is an engineer-led company with a vast portfolio of projects built for client companies, so when we talk about Jira best practices for developers, we speak from experience.

Why Do People Love Jira?

Jira is by no means perfect. It certainly has its downsides and drawbacks. For instance, it is a behemoth of a product and, as such, is pretty slow when it comes to updates or additions of new functionality. Some developers also say that Jira goes against certain agile principles because, when in the wrong hands, it can promote fixation on due dates rather than delivery of product value. Getting lost in layers and levels of several boards can, indeed, disconnect people by overcomplicating things. Still, it is among the preferred project management tools for software development teams. Why is that?

- Permissions: Teams, especially bigger ones, work with many different experts and stakeholders besides the core team itself, so setting up the right access to information is crucial.
- Roadmaps and epics: Jira is great for organizing your project on all levels. On the highest level, you have a roadmap with a timeline. Then you have epics that group tasks by features or feature versions. Inside each epic, you create tickets for implementation.
- Customization: This is Jira's strongest point. You can customize virtually anything: fields for your Jira tickets; the UI of your tickets, boards, roadmaps, etc.; notifications; and workflows. Each project may require its own workflow and set of statuses per ticket, e.g., some projects have a staging server and QA testing on it, and some don't.
- Search: Jira's search is unrivalled if you know JQL, Jira's SQL-like query language. Finding something that would have been lost to history in a different project management tool is just a matter of knowing JQL. The ability to add labels using keywords makes search and analysis even simpler.
- Automation: The ability to automate many actions is among the greatest and most underestimated strengths of Jira. You can create custom flows where tickets get temporary assignees (like the back and forth between development and QA), make an issue fall into certain columns on the board based on its content, move issues from "to do" to "in progress" when there's a related commit, or post the list of released tickets to Slack as part of the release notes.
- Integrations and third-party apps: GitHub, Bitbucket, and Slack are among the most prominent Jira integrations, and for good reasons. Creating a Jira ticket from a message, for example, is quite handy at times. The Atlassian Marketplace broadens your reach even further with thousands of add-ons and applications.
- Broad application: Jira is suitable for both iterative and non-iterative development processes, for IT and non-IT teams.

Jira Best Practices

Let's dive into the nitty-gritty of Jira best practices, whether you run multiple projects or a single one.

Define Your Goals and Users

Jira, being as flexible as it is, can be used in a wide variety of ways. For instance, you can primarily rely on status checking throughout the duration of your sprint, or you can use it as a higher-level project management tool (a tool for business people to keep tabs on the development process). Define your team and goals. Now that you have a clear understanding of the "why," let's talk about the "who." Who will be the primary Jira user? And will they be using it to:

- Track the progress of certain tickets to know where and when to contribute?
- Learn more about the project, using it as a guide?
- Track time for invoicing clients, track performance for internal data-driven decision making, or both?
- Collaborate, share, and spread knowledge across the several teams involved in the development of the product?
The answers to the above questions should help you define the team and goals in the context of using Jira.

Integrations, Third-Party APIs, and Plugins

Jira is a behemoth of a project management platform. And, like all behemoths, it is somewhat slow and clunky when it comes to moving forward. If there's some functionality you feel is missing from the app, don't shy away from the marketplace; there's probably a solution for your pain already out there. Our team, for instance, relies on a third-party tool to create a series of internal processes and enhance fruitful collaboration. You can use ScriptRunner to create automation that's a bit more intricate than what comes out of the box, or BigGantt to visualize progress in a friendly drag-and-drop interface. Don't shy away from combining the tools you use into a single flow. An integration between Trello and Jira, for instance, can help several teams, like marketing and development, stay on the same page.

Use Checklists in Tickets

Having a checklist integrated into your Jira issues can help foster a culture centered around structured and organized work, as well as transparency and clarity for everyone. Our Smart Checklist for Jira offers even more benefits:

- You have a plan: Oftentimes it's hard to start a feature implementation, and without a plan, you can go in circles for a long time.
- Peace of mind: Working item by item is much calmer and more productive than dealing with the unknown.
- Visibility of your work: If everyone sees the checklist progress, you are all on the same page.
- Getting help: If your progress is visible, colleagues can give you advice on the plan itself and on the items that are challenging you.
- Prioritization: Once you have the item list, you can decide with your team what goes into v1 and what can easily be done later.
You can use checklists as templates for recurring processes: Definition of Done, acceptance criteria, onboarding, and service desk tickets are all prime candidates for automation. Moreover, you can automatically add checklists to your Jira workflow based on certain triggers, like the content of an issue or the workflow setup. To learn more, watch our YouTube video: "How to use Smart Checklist for Jira."

Less Is More

Information is undoubtedly the key to success. That said, in the case of a Jira issue, awareness is key. What we've noticed over our time experimenting with Jira is that adding information that is unnecessary or irrelevant introduces more confusion than clarity into the process. Note: We don't mean that Jira shouldn't be used for knowledge transfer. If some information (links to documentation, your internal processes, etc.) is critical to the completion of a task, share it inside the task; just use a bit of formatting to make it more readable. However, an age-old history of changes or an individual's perspective on the requirements is not needed. Stick to what is absolutely necessary for the successful completion of the task and elaborate on that. No more, no less.

Keep the Backlog and Requirements Healthy and Updated

Every project has a backlog: a list of ideas, implementation tickets, bugs, and enhancements to be addressed. Every project that does not keep its backlog well-maintained ends up in a pickle sooner rather than later. Some of our pro tips on maintaining a healthy backlog:

- Gradually add requirements to the backlog: If nothing else, you'll have a point of reference at all times. Moving requirements there immediately, however, may cause issues, as they may change before you are ready for implementation.
- Keep all the work of the development team in a single backlog: Spreading yourself thin across several systems that track bugs, technical debt, UX enhancements, and requirements is a big no-no.
- Set up a regular backlog grooming procedure: You'll get a base plan of future activities as a result. We'd like to point out that said plan needs to remain flexible, so you can make changes based on feedback and/or tickets from marketing, sales, and customer support.

Have a Product Roadmap in Jira

Jira is definitely not the go-to tool for designing a product roadmap, yet having one in your instance is a major boon, because it makes the entire scope of work visible and actionable. Additional benefits of having a roadmap in Jira include:

- It is easier to review the scope with your team at any time.
- Prioritizing new work is simpler when you can clearly see the workload.
- You can easily see dependencies when several teams are working on a project.

Use Projects as Templates

Setting up a new project can be tedious even if you've done it a million times before. This can be especially troublesome in companies that continuously deliver products with a similar approach to development, such as mobile games. Luckily, with the right combination of tools and add-ons, there's no need to do the same work yet another time. A combination of DeepClone and Smart Checklist will help you clone projects, issues, stories, or workflow conditions and use them as project templates.

Add a Definition of Done as a Checklist to All of Your Jira Issues

A Definition of Done is a pre-release checklist of activities that determines whether a feature is "releasable." In simpler words, it determines whether something is ready to be shipped to production. The best way to make this list accessible to everyone on the team is to put it inside the issues. You can use Smart Checklist to automate this process; however, there are certain rules of thumb you'll need to follow to design a good DoD checklist:

- Your objectives must be achievable. They must clearly define what you wish to deliver.
- Keep the tasks measurable. This will make the process of estimating work much simpler.
- Use plain language, so everyone who is involved can easily understand the Definition of Done.
- Make sure your criteria are testable, so the QA team can verify they are met.

Sync With the Team After Completing a Sprint

We have a nice habit of running agile retrospective meetings here at Railsware. These meetings, also known as retros, are an excellent opportunity for the team to get recognition for a job well done. They can also help you come up with improvements for the next sprint. We've found that the best way to run these meetings is to narrow the conversation down to "goods" and "improves." This way, you will be able to discuss why the things that work are working for you, and you'll be able to optimize the rest.

Conclusion

If there's a product with something for everyone, within the context of a development team, it's probably Jira. Its level of customization, adaptability, and quality-of-life features make it an excellent choice for teams that are willing to invest in developing a scalable and reliable process. And if anything is missing from the app, you can easily find it on the Atlassian Marketplace.
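To give the JQL point above something concrete, here is a small Python sketch that assembles a JQL query string and shows (in comments) roughly how it could be sent to Jira's REST search endpoint. The project key, label, and site URL are hypothetical placeholders; only the string-building helper is meant as working code.

```python
# Sketch: build a JQL query for "open backend tickets, most recently updated first".
# The project key ("ABC"), label ("backend"), and Jira URL below are hypothetical.

def build_jql(project, status, label, order_by="updated DESC"):
    """Compose a JQL string from individual clauses."""
    clauses = [
        f'project = {project}',
        f'status = "{status}"',
        f'labels = {label}',
    ]
    return " AND ".join(clauses) + f" ORDER BY {order_by}"

jql = build_jql("ABC", "In Progress", "backend")
print(jql)
# project = ABC AND status = "In Progress" AND labels = backend ORDER BY updated DESC

# Running it against Jira's REST API would look roughly like:
#   import requests
#   resp = requests.get(
#       "https://your-domain.atlassian.net/rest/api/2/search",
#       params={"jql": jql, "maxResults": 50},
#       auth=("user@example.com", "<api-token>"),
#   )
#   issues = resp.json()["issues"]
```

Once queries like this live in code rather than in someone's head, they can feed dashboards, release notes, or the Slack automations mentioned above.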

By Oleksandr Siryi
Introduction to Automation Testing Strategies for Microservices

Microservices are distributed applications deployed in different environments; they may be developed in different programming languages, use different databases, and have many internal and external communication paths. A microservices architecture therefore depends on multiple interdependent applications for its end-to-end functionality. This complexity requires a systematic testing strategy to ensure end-to-end (E2E) coverage for any given use case. In this blog, we will discuss some of the most widely adopted automation testing strategies for microservices, using the testing triangle approach.

Testing Triangle

The testing triangle is a modern way of testing microservices with a bottom-up approach, and it is part of the "shift-left" testing methodology. (Shift-left testing pushes testing towards the early stages of software development; by testing early and often, you can reduce the number of bugs and increase code quality.) The goal of stacking multiple layers in the test pyramid is to identify different types of issues at the earliest possible testing level, so that in the end you have very few production issues. Each type of testing focuses on a different layer of the overall software system and verifies expected results. For a distributed microservices app, the tests can be organized bottom-up into the following five levels:

1. Unit Testing

Unit testing is the starting point: level 1, white-box testing in the bottom-up approach. It tests a small unit of source code functionality within a microservice, verifying the behavior of individual methods or functions by stubbing and mocking dependent modules and test data.
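A minimal sketch of such a level-1 test in Python, assuming a hypothetical `price_with_tax` function and using the standard library's `unittest.mock` to stub its collaborator:

```python
# Sketch of a level-1 unit test: the function under test and its collaborator
# are hypothetical; the point is isolating one unit by mocking its dependency.
from unittest.mock import Mock

def price_with_tax(net_price, rate_service):
    """Unit under test: applies a tax rate fetched from a collaborator."""
    rate = rate_service.get_rate()          # external dependency
    return round(net_price * (1 + rate), 2)

def test_price_with_tax_uses_mocked_rate():
    fake_rates = Mock()
    fake_rates.get_rate.return_value = 0.20   # stubbed test data
    assert price_with_tax(100.0, fake_rates) == 120.0
    fake_rates.get_rate.assert_called_once()  # the interaction is verified too

test_price_with_tax_uses_mocked_rate()
```

Because the rate lookup is mocked, the test exercises only this one unit, runs in microseconds, and cannot be broken by other parts of the codebase.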
Application developers write unit test cases for small units of code (independent functions/methods) using different test data, analyzing the expected output independently without impacting other parts of the code. This is a vital part of the shift-left approach, where issues are identified at the earliest phase, at the method level of a microservice. This testing should be done thoroughly, with code coverage above roughly 90%, which reduces the chance of potential bugs in later phases.

2. Component Testing

Component testing is level 2 of the testing triangle and follows unit testing. It aims to test an entire microservice's functionality and APIs independently, in isolation. By writing component tests for the highly granular microservices layer, API behavior is driven through tests from the client or consumer perspective. Component tests exercise the interaction between a microservice's APIs and its database, messaging queues, and external or third-party outbound services, all as one unit; dependent microservices and database responses are mocked or stubbed. It tests a small part of the entire system, and all of the microservice's APIs are tested with multiple sets of test data.

3. Contract Testing

Contract testing, the level 3 approach, verifies the agreed contracts between different domain-driven microservices. Contracts are defined before the development of microservices, in the API/interface design: what the response should be for a given client request or query. If anything changes, the contract has to be revisited and revised. For example, if new feature changes are deployed, they must be exposed through a separately versioned /v2 API request, and we need to make sure the older /v1 version still supports client requests, for backward compatibility. Contract testing covers a small part of the integration, such as:

- The connection between a microservice and its database.
- API calls between two microservices.

4. Integration Testing

Integration testing is level 4, and it verifies end-to-end functionality; it is the next level up from contract testing, used to test and verify an entire piece of functionality by testing all related microservices. According to Martin Fowler, an integration test exercises communication paths through the subsystem to check for any incorrect assumptions each module has about how to interact with its peers. It tests a bigger part of the system, mostly microservices exposing their services via APIs; for example, login functionality, which involves interactions between multiple microservices. It tests interactions between microservice APIs and event-driven hub components for a given piece of functionality.

5. End-to-End (E2E) Testing

E2E testing is the final, level 5 approach in the testing triangle: end-to-end usability black-box testing. It verifies that the entire system as a whole meets business functional goals from a user's, customer's, or client's perspective. E2E testing is performed on the external front end (user interface) or on API client calls with the help of REST clients. It is performed across distributed microservices and SPA (single-page app)/MFE (micro frontend) applications, and it covers the UI, backend microservices, databases, and their internal/external components.

Challenges of Microservice Testing

Many organizations have already adopted digital transformation built on microservice architecture, and IT organizations find it challenging to test microservices applications because of their distributed nature. Below are some of the important challenges, with solutions offered by industry experts.

Multiple Agile Microservices Teams

Inter-communication between multiple agile microservices dev and test teams is time-consuming and difficult. Sometimes teams work in silos, not sharing enough technical and non-technical details, which causes communication gaps.
Solution: The testing triangle's integration and E2E levels help address this challenge by testing dependent microservices developed by different dev teams.

Microservice Integration Testing Challenges

Testing of all microservices does not happen in parallel. End-to-end integration testing of interdependent microservices is a nightmare; in reality, some of these microservices might not be ready for testing in a test environment. Every microservice has its own security mechanism and test data, and it's a daunting task to handle failover of microservices that depend on each other.

Solution: The testing triangle's integration testing helps here by testing dependent microservices' APIs.

Business Requirement and Design Change Challenges

Frequent changes in business and technical requirements under agile development lead to increased complexity and testing effort, which increases development and testing costs.

Solution: The testing triangle provides an effective, systematic, step-by-step process that reduces complexity, operational cost, and testing effort through full test automation.

Test Database Challenges

Databases come in different types (SQL and NoSQL stores like Redis, MongoDB, Cassandra, etc.) with different structures, and these structured and unstructured data types can be combined to meet particular business needs. Every database holds a different kind of test data in distributed microservices development, and it's daunting to maintain different kinds of test data for different databases.

Solution: The testing triangle provides automated BDD (behavior-driven development), where we can pass dynamic test data, and TDM (test data management), which solves test database challenges by managing different kinds and formats of test data.

Conclusion

The testing triangle provides great techniques for solving the challenges associated with testing microservices.
We need to choose these systematic testing techniques with an eye on lower complexity, faster testing, time to market, testing cost, and risk mitigation before releasing to production. This strategy is required for microservices to avoid real production issues; it ensures that test cases cover functional and non-functional E2E testing for the UI, backend, and databases across PROD and non-PROD staging environments, for reliable product releases. Microservices introduce many testing challenges, which can be solved with the step-by-step, bottom-up approach provided by the testing triangle. It is a modern, cloud-native strategy for testing microservices in the cloud, finding and fixing the maximum number of bugs during the testing phases on the way up to the highest level of the triangle, E2E testing. Tip: Many IT organizations have adopted the shift-left culture, especially in situations where identifying and fixing bugs early is important.
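To make level 3 (contract testing) above concrete, here is a hedged Python sketch of a consumer-side contract check on a provider response. The endpoint shape, field names, and values are all hypothetical; real projects typically use a dedicated contract-testing tool, but the core idea fits in a few lines:

```python
# Sketch of a consumer-driven contract check: the "contract" pins down the
# fields and types a /v1 response must keep for backward compatibility.
# Field names and sample payloads are hypothetical.

CONTRACT_V1 = {        # field name -> required type
    "id": int,
    "name": str,
    "status": str,
}

def satisfies_contract(response: dict, contract: dict) -> bool:
    """True if every contracted field is present with the agreed type."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# A provider may ADD fields (e.g., for /v2) without breaking /v1 consumers...
ok_response = {"id": 7, "name": "widget", "status": "active", "extra": True}
# ...but renaming or dropping a contracted field breaks the contract.
bad_response = {"id": 7, "title": "widget", "status": "active"}

print(satisfies_contract(ok_response, CONTRACT_V1))   # True
print(satisfies_contract(bad_response, CONTRACT_V1))  # False
```

Checks like this catch the "/v1 must keep working after /v2 ships" class of regressions at build time, before integration or E2E environments are involved.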

By Rajiv Srivastava
Observability-Driven Development vs Test-Driven Development

The concept of observability involves understanding a system's internal states through the examination of logs, metrics, and traces. This approach provides a comprehensive view of the system, allowing for thorough investigation and analysis. While incorporating observability into a system may seem daunting, the benefits are significant. One well-known example is PhonePe, which experienced 2000% growth in its data infrastructure and a 65% reduction in data management costs after implementing a data observability solution, which helped mitigate performance issues and minimize downtime. The impact of Observability-Driven Development (ODD) is not limited to PhonePe: organizations adopting ODD have reported a 2.1 times higher likelihood of issue detection and a 69% improvement in mean time to resolution.

What Is ODD?

Observability-Driven Development (ODD) is an approach that shifts observability left, to the earliest stages of the software development life cycle, and uses trace-based testing as a core part of the development process. In ODD, developers write code while declaring the desired output and the specifications needed to view the system's internal state and processes. It applies at the component level and to the system as a whole. ODD also serves to standardize instrumentation across programming languages, frameworks, SDKs, and APIs.

What Is TDD?

Test-Driven Development (TDD) is a widely adopted software development methodology that emphasizes writing automated tests prior to coding. The TDD process involves defining the desired behavior of the software through a test case, running the test to confirm its failure, writing the minimum necessary code to make the test pass, and refining the code through refactoring. This cycle is repeated for each new feature or requirement, and the resulting tests serve as a safeguard against future regressions.
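The red-green-refactor cycle just described can be sketched in a few lines of Python. The `slugify` function is a hypothetical example, and plain asserts stand in for a test runner:

```python
# TDD sketch: the test is written FIRST and fails (red), then just enough
# code is written to pass (green), then the code is refactored.
# `slugify` is a hypothetical function used purely for illustration.

# Step 1 (red): declare the desired behavior before any implementation exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already-trimmed  ") == "already-trimmed"

# Step 2 (green): the minimum code that makes the test pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): with the test as a safety net, the implementation can be
# cleaned up or optimized; rerunning test_slugify() guards against regressions.
test_slugify()
```

Each new requirement (say, collapsing repeated spaces) starts the loop again: add a failing assert, make it pass, refactor.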
The philosophy behind TDD is that writing tests compels developers to consider the problem at hand and produce focused, well-structured code. Adherence to TDD improves software quality and requirement compliance, and it facilitates the early detection and correction of bugs. TDD is recognized as an effective method for enhancing the quality, reliability, and maintainability of software systems.

Comparison of Observability-Driven and Test-Driven Development

Similarities

Observability-Driven Development (ODD) and Test-Driven Development (TDD) both strive to enhance the quality and reliability of software systems. Both methodologies aim to ensure that software operates as intended, minimizing downtime and user-facing issues while promoting a commitment to continuous improvement and monitoring.

Differences

- Focus: ODD focuses on continuously monitoring the behavior of software systems and their components in real time, to identify potential issues and understand system behavior under different conditions. TDD, on the other hand, prioritizes detecting and correcting bugs before they harm the system or its users, and it verifies that the software's functionality meets requirements.
- Time and resource allocation: Implementing ODD requires a substantial investment of time and resources for setting up monitoring and logging tools and infrastructure. TDD, in contrast, demands a significant investment of time and resources during the development phase for writing and executing tests.
- Impact on software quality: ODD can significantly improve software quality by providing real-time visibility into system behavior, enabling teams to detect and resolve issues before they escalate. TDD can also significantly improve software quality by detecting and fixing bugs before they reach production; however, if the tests are not comprehensive, bugs may still evade detection.
Moving From TDD to ODD in Production

Moving from a Test-Driven Development methodology to an Observability-Driven Development approach is a significant change. For years, TDD has been the established method for testing software before its release to production. While TDD provides consistency and accuracy through repeated tests, it cannot provide insight into the performance of the entire application or the customer experience in a real-world scenario: the tests conducted through TDD are isolated and do not guarantee the absence of errors in the live application. Furthermore, TDD relies on a consistent environment for conducting automated tests, which is not representative of real-world conditions. Observability, on the other hand, is an evolution beyond TDD that offers full-stack visibility into the infrastructure, application, and production environment. It identifies the root causes of issues affecting the user experience and product releases through telemetry data such as logs, traces, and metrics, and this continuous monitoring helps predict how end users will perceive the application. Additionally, with observability as part of your set of tools, processes, and culture, it is possible to write and ship better code before it even reaches source control.

Best Practices for Implementing ODD

Here are some best practices for implementing Observability-Driven Development (ODD):

- Prioritize observability from the outset: Incorporate observability considerations into the development process right from the beginning. This will help you identify potential issues early and make necessary changes in real time.
- Embrace an end-to-end approach: Ensure observability covers all aspects of the system, including the infrastructure, application, and end-user experience.
- Monitor and log everything: Gather data from all sources, including logs, traces, and metrics, to get a complete picture of the system's behavior.
- Use automated tools: Utilize automated observability tools to monitor the system in real time and alert you to any anomalies.
- Collaborate with other teams: Work with teams such as DevOps, QA, and production to ensure observability is integrated into the development process.
- Continuously monitor and improve: Regularly monitor the system, analyze the data, and make improvements as needed to ensure optimal performance.
- Embrace a culture of continuous improvement: Encourage the development team to continuously monitor and improve the system.

Conclusion

Both Observability-Driven Development (ODD) and Test-Driven Development (TDD) play an important role in ensuring the quality and reliability of software systems. TDD focuses on detecting and fixing bugs before they can harm the system or its users, while ODD focuses on monitoring the behavior of the software system in real time to identify potential problems and understand its behavior in different scenarios. Did I miss any important information? Let me know in the comments section below.
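To illustrate the "declare desired output while writing code" idea, here is a hedged, stdlib-only sketch of instrumentation that emits one structured event per call. All names are hypothetical, and real projects would typically reach for a standard such as OpenTelemetry rather than hand-rolling this:

```python
# Sketch: a decorator wraps a handler so every call emits a structured log
# record (event name, status, duration) that an observability backend could
# ingest. Names are hypothetical; this stands in for real instrumentation.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")

def observed(fn):
    """Decorator emitting one structured event per invocation."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        except Exception:
            status = "error"
            raise
        finally:
            log.info(json.dumps({
                "event": fn.__name__,
                "status": status,
                "duration_ms": round((time.perf_counter() - start) * 1000, 3),
            }))
    return wrapper

@observed
def apply_discount(total, percent):
    return total * (1 - percent / 100)

apply_discount(200.0, 10)  # logs: {"event": "apply_discount", "status": "ok", ...}
```

The instrumentation is written alongside the feature code, so the question "how will we see this misbehave in production?" is answered at development time rather than during an incident.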

By Hiren Dhaduk
Top 10 Best Practices for Web Application Testing

Web application testing is an essential part of the software development lifecycle, ensuring that the application functions correctly and meets the necessary quality standards. Best practices for web application testing are critical to making the testing process efficient and effective, and they cover a range of areas, including test planning, execution, automation, security, and performance. Adhering to them helps improve the quality of the web application, reduces the risk of defects, and ensures that the application is thoroughly tested before it is released to users.

1. Test Early and Often

Testing early and often means starting testing activities as soon as possible in the development process and continuing to test throughout the development lifecycle. This approach allows issues to be identified and addressed early on, reducing the risk of defects making their way into production. Some benefits of testing early and often include:

- Identifying issues early in the development process, reducing the cost and time required to fix them.
- Ensuring that issues are caught before they impact users.
- Improving the overall quality of the application by catching defects early.
- Reducing the likelihood of rework or missed deadlines due to last-minute defects.
- Improving collaboration between developers and testers by identifying issues early and resolving them together.

By testing early and often, teams can ensure that the web application is thoroughly tested and meets the necessary quality standards before it is released to users.

2. Create a Comprehensive Test Plan

Creating a comprehensive test plan involves developing a detailed document that outlines the approach, scope, and schedule of the testing activities for the web application.
A comprehensive test plan typically includes the following elements:

- Objectives: The purpose of the testing and what needs to be achieved through it.
- Scope: Which functionalities of the application will and will not be tested.
- Test strategy: The overall approach, including the types of testing to be performed (functional, security, performance, etc.), testing methods, and tools to be used.
- Test schedule: The timelines, including start and end dates and the estimated time required for each testing activity.
- Test cases: The specific test cases to be executed, including input values, expected outputs, and pass/fail criteria.
- Environment setup: The hardware, software, and network configurations required for testing.
- Test data: The data required for testing, including user profiles, input values, and test scenarios.
- Risks and issues: The potential risks and issues that may arise during testing and how they will be managed.
- Reporting: How the testing results will be recorded, reported, and communicated to stakeholders.
- Roles and responsibilities: The roles and responsibilities of the testing team and other stakeholders involved in the testing activities.

A comprehensive test plan helps ensure that all testing activities are planned, executed, and documented effectively, and that the web application is thoroughly tested before it is released to users.

3. Test Across Multiple Browsers and Devices

Testing across multiple browsers and devices is a crucial practice for web application testing, as it ensures that the application works correctly on different platforms, including different operating systems, browsers, and mobile devices.
This practice involves executing testing activities on a range of popular web browsers, such as Chrome, Firefox, Safari, and Edge, and on various devices, such as desktops, laptops, tablets, and smartphones. Testing across multiple browsers and devices helps identify issues related to compatibility, responsiveness, and user experience. By testing across multiple browsers and devices, testing teams can:

- Ensure that the web application is accessible to a wider audience, regardless of their preferred platform or device.
- Identify issues related to cross-browser compatibility, such as variations in rendering, layout, or functionality.
- Identify issues related to responsiveness and user experience, such as issues with touchscreens or mobile-specific features.
- Improve the overall quality of the application by identifying and resolving defects that could impact users on different platforms.
- Provide a consistent user experience across all platforms and devices.

In short, testing across multiple browsers and devices helps ensure that the application functions correctly and delivers a high-quality user experience on all platforms.

4. Conduct User Acceptance Testing (UAT)

User acceptance testing (UAT) involves testing the application from the perspective of end users to ensure that it meets their requirements and expectations. UAT is typically conducted by a group of users who represent the target audience for the web application and who are asked to perform various tasks using the application. The testing team observes the users' interactions with the application and collects feedback on its usability, functionality, and overall user experience. By conducting UAT, testing teams can:

- Ensure that the application meets the requirements and expectations of end users.
- Identify usability and functionality issues that may have been missed during other testing activities.
- Collect feedback from end users that can be used to improve the overall quality of the application.
- Improve the overall user experience by incorporating user feedback into the application's design.
- Increase user satisfaction by ensuring that the application meets their needs and expectations.

UAT is an essential best practice for web application testing, as it ensures that the application meets the needs and expectations of end users and delivers a high-quality user experience.

5. Automate Testing

Automating testing involves using software tools and scripts to execute testing activities automatically. This approach is particularly useful for repetitive and time-consuming tasks, such as regression testing, where automated tests can be executed quickly and efficiently. Automation also improves the accuracy and consistency of testing results, reducing the risk of human error. By automating testing, testing teams can:

- Reduce testing time and effort, allowing more comprehensive testing within the available time frame.
- Increase testing accuracy and consistency, reducing the risk of human error and ensuring that tests are executed consistently across different environments.
- Improve testing coverage by allowing more tests to be executed in a shorter time frame, increasing the overall effectiveness of the testing process.
- Facilitate continuous testing by running automated tests as part of the development process, allowing issues to be identified and resolved more quickly.
- Reduce testing costs by reducing the need for manual testing and increasing testing efficiency.
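As a concrete sketch of what such an automated check can look like in practice, here is a minimal example using Minitest, which ships with recent Ruby versions. The `Cart` class is a hypothetical stand-in for application code under test; rerunning a file like this after every change is a simple form of automated regression testing:

```ruby
require "minitest/autorun"

# Hypothetical application code under test.
class Cart
  def initialize
    @items = []
  end

  def add(name, price)
    @items << { name: name, price: price }
  end

  def total
    @items.sum { |item| item[:price] }
  end
end

# Automated checks that run identically every time, with no manual effort.
class CartTest < Minitest::Test
  def test_total_sums_item_prices
    cart = Cart.new
    cart.add("book", 10)
    cart.add("pen", 2)
    assert_equal 12, cart.total
  end

  def test_empty_cart_totals_zero
    assert_equal 0, Cart.new.total
  end
end
```

Running `ruby cart_test.rb` executes both tests and reports failures, which makes the check cheap to wire into a CI pipeline.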
Automating testing can significantly improve the efficiency and effectiveness of the testing process, reduce costs, and improve the overall quality of the application.

6. Test for Security

Testing for security involves identifying and addressing security vulnerabilities in the application. This practice involves conducting various testing activities, such as penetration testing, vulnerability scanning, and code analysis, to identify potential security risks and vulnerabilities. By testing for security, testing teams can:

- Identify and address potential security vulnerabilities, reducing the risk of security breaches and data theft.
- Ensure compliance with industry standards and regulations, such as PCI DSS, HIPAA, or GDPR, that require specific security controls and measures.
- Improve user confidence in the application by demonstrating that security is a top priority and that measures have been taken to protect user data and privacy.
- Enhance the overall quality of the application by reducing the risk of security-related defects that could undermine users' experience and trust.
- Provide a secure and reliable platform for users to perform their tasks and transactions, improving customer satisfaction and loyalty.

Security breaches can have significant consequences for both users and businesses. By identifying and addressing potential vulnerabilities, testing teams reduce the risk of security incidents and data breaches.

7. Perform Load and Performance Testing

Load and performance testing involve testing the application's ability to perform under various load and stress conditions. Load testing simulates a high volume of user traffic to test the application's scalability, while performance testing measures the application's response time and resource usage under different conditions. By performing load and performance testing, testing teams can:

- Identify potential bottlenecks and performance issues that could impact the application's usability and user experience.
- Ensure that the application can handle expected traffic loads and usage patterns without degrading performance or causing errors.
- Optimize the application's performance by identifying and addressing performance issues before they impact users.
- Improve user satisfaction by ensuring that the application is responsive and performs well under various conditions.
- Reduce the risk of system failures and downtime by identifying and addressing performance issues before they cause significant impacts.

By identifying and addressing performance issues, testing teams can optimize the application's performance, improve user satisfaction, and reduce the risk of system failures and downtime.

8. Conduct Regression Testing

Regression testing involves retesting previously tested functionality to ensure that changes or fixes to the application have not introduced new defects or issues. This practice is particularly important when changes have been made to the application, such as new features or bug fixes, to ensure that these changes have not impacted existing functionality.
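The response-time measurement described above can be sketched with Ruby's standard-library Benchmark module. This is only a minimal illustration, timing a simulated request handler against a performance budget; real load testing would use a dedicated tool and realistic traffic:

```ruby
require "benchmark"

# Hypothetical stand-in for a request handler whose latency we care about.
def handle_request
  (1..10_000).reduce(:+) # simulated work
end

requests = 200

# Wall-clock time for a batch of simulated requests.
elapsed = Benchmark.realtime do
  requests.times { handle_request }
end

avg_ms = (elapsed / requests) * 1000.0
puts format("average response time: %.4f ms", avg_ms)

# A simple performance criterion: fail loudly if the average exceeds a budget.
raise "performance budget exceeded" if avg_ms > 50
```

Wiring a check like this into CI turns a performance expectation into something the build can enforce, rather than a number someone inspects by hand.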
By conducting regression testing, testing teams can:

- Ensure that changes or fixes have not introduced new defects or issues that could impact user experience or functionality.
- Verify that existing functionality continues to work as expected after changes have been made.
- Reduce the risk of unexpected issues or defects, improving user confidence and trust in the application.
- Improve the overall quality of the application by ensuring that changes or fixes do not negatively impact existing functionality.
- Facilitate continuous testing and delivery by ensuring that changes can be made without introducing new issues or defects.

Regression testing helps ensure that changes or fixes do not negatively impact existing functionality. By identifying and addressing issues before they reach users, testing teams improve the overall quality of the application.

9. Document and Report Defects

Documenting and reporting defects involves tracking and reporting any issues found during testing. This ensures that defects are documented, communicated, and addressed appropriately, improving the overall quality of the application and reducing the risk of user impact. By documenting and reporting defects, testing teams can:

- Ensure that all defects are tracked, documented, and communicated to the appropriate stakeholders.
- Prioritize and address critical defects quickly, reducing the risk of user impact.
- Provide clear and detailed information about defects to developers and other stakeholders, improving the efficiency of the defect resolution process.
- Ensure that defects are resolved appropriately and that fixes are properly tested before being deployed to production.
- Analyze defect trends and patterns to identify areas of the application that require further testing or improvement.

By identifying and addressing defects early in the development cycle, testing teams reduce the risk of user impact and ensure that the application meets user requirements and expectations.

10. Collaborate With the Development Team

Collaborating with the development team involves establishing open communication between the testing and development teams so that both work together to identify, address, and resolve issues and defects efficiently and effectively. By collaborating with the development team, testing teams can:

- Ensure that testing is integrated into the development process, improving the efficiency of both testing and development.
- Identify defects and issues early in the development process, reducing the time and cost required to address them.
- Work with developers to reproduce defects and provide detailed information about issues, improving the efficiency of the defect resolution process.
- Identify areas of the application that require further testing or improvement, providing valuable feedback to the development team.
- Ensure that the application meets user requirements and expectations, improving user satisfaction and confidence in the application.
By establishing open communication and collaboration, testing and development teams can ensure that the application meets user requirements and expectations while improving the efficiency of the testing and development process.

Conclusion

Web application testing is a critical process that ensures the quality, reliability, and security of web-based software. By following best practices such as proper planning, test automation, a suitable test environment, a variety of testing techniques, continuous testing, bug tracking, collaboration, and testing metrics, testers can effectively identify and fix issues before the software is released, resulting in a better user experience.

By Bhavesh Patel
Pair Testing in Software Development

Software development is about cultivating differences in points of view. One of the reasons different roles exist, like product owners, designers, developers, testers, DevOps, and project managers, is to have different points of view during any life cycle: a project life cycle, a product life cycle, a software development life cycle, a software testing life cycle, and so on. A product owner is business oriented: it's all about what we release and its value to the customer. A developer is more implementation driven: it's all about how to implement our features in code. A tester's point of view usually includes both technical and business aspects: it's all about constructively critiquing the product and giving valuable feedback to stakeholders.

One way to cultivate differences in points of view is to use pairing activities. Pair programming and pair testing are two of the most popular. This article focuses on pair testing, and I will share experiences on how teams have used pair testing to their advantage.

Pair Testing Per Role

Pair testing can be done between testers, between developers, and between testers and developers. As long as there is a difference in focus and perspective between the people involved, pair testing will be beneficial.

Pair Testing Between Testers

No matter how well planned and organized testing is, pairing between testers may find missing edge cases. Especially between testers with different levels of expertise, brainstorming cases to test in an exploratory session helps everyone learn more about the system under test. For example, a usability tester pairing with a backend tester may complement each other in ways that produce interesting findings. Testers from different development teams have scheduled pair testing sessions each time a new feature is ready for testing. In some cases, pair testing was scheduled before test execution, during test-case design.
Pair Testing Between Developers

A frontend and a backend developer could brainstorm about why things work the way they do in use cases that require tricky implementations. Developers from different teams could also pair-test for code interdependencies. One team used pair testing for unit tests and integration tests. Other teams tested in pairs all the way from unit and integration testing through API testing. Finally, a development team with no testers used pair testing to broaden their views about their implementations.

Pair Testing Between Testers and Developers

This is usually one of the most effective combinations for pair testing. A developer often tests to check that the implemented code works as expected. A tester often tests end-to-end to check that the software system behaves as expected. Most importantly, the tester must also explore any risks and surprises in the system. Pair testing with a developer to work through surprises and risks can be educational for everyone involved. The tester can learn from the developer why the system behaves the way it does. The developer can learn new ways to examine the system and discover use cases that result in problematic behavior.

For example, one development team of six pair-tested all together before release. The developers had already tested at the unit level and had also performed smoke testing at the UI level. The tester had finished verifying system behavior and exploring. They used a time-boxed session in which the entire team participated for a final green light to release. Another team did whole-team pair testing after the release: once all testing activities were finished and the feature under test was released, the team gathered for a whole-team smoke testing activity. This helped the team gain confidence that the release caused no unpleasant surprises.
Release Features Faster

In a team that followed a cycle like develop → code review → QA test → release, pair testing before code reviews and during development helped release features faster. Developers organized pair-testing sessions with testers to demonstrate how their features worked. During pair testing, they examined positive and negative scenarios. Look-and-feel aspects of the UI were discussed, as well as possible performance bottlenecks. When both were happy with the results of their pair testing, the developer would check in the code under test. Code reviews would follow, and then thorough QA testing by the tester. Pair testing would usually catch basic issues. This reduced the number of develop → code review → QA test cycles needed to fix bugs or implement important improvements before releasing. Fewer cycles meant faster releases.

Find More Bugs

Pair testing can be used to find new bugs. No matter how many tests we write, no matter how many cases we execute, we always find more bugs when we think outside the box and test the system in new ways. We could have an impressive number of test cases covering all functional and non-functional requirements. In time, test suites will catch regression bugs. Although regression bugs are important to find and fix, other important bugs may remain unidentified. Brainstorming during pair testing may bring the creativity required to explore and identify new problems.

Fix Bugs Faster

Showing problems to colleagues in pair-testing sessions is often more effective than filing bugs in a bug tracking system and waiting for a colleague to find time for bug fixing. Bugs must still be tracked in bug tracking systems. In remote working environments, however, where people work in vastly different time zones, bugs identified during pair-testing sessions are more likely to be resolved faster. Communicating and working through a bug together often leads to more effective problem-solving.
Explaining Helps the Explainer Too

Often, when we explain a phenomenon to the best of our abilities, we come to understand it better ourselves. Not only do the people listening learn, but we also deepen our own understanding. One of the most important values pair testing brings is that pairing improves both people's understanding. This often leads to new ideas and solutions to problems, since exchanging ideas can boost creativity. When a developer tries to explain how things work with the newly implemented code, she will likely come to understand things that were not so clear in the first place. As a tester explains the sequence of actions followed to reproduce a sporadic problem, hidden details of the problem may become apparent to him.

Wrapping Up

Pair testing can be done between any roles in a software development group: between product owners, designers, developers, and testers, and between colleagues in the same role. To get the most out of pairing, we need diversity in points of view. Diversity can lead to creativity. As long as diversity also leads to constructive feedback, pair testing can effectively improve our confidence in high-quality feature releases.

By Stelios Manioudakis
21 Best Ruby Testing Frameworks for 2023

QAs are always searching for automation testing frameworks that provide rich features with simple syntax, better compatibility, and faster execution. If you choose to use Ruby in conjunction with Selenium for web testing, you may need to look for Ruby-based testing frameworks for web application testing. Ruby testing frameworks offer a wide range of features, such as support for behavior-driven development, mocking and stubbing, and test suite organization, making it easier for developers to write effective tests for their Ruby-based applications.

Over the past decade, it has become clear that technology will keep making huge strides. Since Ruby has maintained its popularity and usability for over two decades, it makes sense to shine a light on the best Ruby-based frameworks. Since every business needs to consider long-term benefits, picking the right Ruby automation testing framework is a big decision, and the options out there can be overwhelming. In this article, let's look at the twenty-one best Ruby testing frameworks for 2023. We will also check out microframeworks that handle some primary concerns in case you don't need a full-fledged framework. So, are you ready to scale your business by leveraging the power of Ruby? Perfect! Let's dive right in.

Why Ruby for Test Automation?

When it comes to automation testing, one can choose any of the top programming languages. Each language has advantages and limitations, and which one is the best fit depends on the project you're working on. The simple answer, however, is that Ruby is easy to learn and use. It has great support libraries for testing frameworks, databases, and other utilities, making it easy to build a full project quickly and efficiently. It also has a great community that is helpful and friendly with advice and knowledge. Ruby's syntax is easy to read, which makes it easier to understand what you're doing when you need to troubleshoot or fix issues in your code.
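As a quick taste of that readability, even a reader who has never written Ruby can usually guess what a snippet like this does:

```ruby
# Readable, intention-revealing Ruby: iterate over browsers and report each.
browsers = %w[chrome firefox safari edge]

browsers.each do |browser|
  puts "running the test suite on #{browser}"
end

puts "covered #{browsers.count} browsers"
```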
It also makes it simpler to explain what your code does outside of the code itself, because you can simply state, "this code does this," and continue without describing how specific methods operate internally.

Advantages of Ruby

Ruby has several benefits for its users. Here are some of its primary advantages:

- Secure
- Numerous plugins
- Time-efficient
- Packed with third-party libraries
- Easy to learn
- Suited to business logic operations
- Open source

However, Ruby also comes with a few limitations:

- Although it has a solid community, it is not as popular as languages like Java or C#.
- Longer processing time.
- Scripts can be difficult to debug; a flaw that causes errors at runtime can be frustrating for development teams.
- It can be challenging to adapt, as it has fewer customizable features.

Now, let's dive into some of the best Ruby testing frameworks for 2023.

Best Ruby Testing Frameworks for 2023

A variety of testing frameworks are available for Ruby that make it easier to write, run, and manage tests. They range from simple testing libraries to complex, full-featured test suites. In this article, we'll introduce twenty-one of the best Ruby testing frameworks for 2023.

1. RSpec

RSpec is one of the best Ruby testing frameworks and a successful testing solution for Ruby code. With its core focus on empowering test-driven development, this framework features small libraries suitable for independent use with other frameworks. RSpec tests frontend behavior by testing individual components and application behavior using the Capybara gem, and it also supports testing server-side behavior. When performing Selenium automation testing with the RSpec framework, you can group fixtures and organize tests in groups. Its use and redistribution are governed by the MIT license.

2. Cucumber

Cucumber is a reliable automation tool and one of the best Ruby testing frameworks based on BDD. All stakeholders can easily understand its specifications, since they are plain text. It integrates well with Selenium and facilitates hassle-free frontend testing. You can also test APIs and other components with the help of databases and REST and SOAP clients using client libraries. Creating fixtures couldn't be easier: make a fixtures directory and create the fixture files. You can also group fixtures inside these directories while performing Selenium automation testing with the Cucumber framework.

3. Test::Unit

Primarily used for unit testing, Test::Unit belongs to the xUnit family of Ruby unit testing frameworks. It makes fixture methods available via the ClassMethods module and offers support for group fixture methods. Test::Unit is included in the standard library of Ruby and requires no third-party libraries. It supports only a subset of the features available in other major testing frameworks, such as JUnit and NUnit, but it provides enough functionality to help programmers test their applications at the unit level.

4. Capybara

Capybara is an automation testing framework written in Ruby. It can easily simulate scenarios for different user stories and automate web testing. In other words, it mimics user actions such as parsing HTML, receiving pages, and submitting forms. It supports drivers like RackTest, Selenium, and Capybara-WebKit. It comes with Rack::Test support and facilitates test execution via a simple and clean interface. Its powerful synchronization features enable users to handle the asynchronous web easily. Capybara locates relevant elements in the DOM (Document Object Model) and then executes actions such as link and button clicks.
You can easily use Cucumber, Minitest, and RSpec with Capybara.

5. Minitest

Minitest boasts high readability compared to many other Ruby testing frameworks. It offers an all-in-one suite of testing facilities, such as benchmarking, mocking, BDD, and TDD. Even though it's comparatively small, the speed of this unit testing framework is incredible. If you want to assert your algorithm's performance repeatedly, Minitest is the way to go. Its assertion functions are in the xUnit/TDD style. It also offers support for test fixture functions and group fixtures, and users can easily test different components at the backend.

6. Spinach

Spinach is a high-level framework that supports behavior-driven development and uses the Gherkin language. It helps define an application's executable specification or the acceptance criteria of libraries. It makes testing server-side behavior easier, but the same is not true for the client side. The inbuilt generator method generates input data before running each test. However, it doesn't define specific data states for a group of tests; in other words, Spinach doesn't support fixtures and group fixtures.

7. Shoulda

Shoulda comprises two components: Shoulda Context and Shoulda Matchers. The former facilitates enhanced test naming and grouping, whereas Shoulda Matchers offers methods for writing far more concise assertions. The framework allows tests to be organized into groups. Shoulda Matchers is compatible with Minitest and RSpec; Shoulda Context is compatible with Test::Unit and Minitest.

8. Spork

Spork is one of the best Ruby testing frameworks; it forks a server copy every time testers run tests, ensuring a clean testing state. The most significant benefit is that runs don't become corrupted as time passes and are more solid.
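To make section 5 concrete: because Minitest ships with recent Ruby versions, a complete runnable example fits in a few lines. Alongside its xUnit-style assertions, Minitest also provides a BDD-style spec DSL; here is a minimal sketch using core String behavior as a stand-in for code under test:

```ruby
require "minitest/autorun"

# Minitest's spec (BDD) style, available alongside its xUnit-style asserts.
describe "String#upcase" do
  before do
    @greeting = "hello"
  end

  it "returns an upcased copy" do
    _(@greeting.upcase).must_equal "HELLO"
  end

  it "does not mutate the receiver" do
    @greeting.upcase
    _(@greeting).must_equal "hello"
  end
end
```

Saving this as a file and running it with `ruby` executes both examples; the same file could instead be written with `Minitest::Test` and `assert_equal` if the team prefers the xUnit style.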
Thanks to its proper handling of modules, Spork can also work efficiently with any other Ruby framework of your choice. The testing frameworks it supports include RSpec, Cucumber, and Test::Unit. You don't need an application framework for Spork to work. At first, you might not notice the automatic loading of some files, since they load during startup. Sometimes, changes to a project can call for a restart.

9. Aruba

Aruba is a Ruby testing framework that allows testing of command-line applications with Minitest, RSpec, or Cucumber-Ruby. Detailed documentation is available to help users get started with the framework. Although Aruba doesn't fully support Windows, it has proven successful on macOS and Linux in CI; only RSpec tests run flawlessly on Windows. It supports versions 4 through 8. The supported Ruby versions include CRuby 2.5, 2.6, 2.7, 3.0, and 3.1 and JRuby 9.2.

10. Phony

Phony aims to be able to split, format, or normalize every phone number on the planet. In other words, this gem normalizes, formats, and splits E.164 numbers, including the country code. It works only with international numbers like 61 412 345 678. The framework has been widely used at Zendesk, Socialcam, and Airbnb, and it uses approximately 1 MB per Ruby process. Normalization removes a number's non-numeric characters, while formatting formats a normalized number according to a country's predominant formatting.

11. Bacon

Bacon is a feature-rich small clone of RSpec, weighing in at about 350 lines of code. It offers support for Knock, Autotest, and TAP. The first public release came out on January 7th, 2008, followed by a second on July 6th. The third public release was on November 3rd, 2008, and the fourth came out on December 21st, 2012. You must define before and after blocks before your context's first specification.
It’s easy to define shared contexts, but you can’t execute them. However, you can use them with recurring specifications and include them with other contexts, such as behaves_like. 12. RR Source: RR Originally developed by Brian Takita, RR is one of the leading test double Ruby testing frameworks, offering a comprehensive choice of double techniques and terse syntax. If you already use the test framework, RR will hook itself onto your existing framework once you have loaded it. Available through the MIT license, this framework works with Ruby 2.4, 2.5, 2.6, 2.7, 3.0, and JRuby 1.7.4. The frameworks it supports include Test::Unit through test-unit-rr, Minitest 4 and 5, and RSpec 2. When using RR, you can run multiple test suites via rake tasks. 13. Howitzer Source: Howitzer Howitzer is a Ruby-based acceptance testing framework focused solely on web applications. The core aim of this framework is to fasten the pace of test development and offer the necessary assistance to users. It provides support for the following: Operating Systems: macOS Linux Windows Real Browsers: Internet Explorer Firefox Google Chrome Safari Edge Mail Services: Gmail Mailgun Mailtrap CI Tools: Jenkins Teamcity Bamboo CircleCI Travis Github Actions The most significant benefits of using this framework include quick installation, the fast configuration of test infrastructure, intuitiveness, and the choice of the BDD. 14. Pundit Matchers Source: Pundit Matchers If you want to test Pundit authorization policies, the RSpec Matchers set is the way to go. Available under the MIT license, Pundit Matchers offers an easy setup and a hassle-free configuration. Installation of Pundit gems and RSpec–rails are the primary requirements to use the framework. For the test strategy, this framework makes assumptions regarding your policy spec file structure after declaring a subject. You can also test multiple actions at once. 15. 
Emoji-RSpec Source: Emoji-RSpec Emoji RSpec is a framework better known as custom emoji formatters. These formatters are meant for use along with the test output. Emoji-RSpec 1.x offers complete support for 2.0 and backward aid for versions 1.9.2 and 3.0.x, which calls for users to maintain support for 1.8.7. It allows pull requests but prevents the addition of new formats. 16. Cutest Source: Cutest Cutest is a Ruby testing framework focused primarily on isolated tests. Testers run every test file in a way that facilitates avoiding the shared state. After finding a failure, it offers a detailed report of what went down and how you can pinpoint the error. Using the scope command guarantees there wouldn’t be any sharing of instance variables between tests. The prepare command facilitates the execution of blocks before every test. The setup command executes the setup block before every test and passes the outcome to the test block as a parameter. 17. RSpec Clone Source: RSpec Clone RSpec Clone is a minimalistic Ruby testing framework that has all the necessary components for the same. Available under the MIT license, this framework helps lower code complexity and avoid false positives and negatives. Thanks to its alternative syntax, it helps in preventing interface overload. With the RSpec clone, users have a structure to write code behavior executable instances. You can also write these examples in a method similar to plain English, including a DSL. Whatever your project settings, you can run rake spec for a project spec. 18. Riot Source: Riot Riot is one of the best Ruby testing frameworks for unit testing that is contextual, expressive, and fast-paced. Since it doesn’t run teardown and set up sequences before every test and after its completion, the speed of test execution is higher. In general, you should always avoid mutating objects. But when you are using Riot, that’s precisely what you must do. You can also call setup multiple times. 
It also doesn't matter how many times you do so.

19. Turnip

Turnip is a Ruby testing framework for integration and acceptance testing. It is a Gherkin extension for RSpec that addresses the problems of writing specifications with Cucumber. In other words, it is an open-source gem for end-to-end testing of front-end functionality and components, and you can also use it to test server-side components and behavior. When integrated with RSpec, Turnip can access the rspec-mocks gem, and you can declare example groups and contexts by integrating Turnip directly into the RSpec test suite.

20. TMF

TMF is another minimalist Ruby test framework: a small tool in the unit testing category. To get started, you simply copy the code into your project. The framework uses just two methods for testing:

Stub
Assert

The best part about TMF is that, despite being such a minimal tool, it lets testers efficiently exercise various backend components. It is perfect for tests that don't require a hefty feature set.

21. Rufo

Rufo is a Ruby formatter intended primarily for command-line use, auto-formatting files on demand or on save. There is a single Ruby code format, and testers have to ensure their code adheres to it. Rufo supports Ruby 2.4.5 and higher, and you can even use it to develop your own plugins. Its default configuration preserves existing style decisions, which lets team members keep their text editor of choice without the whole team having to switch; however, it supports only a limited set of configuration options.

Executing Selenium Ruby Automation Testing on the Cloud

You can execute Selenium Ruby automation tests in the cloud by using a cloud-based Selenium Grid, such as LambdaTest.
This allows you to run your tests on various browser and operating system combinations without maintaining a large infrastructure. LambdaTest is a cross-browser testing platform that supports the leading Ruby testing frameworks, including RSpec, Capybara, and Test::Unit, and it lets you perform Selenium Ruby automation testing across 3,000+ real browsers and operating systems on an online Selenium Grid. Here are the steps:

Step 1: Sign up for free and log in to the LambdaTest platform.

Step 2: Click the "Automation" tab in the left navigation, which provides the following options:

Builds
Test archive
Analytics

Then choose the language or testing framework presented in the UI.

Step 3: Choose any framework of your choice under Ruby and configure the test.

If you are a developer or tester looking to improve your Ruby skills, the Selenium Ruby 101 certification from LambdaTest may be a valuable resource.

Summing It Up!

Ruby has transformed the web world and will continue to do so, but to use its full potential, it is vital to choose the Ruby testing framework best suited to your requirements. In this article, we covered the twenty-one best Ruby testing frameworks for 2023, aiming to be as comprehensive as possible regarding functionality, productivity, and efficiency. You now have a wide array of outstanding Ruby frameworks at your disposal; since we have already done the extensive shortlisting for you, all you need to do is pick the one that meets your needs. If you think we missed something, sound off in the comments below.
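Several of the minimalist frameworks above, such as Cutest and TMF, reduce testing to a couple of primitives like stub and assert. The following is a rough, hypothetical plain-Ruby sketch of that style; it is not the actual API of TMF or of any other gem listed here:

```ruby
# Hypothetical minimal test helpers in the spirit of TMF (not TMF's real API).
def assert(value, message = "assertion failed")
  raise message unless value
end

# Replace a method on a single object while the block runs, then restore it.
def stub(object, method_name, replacement)
  original = object.method(method_name)
  object.define_singleton_method(method_name) { |*args| replacement.call(*args) }
  yield
ensure
  object.define_singleton_method(method_name) { |*args| original.call(*args) }
end

clock = Object.new
def clock.now
  Time.now
end

stub(clock, :now, -> { Time.at(0) }) do
  assert clock.now == Time.at(0), "expected the stubbed time inside the block"
end
assert clock.now != Time.at(0), "expected the original method after the block"
```

Tools in this category are essentially this idea packaged up: two small primitives are often enough to exercise backend components without a heavyweight framework.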

By Veethee Dixit
Automated Performance Testing With ArgoCD and Iter8

Say you are building a cloud app. Clearly, you will unit-test the app during development to ensure it behaves the way you expect. And no matter how you develop your app, your favorite unit testing tool will make it easy to author, execute, and obtain results from your tests. But what about testing your app once you deploy it in a Kubernetes cluster (test/dev/staging/prod)? Does the app handle realistic load conditions with acceptable performance? Does the new version of the app improve business metrics relative to the earlier version? Is it resilient? In this article, I will introduce Iter8 and one of its new features, AutoX. Iter8 is an open-source Kubernetes release optimizer that can help you get started with testing Kubernetes apps (experiments) in seconds. With Iter8, you can perform various kinds of experiments, such as performance tests, A/B/n tests, chaos injection tests, and more. AutoX, short for automatic experiments, lets you perform these experiments automatically by leveraging Argo CD, a popular continuous delivery tool. We will explore automatically launching performance testing experiments for an HTTP service deployed in Kubernetes. You can familiarize yourself with Iter8 on Iter8's official website.

AutoX

Releasing a new version of an application typically involves the creation of new Kubernetes resource objects and/or updates to existing ones. AutoX can be configured to watch for such changes and automatically launch new experiments. You can configure AutoX with multiple experiment groups and, for each group, specify the Kubernetes resource object that AutoX will watch and one or more experiments to be performed in response to new versions of this object.
Let us now see this in action: we will deploy an HTTP service in Kubernetes and configure AutoX so that, whenever a new version of the service is released, AutoX starts a new HTTP performance test that validates whether the service meets latency- and error-related requirements.

Download the Iter8 CLI

Shell
brew tap iter8-tools/iter8
brew install iter8@0.13

The Iter8 CLI provides the commands needed to see experiment reports.

Set Up the Kubernetes Cluster With Argo CD

As mentioned previously, AutoX is built on top of Argo CD, so we will also need to install it. A basic install of Argo CD can be done as follows:

Shell
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Deploy the Application

Now, we will create an httpbin deployment and service:

Shell
kubectl create deployment httpbin --image=kennethreitz/httpbin --port=80
kubectl expose deployment httpbin --port=80

Apply the Version Label

Next, we will assign the httpbin deployment the app.kubernetes.io/version label (version label). AutoX launches experiments only when this label is present on the trigger object, and it relaunches experiments whenever the label is modified.
Shell
kubectl label deployment httpbin app.kubernetes.io/version=1.0.0

Set Up the Kubernetes Cluster With Iter8 AutoX

Next, we will configure and install the AutoX controller:

Shell
helm install autox autox --repo https://iter8-tools.github.io/hub/ --version 0.1.6 \
--set 'groups.httpbin.trigger.name=httpbin' \
--set 'groups.httpbin.trigger.namespace=default' \
--set 'groups.httpbin.trigger.group=apps' \
--set 'groups.httpbin.trigger.version=v1' \
--set 'groups.httpbin.trigger.resource=deployments' \
--set 'groups.httpbin.specs.iter8.name=iter8' \
--set 'groups.httpbin.specs.iter8.values.tasks={ready,http,assess}' \
--set 'groups.httpbin.specs.iter8.values.ready.deploy=httpbin' \
--set 'groups.httpbin.specs.iter8.values.ready.service=httpbin' \
--set 'groups.httpbin.specs.iter8.values.ready.timeout=60s' \
--set 'groups.httpbin.specs.iter8.values.http.url=http://httpbin.default/get' \
--set 'groups.httpbin.specs.iter8.values.assess.SLOs.upper.http/error-count=0' \
--set 'groups.httpbin.specs.iter8.values.assess.SLOs.upper.http/latency-mean=50' \
--set 'groups.httpbin.specs.iter8.version=0.13.0' \
--set 'groups.httpbin.specs.iter8.values.runner=job'

The configuration of the AutoX controller is composed of a trigger object definition and a set of experiment specifications. In this case, the trigger object is the httpbin deployment, and there is only one experiment: an HTTP performance test with SLO validation associated with this trigger. In more detail, the configuration is a set of groups, and each group is composed of a trigger object definition and a set of experiment specifications. This enables AutoX to manage one or more trigger objects, each associated with one or more experiments. In this tutorial, there is only one group, named httpbin (groups.httpbin...), and within that group there is the trigger object definition (groups.httpbin.trigger...) and a single experiment spec named iter8 (groups.httpbin.specs.iter8...).
The trigger object definition is a combination of the name, namespace, and group-version-resource (GVR) metadata of the trigger object: in this case, httpbin, default, and the GVR apps/v1 deployments, respectively. The experiment is an HTTP SLO validation test on the httpbin service. This Iter8 experiment is composed of three tasks: ready, http, and assess. The ready task ensures that the httpbin deployment and service are running. The http task makes requests to the specified URL and collects latency and error-related metrics. Lastly, the assess task ensures that the mean latency is less than 50 milliseconds and the error count is 0. In addition, the runner is set to job, as this will be a single-loop experiment.

Observe Experiment

After starting AutoX, the HTTP SLO validation test should quickly follow. You can now use the Iter8 CLI to check the status and see the results of the test. The following command allows you to check the status of the test. Note that you need to specify an experiment group via the -g option. The experiment group for experiments started by AutoX has the form autox-<group name>-<experiment spec name>, so in this case it is autox-httpbin-iter8:

Shell
iter8 k assert -c completed -c nofailure -c slos -g autox-httpbin-iter8

We can see in the sample output that the test has completed, there were no failures, and all SLOs and conditions were satisfied.
Plain Text
INFO[2023-01-11 14:43:45] inited Helm config
INFO[2023-01-11 14:43:45] experiment completed
INFO[2023-01-11 14:43:45] experiment has no failure
INFO[2023-01-11 14:43:45] SLOs are satisfied
INFO[2023-01-11 14:43:45] all conditions were satisfied

The following command allows you to see the results as a text report:

Shell
iter8 k report -g autox-httpbin-iter8

You can also produce an HTML report that you can view in the browser:

Shell
iter8 k report -g autox-httpbin-iter8 -o html > report.html

The HTML report will look similar to the following:

Continuous and Automated Experimentation

Now that AutoX is watching the httpbin deployment, releasing a new version will relaunch the HTTP SLO validation test. The version update must be accompanied by a change to the deployment's app.kubernetes.io/version label (version label); otherwise, AutoX will not do anything. For simplicity, we will just change the deployment's version label to relaunch the HTTP SLO validation test. In the real world, a new version would typically involve a change to the deployment spec (e.g., the container image), and this change should be accompanied by a change to the version label:

Shell
kubectl label deployment httpbin app.kubernetes.io/version=2.0.0 --overwrite

Observe the New Experiment

Check if a new experiment has been launched. Refer to the previous "Observe Experiment" section for the necessary commands. If we were to continue updating the deployment (and changing its version label), AutoX would relaunch the experiment for each such change.

More Things You Can Do

Firstly, the HTTP SLO validation test is flexible, and you can augment it in a number of ways, such as adding headers, providing a payload, or modulating the query rate. To learn more, see the documentation for the http task. AutoX is designed to use any Kubernetes resource object (including those with a custom resource type) as a trigger object.
For example, the trigger object can be a Knative service, a KServe inference service, or a Seldon deployment. AutoX is also designed to automate a variety of experiments. For example, instead of the http task, you can use the grpc task to run a gRPC SLO validation test; see the documentation for the grpc task as well as the tutorial on gRPC SLO validation. Furthermore, you can enrich experiments with additional tasks that ship out of the box with Iter8. For example, you can add a slack task so that your experiment results are posted to Slack, giving you the latest performance statistics automatically after every update; see the documentation for the slack task as well as the tutorial on using it. You can also automate experiments that are not from Iter8. For example, a Litmus Chaos experiment is available on the Iter8 hub, and it can likewise be configured with AutoX. Lastly, recall that you can provide multiple groups and experiment specs, so AutoX can launch and manage a whole suite of experiments for multiple Kubernetes applications and namespaces.

Takeaways

AutoX is a powerful new feature of Iter8 that lets you automatically launch performance experiments on your Kubernetes applications as soon as you release a new version. Configuring AutoX is straightforward: you specify a trigger Kubernetes resource object and the experiments you want to associate with it. After trying out the tutorial, consider trying AutoX on your own Kubernetes apps.

By Alan Cha
I Don’t TDD: Pragmatic Testing With Java

We're building a Google Photos clone, and testing is damn hard! How do we test that our Java app spawns the correct ImageMagick processes, or that the resulting thumbnails are the correct size and are indeed thumbnails, not just random pictures of cats? How do we test different ImageMagick versions and operating systems?

What's in the Video

00:00 Intro

We start the video with a general overview of what makes testing our Google Photos clone so tricky. In the last episode, we started extracting thumbnails from images, and we now need a way to test that. As this is done via an external ImageMagick process, we are in for a ride.

01:05 Setting Up JUnit and Writing the First Test Methods

First off, we will set up JUnit 5. As we're not using a framework like Spring Boot, it serves as a great exercise to add the minimal set of libraries and configuration that gets us up and running with JUnit. Furthermore, we will write some test method skeletons while thinking about how we would approach testing our existing code, taking care of test method naming, etc.

04:19 Implementing ImageMagick Version Detection

In the last episode, we noticed that running our Java app on different systems leads to unexpected results or just plain errors. That is because different ImageMagick versions offer different sets of APIs that we need to call. Hence, we need to adjust our code to detect the installed ImageMagick version and also add a test method that checks that ImageMagick is indeed installed before running any tests.

10:32 Testing Trade-Offs

As is apparent with detecting ImageMagick versions, the real problem is that reaching 100% test coverage across a variety of operating systems and installed ImageMagick versions would require a pretty elaborate CI/CD setup, which we don't have in the scope of this project. So we discuss the pros and cons of our approach.
12:00 Implementing @EnabledIfImageMagickIsInstalled

What we can do, however, is make sure that the rest of our test suite only runs if ImageMagick is installed. Thus, we will write a custom JUnit 5 annotation called @EnabledIfImageMagickIsInstalled that you can add to any test method, or even to whole classes, to enable this behavior. If ImageMagick is not installed, the tests simply will not run instead of displaying an ugly error message.

16:05 Testing Successful Thumbnail Creation

The biggest problem to tackle: How do we properly assert that thumbnails were created correctly? We approach this question by testing for ImageMagick's exit code, estimating file sizes, and also loading the image and making sure it has the correct number of pixels. All of this with the help of AssertJ and its SoftAssertions, which easily combine multiple assertions into one.

23:59 Still Only Works on My Machine

Even after testing our whole workflow, we still need to make sure to call a different ImageMagick API for different versions. We can quickly add that behavior to support IM6 as well as IM7, and we are done.

25:53 Deployment

Time to deploy the application to my NAS. And this time around, everything works as expected!

26:20 Final Testing Thoughts

We did a fair amount of testing in this episode. Let's sum up all the challenges and pragmatic testing strategies that we learned about.

27:31 What's Next

We'll finish the episode by having a look at what's next: multithreading issues! See you in the next episode.

By Marco Behler CORE
Testing Challenges Related to Microservice Architecture

If you are living in the same world as I am, you must have heard the latest buzzword, "microservices": a lifeline for developers and enterprise-scale businesses. Over the last few years, microservice architecture has risen above conventional SOA (Service-Oriented Architecture). This much more precise, smaller-grained architecture brings many benefits, enough to make a business more scalable by parallelizing development, testing, and maintenance across various independent teams. Considering how different this approach is from the conventional monolithic process, the applicable testing strategies are also different, and with different testing strategies come different testing challenges. By now, everyone in the tech world is aware that microservice architecture is useful for delivering a more responsive and agile application. Some major organizations, such as Netflix, Nike, and Facebook, have built their performance on this architecture.

Key Challenges for Testing Microservices

1. Integration Testing and Debugging

To write effective integration test cases, a quality assurance engineer should have thorough knowledge of each of the various services that a piece of software delivers. Analyzing logs across multiple microservices can be tricky and mentally taxing.

2. Struggling Coordination

With so many independent teams working simultaneously on different functionalities, it becomes very challenging to coordinate the overall development of the software. For instance, it is tough to spot an idle time window for extensive testing of the entire software.

3. Decoupling of Databases

Each microservice implements a single business capability and should have its own separate database. However, that isn't always feasible, and in some applications there may be no need to decouple databases at all. So, a sound evaluation is required to judge which microservice needs decoupling and which doesn't.

4.
Re-Architecting a Software Product

It can be very tiresome for a software architect to redesign an application around microservices, especially in an enterprise with gigantic, compound systems.

5. Complexity

The complexity of the software is directly proportional to the number of microservices the product delivers or adds.

6. Performance Tracing

If you are transitioning from monolithic to microservice architecture, a large number of tiny components are bound to be generated, and these components have to communicate consistently. Performance tracing of business transactions can turn out to be humongous.

7. Difficult to Visualize After-Effects

Involving numerous distinct teams requires a top-notch interface for communication. If the interfaces aren't properly kept up to date in the software, collaboration suffers. It becomes very strenuous to anticipate the after-effects of any enhancement to the existing communication platform.

8. Increased Flexibility, Increased Maintenance

Microservices do give developers the freedom not to depend on a specific programming language, increasing their flexibility. However, you then face the hassle of maintaining multiple libraries and database versions.

9. Cleaning Up Software Libraries

As quoted by Fowler-Rigetti, "You have some script running on a box somewhere doing God knows what, and nobody wants to go clean that up; They all want to build the next new thing." With a variety of developers across different microservice teams, there are numerous ways to perform a single action. Custom scripts in different languages are deployed so often that a piece of code is easily forgotten, and the same feature then gets recreated by yet another custom script in yet another language. Effective maintenance and management are needed to overcome this.

10.
Prioritization

With a great number of microservices at your disposal, it becomes vital to prioritize these services in terms of resource allocation. You cannot afford to pour an outsized amount of resources into a microservice team responsible for a relatively small piece of functionality.

How To Overcome Such Challenges

Specific API endpoints: API endpoints must be provided by every microservice to communicate synchronously or asynchronously with other microservices. These endpoints work on HTTP verbs like GET, POST, DELETE, etc. Each microservice has to let the other services know exactly what pattern should be followed for the appropriate routing of requests. Usually, this is a REST endpoint for synchronous communication, but it could also be a WSDL endpoint for asynchronous communication. The formats of these APIs have to be published to the other microservice teams so they know how to connect to your microservice. Once the routing is published and passed on to every microservice team, standardized communication takes place across the system, boosting the efficiency of the integrated software.

Every microservice is responsible for its own data model. Ideally, each database model should be 100% decoupled from the others. The idea is to know which persistence model the team facilitating a single microservice needs.

Autonomous selection of technically sound staff for every microservice: hiring effective developers, testers, quality analysts, business analysts, and project managers will bring you the key to success.

Not all microservices are bound to provide some form of UI. Some, such as those from a middleware team, exist to support the integrated interaction.

Standardized development practices across teams call for a bigger investment on the platform side. This is where cloud-based providers come into the picture, like AWS (Amazon Web Services), Heroku, Google Cloud, etc.
Therefore, if you are running a small-scale organization and do not envision needing to scale anytime soon, you may be better off without microservice architecture.

Make it a necessity to correlate calls with the help of methods like IDs, tokens, or headers. Also, when digging through logs to locate a bug, make sure events are correlated across all platforms to avoid ambiguity in this stateless, independently distributed architecture.

DevOps needs to be more integrated than ever.

Security needs to be more robust and vigilant, as the diversified structure gives hackers the opportunity to hit the soft targets of your system.

Fault tolerance should be optimized, and consistent monitoring must be performed. Effective use of caching also helps speed up response times by reducing the number of requests the software has to serve.

Also, if you are planning to develop a feature to aid your respective microservice, you need to make sure it doesn't affect the functionality delivered by another microservice team. Your enhancement must support the entire pre-existing functionality of the application.

Conclusion

You need excellent monitoring tools to display the working of your software. Effective logging and documentation may seem exhausting but are indispensable for software maintenance and enhancement. We don't intend to criticize microservice architecture; rather, we want you to be aware of its details before deploying it in your organization. Microservice architecture will definitely boost the scalability of your business development, bringing a top-notch product to the market. All you need is a little precaution regarding the pros and cons of its implementation. Remember, prevention is better than cure!
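One recommendation above, correlating calls across services with IDs, tokens, or headers, can be sketched in a few lines of plain Ruby. The helper and the header name below are hypothetical, illustrative choices, not part of any specific framework:

```ruby
require "securerandom"

# Hypothetical helper: ensure every outgoing request carries a correlation ID
# so that log lines can be joined across independently deployed services.
def with_correlation_id(headers)
  headers = headers.dup
  headers["X-Correlation-ID"] ||= SecureRandom.uuid
  headers
end

incoming = { "Accept" => "application/json" }
outgoing = with_correlation_id(incoming)

# Every log line includes the same ID, so a single business transaction can
# be traced through all the microservices it touched.
log_line = "[#{outgoing['X-Correlation-ID']}] calling inventory-service"
```

The key design point is that the ID is generated only if absent, so a request that already carries one keeps it as it hops from service to service.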

By Harshit Paul
What Is Testing as a Service?

Testing as a Service (or TaaS) is an outsourcing model in which an independent service provider undertakes testing activities on a company's behalf, providing ready access to the right tools, experts, and automated test environments.

How Does Testing as a Service Work?

TaaS can assume various shapes and forms, but the basic principle remains consistent: a company engages an external service provider to conduct testing. It is typically used for automated processes (which require massive amounts of resources and effort if done manually) and may cover only a portion of the testing. If the business lacks the necessary resources (e.g., technology) to conduct a thorough checkup on its own, it may also consider the software-testing-as-a-service model. TaaS is not an option if a deep analysis of all the hardware, software, and services required to run the company and execute business operations is needed. Companies prefer TaaS when the time available for testing is restricted, there is a lack of testing infrastructure, or there is an extensive level of automation. A vendor offers customized testing solutions, automating nearly half of the test cases while reducing testing time and cost (the cloud provides the tools and infrastructure). The overall procedure is as follows:

Test scenarios describing what needs to be tested are created.
Test environments are configured.
Tests are prepared and executed inside the existing test environment.
Finally, performance is monitored and analyzed.

The provider and the client then cooperate to improve the product, enhance its performance, and achieve high-quality results in the future. So, what does TaaS do, and why does it get so much attention? The secret is that TaaS refers to a wide range of testing techniques, supporting various aspects of the app testing process while offering significant benefits, such as faster delivery, reduced costs, and solutions tailored to the client's demands.
What Does TaaS Typically Include?

Testing as a Service can be classified into two main categories, functional and non-functional testing, with minor categories within these two groups depending on their objectives.

Cloud Testing

Cloud testing as a service focuses on testing the company's cloud resources and the apps that reside in the cloud, to guarantee clients can securely access the platform over the Internet.

Quality Assurance Testing

Quality assurance testing as a service ensures that the final version of the product meets the requirements before it is released to the public. The vendor offers testing solutions to eliminate flaws and ensure quality.

Penetration Testing

Penetration testing as a service is when a vendor performs mock attacks (simulated cyberattacks) to evaluate a company's security system. This form of TaaS is part of a more comprehensive security program that exposes and addresses hidden weaknesses in the system's defenses before hackers can exploit them.

Unit Testing

Unit testing as a service focuses on evaluating the functionality of the smallest unit in the system, a given piece of code. Typically, a weak part is checked first, since it is an easy entry point for defects.

Graphic User Interface (GUI) Testing

GUI testing as a service, or Graphic User Interface testing as a service, evaluates the user-facing side of the software; in other words, it is testing from a user's perspective across the expected platforms and devices. A service provider can find the defects your clients would otherwise report and discover ways to improve the user interface.

Regression Testing

Regression testing as a service focuses on elements that have already been checked. It is performed when the system changes, to confirm that the existing features have not been affected by the new ones.
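The idea behind regression testing, confirming that existing behavior survives a change, is often implemented by comparing current output against results captured from a known-good build. Here is a minimal, hypothetical plain-Ruby sketch of that pattern; the function and the stored "golden" values are invented for illustration:

```ruby
require "json"

# Function under test (illustrative).
def price_with_tax(amount)
  (amount * 1.2).round(2)
end

# "Golden" results captured from the previous, known-good release.
golden = JSON.parse('[{"input": 10.0, "expected": 12.0}, {"input": 0.99, "expected": 1.19}]')

# Any case whose current output differs from the stored result is a regression.
failures = golden.reject { |c| price_with_tax(c["input"]) == c["expected"] }
raise "regressions found: #{failures.inspect}" unless failures.empty?
```

A TaaS provider runs exactly this kind of comparison at scale: the golden data set is refreshed on each accepted release, and every subsequent change is checked against it automatically.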
API Testing

API (Application Program Interface) testing as a service answers the question of whether the program meets functionality, security, and reliability expectations by sending requests to various API endpoints and comparing the actual responses with the expected results.

Load Testing

Load testing as a service is part of performance analysis in which the reaction to heavy usage volumes is evaluated by applying the desired load variations and simulating real-user scenarios. The provider looks for weak spots in the system in order to eliminate them, improve response times, and determine how much traffic the app can handle without failures or unexpected exits.

Performance Testing

Performance testing as a service refers to overall application performance testing, in which a team of professionals verifies that the app behaves as it should under the expected workload, eliminating bottlenecks as they emerge. The software's speed, scalability, and stability under different loads are the priorities. The efficiency of performance testing can be dramatically increased if the process is outsourced with TaaS and, as a result, automated.

Integration Testing

Integration testing as a service is when a service provider examines how distinct code units interact or integrate with one another. Instead of assessing each component individually, the vendor analyzes how all of them work together as a combined entity.

Functional Testing

Functional testing as a service covers testing of the entire existing functionality and how the system operates. Other types of functional analysis, such as GUI and user acceptance testing, may also fall under this category.

Localization Testing

Localization testing as a service checks whether the settings are correct and fulfill expectations in a foreign locale (country- and culture-specific adjustments).
It is conducted to eliminate errors associated with adaptation when software is localized for usage in a new region. With the help of professional localization tools, the provider verifies that the product functions flawlessly in every market, for every user. The human perspective, we believe, can never be automated out of this form of testing, which is why it is critical to have a professional team ensuring that each user has an equally enjoyable experience with your product.

What to Consider When Choosing a TaaS Provider

There are a few things you should keep in mind when ordering Testing as a Service (TaaS):

Define Your Testing Needs

Have a clear understanding of your testing needs, or at least set your priorities and consult a potential provider about the ins and outs of the testing project. This includes identifying the types of testing you need (e.g., functional, performance, security), the scope of the testing (e.g., specific features or components), and the desired outcomes (e.g., identifying and fixing bugs, improving performance).

Focus on the Testing Provider's Expertise

Look for a TaaS provider with experience and expertise in the types of testing you need. This may include specialized knowledge of specific technologies or frameworks and experience testing similar applications or systems.

Evaluate the TaaS Provider's Processes and Tools

It's important to understand how the TaaS provider will approach the testing process and what tools and technologies they will use, including real-life devices.

By Anna Smith

Top Testing, Tools, and Frameworks Experts


Justin Albano

Software Engineer,
IBM

I am devoted to continuously learning and improving as a software developer and sharing my experience with others in order to improve their expertise. I am also dedicated to personal and professional growth through diligent studying, discipline, and meaningful professional relationships. When not writing, I can be found playing hockey, practicing Brazilian Jiu-jitsu, watching the NJ Devils, reading, writing, or drawing. ~II Timothy 1:7~ Twitter: @justinmalbano

Thomas Hansen

CEO,
Aista, Ltd

Obsessed with automation, Low-Code, No-Code, and everything that makes my computer do the work for me.

Soumyajit Basu

Senior Software QA Engineer,
Encora

Hello all, my name is Soumyajit Basu. I am a software engineer by profession and by passion, and I love exploring the world of technology. I am fairly new to DZone and a devoted follower of it, as I believe they have done wonderful work in bringing every technology lover onto the same platform, where everybody can learn from each other. I personally believe in a [1:n] relationship: in layman's terms, I am one, but I can learn from many. In short, I love being a DZone contributor and would like to grow on this platform with everyone's help. Kindly read my articles and give me your valuable input so that I can deliver better articles in the future. "I'd rather be a failure at something I love than a success at something I hate." – George Burns

Vitaly Prus

Head of software testing department,
a1qa

Vitaly Prus is the Head of the software testing department at a1qa. With 13 years of experience in quality assurance, he has gained vast knowledge in both talent management and working with clients in a management role. One of his main activities at present is implementing company policy and directing strategy toward the profitable growth and operation of the company. Vitaly is also responsible for managing teams and projects, coordinating department activity, consulting on QA and related IT processes, establishing processes, introducing QA into the software development life cycle, evaluating software quality, and setting up teams. The QA department led by Vitaly now consists of more than 175 engineers who have successfully completed over 200 projects across healthcare, retail, eCommerce, insurance, and other industries.

The Latest Testing, Tools, and Frameworks Topics

How to Identify Locators in Appium (With Examples)
This Appium testing tutorial focuses on the Appium automation tool to automate Android and iOS applications using different locators in Appium.
March 31, 2023
by Wasiq Bhamla
· 485 Views · 1 Like
Advantages and Disadvantages of Test Automation
In this article, we will explore both the advantages and disadvantages of test automation.
March 31, 2023
by Pooja N
· 1,998 Views · 1 Like
How To Create a Background Service in Android
Not everything needs to run in a focused application.
March 30, 2023
by Nilanchala Panigrahy
· 23,312 Views · 1 Like
How to Create a Custom Layout in Android by Extending ViewGroup Class
Learn to create a custom Layout manager class to display a list of tags.
March 30, 2023
by Nilanchala Panigrahy
· 43,941 Views · 2 Likes
How to Monitor TextView Changes in Android
In this tutorial, we will see how to monitor the text changes in Android TextView or EditText.
March 30, 2023
by Nilanchala Panigrahy
· 5,698 Views · 0 Likes
Android Keyboard Hacks: Hide the Keyboard and Customize Actions
Use these code hacks to control the appearance and behavior of Android's keyboard.
March 30, 2023
by Nilanchala Panigrahy
· 12,462 Views · 1 Like
How To Install Oceanbase on an AWS EC2 Instance
In this article, I will walk you through how to install OceanBase on an AWS EC2 instance. This is the first in a series of articles where I demonstrate how to integrate OceanBase into your applications.
March 30, 2023
by Wayne S
· 1,080 Views · 1 Like
Android Third-Party Libraries and SDK's
See Android third-party libraries and SDKs.
March 30, 2023
by Nilanchala Panigrahy
· 32,586 Views · 1 Like
Natural Language Processing (NLP) in Software Testing: Automating Test Case Creation and Documentation
Explore the transformative power of Natural Language Processing (NLP) in revolutionizing software testing by automating test case creation and documentation.
March 30, 2023
by Jacinth Paul
· 1,351 Views · 1 Like
Tackling the Top 5 Kubernetes Debugging Challenges
Bugs are inevitable and typically occur as a result of an error or oversight. Learn five Kubernetes debugging challenges and how to tackle them.
March 29, 2023
by Edidiong Asikpo
· 2,279 Views · 1 Like
Effective Jira Test Management
This article explores the effective use of Jira in software testing with a QA workflow in mind and additional tools and Jira plugins.
March 29, 2023
by Oleksandr Siryi
· 1,391 Views · 1 Like
Rapid Debugging With Proper Exception Handling
In this article, you will learn when to use and when NOT to use exception handling using concrete examples.
March 29, 2023
by Akanksha Gupta
· 1,515 Views · 1 Like
Building a REST API With AWS Gateway and Python
Build a REST API using AWS Gateway and Python with our easy tutorial. Build secure and robust APIs that developers will love to build applications for.
March 29, 2023
by Derric Gilling CORE
· 2,800 Views · 1 Like
7 Ways for Better Collaboration Among Your Testers and Developers
Collaboration between developers and testers is crucial to delivering your web application on time. Read on for seven ways to achieve it. (Psst... look out for #4.)
March 28, 2023
by Praveen Mishra
· 1,448 Views · 2 Likes
Understanding What Is Required to Start With Test Automation for Junior Testers
Tips and tricks to help junior testers start with test automation.
March 28, 2023
by Mirza Sisic
· 1,618 Views · 1 Like
Introduction to Shift Left Testing
Shift-left testing improves the efficiency and effectiveness of software development processes.
March 28, 2023
by Anshita Bhasin
· 1,766 Views · 1 Like
Automated Performance Testing With ArgoCD and Iter8
In this article, readers will learn about AutoX, which allows users to launch performance experiments on Kubernetes apps, along with code and visuals.
March 28, 2023
by Alan Cha
· 8,319 Views · 4 Likes
How Can Digital Testing Help in the Product Roadmap
This article explains the importance of a product roadmap and how digital experience testing can help in creating the product roadmap.
March 28, 2023
by Anusha K
· 2,337 Views · 1 Like
Web Testing Tutorial: Comprehensive Guide With Best Practices
In this article, readers will deep dive into web testing to help you understand its life cycle, elements, angles, the role of automation, and much more.
March 27, 2023
by Rhea Dube
· 1,633 Views · 1 Like
Navigating Progressive Delivery: Feature Flag Debugging Common Challenges and Effective Resolution
Lightrun's Conditional Snapshot and Logs allow developers to create virtual breakpoints dynamically without impacting performance or sacrificing security.
March 27, 2023
by Eran Kinsbruner
· 3,409 Views · 1 Like
