
Best Practices for Establishing Automated Continuous Testing


Automated continuous testing can satisfy the demand on QA to test faster. What are best practices for establishing this method while avoiding higher cost?


Best practices for reaching continuous deployment faster, with dramatic reductions in outage minutes, development costs, and QA testing cycles. Brought to you by Rainforest QA.

With the advent of the DevOps model of continuous integration and continuous deployment, there is high demand on QA organizations to test faster and test continuously. Continuous integration requires that every time somebody commits a change, the entire application is built and a comprehensive set of automated tests is run against it. We all know that automated testing is the only way to shorten the feedback loop and reduce the workload on testers. So, what are the best practices for establishing an automated continuous testing strategy while avoiding the significant delivery costs caused by an inefficient framework and poor practices?

  • Architect the test automation framework to integrate easily with any continuous integration and SCM tool, so that migration is less painful if your project or program changes its CI or SCM tooling.
  • Test implementations should call through an application driver layer to interact with the system under test. The application driver layer exposes an API that knows how to perform actions and return results. Automation at the API level is less fragile and easier to manage. This supports shifting quality left.
  • Implement Test-Driven Development (TDD) to reduce development cost and prove that the developed code works as expected, in turn enabling faster deployment to production.
  • Ensure that your automated scripts run on all supported browsers and handle exceptions specific to each browser. Run your tests regularly to identify intermittent failures caused by UI or environment changes.
  • Choose a tool whose scripts can be version-controlled, compared, and merged. Align test script versions with the development process's mainline and branching model, based on major releases and features, to support the deployment pipeline.
  • Strive to ensure that tests are atomic, so that the order in which they execute does not matter and they can be run in parallel. An atomic test executes the steps it needs and then tidies up after itself, recording a pass or fail.
  • Implement try {} catch {} finally {} blocks to handle exceptions raised in the script or in the application under test.
  • Use tools like Apache Ant or Maven to manage test builds and reporting. They help define the target test suite along with parameters, such as the browsers and environments to be used for a run. This makes it easy to control scripts from tools like Jenkins or TFS.
  • Maintain separate test builds for smoke tests, regression tests, etc., and tie them to the development build. This lets you establish continuous automated testing without monitoring environment deployment notifications or relying on verbal communication. For example, Jenkins allows you to create test builds and tie them to any development build. Whenever the development team completes a build, it automatically triggers the test build and runs the relevant smoke or regression tests on a remote virtual machine or cloud desktop.
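The application-driver point above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: `FakeApp`, `LoginDriver`, and their methods are hypothetical names, and in a real suite the driver would wrap Selenium or HTTP API calls rather than an in-memory object.

```python
class FakeApp:
    """Stands in for the system under test (hypothetical)."""
    def __init__(self):
        self.users = {"alice": "s3cret"}
        self.session = None

    def authenticate(self, user, password):
        if self.users.get(user) == password:
            self.session = user
            return True
        return False


class LoginDriver:
    """Application driver layer: exposes intent-level actions and
    results, hiding *how* the interaction happens (UI vs. API)."""
    def __init__(self, app):
        self.app = app

    def log_in(self, user, password):
        return self.app.authenticate(user, password)

    def current_user(self):
        return self.app.session


def test_valid_login():
    driver = LoginDriver(FakeApp())
    assert driver.log_in("alice", "s3cret")
    assert driver.current_user() == "alice"


def test_invalid_login():
    driver = LoginDriver(FakeApp())
    assert not driver.log_in("alice", "wrong")
    assert driver.current_user() is None


test_valid_login()
test_invalid_login()
```

Because the tests speak only to `LoginDriver`, a change from UI-level to API-level interaction touches the driver, not the tests, which is what makes this style less fragile.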
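The TDD bullet can be illustrated with a tiny red-green example. Assume the test function was written first and drove the implementation; `apply_discount` is a hypothetical name chosen for illustration.

```python
# Test written first (it would initially fail: apply_discount
# does not exist yet). The implementation below makes it pass.

def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0   # 10% off 100
    assert apply_discount(80.0, 25) == 60.0    # 25% off 80
    assert apply_discount(50.0, 0) == 50.0     # no discount


def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    return price * (1 - percent / 100.0)


test_apply_discount()
```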
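An atomic test, as described above, creates the state it needs and tidies up after itself. A minimal sketch using the standard `unittest` setUp/tearDown hooks (the file-based fixture here is just an example of per-test state):

```python
import os
import tempfile
import unittest


class AtomicFileTest(unittest.TestCase):
    """Each test gets a fresh, isolated fixture and cleans up
    afterwards, so tests can run in any order or in parallel."""

    def setUp(self):
        # fresh state for every single test
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        # tidy up regardless of pass or fail
        for name in os.listdir(self.workdir):
            os.remove(os.path.join(self.workdir, name))
        os.rmdir(self.workdir)

    def test_write(self):
        path = os.path.join(self.workdir, "a.txt")
        with open(path, "w") as f:
            f.write("data")
        self.assertTrue(os.path.exists(path))

    def test_starts_empty(self):
        # passes whether or not test_write ran first
        self.assertEqual(os.listdir(self.workdir), [])


suite = unittest.TestLoader().loadTestsFromTestCase(AtomicFileTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Neither test depends on the other having run, which is exactly the property that makes parallel execution safe.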
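The try/catch/finally advice is about guaranteeing cleanup and recording an outcome even when a step fails. A hedged sketch in Python's try/except/finally form, with `FakeSession` standing in for a browser session (the names are illustrative only):

```python
class FakeSession:
    """Stands in for a browser or API session (hypothetical)."""
    def __init__(self):
        self.open = True

    def click(self, element):
        if element != "submit":
            raise RuntimeError(f"element not found: {element}")

    def close(self):
        self.open = False


def run_step(element):
    session = FakeSession()
    try:
        session.click(element)
        outcome = "pass"
    except RuntimeError as exc:
        # record the failure instead of aborting the whole suite
        outcome = f"fail: {exc}"
    finally:
        session.close()  # always release the resource
    return outcome, session.open


assert run_step("submit") == ("pass", False)
outcome, still_open = run_step("missing")
assert outcome.startswith("fail") and still_open is False
```

The `finally` block runs on both paths, so the session is closed whether the step passes or raises, which keeps one failing script from leaking resources into the next run.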

I welcome any discussion about your experiences and challenges in implementing the above best practices.


continuous integration, devops, test automation, tdd

Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
