
Five Principles for Engineering High Quality Software


This checklist will help you develop high-quality applications, with five actionable ideas covering code coverage, testing, and refactoring.


Experience has taught users to avoid the latest version of a software application until the inevitable maintenance releases and patches have shipped. While everyone is aware of the Software Quality Gap that exists between the initial release (V) and the eventual stable release (V.n), not much progress is being made toward closing it.

This article will discuss five actionable ideas to help development groups close the Quality Gap:

  1. Use code coverage analysis to measure testing completeness

  2. Improve test coverage with unit tests

  3. Make tests easy to run, and test results easy to understand

  4. Implement automated, parallel, and change-based testing

  5. Constantly refactor code to improve maintainability

1. Use Code Coverage Analysis to Measure Testing Completeness

All organizations that develop software perform system testing prior to release, to ensure that the application will work properly when deployed by the end user. The challenge with system testing, however, is ensuring that the testing is complete. Many development organizations build test procedures that map to the written requirements for the application (if available) or to the user documentation. This type of testing exercises the nominal paths through the code, but it is unlikely to test boundary or error conditions, and it often yields only 60-70 percent code coverage.

The only way to ensure that system testing is complete is to collect and analyze code coverage data during testing, to determine which parts of the application have been executed by each system test -- and, more importantly, which parts have not been executed by any test. Code coverage analysis is therefore the only reliable metric for testing completeness, and it can be applied across the entire application life-cycle, from single-developer testing through final release testing.

While achieving "100% code coverage" does not prove that an application is perfect, it is a critical component of engineering high-quality software. In fact, many of the standards associated with safety-critical software development mandate code coverage as part of the development process. These include DO-178B/C (Aerospace), ISO 26262 (Automotive), IEC 61508 (Industrial Controls), FDA and IEC 62304 (Medical Devices), and the CENELEC standards (Rail Applications).


Figure 1: Analyzing source code coverage is the best way to measure the completeness of test activities.
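To make the coverage measurement concrete, here is a minimal sketch using Python's coverage.py package. The handle_request function and its tiny test driver are invented for illustration; they stand in for a real application and system test suite:

```python
"""Minimal sketch: measure which lines a test run actually executes.

Assumes the coverage.py package is installed (pip install coverage).
handle_request and run_system_tests are hypothetical stand-ins for a
real application and its system test driver.
"""
import coverage

def handle_request(value: int) -> str:
    if value < 0:
        return "error"      # an error path that nominal tests often miss
    return "ok"

def run_system_tests() -> None:
    # A "nominal path" test: it never exercises the error branch.
    assert handle_request(1) == "ok"

cov = coverage.Coverage()
cov.start()
run_system_tests()
cov.stop()
cov.save()

# show_missing=True lists the line numbers no test executed --
# here, the "error" branch will show up as uncovered.
cov.report(show_missing=True)
```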

2. Improve Test Coverage with Unit Tests

Once the process of measuring code coverage has begun, it will likely reveal that existing tests provide significantly less than 100% coverage. This coverage gap results from testers focusing on nominal use cases and not on error cases or boundary conditions.

The obvious way to close the coverage gap is to add more functional tests, but it is likely that 20-30 percent of the application code is genuinely difficult to exercise with functional tests in a production environment, since it is difficult to inject the faults required to trigger the error handling.


Figure 2: Coverage gaps result from testers focusing on nominal use cases, and not on error cases or boundary conditions.

It’s no surprise that most critical bugs found in the field are the result of an odd combination of stimuli that was never anticipated. Enter the fabled Heisenbug: a bug that disappears or alters its behavior when one attempts to probe or isolate it. For C programmers, these are often thought to be the result of uninitialized auto variables, and they are a source of frustration because the very act of observing the program appears to alter its behavior.[1]

This is where low-level unit testing is critical. Unit testing is an important part of building a robust, error-free application because it allows the tester to more easily stimulate the low-level functionality of the application and prove that the low-level requirements have been implemented properly.

In unit testing, individual units of code are run in isolation, with mock objects and stubs standing in for their dependencies. This provides a number of benefits, including helping developers discover issues early in the development process, when they are easier to address and fix. Unit tests also allow fault injection, enabling the testing of error handling in ways that are impossible in a production environment.
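For example, a unit test can inject an I/O fault on demand -- something nearly impossible to arrange in a production environment. A minimal sketch using Python's unittest.mock, where load_config is a hypothetical unit under test:

```python
"""Minimal sketch: fault injection in a unit test via unittest.mock.

load_config is a hypothetical unit under test; the mock forces the
OSError branch that a production environment rarely produces on demand.
"""
import unittest
from unittest import mock

def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return {"raw": f.read()}
    except OSError:
        return {"raw": "", "error": True}  # the error-handling path

class LoadConfigTest(unittest.TestCase):
    def test_read_failure_is_handled(self):
        # Inject the fault: make open() raise, exercising the except branch.
        with mock.patch("builtins.open", side_effect=OSError("disk failure")):
            result = load_config("settings.ini")
        self.assertTrue(result["error"])

if __name__ == "__main__":
    unittest.main()
```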

3. Make Tests Easy to Run, and Test Results Easy to Understand

In theory, it sounds like a simple plan: make tests easy to run and the test results easy to understand. In practice, however, this can be a challenge. Historically, different flavors of tests have been built and maintained by different engineers, often using different tools:

  • Developer Tests that are used to prove correctness of the low-level building blocks of an application
  • Integration Tests built to prove the correct functioning of complete sub-systems
  • System Tests to prove correctness from an end-user point of view


Figure 3: The key to enabling this workflow is a common test collaboration platform, which captures all tests along with their pre-conditions and expected results.

When tests are partitioned this way, each flavor of test is owned and maintained by a different group of engineers rather than being shared across all members of the development team. In fact, in many organizations it is simply not possible for a QA engineer to run a Developer Test, or for a developer to run a System Test.

In order to improve software quality, these barriers need to be broken down so that any member of the development team can run any test, at any time, on any version of the application. The key to enabling this workflow is a common test collaboration platform that captures all tests along with their pre-conditions and expected results.

Engineers should be able to run a single test, or all tests with the “click of a button.” In addition, it is essential that engineers are able to quickly debug failing tests.
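One low-tech way to approximate that "button" is a single entry point that any engineer can run, regardless of which flavor of test they own. A minimal sketch, assuming a pytest-based suite and a hypothetical tests/unit, tests/integration, tests/system layout:

```python
"""Minimal sketch: one entry point for every test flavor.

Assumes pytest is installed and tests live in hypothetical
tests/unit, tests/integration, and tests/system directories.
Usage: python run_tests.py [unit|integration|system|all]
"""
import sys
import pytest

SUITES = {
    "unit": ["tests/unit"],
    "integration": ["tests/integration"],
    "system": ["tests/system"],
    "all": ["tests"],
}

if __name__ == "__main__":
    suite = sys.argv[1] if len(sys.argv) > 1 else "all"
    if suite not in SUITES:
        sys.exit(f"unknown suite: {suite}")
    # -v prints each test name, so a failing test is easy to spot and debug.
    sys.exit(pytest.main(["-v", *SUITES[suite]]))
```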

4. Implement Automated, Parallel, and Change-Based Testing

Once testing completeness is improved by code coverage analysis, and tests are deployed across the entire organization, the next step is to ensure that tests run fast. One of the reasons that tests are partitioned between multiple groups is that a complete system test might take hours, or possibly days, to run.

How can we decrease test time while still ensuring testing completeness? By building a testing infrastructure that is scalable, using parallel and change-based testing.

Individual tests must be atomic, small, and fast. Too often, test suites become tightly coupled over time, with new tests simply being inserted into existing ones. This makes tests fragile and test maintenance time-consuming. A simple rule to keep in mind when designing tests is that each test should define its own pre-conditions -- not rely on the output of other tests -- as sketched below.


Figure 4: Each test should define its own pre-conditions, not rely on the output of other tests.
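As a sketch of the principle, the pytest fixture below builds its own pre-conditions, so each test runs correctly alone, in any order, or in parallel (the Cart class is invented for illustration):

```python
"""Minimal sketch: each test builds its own pre-conditions.

Cart is a hypothetical class used only for illustration.
"""
import pytest

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total_items(self):
        return len(self.items)

@pytest.fixture
def cart_with_two_items():
    # The pre-condition is created here, not inherited from another test.
    cart = Cart()
    cart.add("book")
    cart.add("pen")
    return cart

def test_add_increments_count(cart_with_two_items):
    cart_with_two_items.add("lamp")
    assert cart_with_two_items.total_items() == 3

def test_fresh_fixture_is_unaffected(cart_with_two_items):
    # Runs in any order, in parallel, or alone -- no shared state.
    assert cart_with_two_items.total_items() == 2
```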

Beyond the benefits of test maintenance, re-architecting tests to be atomic enables:

  • Change-Based Testing, running only those tests affected by each software change
  • Parallel Test Execution, running hundreds of individual tests at the same time

While many organizations have developed a software build system that allows for unattended incremental application building, most have not implemented incremental testing. Too often, testing is performed periodically rather than constantly and incrementally with complete automation.

Change-Based Testing

Change-based testing (CBT) analyzes each set of changes to the code base and intelligently chooses the subset of all tests that are affected by those changes. This delivers complete testing in a fraction of the time of a full test run. In addition, change-based testing provides an accessible means of implementing a rigorous continuous integration (CI) development process: during the check-in phase of CI, CBT offers an efficient way to verify the build and detect problems early.
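A naive sketch of the selection step, mapping changed source files to test modules by a naming convention. The src/-to-tests/ layout here is an assumption for illustration; real CBT tools derive the mapping from recorded per-test coverage data rather than file names:

```python
"""Naive sketch of change-based test selection.

Assumes a convention where src/foo.py is covered by tests/test_foo.py.
"""
import pathlib
import subprocess
import sys

import pytest

# Files changed since the last commit, according to git.
diff = subprocess.run(
    ["git", "diff", "--name-only", "HEAD~1"],
    capture_output=True, text=True, check=True,
)
changed = [pathlib.Path(line) for line in diff.stdout.splitlines()]

# Select only the test modules that (by convention) cover changed files.
selected = [
    t for t in (
        f"tests/test_{p.stem}.py"
        for p in changed
        if p.suffix == ".py" and p.parts[:1] == ("src",)
    )
    if pathlib.Path(t).exists()
]

if selected:
    sys.exit(pytest.main(selected))
print("No affected tests -- nothing to run.")
```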

Parallel Test Execution

To improve speed even further, parallel testing -- integrating the test platform with a continuous integration server and virtualized test machines -- can reduce total test times from hours to minutes, or from minutes to seconds.
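A minimal sketch of the idea using only the Python standard library, fanning independent test groups out across worker processes. The directory names are assumptions, and in practice a runner such as pytest-xdist handles this scheduling for you:

```python
"""Minimal sketch: run independent test groups in parallel.

The tests/unit, tests/integration, tests/system paths are hypothetical.
Because the tests are atomic (see above), they can run concurrently.
"""
import subprocess
import sys
from concurrent.futures import ProcessPoolExecutor

GROUPS = ["tests/unit", "tests/integration", "tests/system"]

def run_group(path: str) -> int:
    # Each group runs in its own process, so wall-clock time is roughly
    # the slowest group rather than the sum of all groups.
    return subprocess.run([sys.executable, "-m", "pytest", path]).returncode

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        codes = list(pool.map(run_group, GROUPS))
    sys.exit(max(codes, default=0))
```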

5. Constantly Refactor Code Bases to Improve Maintainability

Code refactoring is the process of restructuring application components without changing their external behavior (API). Without refactoring, application code becomes overly complicated and hard to maintain over time. As new features and bug fixes are bolted onto existing functionality, the original elegant design is often a casualty.

Code refactoring improves code readability and reduces complexity, which in turn reduces maintenance cost. Executed well, it also offers the promise of resolving hidden, dormant, or undiscovered bugs and vulnerabilities by simplifying the underlying logic and eliminating unnecessary levels of complexity.


Figure 5: Building tests that formalize the expected behavior enables organizations to confidently refactor fragile modules.

One of the biggest impediments to refactoring is the lack of tests that formalize the existing behavior.

Every application has fragile and buggy parts that developers hesitate to change for fear of breaking existing functionality. The only way to confidently refactor these fragile modules is to ensure that tests have been built to formalize the expected behavior.
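A sketch of that safety net: first pin the current behavior with a characterization test, then refactor with the test as a guard. The legacy_discount function is a hypothetical fragile module, invented for illustration:

```python
"""Minimal sketch: pin existing behavior before refactoring.

legacy_discount is a hypothetical fragile function; the test formalizes
what it does today, so a refactoring that changes behavior fails fast.
"""

def legacy_discount(price, kind):
    # Convoluted legacy logic that nobody dares to touch.
    if kind == "vip":
        if price > 100:
            return price - price * 0.2
        else:
            return price - price * 0.1
    return price

def test_characterize_legacy_discount():
    # These assertions record current behavior, not desired behavior.
    assert legacy_discount(200, "vip") == 160.0
    assert legacy_discount(50, "vip") == 45.0
    assert legacy_discount(50, "basic") == 50
```

With the characterization test in place, the body of legacy_discount can be simplified freely; any change in observable behavior breaks the test immediately.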

Conclusion

Over the last thirty years, there has been a steady flow of tools, design patterns, and development paradigm shifts, many of which have promised improved quality without increased time or effort. Clearly, however, there is no silver bullet that provides this improved quality at no cost.

Improved software quality is everyone’s job, and the only sensible way to achieve it is to improve the effectiveness of software testing. The five steps presented in this article can be implemented by any development organization of any size, provided an automated, easy-to-use test collaboration platform is established for the project.

[1] Hristov, Ivan. "Chasing Heisenbugs from an Akka actor integration test with awaitility." September 16, 2012. https://honeysoft.wordpress.com/category/heisenbug/


