
Importance of Testing From Developer's Point of View


Developers should also understand the processes to follow during development to make their software reliable.


Introduction

Nowadays our world is heavily computerized; everywhere, software plays a major role and makes things simple. So it is every developer's responsibility to build software that is reliable and, ideally, free of defects. Testers play the major role in testing, but developers should also understand the processes to follow during development to make the software reliable.

Testing for Developers

Testing is the process of executing a program or system with the intent of finding errors. Done systematically, it uncovers various classes of errors in a minimum amount of time and with a minimum amount of effort.

Testing shows that the software appears to work as stated in the specifications, and the data collected during testing can also indicate the software's reliability and quality. However, testing can only show that defects are present; it cannot prove their absence.

Testing is an activity that must be done during the software development cycle, prior to release into production, to show that the program performs all its intended functions correctly before being released.

Testing is the process of establishing confidence that a program does what it is supposed to do. During the maintenance cycle of the software, testing remains extremely important and needs to be repeated every time the software is modified.

Quality Assurance

Quality Assurance refers to the set of planned and systematic activities that ensure processes in the organization meet standards such as ISO 9000 or SEI CMM, providing a degree of confidence in the quality of the software being released.

Quality Assurance Activity

Setting and maintaining guidelines for processes in line with international standards and performing internal audits to ensure adherence to procedures are typical QA activities. Studying software specifications, developing test plans and test cases, executing tests, creating automated test scripts, and reporting defects, by contrast, are testing activities.

Validation and Verification in Testing

Verification

Verification refers to the set of activities that ensure the software correctly implements a specific function as defined at the start of each phase. These activities are performed at every stage of the software development cycle: requirements analysis, design, coding, and so on.

Validation

Validation refers to a different set of activities that ensure the software that has been built is traceable to customer requirements. These are typically performed once coding is complete, to verify that the software adheres to the functional requirements.



When To Do Testing

Testing activities can start as soon as the SRS has been prepared: test planning can be initiated there and progressed along the SDLC through the design and coding phases by developing test designs and test cases. As soon as coding is completed, the focus can shift to test execution.

This approach of testing early in the SDLC will contribute to meeting deadlines without compromising the testing activities.

Testing Types

There are several types of testing:

  • White Box Testing
  • Black Box Testing
  • V-Model Testing
  • Unit Testing
  • Acceptance Testing
  • Integration Testing
  • Load/Stress Testing
  • Soak Testing
  • Smoke Testing
  • Volume Testing
  • Concurrency Testing
  • Regression Testing

White Box Testing

White Box Testing is testing the application at the code level. This structural, logic-driven testing on the application's source code covers areas such as:

  1. Debugging
  2. Memory leak detection
  3. Code complexity
  4. Code review
  5. Code coverage

White Box (or Glass Box) Testing is structural testing based on knowledge of the internal structure and logic, and it is usually logic-driven.

White Box Testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that:

  1. Guarantee that all independent paths within a module have been exercised at least once
  2. Exercise all logical decisions on both their true and false sides
  3. Execute all loops at their boundaries and within their operational bounds
  4. Exercise internal data structures to ensure their validity
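The path-coverage idea above can be sketched with a small example. The `clamp` function below is hypothetical (not from the article); each test exercises a different combination of decision outcomes, so together they cover every independent path:

```python
import unittest

def clamp(value, low, high):
    """Hypothetical unit under test: clamp value into [low, high]."""
    if value < low:        # decision 1
        return low
    if value > high:       # decision 2
        return high
    return value

class TestClampPaths(unittest.TestCase):
    def test_below_range(self):      # decision 1 true
        self.assertEqual(clamp(-5, 0, 10), 0)

    def test_above_range(self):      # decision 1 false, decision 2 true
        self.assertEqual(clamp(15, 0, 10), 10)

    def test_within_range(self):     # both decisions false
        self.assertEqual(clamp(5, 0, 10), 5)

if __name__ == "__main__":
    unittest.main(argv=["clamp-tests"], exit=False, verbosity=0)
```

Three test cases here guarantee that both logical decisions are exercised on their true and false sides.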

Memory Leak

Memory leak testing focuses on executing the application and attempting to find instances where it does not release or free allocated memory. This technique is valuable in program debugging as well as in testing a complete software release.

A memory leak is present whenever a program fails to free allocated memory: it allocates memory, then allocates more instead of releasing what it already holds. Memory leaks are among the most common types of defect, but they are difficult to detect. Testing for them lets errors be identified before they cause major problems such as performance degradation or a deadlock condition.

Code Complexity

Code complexity analysis aids in identifying complex areas and in evaluating the maintainability and reliability of modules.

Code Review

Code Review is used to assist in a static review of code against coding standards set within the organization.

Code Coverage

Code coverage identifies how much of the code is covered by testing. It looks at branch coverage and instruction coverage and reports the percentage of the code that has been exercised. It is a useful metric for measuring the extent of testing that has been done.

Black Box Testing

Black box testing is data-driven testing that ensures an application's functionality meets the requirements specification. The application is given inputs and the expected outputs are known, but the program code itself is unknown or irrelevant. Some techniques are:

  1. Test grouping/ Equivalence Classes
  2. Boundary analysis
  3. Worst case analysis

Black box testing is functional testing based on external specifications, without knowledge of how the system is constructed, and is usually process- and/or data-driven. It attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is a complement to, not an alternative to, white box testing.

Black box testing attempts to find errors in the following categories:

  1. Incorrect or missing functions
  2. Interface errors
  3. Errors in data structures or external database access
  4. Performance errors
  5. Initialization and termination errors

Equivalence Classes

Equivalence classes may be defined depending on the following guidelines:

  1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
  2. If an input condition specifies a member of a set, then one valid and one invalid equivalence class is defined.
  3. If an input condition is boolean then one valid and one invalid equivalence class is defined.
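Guideline 1 can be sketched directly. The helper below is illustrative (not from the article): for a range input such as a hypothetical age field accepting 1-100, it derives one representative value per equivalence class:

```python
def equivalence_classes_for_range(low, high):
    """Guideline 1: a range yields one valid and two invalid classes.
    One representative value is picked inside and just outside the range."""
    return {
        "valid":        (low + high) // 2,   # inside [low, high]
        "invalid_low":  low - 1,             # below the range
        "invalid_high": high + 1,            # above the range
    }

# Hypothetical "age" input field that accepts 1..100
classes = equivalence_classes_for_range(1, 100)
print(classes)   # three test cases instead of one per possible value
```

The payoff is economy: three representative inputs stand in for the hundred-plus values a brute-force approach would test.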

Boundary Value Analysis

BVA can be applied at both structural and functional testing levels. It defines three types of data (good, bad, and on the border) and uses values that lie on the boundary along with maximum and minimum values. The analysis always includes the plus/minus-one boundary values.

BVA leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also.
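The plus/minus-one rule translates into a one-line derivation. This illustrative helper (an assumption, not part of the article) produces the boundary test values for any integer range:

```python
def boundary_values(low, high):
    """Plus/minus-one analysis around each edge of the range [low, high]."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# Hypothetical "age" field accepting 1..100
print(boundary_values(1, 100))   # [0, 1, 2, 99, 100, 101]
```

The two invalid values (0 and 101) fall in the "bad" class from equivalence partitioning, which is why BVA is said to complement it.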

Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are the following four steps:

  1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
  2. A cause-effect graph is developed.
  3. The graph is converted to a decision table.
  4. Decision table rules are converted to test cases.
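The four steps above can be sketched for a toy login module (the causes and effect here are made up for illustration). Each row of the enumerated decision table becomes one test case:

```python
from itertools import product

# Step 1: list causes (input conditions) and the effect (action).
causes = ["valid_user", "valid_password"]

def effect(valid_user, valid_password):
    """The action column of the decision table."""
    return "login_ok" if valid_user and valid_password else "login_denied"

# Steps 3-4: enumerate every rule of the decision table and turn
# each rule into a (inputs, expected output) test case.
test_cases = []
for row in product([True, False], repeat=len(causes)):
    inputs = dict(zip(causes, row))
    test_cases.append((inputs, effect(*row)))

for inputs, expected in test_cases:
    print(inputs, "->", expected)
```

With two boolean causes the table has four rules, so exactly four test cases fall out; the graph-drawing step (step 2) is left implicit in this sketch.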

V-Model Testing

The V-model is the most widely recognized model for conducting software testing. It tracks the development cycle and the testing tasks associated with each phase. It consists of unit testing, integration testing, system testing, and acceptance testing, based on the way the software is built.

Unit Testing

Unit testing involves testing the smallest unit or block of code. A software unit is defined as a collection of code segments that make up a module or function. For a GUI application a unit can be a screen; for embedded or system-level software it can be the smallest block of code. For GUI applications, let us see what constitutes unit testing.

Unit Testing for GUI application comprises:

  • Verifying field level validations (for example a Name field should accept only a string, an age field only a numeric between 1 and 100 and so on)
  • Application Logic (for example Interest Rate calculations and Insurance Premium calculations based on input data in that unit)
  • GUI/Cosmetic Issues (for example size and position of windows, bitmap size, size and position of the form)
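The field-level validations in the first bullet can be sketched as plain unit tests. The two validators below are hypothetical stand-ins for a GUI form's rules (a string-only Name field, an Age field accepting 1-100):

```python
import unittest

def validate_name(value):
    """Field-level rule: Name accepts only alphabetic strings."""
    return isinstance(value, str) and value.isalpha()

def validate_age(value):
    """Field-level rule: Age accepts only an integer between 1 and 100."""
    return isinstance(value, int) and 1 <= value <= 100

class TestFieldValidations(unittest.TestCase):
    def test_name_rejects_digits(self):
        self.assertFalse(validate_name("abc123"))

    def test_name_accepts_letters(self):
        self.assertTrue(validate_name("Selva"))

    def test_age_boundaries(self):
        self.assertTrue(validate_age(1))
        self.assertTrue(validate_age(100))
        self.assertFalse(validate_age(0))
        self.assertFalse(validate_age(101))

if __name__ == "__main__":
    unittest.main(argv=["field-tests"], exit=False, verbosity=0)
```

Note that the age test doubles as boundary value analysis: it probes 1, 100, and the plus/minus-one values around them.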

Integration Testing

Integration testing tests the application as a whole while identifying problems in communication between programs. It should exercise the interfaces among modules or units. Database updates, as well as functional problems not found by unit testing, are identified during integration testing.
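A minimal sketch of testing an interface between two units follows. Both units (a premium calculator and an in-memory stand-in for a database layer) are invented for illustration; the point is that the test drives the two together and checks that the "database update" reflects the calculation:

```python
# Unit A: business logic (hypothetical insurance-premium rule).
def calculate_premium(age, base_rate=100.0):
    return base_rate * (1.5 if age > 60 else 1.0)

# Unit B: persistence layer, simplified to an in-memory dict.
class PolicyStore:
    def __init__(self):
        self.rows = {}
    def save(self, policy_id, premium):
        self.rows[policy_id] = premium

# Integration test: exercise the interface between the two units and
# verify the stored value matches the calculated one.
store = PolicyStore()
premium = calculate_premium(age=65)
store.save("P-001", premium)

assert store.rows["P-001"] == 150.0   # calculation propagated to storage
print("integration path OK")
```

Each unit could pass its own unit tests and this could still fail, e.g. if the two sides disagreed about units or argument order; that gap is exactly what integration testing targets.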

Acceptance Testing

Acceptance testing exercises the system as a whole to ensure that it meets the stated business requirements, and it is very often performed by the customer. It checks that the system works within the defined constraints, meets the needs of the organization and the end user, and validates that the right system was built.

Load/Stress Testing

Load testing is testing the software at anticipated load levels for the purpose of identifying problems in resource contention, response times and so on.

In stress testing a considerable load is generated as quickly as possible to stress the application and analyze the maximum limit of concurrent users the application can support.
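A load test of the kind described can be sketched with the standard-library thread pool. The `handle_request` function here is a placeholder for the real operation (normally an HTTP call to the application); the harness fires concurrent "users" and collects response times:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Stand-in for the operation under load (e.g. an HTTP request)."""
    start = time.perf_counter()
    sum(range(10_000))                 # simulated work
    return time.perf_counter() - start

# Anticipated load: 20 concurrent users issuing 200 requests in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    times = list(pool.map(handle_request, range(200)))

avg_ms = sum(times) / len(times) * 1000
print(f"avg {avg_ms:.2f} ms, worst {max(times) * 1000:.2f} ms")
```

For a stress test you would raise `max_workers` and the request count until response times or error rates degrade, which reveals the maximum concurrent load the application supports.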

Soak Testing

Soak Testing involves loading the application with concurrent users over a period of time to study application performance parameters.

Smoke Testing

A smoke test focuses on automating the system components that make up the most important functionality. Instead of repeatedly re-testing everything manually whenever a new software build is received, a smoke test is used to verify the major functionality of the system.

The script automatically walks through each step that the test engineer would otherwise have done manually. A smoke test ensures that no effort is wasted in trying to test an incomplete build.
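Such a script might look like the sketch below. The three functions are hypothetical stand-ins for the real system's entry points; the script walks the critical path once and stops at the first broken step, flagging an incomplete build immediately:

```python
# Hypothetical stand-ins for the system's critical entry points.
def start_app():
    return {"status": "up"}

def login(app, user):
    return user == "admin"

def main_report(app):
    return [1, 2, 3]

def smoke_test():
    """Walk the major functionality once per build."""
    app = start_app()
    assert app["status"] == "up", "app failed to start"
    assert login(app, "admin"), "login broken"
    assert main_report(app), "main report empty"
    return "smoke PASSED"

print(smoke_test())
```

If any assertion fires, the build is rejected before the full (and expensive) test cycle begins.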

Volume Testing

Volume Testing tests the performance and behavior of software under a large volume of data in the database. In Volume Testing, application response time and general performance parameters can be checked.

Concurrency Testing

Concurrency testing executes tasks simultaneously with the purpose of finding errors due to simultaneous input, concurrent resource access, and inter-program communication.

Any scenario that demands testing for simultaneous input, such as the occurrence of deadlocks at the database layer, can be termed concurrency testing. For instance, when 20 users access a banking application simultaneously with the same username and password, only one of them should be allowed to log in.
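The 20-users scenario can be sketched with threads. The `Session` class below is an invented example; the lock serializes the check-then-set so that exactly one concurrent login attempt succeeds, and the test drives 20 simultaneous attempts against it:

```python
import threading

class Session:
    """Only one active session per account is allowed."""
    def __init__(self):
        self._lock = threading.Lock()
        self.active = set()

    def login(self, username):
        with self._lock:              # serialize the check-then-set
            if username in self.active:
                return False          # already logged in elsewhere
            self.active.add(username)
            return True

# Concurrency test: 20 simultaneous logins with the same credentials.
sessions = Session()
results = []

def attempt():
    results.append(sessions.login("alice"))

threads = [threading.Thread(target=attempt) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{results.count(True)} of {len(results)} logins succeeded")
```

Without the lock, two threads could both pass the membership check before either records its session, i.e. the race this test is designed to expose.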

Regression Testing

Regression testing is the testing of software after a modification has been made, to ensure the reliability of each release and to confirm that the changes did not introduce any new errors into the system. It applies to systems in production undergoing change as well as to systems under development.

Here the test data must be maintained, and data conversion may be required when the tests cannot use previous versions of the test data. The greater the difference between versions, the less effective the regression test, so a stable baseline version should be maintained for comparison.
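The baseline-comparison idea can be sketched in a few lines. The function and baseline values below are invented for illustration: outputs captured from the stable version are stored, and the current build is checked against them after every change:

```python
# Function under change (hypothetical business rule).
def compute(x):
    return x * 2

# Baseline: outputs captured from the last stable release.
baseline = {1: 2, 5: 10, -3: -6}

# Regression check: re-run every baseline input against the current build.
regressions = {x: compute(x) for x in baseline if compute(x) != baseline[x]}
assert not regressions, f"behavior changed for inputs: {regressions}"
print("no regressions against baseline")
```

If a change to `compute` altered any recorded output, the dict comprehension would collect the offending inputs, exactly the "did my change break anything else" question regression testing answers.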

Tips for doing Regression Testing:

  1. Control the scope of Testing.
  2. Build a reusable test bed of data.
  3. Use automated tools.
  4. Base the amount of regression testing on risk.
  5. Build a repeatable and defined process for Regression Testing.

The following is how to track a defect:

  1. Identify the test failure
  2. Report the defect
  3. Analyze the problem
  4. Reproduce the problem
  5. Fix the problem
  6. Close the problem



Published at DZone with permission of Selva Ganapathy Kathiresan, DZone MVB. See the original article here.
