The Core Activities of Performance Testing
The seven core activities that occur in a successful performance testing process, and the factors to consider for each.
There are seven core activities that occur in a successful performance testing process. Before implementing them, it is important to understand each of these activities.
The core activities are as follows:
Identification of Test Environments
This activity identifies the physical test and production environments for the software application, along with the tools and resources available to the test team. The environment, tools, and resources here refer to the configurations and settings of the hardware, software, and network. A thorough understanding of the test environment enables better test planning and design. This identification should be reviewed periodically throughout the testing process.
The key factors to consider for test environment identification are as follows:
- Hardware and machine configurations
- Network architecture and user location
- Domain Name System (DNS) configuration
- Installed software
- Software licenses
- Storage capacity and data volume
- Logging levels
- Load balancing
- Load generation and monitoring tools
- Volume and type of network traffic
- Scheduled processes, updates, and backups
- Interaction with external systems
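As a simple illustration, part of this identification can be automated: a script can snapshot each machine's configuration so it can be archived and compared across test cycles. A minimal sketch using only the Python standard library; the fields captured are illustrative, not exhaustive:

```python
import json
import os
import platform
import shutil
import socket

def snapshot_environment() -> dict:
    """Capture a basic snapshot of this machine's configuration.

    The fields below are illustrative; extend them with network topology,
    installed software versions, and logging levels for your project.
    """
    total, _, free = shutil.disk_usage("/")
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "cpu_count": os.cpu_count(),
        "python_version": platform.python_version(),
        "disk_total_gb": round(total / 1e9, 1),
        "disk_free_gb": round(free / 1e9, 1),
    }

if __name__ == "__main__":
    # Persist the snapshot so it can be diffed against later test cycles.
    print(json.dumps(snapshot_environment(), indent=2))
```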
Identification of Performance Acceptance Criteria
This step involves identifying or estimating the desired performance characteristics of the software application. It starts with capturing what stakeholders regard as good performance. The main characteristics are response time, resource utilization, and throughput.
The key factors to consider for identification of performance acceptance criteria are as follows:
- Business requirements and obligations
- User expectations
- Industry standards and regulatory compliance criteria
- Service level agreements (SLAs)
- Resource utilization limits
- Workload models
- Anticipated load conditions
- Stress conditions
- Performance indicators
- Previous releases
- Competing applications
- Optimization objectives
- Safety and scalability
- Schedule, budget, resources, and staffing
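Once agreed, acceptance criteria are most useful when encoded as explicit, machine-checkable thresholds. A minimal sketch; the threshold values below are hypothetical and should come from your SLAs and stakeholders:

```python
from dataclasses import dataclass

@dataclass
class AcceptanceCriteria:
    # Hypothetical thresholds; derive real values from SLAs and stakeholders.
    p95_response_ms: float = 800.0     # 95th-percentile response time
    max_cpu_percent: float = 75.0      # peak CPU utilization on app servers
    min_throughput_rps: float = 200.0  # sustained requests per second

def meets_criteria(p95_ms: float, cpu: float, rps: float,
                   c: AcceptanceCriteria = AcceptanceCriteria()) -> bool:
    """Return True only if every measured value satisfies its threshold."""
    return (p95_ms <= c.p95_response_ms
            and cpu <= c.max_cpu_percent
            and rps >= c.min_throughput_rps)

# Example: a run measuring 640 ms p95, 68% CPU, and 240 rps passes.
assert meets_criteria(640.0, 68.0, 240.0)
```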
Plan and Design Tests
When planning and designing tests to quantify performance characteristics, create simulations of real-world usage. Such simulations generate relevant, useful results that help the organization make informed business decisions. If real-world simulation is not the test objective, explicitly determine the most valuable usage scenarios to cover.
The key factors to consider in planning and designing tests are as follows:
- Contractually obligated usage scenarios
- Usage scenarios implied by the testing goals
- Most common usage scenarios
- Performance-critical usage scenarios
- Technically concerning usage scenarios
- Usage scenarios of concern to stakeholders
- High-visibility usage scenarios
- Business-critical usage scenarios
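To turn scenarios like these into an executable design, many teams express them as a weighted workload model in a load testing tool. Below is a minimal sketch using the open-source Locust tool; the endpoints, task weights, and think times are hypothetical placeholders, not recommendations:

```python
from locust import HttpUser, task, between  # pip install locust

class StorefrontUser(HttpUser):
    """Hypothetical workload model: task weights approximate how often
    real users perform each usage scenario."""
    wait_time = between(1, 5)  # think time between actions, in seconds

    @task(10)  # most common scenario
    def browse_catalog(self):
        self.client.get("/catalog")

    @task(3)   # performance-critical scenario
    def search(self):
        self.client.get("/search", params={"q": "widget"})

    @task(1)   # business-critical scenario
    def checkout(self):
        self.client.post("/checkout", json={"cart_id": "demo"})

# Run with: locust -f this_file.py --host=https://test.example.com
```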
Configuration of Test Environment
Innumerable issues arise from network, hardware, server operating system, and software incompatibilities. Configuration of the test environment should therefore start early, so that such issues are resolved before testing begins. Periodic reconfiguration, updates, and enhancements should also be carried out throughout the project lifecycle.
The key factors to consider in configuring the test environment are as follows:
- Determine the maximum load that can be generated before a load-generation bottleneck is reached.
- Verify that the system clocks of all machines from which data is collected are synchronized.
- Validate load-testing accuracy against the different hardware components.
- Validate load-testing accuracy against server clusters.
- Validate the distribution of load by monitoring resource utilization across servers.
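For the last point, one straightforward approach is to sample utilization on each server during a run and compare the samples across machines. A minimal sketch using the third-party psutil package; run it on every server under test:

```python
import time
import psutil  # third-party: pip install psutil

def sample_utilization(duration_s: int = 60, interval_s: int = 5) -> None:
    """Print CPU and memory utilization at a fixed interval.

    Comparing these samples across servers reveals whether the load
    balancer is distributing work evenly.
    """
    end = time.time() + duration_s
    while time.time() < end:
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval
        mem = psutil.virtual_memory().percent
        print(f"{time.strftime('%H:%M:%S')} cpu={cpu:5.1f}% mem={mem:5.1f}%")

if __name__ == "__main__":
    sample_utilization()
```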
Implementation of Test Design
The biggest challenge in implementing a test design is executing a realistic test with simulated data in such a way that the application under test cannot distinguish simulated data from real data.
The key factors to consider for implementation of test design are as follows:
- Ensure the correct implementation of test data feeds.
- Ensure the correct implementation of transaction validation.
- Ensure the correct handling of hidden fields and special data.
- Validate the key performance indicators.
- Ensure that variables for request parameters are populated properly.
- Consider wrapping requests in the test scripts so that the response time of each request can be measured.
- Prefer adapting the script to match the designed test over changing the test to match the script.
- Evaluate the generated results against the expected results to validate the script development.
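As an example of a test data feed, each simulated user can draw a distinct record from a shared pool, so the application never sees suspiciously repetitive data. A minimal sketch, assuming a hypothetical users.csv file:

```python
import csv
import itertools
import threading

class CsvDataFeed:
    """Thread-safe round-robin feed of test records from a CSV file.

    'users.csv' is a hypothetical file with columns such as
    username,password; replace it with your project's data source.
    """
    def __init__(self, path: str):
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        self._cycle = itertools.cycle(rows)  # reuse records when exhausted
        self._lock = threading.Lock()        # virtual users run concurrently

    def next_record(self) -> dict:
        with self._lock:
            return next(self._cycle)

# Usage (per virtual user):
#   feed = CsvDataFeed("users.csv")
#   record = feed.next_record()
#   client.post("/login", data=record)
```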
Execute Tests
The process of executing tests depends on the tools, resources, and environment available. It is a combination of the following tasks:
- Coordinating the execution of tests
- Validating the tests, configurations, and data environments
- Executing the tests
- Monitoring and validating the scripts and data during execution
- Reviewing the results on test completion
- Archiving the tests, test data, test results, and related information for later use
- Logging activity times for later identification
The key factors to consider while executing tests are as follows:
- Validate that test executions produce complete and usable data.
- Validate the use of correct data values for a realistic simulation of the business scenario.
- Limit the number of test execution cycles and review the results after each cycle.
- Execute the same test multiple times to determine the factors that account for differences between runs.
- Observe any unusual behavior during test execution.
- Alert the team before executing tests.
- Do not run extra processes on the load-generating machines while generating load.
- Simulate ramp-up and cool-down periods.
- A test execution can be stopped once a point of diminishing returns is reached.
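Ramp-up and cool-down periods are typically expressed as a load profile that the load generator follows over time. A minimal sketch of such a profile function; the durations and user counts are hypothetical:

```python
def target_users(elapsed_s: float,
                 ramp_up_s: float = 300,
                 steady_s: float = 1800,
                 cool_down_s: float = 300,
                 peak_users: int = 500) -> int:
    """Return how many virtual users should be active at a given time.

    The profile ramps linearly up to peak_users, holds steady, then
    ramps back down; all parameter values are illustrative.
    """
    if elapsed_s < ramp_up_s:                        # ramp-up phase
        return int(peak_users * elapsed_s / ramp_up_s)
    if elapsed_s < ramp_up_s + steady_s:             # steady-state phase
        return peak_users
    remaining = ramp_up_s + steady_s + cool_down_s - elapsed_s
    return max(0, int(peak_users * remaining / cool_down_s))  # cool-down

# e.g. halfway through ramp-up, 250 of 500 users should be active.
assert target_users(150) == 250
```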
Analyze, Report, and Retest
The main aim of executing tests goes beyond collecting results. Conclusions need to be derived from the results, along with consolidated data to support those conclusions. This process requires analysis, comparison, and reporting.
The key factors to consider are as follows:
- Analyze the data both individually and as part of the collective results.
- Analyze and compare the results to determine whether the application under test is trending toward or away from its performance objectives.
- If any fixes are made, validate each fix by repeating the test.
- Share the test results and make the raw data available to the team.
- Modify the tests if they do not meet the desired objective.
- Exercise caution when reducing the amount of test data so that valuable data is not lost.
- Report early and often.
- Report visually and intuitively.
- Consolidate data correctly and summarize it effectively.
- Intermediate reports should include priorities, issues, and limitations for the next execution cycles.
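Consolidating data usually means reducing raw response-time samples to a few headline statistics such as the mean, the 95th percentile, and throughput. A minimal sketch using only the Python standard library; the sample values are fabricated purely for illustration:

```python
import statistics

def summarize(response_times_ms: list[float], duration_s: float) -> dict:
    """Reduce raw samples to the headline numbers for a report."""
    cuts = statistics.quantiles(response_times_ms, n=100)  # percentile cuts
    return {
        "samples": len(response_times_ms),
        "mean_ms": round(statistics.fmean(response_times_ms), 1),
        "p95_ms": round(cuts[94], 1),  # 95th percentile
        "max_ms": max(response_times_ms),
        "throughput_rps": round(len(response_times_ms) / duration_s, 1),
    }

# Illustrative samples only; real data comes from the load tool's logs.
print(summarize([120, 135, 150, 180, 220, 240, 300, 410, 520, 900], 10.0))
```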
Conclusion
The above activities occur at different stages of the testing process. It is very important to understand the objective of each activity in detail so that each can be designed to best fit the project context.