
Establishing a Performance Testing Strategy

Establish your strategy.

By Sam Kent · Oct. 28, 2019

Strategy

It's important to have a strategy when testing performance.

Establishing a performance testing strategy is the first and most important step in performance testing. It helps you define:

  • The Performance Testing Scope.
  • The Load Policy.
  • The Service Level Agreements (SLAs) and Service Level Objectives (SLOs); a small sketch follows this list.
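To make the SLA/SLO item concrete, here is a minimal, purely illustrative sketch of capturing objectives as plain data that a test harness can later compare measurements against. The transaction names and thresholds are hypothetical, not taken from any real system.

```java
import java.time.Duration;
import java.util.List;

// Hypothetical example: SLOs captured as data so a test harness can check
// measured results against agreed targets. Names and thresholds are invented.
public class SloCatalog {

    // One SLO per business transaction: a target response time at the 95th
    // percentile, plus the minimum throughput the transaction must sustain.
    record Slo(String transaction, Duration p95ResponseTime, double minCallsPerSecond) {}

    static final List<Slo> SLOS = List.of(
            new Slo("login",         Duration.ofMillis(800), 50.0),
            new Slo("searchCatalog", Duration.ofSeconds(2),  120.0),
            new Slo("submitOrder",   Duration.ofSeconds(3),  20.0)
    );

    public static void main(String[] args) {
        SLOS.forEach(slo -> System.out.printf("%-15s p95 <= %d ms, >= %.0f calls/s%n",
                slo.transaction(), slo.p95ResponseTime().toMillis(), slo.minCallsPerSecond()));
    }
}
```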

It's never possible to test everything, so conscious decisions about where to focus the depth and intensity of testing must be made. Typically, the most fruitful 10-15% of test scenarios uncover 75-90% of the significant problems.

Risk-Based Testing

Risk assessment provides a mechanism by which you can prioritize the test effort. It helps determine where to direct the most intense and deep test efforts and where to deliberately go lighter, conserving resources for the more demanding scenarios. Risk-based testing can identify significant problems earlier in the process by focusing effort on the riskiest aspects of a system; a simple prioritization sketch follows the list of typical risk areas below.

For most systems, performance and robustness problems occur in these areas:

  • Resource-intensive features.
  • Timing-critical/sensitive uses.
  • Probable bottlenecks (based on internal architecture/implementation).
  • Customer/user impact, including visibility.
  • Previous defect history (observations noted across other similar systems during live operation).
  • New/modified features and functionality.
  • Heavy demand (heavily used features).
  • The complexity of the feature set.
  • Exceptions.
  • Troublesome (e.g., poorly built/maintained) system components.
  • Platform maintenance.
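As one way to make that prioritization explicit, here is a small, purely illustrative sketch that scores candidate risk areas by likelihood and impact and ranks them. The areas and ratings are invented; the point is that an explicit score makes the prioritization discussion visible.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative only: a simple likelihood-times-impact score used to rank risk areas.
public class RiskRanking {

    record RiskArea(String name, int likelihood, int impact) {   // ratings on a 1-5 scale
        int score() { return likelihood * impact; }
    }

    public static void main(String[] args) {
        List<RiskArea> areas = List.of(
                new RiskArea("Checkout (resource-intensive, revenue-critical)", 4, 5),
                new RiskArea("Nightly report generation (probable bottleneck)",  3, 3),
                new RiskArea("Rarely used admin screens",                        2, 1)
        );

        // Highest score first: these areas get the deepest, most intense testing.
        areas.stream()
             .sorted(Comparator.comparingInt(RiskArea::score).reversed())
             .forEach(a -> System.out.printf("score %2d  %s%n", a.score(), a.name()));
    }
}
```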

Here is a list of questions from industry expert Ross Collard to help you identify different performance risks:

Situational View

  • Which areas of the system operation, if they have inadequate performance, most impact the bottom line (revenues/profits)?
  • Which uses of the system are likely to consume significant resources per event, regardless of how frequently the event occurs? Resource consumption should be significant for each individual event, not merely high in aggregate because the event happens frequently and the total number of events is therefore high.
  • Which areas of the system can be tested minimally for performance without imprudently increasing risk, in order to conserve test resources for the areas that need heavy testing?

Systems View

  • Which system uses are timing-critical/sensitive?
  • Which uses are most popular (i.e., happen frequently)?
  • Which uses are most conspicuous (i.e., have high visibility)?
  • What circumstances are likely to cause a heavy demand on the system from external users (e.g., remote visitors to a public website who are not internal employees)?
  • Are there any notably complex functions in the system (e.g., exception handling)?
  • Are there any areas in which new and immature technologies have been used, or unknown/untried methodologies?
  • Are there any other background applications sharing the same infrastructure, and are they expected to interfere or compete significantly for system resources (e.g., shared servers)?

Intuition/Experience

  • What can we learn from the behavior of the existing systems being replaced, such as their workloads and performance characteristics? How can we apply this information to the testing of the new system?
  • What has been your prior experience with other similar situations? Which features, design styles, sub-systems, components, or system aspects have typically encountered performance problems? If you have no experience with other similar systems, skip this question.
  • What combinations of the factors identified in the previous questions deserve a high test priority? What activities are likely to happen concurrently, causing heavy load/stress on the system?
  • Based on your understanding of the system architecture and support infrastructure, where are the likely bottlenecks?

Requirements View

  • Under what circumstances is heavy internal demand likely (e.g., by the internal employees of a website)?
  • What is the database archiving policy? How much data is added per year?
  • Does the system need to be available 7×24?
  • Are there maintenance tasks running during business hours?

The answers to these questions will help identify:

  • Areas that need to be tested.
  • Test types required to validate app performance.

Component Testing

Once the functional areas required for performance testing have been identified, decompose the business steps into technical workflows that expose the underlying technical components.

Why should business actions be split into components? Since the goal is to test performance at an early stage, listing all the important components helps define a performance testing automation strategy. Once a component has been coded, it makes sense to test it separately to measure (see the sketch after this list):

  • Response time of the component.
  • Maximum calls/second that the component can handle.
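Here is a minimal, single-threaded sketch of such a component-level measurement. The callComponent method is a stand-in for the real call (an API client, a JMS send, a service method), so the numbers only become meaningful once that placeholder is replaced; a concurrent driver would be needed to find the true maximum calls per second.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: measure a component's response time and sequential throughput.
public class ComponentBenchmark {

    // Placeholder for the component under test.
    static void callComponent() throws InterruptedException {
        Thread.sleep(5); // simulate work; replace with the real component call
    }

    public static void main(String[] args) throws Exception {
        int iterations = 500;
        List<Long> latenciesMs = new ArrayList<>();

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            long t0 = System.nanoTime();
            callComponent();
            latenciesMs.add((System.nanoTime() - t0) / 1_000_000);
        }
        double elapsedSeconds = (System.nanoTime() - start) / 1_000_000_000.0;

        Collections.sort(latenciesMs);
        long p95 = latenciesMs.get((int) (latenciesMs.size() * 0.95) - 1);
        double callsPerSecond = iterations / elapsedSeconds;

        System.out.printf("p95 response time: %d ms, throughput: %.1f calls/s%n", p95, callsPerSecond);
    }
}
```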

Moreover, component testing works at the level of JMS queues, APIs, services, messages, and so on, which allows scenarios to be created and maintained easily. Another major advantage of this strategy is that the components' interfaces are less likely to be affected by technical updates. Once a component scenario is created, it can be included in the build process to provide feedback on the performance of the current build.
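One common way to wire such a scenario into the build is a test that fails when the component exceeds its response-time budget. The sketch below assumes JUnit 5 is on the classpath; the budget value and the measurement method are placeholders, not part of any real project.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Sketch of a performance gate that runs as part of the build: the component
// scenario fails the build when it regresses past its budget.
class ComponentPerformanceGateTest {

    static final long P95_BUDGET_MS = 200; // hypothetical SLA for this component

    long measureP95Millis() {
        // In a real project this would replay the recorded component scenario
        // (HTTP, JMS, etc.) and compute the 95th percentile; hard-coded here.
        return 120;
    }

    @Test
    void componentStaysWithinItsResponseTimeBudget() {
        long p95 = measureP95Millis();
        assertTrue(p95 <= P95_BUDGET_MS,
                "p95 was " + p95 + " ms, budget is " + P95_BUDGET_MS + " ms");
    }
}
```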

After each sprint, it is necessary to test the assembled application by running realistic user tests that span several components. Even if the individual components have been tested, it is still mandatory to measure (a small concurrency sketch follows this list):

  • System behavior with several business processes running in parallel.
  • Real user response time.
  • Sizing/availability of the architecture.
  • Caching policy.
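As a rough illustration of the first two points, the sketch below runs two invented user journeys in parallel from a pool of virtual users and reports the latency percentiles a real user would perceive. The journeys, user counts, and sleep-based workloads are all placeholders standing in for scenarios built from the components tested earlier.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.*;

// Sketch: several business processes running in parallel, with user-perceived latencies.
public class AssembledLoadSketch {

    static void browseCatalog() throws InterruptedException { Thread.sleep(20); } // placeholder journey
    static void placeOrder()    throws InterruptedException { Thread.sleep(50); } // placeholder journey

    public static void main(String[] args) throws Exception {
        int virtualUsers = 20;
        int iterationsPerUser = 25;
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        List<Long> latenciesMs = Collections.synchronizedList(new ArrayList<>());

        List<Future<?>> futures = new ArrayList<>();
        for (int u = 0; u < virtualUsers; u++) {
            final int user = u;
            futures.add(pool.submit(() -> {
                for (int i = 0; i < iterationsPerUser; i++) {
                    long t0 = System.nanoTime();
                    try {
                        if (user % 2 == 0) browseCatalog(); else placeOrder(); // mix of journeys
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                    latenciesMs.add((System.nanoTime() - t0) / 1_000_000);
                }
            }));
        }
        for (Future<?> f : futures) f.get();
        pool.shutdown();

        List<Long> sorted = new ArrayList<>(latenciesMs);
        Collections.sort(sorted);
        System.out.printf("samples=%d  median=%d ms  p95=%d ms%n",
                sorted.size(),
                sorted.get(sorted.size() / 2),
                sorted.get((int) (sorted.size() * 0.95) - 1));
    }
}
```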

The testing effort becomes more complex as the project progresses. Early on, the focus is on application quality; later, it shifts to the target environment, architecture, and network. This means that performance testing objectives will vary depending on where the project is in its timeline.

Performance Testing Environment

It is imperative that the system being tested is properly configured and that the results obtained are representative of the production system. Environment and setup-related considerations should remain top of mind during test strategy development. Here are a few:

  • What data is being used? Is it real production data, artificially generated data, or just select random records? Does the volume of data match production volume forecasts? If not, what is the difference?
  • How are users defined? Are accounts set with the proper security rights for each virtual user, or will a single administrator ID be re-used?
  • What are the differences between the production and the test environments? If the test system is just a subset of production, can the entire load or just a portion of that load be simulated?

Ideally, the test environment replicates the production environment as closely as possible (some differences may remain, which is acceptable). Even if tests are executed in the production environment with actual production data, the results would only represent one moment in time; other conditions and factors would need to be considered as well.

Devise a Test Plan

The test plan is a document describing the performance strategy. It should include:

  • Performance risk assessments highlighting performance requirements.
  • Performance modeling (an explanation of the logic used to calculate the load policy).
  • Translation of the primary user journey into components.
  • Description of all other user journeys with specific think time/business transaction metrics.
  • Dataset(s).
  • SLA(s).
  • Descriptions of the tests to be executed to validate performance.
  • Testing environments.

The test plan is a key artifact of a well-designed and executed performance testing strategy, acting as evidence that a team has satisfactorily accounted for the critical role performance plays in the final end-user experience.

In many cases, project teams ensure that performance test plans are delivered as part of feature work during planning and development cycles by requiring them as "Definition of Ready" ingredients. Though each feature story or use case may not require the same level of performance testing, making the thought process a hard requirement for completion leads to better systems and an improved mindset about the end-to-end quality of what the team delivers.



