
Queuing Theory for Non-Functional Testing

In this article, explore the basics of queuing theory for non-functional software testing, its benefits and limitations, a case study, and tools to use.

By Stelios Manioudakis, PhD · Apr. 08, 24

Queuing theory is a branch of mathematics that analyzes how waiting lines (queues) form and behave in systems. In the context of non-functional software testing, it provides a valuable tool for understanding how a system performs under varying loads. By analyzing queue lengths, waiting times, and server utilization, queuing models can help predict potential bottlenecks and performance issues before they occur in real-world use.

This article starts with the basics of queuing theory for non-functional software testing and then addresses its benefits and limitations. As a case study, we will explore a sample of queuing models that could be appropriate for mobile gaming applications. Finally, we will look at a set of tools that can be used, along with their pros and cons.

Key Concepts in Queuing Theory

Queuing theory provides mathematical models that can be used for non-functional testing. First, we will explain the basic queuing concepts needed to use them; a short sketch after the list below shows how these quantities combine in the standard M/M/1 formulas.

  • Arrival rate (λ): This refers to the average number of tasks or requests entering the system per unit of time. For example, it could represent the number of customers arriving at a bank per minute or the number of network packets arriving at a router per second.
  • Service rate (μ): This represents the average number of tasks a resource can complete per unit of time; its reciprocal, 1/μ, is the average service time. In a bank, 1/μ might be the average time a teller spends with a customer. In a network, it could be the average time it takes to process a data packet.
  • Queue length (L): This refers to the number of tasks waiting for service at any given time. It is directly related to the arrival rate, service time, and the number of available resources.
  • Number of servers (S): This refers to the resources available to handle tasks. In a bank, it's the number of tellers available to serve customers. In a network, it could be the number of processing cores in a server or the number of available network channels.
  • Queue discipline: This defines how tasks are selected from the queue for service. Some common disciplines include:
    • First-In-First-Out (FIFO): The first task to enter the queue is the first to be served. This is often used in situations like checkout lines or waiting rooms.
    • Priority queue: Certain tasks are assigned a higher priority and are served before lower-priority tasks, even if they arrive earlier. This is used in situations where certain tasks are critical and require immediate attention.
    • Shortest Processing Time (SPT): The task with the shortest expected service time is served first. This can be beneficial for minimizing average waiting times overall.
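
To make these quantities concrete, here is a minimal Python sketch of the standard steady-state M/M/1 formulas (Poisson arrivals, exponential service times, a single server, and λ < μ); the arrival and service rates at the bottom are illustrative only.

```python
# Minimal M/M/1 sketch: steady-state metrics from an arrival rate and a service rate.
# Assumes Poisson arrivals, exponential service times, one server, and lambda < mu.

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    if arrival_rate >= service_rate:
        raise ValueError("Unstable system: arrival rate must be below the service rate")
    rho = arrival_rate / service_rate                 # server utilization
    l_system = rho / (1 - rho)                        # average number of tasks in the system
    w_system = 1 / (service_rate - arrival_rate)      # average time a task spends in the system
    l_queue = rho ** 2 / (1 - rho)                    # average number of tasks waiting in the queue
    w_queue = l_queue / arrival_rate                  # average waiting time in the queue
    return {"utilization": rho, "L": l_system, "W": w_system, "Lq": l_queue, "Wq": w_queue}

# Illustrative numbers: 8 requests per second arriving at a server that completes 10 per second.
print(mm1_metrics(arrival_rate=8.0, service_rate=10.0))
```

With these illustrative rates, utilization is 0.8 and an average of four tasks are in the system at any time, each spending half a second there.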

Applications in Non-Functional Testing 

A sample of applications in non-functional software testing includes the following:

Load Testing

By simulating realistic user loads with specific arrival rates and service times based on queuing models, load testing tools can evaluate system performance under stress. This helps identify potential bottlenecks like overloaded servers, slow database queries, or inefficient code execution. By analyzing queue lengths and waiting times during the load test, you can pinpoint areas where the system struggles and implement improvements before deployment.
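
As a rough illustration of how a queuing assumption can drive load generation, the sketch below draws exponentially distributed inter-arrival times so that simulated requests arrive as a Poisson process with a chosen arrival rate, rather than at a fixed interval. The `fire_request` function is a hypothetical placeholder for whatever the load tool would actually send, and the rate and duration are arbitrary.

```python
import random
import time

def fire_request() -> None:
    # Hypothetical placeholder: a real load test would issue an HTTP call,
    # publish a message, trigger a game action, etc.
    pass

def run_open_load(arrival_rate: float, duration_s: float) -> int:
    """Send requests as a Poisson process with the given arrival rate (requests/second)."""
    sent = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        # Exponentially distributed gap with mean 1/arrival_rate seconds.
        time.sleep(random.expovariate(arrival_rate))
        fire_request()
        sent += 1
    return sent

# Illustrative: roughly 50 requests per second for 10 seconds.
print(run_open_load(arrival_rate=50.0, duration_s=10.0))
```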

Capacity Planning

Queuing theory models can be integrated with testing tools to determine the system's breaking point or optimal resource allocation. The model can predict how the system behaves with different numbers of servers, allowing you to find the sweet spot between adequate performance and cost-effective resource utilization. This helps ensure the system can handle expected user traffic without compromising performance or incurring unnecessary costs for over-provisioning.
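
One way to turn capacity planning into a quick calculation, assuming the simple utilization formula ρ = λ / (c × μ) used later in this article, is to search for the smallest server count that keeps utilization under a chosen threshold; the rates and threshold below are illustrative.

```python
import math

def min_servers(arrival_rate: float, service_rate: float, target_utilization: float = 0.7) -> int:
    """Smallest server count c such that rho = arrival_rate / (c * service_rate) <= target."""
    if not 0 < target_utilization < 1:
        raise ValueError("target_utilization must be strictly between 0 and 1")
    return math.ceil(arrival_rate / (service_rate * target_utilization))

# Illustrative: 900 requests/second, each server handles 120/second, keep utilization <= 70%.
servers = min_servers(arrival_rate=900.0, service_rate=120.0, target_utilization=0.7)
print(servers, 900.0 / (servers * 120.0))  # chosen server count and the resulting utilization
```

With these numbers the search lands on 11 servers, which brings average utilization down to roughly 68%.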

Performance Benchmarking

Queuing models can be used to compare the performance of different system configurations or architectures. By simulating the same workload on different system setups, you can evaluate which configuration delivers the best performance in terms of waiting times and server utilization. This can be particularly helpful when choosing between different hardware or software options.

Other Useful Concepts in Queuing Theory

Little's Law

This fundamental relationship in queuing theory states that the average number of tasks in the system (L) equals the average arrival rate (λ) multiplied by the average time a task spends in the system (W): L = λW. The same relationship holds for the queue alone (Lq = λWq), so you can estimate any one of these values if you know the other two.
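
As a quick worked example with made-up numbers: if requests arrive at λ = 40 per second and each spends W = 0.25 seconds in the system on average, then about L = λ × W = 10 requests are in the system at any moment. The snippet below simply restates that arithmetic.

```python
# Little's Law: L = lambda * W (illustrative numbers).
arrival_rate = 40.0    # requests per second
time_in_system = 0.25  # average seconds a request spends in the system
print(arrival_rate * time_in_system)  # average number of requests in the system: 10.0
```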

Kendall-Lee Notation

This notation is a standardized way to describe queuing systems based on arrival distribution, service distribution, number of servers, and queue discipline. For example, M/M/1 denotes Markovian (Poisson) arrivals, Markovian (exponential) service times, and a single server. Understanding this notation helps classify different queuing models and choose the appropriate one for analysis.

Open vs. Closed Queuing Systems

Open queuing systems allow tasks to enter and leave the system. Closed queuing systems have a fixed number of tasks circulating within the system. Choosing the right model depends on the system being analyzed.

Limitations of Using Queuing Theory

As with all theories, queuing theory is based on assumptions. The benefits we can gain by employing queuing theory in non-functional testing depend largely on how realistic those assumptions are.

Simplified Assumptions

Queuing models often rely on simplifying assumptions to make the math tractable. These assumptions include:

  • Stable arrival rates and service times: Real-world systems might experience fluctuations in arrival rates and service times. Queuing models might not accurately reflect such dynamic behavior.
  • Infinite queues: In reality, queues might have a finite capacity. If the queue becomes full, new arrivals might be rejected, leading to system instability. Queuing models with finite queues can be more complex but offer a more realistic representation.

The Case of Mobile Gaming

Mobile games, especially those with online multiplayer components or microtransaction systems, often involve interactions that we can model using queuing theory. We will analyze a list of possible queuing models appropriate for mobile games. The list is not exhaustive, but it illustrates the reasoning behind choosing different models and the benefits they bring.

1. M/M/1 Queuing System With Network Latency

In mobile games with online multiplayer features, players may connect to a central server to interact with each other. This scenario can be modeled as an M/M/1 queuing system, where players are the arriving entities, and the server acts as the single server.

Incorporating network latency into the model allows developers to analyze the impact of delays on player experience and design strategies to mitigate them. Understanding the queuing behavior helps optimize server capacity and network infrastructure to minimize latency and enhance the gameplay experience.
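
A minimal sketch of this idea, assuming the game server behaves like an M/M/1 queue and that network latency can be approximated by a fixed round-trip delay added on top; all rates and the 60 ms round trip are illustrative.

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Average time a request spends at an M/M/1 server (queueing plus service), in seconds."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable system: arrival rate must be below the service rate")
    return 1.0 / (service_rate - arrival_rate)

def end_to_end_latency(arrival_rate: float, service_rate: float, network_rtt: float) -> float:
    """Rough end-to-end delay seen by a player: network round trip plus time at the server."""
    return network_rtt + mm1_response_time(arrival_rate, service_rate)

# Illustrative: 400 player actions/second, the server processes 500/second, 60 ms round trip.
print(end_to_end_latency(arrival_rate=400.0, service_rate=500.0, network_rtt=0.060))
```

Under these assumptions the server contributes 10 ms and the network 60 ms, which makes it clear where optimization effort would pay off first.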

2. M/G/1 Queuing System for In-Game Purchases

Mobile games often include in-game stores where players can make purchases using real or virtual currency. While purchase requests may still arrive roughly as a Poisson process, the time needed to process a transaction often does not follow the exponential distribution that an M/M/1 system assumes.

An M/G/1 queuing system, where the service time distribution is generalized, can be more appropriate for modeling in-game purchase transactions. Analyzing this model helps game developers optimize the payment processing system, streamline transaction flows, and manage server resources efficiently.
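
A sketch of what the M/G/1 analysis buys you, using the Pollaczek-Khinchine formula for the mean time a request waits in the queue; it needs only the arrival rate plus the mean and variance of the (general) service time, and the payment-processing numbers below are invented for illustration.

```python
def mg1_mean_wait(arrival_rate: float, mean_service: float, service_variance: float) -> float:
    """Pollaczek-Khinchine mean waiting time in queue for an M/G/1 system (seconds)."""
    rho = arrival_rate * mean_service  # server utilization
    if rho >= 1:
        raise ValueError("Unstable system: utilization must be below 1")
    second_moment = service_variance + mean_service ** 2  # E[S^2]
    return arrival_rate * second_moment / (2 * (1 - rho))

# Illustrative: 20 purchases/second; payments average 30 ms but vary widely (std. dev. 50 ms).
wait = mg1_mean_wait(arrival_rate=20.0, mean_service=0.030, service_variance=0.05 ** 2)
print(wait)  # average time a purchase request waits before processing starts, in seconds
```

At the same utilization, a highly variable service time produces a noticeably longer wait than the exponential case, which is exactly the effect the general service-time distribution captures.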

3. Finite-Capacity Queuing Models for Limited Resources

Many mobile games have limited resources, such as virtual items, game levels, or server capacity. Players may need to wait in a queue to access these resources, especially during peak usage periods.

Finite-capacity queuing models, such as the M/M/c/K model (c servers and a total system capacity of K, i.e., a waiting room of K − c places), are suitable for analyzing scenarios where there are hard constraints on the availability of resources. By understanding queue dynamics and resource utilization, developers can implement strategies to balance resource allocation, reduce wait times, and optimize player satisfaction.
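
The sketch below computes the steady-state probability that an arriving player is turned away because the system is full, using the standard M/M/c/K state probabilities; the rates, server count, and capacity are illustrative.

```python
from math import factorial

def mmck_blocking_probability(arrival_rate: float, service_rate: float, c: int, k: int) -> float:
    """Probability that an arrival finds an M/M/c/K system full and is rejected."""
    a = arrival_rate / service_rate  # offered load
    # Unnormalized steady-state probabilities p_n for n = 0..K tasks in the system.
    weights = []
    for n in range(k + 1):
        if n <= c:
            weights.append(a ** n / factorial(n))
        else:
            weights.append(a ** n / (factorial(c) * c ** (n - c)))
    return weights[k] / sum(weights)  # probability that all K places are occupied

# Illustrative: 50 join requests/second, 8 servers each handling 5/second,
# and at most 20 players in service or waiting at any one time.
print(mmck_blocking_probability(arrival_rate=50.0, service_rate=5.0, c=8, k=20))
```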

4. Dynamic Queuing Models for Matchmaking

Matchmaking algorithms are crucial for ensuring balanced and enjoyable gameplay experiences in multiplayer mobile games. These algorithms often involve queuing mechanisms to match players with similar skill levels or preferences.

Dynamic queuing models, such as the M/M/1/K queue with dynamic arrival rates or the Erlang queuing model with variable service rates, can be employed to optimize matchmaking systems. By adjusting queue parameters dynamically based on player behavior, game developers can achieve faster and fairer matchmaking, which may lead to increased player engagement and retention.

Queuing Theory Saves the Launch Day

A mobile game development company was gearing up for the highly anticipated launch of their latest game. Based on pre-registration numbers and social media buzz, they were expecting a massive influx of players on launch day. Their concern was twofold: ensuring smooth gameplay for all users and avoiding server crashes due to overload.

The development team decided to use queuing theory to create a model of their game server infrastructure.

Model Derivation

They identified the game server system as an M/M/c queuing system, meaning:

  • M: Player arrivals follow a Poisson process (random, independent arrivals).
  • M: The time it takes to process a player's request (e.g., joining a game, updating the game state) follows an exponential distribution.
  • c: The number of available game servers working in parallel, all drawing from a single shared queue.

Key Performance Metrics

Using queuing theory formulas, they calculated the following metrics:

  • Arrival rate (λ): Estimated based on pre-registration data and industry benchmarks for similar game launches
  • Service rate (μ): Derived from the average time it takes to process a player request during internal testing (the rate is the reciprocal of that average time)
  • Server utilization (ρ): ρ = λ / (c * μ). This metric indicates how busy each server is on average.

Model Analysis

The key aspect of the model was understanding how server utilization (ρ) would change with different server configurations (number of servers, "c").

  • High server utilization (ρ > 0.8): Signifies servers are overloaded, leading to potential queueing delays, slow gameplay, and increased risk of crashes
  • Low server utilization (ρ < 0.5): Indicates underutilized servers, which might be cost-inefficient, but ensures smooth gameplay

Taking Action

Using the queuing model, the team conducted a series of tests:

Scenario 1: Existing Server Configuration

The model predicted a server utilization exceeding 80% during peak launch hours, potentially leading to performance issues and frustrated players.

Scenario 2: Adding 20% More Servers

The model showed utilization dropping to around 65%, offering a significant performance improvement while maintaining some buffer for unexpected player surges.

Scenario 3: Doubling the Servers

Utilization dropped further to around 40%, but the additional server cost might not be justified if player growth was slower than anticipated.
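
A small sketch that reproduces this kind of comparison with the ρ = λ / (c × μ) formula; the peak arrival rate, per-server service rate, and baseline server count are hypothetical stand-ins, since the case study does not publish the company's actual figures.

```python
def utilization(arrival_rate: float, service_rate: float, servers: int) -> float:
    """Average per-server utilization for an M/M/c system: rho = lambda / (c * mu)."""
    return arrival_rate / (servers * service_rate)

# Hypothetical figures: 2,000 requests/second at peak, 24 requests/second per server.
peak_arrival_rate = 2000.0
per_server_rate = 24.0
baseline_servers = 100

scenarios = {
    "existing configuration": baseline_servers,
    "20% more servers": round(baseline_servers * 1.2),
    "double the servers": baseline_servers * 2,
}
for name, servers in scenarios.items():
    rho = utilization(peak_arrival_rate, per_server_rate, servers)
    print(f"{name}: c={servers}, utilization={rho:.0%}")
```

With these stand-in numbers the three scenarios come out at roughly 83%, 69%, and 42% utilization, mirroring the trade-off the team faced.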

The Decision

Based on the model's predictions, the team decided to add 20% more servers to their existing infrastructure. This provided a significant performance boost without incurring excessive costs. Additionally, they implemented auto-scaling rules to automatically provision additional servers if player traffic exceeded pre-defined thresholds.

The Outcome

Launch day arrived, and the company saw a record number of players joining the new game. However, thanks to the queuing model and proactive server scaling, the servers handled the load efficiently. Players experienced smooth gameplay without major delays or crashes.

Tool Selection

There are various testing tools available that incorporate queuing theory principles. Choosing the right tool depends on the complexity of the system, the desired level of detail, and the specific testing goals. The following list is by no means exhaustive. 

  • Microsoft Excel with queuing model add-ins:
    • Pros: Free, readily available for most users, easy to learn basic formulas
    • Cons: Limited functionality, error-prone with complex models, not ideal for large-scale testing
  • Online queuing model calculators:
    • Pros: Free, user-friendly interface, good for quick estimations
    • Cons: Limited model options, may not capture specific system details, limited customization
  • JMeter:
    • Pros: Open-source, robust load testing capabilities, supports basic queuing theory integration for user load simulation
    • Cons: Setting up queuing models can be complex, requires scripting knowledge for advanced features
  • Apache JMeter plugin - queueing theory:
    • Pros: Extends JMeter functionalities with queuing theory models, allows for analyzing server utilization and wait times
    • Cons: Relies on JMeter's learning curve, additional configuration needed for queuing features
  • AppDynamics:
    • Pros: Commercial tool with a good user interface, offers performance monitoring with queuing theory insights (queue lengths, wait times)
    • Cons: Subscription-based cost, may require training for advanced features
  • AnyLogic:
    • Pros: Powerful simulation software, integrates with queuing models to create complex scenarios, provides detailed performance reports
    • Cons: Steeper learning curve, requires modeling expertise, significant cost for commercial licenses

Wrapping Up

Queuing theory is a valuable tool for optimizing performance and resource allocation in a variety of software development scenarios. By understanding the core queuing models and their limitations, development teams can use testing tools to analyze server utilization, identify potential bottlenecks, and make data-driven decisions. Whether the task is to ensure smooth gameplay for a mobile game launch, to optimize infrastructure for a rapidly growing company, or simply to choose the best cloud platform for an application, queuing theory can empower developers to navigate the complexities of system load and create a seamless user experience.


Opinions expressed by DZone contributors are their own.
