
Technical Solutions Used for Performance Testing and Tuning

In this third segment of our Performance Research Guide, we asked 14 executives about the solutions they use for performance testing and tuning.

By Tom Smith · Jan. 04, 18 · Interview

To gather insights on the current and future state of performance testing and tuning, we talked to 14 executives involved in the space. We asked them, "What are the technical solutions you use for performance testing and tuning?" Here's what they told us:

Open Source

  • We develop a lot of open source tools for developers to use. The toolkits are easy to use for summaries and queries, there are plenty of good reference books for learning and training, and GUI tools handle monitoring and management. When it comes to database design, no tool tells you what schema to build for a particular application (e.g., e-commerce, financial, healthcare); rely on best practices. Run EXPLAIN on new queries to see how they can be optimized (a minimal sketch follows this list).
  • We use JMeter-based tests and API-driven testing in Python (see the latency-measurement sketch after this list).
  • Alyvix is open source software for synthetic monitoring; it relies on Python 2.7, OpenCV, Tesseract, and RobotFramework, and is licensed under the GNU GPL v3.
  • There are a lot of items to check in a test and tuning phase: everything from logging, monitoring, alerting, instrumenting, and profiling to, ultimately, testing needs to be covered. We use a lot of open source products such as JMeter or Gatling to simulate workload. As a load testing platform, we have the luxury of being able to eat our own dog food, which helps us identify our own performance bottlenecks and issues in production. For tuning, we tend to vary between commercial platforms that assist with the identification and analysis of defects, but we also use tools that are native to the system we are working on.
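
As a concrete illustration of the EXPLAIN advice above, here is a minimal sketch using SQLite's EXPLAIN QUERY PLAN from Python's standard library. The orders table and index are hypothetical, and production engines such as MySQL or PostgreSQL have their own EXPLAIN variants with richer output:

    import sqlite3

    # Hypothetical schema, for illustration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    # Inspect the plan before shipping a new query: a SCAN (full table
    # scan) would point at a missing or unusable index, while a SEARCH
    # using the index is what we want to see.
    query = "SELECT * FROM orders WHERE customer_id = ?"
    for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
        print(row)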
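Similarly, here is a minimal sketch of API-driven latency testing using only Python's standard library. The endpoint URL and request count are placeholders, and a real JMeter or Gatling scenario would add ramp-up, think time, and response assertions:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://example.com/api/health"  # placeholder endpoint
    REQUESTS = 50

    def timed_get(_):
        # Time one request end to end, including reading the body.
        start = time.perf_counter()
        with urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    # Fire requests from a small thread pool and collect latencies.
    with ThreadPoolExecutor(max_workers=10) as pool:
        latencies = sorted(pool.map(timed_get, range(REQUESTS)))

    # Report percentiles; tail latency usually matters more than the mean.
    print(f"p50={latencies[len(latencies) // 2]:.3f}s "
          f"p95={latencies[int(len(latencies) * 0.95)]:.3f}s")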

Proprietary

  • We use our own solutions.
  • In addition to using our own solutions, we use Zabbix and Sensu. 
  • We use our own proprietary solutions.
  • Amazon, open source, and our own solutions, which we are constantly improving and developing. Measure everything, especially cloud performance.

Other

  • Think about what you can shift left for quality and performance. Can you automate the real conditions users will see? Run tests in the lab for your benchmark, but build capabilities in the lab to replicate the real life of different personas (e.g., a frequent airline traveler). Provide a “wind tunnel” test that measures the responsiveness of the apps. Enable logs (HAR files) to be handed off to developers with information on what needs to be fixed, what was downloaded from the network, and whether anything was downloaded incorrectly (a HAR-mining sketch follows this list).

  • We use a combination of active synthetic monitoring (HTTP, VoIP) plus network-related passive and active monitoring. Active monitoring is associated with the application layer; to it we add passive monitoring of the internet and active monitoring of network paths and topology. Network monitoring typically relies on passive data, and passive monitoring of network infrastructure means you have to own that infrastructure. Organizations must shift to active monitoring because they own less of the infrastructure, and to understand what makes up the UX you need to do active monitoring.
  • Mainly Splunk log monitoring and alerting.
  • Our average customer has 3.2 testing solutions. We provide REST APIs, so we’re able to integrate with everything. Open XML; Jenkins is everywhere; there’s some Bamboo. We see Chef and Puppet in the DevOps pipeline, but only at 15% of clients; the others are using homegrown solutions. 75% of our clients are on their DevOps journey, but only 10% are fully implementing a DevOps methodology.
  • During the development of the product, automation of reproducible workloads is the most important aspect of testing. A battery of tests is executed on each beta release, but we also collect data on a limited subset of tests on a more frequent basis. These are executed on a range of machines, and where regressions are found, a deeper analysis is conducted. QA conducts independent tests on a defined schedule, as the development team may halt continual testing to debug a specific problem. My team's priority is to resolve regressions as they are found, whereas QA's priority is to identify as many of the existing regressions as possible and report them. In terms of the tools used for analysis, it depends on the workload. Usually, the first step is to identify which resources are most important for a workload and then use monitors for that area. For example, a CPU-bound workload may begin with a high-level view using top, an analysis of individual CPU usage using mpstat, and an analysis of CPU frequency usage and C-state residency using turbostat. It does not stop there: for latency issues, we may use information from /proc/ to get a high-level view of how long workloads take to be scheduled on a CPU, and ftrace to get a more detailed view of the chain of events involved when waking a thread to run on a CPU. To get an idea of where time is being spent, we would use perf, but on occasion, we'd also use perf to determine how, when, and why a workload is not running on a CPU. Depending on the situation, ftrace may be a more appropriate solution for answering that question. The tools used vary depending on the resources that are most important to a workload. IO-intensive workloads may start with iostat but can require blktrace in some instances. We avoid enabling all monitors in all situations, as excessive monitoring can disrupt the workload and mask problems. Using the data, we then form a hypothesis as to why performance may be low, propose potential solutions, and then validate them in a loop until the desired performance is achieved (a minimal monitoring wrapper is sketched after this list).
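
On the HAR-file handoff described above, here is a minimal sketch of mining a capture for developers. HAR files are plain JSON, so Python's standard library is enough; the report format is our own invention:

    import json
    import sys

    # Usage: python har_report.py capture.har (the file name is hypothetical).
    with open(sys.argv[1]) as f:
        har = json.load(f)

    entries = har["log"]["entries"]

    # The five slowest responses, plus anything that failed to download.
    slowest = sorted(entries, key=lambda e: e["time"], reverse=True)[:5]
    failures = [e for e in entries if e["response"]["status"] >= 400]

    for e in slowest:
        print(f'{e["time"]:8.0f} ms  {e["request"]["url"]}')
    for e in failures:
        print(f'FAILED ({e["response"]["status"]})  {e["request"]["url"]}')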
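And as a small illustration of the monitoring approach in the last response, here is a minimal sketch, assuming Linux with the sysstat package (which provides mpstat) installed, that samples per-CPU utilization while a workload runs. The workload command is a placeholder, and the same pattern extends to turbostat, iostat, or perf:

    import shlex
    import subprocess

    def run_with_mpstat(workload_cmd, interval=1):
        # Start mpstat in the background, one sample per `interval` seconds.
        monitor = subprocess.Popen(
            ["mpstat", "-P", "ALL", str(interval)],
            stdout=subprocess.PIPE,
            text=True,
        )
        try:
            # Run the workload to completion while mpstat keeps sampling.
            subprocess.run(shlex.split(workload_cmd), check=True)
        finally:
            monitor.terminate()
        samples, _ = monitor.communicate()
        return samples

    if __name__ == "__main__":
        # Hypothetical CPU-bound workload; substitute the real test here.
        print(run_with_mpstat("python3 -c 'print(sum(i * i for i in range(10**7)))'"))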

Here’s who we spoke to:

  • Dawn Parzych, Director of Product and Solution Marketing, Catchpoint Systems Inc.
  • Andreas Grabner, DevOps Activist, Dynatrace
  • Amol Dalvi, Senior Director of Product, Nerdio
  • Peter Zaitsev, CEO, Percona
  • Amir Rosenberg, Director of Product Management, Perfecto
  • Edan Evantal, VP, Engineering, Quali
  • Mel Forman, Performance Team Lead, SUSE
  • Sarah Lahav, CEO, SysAid
  • Antony Edwards, CTO and Gareth Smith, V.P. Products and Solutions, TestPlant
  • Alex Henthorn-Iwane, V.P. Product Marketing, ThousandEyes
  • Tim Koopmans, Flood IO Co-founder & Flood Product Owner, Tricentis
  • Tim Van Ash, S.V.P. Products, Virtual Instruments
  • Deepa Guna, Senior QA Architect, xMatters

Opinions expressed by DZone contributors are their own.
