As software professionals, we share a broadly unified understanding of what the engineering and testing skill sets are and how they differ from one another. But many of us have a difference of opinion when it comes to performance testing versus performance engineering.
Though performance testing comes under the umbrella of testing, in many respects it is very different from typical functional testing. Whether it is the effort estimation strategy, test planning, the defect management cycle, or the required tool knowledge, performance testing is quite different. From a test management perspective, the management style needs to differ considerably as well.
Performance testing is not a type of automation testing where test scripts are created using a tool and automated test runs are scheduled. In functional or automation testing, test coverage is the critical factor, whereas in performance testing, test accuracy becomes essential. Realistic simulation of end-user access patterns, both quantitatively and qualitatively, is a key factor in successful performance testing but, unfortunately, this is not measured or expressed using a metric. This has led to a state where anyone who knows how to use a performance testing tool can claim to do performance testing.
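To make "realistic simulation" concrete, here is a minimal sketch of a workload model. The transaction names, usage percentages, and target rates are illustrative assumptions, not from any specific tool: the idea is that virtual users are split across business transactions according to an assumed usage profile, and pacing is derived so a given user count produces a target transaction rate.

```python
# Hypothetical workload model: distribute virtual users across business
# transactions per an assumed usage profile, and derive per-user pacing
# to hit a target transaction rate. All numbers are illustrative.

def build_workload(total_vusers, usage_profile):
    """Split virtual users across transactions by percentage mix."""
    return {txn: round(total_vusers * pct / 100)
            for txn, pct in usage_profile.items()}

def pacing_seconds(target_tph, vusers):
    """Seconds between iterations so that `vusers` users together
    generate `target_tph` transactions per hour."""
    return vusers * 3600 / target_tph

# Assumed profile: 60% browse, 30% search, 10% checkout
profile = {"browse": 60, "search": 30, "checkout": 10}
print(build_workload(100, profile))   # {'browse': 60, 'search': 30, 'checkout': 10}
print(pacing_seconds(3600, 60))       # 60.0 seconds per iteration per user
```

Getting these numbers wrong (e.g., all users hammering one transaction with no think time) is exactly the kind of inaccuracy that invalidates a test even when the tool runs it flawlessly.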
What Is Performance Testing?
So finally, let’s get to the definition. Performance testing is a type of testing that simulates realistic end-user load and access patterns in a controlled environment in order to identify the responsiveness, speed, and stability of the system. It usually requires reporting system performance metrics like transaction response time, concurrent user load supported, server throughput, etc., along with additional software- and hardware-layer-specific metrics like browser performance, code performance, server resource utilization, etc., that help in analyzing the potential bottlenecks that affect the system's performance and scalability.
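As a sketch of what "reporting system performance metrics" means in practice, the snippet below summarizes raw response-time samples into the kind of numbers a test report carries. The sample values and the nearest-rank percentile method are illustrative assumptions; real tools compute richer statistics.

```python
import statistics

def summarize(samples_ms, window_s):
    """Summarize raw response-time samples (ms) collected over window_s seconds."""
    ordered = sorted(samples_ms)
    # Simple nearest-rank 90th percentile (illustrative, not a tool's exact method)
    p90 = ordered[int(0.9 * len(ordered)) - 1]
    return {
        "count": len(samples_ms),
        "avg_ms": statistics.mean(samples_ms),
        "p90_ms": p90,
        "throughput_tps": len(samples_ms) / window_s,
    }

# Hypothetical samples from a 5-second measurement window
samples = [120, 135, 110, 480, 150, 140, 125, 900, 130, 145]
print(summarize(samples, window_s=5))
```

Note how the two outliers (480 ms and 900 ms) barely move the average but dominate the 90th percentile, which is why percentile response times, not averages, are usually what SLAs are written against.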
So, yes, performance testers should know how to report the performance metrics of the different layers while conducting a performance test. In a 3-tier architecture, the performance of the individual tiers (web server, application server, and DB server), client-side / browser performance, network performance, server hardware performance, etc., needs to be measured and reported. This cannot be considered an engineering activity. Deep-dive analysis of why a layer-specific performance metric doesn't meet its SLA can be considered an engineering activity.
Usually, the confusion starts when it comes to performance bottleneck analysis. I agree that a thin boundary line exists: is this the job of the performance tester or the performance engineer? Here is my point of view. Sophisticated performance monitoring tools and Application Performance Management (APM) tools are used independently, or integrated with the performance testing tools themselves, to measure and monitor the performance of the various tiers (the software layer) and infrastructure server resource utilization (the hardware layer) with clearly reported metrics. Hence, it is the responsibility of the performance tester to measure and monitor the performance of the end-to-end system during performance tests and to report the observations and findings. Basic, straightforward analysis and experience-based analysis can be performed by a performance tester to reconfirm the performance problems and observations.
Now, if the findings require deep-dive analysis (say, a specific transaction or method reports a high response time, or a server resource is over-utilized and needs to be debugged further and tuned), that becomes the responsibility of the performance engineer. Application capacity planning, performance modeling and prediction analysis, infrastructure sizing analysis, etc., are also core responsibilities of a performance engineer. Measuring and monitoring the many parameters that can impact overall system performance remains the responsibility of the performance tester.
What Is Performance Engineering?
Let’s start with a definition: performance engineering is a discipline that involves systematic practices, techniques, and activities during each phase of the software development lifecycle (SDLC) to meet performance requirements. It strives to build performance in by focusing on architecture, design, and implementation choices.
Hence, performance needs to be thought about proactively throughout the software development phases, right from the requirements phase. In this proactive mode, a performance engineer is involved from the earliest SDLC phase to ensure the system is built to performance standards. Several techniques are available to validate performance at each SDLC stage, even when a testable system is not yet available.
In reactive mode, when a system is tested for its performance and found not to be scalable, i.e., it doesn't meet the non-functional requirements related to response time SLAs, user scalability levels, etc., a performance engineer usually tries to understand the metrics reported by the performance tester and performs a deep-dive analysis on the specific layer(s) that don't meet the SLAs. In this case, depending on the bottleneck reported, specific SMEs like a DB architect, WebSphere specialist, network engineer, etc., can be involved to analyze the performance issue in detail and provide tuning recommendations.
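The handoff from tester to engineer can be sketched as a simple triage step: compare the measured per-layer metrics against their SLAs and flag the layers that need deep-dive analysis. The metric names and thresholds below are illustrative assumptions, not from any specific monitoring tool.

```python
# Hypothetical SLA triage: flag layers whose measured metrics breach
# their SLA thresholds. Names and numbers are illustrative only.

slas = {"web_p90_ms": 200, "app_p90_ms": 500, "db_p90_ms": 300, "app_cpu_pct": 75}
measured = {"web_p90_ms": 180, "app_p90_ms": 820, "db_p90_ms": 290, "app_cpu_pct": 92}

def flag_for_deep_dive(measured, slas):
    """Return the metrics that breach their SLA, in SLA order."""
    return [m for m, sla in slas.items() if measured.get(m, 0) > sla]

print(flag_for_deep_dive(measured, slas))  # ['app_p90_ms', 'app_cpu_pct']
```

In this hypothetical run, the app tier breaches both its response-time SLA and its CPU threshold, so that is where the performance engineer (and perhaps an application-server SME) would focus.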
Engineering Career Path for Successful Performance Testers
Performance testers who, after gaining solid performance testing experience, develop a strong interest in problem analysis and tuning often end up on a career path into performance engineering. They are usually not experts in a specific technology; rather, they have a good understanding of what to tune under which circumstances, and they know the various parameters that must be examined and tuned to achieve performance and scalability.
These engineers usually have the skills listed below:
- Good experience in reviewing system architecture / deployment architecture and providing suggestions for better performance
- Good knowledge of developing strategies for assuring web / mobile application performance throughout the SDLC phases
- Good experience in various performance testing tools like HP LoadRunner, JMeter, NeoLoad, etc.
- Good experience in measuring/monitoring the performance of the various layers involved in the end-to-end system
- Experience in analyzing application traffic patterns using tools like Omniture, Deep Log Analyzer, etc.
- Experience in performance monitoring and APM tools like Perfmon, HP SiteScope, CA Introscope, Dynatrace, AppDynamics, etc.
- Good experience in using profiling tools like JProfiler, JProbe, JConsole, VisualVM, HP Diagnostics, etc., including GC/JVM analysis tools and heap/thread dump analysis tools
- Experience in DB profiling tools like Statspack, AWR, SQL Profiler, etc.
- Experience in front-end web performance analysis using YSlow, WebPageTest, Dynatrace AJAX Edition, PageSpeed, etc.
- Experience in performance prediction / modeling analysis during early SDLC phases
- Experience in capacity planning/sizing through queuing theory principles and tools like TeamQuest, Metron-Athene, etc.
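To give a flavor of the queuing-theory-based sizing mentioned in the last point, here is a back-of-the-envelope estimate using an M/M/1 queue. The service rate and arrival rate are assumed numbers; real sizing tools use far richer models, so this only illustrates the principle that response time degrades non-linearly as utilization climbs.

```python
# Back-of-the-envelope capacity estimate with an M/M/1 queuing model.
# For an M/M/1 queue: utilization = lambda / mu, and mean response
# time = 1 / (mu - lambda). All rates below are illustrative.

def mm1_metrics(arrival_rate, service_rate):
    """Utilization and mean response time (seconds) for an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("system is saturated: arrival rate >= service rate")
    utilization = arrival_rate / service_rate
    response_time = 1.0 / (service_rate - arrival_rate)
    return utilization, response_time

# Assumed: a server processes 50 req/s; load arrives at 40 req/s
u, r = mm1_metrics(arrival_rate=40, service_rate=50)
print(f"utilization={u:.0%}, mean response={r * 1000:.0f} ms")
```

At 40 req/s against a 50 req/s capacity, the model gives 80% utilization and a 100 ms mean response time; pushing the arrival rate to 49 req/s would drive the mean response time to a full second, which is why engineers size infrastructure to keep utilization well below saturation.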
A person with the above skill set can also be called a performance engineer; they don't necessarily need core development skills. Also, not every performance engineer will have skills across all technologies. Based on their practical experience and technical exposure, they may lean towards particular technologies, but they carry a strong understanding of what can make a system scalable and highly available.
A successful Performance Center of Excellence (PCOE) should have engineers with the above qualities. They can guide performance assurance with far more confidence than people who merely know how to execute performance tests using a tool. My sincere advice: don't label the COE as testing-only or engineering-only, because it will look incomplete from the customer's point of view. Let your customer's problem statements drive whether a project calls for performance testing or performance engineering services.
A successful PCOE should comprise both performance testers and performance engineers, complementing each other with their skill sets. From a customer's standpoint, their online business application needs to comply with its performance NFRs (non-functional requirements). To ensure this, as a performance SME, you need to do testing to measure the performance metrics and certify the system for performance and scalability, followed by deep-dive analysis and tuning only if performance SLAs are not met. Unless your COE has the capability to do both, it will not look complete from the customer's point of view.