Improving App Performance With Isolated Component Testing
Performance is just as important as functionality. Virtualized services, multiprotocol test harnesses, and synthetic data help teams execute and scale performance tests.
Today’s composite applications can have hundreds of failure points (memory leaks, socket exceptions, open connections), all compounded when third-party services and APIs are thrown into the mix, not to mention the added complexity when a request must pass through the spaghetti of a complex ESB to reach a legacy system or back-end database that is never available for testing.
It’s also not enough just to do more unit testing. Sure, it can help developers find more mistakes in their code, but it won’t tell you whether they misunderstood the requirements or whether the code will stand up to conditions they did not think about. Each component needs to be stress-tested and optimized before being integrated into the bigger system. Systems behave differently when they are put under load and connected to the rest of the infrastructure. For instance, a test case that always worked with one user may fail with 100 real concurrent users if two of the users get each other’s account information back from the middle tier.
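The cross-account failure mode described above can be reproduced with a toy sketch. This is not any real middle tier: `AccountService` is a hypothetical, deliberately non-thread-safe service that passes every single-user test but returns the wrong user's data under concurrent load.

```python
import threading
import time

class AccountService:
    """Toy middle tier with a deliberate concurrency bug: one shared,
    unlocked slot holds the 'current' user between write and read."""
    def __init__(self):
        self._current = None

    def lookup(self, user_id):
        self._current = user_id           # shared state, no lock
        time.sleep(0.001)                 # simulated back-end latency
        return {"owner": self._current}   # may now be another caller's id

def run_load(service, user_ids):
    """Fire one thread per user and count cross-account responses."""
    mismatches = []
    def call(uid):
        if service.lookup(uid)["owner"] != uid:
            mismatches.append(uid)
    threads = [threading.Thread(target=call, args=(u,)) for u in user_ids]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(mismatches)

# One user: the race never fires, so the unit-style test always passes.
assert run_load(AccountService(), ["alice"]) == 0

# 100 concurrent users: most callers get another user's id back.
crossed = run_load(AccountService(), [f"user{i}" for i in range(100)])
```

A functional test with one user gives this service a clean bill of health; only the concurrent run exposes the defect, which is exactly why component-level load testing matters.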
Including performance budgets at the component level enables teams to practice performance engineering in a manner congruent with the reality of modern composite application architectures. Each service and component must meet the goal for the aggregate performance of an application. The expected response times or service levels are “budgeted” or “decomposed” out to each component. The performance service levels are verified and enforced at the component level...but how?
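The budgeting idea can be made concrete with a small sketch. The component names and millisecond values below are illustrative assumptions, not figures from any real system; the point is that the per-component slices must sum to no more than the aggregate SLA, and each component's test gates against its own slice.

```python
# Hypothetical decomposition of a 2100 ms end-to-end SLA into
# per-component budgets (integer milliseconds avoid float drift).
SLA_MS = 2100

budgets_ms = {
    "order":    600,
    "lookup":   500,
    "price":    800,
    "overhead": 200,   # network hops, serialization, queueing
}

# The slices must never exceed the aggregate SLA.
assert sum(budgets_ms.values()) <= SLA_MS

def within_budget(component, observed_ms):
    """Gate a component-level performance test against its budget."""
    return observed_ms <= budgets_ms[component]
```

A component test then fails its build the moment `within_budget` returns `False`, long before the aggregate SLA is ever measured end to end.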
This is where a modern testing framework such as CA DevTest comes into play. Tools like DevTest combine service virtualization and a multiprotocol test harness in a single solution. The same test cases your team created for unit, functional, and regression testing can be reused for load and stress tests that run continuously. Dependencies can be virtualized under realistic load conditions with high efficiency, without the need to buy additional hardware for testing.
Consider an example from a leading telco customer that shows how performance testing only a nearly finished solution falls short. The completed solution (made up of order, look-up, and price steps) came back with a poor response time of 4.0 seconds, well over the 2.1-second SLA. What happens next? Teams typically throw more hardware at the problem, perhaps installing more servers, memory, and storage in the test lab, but more often than not they fail to solve the root problem.
In the “after” example, the team decomposed that SLA, giving each part of the solution its own performance budget to tune at the component level. CA DevTest allows teams to isolate each of those components by virtualizing the surrounding lab environments (and the expected or observed response times) of the other components in the system. Using this method, you can determine, for instance, that the pricing app is adding excess time to the overall solution; and because you set a budget for the response time, you can take individual corrective steps to tune each component in isolation, faster and at far lower infrastructure cost.
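The isolation pattern can be sketched in plain code. This is a toy stand-in, not the DevTest product: `virtual_service` is a hypothetical helper that replays an observed latency and a canned response, so the real `price` step can be exercised with its neighbors simulated (all latency values below are illustrative).

```python
import time

def virtual_service(observed_latency_s, canned_response):
    """Stand-in dependency that replays an observed response time."""
    def call(_request):
        time.sleep(observed_latency_s)
        return canned_response
    return call

# Hypothetical wiring: isolate the pricing step by virtualizing its
# neighbors with their observed latencies.
order_vs = virtual_service(0.030, {"order_id": 42})
lookup_vs = virtual_service(0.040, {"sku": "A-100"})

def price(sku):
    """The real component under test (placeholder implementation)."""
    time.sleep(0.005)            # stands in for the real pricing work
    return {"sku": sku, "price": 9.99}

# Run the whole flow; only `price` is real, yet end-to-end timing is
# realistic because the stubs replay what was observed in production.
start = time.perf_counter()
order = order_vs(None)
item = lookup_vs(order)
quote = price(item["sku"])
elapsed = time.perf_counter() - start
```

Because the stubs contribute known, fixed latencies, any overrun in `elapsed` can be attributed directly to the component under test.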
Service virtualization captures and creates realistic, highly scalable, and responsive virtual versions of any dependent system the performance lab needs to connect to. Teams keep using whatever load-testing tools they already have, with no coding and no new hardware configured to represent those systems in the performance lab.
Once a virtual service environment (VSE) has been established, anyone in the development and testing organization can access the virtual services whenever they need them. No new hardware, no extra access requests, no waiting to start testing. Multiple teams get a current, realistic way to simulate the rest of the application, so performance and load testing can be conducted at the component level just as if that component were hooked into the rest of the solution.
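The capture-and-replay idea behind a VSE can be illustrated in a few lines. This is a toy sketch, not the DevTest recorder: `real_backend` is a hypothetical stand-in for a live dependency, and the recording is just a dictionary keyed by request.

```python
# Toy record/replay stub illustrating the capture idea behind
# service virtualization.
recordings = {}

def real_backend(request):
    """Stand-in for a live dependency that is rarely available."""
    return {"request": request, "status": "ok"}

def record(request):
    """Pass the request through to the live system once, capturing
    the response for later replay."""
    response = real_backend(request)
    recordings[request] = response
    return response

def replay(request):
    """Serve the captured response; no live system required."""
    return recordings[request]

# Capture once while the back end is reachable...
record("GET /price/A-100")
# ...then every team replays it on demand, with no access requests.
replayed = replay("GET /price/A-100")
```

Once responses are captured, every test run hits the replay path, which is why no scheduling against the real back end is needed.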
In CA service virtualization, companions such as think scale, batch response, and recurring think time scale can be configured to vary the performance of a virtual service over a period of time. Think time specs specify the response latency; that latency can be scaled using a think time scale at deployment or through companions. High think time specs or think time scales can simulate slow network behavior.
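The think-time mechanics reduce to a simple formula: the sketch below is a generic illustration under assumed names (`respond`, `scale`, `jitter`), not the DevTest configuration syntax. A base think time is multiplied by a deployment-time scale factor, optionally with random jitter, before the virtual service answers.

```python
import random
import time

def respond(think_time_s, scale=1.0, jitter=0.0):
    """Delay a virtual-service response: base think time times a
    deployment-time scale factor, plus optional random jitter."""
    delay = think_time_s * scale
    if jitter:
        delay *= random.uniform(1.0 - jitter, 1.0 + jitter)
    time.sleep(delay)
    return delay

# Observed latency 10 ms; scale=5.0 simulates a slow network.
slow = respond(0.010, scale=5.0)
```

Raising the scale factor over successive runs lets a team watch how the component under test degrades as its dependencies slow down, without touching the real network.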
Running thousands of performance tests requires lots of test data, and much of that data must contain sensitive fields such as credit card numbers, social security numbers, and other PII. Using production data for testing is expensive to mask, risks exposing sensitive data, and typically provides poor coverage (10 to 20 percent).
Synthetic test data generation reduces the time wasted manually searching for or creating test data by 50 percent. It also allows you to enhance existing test data sets or create missing test data. Additionally, requirements-driven synthetic data allows you to shift testing left, spot defects earlier, and improve the quality of testing.
Synthetic test data can be injected directly into a virtual service, eliminating the need to create or maintain the service’s data manually. Referential integrity is maintained by generating the data directly from an API specification (e.g., a WSDL), which helps create more stable environments free from cross-system dependencies. In addition, on-demand testing environments can be created without risk of noncompliance, because no live data is exposed.
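A minimal sketch of synthetic generation with referential integrity, using only the standard library (the table shapes and the `make_customers` / `make_orders` helpers are assumptions for illustration): card numbers are format-valid but entirely fake, and every order is generated against an existing customer id.

```python
import random

random.seed(7)  # deterministic, so the synthetic data set is repeatable

def make_customers(n):
    """Fully synthetic customers: format-valid but fake card numbers,
    so no real PII is ever exposed."""
    return [
        {"id": i,
         "name": f"user{i:04d}",
         "card": "4" + "".join(random.choice("0123456789")
                               for _ in range(15))}
        for i in range(n)
    ]

def make_orders(customers, n):
    """Every order references a generated customer, preserving
    referential integrity across the two synthetic tables."""
    ids = [c["id"] for c in customers]
    return [{"order_id": j, "customer_id": random.choice(ids)}
            for j in range(n)]

customers = make_customers(50)
orders = make_orders(customers, 200)
```

Seeding the generator makes failed runs reproducible, and because orders are derived from the generated customers rather than sampled independently, no orphaned foreign keys can appear.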
In summary, performance is just as important as functionality in today’s 24/7 world. Component-level performance testing done prior to live integration leads to higher confidence during integration and user acceptance testing. There is no other way to ensure that the application actually fulfills a complex workflow and meets your business requirements. Virtualized services, multiprotocol test harnesses, and synthetic data give DevTest teams the tools to successfully execute and scale performance tests at the component level. CA Technologies offers many of the tools needed to automate and scale performance testing as part of a continuous testing effort, capabilities further strengthened by the acquisition of BlazeMeter. For more information on excuse-free testing, visit here.
Published at DZone with permission of Heather Peyton, DZone MVB. See the original article here.