Treating UI Testing Like a Shared Service Test
It's time to give UI testing the same treatment that API testing gets.
Years ago, when Service-Oriented Architecture (SOA) emerged, developers began building services with SOAP (Simple Object Access Protocol) rather than as old-style Enterprise JavaBeans (EJBs). The key driver for SOAP's adoption was that SOAP-based services made it easy for anyone to consume a service. This drove massive change in the way new code was developed. The SOAP stack eliminated the need to install .dlls and .jars on client devices, and IDL files no longer had to be compiled into stubs. Developers could easily share code across application boundaries – it was almost as good as free beer.
In addition to changing the way developers worked, testing changed too. QA was challenged to move past manual testing, and testers became responsible for testing services, which required new skills. Testers needed to understand how XML documents conformed to schemas; data types became ultra-important (UIs don't have Boolean fields the way XML does); and data structure changes and new versions constantly invalidated tests. Amid all the noise of "build it and they will come," SOAP services grew too complex. Then, in January 2009, the Burton Group declared "SOA is dead; long live services," and with it came the realization that converting an EJB to a SOAP action via a framework like JAX-RPC does not really create a consumable service.
The one idea that stuck in my mind as truly great was the concept that consumers should provide tests to validate their consumption of a shared service. The core principle: if you consume a shared service, you owe the producer of that service a test case that validates your consumption of it. Looking back, this makes perfect sense, but back in 2007, this was crazy talk.
A consumer would say, "Not only do I have to work with another team and rely on them to mess up my schedule, but I have to write a test case and give it to them to ensure they understand how I am consuming their poorly-written services that provide 57 different bits of functional data in one call?" (Back then, microservices were not even a glimmer in anyone's eye.) This was heresy; given the tax of writing code to test code, a common complaint was that it was easier to rewrite the functionality than to reuse a service. Looking back, the concept we called Continuous Validation Service in iTKO LISA was just a really good idea. We preached SOA, and the message – share your functional expectations for a service via an automated test case that can run daily or hourly to prevent unintended consequences – was golden.
How Do You Test a Single Service Shared by Many?
If you have five consumers of a complex SOA service, then a change to part of the service could break a consumer, and the only way you would know is by having a breaking test case. For example, a customer balance service, back in the day, could return the customer type, full address, list of accounts, and balances. Two consumers could be interested in getting a mailing address (and not care about the balance), two could be getting a specific account balance, and the last one could be calculating the balance across all accounts. In 2006, this was common and considered a good use of "reusable services." Obviously, we know better now; today we would have three or more services serving up the data in a more micro, specific manner.
Interestingly, in this paradigm, automated test cases are still valid: they should represent the consumer of the service and its expected behavior. This is true whether the service is one ugly legacy SOAP service (lipstick on an overly complex EJB) or a modern-day microservice. As a consumer of a service, I owe the producer an automated test that represents my use of their data and/or logic, so they can make sure they don't change the service and break my consumption.
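To make this concrete, here is a minimal sketch of consumer-owned contract tests against a customer balance service like the one described above. The service call is simulated with a canned response; `fetch_customer_balance` and all field names are hypothetical stand-ins, not a real API.

```python
def fetch_customer_balance(customer_id):
    """Stand-in for the real shared-service call (SOAP, REST, etc.).
    The response shape here is invented for illustration."""
    return {
        "customerType": "RETAIL",
        "mailingAddress": {"street": "1 Main St", "city": "Springfield"},
        "accounts": [
            {"id": "CHK-1", "balance": 120.50},
            {"id": "SAV-1", "balance": 900.00},
        ],
    }

# Consumer A only mails statements: it asserts just the address fields
# it depends on, and nothing more.
def test_mailing_address_consumer():
    resp = fetch_customer_balance("42")
    assert {"street", "city"} <= resp["mailingAddress"].keys()

# Consumer B reads one specific account's balance.
def test_single_account_consumer():
    resp = fetch_customer_balance("42")
    checking = next(a for a in resp["accounts"] if a["id"] == "CHK-1")
    assert isinstance(checking["balance"], float)

# Consumer C totals all accounts; if the producer renames "balance",
# this test breaks in CI instead of in production.
def test_total_balance_consumer():
    resp = fetch_customer_balance("42")
    assert sum(a["balance"] for a in resp["accounts"]) == 1020.50
```

Each test belongs to a different consuming team and validates only that team's slice of the response, so the producer can run all three before shipping a change.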
Why don't we think this way about large shared enterprise processes? Why don't we think this way when validating a shared user interface that is mission-critical for a company?
Testing Down a Path of Despair and Production Problems
Now that I am focused on helping people treat the user interface as a first-class citizen of automated testing, I have noticed some really bad behavior in UI testing. People try to write automated tests the same way they write the logic in an ERP implementation, and this is horrible; it leads you down a path of despair and production problems.
Let's look at modern-day ERP implementations: there is a huge emphasis on common modules and one shared instance. People don't want to maintain separate finance, supply chain, and human resources modules for every country and division of the company that needs different processing rules and data handling. In modern ERP implementations, one super-shared finance module eliminates the need for monthly rollups and accruals of accounting data. This business request has caused a ton of complexity in the rules and configuration of the finance module.

I have noticed that QA now wants to follow the same path – the path to disillusionment and distraction, in my opinion: one shared test case for finance that all countries will share, which then has to go through complex versioning, source management, and merging of competing updates. This is common in development, and huge toolchains have been written to version, compare, merge, roll back, and unit test every change to prevent bad things from happening. Do you really want that level of complexity in your UI functional testing? At what point does an over-shared test case become just another point of confusion and a maintenance nightmare?
Continuous Validation
This is when I go back to 2007 and look at the simplicity of continuous validation: having every consumer build an automated test case that reflects their expectations of a service. This was simple and effective back in 2007, and more than ten years later it is an even better idea given the maturing of Agile, DevOps, and Continuous Testing.
Why should I test shared enterprise-level UI workflows and business processes like a service?
- No need to recreate the complexity of code rules in test cases
- It would be too easy to invalidate or ignore a consumer’s expected results
- Detect unintended consequences
- Every consumer creating tests supports CI/CT models
1. No Need to Recreate the Complexity of Code Rules in Test Cases
I am sure there are opinions out there supporting the business desire to have uber-shared complex processes in an ERP package, and there are a number saying they create a maintenance nightmare. From a testing perspective, we don't want to write thousands of lines of code to test something; at that point you just have another system of code containing more bugs to test and resolve. Tools like Worksoft Certify let you create UI tests easily without code, and the tests are self-documenting. It is easy to ask a business user to show you how they use a set of UI screens for a string test and then just run the test, because the UI objects are automatically created and maintained separately. When UI tests are not brittle and handle change well, testing is simplified and maintenance is greatly reduced.
2. It Would Be Too Easy to Invalidate or Ignore a Consumer's Expected Results
In modern-day Agile development, there are some funny rules like "all stories are independent and can be taken off the backlog in any order." This implies that tests are also independent and can be written when the story is added to the sprint. But not every story has holistic acceptance criteria. A core premise of Agile development is that each developer creates their own unit tests based on acceptance criteria, and then testers create their test plans from those acceptance criteria plus associated tests that are more functional in nature. Thus, core to Agile is the notion that you only worry about the functionality you are responsible for and don't worry about another story's acceptance criteria.
Consider a shared test for validating inventory shipment at 25 different stores that span different countries. With one shared test, all of a sudden that single shared asset has to simultaneously validate 25 stores, or 113 acceptance criteria representing business rules in different countries. Just stop the madness now: write an automated test covering the acceptance criteria for each country or type of store, and don't try to create the uber test case that ignores other consumers' logic and validation.
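A sketch of what "one small test per country" looks like in practice. The `validate_shipment` function and the country rules below are invented purely for illustration; the point is that each country team owns a test mapped to its own acceptance criteria rather than sharing one uber asset.

```python
def validate_shipment(country, shipment):
    """Toy stand-in for the shared inventory-shipment process.
    The rules are hypothetical examples, not real business logic."""
    # Illustrative country-specific rule: DE shipments need a customs form.
    if country == "DE" and not shipment.get("customs_form"):
        return False
    # Illustrative global rule: quantity must be positive.
    if shipment.get("quantity", 0) <= 0:
        return False
    return True

# The German team owns this test; it covers only Germany's criteria.
def test_de_requires_customs_form():
    assert not validate_shipment("DE", {"quantity": 10})
    assert validate_shipment("DE", {"quantity": 10, "customs_form": True})

# The US team owns this one; it never has to know about customs forms.
def test_us_quantity_rule():
    assert validate_shipment("US", {"quantity": 1})
    assert not validate_shipment("US", {"quantity": 0})
```

Each test stays small, readable, and independently maintainable; a change to Germany's rules touches one test, not a 113-criteria monolith.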
3. Detect Unintended Consequences
If you do the right thing and write your test cases to align with your stories and acceptance criteria as explained above, then the next logical step is to run them, and run them often. Of course, this implies you have automated tests rather than manual test cases for your acceptance criteria. Making testing part of your core Continuous Integration and Continuous Testing pipeline is imperative, and it means you will detect changes to shared screens that break your consumption. This could be as simple as a change to overtime hours in the system turning out to be a globally breaking change rather than one specific to a country. Configuration data changes, along with code changes, break different consumers of a UI screen in a large end-to-end process. Don't treat testing configuration data changes as less important than testing code changes: either can disrupt the business, and the end user does not care why.
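The overtime example above can be sketched as a configuration-data test. `COUNTRY_CONFIG` and the overtime rule here are hypothetical; the idea is that each country's consumer owns a test for its own configured value, so a "global" config change that clobbers a country-specific setting fails fast in CI instead of in payroll.

```python
# Hypothetical configuration data: the kind of thing often changed
# without a code deployment, and therefore often left untested.
COUNTRY_CONFIG = {
    "US": {"overtime_after_hours": 40},
    "FR": {"overtime_after_hours": 35},
}

def overtime_hours(country, hours_worked):
    """Return the overtime portion of a week's hours under that
    country's configured threshold."""
    threshold = COUNTRY_CONFIG[country]["overtime_after_hours"]
    return max(0, hours_worked - threshold)

# Each country's team owns its own assertion about its own config.
def test_us_overtime():
    assert overtime_hours("US", 45) == 5

def test_fr_overtime():
    assert overtime_hours("FR", 45) == 10
```

If someone "simplifies" the configuration by setting every country to 40 hours, the French test breaks in the next CI run, which is exactly the unintended consequence this section is about.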
4. Every Consumer Creating Tests Supports CI/CT Models
Yes! Of course they do. Where else would all of those core regression tests that we run with every build come from? Most done criteria do not include "and make sure I don't break the business." Every consumer of a shared UI or process provides a safety net of regression tests that makes CI and CT amazingly successful. The key thing to remember is that, just like badly-written services, many UI screens have way too much data and processing behind them, and a new UI that is a mash-up of microservices has tons of rules behind it. Often, the only way to make sure changes to a microservice used in a mash-up or an end-to-end process do not break the core business process is to test at the functional UI level. Providing an automated test case specific to a process is the only way to validate the orchestration of the services working together.
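A minimal sketch of why a process-level test catches what per-service unit tests cannot. Both services and the `checkout` flow below are invented for illustration (amounts are in cents to keep the arithmetic exact).

```python
def reserve_inventory(sku, qty):
    """Stand-in microservice: reserves stock and returns a reservation id."""
    return {"reservation": f"R-{sku}-{qty}"}

def charge_customer(amount_cents):
    """Stand-in microservice: charges the customer and confirms the amount."""
    return {"paid": amount_cents}

def checkout(sku, qty, unit_price_cents):
    """The business process under test: the orchestration of both
    services, which neither service's own unit tests exercise."""
    reservation = reserve_inventory(sku, qty)
    payment = charge_customer(qty * unit_price_cents)
    return {"reservation": reservation["reservation"], "paid": payment["paid"]}

# Each service can pass its own unit tests while the hand-off between
# them is broken; only a process-level test validates the orchestration.
def test_checkout_orchestration():
    order = checkout("ABC", 2, 999)
    assert order["reservation"] == "R-ABC-2"
    assert order["paid"] == 1998
```

In a real system the process-level test would drive the UI rather than call functions directly, but the ownership model is the same: the consumer of the end-to-end process provides the test.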
Rethink UI Testing
When we rethink UI testing and apply the lessons learned from service testing, we can create great test libraries that identify defects before they impact production. The key is not to treat the operationalization of tests for shared APIs and UIs differently: both need the same rigor, ensuring that expected user functionality is delivered and that the next round of changes does not break anything. Having each consumer of a UI or service add tests to the continuous test suite that runs daily is the best safety net against failing business processes.