During my career as a Java developer, I've observed the rise of microservices. A microservice is really just the idea of a component, but at the process level. Where we used to write objects and interact with them through method invocations, we now have processes that we interact with over the network.
This has given us quite a few benefits. The first is that we can now compose a system from components written in different languages. We can use off-the-shelf components (an example would be Consul). We can scale components independently. You might have a Compute microservice that scales indefinitely, and a Scheduler microservice where only one master must be active at a time. We can even compose multiple processes within a Docker container that outwardly exhibits the contract of a single microservice (an example of this is Nginx being used as a circuit breaker, as opposed to Hystrix). We can upgrade these components independently.
There are some tricky issues to deal with, too. Network calls are expensive. Every network call adds to the user's call latency. This can quickly become unacceptable. The other issue is testing. How do you test that these processes work together? They are potentially written in different languages, and quite likely you don't have the source code. Unit testing is only appropriate for software components that you develop. And even then, unit tests don't test that components integrate.
The approach I've seen is that developers write tests referred to as Journey Tests, Smoke Tests, Acceptance Tests, Black Box Tests, and End to End Tests. All of these names refer to roughly the same thing. The domain over which these tests operate is not the technical domain but the user's domain, meaning that they are expressed in the user's terms. They are Acceptance Tests in this sense, and there is a correspondence between these tests and user stories. They behave as a user would: logging in, adding personal details, and completing a task, all in a single test. Such a test involves many steps, and they all have to be performed in order. They are referred to as Journey Tests because of this feature.
And they are horrifically expensive, precisely because of all those network calls. This is the reason for the pyramidal shape of the now famous Test Pyramid: the slow, expensive tests are few in number and sit at the top, while the tests that are cheap to run, e.g. unit tests, are numerous and form the base.
Unfortunately, you can't completely get away from these journey-based acceptance tests. Not that you really want to. They offer some very powerful advantages. The first is that they treat the underlying systems as a black box. The tests have as little dependency and knowledge of the inner workings as possible, which means that the tests can finally support the developers when they do extensive refactoring of the underlying systems. The tests remain unchanged, and so there is no "change risk" if the old tests pass (and there is sufficient coverage). The second advantage, as I've already said, is that they are expressed in user terms, so they have real value for a user or product owner.
If we could solve the performance issues, we would want a great many of these tests.
Well, if wishes were horses, we would all ride. You can't have components at the network level without paying the networking costs. But you can have better tools for structuring these tests, so that you get the most value out of them, in as few tests as possible.
That is the purpose of Cascade.
Cascade requires that the developer define steps in Java code that are linked together via annotations. Each step has an action and a validation method. Cascade will then generate tests from these steps by permuting the steps. Consequently, you end up with a process of action, validation, action, validation, etc. until a journey is complete.
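To make this concrete, here is a hypothetical sketch of what a pair of linked steps might look like. Cascade's real annotation and method names may differ; the `@Step` annotation, `OpenLoginPage`, and `SubmitCredentials` here are invented for illustration, with the linkage expressed via an annotation attribute and read back by reflection.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class StepSketch {

    // Hypothetical annotation standing in for Cascade's real one:
    // it links a step to the steps that may precede it.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Step { Class<?>[] after() default {}; }

    @Step
    static class OpenLoginPage {
        void action() { /* drive the browser to the login page */ }
        void validate() { /* check that the login form is displayed */ }
    }

    @Step(after = OpenLoginPage.class)
    static class SubmitCredentials {
        void action() { /* fill in the form and submit it */ }
        void validate() { /* check that we landed on the home page */ }
    }

    // Read the declared predecessors of a step from its annotation,
    // the way a framework would when wiring steps into journeys.
    static Class<?>[] predecessorsOf(Class<?> step) {
        return step.getAnnotation(Step.class).after();
    }

    public static void main(String[] args) {
        System.out.println(predecessorsOf(SubmitCredentials.class)[0].getSimpleName());
    }
}
```

A framework discovering these classes can chain them: run `action()`, then `validate()`, then move to a step whose `after` includes the current one, and so on until the journey completes.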
I can illustrate this graphically. Let's say we have four web pages that are linked like so:
This is a graph where each vertex is a web page and each edge is a hyperlink, or possibly a form post, between two web pages. It is really a sitemap for the website in graph form. Cascade considers this graph to be a State Machine. Cascade refers to each vertex as a state in the state machine. And an edge is a transition in the state machine.
So why am I suddenly using 'state machine' terminology? The definition of the state machine is particularly important. Automated tests require deterministic systems; they cannot test otherwise. Certain factors prevent many subject systems from being state machines. In particular, date-based behavior, random behavior, and event-based or scheduler-based behavior are all challenges for any automated testing, as they affect the subject under test. Unpredictable changes in state will break your tests. This might seem like quite a demand for Cascade to make of you, but you are probably already dealing with these issues.
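Date-based behavior, for instance, is usually tamed by injecting a clock rather than reading the system time directly. This is a minimal sketch using Java's standard `java.time.Clock`; the `isWeekendPromoActive` method is a made-up example of date-dependent logic.

```java
import java.time.Clock;
import java.time.DayOfWeek;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

public class FixedClockDemo {

    // Business logic takes a Clock, so tests can pin down "now"
    // and the system behaves as a deterministic state machine.
    static boolean isWeekendPromoActive(Clock clock) {
        DayOfWeek day = LocalDate.now(clock).getDayOfWeek();
        return day == DayOfWeek.SATURDAY || day == DayOfWeek.SUNDAY;
    }

    public static void main(String[] args) {
        // 2024-01-06 was a Saturday: the outcome is now repeatable.
        Clock saturday = Clock.fixed(
                Instant.parse("2024-01-06T12:00:00Z"), ZoneOffset.UTC);
        System.out.println(isWeekendPromoActive(saturday)); // true
    }
}
```

In production the caller passes `Clock.systemUTC()`; tests pass a fixed clock. The same dependency-injection idea applies to random number generators and schedulers.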
But an application isn't merely a number of web pages. There is the data manifested on those web pages to consider as well. Within a web page, there may be a great deal of logic to do with presenting data. The data can affect how pages are linked together, so it affects the definition of the state machine. To solve this problem, Cascade defines a scenario as the realization of a state, much as an object is the realization of a class with concrete data. Cascade can then generate the same journey in terms of states and transitions, but with different data. Let me continue with the example.
We may have different scenarios for each webpage. I will illustrate a realization of the webpages using colored circles. These colored circles are now scenarios as they contribute data to a given State.
As you can see, there are two different ways to get to the yellow scenario, and there are a few possibilities for scenarios to include in the journey tests for the first two states.
Cascade will generate journey tests like so:
The blue and red scenarios are terminating scenarios. Scenarios can be marked as terminating and an example of this would be the failed login scenario. Journeys are not generated after terminators.
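The generation itself can be pictured as a walk over the scenario graph that ends whenever it hits a terminator or runs out of outgoing transitions. This is a minimal sketch, not Cascade's implementation; the scenario names mirror the colored-circle example, and the graph shape is an assumption made for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class JourneyGenerator {

    // Enumerate every journey from the start scenario. An edge map gives
    // each scenario's successors; terminating scenarios end a journey.
    static List<List<String>> journeys(String start,
                                       Map<String, List<String>> edges,
                                       Set<String> terminators) {
        List<List<String>> out = new ArrayList<>();
        walk(start, new ArrayList<>(), edges, terminators, out);
        return out;
    }

    static void walk(String current, List<String> path,
                     Map<String, List<String>> edges,
                     Set<String> terminators, List<List<String>> out) {
        path.add(current);
        List<String> next = edges.getOrDefault(current, List.of());
        if (terminators.contains(current) || next.isEmpty()) {
            out.add(new ArrayList<>(path));          // journey complete
        } else {
            for (String n : next) walk(n, path, edges, terminators, out);
        }
        path.remove(path.size() - 1);                // backtrack
    }

    public static void main(String[] args) {
        // red terminates; yellow is reachable by two different routes.
        Map<String, List<String>> edges = Map.of(
                "login", List.of("red", "orange"),
                "orange", List.of("green", "yellow"),
                "green", List.of("yellow"));
        journeys("login", edges, Set.of("red", "blue"))
                .forEach(System.out::println);
    }
}
```

Running this yields three journeys: the failed login stops at red, while the other two both end at yellow, one via green and one without it.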
We have two journeys that might arrive at the yellow scenario. One journey is slightly longer than the other, as it includes the green scenario.
So you can generate a great many journey tests simply by permuting steps. This isn't very valuable in itself. What is valuable is that Cascade can compute an order for each step, ranking steps according to their rarity in the total test set. It can then compute a significance value for each journey test by summing the orders of the steps included in that journey, sort the journey tests by significance, and take the first N tests until all steps have been included at least once.
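The selection idea can be sketched as follows. This is not Cascade's code; it assumes "rarity" means inverse frequency across the full test set, scores each journey by summing its steps' rarity, and greedily keeps the highest-scoring journeys until every step is covered at least once.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class JourneySelector {

    // Reduce a full set of journeys to a covering subset, preferring
    // journeys whose steps are rare across the whole test set.
    static List<List<String>> select(List<List<String>> journeys) {
        // Count how often each step appears overall.
        Map<String, Integer> freq = new HashMap<>();
        for (List<String> j : journeys)
            for (String s : j) freq.merge(s, 1, Integer::sum);

        // Significance of a journey = sum of its steps' rarity scores.
        List<List<String>> ranked = new ArrayList<>(journeys);
        ranked.sort(Comparator.comparingDouble((List<String> j) ->
                -j.stream().mapToDouble(s -> 1.0 / freq.get(s)).sum()));

        // Greedily take journeys until every step is covered once.
        Set<String> uncovered = new HashSet<>(freq.keySet());
        List<List<String>> chosen = new ArrayList<>();
        for (List<String> j : ranked) {
            if (uncovered.isEmpty()) break;
            if (j.stream().anyMatch(uncovered::contains)) {
                chosen.add(j);
                j.forEach(uncovered::remove);
            }
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<List<String>> all = List.of(
                List.of("login", "red"),
                List.of("login", "orange", "yellow"),
                List.of("login", "orange", "green", "yellow"));
        // The shorter orange->yellow journey adds no new coverage.
        select(all).forEach(System.out::println);
    }
}
```

With the three journeys from the colored example, only two survive: the long journey through green and the failed-login journey; the short route to yellow covers nothing new and is dropped.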
What you have then is a perfectly balanced set of journey tests that maximize coverage while keeping the total number of tests to a minimum. In the case of the example above, some of the journeys may be removed.
Cascade can execute the algorithm above while accepting different definitions of completeness: a set of tests that includes every scenario, every state, or every transition. At these different levels of completeness, a different number of tests will be generated.
In summary, by structuring steps in this way, we have created a model of the state machine that your subject system exhibits. This is a very powerful structure. It can be used in many ways.
The first way this state machine model can be used is, as I've already described, to find the minimal set of paths through the state model so that every scenario, state, and transition is covered.
The second way is to generate reports that display this state machine model.
The third is to generate paths, or tests, that only pass through a particular state or scenario.
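That third use is straightforward once the journeys exist as data. A minimal sketch, again not Cascade's implementation, with the scenario names assumed from the earlier example:

```java
import java.util.List;
import java.util.stream.Collectors;

public class JourneyFilter {

    // Keep only the generated journeys that visit a given scenario.
    static List<List<String>> through(List<List<String>> journeys,
                                      String scenario) {
        return journeys.stream()
                .filter(j -> j.contains(scenario))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<String>> all = List.of(
                List.of("login", "red"),
                List.of("login", "orange", "yellow"),
                List.of("login", "orange", "green", "yellow"));
        // Only the journeys that pass through "green".
        System.out.println(through(all, "green"));
    }
}
```

This is handy when debugging one scenario: run only the journeys that touch it, rather than the whole suite.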
So, points to take away:
Microservices require a Black Box testing framework. You can't test a heterogeneous set of processes operating together otherwise.
Testing microservices over the network is horrifically expensive.
Acceptance tests have value for the user and product owner. Their definition is "pure" in the sense that they are unaffected by the underlying implementation.
Cascade offers a testing framework optimized for generating journey tests while taking their cost into account, both the time to execute and the volume of code artefacts necessary for a complete set of journey tests with reasonable coverage over the subject application.
The State Machine that Cascade models can be exploited in other ways, such as generating reports and filtering tests.