I’ve yet to see two development environments that are alike. But even if there is no cookie-cutter approach to software delivery, there are standard approaches and methodologies that are consistent throughout modern software development and that frame nearly all environments.
Because software testing is making a big move from purely manual testing (a non-technical process) to fully automated, deeply technical testing, how QA processes are set up, and how they fit into the overall delivery chain, is very important. Let’s take a look at the two most common architectures for test automation, and why each may or may not be the best approach.
1. Siloed: In the siloed reference architecture, each aspect of the delivery chain is broken into isolated components. The benefits of a siloed reference architecture are that it scales more easily; it is easier to fine-tune, because you can optimize individual components without impacting other aspects of the delivery chain; and it offers better isolation among teams, which can encourage focus. See the following example from clogeny.com:
The first problem with such a setup, however, stems from one of its benefits: isolation. It encourages barriers between teams and less communication, as each team owns only its individual parts. For example, developers will care about everything up to static code analysis, but not after. And QA may not even have visibility into code analysis, unit testing, and exception monitoring.
The second problem is that there are more points of failure. Each isolated component is something additional to manage. And let’s face it, more moving parts also means more overhead, and more places where things can break.
And finally, it is heavily reliant on integrations, which is not inherently a bad thing. Modern development tools are very good at providing integration points. But if you do not give integration a lot of attention, then this type of delivery chain will require a lot of manual effort as code moves from one stage to another, which runs counter to the overall goal of automation.
In the very specific architecture presented above, I see additional problems: the sprawl of the delivery chain, and how late things like integration and functional testing occur. It does not fit with the “shift left” goals of modern development, and it clearly does not put the emphasis on functional testing that it should. Each step further down the chain implies higher and higher confidence in what has already been built, but that confidence simply cannot be assumed. Nor does this structure leave much opportunity for building a testing strategy; it only maintains the current one.
2. Centralized: The centralized approach is basically the opposite of the siloed approach above: the delivery chain is segmented only by environments, and the greatest amount of testing happens in the integration environment. Ideally, the integration environment is flexible enough to allow testing to happen as frequently as every developer commit. See the representation below, which I really like:
There are a few common misconceptions about this approach. The first is the definition of “environment.” An environment does not need to equal a single set of servers. Environments, especially in the containerized world, can comprise many sets of infrastructure, each with its own configuration. In fact, with functional testing browser grids, you need integration environments that support many parallelized instances in order to achieve test coverage and speed. In this depiction they call it the “continuous integration server,” which implies a single machine and is not realistic; it should really be replaced with “integration process.” Integration represents an abstraction of infrastructure that houses the processes, the test cases, and the oversight. The test suite can include code analysis, unit testing, and functional testing, all running at the same time in parallel. Depending on how optimized the test suite is, this can be done many times a day, and many organizations are doing exactly that.
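To make the parallelism concrete, here is a minimal sketch of an integration stage that runs its test-suite components concurrently. The stage names and their trivial bodies are hypothetical stand-ins for real tool invocations (a linter, a unit-test runner, a browser grid), not any particular product’s API.

```python
# Sketch: run independent test-suite components of the integration
# stage in parallel. Each stage function is a hypothetical placeholder
# for invoking a real tool.
from concurrent.futures import ThreadPoolExecutor


def run_code_analysis():
    # Stand-in for a static-analysis tool invocation.
    return ("code_analysis", "passed")


def run_unit_tests():
    # Stand-in for the unit-test runner.
    return ("unit_tests", "passed")


def run_functional_tests():
    # Stand-in for functional tests against a browser grid.
    return ("functional_tests", "passed")


def run_integration_stage():
    """Run all test-suite components concurrently and collect results."""
    stages = [run_code_analysis, run_unit_tests, run_functional_tests]
    with ThreadPoolExecutor(max_workers=len(stages)) as pool:
        results = dict(pool.map(lambda stage: stage(), stages))
    return results
```

Because the stages do not depend on each other, the wall-clock time of the whole stage approaches that of its slowest component rather than the sum of all of them, which is what makes per-commit runs feasible.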
But this also means that the delivery chain really needs to support full-stack deployments, where the existing integration servers are torn down every release and replaced with new ones. The best way to achieve this is by not doing it yourself, and letting a cloud-based testing and provisioning service do it for you.
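The tear-down-and-replace lifecycle described above can be sketched as follows. The provisioning and teardown functions here are hypothetical placeholders for calls to a cloud provisioning service; the point is the shape of the flow, where nothing survives between releases.

```python
# Sketch: ephemeral full-stack integration environment per release.
# provision_environment and teardown_environment are hypothetical
# stand-ins for a cloud provisioning API.

def provision_environment(release):
    # Spin up a fresh set of integration servers for this release.
    return {"release": release, "servers": ["app-1", "db-1"], "status": "up"}


def run_tests(env):
    # Stand-in for running the full test suite against the environment.
    return env["status"] == "up"


def teardown_environment(env):
    # Destroy the environment so the next release starts clean.
    env["status"] = "destroyed"


def integrate_release(release):
    """Provision fresh infrastructure, run the suite, then tear it all down."""
    env = provision_environment(release)
    try:
        passed = run_tests(env)
    finally:
        teardown_environment(env)  # always torn down, pass or fail
    return passed
```

The `try`/`finally` is the important design choice: the environment is destroyed whether the suite passes or fails, so no release ever inherits state from the previous one.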
The integration process usually starts with a webhook in your source repository or a release automation tool: upon every commit, code is deployed to a set of servers, and the test suite is run.
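A minimal sketch of that webhook-triggered flow is below. It assumes a JSON push payload with an `after` field carrying the new revision id (the field name is modeled on common push-event payloads but simplified here); the deploy and suite-runner callables are injected so the handler stays a pure illustration of the flow.

```python
# Sketch: handle a commit-push webhook by deploying the new revision
# and kicking off the test suite. Payload shape is a simplified
# assumption, not any specific provider's schema.
import json


def handle_push_webhook(raw_payload, deploy, run_suite):
    """On every commit push, deploy the new revision and run the test suite."""
    payload = json.loads(raw_payload)
    commit = payload["after"]   # id of the newly pushed revision
    deploy(commit)              # push the code to the integration servers
    return run_suite(commit)    # start the full test suite for that revision
```

In a real setup the two callables would wrap your deployment tooling and CI runner; injecting them keeps the trigger logic separate from the tooling it drives.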
The first challenge of such an environment is ownership. In the most ideal situations I’ve seen, QA is the steward of the integration environment, and everyone on the team has access to it. QA makes sure that the test suites run, that the testing follows the test strategies, and that the required functional test cloud, unit test infrastructure, and code analysis tools are procured and set up. But this means that QA teams need to concern themselves with all aspects of quality, not just functional tests, and that they need to be focused on strategy and automation, not execution.
I have a bias toward the centralized approach. But organizations favor one approach or the other, and there is no one-size-fits-all; the approach favored can have as much to do with company culture and history as with technology. The specific development stacks also have a fairly large impact on how the architecture is set up. If you are unable to identify where you fit, it probably means the team needs to spend as much time thinking about its overall delivery chain as it does about application releases.