Microservices Testing Strategy (Part 2): Microservices Style of Testing
In this post, we look at one way of going about testing microservices that allows dev teams to 'fail fast' and debug more efficiently.
Welcome to Part 2 of this microservices testing strategy series. In the first part, we saw how old-style monolithic testing is not going to help find issues faster in a microservices world. In this post, we are going to focus on how testing should evolve as product architecture and development practices change.
As we saw in the first post, product architecture is moving toward smaller services (systems), so our testing strategy should focus on each service as an individual unit of failure. Testing each service's functionality individually helps us detect issues faster because it tells us exactly where to look. Let us call these tests System Tests or Component Tests.
System tests are test suites that focus on the functionality of an individual service; they are not end-to-end product tests. These tests assume that the dependent services (not the third-party technology stack) of the service under test are faked or mocked. These faked or mocked services return predefined, expected responses for different requests. Because all dependent services are mocked, any failure these tests find is attributable to changes in the service under test. Interesting, isn't it?
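To make this concrete, here is a minimal sketch of a system test in Python. The names `OrderService` and the inventory dependency are hypothetical illustrations, not from the article; the point is that the dependent service is replaced by a mock returning predefined responses, so a failing assertion can only implicate the service under test.

```python
from unittest.mock import Mock

# Hypothetical service under test; 'OrderService' and its inventory
# dependency are illustrative names, not from the article.
class OrderService:
    def __init__(self, inventory_client):
        self.inventory = inventory_client

    def place_order(self, item, qty):
        # Logic under test: only accept the order if stock covers it.
        stock = self.inventory.get_stock(item)
        return {"status": "ok" if stock >= qty else "rejected"}

# Mocked dependent service: predefined responses per request, no real
# inventory service involved.
mock_inventory = Mock()
mock_inventory.get_stock.side_effect = lambda item: {"pen": 5, "ink": 0}[item]

service = OrderService(mock_inventory)
assert service.place_order("pen", 2) == {"status": "ok"}
assert service.place_order("ink", 1) == {"status": "rejected"}
```

If either assertion fails, the mocked inventory rules out the dependency, so the defect must be in `OrderService` itself.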
Let us look at a visual example of how the system test enables microservices architecture to be a faster delivery mechanism. We will continue with the same example we worked with in Part 1, where there are two services, a UI service and a backend service, which interact with a database.
Now, we will see how the System Test strategy can be applied here. First, we will focus on writing tests for the backend service's API, which supports CRUD functionality. We will write a test for each API, calling it with expected inputs and validating the expected outputs. These tests can run whenever the backend service changes, which ensures there is no regression in its functionality and, if there is a failure, tells us exactly where to look. As the backend service has no dependent services, we don't need any mocks here.
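As a sketch of those per-API tests, assume the backend exposes CRUD operations on notes. `NotesBackend` below is a made-up in-memory stand-in for the real service, used only to show the shape of one test per operation:

```python
# Hypothetical in-memory stand-in for the backend service under test;
# 'NotesBackend' is an illustrative name, not from the article.
class NotesBackend:
    def __init__(self):
        self._store = {}
        self._next_id = 1

    def create(self, text):
        note_id = self._next_id
        self._next_id += 1
        self._store[note_id] = text
        return note_id

    def read(self, note_id):
        return self._store.get(note_id)

    def update(self, note_id, text):
        if note_id not in self._store:
            return False
        self._store[note_id] = text
        return True

    def delete(self, note_id):
        return self._store.pop(note_id, None) is not None

# One test per API: call with expected input, validate expected output.
backend = NotesBackend()
nid = backend.create("hello")
assert backend.read(nid) == "hello"
assert backend.update(nid, "hi") is True
assert backend.read(nid) == "hi"
assert backend.delete(nid) is True
assert backend.read(nid) is None
```

In a real suite each assertion would be its own test case against the deployed service's HTTP endpoints, but the structure is the same.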
Now let us focus on writing tests for the UI service, which holds all the UI-related functionality of calling backend APIs and showing appropriate messages. We will write tests for each UI page: launch the page, perform actions on it, and validate the expected UI changes. These tests can run whenever the UI service changes, which ensures there is no regression in the UI functionality and, if there is a failure, tells us exactly where to look. Unlike the backend service, the UI service depends on the backend service for its functionality to work. Hence, we will write a mock backend service which simply returns valid responses for all the APIs the UI service requires. Because the backend's responses are mocked, any failure can be attributed to changes made in the UI service.
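A minimal sketch of that idea, with assumed names: `render_note_page` stands in for a piece of UI-service logic, and the backend call is injected so the test can supply a mock that returns valid canned responses.

```python
from unittest.mock import Mock

# Hypothetical UI-service logic; 'render_note_page' and 'backend_fetch'
# are illustrative names, not from the article. The backend call is
# injected so tests can replace it with a mock.
def render_note_page(note_id, backend_fetch):
    # UI logic under test: call the backend, show the note or an error.
    note = backend_fetch(note_id)
    return f"Note: {note}" if note is not None else "Note not found"

# Mocked backend returns predefined valid responses, so any failure
# below must come from the UI service's own logic.
mock_backend = Mock(side_effect=lambda note_id: {1: "hello"}.get(note_id))
assert render_note_page(1, mock_backend) == "Note: hello"
assert render_note_page(99, mock_backend) == "Note not found"
```

In practice the mock would be a standalone fake HTTP service rather than an in-process object, but the isolation argument is identical.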
In the above diagram, we could avoid using the database by storing the responses in the mocked service itself. But then adding any new mocked response would require a code change to the mocked service and a redeployment.
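One way to sketch that trade-off, under assumed names: if the mocked service loads its responses from external data (a file or a database) rather than hard-coding them, adding a new response is a data change, not a code change and redeployment.

```python
import json

# Hypothetical sketch: 'FileBackedMock' is an illustrative name, not
# from the article. Canned responses live outside the code, so adding
# a new one means editing data, not redeploying the mocked service.
class FileBackedMock:
    def __init__(self, responses):
        self._responses = responses

    @classmethod
    def from_json(cls, path):
        # Load the response catalog from a JSON file on disk.
        with open(path) as f:
            return cls(json.load(f))

    def handle(self, endpoint):
        # Return the predefined response, or a not-found style default.
        return self._responses.get(endpoint, {"error": "no mocked response"})

mock = FileBackedMock({"/notes/1": {"id": 1, "text": "hello"}})
assert mock.handle("/notes/1") == {"id": 1, "text": "hello"}
assert mock.handle("/notes/2") == {"error": "no mocked response"}
```

Adding a response for `/notes/2` later would only mean editing the response data, which is the advantage the paragraph above describes.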
With the above example, we saw that a microservices architecture requires a radical shift in testing strategy: failing faster, debugging more easily, and releasing only the services that change, with more confidence. System Tests are one kind of test that enables you to achieve that.
In the next part of this series, I will cover the kinds of failures which are not traceable using System Tests and what kind of tests we need to write specific to those failures.
Published at DZone with permission of Chirag Naik. See the original article here.