Best testing strategies in a Microservice architecture
Chandra Kiran Mamidala
Today, there is no dearth of information on the web about what microservices are and how organisations are transforming their development architecture. However, very few articles talk about the test strategies to follow when testing microservice-based solutions and applications. This blog aims to provide that information.
Before I continue to the strategy, test areas, test types, etc., let us look at a few definitions of microservices.
A microservice is a software development technique, a variant of the service-oriented architecture (SOA) style, that structures an application as a collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight.
A microservice architecture builds software as suites of collaborating services. Microservices are often integrated using REST over HTTP. They connect with each other over networks and make use of “external” datastores.
Microservice architecture, or simply microservices, is a distinctive method of developing software systems that focuses on building single-function, single-purpose modules with well-defined interfaces and operations. This trend has grown popular in recent years as enterprises look to become more Agile and move towards DevOps and continuous testing. Microservices can help create scalable, testable software that can be delivered very often, sometimes as frequently as weekly or even daily.
Microservices architecture allows admins or users to load just the services that are required, which improves deploy times, especially when packaged in containers.
Microservices allow changes to be made only when and where they are needed, and with a microservices architecture each functional component can be monitored independently.
Why do you need a special strategy to test microservices?
You need a different strategy to test microservices because they follow a different architecture and involve many integrations, both with other microservices within your organisation and with the outside world (third-party integrations). They also require a high degree of collaboration among the different teams/squads developing individual microservices. Moreover, they are single-purpose services that are deployed independently and regularly.
Before I discuss the various strategies, let us take a look at a self-explanatory diagram to understand the composition of microservices.
Different types of testing need to be performed at different layers of microservices. Let us look at them using an illustration of three microservices collaborating/working closely. Assume Microservices A, B, and C are working together along with many other Microservices (not shown in picture). The following diagram illustrates all testing types and their boundaries by taking one microservice (Microservice B) as an example.
A unit test exercises the smallest piece of testable software in the application to determine whether it behaves as expected.
Unit tests are typically written at the class level or around a small group of related classes. The smaller the unit under test, the easier it is to express the behaviour using a unit test. With unit testing, you see an important distinction based on whether or not the unit under test is isolated from its collaborators.
Referring to the above images, unit tests are written for each internal module of a given microservice. Test doubles, stubs, and mocks can be used to replace any dependencies. Usually, your developers write the unit tests; as a QA engineer, you may want to make sure they include unit tests as part of their code commits.
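As a minimal sketch of replacing a collaborator with a test double, consider the following (Python is used here for brevity, and the `PriceService` and exchange-rate client are hypothetical names, not from any particular codebase):

```python
from unittest.mock import Mock

# Hypothetical internal module of a microservice: a service-layer class
# that depends on an external exchange-rate client.
class PriceService:
    def __init__(self, rate_client):
        self.rate_client = rate_client

    def price_in_eur(self, usd_amount):
        rate = self.rate_client.get_rate("USD", "EUR")
        return round(usd_amount * rate, 2)

# Unit test: the collaborator is replaced with a mock, so the test
# exercises only the PriceService logic in isolation.
def test_price_in_eur():
    rate_client = Mock()
    rate_client.get_rate.return_value = 0.9
    service = PriceService(rate_client)

    assert service.price_in_eur(100) == 90.0
    rate_client.get_rate.assert_called_once_with("USD", "EUR")

test_price_in_eur()
print("unit test passed")
```

Because the exchange-rate client is mocked, the test is fast, deterministic, and needs no network or running dependency.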
Unit testing alone does not guarantee the behaviour of the system. You must also address the coverage of the internal modules when they work together to form a complete microservice, as well as the interactions those modules make with remote dependencies.
An integration test verifies the communication paths and interactions between the modules and between their dependencies.
Integration tests collect modules together and test them as a subsystem in order to verify that they collaborate as intended to achieve some larger piece of behaviour. Examples of integration tests between modules are tests verifying the communication between the repository and service layers, between the service layer and resources, and so on. Integration test examples for external dependencies include tests against data stores, caches, and other microservices.
You often end up writing an extended version of your unit tests on the internal modules and calling them integration tests. It is therefore a good idea to delegate the job of writing automated integration tests to your developers, because it is easy for them to extend the existing unit test frameworks for this purpose. It requires using real instances of the internal modules involved (e.g. API resources, the service layer, databases, and message queues). Testers can choose to do this activity instead, provided the infrastructure team can give you an integration environment with all the real dependencies deployed.
In both cases, write just a handful of test cases; do not automate an exhaustive list, as complete functional coverage will be provided by other testing types (component tests). These tests should act as a health check between the integration layers, with a small test suite size.
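A small sketch of such a health-check style integration test, using a real (in-memory) database rather than a mock (the repository and service classes are hypothetical examples, and SQLite stands in for whatever data store the microservice actually uses):

```python
import sqlite3

# Hypothetical repository and service layer wired with a real in-memory
# SQLite database instead of mocks: an integration test between the
# service layer and the data store.
class UserRepository:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

class UserService:
    def __init__(self, repo):
        self.repo = repo

    def register(self, name):
        return self.repo.add(name.strip())

# A couple of tests like this per integration point are enough.
def test_service_and_repository_integration():
    conn = sqlite3.connect(":memory:")
    service = UserService(UserRepository(conn))
    user_id = service.register("  Alice  ")
    assert service.repo.find(user_id) == "Alice"

test_service_and_repository_integration()
print("integration test passed")
```

Note the contrast with the unit test: here the repository talks to a real database engine, so the test verifies the communication path, not just the service logic.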
With unit and integration testing, you can gain confidence in the quality of the internal modules that make up the microservice, but you cannot guarantee that the microservice works as a whole to satisfy the business requirements. You must test the integrated microservice end-to-end while isolating it from external dependencies and other collaborating microservices.
A component test verifies the end-to-end functionality of a given microservice in isolation by replacing its dependencies with test doubles and/or mock services.
In a microservice architecture, the components are the services themselves. By writing tests at this granularity, the contract of the API is driven through tests from the perspective of a consumer. Isolation of the service is achieved by replacing external collaborators with test doubles and by using internal API endpoints to probe or configure the service.
Referring to the above microservice diagram, component tests focus on one microservice at a time. For example, testing Microservice A in isolation, and then B, and then C, and so on. If Microservice A closely collaborates with Microservice B and you want to test Microservice A in isolation, then you would replace Microservice B with a mock service, integrate the mock with Microservice A, and then test it end-to-end.
Testing such components in isolation provides a number of benefits:
- By limiting the scope to a single component, it is possible to thoroughly acceptance-test the behaviour encapsulated by that component while keeping test execution fast.
- Isolating the component from its peers using test doubles/mocks avoids any complex behaviour they may have. It also helps to provide a controlled testing environment for the component, triggering any applicable error cases in a repeatable manner.
The QA team is expected to define the coverage, write the test scenarios, and automate them all. You can use any of the common API testing tools such as SoapUI, ReadyAPI, REST-assured, or an HTTP client; you can think of this as typical web service testing. It involves extensive use of mock services to replace the dependencies so the microservice can be tested in isolation. WireMock, MockLab, Mockable.io, and ServiceV are a few examples.
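The idea behind those mock-service tools can be sketched in a few lines: stand up a stub that plays the role of the collaborating microservice and point the service under test at it. In this illustrative Python sketch (the stock-checking endpoint and `check_stock` function are invented for the example; a tool like WireMock would replace the hand-rolled stub in practice):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A tiny stub of "Microservice B" returning a canned response, so
# "Microservice A" can be exercised in isolation.
class StubB(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "IN_STOCK"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Hypothetical piece of Microservice A that calls Microservice B.
def check_stock(base_url, sku):
    with urlopen(f"{base_url}/stock/{sku}") as resp:
        return json.load(resp)["status"]

def test_component_with_stubbed_dependency():
    server = HTTPServer(("127.0.0.1", 0), StubB)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_port}"
        assert check_stock(url, "ABC-1") == "IN_STOCK"
    finally:
        server.shutdown()

test_component_with_stubbed_dependency()
print("component test passed")
```

Because the stub's responses are fully under your control, you can also have it return errors or malformed payloads to drive the error-handling cases repeatably.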
With Unit, Integration, and Component Testing, you can achieve high coverage of modules that make up the microservice. But the quality of the complete solution is not assured unless your tests cover all the microservices working together. Contract Testing of external dependencies and end-to-end testing of the whole system help provide this.
A contract test is a test at the boundary of an external service verifying that it meets the contract expected by a consuming service.
When two microservices, or a microservice and one of its dependent services, collaborate, a contract forms between them. If Microservice A interacts with Microservice B by sending or requesting data, and Service B responds to all of Service A's incoming requests, then there is a contract between Service A and Service B. Here, Service A is known as the consumer and Service B as the provider.
Contract testing takes place in two steps:
- First, the consumer publishes a contract for its provider. This contract looks like a typical API schema (it can be a JSON file) with all the possible requests, response data, and formats, including headers, body, status codes, URI, path, verb, etc. At the end of the first step, the contracts are published to the provider directly or to a central location.
- Second, the provider accesses the contracts given by its consumer(s) and verifies them against the latest code. Any failures could be due to breaking changes the provider has made to its own API schema.
This way, the provider team can keep all its consumers (different teams within the organisation or different clients/customers) informed of schema changes, which helps the consumers plan their change management well ahead.
QA teams from both parties are expected to produce contract tests. Contract testing also employs an interim mock service acting as a bridge between the consumer and the provider, since the testing takes place in two steps on different timelines. Pact is a great framework for implementing contract testing.
Contract tests are not functional tests; they are defined for the entire microservice and focus more on input/output formats and types than on actual values. Hence, the suite size is expected to be smaller than that of component test suites.
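The two steps above can be sketched in a framework-free way (in practice a tool such as Pact manages publishing and verification; the endpoint, field names, and `verify_contract` helper here are invented for illustration):

```python
# Step 1 (consumer side): publish the expected shape of the provider's
# response -- field names and types, not concrete values.
consumer_contract = {
    "endpoint": "/users/{id}",
    "status": 200,
    "response_fields": {"id": int, "name": str, "email": str},
}

# Step 2 (provider side): verify that the latest implementation still
# honours the contract.
def verify_contract(contract, status, response):
    if status != contract["status"]:
        return False
    for field, expected_type in contract["response_fields"].items():
        if field not in response or not isinstance(response[field], expected_type):
            return False
    return True

# A hypothetical example of what the provider currently returns.
provider_response = {"id": 42, "name": "Alice", "email": "alice@example.com"}
assert verify_contract(consumer_contract, 200, provider_response)

# A breaking change (e.g. renaming "email" to "mail") fails verification.
assert not verify_contract(consumer_contract, 200, {"id": 42, "name": "Alice", "mail": "x"})
print("contract verified")
```

Note that the check looks only at shapes and types, never at business values, which is exactly why contract suites stay small.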
An end-to-end test verifies that the system as a whole meets business goals, irrespective of the component architecture in use. The whole system is treated as a black box, and the tests are exercised against the fully deployed system as much as possible, manipulating it through public interfaces such as GUIs and service APIs. No mocking is used at this level.
As a microservice architecture includes more moving parts for the same behaviour, end-to-end tests provide value by covering the gaps between the services. You do not have to write tests for each microservice involved; just find a few high-level business workflows or user journeys (end-to-end journeys) and automate them. Any UI and API automation library can be used; Selenium in combination with REST-assured is a good example of such a combination.
Writing and maintaining end-to-end tests can be very difficult, so write as few of them as possible.
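One such user journey can be scripted as a short, ordered sequence of API calls. In this sketch the journey function depends only on an `api(method, path, payload)` callable; in a real end-to-end run that callable would be an HTTP client pointed at the fully deployed system with no mocks. The endpoints are hypothetical, and the tiny in-memory backend exists purely to keep the sketch self-contained and runnable:

```python
# A high-level business workflow spanning several microservices,
# expressed as one end-to-end journey.
def place_order_journey(api):
    status, cart = api("POST", "/carts", {"customer": "alice"})
    assert status == 201
    status, _ = api("POST", f"/carts/{cart['id']}/items", {"sku": "ABC-1", "qty": 2})
    assert status == 200
    status, order = api("POST", f"/carts/{cart['id']}/checkout", None)
    assert status == 201
    return order["state"]

# Toy in-memory stand-in for the deployed system, only so the sketch
# runs anywhere; a real run would use an HTTP client instead.
def make_in_memory_api():
    carts = {}
    def api(method, path, payload):
        if path == "/carts":
            carts["c1"] = {"id": "c1", "items": []}
            return 201, carts["c1"]
        if path.endswith("/items"):
            carts["c1"]["items"].append(payload)
            return 200, {}
        if path.endswith("/checkout"):
            return 201, {"state": "CONFIRMED"}
        return 404, {}
    return api

assert place_order_journey(make_in_memory_api()) == "CONFIRMED"
print("journey ok")
```

Keeping the journey logic separate from the transport also makes the same script reusable against staging and production-like environments.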
While the importance of performance testing is widely known, it is worth mentioning at what level you perform this for microservices. It is recommended that you do it at two levels:
- Microservice level (for each microservice when it is deployed)
- System level (when all microservices are deployed to work together)
It is better to have separate performance environments for this. The QA team can use any lightweight tool such as JMeter or Locust to measure the performance of each microservice. I recommend engaging specialised performance teams to do a detailed performance analysis of the entire system once all the microservices are deployed and integrated to work together.
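At the microservice level, even a minimal latency probe can act as a smoke-level performance check before handing over to the specialised tools. In this sketch, `call` is a stub that sleeps for ~5 ms so the example runs anywhere; in practice it would make one HTTP request to the service, and the latency budget is an invented example figure:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call():
    time.sleep(0.005)  # stand-in for one ~5 ms request to the service

def measure(call, requests=50, concurrency=10):
    # Fire `requests` calls with `concurrency` parallel workers and
    # record each call's latency in milliseconds.
    def timed(_):
        start = time.perf_counter()
        call()
        return (time.perf_counter() - start) * 1000
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed, range(requests)))
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": latencies[int(len(latencies) * 0.95) - 1],
    }

result = measure(call)
# Fail the run if the service misses its (example) latency budget.
assert result["p95_ms"] < 100
print(result)
```

Tools like JMeter and Locust add ramp-up profiles, distributed load generation, and reporting on top of this basic measure-and-assert loop.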
Microservices Test Pyramid
Unit tests sit at the bottom of the pyramid with comparatively larger test suites. They are written for each microservice and for all of its internal modules/components. Their execution time is very low, and their coverage is granular.
Component tests are second to unit tests in suite size, as this is nothing but the full functional test suite for each microservice. Here you cover all possible cases: boundary values, edge cases, positive and negative cases, etc.
Contract test suites are reasonably large, with a look and feel similar to component test suites, but their coverage is limited to the contracts, i.e. the input and output formats and types. You do not have to test the functionality. Example tests for an API would be one for a 200 response, another for a 400 response, and one for a 415 response if applicable, for each resource URI/endpoint.
Integration tests are usually only a handful: just two or three tests for each integration among the internal modules and external APIs. The suite size is considerably small.
End-to-end tests are very small suites, with scenarios concentrating on the major workflows and user journeys. Their execution time is high, as they may involve GUIs, but their scope is limited.
With this understanding of what microservices are, how to test them, which testing types to incorporate, and where to execute the tests with respect to environments, you are ready to propose a test plan and strategy that best fits your organisation's or client's needs. This is, of course, subject to customisation based on many factors: the cost of implementing all the testing types, resource capacity and availability, project delivery timescales, the cost of test automation engineers, infrastructure and hardware (e.g. cloud), budget, and more.
In the next part of this blog, you will learn when to run the different test suites in CI/CD using the Quality Gates model.