Written by Patryk Kurczyna
Kotlin/Java Developer
Published December 12, 2023

Integration Testing Deep Dive Part II

 

1. Introduction

In the first part of this article we implemented many integration tests covering the most common external services that most applications use, including databases and REST APIs.

We also looked at the exact setup that is needed to create an efficient integration tests suite for your application, using Gradle, Spock and Spring Boot.

This part will continue the exploration of various external services that are often integrated in common backend microservices, such as Kafka (consumers and producers), AWS SQS, Google Cloud Storage and the SMTP protocol for sending emails.

Finally, we will attempt to organise the entire configuration into what can be described as a comprehensive integration testing framework, leveraging the capabilities offered by Spock Extensions.

2. Kafka Consumer testing

Nowadays, many applications consume messages from Kafka or a different message bus. Setting up and maintaining your own Kafka cluster for integration test purposes is a bit cumbersome. Hence, Testcontainers comes to the rescue once again with its Kafka module.

Let’s implement a Kafka consumer in the application that will read messages from the given topic and handle them. We can later create test scenarios to ensure the handling logic is correct. 

Kafka consumer

To write a Kafka consumer, an additional Spring dependency is required. Let’s add the Testcontainers Kafka module as well.
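
A minimal sketch of these dependency declarations (Gradle Kotlin DSL shown; the itestImplementation configuration is assumed to come from the itest source set defined in the first part, and versions are assumed to be managed elsewhere, e.g. via BOMs):

    dependencies {
        // Kafka support for the application itself
        implementation("org.springframework.kafka:spring-kafka")

        // Testcontainers Kafka module, needed only by the integration tests
        itestImplementation("org.testcontainers:kafka")
    }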

Assuming the topic contains user events for registering or deleting user entries, the UserRepository from the previous article needs enhancement by adding a delete method.

Now, a model class is required to represent the events.

Finally, thanks to the spring-kafka library, we have a very simple yet powerful way of creating the Kafka consumer.
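
Here is a rough sketch of what the event model and the consumer could look like. The exact shape of UserEvent, as well as the User and UserRepository APIs from the first part, are assumptions rather than the article’s actual code:

    import org.springframework.kafka.annotation.KafkaListener
    import org.springframework.stereotype.Component
    import java.util.UUID

    // Assumed event shape: a type discriminator plus the user data
    data class UserEvent(
        val type: String,          // e.g. "REGISTERED" or "DELETED"
        val userId: UUID,
        val name: String? = null,
        val email: String? = null,
    )

    @Component
    class UserEventConsumer(private val userRepository: UserRepository) {

        @KafkaListener(topics = ["\${topics.users}"])
        fun handle(event: UserEvent) {
            // Register or delete the user depending on the event type
            when (event.type) {
                "REGISTERED" -> userRepository.save(User(event.userId, event.name, event.email))
                "DELETED" -> userRepository.delete(event.userId)
            }
        }
    }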

The @KafkaListener annotation ensures that the Kafka consumer is started in a separate thread and that it handles UserEvent messages from the topics.users Kafka topic. Ensure the topic placeholder gets the correct value in the application.yml config file:
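
For example (the topic name itself is illustrative):

    topics:
      users: user-events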

Last but not least is the Kafka configuration. Spring provides a bunch of different properties for Kafka, but only the few that are needed here will be used.

The following are the most important (a sample configuration snippet follows the list):

  • kafka.bootstrap-servers -> in short, the Kafka cluster URL; the application can take it, for instance, from the environment variable KAFKA_BOOTSTRAPSERVERS
  • kafka.consumer.group-id -> the consumer group id; it’s usually the application name
  • kafka.consumer.key-deserializer and kafka.consumer.value-deserializer -> Key and value deserializers for our events. In this case, keys are simple Strings and values are in JSON format
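
A sample application.yml fragment along these lines could look as follows. Note that Spring Boot’s auto-configuration expects these properties under the spring.kafka prefix; the group id and the trusted-packages setting are illustrative:

    spring:
      kafka:
        bootstrap-servers: ${KAFKA_BOOTSTRAPSERVERS}
        consumer:
          group-id: user-service
          key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
          value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
          properties:
            spring.json.trusted.packages: "*"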

Adding this configuration lets Spring auto-configure Kafka, so the consumer (@KafkaListener) can now work properly.

Integration Test

Testing scenarios

Two testing scenarios can be outlined for the Kafka user consumer:

  1. User registration
    • Prepare registration event
    • Send the event to Kafka (using utility Kafka producer)
    • Verify that the user has eventually been stored in the DB

The word eventually is key here. The Kafka consumer operates asynchronously, so there is no precise way to determine exactly when it processes a message.

  2. User deletion
    • Insert the user into the DB
    • Prepare deletion event
    • Send the event to Kafka (using utility Kafka producer)
    • Verify that the user has eventually been deleted from the DB

Test setup

Similar to WireMock, the aim is to decouple the Testcontainers-specific Kafka logic from the tests. Hence, a KafkaMock utility class will be created to manage everything related to Kafka (a sketch follows the list below):

  • Defining the docker container to be used
  • Starting and stopping docker container
  • Exposing the bootstrapServers URL that Kafka is running on
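
A minimal sketch of such a class, written in Groovy like the rest of the integration test sources (the Kafka image tag is an arbitrary pick):

    import org.testcontainers.containers.KafkaContainer
    import org.testcontainers.utility.DockerImageName

    class KafkaMock {

        private static final KafkaContainer KAFKA =
                new KafkaContainer(DockerImageName.parse('confluentinc/cp-kafka:7.4.0'))

        static void start() {
            KAFKA.start()
        }

        static void stop() {
            KAFKA.stop()
        }

        static String getBootstrapServers() {
            KAFKA.bootstrapServers
        }
    }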

Now, we have to manage our mock in the IntegrationTestBase:

The TestPropertySourceUtils-based initializer also has to be updated to expose a kafka.bootstrapServers property that can be used in the application-itest.yml:
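
Assuming the same ApplicationContextInitializer pattern as in the first part, the registration could look roughly like this:

    import org.springframework.context.ApplicationContextInitializer
    import org.springframework.context.ConfigurableApplicationContext
    import org.springframework.test.context.support.TestPropertySourceUtils

    class ItestPropertyInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

        @Override
        void initialize(ConfigurableApplicationContext context) {
            TestPropertySourceUtils.addInlinedPropertiesToEnvironment(context,
                    "kafka.bootstrapServers=" + KafkaMock.bootstrapServers)
        }
    }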

As you can see, our application now uses an embedded Kafka container (in the integration test scope) that we spin up using Testcontainers.

Two other interesting properties in the config above are:

  • consumer.group-id -> it’s a good practice to override this value for integration tests
  • producer section -> to send test Kafka events, a simple Kafka producer is needed. You can use Spring’s KafkaTemplate for it, which can be auto-configured by Spring when the necessary properties are provided.

Then, it can be autowired in the IntegrationTestBase class so you can use it in the tests.

Test implementation

Let’s walk through the most important details of the test implementation.

Initially, the value of the property topics.users is injected into the class property usersTopic. This is crucial for determining the designated topic when sending the event.

Secondly, KafkaTemplate is used to send the event:

kafkaUsersProducer.send(usersTopic, id.toString(), event)

by providing the topic name, event key and event value.

Lastly, we verify that the user is inserted (or deleted) using the DbTestClient you already know.

However, the most crucial aspect is performing this verification within a PollingConditions.eventually() block. This is the mechanism that allows testing asynchronous behaviour: Spock will run the verification block repeatedly until it succeeds or the timeout elapses. In this scenario, 5 seconds is a pretty safe bet – the logic is relatively simple, so it is reasonable to expect that event handling will finish within 5 seconds.
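
As an illustration, the decisive part of the registration test could be shaped roughly like this (this is a fragment of a Spock feature method; the DbTestClient method name is an assumption):

    given:
    def conditions = new PollingConditions(timeout: 5)   // spock.util.concurrent.PollingConditions

    when:
    kafkaUsersProducer.send(usersTopic, id.toString(), event)

    then:
    conditions.eventually {
        assert dbTestClient.findUserById(id) != null      // hypothetical DbTestClient method
    }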

3. Kafka Producer testing

Similarly to testing Kafka consumers, the application may produce some events, so we might want to test if those events are sent properly. This can be done quite easily by enhancing our setup a bit.

Kafka producer

Let’s imagine a broadcast event needs to be published every time a user is registered. We can implement a UserBroadcast service that publishes the relevant events to Kafka.
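
A sketch of such a service might look like the following; the topic property name is an assumption, and BroadcastEvent is the model class added a bit further down:

    import org.springframework.beans.factory.annotation.Value
    import org.springframework.kafka.core.KafkaTemplate
    import org.springframework.stereotype.Component
    import java.util.UUID

    @Component
    class UserBroadcast(
        private val kafkaTemplate: KafkaTemplate<String, BroadcastEvent>,
        @Value("\${topics.broadcast}") private val broadcastTopic: String,
    ) {

        // Publishes a broadcast event for the freshly registered user
        fun userRegistered(id: UUID) {
            kafkaTemplate.send(broadcastTopic, id.toString(), BroadcastEvent(id, "USER_REGISTERED"))
        }
    }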

As you remember, KafkaTemplate needs additional producer configuration, so it is advisable to add it to the application.yml file:
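
For instance (serializers matching the JSON-based consumer setup used earlier):

    spring:
      kafka:
        producer:
          key-serializer: org.apache.kafka.common.serialization.StringSerializer
          value-serializer: org.springframework.kafka.support.serializer.JsonSerializer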

The config is exactly the same as the one defined in the itest profile, so there is no need to override it anymore; the producer section can therefore be removed from application-itest.yml.

Let’s also add BroadcastEvent to our model:
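
A minimal shape could be (the fields are assumptions):

    import java.util.UUID

    data class BroadcastEvent(
        val userId: UUID,
        val type: String,
    )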

Now, you can inject a UserBroadcast component to the UserController and publish the broadcast event every time the user is registered.

Integration Test

Testing scenario

Our testing scenario is relatively simple; however, it requires a small trick to make it work. Let’s enhance the testing scenario from the controller test (ITestUsers) we defined before:

Add new user:

  • Call the POST /api/users endpoint
  • Verify the response status
  • Verify that the user is added to the DB
  • Verify that the broadcast event has been published

Test setup

But how to verify Kafka events being published? Numerous methods are available, but my approach involves intercepting the events produced by our application, storing them in memory, and subsequently verifying if they were properly sent. To do that, you may create another Kafka consumer, in the itest scope, that will imitate the consumer of our UserBroadcast events.
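
A sketch of such a test-only consumer (the topic property name is assumed; the value type depends on the configured deserializer):

    import org.apache.kafka.clients.consumer.ConsumerRecord
    import org.springframework.context.annotation.Profile
    import org.springframework.kafka.annotation.KafkaListener
    import org.springframework.stereotype.Component

    @Component
    @Profile('itest')
    class BroadcastTestConsumer {

        @KafkaListener(topics = ['${topics.broadcast}'])
        void handle(ConsumerRecord<String, Object> consumerRecord) {
            KafkaMock.recordMessage(consumerRecord.topic(), consumerRecord.value())
        }
    }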

@Profile('itest') means that the component will only be created in the itest scope and will not affect our application in production. As you know, @KafkaListener will run the consumer in a separate thread and every time we get the event, we will record it in our KafkaMock: KafkaMock.recordMessage(consumerRecord.topic(), consumerRecord.value())

The getTopicMessages method is exposed for test purposes, in order to get all messages produced to the topic in question.

KafkaMock will store those messages in a ConcurrentHashMap in memory.
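
The recording part of KafkaMock could be as simple as the following sketch (it extends the KafkaMock class shown earlier):

    import java.util.concurrent.ConcurrentHashMap
    import java.util.concurrent.CopyOnWriteArrayList

    class KafkaMock {

        // ... container lifecycle methods shown earlier ...

        private static final Map<String, List<Object>> MESSAGES = new ConcurrentHashMap()

        static void recordMessage(String topic, Object message) {
            MESSAGES.computeIfAbsent(topic) { new CopyOnWriteArrayList() }.add(message)
        }

        static List<Object> getTopicMessages(String topic) {
            MESSAGES.getOrDefault(topic, [])
        }
    }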

Now, let’s adjust the user registration test case.

Test implementation

As a result, another and: section was added, in which a PollingConditions.eventually block is defined. In this verification step, we check that the test consumer received exactly one broadcast message, sent to the broadcastTopic, for the user (identified by its id) in question.

4. Storage (Google Cloud)

Another common pattern in contemporary microservices is integration with various cloud services such as Amazon Web Services (AWS) or Google Cloud.

One of the services that is widely adopted is cloud storage. This chapter puts the emphasis on Google Cloud Storage; however, AWS S3 testing is very similar.

Pictures Controller and Service

Let’s imagine there is a need to provide picture-upload functionality in the system that you build. To accomplish this, you could implement the following controller and service for uploading pictures to Google Cloud Storage.
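
A rough sketch of how that could look; the endpoint path, property names and response codes are illustrative choices, not the article’s exact code:

    import com.google.cloud.storage.BlobId
    import com.google.cloud.storage.BlobInfo
    import com.google.cloud.storage.Storage
    import org.springframework.beans.factory.annotation.Value
    import org.springframework.http.HttpStatus
    import org.springframework.http.MediaType
    import org.springframework.http.ResponseEntity
    import org.springframework.stereotype.Service
    import org.springframework.web.bind.annotation.PostMapping
    import org.springframework.web.bind.annotation.RequestMapping
    import org.springframework.web.bind.annotation.RequestPart
    import org.springframework.web.bind.annotation.RestController
    import org.springframework.web.multipart.MultipartFile

    @Service
    class PictureService(
        private val storage: Storage,
        @Value("\${gcs.bucket-name}") private val bucketName: String,
    ) {

        // Stores the raw bytes in the configured bucket under the given object name
        fun upload(objectName: String, content: ByteArray) {
            val blobInfo = BlobInfo.newBuilder(BlobId.of(bucketName, objectName)).build()
            storage.create(blobInfo, content)
        }
    }

    @RestController
    @RequestMapping("/api/pictures")
    class PictureController(private val pictureService: PictureService) {

        @PostMapping(consumes = [MediaType.MULTIPART_FORM_DATA_VALUE])
        fun upload(@RequestPart("file") file: MultipartFile): ResponseEntity<Void> {
            pictureService.upload(file.originalFilename ?: "picture", file.bytes)
            return ResponseEntity.status(HttpStatus.CREATED).build()
        }
    }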

The setup requires declaring additional dependencies:

And a GCS bucket configuration in the application.yml file:
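
For example (the bucket name is illustrative):

    gcs:
      bucket-name: pictures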

Integration Test

Testing scenario

Assume the following testing scenario:

  • There is a picture to be uploaded to GCS
  • The upload endpoint is called, with the file to upload as a multipart body
  • Response is successful
  • The file is uploaded to GCS bucket

Test setup

The last step is the most essential one, therefore you need to find a way of setting up a mocked Google Cloud Storage bucket for the test and be able to verify its content afterwards.

Unfortunately, Testcontainers does not provide a dedicated module for Google Cloud Storage, so you might opt for using a GenericContainer and set up a mocked GCS yourself. Fortunately, there are numerous open-source libraries available for the task, one of which is Fake GCS Server. It provides an emulator for the Google Cloud Storage API, along with a Java SDK.

Let’s use it in practice.

First, create a GcsMock with the use of the Fake GCS Server Java SDK:
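
The article uses the dedicated Fake GCS Server wrapper; an equivalent sketch built directly on Testcontainers’ GenericContainer looks roughly like this (the image tag and the -scheme flag come from the fsouza/fake-gcs-server project):

    import org.testcontainers.containers.GenericContainer
    import org.testcontainers.utility.DockerImageName

    class GcsMock {

        private static final int GCS_PORT = 4443

        private static final GenericContainer CONTAINER =
                new GenericContainer(DockerImageName.parse('fsouza/fake-gcs-server:latest'))
                        .withExposedPorts(GCS_PORT)
                        // serve plain HTTP so the Storage client does not need TLS
                        .withCreateContainerCmdModifier { cmd ->
                            cmd.withEntrypoint('/bin/fake-gcs-server', '-scheme', 'http')
                        }

        static void start() {
            CONTAINER.start()
        }

        static void stop() {
            CONTAINER.stop()
        }

        static int getPort() {
            CONTAINER.getMappedPort(GCS_PORT)
        }
    }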

The start and stop methods are already familiar; they are going to be used to manage the container lifecycle. You should also expose the port the GcsMock is running on as an application property. All this should be done in the setupSpec and cleanupSpec methods of the IntegrationTestBase:

With the GCS container in place, the application needs to “know” where to find it, so it’s necessary to configure the Storage Spring bean accordingly:
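
A sketch of what that override could look like (field and bean names are assumptions):

    import com.google.cloud.NoCredentials
    import com.google.cloud.storage.Storage
    import com.google.cloud.storage.StorageOptions
    import org.springframework.beans.factory.annotation.Value
    import org.springframework.boot.test.context.TestConfiguration
    import org.springframework.context.annotation.Bean
    import org.springframework.context.annotation.Primary

    @TestConfiguration
    class GcsTestConfiguration {

        @Value('${gcs.port}')
        int gcsContainerPort

        @Bean
        @Primary
        Storage testStorage() {
            StorageOptions.newBuilder()
                    .setHost("http://localhost:${gcsContainerPort}")
                    .setProjectId('test-project')
                    .setCredentials(NoCredentials.getInstance())
                    .build()
                    .service
        }
    }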

The gcsContainerPort is injected from the application property gcs.port, which points to the GCS test container, and you can see that the host and credentials for the Storage service are overridden accordingly. The @Primary annotation instructs Spring Boot to give this @Bean definition the highest priority. Moreover, the @TestConfiguration annotation means that this configuration class will only be applied in the test scope.

There is one more important step needed to complete the setup: importing the test configuration class for all the tests, by adding the relevant @Import annotation to the IntegrationTestBase:

Now, you’re almost ready to implement the integration test scenario defined before, but let’s add another utility component that will allow interaction with Google Cloud Storage inside the tests.
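
A sketch of such a component (method names are assumptions):

    import com.google.cloud.storage.Blob
    import com.google.cloud.storage.BucketInfo
    import com.google.cloud.storage.Storage
    import org.springframework.beans.factory.annotation.Autowired
    import org.springframework.stereotype.Component

    @Component
    class GcsTestClient {

        @Autowired
        Storage storage

        void createBucket(String name) {
            storage.create(BucketInfo.of(name))
        }

        void deleteBucket(String name) {
            // a bucket can only be deleted once it is empty
            allBlobs(name).each { it.delete() }
            storage.delete(name)
        }

        List<Blob> allBlobs(String bucketName) {
            storage.list(bucketName).iterateAll().toList()
        }
    }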

It provides methods for creating and deleting buckets, but also for querying all objects (blobs) in the bucket, which are going to be used later.

The last step is to create the relevant bucket before the tests and clean it up after they are executed.

Test implementation

The end result is clean and concise. The actual file is sent as a multipart request body, and the test then verifies whether it has been uploaded successfully to the Google Cloud Storage bucket.

Please bear in mind that the file pictures/yosemite.png has to be on the classpath, for instance in the src/itest/resources directory.

5. Sending Emails

Another widely adopted feature that can be looked at is email sending using Simple Mail Transfer Protocol (SMTP). This chapter will concentrate on presenting how it can be effectively tested.

Email sender

Below is an illustrative example implementation of the email sending component in Spring Boot.
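
As an illustration, a minimal version could look like this (the Email fields and the method names are assumptions):

    import org.springframework.mail.javamail.JavaMailSender
    import org.springframework.mail.javamail.MimeMessageHelper
    import org.springframework.stereotype.Service

    data class Email(
        val to: String,
        val subject: String,
        val body: String,
    )

    @Service
    class DefaultEmailService(private val mailSender: JavaMailSender) {

        fun send(email: Email) {
            val message = mailSender.createMimeMessage()
            MimeMessageHelper(message, false, "UTF-8").apply {
                setTo(email.to)
                setSubject(email.subject)
                setText(email.body, false)
            }
            mailSender.send(message)
        }
    }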

The Email class represents the email object with the essential data. The aforementioned service uses Spring’s JavaMailSender for sending emails via SMTP. It has to be properly configured, and Spring offers a convenient way of doing that by specifying the required configuration properties in the application.yml file.

We need to declare additional dependencies for the application as well:

Integration Test

Testing scenario

The actual testing scenario is straightforward:

  • There is an email to be sent
  • DefaultEmailService’s send method is called
  • Email is delivered to the particular recipient, identified by the email address
  • Verification that the delivered email contains all the requisite data

Test setup

To be able to perform the verification from the last bullet point of the testing scenario you have to inspect the data (emails) being sent over SMTP. One of the tools that allows that is GreenMail. It’s an open source, intuitive and easy-to-use test suite of email servers for testing purposes.

Let’s add it to our test suite:

Configuration:
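
A minimal sketch of that configuration (the lifecycle is typically wired into the IntegrationTestBase, analogously to the other mocks; ServerSetupTest.SMTP makes GreenMail listen on the test SMTP port 3025):

    import com.icegreen.greenmail.util.GreenMail
    import com.icegreen.greenmail.util.ServerSetupTest

    class MailMock {

        static final GreenMail GREEN_MAIL = new GreenMail(ServerSetupTest.SMTP)

        static void start() {
            GREEN_MAIL.start()
        }

        static void stop() {
            GREEN_MAIL.stop()
        }
    }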

The code above exemplifies the instantiation and lifecycle management of the GreenMail object. Once it’s operational, you can adjust the application’s SMTP configuration in the itest profile so that GreenMail is used instead of a real SMTP server.

Test implementation

In the verification block of the above test, the GreenMail utility method getReceivedMessages is used. This method returns an array of MimeMessage objects representing the emails intercepted by GreenMail. To assert the presence of the desired data within the email, the utility method GreenMailUtil.getBody() is invoked. Additionally, various email headers can be verified this way.
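
A fragment of such a verification could look like this (the email content is illustrative; GreenMailUtil lives in com.icegreen.greenmail.util):

    then:
    def received = MailMock.GREEN_MAIL.receivedMessages   // MimeMessage[]
    received.length == 1
    received[0].subject == 'Welcome!'
    GreenMailUtil.getBody(received[0]).contains('Hello John')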

6. AWS SQS queues

As an illustrative example of the AWS services, let us closely examine the Simple Queue Service (SQS). It can serve diverse purposes and constitutes a common integration point of many applications.

Email sending events

AWS SES enables clients to track email sending at a granular level by setting up email sending events. They can be published to various destinations, one of which is an SQS queue (via SNS).

Assume that your application needs to consume and handle those events.

Let’s write a simple consumer code.

SqsEmailEventReceiver

First, a few more dependencies need to be defined:

Conveniently, nothing more is needed to implement an SQS message receiver in Spring Boot:
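
A sketch of the receiver, assuming Spring Cloud AWS 3.x and a heavily simplified EmailEvent shape:

    import io.awspring.cloud.sqs.annotation.SqsListener
    import org.slf4j.LoggerFactory
    import org.springframework.stereotype.Component

    // Assumed, heavily simplified representation of an SES email sending event
    data class EmailEvent(
        val eventType: String,
        val recipient: String,
    )

    @Component
    class SqsEmailEventReceiver {

        private val logger = LoggerFactory.getLogger(javaClass)

        @SqsListener("\${sqs.queue-name}")
        fun handle(event: EmailEvent) {
            // Handling is intentionally trivial: just log the received event
            logger.info("Received email event: {}", event)
        }
    }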

The EmailEvent class represents an event object from SES; in real life it’s usually much more complex, but this simple representation is enough for example purposes.

SqsEmailEventReceiver provides one method, handle, annotated with @SqsListener, which instructs Spring Boot to spawn a separate thread consuming the queue passed as a parameter.

To keep things simple, the handling logic just logs the received event.

The sqs.queue-name property has to be defined in the application.yml file:
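
For example (the queue name is illustrative):

    sqs:
      queue-name: email-sending-events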

Integration Test

Testing scenario

Assume the following testing scenario:

  • Email event is sent to the SQS queue
  • Event is being processed
  • Eventually the event is successfully consumed (it’s not visible in the queue anymore)

Test setup

When dealing with AWS services, it is advised to use the Testcontainers LocalStack module, a fully functional local AWS cloud stack with built-in support for SQS, among other services.

Let’s add a dependency on the LocalStack module:

Now, we’re ready to spin up the LocalStack docker container with SQS support:
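
A sketch of that container setup (the image tag is an arbitrary pick):

    import org.testcontainers.containers.localstack.LocalStackContainer
    import org.testcontainers.utility.DockerImageName

    class SqsMock {

        static final LocalStackContainer LOCALSTACK =
                new LocalStackContainer(DockerImageName.parse('localstack/localstack:3.0'))
                        .withServices(LocalStackContainer.Service.SQS)

        static void start() {
            LOCALSTACK.start()
        }

        static void stop() {
            LOCALSTACK.stop()
        }
    }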

As in most of the previous examples, the application needs to know that it should connect to the mocked SQS server in the itest scope. To do that, let’s override SqsAsyncClient bean definition to point to the LocalStack container. Add the following @TestConfiguration class to IntegrationTestBase:
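
A sketch of that configuration, assuming the SqsMock utility above exposes the LocalStack container:

    import org.springframework.boot.test.context.TestConfiguration
    import org.springframework.context.annotation.Bean
    import org.springframework.context.annotation.Primary
    import org.testcontainers.containers.localstack.LocalStackContainer
    import software.amazon.awssdk.auth.credentials.AwsBasicCredentials
    import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider
    import software.amazon.awssdk.regions.Region
    import software.amazon.awssdk.services.sqs.SqsAsyncClient

    @TestConfiguration
    class SqsTestConfiguration {

        @Bean
        @Primary
        SqsAsyncClient testSqsAsyncClient() {
            def localstack = SqsMock.LOCALSTACK
            SqsAsyncClient.builder()
                    .endpointOverride(localstack.getEndpointOverride(LocalStackContainer.Service.SQS))
                    .region(Region.of(localstack.region))
                    .credentialsProvider(StaticCredentialsProvider.create(
                            AwsBasicCredentials.create(localstack.accessKey, localstack.secretKey)))
                    .build()
        }
    }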

What’s new in the example above is the AWS credentials and region that need to be specified. Luckily, the LocalStack container is equipped with suitable methods to retrieve those values.

Last but not least, we need a utility component to facilitate interaction with the SQS queue. We might introduce a SqsTestClient for that purpose:

The presented class utilises the SqsAsyncClient component to communicate with SQS (remember that it is configured to point at the LocalStack container). The AWS SDK can be used to implement helper methods that are convenient in the test implementation, such as the following (a sketch follows the list):

  • createQueue
  • isQueueEmpty
  • processingEventsSize
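
A sketch of how these helpers could be implemented with the AWS SDK v2 (the method names follow the list above; the @Component registration and field injection are assumptions):

    import org.springframework.beans.factory.annotation.Autowired
    import org.springframework.stereotype.Component
    import software.amazon.awssdk.services.sqs.SqsAsyncClient
    import software.amazon.awssdk.services.sqs.model.CreateQueueRequest
    import software.amazon.awssdk.services.sqs.model.GetQueueAttributesRequest
    import software.amazon.awssdk.services.sqs.model.QueueAttributeName

    @Component
    class SqsTestClient {

        @Autowired
        SqsAsyncClient sqsAsyncClient

        String createQueue(String queueName) {
            def request = CreateQueueRequest.builder().queueName(queueName).build()
            sqsAsyncClient.createQueue(request).join().queueUrl()
        }

        boolean isQueueEmpty(String queueUrl) {
            visibleMessages(queueUrl) == 0 && processingEventsSize(queueUrl) == 0
        }

        // Messages that have been received by a consumer but not yet deleted
        int processingEventsSize(String queueUrl) {
            attribute(queueUrl, QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES_NOT_VISIBLE)
        }

        private int visibleMessages(String queueUrl) {
            attribute(queueUrl, QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES)
        }

        private int attribute(String queueUrl, QueueAttributeName name) {
            def request = GetQueueAttributesRequest.builder()
                    .queueUrl(queueUrl)
                    .attributeNames(name)
                    .build()
            def response = sqsAsyncClient.getQueueAttributes(request).join()
            (response.attributes()[name] ?: '0') as int
        }
    }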

sqs.queue-name has to be defined in the application-itest.yml configuration file:

Moreover, SqsTestClient needs to be injected in the IntegrationTestBase:

That is all the setup necessary for employing the integration test for the SQS queue consumer.

Test implementation

As always, the exact implementation is straightforward when all the necessary pieces are in place. After the event is sent to the queue, we can assert that it has been successfully consumed by verifying that the queue is eventually empty – this means that no exception was thrown in the process. Of course, if the handling logic were more sophisticated, you would be able to verify other conditions related to the database or application state.

7. Optimisations

In the preceding chapters of this article, we have examined numerous patterns that can form a robust foundation for integration testing. Nevertheless, there remains a lot of room for improvement. As you probably noticed, we did not avoid certain logical duplication and boilerplate code. Among the repetitive tasks, the lifecycle management of our mocks stands out, as it is executed in a similar manner for the majority of them. Furthermore, evaluating the overall execution time of the integration test suite reveals potential for optimisation.

A total of 12 test cases distributed across 8 test classes were executed, and the run took around two minutes. That feels like something worth improving.

Let us explore one of the potential ways to address the aforementioned challenges.

Spock Extensions

Spock comes with a powerful extension mechanism, which allows you to hook into a specification’s lifecycle to enrich or alter its behaviour. While there exists a range of built-in extensions with valuable functionalities, this chapter will focus on Spock’s custom extensions.

Spock offers two distinct types of custom extensions: global extensions and annotation-driven local extensions – we will make use of both of them.

For a comprehensive understanding of Spock extensions, the documentation provides further insights; all you need to know for now is that we use them to run custom code for our mocks at specific moments during the integration test suite run.

Objectives

There are multiple objectives that we will try to achieve:

  • Improve test execution time
  • Make mock services easily pluggable
  • Mitigate redundancy and boilerplate code
  • Consolidate mock configuration within a user-friendly framework
  • Decouple test logic from the configuration

Implementation

To achieve the aforementioned objectives, it would be a good idea to create a common interface (Groovy trait) for all the mocks in our suite.
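
A sketch of what that trait could look like (the propertiesToRegister return type is an assumption):

    trait Mock {

        abstract void start()

        abstract void stop()

        // no-op by default; override to reset state between tests
        void cleanup() {
        }

        // no properties exposed by default
        Map<String, String> propertiesToRegister() {
            [:]
        }
    }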

Each mock within the suite has to implement this interface. The start, stop and cleanup methods are simple lifecycle management hooks. propertiesToRegister defines the test properties that each mock exposes and that need to be added to the application context. It is noteworthy that both the cleanup and propertiesToRegister methods have empty default implementations.

Let’s examine the MailMock as an illustrative example of the implementation for the Mock interface.
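
One way it could look, reusing the GreenMail setup from the email chapter (the registered property names are assumptions):

    import com.icegreen.greenmail.util.GreenMail
    import com.icegreen.greenmail.util.ServerSetupTest

    class MailMock implements Mock {

        private final GreenMail greenMail = new GreenMail(ServerSetupTest.SMTP)

        @Override
        void start() {
            greenMail.start()
        }

        @Override
        void stop() {
            greenMail.stop()
        }

        @Override
        void cleanup() {
            // wipe received emails between tests
            greenMail.purgeEmailFromAllMailboxes()
        }

        @Override
        Map<String, String> propertiesToRegister() {
            ['spring.mail.host': 'localhost', 'spring.mail.port': ServerSetupTest.SMTP.port as String]
        }
    }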

Other examples can be found in this package.

The next step is to create a utility class that will gather all mocks and can be used to manage their lifecycle collectively.

It empowers the user to initialise a list of mocks to be managed in the application.

Annotation driven local extension

Let’s create a Spock extension dedicated to initiating all mocks and registering essential properties prior to the first test execution.

To make mocks easily pluggable we can take advantage of a custom annotation.
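
A sketch of the annotation and the enumeration (the enum values and the extension class name are assumptions):

    import java.lang.annotation.ElementType
    import java.lang.annotation.Retention
    import java.lang.annotation.RetentionPolicy
    import java.lang.annotation.Target
    import org.spockframework.runtime.extension.ExtensionAnnotation

    enum Service {
        KAFKA, DATABASE, WIREMOCK, GCS, MAIL, SQS
    }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @ExtensionAnnotation(MocksExtension)     // the extension class is shown below
    @interface Mocks {
        Service[] services() default []
    }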

The Service enumeration represents each of the mocks, while the Mocks annotation, designed with flexibility in mind, accepts a single parameter – services – configuring which services are going to be used.

Finally it’s time to implement the actual extension:
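
A sketch of the extension (MockEnvironment.cleanup() is an assumed counterpart to init()):

    import org.spockframework.runtime.extension.IAnnotationDrivenExtension
    import org.spockframework.runtime.model.SpecInfo

    class MocksExtension implements IAnnotationDrivenExtension<Mocks> {

        @Override
        void visitSpecAnnotation(Mocks annotation, SpecInfo spec) {
            spec.addSharedInitializerInterceptor { invocation ->
                // start the requested mocks (once) before the first test of the spec
                MockEnvironment.init(annotation.services())
                invocation.proceed()
            }
            spec.addCleanupInterceptor { invocation ->
                invocation.proceed()
                // reset the mocks' state after each test
                MockEnvironment.cleanup()
            }
        }
    }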

The extension leverages two pivotal hooks: addSharedInitializerInterceptor, executed before the shared initializer of the annotated specification, and addCleanupInterceptor, executed after the cleanup phase of the specification. You can read more about Spock interceptors in the documentation.

Furthermore, the initialization of all mocks is orchestrated through the MockEnvironment.init() method, with the services parameter extracted from the Mocks annotation.

That is how the extension can be used in practice:
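
For instance (the set of services is illustrative):

    @Mocks(services = [Service.KAFKA, Service.GCS, Service.MAIL, Service.SQS])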

It is impressive that this single line of code is everything required for managing the lifecycle of all of the services that we need for the tests.

Global extension

Finally, and of no lesser importance, upon the completion of all tests, a cleanup of the test infrastructure should be performed. As annotation-driven extensions lack a dedicated hook executed after all tests, we shall resort to using a global extension for this purpose.
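
A sketch of such a global extension (MockEnvironment.stop() is an assumed counterpart to init()):

    import org.spockframework.runtime.extension.IGlobalExtension
    import org.spockframework.runtime.model.SpecInfo

    class MocksGlobalExtension implements IGlobalExtension {

        @Override
        void start() {
            // nothing to do before the suite starts
        }

        @Override
        void visitSpec(SpecInfo spec) {
            // nothing to do per specification
        }

        @Override
        void stop() {
            // runs once, after the whole test suite has finished
            MockEnvironment.stop()
        }
    }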

The stop() method precisely fulfils the requirement, ensuring all mocks are stopped upon the conclusion of the entire test suite.

To activate the global extension we need to add its name to the META-INF/services/org.spockframework.runtime.extension.IGlobalExtension file:
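
Assuming the extension lives in, say, a com.example.itest package, the file would contain a single line:

    com.example.itest.MocksGlobalExtension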

Conclusion

Having both extensions in place, the final form of the IntegrationTestBase class is notably more concise:

Additionally, because of the fact that the mocks are started only once, before the first test is executed, the total execution time decreased significantly as well:

Ultimately, we have effectively realised all predefined objectives. The Spock extensions have proven to be a robust mechanism for systematically organising and consolidating the integration tests setup.

8. Afterword

This article concludes the comprehensive guide to integration testing of JVM backend services, with a predominant focus on Spring Boot, Kotlin, and Spock. In this part particularly, we looked at the more complex integration patterns, including Kafka, email sending, Google Cloud Storage and the AWS Simple Queue Service.

Finally, we wrapped up with the chapter that introduced you to the Spock Extensions, effectively encapsulating the test infrastructure, streamlining the setup, and significantly improving test execution time.

I firmly believe that by adopting the approach outlined in this and the preceding part of the article, one can develop a robust and highly efficient integration testing framework. This framework can be seamlessly extended to incorporate many external services beyond those mentioned.

Useful links
