If you’ve ever woken up in the middle of the night and had to fix a bug in your production code, or if you’ve ever worked on the same project for months on end, you know the importance of testing.

Traditionally there are three or four levels of testing in the software development world: unit tests, integration tests, and end-to-end tests. Unit tests validate that your code’s explicit logic does what you wanted it to do, while end-to-end tests ensure your app will withstand real-life scenarios, including interaction with third parties and data sources. Then there are integration tests, which are less well-defined and are the focus of this blog post.

Sometimes there is another level of testing – smoke tests – which are done in staging environments as part of the deployment process. These are done to ensure that an existing production setup works well with the new version of the service.

[Image: the testing hierarchy]

What are integration tests?

It depends on whom you’re asking. Some readers will have their own definition (and feel free to share yours in the comments below.) Personally, I envision them as end-to-end tests with dependencies that can (and should) be faked. Integration tests should not be concerned with implementation, but rather with the direct dependencies you want to test.

Direct dependencies are dependencies that your service relies on in order to run properly, even if they don’t necessarily provide business value to what your service does. A direct dependency can be the networking framework you use, the object structure you define in your endpoints, or your configuration reading mechanism. All of these are critical for your service to work, but are not suitable for unit testing since they are not part of the explicit logic of your code.

On the other hand, indirect dependencies are usually other APIs in your architecture, which may be developed by another team in your company or by a third-party service.

A good integration test will focus on testing direct dependencies. A few examples can be: testing that a service can listen on the right port, testing that the service exposes the right endpoints, testing that the service validates the body of incoming requests, or testing that the service interacts properly with external APIs. In order to perform such tests, you would have to fake your indirect dependencies.

So far so good, but why is this a problem in the world of microservices? Microservices are small logical units that interact with each other over HTTP, giving the development team the freedom to replace different parts and logic as the system evolves. Unfortunately, the smaller your units are, the more dependencies each of them has, and that makes faking indirect dependencies a complex task.

Here we’ll check out a demonstration of integration tests using Docker Compose. We’re going to fake dependencies quickly with an open source docker image called stubby4j.

Case study

Let’s take a look at an example using this code. In this example, we’ll create a Node.js service that registers a new office dog to the system. The service saves the dog in a database, creates a schedule for which days of the week the dog is allowed in the office, and notifies the owner via email.

Let’s do a breakdown of what we see in the repository. Under ‘src’, you can find the code of the service. Most of this code is actually clients that call other services, just as you would expect. The service doesn’t have much explicit business logic, which is why there are almost no unit tests inside the tests folder (plus, unit tests are not the focus of this example). Let’s take a look at the scripts section in the package.json file:

"scripts": {
   "test": "mocha test/unit/*.test.js",
   "test-integration": "docker-compose -f ./test/integration/docker-compose.yaml up --force-recreate --build --remov
e-orphans --exit-code-from integration-tests integration-tests && docker-compose -f ./test/integration/docker-compose.yaml down --remove-orphans",
   "test-integration-verbose": "docker-compose -f ./test/integration/docker-compose.yaml up --force-recreate --build --remove-orphans --exit-code-from integration-tests integration-tests dogs-api employees-api dog-scheduling-service email-sending-service && docker-compose -f ./test/integration/docker-compose.yaml down --remove-orphans",
   "start": "node src/server.js"
 },

There are two interesting test commands: one for unit tests and one for integration tests (there’s also a command for integration tests with more detailed output from our dependencies). We are using Docker Compose here for the integration tests, which means each of our dependencies will be faked by an entire docker image running an actual HTTP (or MongoDB) server.

Next, the integration-tests folder. You will see that it also has a Dockerfile, and its docker command runs the tests. This shows how decoupled our integration tests can be – they can even be written in a different language from our service. Here they run in Node.js with mocha, and they have a package.json file with a whole different set of dependencies. A small note: we’re using a script called wait-for-it.sh to ensure our dependencies are up and running before the tests start.
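As a rough sketch, the integration-tests Dockerfile could look something like this (the base image, file layout, and the `office-dogs-service:3000` hostname are assumptions for illustration, not taken from the repository):

```dockerfile
# Runs the integration tests in their own container,
# independent of the service's language and runtime.
FROM node:18
WORKDIR /tests
COPY package.json .
RUN npm install
COPY . .
# wait-for-it.sh blocks until the given host:port accepts TCP connections,
# so the tests start only once the service container is reachable.
CMD ["./wait-for-it.sh", "office-dogs-service:3000", "--", "npm", "test"]
```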

We are using a docker image called stubby4j to create our fake dependencies. This open source container lets us define a service using a yaml file with hard-coded responses for requests. (See the stubby4j documentation for the full details.) For example, here is the yaml file that describes one of our dependencies, the Employees API:

- request:
    method: GET
    url: /api/v1/employee/11/emailAddress
  response:
    status: 200
    body: >
      "ricksanchez@soluto.com"

Stubby4j example: Employees API fake server definition

Our Docker Compose file defines our tests, our service, and all of its dependencies. This is the orchestration for the testing environment.
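A compose file along these lines ties everything together. To be clear, the service names, build paths, environment variable, and the stub file mount path below are illustrative assumptions rather than the repository’s actual configuration (check the stubby4j image’s docs for the exact paths and ports it expects; 8882 is stubby4j’s default stub port):

```yaml
version: "3"
services:
  integration-tests:
    build: .                 # the tests' own Dockerfile
    depends_on:
      - office-dogs-service
  office-dogs-service:
    build: ../..             # the service under test
    environment:
      # Point the service's client at the fake instead of the real API.
      - EMPLOYEES_API_URL=http://employees-api:8882
  employees-api:
    image: azagniotov/stubby4j
    volumes:
      # Mount our hard-coded request/response definitions into the container.
      - ./stubs/employees-api.yaml:/home/stubby4j/data/main.yaml
```

Because every dependency is just another service entry, swapping a fake for the real image later is a one-line change.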

When we run the integration tests via Docker Compose, it will set up all of our dependencies, and once they are ready, the tests will run.

Wrap-up

By using Docker Compose along with stubby4j, we were able to create fake dependencies for our service’s integration tests quickly and easily. Furthermore, turning these tests into end-to-end tests is quite simple: just create a docker-compose file with the actual images and configuration, and you’re done.

Integration testing with Docker Compose gives us a layer of testing that protects a vulnerable gray area by ensuring our service lives up to a contract (which is stated explicitly in the test body). These tests let us see our service as a black box – a docker image that should run in a given environment – and by faking its dependencies, we can also exercise extreme cases that might be difficult to reproduce in end-to-end tests.


Also published on Medium.