Component Testing of Frontends: Karate Netty Project
In this tutorial, see how Karate Netty, used for setting up a mock server in component tests, strikes a good balance between ease of use and feature richness.
In my previous article, I highlighted the nature of component tests. Their scope should be a single executable where all external dependencies have been replaced by configurable test doubles. Rich front-end applications are a good candidate for such tests. The backend they interact with should return predictable and easily configurable responses to cater to relevant test scenarios. The real backend is usually not cut out for that task. In this continuation, I will therefore look in some more detail at the Karate Netty project.
It must be worth the effort to set up a mock server over just using the actual backend. If it is awkward to set up and hard to learn, it may be more prudent not to use a mock at all and run your tests in an end-to-end fashion. But that only works when the backend is lightweight and can run on a local machine. In complex microservices environments, that is rarely the case anymore.
A plausible wish-list would be:
- It must be easy and quick to set up and run.
- Configuration should have a gentle learning curve; human-readable syntax.
- However, it should also be sufficiently feature-rich to cater to all request/response pairs we need in our test scenarios.
Most features of Karate Netty support those requirements:
- You can import the mock server as a library and incorporate it in the lifecycle of a JUnit test, or as a standalone executable jar file from the command line.
- Configuration syntax is based on the Gherkin language, instead of a hierarchical data format like JSON, YAML, or XML. Interactions are defined as Gherkin scenarios. This makes it very readable.
- It has a rich feature set to define requests and responses. You can even make the server stateful and embed custom JavaScript code. Go easy on these features, though. You should rarely need to copy business logic into your test code or you’ll end up validating the test code, not the production code.
- It’s scalable and promotes reuse. When the project grows, you can split scenarios across multiple feature files and reuse common request and response payloads from JSON files.
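As a sketch of the stateful features mentioned above: variables defined in a mock's Background section are shared across incoming requests, so a scenario can, for example, count upload attempts. The endpoint and response codes below are illustrative, not taken from a real API:

```gherkin
Feature: stateful mock (illustrative sketch)

Background:
# Background variables in a Karate mock persist across all incoming requests
* def attempts = 0

Scenario: pathMatches('/api/v1/offer') && methodIs('POST')
# embedded JavaScript expressions can read and update that shared state
* def attempts = attempts + 1
* def response = attempts < 3 ? { code: 'RETRY' } : { code: 'SUCCESS' }
```

Use such state sparingly: the moment the mock starts encoding real business rules, you are testing your test code rather than the production code.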
Hand-Written or Automated?
Most backends expose their APIs in an open format like OpenAPI, and there is already plenty of tooling around to generate a mock server from such specifications, like WireMock or Postman. With Karate Studio (a subscription service), you can create feature files from an OpenAPI spec and save yourself some typing.
Although this gives you a minimally working backend, the responses are not yet optimized for test purposes. This is because the relevant success/failure scenarios implicit in the business rules are not explicit in the spec. For each endpoint, we need several separate scenarios for combinations of input and output, insofar as they are relevant to the frontend user flow. Any request/response alternatives that do not affect the flow differently can be safely ignored.
However, note that the frontend and backend logic can have very different requirements. Remember our online mortgage application tool. Imagine that at some point in the process you are required to upload proof of your employment (the validation may even be performed by a human, which is not relevant here). Say that this step can yield six different response codes. One constitutes a rejection. The flow terminates. Two other scenarios ask you to upload another document, each with its specific message. Lastly, there are three success scenarios that each present a different provisional set of lending conditions and continue the flow.
Now, in a component test for the backend, all six scenarios should be tested exhaustively. But for the frontend, only three are useful to test: the rejection, the request for documentation, and the success scenario. The two requests for documents and the three success scenarios differ only in textual content, which is presented to the user as it is received from the backend. For our test purposes, it can be anything.
To test these three flows, we need at least an equal number of upload files to trigger the expected responses. Imagine how awkward that would be to set up on a real backend, which is why we often don’t bother. In Karate, however, you only need three PDF files named rejected.pdf, further_docs.pdf, and success.pdf. They can even be empty: only the name is relevant. This is how you would configure it:
Scenario: pathMatches('/api/v1/offer') && requestParts['file'][0].filename == 'rejected.pdf'
* def response = {"code": "REJECTED", "message": "Your request has been rejected"}
Scenario: pathMatches('/api/v1/offer') && requestParts['file'][0].filename == 'further_docs.pdf'
* def response = {"code": "FURTHER_DOCS", "message": "Please upload "}
Scenario: pathMatches('/api/v1/offer') && requestParts['file'][0].filename == 'success.pdf'
* def response = {"code": "SUCCESS", "message": "You may proceed"}
Do check out the excellent documentation as linked earlier; this example only scratches the surface of what’s possible. We want to intercept calls to /api/v1/offer. If that URL handles other methods than POST, we need to add && methodIs('POST') to the relevant scenario. The requestParts clause in JavaScript syntax is more complex and does detailed matching on the properties of uploaded files, of which there can be more than one in a single request, hence the index.
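Putting those clauses together, a scenario that only matches a POST upload of a specific file might look like the sketch below. The responseStatus value is an assumption for illustration, not part of the original example:

```gherkin
Scenario: pathMatches('/api/v1/offer') && methodIs('POST') && requestParts['file'][0].filename == 'rejected.pdf'
# responseStatus sets the HTTP status code returned by the mock
* def responseStatus = 400
* def response = { code: 'REJECTED', message: 'Your request has been rejected' }
```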
Lifecycle Management
It is very easy to create a Docker container from a standalone Karate executable for your server. Download the executable jar or zip, and place it next to your feature file(s) alongside the following Dockerfile:
FROM openjdk:17
WORKDIR /
COPY . .
EXPOSE 8080
CMD java -jar karate.jar -m mock.feature -m mock2.feature -p 8080
This copies the feature files and Karate jar to a Java 17 image and starts the service on port 8080.
You can also include this in a docker-compose set up together with a running Node server. If you use the Cypress framework for testing, that file would look something like this:
version: '3'
services:
  webapp:
    image: webapp
    environment:
      - NODE_ENV=docker
    links:
      - "backend:backend"
    depends_on:
      - backend
    ports:
      - "8080:8080"
  backend:
    image: karate
    ports:
      - "8090:8080"
  cypress:
    image: "cypress/included:9.5.3"
    links:
      - "webapp:webapp"
The frontend application points to the mock server by its host name "backend" in the Docker network. You would need to set up a configuration that depends on the NODE_ENV environment variable, set to docker in this example.
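One way to wire that up on the frontend side is a small config helper that maps NODE_ENV to a backend base URL. The names and URLs below are illustrative assumptions, not from the article:

```javascript
// Hypothetical helper: resolve the backend base URL from NODE_ENV.
// Inside the Docker network the mock is reachable as http://backend:8080;
// on a developer machine it is published on localhost:8090 (see the
// "8090:8080" port mapping in the docker-compose file above).
const BACKENDS = {
  docker: 'http://backend:8080',
  development: 'http://localhost:8090',
};

function backendBaseUrl(env = process.env.NODE_ENV) {
  // fall back to the local development URL for unknown environments
  return BACKENDS[env] || BACKENDS.development;
}

module.exports = { backendBaseUrl };
```

Keeping this lookup in one module means the rest of the frontend code never hardcodes a backend address, so the same build runs unchanged against the mock or a real backend.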
Not Just for Test Automation
One additional advantage of Karate’s readable syntax is to help you during the development of frontend components, not just for testing. It can be a powerful communication tool between frontend and backend and lets you develop your frontend code against a stable and representative backend long before the actual backend is ready for use.