AWS CDK: Infrastructure as Abstract Data Types, Part 3
At the end of the first part of this CDK series, we promised to demonstrate how to programmatically manage the S3 bucket created as a part of our stack. Let's see it now.
In this third part of our CDK series, the project cdk-quarkus-s3, in the same Git repository, will be used to illustrate a couple of advanced Quarkus-to-AWS integration features, together with several tricks specific to RESTEasy, Red Hat's implementation of the Jakarta REST specification.
Let's start by looking at the project's pom.xml file, which drives the Maven build process. You'll see the following dependencies:
...
<dependency>
  <groupId>io.quarkiverse.amazonservices</groupId>
  <artifactId>quarkus-amazon-s3</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-amazon-lambda-http</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-jackson</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-client</artifactId>
</dependency>
...
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>netty-nio-client</artifactId>
</dependency>
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>url-connection-client</artifactId>
</dependency>
...
The first dependency in the listing above, quarkus-amazon-s3, is a Quarkus extension allowing your code to act as an AWS S3 client: to store and delete objects in buckets, to implement backup and recovery strategies, to archive data, etc.
The next dependency, quarkus-amazon-lambda-http, is another Quarkus extension that aims at supporting the AWS HTTP Gateway API. As the reader already knows from the two previous parts of this series, with Quarkus, one can deploy a REST API as an AWS Lambda function using either the AWS HTTP Gateway API or the AWS REST Gateway API. Here we'll be using the former, which is less expensive, hence the mentioned extension. If we wanted to use the AWS REST Gateway API, we would have to replace the quarkus-amazon-lambda-http extension by the quarkus-amazon-lambda-rest one, as sketched below.
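For illustration, the swap would look something like this in the pom.xml file (a minimal sketch, assuming the version is managed by the Quarkus BOM, as for the other dependencies in the project):

<!-- Replaces quarkus-amazon-lambda-http to target the AWS REST Gateway API instead -->
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-amazon-lambda-rest</artifactId>
</dependency>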
What To Expect
In this project, we'll be using Quarkus 3.11 which, at the time of this writing, is the most recent release. Some of the RESTEasy dependencies have changed compared with former versions, hence the dependency quarkus-rest-jackson, which now replaces the quarkus-resteasy one used in 3.10 and before. Also, the quarkus-rest-client extension, implementing the Eclipse MicroProfile (MP) REST Client specification, is needed for test purposes, as we will see in a moment. Last but not least, the url-connection-client Quarkus extension is needed because the MP REST Client implementation uses it by default and, consequently, it has to be included in the build process.
Now, let's look at our new REST API. Open the Java class S3FileManagementApi in the cdk-quarkus-s3 project and you'll see that it defines three operations: download file, upload file, and list files. All three use the same S3 bucket created as a part of the CDK application's stack.
@Path("/s3")
public class S3FileManagementApi
{
@Inject
S3Client s3;
@ConfigProperty(name = "bucket.name")
String bucketName;
@POST
@Path("upload")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public Response uploadFile(@Valid FileMetadata fileMetadata) throws Exception
{
PutObjectRequest request = PutObjectRequest.builder()
.bucket(bucketName)
.key(fileMetadata.filename)
.contentType(fileMetadata.mimetype)
.build();
s3.putObject(request, RequestBody.fromFile(fileMetadata.file));
return Response.ok().status(Response.Status.CREATED).build();
}
...
}
Explaining the Code
The code fragment above reproduces only the upload file operation, the other two being very similar (a sketch of them follows a bit further below). Observe how simple the instantiation of the S3Client is, thanks to Quarkus CDI, which avoids several lines of boilerplate code. Also, we're using the Eclipse MP Config specification to define the name of the destination S3 bucket.
Our endpoint uploadFile() accepts POST requests and consumes MULTIPART_FORM_DATA MIME data, structured in two distinct parts: one for the payload and the other one containing the file to be uploaded. The endpoint takes an input parameter of the class FileMetadata, shown below:
public class FileMetadata
{
    @RestForm
    @NotNull
    public File file;

    @RestForm
    @PartType(MediaType.TEXT_PLAIN)
    @NotEmpty
    @Size(min = 3, max = 40)
    public String filename;

    @RestForm
    @PartType(MediaType.TEXT_PLAIN)
    @NotEmpty
    @Size(min = 10, max = 127)
    public String mimetype;
    ...
}
This class is a data object grouping the file to be uploaded together with its name and MIME type. It uses the RESTEasy-specific @RestForm annotation to handle HTTP requests having multipart/form-data as their content type. The jakarta.validation.constraints annotations are very practical as well for validation purposes: for instance, with the constraints above, a request carrying a filename shorter than three characters would be rejected with an HTTP 400 status before the endpoint's body even runs.
Coming back to our endpoint above, it creates a PutObjectRequest having as input arguments the destination bucket name, a key that uniquely identifies the stored file in the bucket (in this case, the file name), and the associated MIME type, for example TEXT_PLAIN for a text file. Once the PutObjectRequest is created, it is sent to the AWS S3 service via an HTTP PUT request. Please notice how easily the file to be uploaded is inserted into the request body, using the RequestBody.fromFile(...) statement.
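For completeness, here is a minimal sketch of what the other two operations might look like, reusing the injected s3 client and bucketName shown above. The FileObject DTO (exposing objectKey and size fields) and the exact signatures are assumptions for illustration, not the project's literal code:

    @GET
    @Path("list")
    @Produces(MediaType.APPLICATION_JSON)
    public List<FileObject> listFiles()
    {
        // List all objects in the bucket and map each one to a simple DTO
        ListObjectsRequest listRequest = ListObjectsRequest.builder().bucket(bucketName).build();
        return s3.listObjects(listRequest).contents().stream()
            .map(s3Object -> new FileObject(s3Object.key(), s3Object.size()))
            .collect(Collectors.toList());
    }

    @GET
    @Path("download/{objectKey}")
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    public Response downloadFile(@PathParam("objectKey") String objectKey)
    {
        // Fetch the object's bytes and return them as the response body
        GetObjectRequest getRequest = GetObjectRequest.builder().bucket(bucketName).key(objectKey).build();
        return Response.ok(s3.getObjectAsBytes(getRequest).asByteArray()).build();
    }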
That's all as far as the REST API exposed as an AWS Lambda function is concerned. Now let's look at what's new in our CDK application's stack:
...
HttpApi httpApi = HttpApi.Builder.create(this, "HttpApiGatewayIntegration")
    .defaultIntegration(HttpLambdaIntegration.Builder.create("HttpApiGatewayIntegration", function).build())
    .build();
httpApiGatewayUrl = httpApi.getUrl();
CfnOutput.Builder.create(this, "HttpApiGatewayUrlOutput").value(httpApi.getUrl()).build();
...
These lines have been added to the LambdaWithBucketConstruct class in the cdk-simple-construct project. We want the Lambda function we're creating in the current stack to sit behind an HTTP Gateway and to serve as its backend, which has some advantages. So we need to create an integration for our Lambda function.
The notion of integration, as defined by AWS, means providing a backend for an API endpoint. In the case of the HTTP Gateway, one or more backends should be provided for each of the API Gateway's endpoints. The integrations have their own requests and responses, distinct from those of the API itself. There are two integration types:
- Lambda integrations, where the backend is a Lambda function;
- HTTP integrations, where the backend might be any deployed web application.
In our example, we're using Lambda integration, of course. There are two types of Lambda integrations as well:
- Lambda proxy integration, where the definition of the integration's request and response, as well as their mapping to/from the original ones, isn't required as it is automatically provided;
- Lambda non-proxy integration, where we need to explicitly specify how the incoming request data is mapped to the integration request and how the resulting integration response data is mapped to the method response.
For simplicity's sake, we're using the first type in our project. This is what the statement .defaultIntegration(...) above is doing. Once the integration is created, we need to display the URL of the newly created API Gateway, for which our Lambda function acts as the backend. This way, in addition to being able to directly invoke our Lambda function, as we did previously, we'll be able to do it through the API Gateway. And in a project with several dozens of REST endpoints, it's very important to have a single contact point where security policies, logging, and other cross-cutting concerns can be applied. The API Gateway is ideal as such a single contact point.
The project comes with a couple of unit and integration tests. For example, the class S3FileManagementTest performs unit testing using REST Assured, as shown below:
@QuarkusTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class S3FileManagementTest
{
    private static File readme = new File("./src/test/resources/README.md");

    @Test
    @Order(10)
    public void testUploadFile()
    {
        given()
            .contentType(MediaType.MULTIPART_FORM_DATA)
            .multiPart("file", readme)
            .multiPart("filename", "README.md")
            .multiPart("mimetype", MediaType.TEXT_PLAIN)
            .when()
            .post("/s3/upload")
            .then()
            .statusCode(HttpStatus.SC_CREATED);
    }

    @Test
    @Order(20)
    public void testListFiles()
    {
        given()
            .when().get("/s3/list")
            .then()
            .statusCode(200)
            .body("size()", equalTo(1))
            .body("[0].objectKey", equalTo("README.md"))
            .body("[0].size", greaterThan(0));
    }

    @Test
    @Order(30)
    public void testDownloadFile() throws IOException
    {
        given()
            .pathParam("objectKey", "README.md")
            .when().get("/s3/download/{objectKey}")
            .then()
            .statusCode(200)
            .body(equalTo(Files.readString(readme.toPath())));
    }
}
This unit test starts by uploading the file README.md to the S3 bucket defined for this purpose. Then it lists all the files present in the bucket and finishes by downloading the file just uploaded. Please notice the following lines in the application.properties file:
bucket.name=my-bucket-8701
%test.quarkus.s3.devservices.buckets=${bucket.name}
The first one defines the name of the destination bucket and the second one automatically creates it; this only works while executed via the Quarkus Mock server. While this unit test is executed in the Maven test phase, against a localstack instance run by testcontainers and automatically managed by Quarkus, the integration test, S3FileManagementIT, is executed against the real AWS infrastructure, once our CDK application is deployed.
The integration tests use a different paradigm: instead of REST Assured, which is very practical for unit tests, they take advantage of the Eclipse MP REST Client specification, implemented by Quarkus, as shown in the following snippet:
@QuarkusTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class S3FileManagementIT
{
    private static File readme = new File("./src/test/resources/README.md");

    @Inject
    @RestClient
    S3FileManagementClient s3FileManagementTestClient;

    @Inject
    @ConfigProperty(name = "base_uri/mp-rest/url")
    String baseURI;

    @Test
    @Order(40)
    public void testUploadFile() throws Exception
    {
        Response response = s3FileManagementTestClient.uploadFile(new FileMetadata(readme, "README.md", MediaType.TEXT_PLAIN));
        assertThat(response).isNotNull();
        assertThat(response.getStatusInfo().toEnum()).isEqualTo(Response.Status.CREATED);
    }
    ...
}
We inject S3FileManagementClient, which is a simple interface defining our API endpoints, and Quarkus does the rest: it generates the required client code. We just have to invoke endpoints on this interface, for example uploadFile(...), and that's all. Have a look at S3FileManagementClient in the cdk-quarkus-s3 project to see how everything works, and please notice how the annotation @RegisterRestClient defines a configuration key, named base_uri, used further in the deploy.sh script.
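For the reader's convenience, here is a minimal sketch of what such an interface could look like; the exact method list and annotations in the actual project may differ:

@Path("/s3")
@RegisterRestClient(configKey = "base_uri")
public interface S3FileManagementClient
{
    // Uploads a file together with its metadata as a multipart form
    @POST
    @Path("upload")
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    Response uploadFile(FileMetadata fileMetadata);

    // Downloads the object identified by the given key
    @GET
    @Path("download/{objectKey}")
    Response downloadFile(@PathParam("objectKey") String objectKey);
}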
Now, to test against the real AWS infrastructure, you need to execute the deploy.sh script, as follows:
$ cd cdk
$ ./deploy.sh cdk-quarkus/cdk-quarkus-api-gateway cdk-quarkus/cdk-quarkus-s3
This will compile and build the application, execute the unit tests, deploy the CloudFormation stack on AWS, and execute the integration tests against this infrastructure. At the end of the execution, you should see something like:
Outputs:
QuarkusApiGatewayStack.FunctionURLOutput = https://<generated>.lambda-url.eu-west-3.on.aws/
QuarkusApiGatewayStack.LambdaWithBucketConstructIdHttpApiGatewayUrlOutput = https://<generated>.execute-api.eu-west-3.amazonaws.com/
Stack ARN:
arn:aws:cloudformation:eu-west-3:...:stack/QuarkusApiGatewayStack/<generated>
Now, in addition to the Lambda function URL that you've already seen in our previous examples, you can see the HTTP API Gateway URL, which you can now use for testing purposes instead of the Lambda one.
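Assuming the gateway URL shown above, a quick smoke test with curl might look like the following (replace the generated placeholder with your own URL):

$ curl -F "file=@README.md" -F "filename=README.md" -F "mimetype=text/plain" \
    https://<generated>.execute-api.eu-west-3.amazonaws.com/s3/upload
$ curl https://<generated>.execute-api.eu-west-3.amazonaws.com/s3/list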
An E2E test case, exported from Postman (S3FileManagementPostmanIT), is provided as well. It is executed via the Docker image postman/newman:latest, running in testcontainers. Here is a snippet:
@QuarkusTest
public class S3FileManagementPostmanIT
{
    ...
    private static GenericContainer<?> postman = new GenericContainer<>("postman/newman")
        .withNetwork(Network.newNetwork())
        .withCopyFileToContainer(MountableFile.forClasspathResource("postman/AWS.postman_collection.json"),
            "/etc/newman/AWS.postman_collection.json")
        .withStartupCheckStrategy(new OneShotStartupCheckStrategy().withTimeout(Duration.ofSeconds(10)));

    @Test
    public void run()
    {
        String apiEndpoint = System.getenv("API_ENDPOINT");
        assertThat(apiEndpoint).isNotEmpty();
        postman.withCommand("run", "AWS.postman_collection.json",
            "--global-var base_uri=" + apiEndpoint.substring(8).replaceAll(".$", ""));
        postman.start();
        LOG.info(postman.getLogs());
        assertThat(postman.getCurrentContainerInfo().getState().getExitCodeLong()).isZero();
        postman.stop();
    }
}
Conclusion
As you can see, after starting the postman/newman:latest image with testcontainers, we run the E2E test case exported from Postman by passing it the --global-var option, so as to initialize the global variable labeled base_uri with the value of the REST API URL saved by the deploy.sh script in the API_ENDPOINT environment variable. Unfortunately, probably due to a bug, the postman/newman image doesn't recognize this option; hence, waiting for this issue to be fixed, this test is disabled for now.
You can, of course, import the file AWS.postman_collection.json into Postman and run it this way, after having replaced the global variable {{base_uri}} with the current value of the API URL generated by AWS.
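Alternatively, if you have the newman CLI installed locally, something like the following should run the collection directly (a sketch; note that base_uri is expected without the https:// scheme, since the Java snippet above strips it):

$ newman run AWS.postman_collection.json \
    --global-var "base_uri=<generated>.execute-api.eu-west-3.amazonaws.com"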
Enjoy!