Optimizing Java Applications for AWS Lambda
Learn how to optimize Java applications for AWS Lambda, tackle cold starts, improve concurrency, and optimize Spring Boot for serverless environments.
Java has long been a trusted language for enterprise applications thanks to its versatility and its ability to run seamlessly across platforms. But as serverless platforms like AWS Lambda gain momentum, deploying Java applications on them presents unique challenges, notably bloated deployment packages and long initialization times.
This has driven the popularity of languages such as Go, Node.js, and Python for applications that would traditionally have been built in Java. A closer look at the inherent struggles of JVM-based applications reveals the prominent ones: slow cold starts, high memory consumption, and runtime inefficiencies, all of which make lighter runtimes more attractive in cloud-native environments.
However, with the recent development of modern Java features, frameworks, and tools like GraalVM, virtual threads, Spring Native, and AWS SnapStart, we are now able to build nimble applications to overcome these limitations. These innovations not only improve performance but also enable cost-effective scalability.
This guide explores how to optimize Java applications for AWS Lambda, grouped into three core areas:
- Reducing cold starts
- Improving concurrency and performance
- Optimizing Spring Boot for serverless
Reducing Cold Starts
A cold start happens when your application receives its first request: AWS has to create a temporary environment for the function to run in. This involves loading the function code, setting up all the necessary libraries and dependencies, and initializing the runtime environment. While this is happening, your request has to wait, which can lead to a noticeable delay in the response. Once everything is warmed up, response times improve significantly. Still, because Lambda instances are created on demand and, unlike traditional servers, are short-lived, cold starts become more pronounced for applications that are packaged as heavier executable (JAR) files or that take longer to initialize.
Most modern enterprise web applications built on Java use supporting frameworks like Spring Boot, Hibernate, or Liquibase for easier development and better management of dependencies. However, including those complex frameworks increases the initialization time, causing a spike in cold start duration.
This section covers strategies to minimize these delays and ensure your application is ready to handle on-demand traffic.
Native Compilation With GraalVM
GraalVM is a high-performance runtime that enables Ahead-of-Time (AOT) compilation, transforming Java applications into native executables. Native images eliminate the need for JVM startup, drastically improving cold start times and reducing memory consumption. Below are some of the benefits of using GraalVM.
Benefits
- Faster cold starts: Native binaries bypass JVM initialization and class loading, resulting in speedier initialization.
- Lower memory footprint: Only the necessary classes and dependencies are packaged, reducing resource usage.
- Custom runtime support: Native binaries integrate seamlessly with AWS Lambda's custom runtimes.
Steps to Use GraalVM
1. Add the GraalVM native-image Maven plugin (declared under build plugins rather than as a dependency):
<plugin>
    <groupId>org.graalvm.nativeimage</groupId>
    <artifactId>native-image-maven-plugin</artifactId>
</plugin>
2. Add Spring Native dependencies:
<dependency>
    <groupId>org.springframework.experimental</groupId>
    <artifactId>spring-native</artifactId>
</dependency>
3. Build the native image:
mvn clean package -Pnative
By leveraging a custom runtime or frameworks like Quarkus or Spring Native, you can package and deploy your application as a native binary.
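As a point of reference, the handler itself stays plain Java; below is a minimal sketch of the kind of function that could be compiled into a native image. It assumes the standard aws-lambda-java-core library is on the classpath, and GreetingHandler is a hypothetical class name.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class GreetingHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        // Runs inside the natively compiled binary, so there is no JVM warm-up
        // before this method is reached.
        return "Hello, " + input;
    }
}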
AWS Lambda SnapStart
AWS SnapStart is a feature that helps Lambda functions start up much faster. Think of it like a chef preparing ingredients in advance: instead of the function loading and initializing everything on every cold invocation, SnapStart creates a ‘warm’ version of your function by pre-initializing it and taking a snapshot of the initialized runtime environment. This pre-initialized version is ready to go, much like a chef having all the ingredients measured and ready to cook, which removes the delay associated with cold starts and makes your Lambda functions much more responsive, especially for those initial requests.
How to Enable SnapStart
- Enable SnapStart on your function in the AWS Management Console, through infrastructure as code, or with the AWS CLI (see the commands after this list).
- Deploy and validate performance improvements.
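For example, assuming a function named MyFunction, SnapStart can be turned on and then activated by publishing a version with the AWS CLI (syntax worth verifying against your CLI version):
aws lambda update-function-configuration --function-name MyFunction --snap-start ApplyOn=PublishedVersions
aws lambda publish-version --function-name MyFunction
Note that SnapStart applies to published versions and their aliases, not to the unpublished $LATEST version.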
Benefits of SnapStart
- Eliminates Cold Starts: Skip time-consuming JVM initialization.
- Seamless Integration: Works with existing Java functions without significant code changes.
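If your function holds state that should not be frozen into the snapshot, such as open database connections or cached credentials, SnapStart supports runtime hooks based on the open-source CRaC API. The sketch below assumes the org.crac dependency is on the classpath; DatabaseConnectionHook is a hypothetical class name.
import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

public class DatabaseConnectionHook implements Resource {

    public DatabaseConnectionHook() {
        // Register this hook so it is called around snapshot creation and restore.
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        // Close connections or discard state that should not be captured in the snapshot.
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        // Re-establish connections after the snapshot is restored for an invocation.
    }
}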
Class Data Sharing (CDS)
Class Data Sharing (CDS) allows the JVM to share common class data, such as class metadata and static fields, across multiple Java processes. This means that when your Lambda function starts, it doesn't have to load all this information from scratch. Instead, it can reuse a pre-built shared archive, which significantly reduces the time it takes to initialize the JVM and start executing your code. This results in faster responses to the first few invocations of your Lambda function, improving the overall user experience.
Steps to Enable CDS
1. Record the classes your application loads (com.example.Main below is a placeholder for your actual main class):
java -XX:DumpLoadedClassList=classes.lst -cp myapp.jar com.example.Main
2. Generate the CDS archive from the recorded class list:
java -Xshare:dump -XX:SharedClassListFile=classes.lst -XX:SharedArchiveFile=app-cds.jsa -cp myapp.jar
3. Use the CDS archive when running the application:
java -Xshare:on -XX:SharedArchiveFile=app-cds.jsa -cp myapp.jar com.example.Main
Benefits
- Faster JVM startup: Preloaded metadata eliminates class-loading overhead.
- Lower memory usage: Shared archives save memory across JVM processes.
Improving Concurrency and Performance
Virtual threads, introduced in Java 21, offer a scalable threading model that lets developers write concurrent code without the overhead of managing traditional threads. In a Lambda function, virtual threads make it possible to fan out a much larger number of concurrent tasks, such as downstream I/O calls, with fewer resources. This translates to improved throughput, reduced latency, and better resource utilization within the Lambda environment.
Leveraging Virtual Threads
Virtual threads are lightweight threads managed by the JVM. They are much easier to create and manage, consume far fewer resources, and can handle a vast number of concurrent tasks with greater efficiency. While system threads remain valuable for certain use cases, virtual threads offer a more lightweight and scalable approach to concurrency in Java, enabling developers to build more efficient and responsive applications.
Benefits
- Massive concurrency: Enables applications to handle millions of concurrent tasks for I/O-bound workloads without exhausting system resources.
- Simplified code: Developers can write straightforward, synchronous code without resorting to reactive frameworks.
- Efficient resource utilization: Reduces context-switching overhead compared to traditional platform threads.
How to Use Virtual Threads
Create a Virtual Thread Executor:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadExample {
    public static void main(String[] args) {
        // Create an ExecutorService that starts a new virtual thread per task;
        // try-with-resources closes the executor and waits for submitted tasks.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            // Submit a task to the executor
            executor.submit(() -> {
                var data = fetchDataFromAPI();
                System.out.println(data);
            });
        }
    }

    // Placeholder for an I/O-bound call, e.g., an HTTP request to a downstream API.
    private static String fetchDataFromAPI() {
        return "response payload";
    }
}
Virtual Threads With Spring Web MVC
In Spring Web MVC, HTTP requests are processed by default on a pool of traditional platform (OS) threads. Switching to virtual threads allows the application to scale more effectively by handling each request on a lightweight virtual thread, enabling higher concurrency with minimal memory overhead.
Similarly, submitting Spring Data JPA queries to a virtual-thread executor lets database calls run off the request thread without tying up scarce platform threads.
How to Configure Virtual Threads in Spring Web MVC
Customize Tomcat's Executor:
@Bean
public TomcatProtocolHandlerCustomizer<?> protocolHandlerVirtualThreads() {
    // Hand each incoming request to a new virtual thread instead of Tomcat's default worker pool.
    return protocolHandler -> protocolHandler.setExecutor(Executors.newVirtualThreadPerTaskExecutor());
}
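If you are on Spring Boot 3.2 or later, the same behavior can be switched on with a single property instead of a custom bean (worth verifying against your Boot version):
spring.threads.virtual.enabled=true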
Asynchronous Query Execution With Virtual Threads:
@Service
public class UserService {

    private final UserRepository userRepository;
    // One lightweight virtual thread per submitted task.
    private final ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    public void findUserData() {
        // Run the query off the caller's thread on a virtual thread.
        executor.submit(() -> {
            var user = userRepository.findById(1L);
            System.out.println("User: " + user);
        });
    }
}
Benefits
- Improved scalability: Allows applications to handle thousands of concurrent HTTP requests efficiently.
- Lower latency: Reduced resource contention results in faster response times.
Optimizing Spring Boot for Serverless
Spring Boot is a powerful framework for enterprise applications. Its feature-rich nature makes dependency management far easier, but those same features can introduce unnecessary overhead in serverless environments. Careful optimizations can make Spring Boot serverless-ready.
Enable Lazy Initialization
By initializing only critical dependencies during startup and delaying the creation of the remaining beans until they are needed, you can reduce startup time and memory usage. Since lazy initialization is not the default behavior, you need the configuration below.
SpringApplication app = new SpringApplication(Application.class);
app.setLazyInitialization(true);
app.run(args);
or
spring.main.lazy-initialization=true
Trim Down Unnecessary Configurations
Spring Boot’s auto-configuration is powerful but often loads unused beans, increasing memory usage and startup time. You can identify unused beans by calling the Actuator endpoint /actuator/conditions and then exclude the corresponding classes from auto-configuration, as shown in the code below.
@SpringBootApplication(exclude = {
    DataSourceAutoConfiguration.class,
    SecurityAutoConfiguration.class
})
Conclusion
Building modern Java applications with powerful tools like GraalVM and out-of-the-box features like virtual threads can make Java a strong choice for serverless environments. By incorporating the changes described above, you can keep using frameworks like Spring Boot to simplify application development without taking a toll on performance. By carefully optimizing your Java applications for AWS Lambda, you can significantly improve their performance.