Managing AI-Powered Java App With API Management
Explore how to integrate OpenAI's ChatGPT APIs with a Spring Boot application and manage the APIs using Apache APISIX, an open-source API gateway.
In this article, we will explore how to integrate OpenAI's ChatGPT APIs with a Spring Boot application and manage the APIs using Apache APISIX, an open-source API gateway. This integration will allow us to leverage the power of ChatGPT in our Spring Boot application, while APISIX will provide a robust, scalable, and secure way to manage the APIs.
OpenAI ChatGPT APIs
OpenAI's ChatGPT API is a powerful tool that we can use to integrate the capabilities of the ChatGPT model into our own applications or services. The API allows us to send a series of messages and receive an AI-generated message in response via REST. OpenAI offers a set of APIs to create text responses for a chatbot, complete code, generate images, or answer questions in a conversational interface. In this tutorial, we will use the chat completion API to generate responses to a prompt (we can ask it basically anything). Before starting the tutorial, you can explore the API to understand how to authenticate using API keys and what the request parameters and response look like.
A sample cURL request to the chat completion API looks like this. Replace OPENAI_API_KEY with your own API key and pass it in the Authorization header when calling the API.
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "What is Apache APISIX?"}]
  }'
Here's a sample JSON response:
{
  "id": "chatcmpl-7PtycrYOTJGv4jw8FQPD7LCCw0tOE",
  "object": "chat.completion",
  "created": 1686407730,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 104,
    "total_tokens": 119
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Apache APISIX is a dynamic, real-time, high-performance API gateway designed to facilitate the management and routing of microservices and APIs. It provides features such as load balancing, rate limiting, authentication, authorization, and traffic control, all of which help to simplify the management of microservices and APIs. Apache APISIX is built on top of the Nginx server and can support high levels of traffic with low latency and high availability. It is open source and released under the Apache 2.0 license."
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
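As an aside, the assistant's reply lives at choices[0].message.content in this response. The client library used later in the tutorial extracts it for us, but a naive, stdlib-only sketch of pulling it out could look like the following (illustrative only — it assumes the content string contains no escaped quotes; use a real JSON parser in practice):

```java
public class ChatResponseParser {
    // Naively extracts the "content" field of the first message in a chat
    // completion response. Illustrative only: breaks on escaped quotes.
    static String firstMessageContent(String json) {
        int key = json.indexOf("\"content\":");
        if (key < 0) return "No response";
        int start = json.indexOf('"', key + "\"content\":".length()) + 1;
        int end = json.indexOf('"', start);
        return json.substring(start, end);
    }

    public static void main(String[] args) {
        String sample = "{\"choices\":[{\"message\":{\"role\":\"assistant\","
                + "\"content\":\"Apache APISIX is an API gateway.\"}}]}";
        System.out.println(firstMessageContent(sample));
        // prints: Apache APISIX is an API gateway.
    }
}
```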
Project Code Example
The tutorial consists of two parts. The first part covers setting up the Spring Boot application and creating a new API endpoint that calls the chat completion API programmatically. In the second part, we introduce APISIX features such as security and traffic control to the Spring Boot API. The full code examples for this tutorial are available in the GitHub repository apisix-java-chatgpt-openaiapi.
Prerequisites
Before we start, make sure you have the following:
- An OpenAI API key: To access the OpenAI API, you need to create an API key. You can do this by logging into the OpenAI website and navigating to the API key management page.
- Docker installed on your machine to run APISIX and the Spring Boot app.
Step 1: Setting up Your Spring Boot Application
First, we need to set up a new Spring Boot application. You can use Spring Initializr to generate a new Maven project with the necessary dependencies. For this tutorial, we need the Spring Boot Starter Web dependency. To integrate the ChatGPT API, we will use the OpenAI Java client, an open-source community library that provides service classes for calling OpenAI's GPT APIs from Java. Of course, you can also write your own Spring implementation that interacts with the OpenAI APIs. See the other client libraries for different programming languages.
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>com.theokanning.openai-gpt3-java</groupId>
    <artifactId>service</artifactId>
    <version>0.12.0</version>
  </dependency>
</dependencies>
Step 2: Create a Controller Class
In the ChatCompletionController.java class, you can use the OpenAI service to send a request to the ChatGPT API.
import com.theokanning.openai.completion.chat.ChatCompletionChoice;
import com.theokanning.openai.completion.chat.ChatCompletionRequest;
import com.theokanning.openai.completion.chat.ChatMessage;
import com.theokanning.openai.completion.chat.ChatMessageRole;
import com.theokanning.openai.service.OpenAiService;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.List;

@RestController
public class ChatCompletionController {

    @Value("${openai.model}")
    private String model;

    @Value("${openai.api.key}")
    private String openaiApiKey;

    @PostMapping("/ai-chat")
    public String chat(@RequestBody String prompt) {
        OpenAiService service = new OpenAiService(openaiApiKey);
        final List<ChatMessage> messages = new ArrayList<>();
        final ChatMessage userMessage = new ChatMessage(ChatMessageRole.USER.value(), prompt);
        messages.add(userMessage);
        ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest.builder()
                .model(model)
                .messages(messages)
                .maxTokens(250)
                .build();
        List<ChatCompletionChoice> choices = service.createChatCompletion(chatCompletionRequest)
                .getChoices();
        if (choices == null || choices.isEmpty()) {
            return "No response";
        }
        return choices.get(0).getMessage().getContent();
    }
}
The /ai-chat endpoint handles POST requests, creates a chat completion request, and sends it to the OpenAI API. It then returns the content of the first message from the API response.
Step 3: Define Application Properties
Next, we provide the properties for the API, such as the model and API key, in the application.properties file:
openai.model=gpt-3.5-turbo
openai.api.key=YOUR_OPENAI_API_TOKEN
Step 4: Run the Spring Boot App
We can now run Application.java and test it with Postman or a cURL command.
As we can see, the application generates a response to the question we send in the prompt request body.
Step 5: Create a Dockerfile
We use a Docker container to wrap our Spring Boot application and run it together with the APISIX containers via docker-compose.yml. To do so, we create a Dockerfile that builds a JAR and executes it (see the Dockerize Spring Boot app tutorial). Then, we register the service in the docker-compose.yml file:
openaiapi:
  build: openaiapi
  ports:
    - "8080:8080"
  networks:
    apisix:
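The Dockerfile itself is covered in the linked tutorial; as a sketch, a minimal multi-stage Dockerfile for the openaiapi service could look like the following (the image tags and standard Maven layout are assumptions — adapt them to your project):

```dockerfile
# Build stage: compile the Spring Boot app into a fat JAR
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -q package -DskipTests

# Run stage: execute the JAR on a slim JRE image
FROM eclipse-temurin:17-jre
COPY --from=build /app/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```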
Step 6: Setting up Apache APISIX
To set up APISIX, we can simply run the docker compose up command, because we have already defined all the necessary services in docker-compose.yml. This file defines only two containers: one for APISIX and another for the Spring Boot application we implemented in the previous steps. In this sample project, we run APISIX in standalone mode; there are other APISIX installation options and deployment modes as well. Now APISIX runs as a separate service on localhost:9080 and the Spring Boot app on localhost:8080.
Step 7: Securing the API With APISIX
Once APISIX is set up, we can add security features to our existing Spring Boot API /ai-chat so that only allowed API consumers can access it. APISIX provides several plugins to secure your APIs. For example, you can use the jwt-auth plugin to require a JWT token for all requests. Here is an example of how to add a route with an upstream and plugins using the apisix.yml file:
upstreams:
  - id: 1
    type: roundrobin
    nodes:
      "openaiapi:8080": 1
routes:
  - uri: /ask-me-anything
    upstream_id: 1
    plugins:
      proxy-rewrite:
        uri: /ai-chat
      jwt-auth: {}
  - uri: /login
    plugins:
      public-api:
        uri: /apisix/plugin/jwt/sign
consumers:
  - username: appsmithuser
    plugins:
      jwt-auth:
        key: appsmithuser@gmail.com
        secret: my-secret-key
After we specify the upstream, route, and consumer objects and routing rules in the APISIX config, the config file is loaded into memory as soon as the APISIX node starts in Docker. APISIX then periodically checks whether the file content has changed and, if there is an update, reloads it and applies the changes automatically.
With this configuration, we added one upstream, two routes, and one consumer object. For the first route, all requests to /ask-me-anything (a custom URI path; you can define any URI there) must include an Authorization: JWT_TOKEN header. APISIX then rewrites the custom URI path to the actual API path /ai-chat with the help of the proxy-rewrite plugin and forwards the request to the Spring Boot application running on localhost:8080.
If you try to request the APISIX route without a token, it will reject the request with an unauthorized error:
curl -i http://localhost:9080/ask-me-anything -X POST -d '{
  "prompt":"What is Apache APISIX?"
}'
The result of the above request is below:
HTTP/1.1 401 Unauthorized
{"message":"Missing JWT token in request"}
In the second route config, we enabled the public-api plugin to expose a new endpoint, /login, for signing new JWT tokens, because APISIX can act as an identity provider that generates and validates tokens for API consumers or client apps. Step 8 shows how we claim a new token for our API consumer.
- uri: /login
  plugins:
    public-api:
      uri: /apisix/plugin/jwt/sign
As you may have noticed, in the same config file we registered an API consumer for the /ask-me-anything AI-powered API; our users can present their secret to APISIX to generate a JWT token for accessing the API:
consumers:
  - username: appsmithuser
    plugins:
      jwt-auth:
        key: appsmithuser@gmail.com
        secret: my-secret-key
Step 8: Claim a New JWT Token
Now we can claim a new JWT token for our existing API consumer by passing the consumer's key:
curl -i "http://127.0.0.1:9080/login?key=user-key"
We will get the new token as a response from APISIX:
Server: APISIX/3.0.0
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJrZXkiOiJ1c2VyLWtleSIsImV4cCI6MTY4NjU5MjE0NH0.4Kn9c2DBYKthyUx824Ah97-z0Eu2Ul9WGO2WB3IfURA
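The token is a standard JWT: three Base64URL-encoded segments (header, payload, signature) joined by dots. As an aside, you can inspect the claims APISIX signed using only the Java standard library. This sketch decodes the payload without verifying the signature, so it is useful only for debugging:

```java
import java.util.Base64;

public class JwtPeek {
    // Decodes the payload (second segment) of a JWT without verifying
    // its signature -- for inspecting claims only, never for auth.
    static String payload(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    public static void main(String[] args) {
        // The sample token returned by APISIX above
        String token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9."
                + "eyJrZXkiOiJ1c2VyLWtleSIsImV4cCI6MTY4NjU5MjE0NH0."
                + "4Kn9c2DBYKthyUx824Ah97-z0Eu2Ul9WGO2WB3IfURA";
        System.out.println(payload(token));
        // prints: {"key":"user-key","exp":1686592144}
    }
}
```

The key claim matches the consumer key used to sign the token, and exp is the expiry timestamp APISIX set.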
Step 9: Request the API With the JWT Token
Finally, we can send a request to the API /ask-me-anything
with the JWT token in the header that we obtained in the previous step.
curl -i http://localhost:9080/ask-me-anything \
  -X POST \
  -H 'Authorization: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJ1c2VyLWtleSIsImV4cCI6MTY4NjU5Mjk4N30.lhom9db3XMkcVd86ScpM6s4eP1_YzR-tfmXPckszsYo' \
  -d '{
  "prompt":"What is Apache APISIX?"
}'
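The same authenticated call can also be made from Java with the JDK's built-in HttpClient. This is an illustrative sketch, not part of the article's repository; the host, route, and token placeholder come from this tutorial's setup:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class GatewayClient {
    // Builds the POST request for the APISIX-protected route. The token
    // argument is a placeholder; obtain a real JWT from the /login route.
    static HttpRequest buildRequest(String token, String prompt) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9080/ask-me-anything"))
                .header("Authorization", token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"prompt\":\"" + prompt + "\"}"))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildRequest("YOUR_JWT_TOKEN", "What is Apache APISIX?");
        System.out.println(request.method() + " " + request.uri());
        // prints: POST http://localhost:9080/ask-me-anything
        // To actually send it (requires the docker-compose stack to be up):
        // java.net.http.HttpResponse<String> response = java.net.http.HttpClient
        //         .newHttpClient()
        //         .send(request, java.net.http.HttpResponse.BodyHandlers.ofString());
        // System.out.println(response.body());
    }
}
```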
Or, using Postman, we will get an AI response, but this time the response comes through the APISIX gateway:
Conclusion
In this tutorial, we explored how to use the OpenAI ChatGPT API to generate responses to prompts. We created a Spring Boot application that calls the API and managed its endpoint with Apache APISIX. Next, you can introduce additional features to your integration by updating the existing apisix.yml file. You can also check out the branch called with-frontend and run the project to see the UI built using Appsmith that works with APISIX.
Published at DZone with permission of Bobur Umurzokov. See the original article here.