OpenAI
Since Camel 4.17
Only producer is supported
The OpenAI component provides integration with OpenAI and OpenAI-compatible APIs for chat completion using the official openai-java SDK.
Maven users will need to add the following dependency to their pom.xml for this component:
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-openai</artifactId>
    <!-- use the same version as your Camel core version -->
    <version>x.x.x</version>
</dependency>

Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
Configuring Component Options
At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level.
For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth.
Some components have only a few options, while others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all.
You can configure components using:

- the Component DSL
- a configuration file (application.properties, *.yaml files, etc.)
- directly in Java code, as sketched below
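For the Java option, a minimal sketch of component-level configuration might look like the following. The component class name (OpenAIComponent) and setter names are assumptions inferred from the option names listed below, not verified API; check the camel-openai javadoc before relying on them:

// Assumed class and setter names derived from the component options table below
OpenAIComponent openai = camelContext.getComponent("openai", OpenAIComponent.class);
openai.setApiKey("sk-example");                  // hypothetical key; default for all endpoints
openai.setBaseUrl("https://api.openai.com/v1");  // default base URL for all endpoints
openai.setModel("gpt-5");                        // default model for all endpoints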
Configuring Endpoint Options
You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java.
A good practice when configuring options is to use Property Placeholders.
Property placeholders provide a few benefits:

- They help prevent using hardcoded urls, port numbers, sensitive information, and other settings.
- They allow externalizing the configuration from the code.
- They help the code to become more flexible and reusable.
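For example, a placeholder-based endpoint configuration might look like this sketch, assuming the keys openai.api.key and openai.base.url exist in your application.properties:

// application.properties (assumed keys):
//   openai.api.key=sk-example
//   openai.base.url=https://api.openai.com/v1

from("direct:chat")
    .to("openai:chat-completion?apiKey={{openai.api.key}}&baseUrl={{openai.base.url}}");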
The following two sections list all the options, firstly for the component followed by the endpoint.
Component Options
The OpenAI component supports 5 options, which are listed below.
| Name | Description | Default | Type |
|---|---|---|---|
| apiKey | Default API key for all endpoints. | | String |
| baseUrl | Default base URL for all endpoints. | | String |
| lazyStartProducer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
| model | Default model for all endpoints. | | String |
| autowiredEnabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
Endpoint Options
The OpenAI endpoint is configured using URI syntax:
openai:operation
With the following path and query parameters:

Path Parameters (1 parameter)

| Name | Description | Default | Type |
|---|---|---|---|
| operation | Required The operation to perform. Currently only chat-completion is supported. | | String |

Query Parameters (16 parameters)
| Name | Description | Default | Type |
|---|---|---|---|
| apiKey | OpenAI API key. Can also be set via OPENAI_API_KEY environment variable. | | String |
| baseUrl | Base URL for OpenAI API. Defaults to OpenAI’s official endpoint. Can be used for local or third-party providers. | | String |
| conversationHistoryProperty | Exchange property name for storing conversation history. | CamelOpenAIConversationHistory | String |
| conversationMemory | Enable conversation memory per Exchange. | false | boolean |
| developerMessage | Developer message to prepend before user messages. | | String |
| jsonSchema | JSON schema for structured output validation. | | String |
| maxTokens | Maximum number of tokens to generate. | | Integer |
| model | The model to use for chat completion. | gpt-5 | String |
| outputClass | Fully qualified class name for structured output using response format. | | String |
| storeResponse | Store the full response in the exchange property 'CamelOpenAIResponse' in non-streaming mode. | false | boolean |
| streaming | Enable streaming responses. | false | boolean |
| systemMessage | System message to prepend. When set and conversationMemory is enabled, the conversation history is reset. | | String |
| temperature | Temperature for response generation (0.0 to 2.0). | 1.0 | Double |
| topP | Top P for response generation (0.0 to 1.0). | | Double |
| userMessage | Default user message text to use when no prompt is provided. | | String |
| lazyStartProducer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
Usage
Authentication
Set baseUrl to your provider’s endpoint (default: `https://api.openai.com/v1`).
API key resolution order:
1. Endpoint apiKey
2. Component apiKey
3. Environment variable OPENAI_API_KEY
4. System property openai.api.key
NOTE: The API key can be omitted if using OpenAI-compatible providers that don’t require authentication (e.g., some local LLM servers).
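As a quick illustration of the fallback chain, the sketch below relies on the system-property fallback only; the key value is hypothetical:

// No apiKey on the component or endpoint: the component falls back to the
// OPENAI_API_KEY environment variable, then to this system property.
System.setProperty("openai.api.key", "sk-example"); // hypothetical key

from("direct:fallback")
    .setBody(constant("Hello"))
    .to("openai:chat-completion");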
Basic Chat Completion with String Input

Java:

from("direct:chat")
    .setBody(constant("What is Apache Camel?"))
    .to("openai:chat-completion")
    .log("Response: ${body}");

YAML:

- route:
    from:
      uri: direct:chat
      steps:
        - to:
            uri: openai:chat-completion
            parameters:
              userMessage: What is Apache Camel?
        - log: "Response: ${body}"

File-Backed Prompt with Text File
from("file:prompts?noop=true")
.to("openai:chat-completion")
.log("Response: ${body}"); Image File Input with Vision Model
from("file:images?noop=true")
.to("openai:chat-completion?model=gpt-4.1-mini?userMessage=Describe what you see in this image")
.log("Response: ${body}"); | When using image files, the userMessage is required. Supported image formats are detected by MIME type (e.g., |
Streaming Response
When streaming=true, the component returns an Iterator<ChatCompletionChunk> in the message body. You can consume this iterator using Camel’s streaming EIPs or process it directly:
- route:
    from:
      uri: timer
      parameters:
        timerName: timer
        repeatCount: 1
      steps:
        - to:
            uri: openai:chat-completion
            parameters:
              userMessage: In one sentence, what is Apache Camel?
              streaming: true
        - split:
            simple: ${body}
            streaming: true
            steps:
              - marshal:
                  json:
                    library: Jackson
              - log:
                  message: ${body}

Structured Output with outputClass
public class Person {
    public String name;
    public int age;
    public String occupation;
}

from("direct:structured")
    .setBody(constant("Generate a person profile for a software engineer"))
    .to("openai:chat-completion?baseUrl=https://api.openai.com/v1&outputClass=com.example.Person")
    .log("Structured response: ${body}");

Structured Output with JSON Schema
The jsonSchema option instructs the model to return JSON that conforms to the provided schema. The response will be valid JSON but is not automatically validated against the schema:
from("direct:json-schema")
.setBody(constant("Create a product description"))
.setHeader("CamelOpenAIJsonSchema", constant("{\"type\":\"object\",\"properties\":{\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"number\"}}}"))
.to("openai:chat-completion")
.log("JSON response: ${body}"); You can also load the schema from a resource file:
from("direct:json-schema-resource")
.setBody(constant("Create a product description"))
.to("openai:chat-completion?jsonSchema=resource:classpath:schemas/product.schema.json")
.log("JSON response: ${body}"); | For full schema validation, integrate with the |
Conversation Memory (Per Exchange)
from("direct:conversation")
.setBody(constant("My name is Alice"))
.to("openai:chat-completion?conversationMemory=true")
.log("First response: ${body}")
.setBody(constant("What is my name?"))
.to("openai:chat-completion?conversationMemory=true")
.log("Second response: ${body}"); // Will remember "Alice" Input Handling
The component accepts the following types of input in the message body:
- String: The prompt text is taken directly from the body
- File: Used for file-based prompts. The component handles two types of files:
  - Text files (MIME type starting with text/): The file content is read and used as the prompt. If the userMessage endpoint option or CamelOpenAIUserMessage is set, it overrides the file content
  - Image files (MIME type starting with image/): The file is encoded as a base64 data URL and sent to vision-capable models. The userMessage is required when using image files
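For instance, a text-file prompt can be overridden per message. The sketch below assumes CamelOpenAIUserMessage is a message header, as its Camel-prefixed name suggests:

from("file:prompts?noop=true")
    // Assumed header name matching the option above; overrides the file content
    .setHeader("CamelOpenAIUserMessage", constant("Summarize this prompt in one line"))
    .to("openai:chat-completion")
    .log("Response: ${body}");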
Output Handling
Streaming Mode
When streaming=true, the message body contains an Iterator<ChatCompletionChunk> suitable for Camel streaming EIPs (such as split() with streaming()).
IMPORTANT:

- Resource cleanup is handled automatically when the Exchange completes (success or failure)
- Conversation memory is not automatically updated for streaming responses (only for non-streaming responses)
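A minimal Java sketch of consuming the iterator with the Split EIP (the endpoint name and prompt are illustrative):

from("direct:stream")
    .setBody(constant("In one sentence, what is Apache Camel?"))
    .to("openai:chat-completion?streaming=true")
    // Body is an Iterator<ChatCompletionChunk>; split it lazily
    .split(body()).streaming()
        .log("chunk: ${body}")
    .end();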
Structured Outputs
Using outputClass
The model is instructed to return JSON matching the specified class, but the response body remains a String.
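Since the body stays a String, you can parse it yourself when you need the typed object. A sketch assuming camel-jackson is on the classpath (JsonLibrary is org.apache.camel.model.dataformat.JsonLibrary) and reusing the Person class from the earlier example:

from("direct:typed")
    .setBody(constant("Generate a person profile for a software engineer"))
    .to("openai:chat-completion?outputClass=com.example.Person")
    // The response body is a JSON String shaped like Person; unmarshal it explicitly
    .unmarshal().json(JsonLibrary.Jackson, Person.class)
    .log("Parsed person: ${body}");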
Using jsonSchema
The jsonSchema option instructs the model to return JSON conforming to the provided schema. The response will be valid JSON but is not automatically validated against the schema. For full schema validation, integrate with the camel-json-validator component after receiving the response.
The JSON schema must be a valid JSON object. Invalid schema strings will result in an IllegalArgumentException.
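A sketch of post-validation with camel-json-validator, reusing the schema resource from the earlier example (the route name is illustrative):

from("direct:validated")
    .setBody(constant("Create a product description"))
    .to("openai:chat-completion?jsonSchema=resource:classpath:schemas/product.schema.json")
    // Validate the returned JSON against the same schema (requires camel-json-validator)
    .to("json-validator:schemas/product.schema.json")
    .log("Validated JSON: ${body}");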
Conversation Memory
When conversationMemory=true, the component maintains conversation history in the CamelOpenAIConversationHistory exchange property (configurable via conversationHistoryProperty option). This history is scoped to a single Exchange and allows multi-turn conversations within a route.
IMPORTANT:

- Conversation history is automatically updated with each assistant response for non-streaming responses only
- The history is stored as a List<ChatCompletionMessageParam> in the Exchange property
- The history persists across multiple calls to the endpoint within the same Exchange
- You can manually set the CamelOpenAIConversationHistory exchange property to provide custom conversation context
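The property name can also be changed per endpoint via conversationHistoryProperty; a small sketch where the name myChatHistory is arbitrary:

from("direct:custom-history")
    .setBody(constant("My name is Alice"))
    .to("openai:chat-completion?conversationMemory=true&conversationHistoryProperty=myChatHistory")
    // History is now stored under the custom exchange property
    .log("History: ${exchangeProperty.myChatHistory}");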
Example of manual conversation history:
List<ChatCompletionMessageParam> history = new ArrayList<>();
history.add(ChatCompletionMessageParam.ofUser(/* ... */));
history.add(ChatCompletionMessageParam.ofAssistant(/* ... */));

from("direct:with-history")
    .setBody(constant("Continue the conversation"))
    .setProperty("CamelOpenAIConversationHistory", constant(history))
    .to("openai:chat-completion?conversationMemory=true")
    .log("${body}");

Compatibility
This component works with any OpenAI API-compatible endpoint by setting the baseUrl parameter. This includes:
- OpenAI official API (https://api.openai.com/v1)
- Azure OpenAI (may require additional configuration)
- Local LLM servers (e.g., Ollama, LM Studio, LocalAI)
- Third-party OpenAI-compatible providers
NOTE: When using local or third-party providers, ensure they support the chat completions API endpoint format. Some providers may have different authentication requirements or API variations.
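For example, pointing the component at a local Ollama server might look like the sketch below. The URL is Ollama's usual OpenAI-compatible endpoint and the model name is hypothetical; neither is mandated by this component:

// Local Ollama server exposing the OpenAI-compatible API; no API key needed
from("direct:local")
    .setBody(constant("Hello"))
    .to("openai:chat-completion?baseUrl=http://localhost:11434/v1&model=llama3.1");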
Error Handling
The component may throw the following exceptions:
- IllegalArgumentException:
  - When an invalid operation is specified (only chat-completion is supported)
  - When the message body or user message is missing
  - When an image file is provided without userMessage
  - When an unsupported file type is provided (only text and image files are supported)
  - When an invalid JSON schema string is provided
- API-specific exceptions from the OpenAI SDK for network errors, authentication failures, rate limiting, etc.
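These can be handled with Camel's standard error handling; a small sketch (the route and log message are illustrative):

onException(IllegalArgumentException.class)
    .handled(true)
    .log("Invalid OpenAI request: ${exception.message}");

from("direct:guarded")
    // No body and no userMessage configured: the producer will throw
    .to("openai:chat-completion");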