OpenAI

Since Camel 4.17

Only producer is supported

The OpenAI component provides integration with OpenAI and OpenAI-compatible APIs for chat completion using the official openai-java SDK.

Maven users will need to add the following dependency to their pom.xml for this component:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-openai</artifactId>
    <version>x.x.x</version>
    <!-- use the same version as your Camel core version -->
</dependency>

URI Format

openai:operation[?options]

Currently, only the chat-completion operation is supported.

Configuring Options

Camel components are configured on two separate levels:

  • component level

  • endpoint level

Configuring Component Options

At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level.

For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth.

Some components have only a few options, while others may have many. Because components typically have sensible pre-configured defaults, you often only need to configure a few options on a component, or none at all.

You can configure components using:

  • the Component DSL

  • a configuration file (application.properties, *.yaml files, etc.)

  • directly in Java code
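For example, a minimal sketch of setting shared options directly in Java code. The OpenAIComponent class name and its setters are assumptions based on standard Camel naming conventions for the options listed below:

// A sketch: look up the component and configure shared options before the routes start.
// OpenAIComponent and its setters are assumed from Camel naming conventions.
OpenAIComponent openai = camelContext.getComponent("openai", OpenAIComponent.class);
openai.setApiKey(System.getenv("OPENAI_API_KEY"));
openai.setBaseUrl("https://api.openai.com/v1");
openai.setModel("gpt-5");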

Configuring Endpoint Options

You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both.

Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java.

A good practice when configuring options is to use Property Placeholders.

Property placeholders provide a few benefits:

  • They help prevent using hardcoded URLs, port numbers, sensitive information, and other settings.

  • They allow externalizing the configuration from the code.

  • They help the code to become more flexible and reusable.
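For example, a sketch of placeholders in an endpoint URI; the openai.model and openai.api.key keys are hypothetical entries in your configuration file:

// application.properties (hypothetical):
//   openai.model = gpt-5
//   openai.api.key = sk-...
from("direct:chat")
    .to("openai:chat-completion?model={{openai.model}}&apiKey={{openai.api.key}}");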

The following two sections list all the options, first for the component and then for the endpoint.

Component Options

The OpenAI component supports 5 options, which are listed below.

  • apiKey (producer): Default API key for all endpoints. Type: String.

  • baseUrl (producer): Default base URL for all endpoints. Type: String.

  • lazyStartProducer (producer): Whether the producer should be started lazily (on the first message). Starting lazily allows the CamelContext and routes to start up even when a producer would otherwise fail during startup and prevent the route from starting; the startup failure is instead handled during message routing by Camel’s routing error handlers. Beware that processing the first message also creates and starts the producer, which may prolong that message’s total processing time. Default: false. Type: boolean.

  • model (producer): Default model for all endpoints. Type: String.

  • autowiredEnabled (advanced): Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up in the registry whether there is a single instance of the matching type, which is then configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: boolean.

Endpoint Options

The OpenAI endpoint is configured using URI syntax:

openai:operation

With the following path and query parameters:

Path Parameters (1 parameter)

  • operation (producer): Required. The operation to perform (currently only chat-completion is supported). Type: String.

Query Parameters (16 parameters)

  • apiKey (producer): OpenAI API key. Can also be set via the OPENAI_API_KEY environment variable. Type: String.

  • baseUrl (producer): Base URL for the OpenAI API. Defaults to OpenAI’s official endpoint. Can be used for local or third-party providers. Type: String.

  • conversationHistoryProperty (producer): Exchange property name for storing conversation history. Default: CamelOpenAIConversationHistory. Type: String.

  • conversationMemory (producer): Enable conversation memory per Exchange. Default: false. Type: boolean.

  • developerMessage (producer): Developer message to prepend before user messages. Type: String.

  • jsonSchema (producer): JSON schema for structured output validation. Type: String.

  • maxTokens (producer): Maximum number of tokens to generate. Type: Integer.

  • model (producer): The model to use for chat completion. Default: gpt-5. Type: String.

  • outputClass (producer): Fully qualified class name for structured output using response format. Type: String.

  • storeFullResponse (producer): Store the full response in the exchange property CamelOpenAIResponse in non-streaming mode. Default: false. Type: boolean.

  • streaming (producer): Enable streaming responses. Default: false. Type: boolean.

  • systemMessage (producer): System message to prepend. When set and conversationMemory is enabled, the conversation history is reset. Type: String.

  • temperature (producer): Temperature for response generation (0.0 to 2.0). Default: 1.0. Type: Double.

  • topP (producer): Top P for response generation (0.0 to 1.0). Type: Double.

  • userMessage (producer): Default user message text to use when no prompt is provided. Type: String.

  • lazyStartProducer (producer (advanced)): Whether the producer should be started lazily (on the first message). Starting lazily allows the CamelContext and routes to start up even when a producer would otherwise fail during startup and prevent the route from starting; the startup failure is instead handled during message routing by Camel’s routing error handlers. Beware that processing the first message also creates and starts the producer, which may prolong that message’s total processing time. Default: false. Type: boolean.

Usage

Authentication

Set baseUrl to your provider’s endpoint (default: https://api.openai.com/v1).

API key resolution order:

  • Endpoint apiKey

  • Component apiKey

  • Environment variable OPENAI_API_KEY

  • System property openai.api.key

The API key can be omitted if using OpenAI-compatible providers that don’t require authentication (e.g., some local LLM servers).
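For example, a minimal sketch contrasting the first and third entries in the resolution order (the openai.api.key placeholder is hypothetical):

// Relies on the OPENAI_API_KEY environment variable; no key appears in the route.
from("direct:env-auth")
    .to("openai:chat-completion");

// An endpoint-level key takes precedence over the component key, the environment
// variable, and the system property.
from("direct:endpoint-auth")
    .to("openai:chat-completion?apiKey={{openai.api.key}}");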

Basic Chat Completion with String Input

Java:

from("direct:chat")
    .setBody(constant("What is Apache Camel?"))
    .to("openai:chat-completion")
    .log("Response: ${body}");

YAML:

- route:
    from:
      uri: direct:chat
      steps:
        - to:
            uri: openai:chat-completion
            parameters:
              userMessage: What is Apache Camel?
        - log: "Response: ${body}"

File-Backed Prompt with Text File

Usage example:
from("file:prompts?noop=true")
    .to("openai:chat-completion")
    .log("Response: ${body}");

Image File Input with Vision Model

Usage example:
from("file:images?noop=true")
    .to("openai:chat-completion?model=gpt-4.1-mini?userMessage=Describe what you see in this image")
    .log("Response: ${body}");

When using image files, the userMessage is required. Supported image formats are detected by MIME type (e.g., image/png, image/jpeg, image/gif, image/webp).

Streaming Response

When streaming=true, the component returns an Iterator<ChatCompletionChunk> in the message body. You can consume this iterator using Camel’s streaming EIPs or process it directly:

Usage example:
- route:
    from:
      uri: timer
      parameters:
        timerName: timer
        repeatCount: 1
      steps:
        - to:
            uri: openai:chat-completion
            parameters:
              userMessage: In one sentence, what is Apache Camel?
              streaming: true
        - split:
            simple:
              expression: ${body}
            streaming: true
            steps:
              - marshal:
                  json:
                    library: Jackson
              - log:
                  message: ${body}
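A Java equivalent of the route above, consuming the chunk iterator with a streaming split (JsonLibrary is org.apache.camel.model.dataformat.JsonLibrary, provided by camel-jackson):

from("timer:stream?repeatCount=1")
    .to("openai:chat-completion?userMessage=In one sentence, what is Apache Camel?&streaming=true")
    // The body is an Iterator<ChatCompletionChunk>; split consumes it lazily.
    .split(body()).streaming()
        .marshal().json(JsonLibrary.Jackson)
        .log("${body}")
    .end();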

Structured Output with outputClass

Usage example:
public class Person {
    public String name;
    public int age;
    public String occupation;
}

from("direct:structured")
    .setBody(constant("Generate a person profile for a software engineer"))
    .to("openai:chat-completion?baseUrl=https://api.openai.com/v1&outputClass=com.example.Person")
    .log("Structured response: ${body}");

Structured Output with JSON Schema

The jsonSchema option (or, as in this example, the CamelOpenAIJsonSchema header) instructs the model to return JSON that conforms to the provided schema. The response will be valid JSON but is not automatically validated against the schema:

Usage example:
from("direct:json-schema")
    .setBody(constant("Create a product description"))
    .setHeader("CamelOpenAIJsonSchema", constant("{\"type\":\"object\",\"properties\":{\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"number\"}}}"))
    .to("openai:chat-completion")
    .log("JSON response: ${body}");

You can also load the schema from a resource file:

Usage example:
from("direct:json-schema-resource")
    .setBody(constant("Create a product description"))
    .to("openai:chat-completion?jsonSchema=resource:classpath:schemas/product.schema.json")
    .log("JSON response: ${body}");

For full schema validation, integrate with the camel-json-validator component after receiving the response.
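For example, a sketch that validates the returned JSON with camel-json-validator against the same schema file:

from("direct:validated")
    .setBody(constant("Create a product description"))
    .to("openai:chat-completion?jsonSchema=resource:classpath:schemas/product.schema.json")
    // Validate the response against the schema; an invalid document throws an exception.
    .to("json-validator:schemas/product.schema.json")
    .log("Valid JSON response: ${body}");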

Conversation Memory (Per Exchange)

Usage example:
from("direct:conversation")
    .setBody(constant("My name is Alice"))
    .to("openai:chat-completion?conversationMemory=true")
    .log("First response: ${body}")
    .setBody(constant("What is my name?"))
    .to("openai:chat-completion?conversationMemory=true")
    .log("Second response: ${body}"); // Will remember "Alice"

Using Third-Party or Local OpenAI-Compatible Endpoint

Usage example:
from("direct:local")
    .setBody(constant("Hello from local LLM"))
    .to("openai:chat-completion?baseUrl=http://localhost:1234/v1&model=local-model")
    .log("${body}");

Input Handling

The component accepts the following types of input in the message body:

  1. String: The prompt text is taken directly from the body

  2. File: Used for file-based prompts. The component handles two types of files:

    • Text files (MIME type starting with text/): The file content is read and used as the prompt. If the userMessage endpoint option or the CamelOpenAIUserMessage header is set, it overrides the file content

    • Image files (MIME type starting with image/): The file is encoded as a base64 data URL and sent to vision-capable models. The userMessage is required when using image files

When using File input, the component uses Files.probeContentType() to detect the file type. Ensure your system has proper MIME type detection configured.
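For example, a sketch that supplies the required user message per exchange via the CamelOpenAIUserMessage header instead of the endpoint option:

from("file:images?noop=true")
    // The header provides (or overrides) the user message for this exchange.
    .setHeader("CamelOpenAIUserMessage", constant("Describe what you see in this image"))
    .to("openai:chat-completion?model=gpt-4.1-mini")
    .log("Response: ${body}");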

Output Handling

Default Mode

The full model response is returned as a String in the message body.

Streaming Mode

When streaming=true, the message body contains an Iterator<ChatCompletionChunk> suitable for Camel streaming EIPs (such as split() with streaming()).

IMPORTANT:

  • Resource cleanup is handled automatically when the Exchange completes (success or failure).

  • Conversation memory is not automatically updated for streaming responses (only for non-streaming responses).

Structured Outputs

Using outputClass

The model is instructed to return JSON matching the specified class, but the response body remains a String.

Using jsonSchema

As with the jsonSchema usage example above, the model is instructed to return JSON conforming to the provided schema; the response will be valid JSON but is not automatically validated against it. For full schema validation, integrate with the camel-json-validator component after receiving the response.

The JSON schema must be a valid JSON object. Invalid schema strings will result in an IllegalArgumentException.

Conversation Memory

When conversationMemory=true, the component maintains conversation history in the CamelOpenAIConversationHistory exchange property (configurable via conversationHistoryProperty option). This history is scoped to a single Exchange and allows multi-turn conversations within a route.

IMPORTANT:

  • Conversation history is automatically updated with each assistant response for non-streaming responses only.

  • The history is stored as a List<ChatCompletionMessageParam> in the Exchange property.

  • The history persists across multiple calls to the endpoint within the same Exchange.

  • You can manually set the CamelOpenAIConversationHistory exchange property to provide custom conversation context.

Example of manual conversation history:

Usage example:
List<ChatCompletionMessageParam> history = new ArrayList<>();
history.add(ChatCompletionMessageParam.ofUser(/* ... */));
history.add(ChatCompletionMessageParam.ofAssistant(/* ... */));

from("direct:with-history")
    .setBody(constant("Continue the conversation"))
    .setProperty("CamelOpenAIConversationHistory", constant(history))
    .to("openai:chat-completion?conversationMemory=true")
    .log("${body}");

Compatibility

This component works with any OpenAI API-compatible endpoint by setting the baseUrl parameter. This includes:

  • OpenAI official API (https://api.openai.com/v1)

  • Azure OpenAI (may require additional configuration)

  • Local LLM servers (e.g., Ollama, LM Studio, LocalAI)

  • Third-party OpenAI-compatible providers

When using local or third-party providers, ensure they support the chat completions API endpoint format. Some providers may have different authentication requirements or API variations.

Error Handling

The component may throw the following exceptions:

  • IllegalArgumentException:

    • When an invalid operation is specified (only chat-completion is supported)

    • When the message body or user message is missing

    • When an image file is provided without a userMessage

    • When an unsupported file type is provided (only text and image files are supported)

    • When an invalid JSON schema string is provided

  • API-specific exceptions from the OpenAI SDK for network errors, authentication failures, rate limiting, etc.
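These can be handled with Camel’s standard error handling; a minimal sketch:

onException(IllegalArgumentException.class)
    .handled(true)
    .log("Invalid request: ${exception.message}");

from("direct:chat")
    .setBody(constant("What is Apache Camel?"))
    .to("openai:chat-completion")
    .log("Response: ${body}");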