OpenAI

Since Camel 4.17

Only producer is supported

The OpenAI component provides integration with OpenAI and OpenAI-compatible APIs for chat completion and text embeddings using the official openai-java SDK.

Maven users will need to add the following dependency to their pom.xml for this component:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-openai</artifactId>
    <version>x.x.x</version>
    <!-- use the same version as your Camel core version -->
</dependency>

URI Format

openai:operation[?options]

Supported operations:

  • chat-completion - Generate chat completions using language models

  • embeddings - Generate vector embeddings from text for semantic search and RAG applications

  • tool-execution - Execute MCP tool calls from a stored chat completion response (used in manual tool loops)

Configuring Options

Camel components are configured on two separate levels:

  • component level

  • endpoint level

Configuring Component Options

At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level.

For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth.

Some components have only a few options, while others may have many. Because components typically come with commonly used pre-configured defaults, you often need to configure only a few options on a component, or none at all.

You can configure components using:

  • the Component DSL.

  • in a configuration file (application.properties, *.yaml files, etc).

  • directly in the Java code.

Configuring Endpoint Options

You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both.

Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java.

A good practice when configuring options is to use Property Placeholders.

Property placeholders provide a few benefits:

  • They help prevent using hardcoded urls, port numbers, sensitive information, and other settings.

  • They allow externalizing the configuration from the code.

  • They help the code to become more flexible and reusable.
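For example, a model name can be externalized to a properties file (openai.model is an illustrative property name, not one defined by the component):

```properties
# application.properties
openai.model = gpt-4.1-mini
```

The route can then reference it as openai:chat-completion?model={{openai.model}}, so switching models requires no code change.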

The following two sections list all the options, firstly for the component followed by the endpoint.

Component Options

The OpenAI component supports 6 options, which are listed below.

Name Description Default Type

apiKey (producer)

Default API key for all endpoints.

String

baseUrl (producer)

Default base URL for all endpoints.

https://api.openai.com/v1

String

embeddingModel (producer)

Default model for embeddings endpoints.

String

lazyStartProducer (producer)

Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.

false

boolean

model (producer)

Default model for chat completion endpoints.

String

autowiredEnabled (advanced)

Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.

true

boolean

Endpoint Options

The OpenAI endpoint is configured using URI syntax:

openai:operation

With the following path and query parameters:

Path Parameters (1 parameter)

Name Description Default Type

operation (producer)

Required The operation to perform: 'chat-completion', 'embeddings', or 'tool-execution'.

String

Query Parameters (26 parameters)

Name Description Default Type

additionalBodyProperty (producer)

Additional JSON properties to include in the request body (e.g. additionalBodyProperty.traceId=123). This is a multi-value option with prefix: additionalBodyProperty.

Map

apiKey (producer)

OpenAI API key. Can also be set via OPENAI_API_KEY environment variable.

String

autoToolExecution (producer)

When true and MCP servers are configured, automatically execute tool calls and loop back to the model. When false, tool calls are returned as the message body for manual handling.

true

boolean

baseUrl (producer)

Base URL for OpenAI API. Defaults to OpenAI’s official endpoint. Can be used for local or third-party providers.

https://api.openai.com/v1

String

conversationHistoryProperty (producer)

Exchange property name for storing conversation history.

CamelOpenAIConversationHistory

String

conversationMemory (producer)

Enable conversation memory per Exchange.

false

boolean

developerMessage (producer)

Developer message to prepend before user messages.

String

dimensions (producer)

Number of dimensions for the embedding output. Only supported by text-embedding-3 models. Reducing dimensions can lower costs and improve performance without significant quality loss.

Integer

embeddingModel (producer)

The model to use for embeddings.

String

encodingFormat (producer)

The format for embedding output: 'float' for list of floats, 'base64' for compressed format.

Enum values:

  • float

  • base64

base64

String

jsonSchema (producer)

JSON schema for structured output validation.

String

maxTokens (producer)

Maximum number of tokens to generate.

Integer

maxToolIterations (producer)

Maximum number of tool call loop iterations to prevent infinite loops.

50

int

mcpProtocolVersions (producer)

Comma-separated list of MCP protocol versions to advertise when connecting to MCP servers using Streamable HTTP transport. When not set, the SDK default is used. Example: 2024-11-05,2025-03-26,2025-06-18.

String

mcpReconnect (producer)

Automatically reconnect to MCP servers when a tool call fails due to a transport error, and retry the call once.

true

boolean

mcpServer (producer)

MCP (Model Context Protocol) server configurations. Define servers using prefix notation: mcpServer.<name>.transportType=stdio|sse|streamableHttp, mcpServer.<name>.command=<command> (stdio), mcpServer.<name>.args=<args> (stdio), mcpServer.<name>.url=<url> (sse/streamableHttp). This is a multi-value option with prefix: mcpServer.

Map

mcpTimeout (producer)

Timeout in seconds for MCP tool call requests. Applies to all MCP operations including tool execution and initialization.

20

int

model (producer)

The model to use for chat completion.

String

outputClass (producer)

Fully qualified class name for structured output using response format.

String

storeFullResponse (producer)

Store the full response in the exchange property 'CamelOpenAIResponse' in non-streaming mode.

false

boolean

streaming (producer)

Enable streaming responses.

false

boolean

systemMessage (producer)

System message to prepend. When set and conversationMemory is enabled, the conversation history is reset.

String

temperature (producer)

Temperature for response generation (0.0 to 2.0).

Double

topP (producer)

Top P for response generation (0.0 to 1.0).

Double

userMessage (producer)

Default user message text to use when no prompt is provided.

String

lazyStartProducer (producer (advanced))

Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.

false

boolean

Message Headers

The OpenAI component supports 28 message headers, which are listed below:

Name Description Default Type

CamelOpenAIUserMessage (producer)

Constant: USER_MESSAGE

The user message to send to the OpenAI chat completion API.

String

CamelOpenAISystemMessage (producer)

Constant: SYSTEM_MESSAGE

The system message to provide context and instructions to the model.

String

CamelOpenAIDeveloperMessage (producer)

Constant: DEVELOPER_MESSAGE

The developer message to provide additional instructions to the model.

String

CamelOpenAIModel (producer)

Constant: MODEL

The model to use for chat completion.

String

CamelOpenAITemperature (producer)

Constant: TEMPERATURE

Controls randomness in the response. Higher values (e.g., 0.8) make output more random, lower values (e.g., 0.2) make it more deterministic.

Double

CamelOpenAITopP (producer)

Constant: TOP_P

An alternative to temperature for controlling randomness. Uses nucleus sampling where the model considers tokens with top_p probability mass.

Double

CamelOpenAIMaxTokens (producer)

Constant: MAX_TOKENS

The maximum number of tokens to generate in the completion.

Integer

CamelOpenAIStreaming (producer)

Constant: STREAMING

Whether to stream the response back incrementally.

Boolean

CamelOpenAIOutputClass (producer)

Constant: OUTPUT_CLASS

The Java class to use for structured output parsing.

Class

CamelOpenAIJsonSchema (producer)

Constant: JSON_SCHEMA

The JSON schema to use for structured output validation.

String

CamelOpenAIResponseModel (producer)

Constant: RESPONSE_MODEL

The model used for the completion response.

String

CamelOpenAIResponseId (producer)

Constant: RESPONSE_ID

The unique identifier for the completion response.

String

CamelOpenAIFinishReason (producer)

Constant: FINISH_REASON

The reason the completion finished (e.g., stop, length, content_filter).

String

CamelOpenAIPromptTokens (producer)

Constant: PROMPT_TOKENS

The number of tokens used in the prompt.

Integer

CamelOpenAICompletionTokens (producer)

Constant: COMPLETION_TOKENS

The number of tokens used in the completion.

Integer

CamelOpenAITotalTokens (producer)

Constant: TOTAL_TOKENS

The total number of tokens used (prompt + completion).

Integer

CamelOpenAIToolIterations (producer)

Constant: TOOL_ITERATIONS

Number of tool call iterations performed in the agentic loop.

Integer

CamelOpenAIMcpToolCalls (producer)

Constant: MCP_TOOL_CALLS

List of tool names called during the agentic loop.

List

CamelOpenAIMcpReturnDirect (producer)

Constant: MCP_RETURN_DIRECT

Whether the response came directly from a tool with returnDirect=true, rather than from the LLM.

Boolean

CamelOpenAIResponse (producer)

Constant: RESPONSE

The complete OpenAI response object.

ChatCompletion

CamelOpenAIEmbeddingModel (producer)

Constant: EMBEDDING_MODEL

The model to use for embeddings.

String

CamelOpenAIEmbeddingDimensions (producer)

Constant: EMBEDDING_DIMENSIONS

Number of output dimensions.

Integer

CamelOpenAIEmbeddingResponseModel (producer)

Constant: EMBEDDING_RESPONSE_MODEL

The embedding model used in the response.

String

CamelOpenAIEmbeddingCount (producer)

Constant: EMBEDDING_COUNT

Number of embeddings returned.

Integer

CamelOpenAIEmbeddingVectorSize (producer)

Constant: EMBEDDING_VECTOR_SIZE

Vector dimensions of the embeddings.

Integer

CamelOpenAIReferenceEmbedding (producer)

Constant: REFERENCE_EMBEDDING

Reference embedding vector for similarity comparison.

List

CamelOpenAISimilarityScore (producer)

Constant: SIMILARITY_SCORE

Calculated cosine similarity score (0.0 to 1.0).

Double

CamelOpenAIOriginalText (producer)

Constant: ORIGINAL_TEXT

Original text content when embeddings operation is used.

String or List

Usage

Authentication

Set baseUrl to your provider’s endpoint (default: https://api.openai.com/v1).

API key resolution order:

  • Endpoint apiKey

  • Component apiKey

  • Environment variable OPENAI_API_KEY

  • System property openai.api.key

The API key can be omitted if using OpenAI-compatible providers that don’t require authentication (e.g., some local LLM servers).

Basic Chat Completion with String Input

  • Java

  • YAML

from("direct:chat")
    .setBody(constant("What is Apache Camel?"))
    .to("openai:chat-completion")
    .log("Response: ${body}");
- route:
    from:
      uri: direct:chat
      steps:
        - to:
            uri: openai:chat-completion
            parameters:
              userMessage: What is Apache Camel?
        - log: "Response: ${body}"

File-Backed Prompt with Text File

Usage example:
from("file:prompts?noop=true")
    .to("openai:chat-completion")
    .log("Response: ${body}");

Image File Input with Vision Model

Usage example:
from("file:images?noop=true")
    .to("openai:chat-completion?model=gpt-4.1-mini&userMessage=Describe what you see in this image")
    .log("Response: ${body}");

When using image files, the userMessage is required. Supported image formats are detected by MIME type (e.g., image/png, image/jpeg, image/gif, image/webp).

Streaming Response

When streaming=true, the component returns an Iterator<ChatCompletionChunk> in the message body. You can consume this iterator using Camel’s streaming EIPs or process it directly:

Usage example:
- route:
    from:
      uri: timer
      parameters:
        repeatCount: 1
        timerName: timer
      steps:
        - to:
            uri: openai:chat-completion
            parameters:
              userMessage: In one sentence, what is Apache Camel?
              streaming: true
        - split:
            simple:
              expression: ${body}
            streaming: true
            steps:
              - marshal:
                  json:
                    library: Jackson
              - log:
                  message: ${body}
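The same flow can be sketched in the Java DSL (an equivalent of the YAML route above; the split EIP in streaming mode consumes one ChatCompletionChunk per iteration):

```java
from("timer:stream?repeatCount=1")
    .to("openai:chat-completion?userMessage=In one sentence describe Apache Camel&streaming=true")
    // Streaming split: each iteration receives a single ChatCompletionChunk
    .split(body()).streaming()
        .marshal().json()
        .log("${body}")
    .end();
```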

Structured Output with outputClass

Usage example:
public class Person {
    public String name;
    public int age;
    public String occupation;
}

from("direct:structured")
    .setBody(constant("Generate a person profile for a software engineer"))
    .to("openai:chat-completion?baseUrl=https://api.openai.com/v1&outputClass=com.example.Person")
    .log("Structured response: ${body}");

Structured Output with JSON Schema

The jsonSchema option instructs the model to return JSON that conforms to the provided schema. The response will be valid JSON but is not automatically validated against the schema:

Usage example:
from("direct:json-schema")
    .setBody(constant("Create a product description"))
    .setHeader("CamelOpenAIJsonSchema", constant("{\"type\":\"object\",\"properties\":{\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"number\"}}}"))
    .to("openai:chat-completion")
    .log("JSON response: ${body}");

You can also load the schema from a resource file:

Usage example:
from("direct:json-schema-resource")
    .setBody(constant("Create a product description"))
    .to("openai:chat-completion?jsonSchema=resource:classpath:schemas/product.schema.json")
    .log("JSON response: ${body}");

For full schema validation, integrate with the camel-json-validator component after receiving the response.
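One way to add that validation step is to route the response through camel-json-validator after the call (a sketch; schemas/product.schema.json is an assumed classpath resource reused for both generation and validation):

```java
from("direct:validated")
    .setBody(constant("Create a product description"))
    .to("openai:chat-completion?jsonSchema=resource:classpath:schemas/product.schema.json")
    // camel-json-validator throws an exception if the body violates the schema
    .to("json-validator:schemas/product.schema.json")
    .log("Validated JSON: ${body}");
```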

Conversation Memory (Per Exchange)

Usage example:
from("direct:conversation")
    .setBody(constant("My name is Alice"))
    .to("openai:chat-completion?conversationMemory=true")
    .log("First response: ${body}")
    .setBody(constant("What is my name?"))
    .to("openai:chat-completion?conversationMemory=true")
    .log("Second response: ${body}"); // Will remember "Alice"

Using Third-Party or Local OpenAI-Compatible Endpoint

Usage example:
from("direct:local")
    .setBody(constant("Hello from local LLM"))
    .to("openai:chat-completion?baseUrl=http://localhost:1234/v1&model=local-model")
    .log("${body}");

Input Handling

The component accepts the following types of input in the message body:

  1. String: The prompt text is taken directly from the body

  2. File: Used for file-based prompts. The component handles two types of files:

    • Text files (MIME type starting with text/): The file content is read and used as the prompt. If userMessage endpoint option or CamelOpenAIUserMessage is set, it overrides the file content

    • Image files (MIME type starting with image/): The file is encoded as a base64 data URL and sent to vision-capable models. The userMessage is required when using image files

When using File input, the component uses Files.probeContentType() to detect the file type. Ensure your system has proper MIME type detection configured.
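For illustration (this is not the component’s internal code), the encoding step for image files can be sketched: the detected MIME type and the file bytes are combined into a data URL of the form data:<mime>;base64,<payload>:

```java
import java.util.Base64;

public class DataUrlSketch {
    // Builds the data-URL form that vision-capable models accept.
    // In practice the MIME type would come from Files.probeContentType().
    static String toDataUrl(String mimeType, byte[] content) {
        return "data:" + mimeType + ";base64,"
                + Base64.getEncoder().encodeToString(content);
    }

    public static void main(String[] args) {
        // PNG magic bytes stand in for real image content here
        byte[] png = {(byte) 0x89, 'P', 'N', 'G'};
        System.out.println(toDataUrl("image/png", png)); // data:image/png;base64,iVBORw==
    }
}
```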

Output Handling

Default Mode

The full model response is returned as a String in the message body.

Streaming Mode

When streaming=true, the message body contains an Iterator<ChatCompletionChunk> suitable for Camel streaming EIPs (such as split() with streaming()).

IMPORTANT:

  • Conversation memory is not automatically updated for streaming responses (only for non-streaming responses)

Structured Outputs

Using outputClass

The model is instructed to return JSON matching the specified class, but the response body remains a String.

Using jsonSchema

The jsonSchema option instructs the model to return JSON conforming to the provided schema. The response will be valid JSON but is not automatically validated against the schema. For full schema validation, integrate with the camel-json-validator component after receiving the response.

The JSON schema must be a valid JSON object. Invalid schema strings will result in an IllegalArgumentException.

Conversation Memory

When conversationMemory=true, the component maintains conversation history in the CamelOpenAIConversationHistory exchange property (configurable via conversationHistoryProperty option). This history is scoped to a single Exchange and allows multi-turn conversations within a route.

IMPORTANT:

  • Conversation history is automatically updated with each assistant response for non-streaming responses only

  • The history is stored as a List<ChatCompletionMessageParam> in the Exchange property

  • The history persists across multiple calls to the endpoint within the same Exchange

  • You can manually set the CamelOpenAIConversationHistory exchange property to provide custom conversation context

Example of manual conversation history:

Usage example:
List<ChatCompletionMessageParam> history = new ArrayList<>();
history.add(ChatCompletionMessageParam.ofUser(/* ... */));
history.add(ChatCompletionMessageParam.ofAssistant(/* ... */));

from("direct:with-history")
    .setBody(constant("Continue the conversation"))
    .setProperty("CamelOpenAIConversationHistory", constant(history))
    .to("openai:chat-completion?conversationMemory=true")
    .log("${body}");

Compatibility

This component works with any OpenAI API-compatible endpoint by setting the baseUrl parameter. This includes:

  • OpenAI official API (https://api.openai.com/v1)

  • Local LLM servers (e.g., Ollama, LM Studio, LocalAI)

  • Third-party OpenAI-compatible providers

When using local or third-party providers, ensure they support the chat completions and/or embeddings API endpoint format. Some providers may have different authentication requirements or API variations.

Embedding Models by Provider

Provider Recommended Model Dimensions

OpenAI

text-embedding-3-small

1536 (reducible to 256, 512, 1024)

OpenAI

text-embedding-3-large

3072 (reducible)

Ollama

nomic-embed-text

768

Ollama

mxbai-embed-large

1024

Mistral

mistral-embed

1024

Example using Ollama for local embeddings:
- to:
    uri: openai:embeddings
    parameters:
      baseUrl: http://localhost:11434/v1
      embeddingModel: nomic-embed-text

Embeddings Operation

The embeddings operation generates vector embeddings from text, which can be used for semantic search, similarity comparison, and RAG (Retrieval-Augmented Generation) applications.

Basic Embedding

  • Java

  • YAML

from("direct:embed")
    .setBody(constant("What is Apache Camel?"))
    .to("openai:embeddings?embeddingModel=nomic-embed-text")
- route:
    from:
      uri: direct:embed
      steps:
        - to:
            uri: openai:embeddings
            parameters:
              embeddingModel: nomic-embed-text

The response body is the embedding vector data:

  • Single input: List<Float> (a single embedding vector)

  • Batch input: List<List<Float>> (one embedding vector per input string)

Additional metadata (model, token usage, vector size, count) is exposed via headers (see OpenAIConstants).

Batch Embedding

You can embed multiple texts in a single request by passing a List<String>:

from("direct:batch-embed")
    .setBody(constant(List.of("First text", "Second text", "Third text")))
    .to("openai:embeddings?embeddingModel=nomic-embed-text")
    .log("Generated ${header.CamelOpenAIEmbeddingCount} embeddings");

Direct Vector Database Integration

For single-input requests, the component returns a raw List<Float> embedding vector, enabling direct chaining to vector database components.

# Index documents in PostgreSQL with pgvector
- route:
    from:
      uri: direct:index
      steps:
        - setVariable:
            name: text
            simple: "${body}"
        - to:
            uri: openai:embeddings
            parameters:
              embeddingModel: nomic-embed-text
        - setVariable:
            name: embedding
            simple: "${body.toString()}"
        - to:
            uri: sql:INSERT INTO documents (content, embedding) VALUES (:#text, :#embedding::vector)
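The ${body.toString()} step above works because List<Float>.toString() yields a bracketed, comma-separated form that pgvector accepts as a vector literal. A quick standalone check of that formatting assumption:

```java
import java.util.List;

public class PgvectorLiteral {
    public static void main(String[] args) {
        List<Float> embedding = List.of(0.1f, 0.2f, 0.3f);
        // List.toString() produces a form pgvector parses as a vector value
        String literal = embedding.toString();
        System.out.println(literal); // prints [0.1, 0.2, 0.3]
    }
}
```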

Alternative: Dedicated Vector Databases

For specialized vector workloads, you can also use camel-qdrant, camel-weaviate, camel-milvus, or camel-pinecone.

Similarity Calculation

The component can automatically calculate cosine similarity when a reference embedding is provided:

List<Float> referenceEmbedding = /* previously computed embedding */;

from("direct:compare")
    .setBody(constant("New text to compare"))
    .setHeader("CamelOpenAIReferenceEmbedding", constant(referenceEmbedding))
    .to("openai:embeddings?embeddingModel=nomic-embed-text")
    .log("Similarity score: ${header.CamelOpenAISimilarityScore}");

You can also use SimilarityUtils directly for manual calculations:

import org.apache.camel.component.openai.SimilarityUtils;

double similarity = SimilarityUtils.cosineSimilarity(embedding1, embedding2);
double distance = SimilarityUtils.euclideanDistance(embedding1, embedding2);
List<Float> normalized = SimilarityUtils.normalize(embedding);
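If you want to verify a similarity score independently, cosine similarity takes only a few lines of plain Java (a minimal sketch; the component’s own SimilarityUtils may differ in details):

```java
import java.util.List;

public class CosineSketch {
    // Cosine similarity: dot(a, b) / (|a| * |b|).
    // For the non-negative vectors typical of embeddings the result
    // falls in the 0.0 to 1.0 range reported by the component.
    static double cosineSimilarity(List<Float> a, List<Float> b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.size(); i++) {
            dot += (double) a.get(i) * b.get(i);
            normA += (double) a.get(i) * a.get(i);
            normB += (double) b.get(i) * b.get(i);
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        List<Float> v = List.of(1.0f, 2.0f, 3.0f);
        // Identical vectors score (approximately) 1.0
        System.out.println(cosineSimilarity(v, v));
    }
}
```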

Embeddings Output Headers

The following headers are set after an embeddings request:

Header Type Description

CamelOpenAIEmbeddingResponseModel

String

The model used for embedding

CamelOpenAIEmbeddingCount

Integer

Number of embeddings returned

CamelOpenAIEmbeddingVectorSize

Integer

Dimension of each embedding vector

CamelOpenAIPromptTokens

Integer

Tokens used in the input

CamelOpenAITotalTokens

Integer

Total tokens used

CamelOpenAIOriginalText

String/List

Original input text(s)

CamelOpenAISimilarityScore

Double

Cosine similarity (if reference embedding provided)

MCP Tool Calling (Agentic Loop)

The component supports automatic tool calling via the Model Context Protocol (MCP). When MCP servers are configured, the component acts as an MCP client: it lists available tools, converts them to OpenAI function-calling format, and runs an agentic loop — the model requests tool calls, the component executes them via MCP, feeds results back, and repeats until the model produces a final text answer.

MCP Server Configuration

MCP servers are configured inline on the endpoint URI using the mcpServer. prefix pattern. Each server is identified by a name, with sub-properties for transport type, command/URL, and arguments.

Streamable HTTP Transport

from("direct:chat")
    .to("openai:chat-completion?model=gpt-4"
        + "&mcpServer.api.transportType=streamableHttp"
        + "&mcpServer.api.url=http://localhost:9090/mcp");

SSE Transport

from("direct:chat")
    .to("openai:chat-completion?model=gpt-4"
        + "&mcpServer.weather.transportType=sse"
        + "&mcpServer.weather.url=http://localhost:8080");

Stdio Transport

from("direct:chat")
    .to("openai:chat-completion?model=gpt-4"
        + "&mcpServer.fs.transportType=stdio"
        + "&mcpServer.fs.command=npx"
        + "&mcpServer.fs.args=-y,@modelcontextprotocol/server-filesystem,/tmp");

Multiple MCP Servers

Multiple servers can be configured on the same endpoint. Tools from all servers are merged and made available to the model:

  • Java

  • YAML

from("direct:chat")
    .to("openai:chat-completion?model=gpt-4"
        + "&mcpServer.fs.transportType=stdio"
        + "&mcpServer.fs.command=npx"
        + "&mcpServer.fs.args=-y,@modelcontextprotocol/server-filesystem,/tmp"
        + "&mcpServer.weather.transportType=sse"
        + "&mcpServer.weather.url=http://localhost:8080");
- route:
    from:
      uri: direct:chat
      steps:
        - to:
            uri: openai:chat-completion
            parameters:
              model: gpt-4
              mcpServer.fs.transportType: stdio
              mcpServer.fs.command: npx
              mcpServer.fs.args: "-y,@modelcontextprotocol/server-filesystem,/tmp"
              mcpServer.weather.transportType: sse
              mcpServer.weather.url: http://localhost:8080
        - log: "${body}"

Agentic Loop Behavior

When the model responds with tool calls, the component automatically:

  1. Executes each tool call via the corresponding MCP server

  2. Sends the tool results back to the model

  3. Repeats until the model produces a final text response

The maxToolIterations option (default: 50) prevents infinite loops. If exceeded, an IllegalStateException is thrown.
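A lower cap can be set on the endpoint when a workflow should only ever take a few tool steps (a sketch reusing the illustrative MCP server from the examples above):

```yaml
- to:
    uri: openai:chat-completion
    parameters:
      model: gpt-4
      maxToolIterations: 5
      mcpServer.api.transportType: streamableHttp
      mcpServer.api.url: http://localhost:9090/mcp
```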

Set autoToolExecution=false to disable the agentic loop and receive raw tool calls in the message body instead:

from("direct:chat")
    .to("openai:chat-completion?model=gpt-4"
        + "&autoToolExecution=false"
        + "&mcpServer.api.transportType=streamableHttp"
        + "&mcpServer.api.url=http://localhost:9090/mcp")
    .log("Tool calls: ${body}"); // body is the raw tool calls list

Manual Tool Loop with tool-execution Operation

When autoToolExecution=false, you can implement your own tool loop using the openai:tool-execution operation and Camel’s loopDoWhile EIP. This gives you full control to add logging, filtering, retry logic, or custom routing between tool calls — without writing any Java code.

The tool-execution operation:

  • Reads the stored ChatCompletion response (requires storeFullResponse=true on the chat-completion call)

  • Extracts tool calls and executes them via MCP

  • Rebuilds the conversation history with the proper message chain

  • Clears the body for the next chat-completion call

  • Java

  • YAML

from("direct:chat")
    // Save the original prompt for the tool-execution operation
    .setProperty("originalPrompt", body())

    // Initial call: tools are listed but not auto-executed
    .to("openai:chat-completion?autoToolExecution=false"
        + "&conversationMemory=true&storeFullResponse=true"
        + "&mcpServer.api.transportType=streamableHttp"
        + "&mcpServer.api.url=http://localhost:9090/mcp")

    // Loop while the model requests tool calls
    .loopDoWhile(header("CamelOpenAIFinishReason").isEqualTo("tool_calls"))
        // Execute tool calls via MCP
        .to("openai:tool-execution"
            + "?mcpServer.api.transportType=streamableHttp"
            + "&mcpServer.api.url=http://localhost:9090/mcp")
        // Send updated conversation back to the model
        .to("openai:chat-completion?autoToolExecution=false"
            + "&conversationMemory=true&storeFullResponse=true"
            + "&mcpServer.api.transportType=streamableHttp"
            + "&mcpServer.api.url=http://localhost:9090/mcp")
    .end()

    .log("Final answer: ${body}");
- route:
    from:
      uri: direct:chat
      steps:
        - setProperty:
            name: originalPrompt
            simple: "${body}"
        - to:
            uri: openai:chat-completion
            parameters:
              autoToolExecution: false
              conversationMemory: true
              storeFullResponse: true
              mcpServer.api.transportType: streamableHttp
              mcpServer.api.url: http://localhost:9090/mcp
        - loopDoWhile:
            simple: "${header.CamelOpenAIFinishReason} == 'tool_calls'"
            steps:
              - to:
                  uri: openai:tool-execution
                  parameters:
                    mcpServer.api.transportType: streamableHttp
                    mcpServer.api.url: http://localhost:9090/mcp
              - to:
                  uri: openai:chat-completion
                  parameters:
                    autoToolExecution: false
                    conversationMemory: true
                    storeFullResponse: true
                    mcpServer.api.transportType: streamableHttp
                    mcpServer.api.url: http://localhost:9090/mcp
        - log: "Final answer: ${body}"
The tool-execution operation requires the originalPrompt exchange property (set via setProperty before the first call) and the CamelOpenAIResponse exchange property (set by storeFullResponse=true).

returnDirect

MCP tools can declare returnDirect=true in their annotations. When all tools invoked in a single batch carry this flag, the component short-circuits: it returns the tool result directly as the exchange body without sending it back to the model for further processing.

This is useful for tools whose output is the definitive answer (e.g., a database lookup) and does not need LLM interpretation.

The CamelOpenAIMcpReturnDirect header is set to true when this occurs, so downstream processors can distinguish tool-direct responses from LLM-generated ones.

Tool execution errors always bypass returnDirect — errors are sent back to the model for graceful handling.

MCP Tool Call Headers

The following headers are set after the agentic loop completes:

Header Type Description

CamelOpenAIToolIterations

Integer

Number of tool call iterations performed

CamelOpenAIMcpToolCalls

List<String>

Ordered list of tool names called during the loop

CamelOpenAIMcpReturnDirect

Boolean

true if the response came directly from a tool with returnDirect

Conversation Memory with MCP Tools

When conversationMemory=true, the full tool call chain is stored in the conversation history exchange property (CamelOpenAIConversationHistory). This includes:

  • Assistant messages containing tool call requests

  • Tool result messages with execution outputs

  • The final assistant text response

This enables multi-turn agentic conversations where the model can reference previous tool interactions across exchanges.

Multi-Turn Example

  • Java

  • YAML

from("direct:chat")
    .to("openai:chat-completion?conversationMemory=true"
        + "&mcpServer.api.transportType=streamableHttp"
        + "&mcpServer.api.url=http://localhost:9090/mcp")
    .to("mock:response");
- route:
    from:
      uri: direct:chat
      steps:
        - to:
            uri: openai:chat-completion
            parameters:
              conversationMemory: true
              mcpServer.api.transportType: streamableHttp
              mcpServer.api.url: http://localhost:9090/mcp
        - log: "${body}"

With this route, a multi-turn conversation works as follows:

// Turn 1: the model calls the "add" tool and returns the result
Exchange turn1 = template.request("direct:chat", e ->
    e.getIn().setBody("Use the add tool to add 15 and 27"));
// Response: "The result of adding 15 and 27 is 42."

// Turn 2: carry forward the conversation history
List<?> history = turn1.getProperty("CamelOpenAIConversationHistory", List.class);
Exchange turn2 = template.request("direct:chat", e -> {
    e.getIn().setBody("What numbers did you just add?");
    e.setProperty("CamelOpenAIConversationHistory", history);
});
// Response: "I added 15 and 27." — the model remembers the tool interaction

The conversation history includes the full tool call chain from turn 1 (the assistant’s tool call request, the tool result, and the final answer), so in turn 2 the model has complete context of what happened — including which tools were called and what they returned.

If systemMessage is set while conversationMemory is enabled, the stored conversation history is reset. This allows starting a fresh conversation within the same route.

MCP Protocol Version

When using the Streamable HTTP transport, the component advertises MCP protocol versions during initialization. By default, the SDK’s built-in versions are used. If your MCP server does not support the latest protocol version, you can restrict the advertised versions:

from("direct:chat")
    .to("openai:chat-completion?model=gpt-4"
        + "&mcpServer.api.transportType=streamableHttp"
        + "&mcpServer.api.url=http://localhost:9090/mcp"
        + "&mcpProtocolVersions=2024-11-05,2025-03-26,2025-06-18");

MCP Connection Recovery

When mcpReconnect=true (the default), the component automatically recovers from MCP server connection failures. If a tool call fails with a transport error, the component:

  1. Closes the failed connection

  2. Creates a new transport and client using the original server configuration

  3. Re-initializes and re-lists available tools

  4. Retries the tool call once on the new connection

This handles scenarios where an MCP server restarts, a network connection drops, or a stdio subprocess dies. If reconnection fails, the original transport error is propagated.
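The retry-exactly-once strategy described above can be sketched in plain Java. This is not the component's actual code, just an illustration of the pattern: on failure, rebuild the connection, retry one time, and otherwise propagate the original error.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryOnceSketch {

    // On a transport error: reconnect, retry once, and if the retry also
    // fails, rethrow the ORIGINAL error (mirroring the documented behavior).
    static <T> T callWithRecovery(Callable<T> toolCall, Runnable reconnect) throws Exception {
        try {
            return toolCall.call();
        } catch (Exception transportError) {
            try {
                reconnect.run();         // close old transport, create new client, re-list tools
                return toolCall.call();  // single retry on the fresh connection
            } catch (Exception retryFailure) {
                throw transportError;    // propagate the original transport error
            }
        }
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger attempts = new AtomicInteger();
        String result = callWithRecovery(
            () -> {
                if (attempts.incrementAndGet() == 1) {
                    throw new RuntimeException("connection reset"); // first call fails
                }
                return "tool-result";
            },
            () -> System.out.println("reconnecting..."));
        System.out.println(result); // succeeds on the single retry
    }
}
```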

Set mcpReconnect=false to disable automatic recovery:

from("direct:chat")
    .to("openai:chat-completion?model=gpt-4"
        + "&mcpReconnect=false"
        + "&mcpServer.api.transportType=streamableHttp"
        + "&mcpServer.api.url=http://localhost:9090/mcp");

Error Handling in the Agentic Loop

| Scenario | Behavior |
| --- | --- |
| MCP client initialization failure | Route fails to start (RuntimeException during doStart()) |
| Tool execution throws an exception | Error is caught, logged as WARN, and sent as tool result text to the model |
| MCP transport error (mcpReconnect=true) | Automatic reconnection and retry (once). If the retry fails, the error is sent to the model |
| MCP CallToolResult.isError() is true | Error content is sent as tool result text to the model |
| Tool name not found in any server | IllegalStateException is thrown |
| Max iterations exceeded | IllegalStateException is thrown with the tool call log |
| Streaming + MCP tools with autoToolExecution | Falls back to non-streaming (logged as INFO) |

Error Handling

The component may throw the following exceptions:

  • IllegalArgumentException:

    • When an invalid operation is specified (supported: chat-completion, embeddings, tool-execution)

    • When message body or user message is missing

    • When image file is provided without userMessage (chat-completion)

    • When unsupported file type is provided (only text and image files are supported)

    • When invalid JSON schema string is provided

  • API-specific exceptions from the OpenAI SDK for network errors, authentication failures, rate limiting, etc.

Spring Boot Auto-Configuration

When using openai with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:

<dependency>
  <groupId>org.apache.camel.springboot</groupId>
  <artifactId>camel-openai-starter</artifactId>
  <version>x.x.x</version>
  <!-- use the same version as your Camel core version -->
</dependency>

The component supports 7 options, which are listed below.

| Name | Description | Default | Type |
| --- | --- | --- | --- |
| camel.component.openai.api-key | Default API key for all endpoints. | | String |
| camel.component.openai.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
| camel.component.openai.base-url | Default base URL for all endpoints. | https://api.openai.com/v1 | String |
| camel.component.openai.embedding-model | Default model for embeddings endpoints. | | String |
| camel.component.openai.enabled | Whether to enable auto configuration of the openai component. This is enabled by default. | | Boolean |
| camel.component.openai.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
| camel.component.openai.model | Default model for chat completion endpoints. | | String |
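As an example, these options can be set in application.properties. The API key reference and model names below are placeholders; substitute values appropriate for your deployment:

```properties
# Component-level defaults for all openai endpoints
camel.component.openai.api-key=${OPENAI_API_KEY}
camel.component.openai.base-url=https://api.openai.com/v1
camel.component.openai.model=gpt-4o-mini
camel.component.openai.embedding-model=text-embedding-3-small
camel.component.openai.lazy-start-producer=false
```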