LangChain4j Agent

Since Camel 4.14

Only producer is supported

The LangChain4j Agent component provides comprehensive AI agent capabilities by integrating with the LangChain4j library. This component supports advanced AI agent patterns including tool calling, conversation memory, retrieval-augmented generation (RAG), and input/output guardrails.

Features

The LangChain4j Agent component offers the following key features:

  • Tool Integration: Seamless integration with Camel routes via the langchain4j-tools component

  • Conversation Memory: Persistent chat memory for maintaining conversation context

  • RAG Support: Integration with retrieval systems for naive and advanced RAG

  • Guardrails: Input and output validation and transformation

URI format

langchain4j-agent:agentId[?options]

Where agentId is a unique identifier for the agent instance.

Configuring Options

Camel components are configured on two separate levels:

  • component level

  • endpoint level

Configuring Component Options

At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level.

For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth.

Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you often only need to configure a few options on a component, or none at all.

You can configure components using:

  • the Component DSL.

  • a configuration file (application.properties, *.yaml files, etc.), as shown in the example below.

  • Java code directly.
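For example, when running with Camel Main, Spring Boot, or Quarkus, the shared options of this component can be set in a configuration file using the camel.component.langchain4j-agent.* keys listed later on this page (the values here are illustrative only):

# application.properties: shared defaults for all langchain4j-agent endpoints (illustrative values)
camel.component.langchain4j-agent.tags = users,weather
camel.component.langchain4j-agent.input-guardrails = com.example.MyInputGuardrail
camel.component.langchain4j-agent.lazy-start-producer = false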

Configuring Endpoint Options

You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both.

Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java.

A good practice when configuring options is to use Property Placeholders.

Property placeholders provide a few benefits:

  • They help prevent using hardcoded URLs, port numbers, sensitive information, and other settings.

  • They allow externalizing the configuration from the code.

  • They help the code to become more flexible and reusable.
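For instance, the agent tags could be externalized into a property and referenced from the endpoint URI with Camel's {{property}} placeholder syntax. The property name agent.tags below is only an illustration:

# application.properties (illustrative property name)
agent.tags = users,weather

from("direct:chat")
    .to("langchain4j-agent:test?tags={{agent.tags}}")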

The following two sections list all the options, firstly for the component followed by the endpoint.

Component Options

The LangChain4j Agent component supports 9 options, which are listed below.

Name Description Default Type

configuration (producer)

The configuration.

LangChain4jAgentConfiguration

inputGuardrails (producer)

Comma-separated list of input guardrail class names to validate user input before sending to LLM.

String

lazyStartProducer (producer)

Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.

false

boolean

outputGuardrails (producer)

Comma-separated list of output guardrail class names to validate LLM responses.

String

tags (producer)

Tags for discovering and calling Camel route tools.

String

autowiredEnabled (advanced)

Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.

true

boolean

chatMemoryProvider (advanced)

Autowired Chat Memory Provider of type dev.langchain4j.memory.chat.ChatMemoryProvider. Note that for this to work, you need to use a reliable ChatMemoryStore. This provider assumes that a user has multiple sessions; if you only need a single session, use a default memoryId.

ChatMemoryProvider

chatModel (advanced)

Autowired Chat Model of type dev.langchain4j.model.chat.ChatModel.

ChatModel

retrievalAugmentor (advanced)

Autowired Retrieval Augmentor of type dev.langchain4j.rag.RetrievalAugmentor. This enables both naive and advanced RAG.

RetrievalAugmentor

Endpoint Options

The LangChain4j Agent endpoint is configured using URI syntax:

langchain4j-agent:agentId

With the following path and query parameters:

Path Parameters (1 parameter)

Name Description Default Type

agentId (producer)

Required The Agent id.

String

Query Parameters (7 parameters)

Name Description Default Type

inputGuardrails (producer)

Comma-separated list of input guardrail class names to validate user input before sending to LLM.

String

outputGuardrails (producer)

Comma-separated list of output guardrail class names to validate LLM responses.

String

tags (producer)

Tags for discovering and calling Camel route tools.

String

lazyStartProducer (producer (advanced))

Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.

false

boolean

chatMemoryProvider (advanced)

Autowired Chat Memory Provider of type dev.langchain4j.memory.chat.ChatMemoryProvider. Note that for this to work, you need to use a reliable ChatMemoryStore. This provider assumes that a user has multiple sessions; if you only need a single session, use a default memoryId.

ChatMemoryProvider

chatModel (advanced)

Autowired Chat Model of type dev.langchain4j.model.chat.ChatModel.

ChatModel

retrievalAugmentor (advanced)

Autowired Retrieval Augmentor of type dev.langchain4j.rag.RetrievalAugmentor. This enables both naive and advanced RAG.

RetrievalAugmentor

Message Headers

The LangChain4j Agent component supports 2 message headers, which are listed below:

Name Description Default Type

CamelLangChain4jAgentSystemMessage (producer)

Constant: SYSTEM_MESSAGE

The system prompt.

String

CamelLangChain4jAgentMemoryId (producer)

Constant: MEMORY_ID

Memory ID.

Object

Spring Boot Auto-Configuration

When using langchain4j-agent with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:

<dependency>
  <groupId>org.apache.camel.springboot</groupId>
  <artifactId>camel-langchain4j-agent-starter</artifactId>
  <version>x.x.x</version>
  <!-- use the same version as your Camel core version -->
</dependency>

The component supports 10 options, which are listed below.

Name Description Default Type

camel.component.langchain4j-agent.autowired-enabled

Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.

true

Boolean

camel.component.langchain4j-agent.chat-memory-provider

Chat Memory Provider of type dev.langchain4j.memory.chat.ChatMemoryProvider. Note that for this to work, you need to use a reliable ChatMemoryStore. This provider assumes that a user has multiple sessions; if you only need a single session, use a default memoryId. The option is a dev.langchain4j.memory.chat.ChatMemoryProvider type.

ChatMemoryProvider

camel.component.langchain4j-agent.chat-model

Chat Model of type dev.langchain4j.model.chat.ChatModel. The option is a dev.langchain4j.model.chat.ChatModel type.

ChatModel

camel.component.langchain4j-agent.configuration

The configuration. The option is a org.apache.camel.component.langchain4j.agent.LangChain4jAgentConfiguration type.

LangChain4jAgentConfiguration

camel.component.langchain4j-agent.enabled

Whether to enable auto configuration of the langchain4j-agent component. This is enabled by default.

Boolean

camel.component.langchain4j-agent.input-guardrails

Comma-separated list of input guardrail class names to validate user input before sending to LLM.

String

camel.component.langchain4j-agent.lazy-start-producer

Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.

false

Boolean

camel.component.langchain4j-agent.output-guardrails

Comma-separated list of output guardrail class names to validate LLM responses.

String

camel.component.langchain4j-agent.retrieval-augmentor

Retrieval Augmentor of type dev.langchain4j.rag.RetrievalAugmentor. This enables both naive and advanced RAG. The option is a dev.langchain4j.rag.RetrievalAugmentor type.

RetrievalAugmentor

camel.component.langchain4j-agent.tags

Tags for discovering and calling Camel route tools.

String

Usage

Using a specific Chat Model

The Camel LangChain4j Agent component provides an abstraction for interacting with various types of Large Language Models (LLMs) supported by LangChain4j.

Integrating with a specific LLM

To integrate with a specific LLM, users should follow the steps described below, which explain how to integrate with OpenAI.

Add the dependency for LangChain4j OpenAI support:

Example
<dependency>
  <groupId>dev.langchain4j</groupId>
  <artifactId>langchain4j-open-ai</artifactId>
  <version>x.x.x</version>
</dependency>

Initialize the OpenAI Chat Model, and add it to the Camel Registry:

import static dev.langchain4j.model.openai.OpenAiChatModelName.GPT_3_5_TURBO;
import static java.time.Duration.ofSeconds;

import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

ChatModel model = OpenAiChatModel.builder()
        .apiKey(openApiKey)
        .modelName(GPT_3_5_TURBO)
        .temperature(0.3)
        .timeout(ofSeconds(3000))
        .build();
context.getRegistry().bind("chatModel", model);

Use the model in the Camel LangChain4j Agent producer:

 from("direct:chat")
      .to("langchain4j-agent:test?chatModel=#chatModel")

To switch to another Large Language Model and its corresponding dependency, replace the langchain4j-open-ai dependency with the appropriate dependency for the desired model. Update the initialization parameters accordingly in the code snippet provided above.
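For example, to use a local model served by Ollama, you could replace the dependency with dev.langchain4j:langchain4j-ollama and build an OllamaChatModel instead. This is only a sketch; the base URL and model name below are placeholders:

import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

// Assumes the dev.langchain4j:langchain4j-ollama dependency instead of langchain4j-open-ai
ChatModel model = OllamaChatModel.builder()
        .baseUrl("http://localhost:11434") // placeholder: local Ollama server
        .modelName("llama3")               // placeholder: any model pulled into Ollama
        .temperature(0.3)
        .build();
context.getRegistry().bind("chatModel", model);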

Basic Chat with only a userMessage

For simple chat interactions, you can use the producer by setting only the chatModel.

from("direct:chat")
    .to("langchain4j-agent:simple?chatModel=#chatModel")

The body can contain either the prompt as a String or an object of type org.apache.camel.component.langchain4j.agent.AiAgentBody containing the userMessage.

Usage example with a body as String:
var prompt = "What is Apache Camel";

String response = template.requestBody("direct:chat", prompt, String.class);

Usage example with a body as AiAgentBody:
var prompt = "What is Apache Camel";
AiAgentBody body = new AiAgentBody(prompt);

String response = template.requestBody("direct:chat", body, String.class);

Basic Chat with user and system messages

For chat interactions with a system prompt, you can still use the producer by setting only the chatModel.

from("direct:chat")
    .to("langchain4j-agent:simple?chatModel=#chatModel")

The body can contain the user prompt as a String, with the system prompt passed in the CamelLangChain4jAgentSystemMessage header, or it can be an object of type org.apache.camel.component.langchain4j.agent.AiAgentBody containing both the userMessage and the systemMessage.

Usage example with a body as String:
var userPrompt = "Write a short story about a lost cat.";
var systemPrompt = "You are a whimsical storyteller. Your responses should be imaginative, descriptive, and always include a touch of magic. Start every story with 'Once upon a starlit night...";

String response = template.requestBodyAndHeader("direct:chat",
                userPrompt, "CamelLangChain4jAgentSystemMessage", systemPrompt , String.class);
Usage example with a body as AiAgentBody:
var userPrompt = "Write a short story about a lost cat.";
var systemPrompt = "You are a whimsical storyteller. Your responses should be imaginative, descriptive, and always include a touch of magic. Start every story with 'Once upon a starlit night...";

AiAgentBody body = new AiAgentBody()
                .withUserMessage(userPrompt)
                .withSystemMessage(systemPrompt);

String response = template.requestBody("direct:chat", body, String.class);

Chat with Tools

Integrate with Camel routes as tools. This is powerful because, under the hood, the LangChain4j Agent component reuses the LangChain4j AIService to integrate with any Camel routes defined using the Camel LangChain4j Tools component.

// Define tool routes
from("langchain4j-tools:userDb?tags=users&description=Query user database&parameter.userId=string")
    .setBody(constant("{\"name\": \"John Doe\", \"id\": \"123\"}"));

from("langchain4j-tools:weather?tags=weather&description=Get weather information&parameter.city=string")
    .setBody(constant("{\"weather\": \"sunny\", \"temperature\": \"22°C\"}"));

// Agent with tools
from("direct:chat")
    .to("langchain4j-agent:tools?chatModel=#chatModel&tags=users,weather")
Usage example :
var userPrompt = "Can you tell me the name of user 123 and the weather in New York?";
var systemPrompt = "You are a helpful assistant that can access user database and weather information. Use the available tools to provide accurate information.";

String response = template.requestBodyAndHeader("direct:chat",
                userPrompt, "CamelLangChain4jAgentSystemMessage", systemPrompt , String.class);

There’s no need to add the Camel LangChain4j Tools component as a dependency when using tools with the LangChain4j Agent component.

RAG Integration

For the current version of the component, RAG is an advanced feature that requires adding a Retrieval Augmentor of type dev.langchain4j.rag.RetrievalAugmentor. This enables using either naive or advanced RAG. For more information and examples, you can check the LangChain4j RAG documentation page.

from("direct:chat")
    .to("langchain4j-agent:rag?chatModel=#chatModel&retrievalAugmentor=#retrievalAugmentor")
Usage example with Retrieval Augmentor serving as naive RAG :
// creating the retrieval Augmentor
EmbeddingStoreContentRetriever contentRetriever = EmbeddingStoreContentRetriever.builder()
                .embeddingStore(embeddingStore) // the embedding store should be defined
                .embeddingModel(embeddingModel) // the embedding model should be defined
                .maxResults(3)
                .minScore(0.6)
                .build();

RetrievalAugmentor retrievalAugmentor = DefaultRetrievalAugmentor.builder()
                .contentRetriever(contentRetriever)
                // other options or steps can be included for Advanced RAG
                .build();

// bind the retrievalAugmentor in the context
context.getRegistry().bind("retrievalAugmentor", retrievalAugmentor);

// using the producer
String response = template.requestBody("direct:chat", body, String.class);
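The embeddingStore and embeddingModel referenced above are not created by the component. As one possible sketch, assuming the langchain4j-open-ai dependency from the earlier example, an in-memory store can be populated with LangChain4j's EmbeddingStoreIngestor (the model name and document content below are placeholders):

import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.openai.OpenAiEmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

// Illustrative only: an OpenAI embedding model and an in-memory embedding store
EmbeddingModel embeddingModel = OpenAiEmbeddingModel.builder()
        .apiKey(openApiKey)
        .modelName("text-embedding-3-small") // placeholder model name
        .build();

EmbeddingStore<TextSegment> embeddingStore = new InMemoryEmbeddingStore<>();

// Ingest some content so the retriever has something to search
EmbeddingStoreIngestor.builder()
        .embeddingModel(embeddingModel)
        .embeddingStore(embeddingStore)
        .build()
        .ingest(Document.from("Apache Camel is an open source integration framework."));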

Chat with Memory

For the current version of the component, memory is an advanced feature that requires adding a Chat Memory Provider of type dev.langchain4j.memory.chat.ChatMemoryProvider, which supplies a dedicated LangChain4j ChatMemory for each memory ID. For more information and examples, you can check the LangChain4j Chat Memory documentation page.

The component requires using a Chat Memory Provider that uses a persistent memory store.

The memory works for multiple users/sessions. For each context window, the user needs to set the memory ID. There are two ways to set the memory ID:

  • By setting the CamelLangChain4jAgentMemoryId header. This assumes the body is a String.

  • By setting the AiAgentBody.memoryId field. This assumes the body is an AiAgentBody.

If there’s no need to use different sessions, it is recommended to use the same memory ID.

Example of Route:
from("direct:chat")
    .to("langchain4j-agent:memory?chatModel=#chatModel&chatMemoryProvider=#chatMemoryProvider")

Example of usage with AiAgentBody:
// Example of creating a Chat Memory Provider: create a message window memory that keeps the last 10 messages
ChatMemoryProvider chatMemoryProvider = memoryId -> MessageWindowChatMemory.builder()
        .id(memoryId)
        .maxMessages(10)
        .chatMemoryStore(store) // the Chat Memory store is previously created
        .build();

// bind the chat memory provider in the context
context.getRegistry().bind("chatMemoryProvider", chatMemoryProvider);


AiAgentBody request = new AiAgentBody("Hello!", null, "session-123");
String response = template.requestBody("direct:chat", request, String.class);
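When the body is a plain String, the memory ID can instead be supplied with the CamelLangChain4jAgentMemoryId header:

// Same session, with the memory ID passed as a header alongside a String body
String response = template.requestBodyAndHeader("direct:chat",
                "Hello!", "CamelLangChain4jAgentMemoryId", "session-123", String.class);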

Input and Output Guardrails

You can use these advanced features to add input and output guardrails to the AI agent route. For this, create classes implementing InputGuardrail and OutputGuardrail as described on the LangChain4j Guardrails documentation page.

Set the inputGuardrails option to a comma-separated list of input guardrail class names, and the outputGuardrails option to a comma-separated list of output guardrail class names.

from("direct:agent-with-guardrails")
    .to("langchain4j-agent:safe?chatModel=#chatModel" +
        "&inputGuardrails=com.example.MyInputGuardrail" +
        "&outputGuardrails=com.example.MyOutputGuardrail1,com.example.MyOutputGuardrail2")

The current version of the component returns the response as a String. If an output guardrail extends the JsonExtractorOutputGuardrail class, make sure it returns the JSON as a String.