LangChain4j Spring Boot Integration
Since Camel 4.18
This guide explains how to integrate LangChain4j Spring Boot starters with Apache Camel Spring Boot applications.
Overview
LangChain4j provides Spring Boot starters that offer auto-configuration for various AI/LLM providers. When using Camel’s langchain4j components in a Spring Boot application, you can leverage these starters to simplify configuration and reduce boilerplate code.
Benefits
- Auto-configuration: Automatic bean creation and configuration based on properties
- Type-safe configuration: Configuration properties with IDE auto-completion support
- Simplified dependency management: Single starter dependency per provider
- Production-ready: Built-in health checks and metrics (when using Spring Boot Actuator)
- Consistent configuration: Unified configuration approach across different LLM providers
Available LangChain4j Spring Boot Starters
LangChain4j provides Spring Boot starters for various AI/LLM providers:
Chat Models
- langchain4j-open-ai-spring-boot-starter - OpenAI (GPT-3.5, GPT-4, etc.)
- langchain4j-azure-open-ai-spring-boot-starter - Azure OpenAI
- langchain4j-google-ai-gemini-spring-boot-starter - Google Gemini
- langchain4j-ollama-spring-boot-starter - Ollama (local models)
- langchain4j-anthropic-spring-boot-starter - Anthropic Claude
- langchain4j-mistral-ai-spring-boot-starter - Mistral AI
- langchain4j-hugging-face-spring-boot-starter - Hugging Face
- langchain4j-vertex-ai-spring-boot-starter - Google Vertex AI
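Since all the LangChain4j starters in this guide share one version, it is convenient to declare it once as a Maven property and reference it from each dependency (a minimal sketch; adjust the version to your setup):
<properties>
    <!-- Single source of truth for the LangChain4j starter version -->
    <langchain4j.version>1.10.0</langchain4j.version>
</properties>
Each starter dependency can then use <version>${langchain4j.version}</version>.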
Getting Started
Step 1: Add Dependencies
Add the Camel Spring Boot starter and the LangChain4j component you need:
<dependencies>
    <!-- Camel Spring Boot -->
    <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-spring-boot-starter</artifactId>
    </dependency>
    <!-- Camel LangChain4j Component -->
    <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-langchain4j-chat-starter</artifactId>
    </dependency>
    <!-- LangChain4j Spring Boot Starter for OpenAI -->
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
        <version>1.10.0</version>
    </dependency>
</dependencies>
Step 2: Configure Properties
Configure the LangChain4j provider in application.properties or application.yml:
# OpenAI Chat Model Configuration
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4o
langchain4j.open-ai.chat-model.temperature=0.7
langchain4j.open-ai.chat-model.max-tokens=1000
# OpenAI Embedding Model Configuration
langchain4j.open-ai.embedding-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.embedding-model.model-name=text-embedding-ada-002
Or, equivalently, in application.yml:
langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY}
      model-name: gpt-4o
      temperature: 0.7
      max-tokens: 1000
    embedding-model:
      api-key: ${OPENAI_API_KEY}
      model-name: text-embedding-ada-002
Step 3: Use in Camel Routes
The auto-configured beans are automatically available in your Camel routes:
@Component
public class MyRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Chat endpoint using the auto-configured ChatLanguageModel
        from("direct:chat")
            .to("langchain4j-chat:openai?chatModel=#chatLanguageModel");

        // Embeddings endpoint using the auto-configured EmbeddingModel
        from("direct:embeddings")
            .to("langchain4j-embeddings:openai?embeddingModel=#embeddingModel");
    }
}
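To call such a route from application code, you can inject the ProducerTemplate that Camel Spring Boot auto-configures (a minimal sketch; the ChatService name is illustrative):

import org.apache.camel.ProducerTemplate;
import org.springframework.stereotype.Service;

@Service
public class ChatService {

    private final ProducerTemplate producerTemplate;

    public ChatService(ProducerTemplate producerTemplate) {
        this.producerTemplate = producerTemplate;
    }

    public String ask(String question) {
        // Sends the prompt to the direct:chat route and blocks until the LLM replies
        return producerTemplate.requestBody("direct:chat", question, String.class);
    }
}

Complete Examples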
Example 1: OpenAI Chat Integration
<dependencies>
    <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-langchain4j-chat-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
        <version>1.10.0</version>
    </dependency>
</dependencies>
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4o
langchain4j.open-ai.chat-model.temperature=0.7

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class ChatRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("direct:chat")
            .log("Sending message to OpenAI: ${body}")
            .to("langchain4j-chat:openai?chatModel=#chatLanguageModel")
            .log("Received response: ${body}");
    }
}
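The openai segment in langchain4j-chat:openai is simply the chat ID for this endpoint, while #chatLanguageModel resolves the auto-configured model bean from the Spring application context.

Example 2: Azure OpenAI with RAG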
<dependencies>
    <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-langchain4j-chat-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-langchain4j-embeddings-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-azure-open-ai-spring-boot-starter</artifactId>
        <version>1.10.0</version>
    </dependency>
</dependencies>
langchain4j:
  azure-open-ai:
    chat-model:
      endpoint: ${AZURE_OPENAI_ENDPOINT}
      api-key: ${AZURE_OPENAI_API_KEY}
      deployment-name: gpt-4
    embedding-model:
      endpoint: ${AZURE_OPENAI_ENDPOINT}
      api-key: ${AZURE_OPENAI_API_KEY}
      deployment-name: text-embedding-ada-002

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.langchain4j.chat.LangChain4jRagAggregatorStrategy;
import org.springframework.stereotype.Component;

@Component
public class RagRoute extends RouteBuilder {

    @Override
    public void configure() {
        LangChain4jRagAggregatorStrategy ragStrategy = new LangChain4jRagAggregatorStrategy();

        from("direct:rag-chat")
            .log("Processing RAG query: ${body}")
            .enrich("direct:retrieve-context", ragStrategy)
            .to("langchain4j-chat:azure?chatModel=#chatLanguageModel")
            .log("RAG response: ${body}");

        from("direct:retrieve-context")
            .to("langchain4j-embeddings:azure?embeddingModel=#embeddingModel")
            .to("langchain4j-embeddingstore:search?embeddingStore=#embeddingStore")
            .log("Retrieved context: ${body}");
    }
}
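Here the enrich EIP calls the retrieval route first, and the LangChain4jRagAggregatorStrategy merges the retrieved context into the exchange, so the chat model receives the original question augmented with that context.

Example 3: Ollama (Local LLM)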
<dependencies>
    <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.camel.springboot</groupId>
        <artifactId>camel-langchain4j-chat-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-ollama-spring-boot-starter</artifactId>
        <version>1.10.0</version>
    </dependency>
</dependencies>
langchain4j.ollama.chat-model.base-url=http://localhost:11434
langchain4j.ollama.chat-model.model-name=llama2
langchain4j.ollama.chat-model.temperature=0.8
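A route for the local model looks just like the OpenAI one (a minimal sketch; the ollamaChatModel bean name matches the reference used in the multi-provider example below):

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class OllamaChatRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Chats with the locally running Ollama instance
        from("direct:local-chat")
            .to("langchain4j-chat:ollama?chatModel=#ollamaChatModel");
    }
}

Configuration Properties Reference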
OpenAI Configuration
# Chat Model
langchain4j.open-ai.chat-model.api-key=
langchain4j.open-ai.chat-model.model-name=gpt-4o
langchain4j.open-ai.chat-model.temperature=0.7
langchain4j.open-ai.chat-model.max-tokens=
langchain4j.open-ai.chat-model.timeout=60s
langchain4j.open-ai.chat-model.max-retries=3
# Embedding Model
langchain4j.open-ai.embedding-model.api-key=
langchain4j.open-ai.embedding-model.model-name=text-embedding-ada-002
Azure OpenAI Configuration
# Chat Model
langchain4j.azure-open-ai.chat-model.endpoint=
langchain4j.azure-open-ai.chat-model.api-key=
langchain4j.azure-open-ai.chat-model.deployment-name=
langchain4j.azure-open-ai.chat-model.temperature=0.7
# Embedding Model
langchain4j.azure-open-ai.embedding-model.endpoint=
langchain4j.azure-open-ai.embedding-model.api-key=
langchain4j.azure-open-ai.embedding-model.deployment-name=
Ollama Configuration
# Chat Model
langchain4j.ollama.chat-model.base-url=http://localhost:11434
langchain4j.ollama.chat-model.model-name=llama2
langchain4j.ollama.chat-model.temperature=0.8
langchain4j.ollama.chat-model.timeout=60s
# Embedding Model
langchain4j.ollama.embedding-model.base-url=http://localhost:11434
langchain4j.ollama.embedding-model.model-name=nomic-embed-text
Advanced Configuration
Using Multiple LLM Providers
You can configure multiple LLM providers in the same application by using different bean names:
langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY}
      model-name: gpt-4o
  ollama:
    chat-model:
      base-url: http://localhost:11434
      model-name: llama2

@Component
public class MultiProviderRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Use OpenAI for production
        from("direct:production-chat")
            .to("langchain4j-chat:openai?chatModel=#chatLanguageModel");

        // Use Ollama for development/testing
        from("direct:dev-chat")
            .to("langchain4j-chat:ollama?chatModel=#ollamaChatModel");
    }
}
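The #beanName syntax in the endpoint URI looks the model up in the Spring application context, so each route can target a different provider simply by referencing a different bean.

Custom Bean Configuration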
You can customize the auto-configured beans or create additional beans:
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

@Configuration
public class CustomLangChain4jConfig {

    @Bean
    @ConditionalOnProperty(name = "custom.llm.enabled", havingValue = "true")
    public ChatLanguageModel customChatModel() {
        return OpenAiChatModel.builder()
                .apiKey(System.getenv("CUSTOM_API_KEY"))
                .modelName("gpt-4o-mini")
                .temperature(0.5)
                .logRequests(true)
                .logResponses(true)
                .build();
    }
}
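Spring names the bean after the factory method, so a route can select this model explicitly by referencing chatModel=#customChatModel in the endpoint URI.

Environment-Specific Configuration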
Use Spring profiles for environment-specific configuration:
application-dev.yml:
langchain4j:
  ollama:
    chat-model:
      base-url: http://localhost:11434
      model-name: llama2

application-prod.yml:
langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY}
      model-name: gpt-4o
      max-retries: 5
      timeout: 120s
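Activate the matching profile at startup, for example with spring.profiles.active=dev in application.properties or the SPRING_PROFILES_ACTIVE environment variable.

Best Practices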
1. Secure API Keys
Never hardcode API keys in your code or configuration files. Use environment variables or external configuration:
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
Or use Spring Cloud Config, Vault, or other secret management solutions.
2. Configure Timeouts and Retries
Set appropriate timeouts and retry policies for production:
langchain4j.open-ai.chat-model.timeout=60s
langchain4j.open-ai.chat-model.max-retries=3
langchain4j.open-ai.chat-model.log-requests=false
langchain4j.open-ai.chat-model.log-responses=false
3. Use Streaming for Long Responses
For long-running conversations, consider using streaming chat models:
import java.util.concurrent.CompletableFuture;

import org.apache.camel.builder.RouteBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.output.Response;

@Component
public class StreamingChatRoute extends RouteBuilder {

    @Autowired
    private StreamingChatLanguageModel streamingChatModel;

    @Override
    public void configure() {
        from("direct:streaming-chat")
            .process(exchange -> {
                String prompt = exchange.getIn().getBody(String.class);
                // The handler runs asynchronously, so a future hands the
                // final answer back to the Camel processor
                CompletableFuture<String> answer = new CompletableFuture<>();
                streamingChatModel.generate(prompt, new StreamingResponseHandler<AiMessage>() {
                    @Override
                    public void onNext(String token) {
                        // Handle individual tokens as they stream in
                    }

                    @Override
                    public void onComplete(Response<AiMessage> response) {
                        answer.complete(response.content().text());
                    }

                    @Override
                    public void onError(Throwable error) {
                        answer.completeExceptionally(error);
                    }
                });
                exchange.getIn().setBody(answer.join());
            });
    }
}
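The OpenAI starter configures the streaming model from its own property group, for example (assuming it mirrors the blocking chat-model properties shown earlier):
langchain4j.open-ai.streaming-chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.streaming-chat-model.model-name=gpt-4o

4. Monitor and Log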
Enable logging for debugging during development:
# Development
langchain4j.open-ai.chat-model.log-requests=true
langchain4j.open-ai.chat-model.log-responses=true
# Production (disable for security and performance)
langchain4j.open-ai.chat-model.log-requests=false
langchain4j.open-ai.chat-model.log-responses=false
Troubleshooting
Common Issues
Bean Not Found
If you encounter "No qualifying bean" errors, ensure:
- The LangChain4j Spring Boot starter is in your dependencies
- The configuration properties are correctly set
- The bean name matches what you’re referencing in your routes (the quick check below can help verify this)
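To see which model beans are actually present, a short startup dump can help (a minimal sketch; the BeanDump name is illustrative):

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

@Component
public class BeanDump implements CommandLineRunner {

    private final ApplicationContext context;

    public BeanDump(ApplicationContext context) {
        this.context = context;
    }

    @Override
    public void run(String... args) {
        // Print every bean whose name mentions "model" to verify what was auto-configured
        for (String name : context.getBeanDefinitionNames()) {
            if (name.toLowerCase().contains("model")) {
                System.out.println(name);
            }
        }
    }
}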