
This blog post shows how Apache Camel can help integrate multiple systems with an AI model. In particular, the camel-whatsapp component is used to build a chat on WhatsApp, so that a user can easily communicate with an LLM (Large Language Model) via WhatsApp.


The objective is the following: I’d like to have conversations about a specific topic, in this case how to contribute to Apache Camel, with an LLM via WhatsApp. In this context WhatsApp is just an example; Apache Camel offers 300+ components that can be integrated just as easily!

The objective is clear, but what about the implementation? Libraries like LangChain4j and Apache Camel help a lot with this kind of use case. In particular, we will leverage the following features:

  • camel-whatsapp takes care of the integration with the WhatsApp APIs, and thanks to the camel-webhook feature the communication with the WhatsApp APIs is effortless.
  • On the other hand, LangChain4j offers abstractions and toolkits that help developers interact with LLMs.

In this example, the GPT-3.5 Turbo model is used, and the Camel core contributing documentation is ingested as embeddings; this way it is possible to have focused conversations about contributing to Camel.

Set up

This is the hardest part. If you would like to test it yourself, some requirements need to be fulfilled before executing the code, in particular:

  • A WhatsApp Business account is needed; for development purposes this is free. You can follow the documentation in the Camel WhatsApp component
  • An OpenAI API key; the LangChain4j getting started guide contains information on how to generate the API key
  • A webhook needs to be configured in the WhatsApp Business account, so that the WhatsApp API can communicate with the running Apache Camel application
  • If you are testing locally, the running application’s webhook has to be exposed to the internet, for example via ngrok
  • Finally, the sample application can be cloned and run via mvn spring-boot:run
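For local testing, exposing the webhook can be a single ngrok command (assuming the application listens on Spring Boot’s default port 8080):

```shell
# Tunnel the local application to a public URL; configure that URL
# as the webhook endpoint in the WhatsApp Business account settings
ngrok http 8080
```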

Route Definition

Given a chat service that returns a response String for an input String message, let’s focus on the Camel route.
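Concretely, such a chat service boils down to a one-method interface. The names below and the echoing stand-in implementation are illustrative assumptions, not the post’s actual code; LangChain4j will later supply the real LLM-backed implementation:

```java
// A chat service: one String in, one String out.
// LangChain4j can generate an implementation of such an interface backed by an LLM;
// here a trivial echoing stand-in makes the shape of the contract clear.
public class ChatServiceSketch {

    interface ChatService {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        // Stand-in implementation: a real agent would call the LLM instead
        ChatService service = message -> "You said: " + message;
        System.out.println(service.chat("What about camels?"));
    }
}
```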

We would like to achieve the following:

  • A user sends a message to a WhatsApp Business account
  • The WhatsApp API then sends the message to our running application
  • The application invokes the chat service
  • Under the hood, via the LangChain4j abstraction, GPT-3.5 is used to produce a response message
  • The message is sent to the WhatsApp API and, finally, to the user.

This integration can be easily implemented by the following Camel route:

ConversationalAIAgent agent; // [1]


from("webhook:whatsapp:{{}}") // [2]
    // Many events are received by the webhook; here we only keep the ones that contain a message
    .choice().when().jsonpath("$.entry[0].changes[0].value.messages", true)
        // We will use this variable in the type converter to retrieve the recipient phone number
        .setVariable("PhoneNumber", jsonpath("$.entry[0].changes[0].value.contacts[0].wa_id"))
        // The body is used as the input String of the chat service
        .setBody(jsonpath("$.entry[0].changes[0].value.messages[0].text.body")) // [3]
        // Invoke the LLM
        .bean(agent) // [4]
        .convertBodyTo(TextMessageRequest.class) // [5]
        // Reply to the number that started the conversation
        .to("whatsapp:{{}}"); // [6]

[1] ConversationalAIAgent is implemented with LangChain4j. It uses camel-contributing.txt to gain information about the contribution rules, and GPT-3.5 to generate the response.
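For the curious, wiring up such an agent with LangChain4j could look roughly like the sketch below. The file name and model come from this post; the builder and class names reflect the LangChain4j API and may differ between library versions, so treat this as an outline rather than the post’s actual code:

```java
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.loader.FileSystemDocumentLoader;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

public class AgentWiringSketch {

    // One String in, one String out: this is what the route's bean(agent) calls
    interface ConversationalAIAgent {
        String chat(String message);
    }

    static ConversationalAIAgent buildAgent() {
        // GPT-3.5 Turbo as the chat model (API key taken from the environment)
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-3.5-turbo")
                .build();

        // Ingest the contributing guide into an in-memory embedding store
        InMemoryEmbeddingStore<TextSegment> store = new InMemoryEmbeddingStore<>();
        Document guide = FileSystemDocumentLoader.loadDocument("camel-contributing.txt");
        EmbeddingStoreIngestor.ingest(guide, store);

        // Let LangChain4j implement the interface, retrieving relevant
        // passages from the store before each call to the model
        return AiServices.builder(ConversationalAIAgent.class)
                .chatLanguageModel(model)
                .contentRetriever(EmbeddingStoreContentRetriever.from(store))
                .build();
    }
}
```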

[2] from("webhook:whatsapp:{{}}") exposes an HTTP endpoint that is known to the WhatsApp Business account. Every time a user generates an event involving the number associated with the WhatsApp Business account (a message sent, a message read, a message being typed, and so on), the HTTP endpoint is invoked with a JSON payload containing all the information.

{
  "entry": [{
    "changes": [{
      "value": {
        "contacts": [{
          "wa_id": "..."
        }],
        "messages": [{
          "text": {
            "body": "What about camels?"
          }
        }]
      }
    }]
  }]
}

This is an abbreviated example of the JSON sent by the WhatsApp API to our webhook (most fields omitted). As you can see the JSON structure is quite nested, but the Apache Camel jsonpath expression language can be used to retrieve the required data.

[3] We are leveraging the Apache Camel jsonpath capabilities to retrieve the message sent by the user, jsonpath("$.entry[0].changes[0].value.messages[0].text.body"), and the user phone number, jsonpath("$.entry[0].changes[0].value.contacts[0].wa_id"), which we store in a variable for later use.

[4] bean(agent) invokes the LLM with the message sent by the user, which we set into the Camel body in [3]. Once the computation is done, the response message from the LLM is set into the Camel body.

[5] .convertBodyTo(TextMessageRequest.class) is handled by a Camel TypeConverter that takes the String message and the Exchange as input, and creates a TextMessageRequest object that can be serialized and sent to the WhatsApp API.

The Converter is implemented by a simple TypeConverters implementation:

public class CamelWhatsAppTypeConverters implements TypeConverters {

	/**
	 * Create an object that can be serialized to WhatsApp APIs; a variable PhoneNumber is expected,
	 * as well as a body containing the String message.
	 */
	@Converter
	public static TextMessageRequest toTextMessageRequest(String message, Exchange exchange) {
		String phoneNumber = exchange.getVariable("PhoneNumber", String.class);

		TextMessage text = new TextMessage();
		text.setBody(message);

		TextMessageRequest responseMessage = new TextMessageRequest();
		responseMessage.setTo(phoneNumber);
		responseMessage.setText(text);

		return responseMessage;
	}
}

In this example, we handle only TextMessageRequest objects, but camel-whatsapp provides classes for multiple types of messages like media, location and template. A nice improvement to the example would be to have a MediaMessageRequest implementation that handles images generated by the agent.
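As a sketch of that improvement, a media converter could mirror the text one. Note that MediaMessageRequest below is a stand-in class defined purely for illustration; the real camel-whatsapp model classes (and their exact setter names) should be used instead:

```java
import java.util.Objects;

// Stand-in model class for illustration only; camel-whatsapp ships its own
// message-request classes, which should be used in a real converter.
public class MediaConverterSketch {

    static class MediaMessageRequest {
        String to;
        String link;
        void setTo(String to) { this.to = to; }
        void setLink(String link) { this.link = link; }
    }

    // Mirrors the text converter: take the media URL produced by the agent plus
    // the stored phone number, and build a request object ready for serialization.
    static MediaMessageRequest toMediaMessageRequest(String mediaUrl, String phoneNumber) {
        MediaMessageRequest request = new MediaMessageRequest();
        request.setTo(Objects.requireNonNull(phoneNumber));
        request.setLink(mediaUrl);
        return request;
    }

    public static void main(String[] args) {
        MediaMessageRequest req = toMediaMessageRequest("https://example.com/camel.png", "123456789");
        System.out.println(req.to + " <- " + req.link);
    }
}
```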

[6] And finally, with .to("whatsapp:{{}}") we reply to the phone number that initiated the conversation.


This blog post showed how Apache Camel can be leveraged to integrate LLMs with bulletproof enterprise integration patterns: just a few lines of code implement a fairly complex use case.

Demo