Camel offers several solutions to make your application scalable and to distribute the load between different instances. The choice depends on how Camel is deployed/packaged (standalone mode, embedded in a Java EE or OSGi server, ...) and on how the servers are configured (master/slave, ...).
This approach distributes the load between different endpoints. The from endpoint plays the role of a 'proxy' when, for example, a Jetty server is used as the entry point.
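As an illustration, a front route consuming from a Jetty endpoint can spread incoming HTTP requests across several back-end endpoints. This is a minimal sketch assuming camel-jetty and camel-http are on the classpath; the host names, ports and path are hypothetical:

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch: a Jetty consumer acting as an HTTP 'proxy' that balances
// the load over two back-end services (URLs are example values).
public class ProxyRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        from("jetty:http://0.0.0.0:8080/myService")
            .loadBalance().roundRobin()
                .to("http://backend1:8181/myService")
                .to("http://backend2:8181/myService");
    }
}
```

Other load-balancing policies (random, sticky, failover, ...) can be plugged in the same way instead of roundRobin().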
Clustering can be achieved in different ways: work distribution, competing consumers, ... depending on how you would like to configure your infrastructure (one or several instances running on the same machine, or distributed across a cloud of servers).
Different Camel components can be used depending on whether the work must be done synchronously or asynchronously:
Async: seda:// + threads()
Sync: direct:// + threads()
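The two variants above could be sketched as follows; the endpoint names, bean name and pool sizes are illustrative, and both routes assume a running CamelContext:

```java
import org.apache.camel.builder.RouteBuilder;

public class ScalingRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        // Async: the seda endpoint decouples producer and consumer
        // through an in-memory queue; threads(core, max) then processes
        // the messages concurrently on a thread pool.
        from("seda:asyncOrders")
            .threads(5, 10)
            .to("bean:orderService");

        // Sync: direct invokes the route in the caller's thread, but
        // threads() hands processing over to a thread pool afterwards.
        from("direct:syncOrders")
            .threads(5, 10)
            .to("bean:orderService");
    }
}
```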
This approach is suitable for solutions running in standalone mode or deployed in a web application server (different WAR/EAR archives).
Remark: these components cannot be used between different OSGi servers, since seda and direct endpoints only work inside a single JVM/CamelContext.
When the infrastructure offers the possibility to deploy the application on several machines, you can reuse the approaches presented before. That means you deploy the same application on several nodes, which gives you competing consumers: depending on the load on the servers, a message can be consumed by application 1 running on server 1 instead of application 2 running on server 2.
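For example, the same route deployed unchanged on each node could consume from a shared JMS queue, so that the broker spreads the messages over the instances. This sketch assumes camel-activemq (or camel-jms) is configured; the queue name, bean name and consumer count are assumptions:

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch: deployed on server 1 and server 2, both instances become
// competing consumers on the same 'orders' queue of the broker.
public class CompetingConsumerRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:orders?concurrentConsumers=5")
            .to("bean:orderService?method=process");
    }
}
```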
If you prefer to deploy routes on servers with different CPU/memory characteristics, then you can use the following components
Camel cannot only be plugged into the NMR bus, but can also use the clustering functionality offered by ServiceMix 4.
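With the camel-nmr component shipped with ServiceMix 4, a route can consume exchanges from (or send them to) the NMR bus; a minimal sketch, where the endpoint name and target bean are illustrative:

```java
import org.apache.camel.builder.RouteBuilder;

public class NmrRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        // Expose the route as an endpoint on the ServiceMix NMR bus...
        from("nmr:MyService")
            // ...log the incoming exchanges and process them locally.
            .to("log:nmr-in")
            .to("bean:myService");
    }
}
```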