Kafka Batch with Apicurio Registry secured with Keycloak Source
Provided by: "Apache Software Foundation"
Support Level for this Kamelet is: "Preview"
Receive data in batches from Kafka topics on an insecure broker, using an Apicurio Registry secured with Keycloak for schema resolution. Offsets can be committed automatically, or manually through KafkaManualCommit.
Configuration Options
The following table summarizes the configuration options available for the kafka-batch-apicurio-registry-source Kamelet:
Name | Description | Type | Default | Example
---|---|---|---|---
Apicurio Registry Auth Client ID | Required. The Client ID in the Keycloak instance securing the Apicurio Registry. | string | |
Apicurio Registry Auth Client Secret | Required. The Client Secret in the Keycloak instance securing the Apicurio Registry. | string | |
Apicurio Registry Auth Password | Required. The Password in the Keycloak instance securing the Apicurio Registry. | string | |
Apicurio Registry Auth Realm | Required. The Realm in the Keycloak instance securing the Apicurio Registry. | string | |
Apicurio Registry Auth Service URL | Required. The URL of the Keycloak instance securing the Apicurio Registry. | string | | http://my-keycloak.com:8080/
Apicurio Registry Auth Username | Required. The Username in the Keycloak instance securing the Apicurio Registry. | string | |
Apicurio Registry URL | Required. The Apicurio Schema Registry URL. | string | |
Bootstrap Servers | Required. Comma-separated list of Kafka broker URLs. | string | |
Topic Names | Required. Comma-separated list of Kafka topic names. | string | |
Allow Manual Commit | Whether to allow manual commits. | boolean | false |
Auto Commit Enable | If true, periodically commit the offset of messages already fetched by the consumer. | boolean | true |
Auto Offset Reset | What to do when there is no initial offset. One of: latest, earliest, none. | string | latest |
Avro Datum Provider | How to read data with Avro. | string | io.apicurio.registry.serde.avro.ReflectAvroDatumProvider |
Batch Dimension | The maximum number of records returned in a single call to poll(). | int | 500 |
Consumer Group | A string that uniquely identifies the group of consumers to which this source belongs. | string | | my-group-id
Automatically Deserialize Headers | When enabled, the Kamelet source deserializes all message headers to their String representation. | boolean | true |
Max Poll Interval | The maximum delay, in milliseconds, between invocations of poll() when using consumer group management. | int | |
Poll On Error Behavior | What to do if Kafka throws an exception while polling for new messages. One of: DISCARD, ERROR_HANDLER, RECONNECT, RETRY, STOP. | string | ERROR_HANDLER |
Poll Timeout Interval | The timeout, in milliseconds, used when polling the KafkaConsumer. | int | 5000 |
Topic Is Pattern | Whether the topic is a pattern (regular expression). This can be used to subscribe to a dynamic number of topics matching the pattern. | boolean | false |
Value Deserializer | Deserializer class for the value, implementing the Deserializer interface. | string | io.apicurio.registry.serde.avro.AvroKafkaDeserializer |
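As a sketch, the required options above map onto Kamelet parameters in a plain Camel YAML route as follows. The parameter names are taken from the Pipe examples later in this page; every value is a placeholder, and the registry URL path is an assumption to be replaced with your own:

```yaml
# Sketch only: all values below are placeholders, not working endpoints or credentials.
- route:
    from:
      uri: "kamelet:kafka-batch-apicurio-registry-source"
      parameters:
        bootstrapServers: "my-broker:9092"          # comma-separated broker URLs
        topic: "my-topic"                           # comma-separated topic names
        apicurioRegistryUrl: "http://my-registry:8080/apis/registry/v2"  # path is an assumption
        apicurioAuthServiceUrl: "http://my-keycloak.com:8080/"
        apicurioAuthRealm: "my-realm"
        apicurioAuthClientId: "my-client-id"
        apicurioAuthClientSecret: "my-client-secret"
        apicurioAuthUsername: "my-user"
        apicurioAuthPassword: "my-password"
      steps:
        - to:
            uri: "kamelet:log-sink"
```

The optional batch and commit settings from the table can be added alongside the required parameters in the same parameters block.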
Dependencies
At runtime, the kafka-batch-apicurio-registry-source Kamelet relies upon the presence of the following dependencies:

- mvn:org.apache.camel.kamelets:camel-kamelets-utils:4.6.0-SNAPSHOT
- camel:kafka
- camel:core
- camel:kamelet
- mvn:io.quarkus:quarkus-apicurio-registry-avro:3.6.3
Camel JBang usage
Prerequisites
- You've installed JBang.
- You have executed the following command:

  ```shell
  jbang app install camel@apache/camel
  ```

Supposing you have a file named route.yaml with this content:

```yaml
- route:
    from:
      uri: "kamelet:timer-source"
      parameters:
        period: 10000
        message: 'test'
      steps:
        - to:
            uri: "kamelet:log-sink"
```

You can now run it directly through the following command:

```shell
camel run route.yaml
```
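To run this Kamelet with Camel JBang instead of the timer demo, one approach (a sketch, relying on Camel's property-placeholder syntax and Camel JBang's loading of an application.properties file from the working directory; all property key names below are assumptions) is to keep credentials out of the route file:

```yaml
# route.yaml — placeholder keys are resolved from application.properties in the same directory
- route:
    from:
      uri: "kamelet:kafka-batch-apicurio-registry-source"
      parameters:
        bootstrapServers: "{{kafka.brokers}}"
        topic: "{{kafka.topic}}"
        apicurioRegistryUrl: "{{registry.url}}"
        apicurioAuthServiceUrl: "{{keycloak.url}}"
        apicurioAuthRealm: "{{keycloak.realm}}"
        apicurioAuthClientId: "{{keycloak.client-id}}"
        apicurioAuthClientSecret: "{{keycloak.client-secret}}"
        apicurioAuthUsername: "{{keycloak.user}}"
        apicurioAuthPassword: "{{keycloak.password}}"
      steps:
        - to:
            uri: "kamelet:log-sink"
```

With an application.properties defining those keys, the same `camel run route.yaml` command applies.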
Camel K Environment Usage
This section describes how you can use the kafka-batch-apicurio-registry-source in a Camel K environment.
Knative source
You can use the kafka-batch-apicurio-registry-source Kamelet as a Knative source by binding it to a Knative object.
```yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: kafka-batch-apicurio-registry-source-pipe
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-batch-apicurio-registry-source
    properties:
      apicurioAuthClientId: The Apicurio Registry Auth Client ID
      apicurioAuthClientSecret: The Apicurio Registry Auth Client Secret
      apicurioAuthPassword: The Apicurio Registry Auth Password
      apicurioAuthRealm: The Apicurio Registry Auth Realm
      apicurioAuthServiceUrl: http://my-keycloak.com:8080/
      apicurioAuthUsername: The Apicurio Registry Auth Username
      apicurioRegistryUrl: The Apicurio Registry URL
      bootstrapServers: The Bootstrap Servers
      topic: The Topic Names
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
```
Prerequisite
You have Camel K installed on the cluster.
Procedure for using the cluster CLI

1. Save the kafka-batch-apicurio-registry-source-pipe.yaml file to your local drive, and then edit it as needed for your configuration.
2. Run the source by using the following command:

   ```shell
   kubectl apply -f kafka-batch-apicurio-registry-source-pipe.yaml
   ```
Procedure for using the Kamel CLI
Configure and run the source by using the following command:
```shell
kamel bind channel:mychannel -p "source.apicurioAuthClientId=The Apicurio Registry Auth Client ID" -p "source.apicurioAuthClientSecret=The Apicurio Registry Auth Client Secret" -p "source.apicurioAuthPassword=The Apicurio Registry Auth Password" -p "source.apicurioAuthRealm=The Apicurio Registry Auth Realm" -p "source.apicurioAuthServiceUrl=http://my-keycloak.com:8080/" -p "source.apicurioAuthUsername=The Apicurio Registry Auth Username" -p "source.apicurioRegistryUrl=The Apicurio Registry URL" -p "source.bootstrapServers=The Bootstrap Servers" -p "source.topic=The Topic Names" kafka-batch-apicurio-registry-source
```
This command creates the Kamelet Pipe in the current namespace on the cluster.
Kafka source
You can use the kafka-batch-apicurio-registry-source Kamelet as a Kafka source by binding it to a Kafka topic.
```yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: kafka-batch-apicurio-registry-source-pipe
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-batch-apicurio-registry-source
    properties:
      apicurioAuthClientId: The Apicurio Registry Auth Client ID
      apicurioAuthClientSecret: The Apicurio Registry Auth Client Secret
      apicurioAuthPassword: The Apicurio Registry Auth Password
      apicurioAuthRealm: The Apicurio Registry Auth Realm
      apicurioAuthServiceUrl: http://my-keycloak.com:8080/
      apicurioAuthUsername: The Apicurio Registry Auth Username
      apicurioRegistryUrl: The Apicurio Registry URL
      bootstrapServers: The Bootstrap Servers
      topic: The Topic Names
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```
Prerequisites
- You've installed Strimzi.
- You've created a topic named my-topic in the current namespace.
- You have Camel K installed on the cluster.
Procedure for using the cluster CLI
1. Save the kafka-batch-apicurio-registry-source-pipe.yaml file to your local drive, and then edit it as needed for your configuration.
2. Run the source by using the following command:

   ```shell
   kubectl apply -f kafka-batch-apicurio-registry-source-pipe.yaml
   ```
Procedure for using the Kamel CLI
Configure and run the source by using the following command:
```shell
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic -p "source.apicurioAuthClientId=The Apicurio Registry Auth Client ID" -p "source.apicurioAuthClientSecret=The Apicurio Registry Auth Client Secret" -p "source.apicurioAuthPassword=The Apicurio Registry Auth Password" -p "source.apicurioAuthRealm=The Apicurio Registry Auth Realm" -p "source.apicurioAuthServiceUrl=http://my-keycloak.com:8080/" -p "source.apicurioAuthUsername=The Apicurio Registry Auth Username" -p "source.apicurioRegistryUrl=The Apicurio Registry URL" -p "source.bootstrapServers=The Bootstrap Servers" -p "source.topic=The Topic Names" kafka-batch-apicurio-registry-source
```
This command creates the Kamelet Pipe in the current namespace on the cluster.