Kafka SSL Sink

Provided by: "Apache Software Foundation"

Support Level for this Kamelet is: "Stable"

Send data to Kafka topics with TLS/SSL support.

The Kamelet recognizes the following headers, if set:

  • key / ce-key: used as the message key

  • partition-key / ce-partitionkey: used as the message partition key

Both headers are optional.
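For example, a route can set the key header explicitly before the message reaches the sink, so that all records with the same key land on the same partition. The sketch below assumes a timer source; the broker address, store paths, passwords, and topic name are hypothetical placeholders:

```yaml
# Sketch: set the Kafka message key via the "key" header before the sink.
# All connection values below are hypothetical placeholders.
- route:
    from:
      uri: "kamelet:timer-source"
      parameters:
        period: 10000
        message: "test"
      steps:
        - setHeader:
            name: key
            constant: "my-record-key"
        - to:
            uri: "kamelet:kafka-ssl-sink"
            parameters:
              bootstrapServers: "broker-1:9093"
              sslKeyPassword: "changeit"
              sslKeystoreLocation: "/etc/kafka/keystore.jks"
              sslKeystorePassword: "changeit"
              sslTruststoreLocation: "/etc/kafka/truststore.jks"
              topic: "my-topic"
```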

Configuration Options

The following table summarizes the configuration options available for the kafka-ssl-sink Kamelet:

| Property | Name | Description | Type | Default |
|----------|------|-------------|------|---------|
| bootstrapServers | Brokers | Required. Comma-separated list of Kafka broker URLs. | string | |
| sslKeyPassword | SSL Key Password | Required. The password of the private key in the key store file. | string | |
| sslKeystoreLocation | SSL Keystore Location | Required. The location of the key store file. This is optional for clients and can be used for two-way client authentication. | string | |
| sslKeystorePassword | SSL Keystore Password | Required. The store password for the key store file. This is optional for clients and only needed if ssl.keystore.location is configured. | string | |
| sslTruststoreLocation | SSL Truststore Location | Required. The location of the trust store file. | string | |
| topic | Topic Names | Required. Comma-separated list of Kafka topic names. | string | |
| saslMechanism | SASL Mechanism | The Simple Authentication and Security Layer (SASL) mechanism used. | string | GSSAPI |
| securityProtocol | Security Protocol | Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported. | string | SSL |
| sslEnabledProtocols | SSL Enabled Protocols | The list of protocols enabled for SSL connections. TLSv1.2, TLSv1.1 and TLSv1 are enabled by default. | string | TLSv1.2,TLSv1.1,TLSv1 |
| sslEndpointAlgorithm | SSL Endpoint Algorithm | The endpoint identification algorithm used to validate the server hostname against the server certificate. Use none or false to disable server hostname verification. | string | https |
| sslProtocol | SSL Protocol | The SSL protocol used to generate the SSLContext. The default, TLS, is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their use is discouraged due to known security vulnerabilities. | string | TLSv1.2 |
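For reference, the required properties might be filled in like this in the sink section of a Pipe (all values below are hypothetical placeholders):

```yaml
# Hypothetical values; replace with your own brokers, store files, and passwords.
sink:
  ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1
    name: kafka-ssl-sink
  properties:
    bootstrapServers: "broker-1:9093,broker-2:9093"
    sslKeyPassword: "changeit"
    sslKeystoreLocation: "/etc/kafka/keystore.jks"
    sslKeystorePassword: "changeit"
    sslTruststoreLocation: "/etc/kafka/truststore.jks"
    topic: "my-topic"
```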

Dependencies

At runtime, the kafka-ssl-sink Kamelet relies upon the presence of the following dependencies:

  • camel:core

  • camel:kafka

  • camel:kamelet

Camel JBang usage

Prerequisites

  • You’ve installed JBang.

  • You have executed the following command:

jbang app install camel@apache/camel

Supposing you have a file named route.yaml with this content:

- route:
    from:
      uri: "kamelet:timer-source"
      parameters:
        period: 10000
        message: 'test'
      steps:
        - to:
            uri: "kamelet:log-sink"

You can now run it directly with the following command:

camel run route.yaml

Camel K Environment Usage

This section describes how you can use the kafka-ssl-sink Kamelet in a Camel K environment.

Knative sink

You can use the kafka-ssl-sink Kamelet as a Knative sink by binding it to a Knative object.

kafka-ssl-sink-pipe.yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: kafka-ssl-sink-pipe
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-ssl-sink
    properties:
      bootstrapServers: The Brokers
      sslKeyPassword: The SSL Key Password
      sslKeystoreLocation: The SSL Keystore Location
      sslKeystorePassword: The SSL Keystore Password
      sslTruststoreLocation: The SSL Truststore Location
      topic: The Topic Names

Prerequisite

You have Camel K installed on the cluster.

Procedure for using the cluster CLI

  1. Save the kafka-ssl-sink-pipe.yaml file to your local drive, and then edit it as needed for your configuration.

  2. Run the sink by using the following command:

    kubectl apply -f kafka-ssl-sink-pipe.yaml

Procedure for using the Kamel CLI

Configure and run the sink by using the following command:

kamel bind kafka-ssl-sink -p "sink.bootstrapServers=The Brokers" -p "sink.sslKeyPassword=The SSL Key Password" -p "sink.sslKeystoreLocation=The SSL Keystore Location" -p "sink.sslKeystorePassword=The SSL Keystore Password" -p "sink.sslTruststoreLocation=The SSL Truststore Location" -p "sink.topic=The Topic Names" channel:mychannel

This command creates the Kamelet Pipe in the current namespace on the cluster.

Kafka sink

You can use the kafka-ssl-sink Kamelet as a Kafka sink by binding it to a Kafka topic.

kafka-ssl-sink-pipe.yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: kafka-ssl-sink-pipe
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: kafka-ssl-sink
    properties:
      bootstrapServers: The Brokers
      sslKeyPassword: The SSL Key Password
      sslKeystoreLocation: The SSL Keystore Location
      sslKeystorePassword: The SSL Keystore Password
      sslTruststoreLocation: The SSL Truststore Location
      topic: The Topic Names

Prerequisites

  • You’ve installed Strimzi.

  • You’ve created a topic named my-topic in the current namespace.

  • You have Camel K installed on the cluster.

Procedure for using the cluster CLI

  1. Save the kafka-ssl-sink-pipe.yaml file to your local drive, and then edit it as needed for your configuration.

  2. Run the sink by using the following command:

    kubectl apply -f kafka-ssl-sink-pipe.yaml

Procedure for using the Kamel CLI

Configure and run the sink by using the following command:

kamel bind kafka-ssl-sink -p "sink.bootstrapServers=The Brokers" -p "sink.sslKeyPassword=The SSL Key Password" -p "sink.sslKeystoreLocation=The SSL Keystore Location" -p "sink.sslKeystorePassword=The SSL Keystore Password" -p "sink.sslTruststoreLocation=The SSL Truststore Location" -p "sink.topic=The Topic Names" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This command creates the Kamelet Pipe in the current namespace on the cluster.