AWS SQS FIFO Sink

Provided by: "Apache Software Foundation"

Support Level for this Kamelet is: "Stable"

Send messages to an AWS SQS FIFO Queue.

Access Key/Secret Key are the basic method for authenticating to the AWS SQS service. These parameters are optional because the Kamelet also provides the 'useDefaultCredentialsProvider' option.

When the default credentials provider is enabled, the SQS client loads the credentials through that provider and does not use the static credentials. This is why the access key and secret key are not mandatory parameters for this Kamelet.
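
For example, in a Pipe this choice shows up in the sink properties. The fragment below is a minimal sketch with hypothetical queue and region values:

sink:
  ref:
    kind: Kamelet
    apiVersion: camel.apache.org/v1
    name: aws-sqs-fifo-sink
  properties:
    queueNameOrArn: my-queue.fifo
    region: us-east-1
    # Load credentials from the environment (IAM role, env vars, profile, ...)
    useDefaultCredentialsProvider: true
    # ...or leave the flag at its default (false) and pass static credentials instead:
    # accessKey: AKIAEXAMPLE
    # secretKey: example-secret-key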

Configuration Options

The following configuration options are available for the aws-sqs-fifo-sink Kamelet:

queueNameOrArn (Queue Name)
  Required. The SQS Queue name or ARN.
  Type: string

region (AWS Region)
  Required. The AWS region to access.
  Type: string
  Enum values: ap-south-1, eu-south-1, us-gov-east-1, me-central-1, ca-central-1, eu-central-1, us-iso-west-1, us-west-1, us-west-2, af-south-1, eu-north-1, eu-west-3, eu-west-2, eu-west-1, ap-northeast-3, ap-northeast-2, ap-northeast-1, me-south-1, sa-east-1, ap-east-1, cn-north-1, us-gov-west-1, ap-southeast-1, ap-southeast-2, us-iso-east-1, ap-southeast-3, us-east-1, us-east-2, cn-northwest-1, us-isob-east-1, aws-global, aws-cn-global, aws-us-gov-global, aws-iso-global, aws-iso-b-global

accessKey (Access Key)
  The access key obtained from AWS.
  Type: string

amazonAWSHost (AWS Host)
  The hostname of the Amazon AWS cloud.
  Type: string
  Default: amazonaws.com

autoCreateQueue (Autocreate Queue)
  Whether to automatically create the SQS queue.
  Type: boolean
  Default: false

contentBasedDeduplication (Content-Based Deduplication)
  Use content-based deduplication (it must be enabled on the SQS FIFO queue first).
  Type: boolean
  Default: false

overrideEndpoint (Endpoint Overwrite)
  Select this option to override the endpoint URI. To use this option, you must also provide a URI for the uriEndpointOverride option.
  Type: boolean
  Default: false

protocol (Protocol)
  The underlying protocol used to communicate with SQS.
  Type: string
  Default: https
  Example: http or https

secretKey (Secret Key)
  The secret key obtained from AWS.
  Type: string

uriEndpointOverride (Overwrite Endpoint URI)
  The overriding endpoint URI. To use this option, you must also select the overrideEndpoint option.
  Type: string

useDefaultCredentialsProvider (Default Credentials Provider)
  Whether the SQS client should load credentials through a default credentials provider or expect static credentials to be passed in.
  Type: boolean
  Default: false
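
The endpoint options are typically used together. For example, to point the Kamelet at a local SQS-compatible service for testing (the URL below is a hypothetical LocalStack address and the credentials are placeholders), the sink properties might include:

properties:
  queueNameOrArn: my-queue.fifo
  region: us-east-1
  overrideEndpoint: true
  uriEndpointOverride: http://localhost:4566
  accessKey: test
  secretKey: test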

Dependencies

At runtime, the aws-sqs-fifo-sink Kamelet relies upon the presence of the following dependencies:

  • camel:aws2-sqs

  • camel:core

  • camel:kamelet

Camel JBang usage

Prerequisites

  • You’ve installed JBang.

  • You have executed the following command:

jbang app install camel@apache/camel

Supposing you have a file named route.yaml with this content:

- route:
    from:
      uri: "kamelet:timer-source"
      parameters:
        period: 10000
        message: 'test'
      steps:
        - to:
            uri: "kamelet:log-sink"

You can now run it directly through the following command:

camel run route.yaml
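
To send data to the aws-sqs-fifo-sink itself from Camel JBang, a route along the following lines can be used; the queue name and region are hypothetical placeholders, and the default credentials provider is assumed to be available:

- route:
    from:
      uri: "kamelet:timer-source"
      parameters:
        period: 10000
        message: 'test'
      steps:
        - to:
            uri: "kamelet:aws-sqs-fifo-sink"
            parameters:
              queueNameOrArn: my-queue.fifo
              region: us-east-1
              useDefaultCredentialsProvider: true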

Camel K Environment Usage

This section describes how you can use the aws-sqs-fifo-sink.

Knative sink

You can use the aws-sqs-fifo-sink Kamelet as a Knative sink by binding it to a Knative object.

aws-sqs-fifo-sink-pipe.yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-sqs-fifo-sink-pipe
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-sqs-fifo-sink
    properties:
      queueNameOrArn: The Queue Name
      region: The AWS Region
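
The source above references a Knative Channel named mychannel. If it does not exist yet, a minimal Channel definition such as the following can be applied first (the name is only an example):

apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
  name: mychannel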

Prerequisite

You have Camel K installed on the cluster.

Procedure for using the cluster CLI

  1. Save the aws-sqs-fifo-sink-pipe.yaml file to your local drive, and then edit it as needed for your configuration.

  2. Run the sink by using the following command:

    kubectl apply -f aws-sqs-fifo-sink-pipe.yaml

Procedure for using the Kamel CLI

Configure and run the sink by using the following command:

kamel bind channel:mychannel -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=The AWS Region" aws-sqs-fifo-sink

This command creates the Kamelet Pipe in the current namespace on the cluster.
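
With concrete values substituted for the placeholders (the queue name and region below are hypothetical), the same binding could read:

kamel bind channel:mychannel -p "sink.queueNameOrArn=my-queue.fifo" -p "sink.region=us-east-1" aws-sqs-fifo-sink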

Kafka sink

You can use the aws-sqs-fifo-sink Kamelet as a Kafka sink by binding it to a Kafka topic.

aws-sqs-fifo-sink-pipe.yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-sqs-fifo-sink-pipe
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-sqs-fifo-sink
    properties:
      queueNameOrArn: The Queue Name
      region: The AWS Region

Prerequisites

  • You’ve installed Strimzi.

  • You’ve created a topic named my-topic in the current namespace (see the example KafkaTopic after this list).

  • You have Camel K installed on the cluster.
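
If the topic does not exist yet, a Strimzi KafkaTopic resource along the following lines can create it; the strimzi.io/cluster label must match your Kafka cluster name (my-cluster below is a hypothetical example), and the apiVersion should match your Strimzi installation:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1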

Procedure for using the cluster CLI

  1. Save the aws-sqs-fifo-sink-pipe.yaml file to your local drive, and then edit it as needed for your configuration.

  2. Run the sink by using the following command:

    kubectl apply -f aws-sqs-fifo-sink-pipe.yaml

Procedure for using the Kamel CLI

Configure and run the sink by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=The AWS Region" aws-sqs-fifo-sink

This command creates the Kamelet Pipe in the current namespace on the cluster.