AWS S3 Source

Provided by: "Apache Software Foundation"

Support Level for this Kamelet is: "Stable"

Receive data from an Amazon S3 Bucket.

The basic authentication method for the S3 service is to specify an access key and a secret key. These parameters are optional because the Kamelet provides a default credentials provider.

If you use the default credentials provider, the S3 client loads the credentials through this provider and doesn’t use the basic authentication method.
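For example, when you configure the source through a Pipe (see the Camel K Environment Usage section below), the two modes correspond roughly to the following properties blocks. This is only a sketch: the bucket name, region, and key values are placeholders, not working credentials.

properties:
  bucketNameOrArn: my-bucket
  region: eu-west-1
  # basic authentication: static credentials
  accessKey: my-access-key
  secretKey: my-secret-key

properties:
  bucketNameOrArn: my-bucket
  region: eu-west-1
  # default credentials provider: credentials are resolved by the provider,
  # so no accessKey or secretKey is set
  useDefaultCredentialsProvider: true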

Two headers are duplicated with different names for clarity at the sink level: CamelAwsS3Key is duplicated into aws.s3.key, and CamelAwsS3BucketName is duplicated into aws.s3.bucket.name.
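For example, a step downstream of the source can reference the duplicated headers with the simple language; bracket notation is used here because the header names contain dots. This is only a sketch of the steps block of a route that starts from kamelet:aws-s3-source:

steps:
  - log: "received object ${header[aws.s3.key]} from bucket ${header[aws.s3.bucket.name]}"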

Configuration Options

The following list summarizes the configuration options available for the aws-s3-source Kamelet:

  • bucketNameOrArn (Bucket Name). Required. The S3 Bucket name or Amazon Resource Name (ARN). Type: string.

  • region (AWS Region). Required. The AWS region to access. Type: string. Enum values: ap-south-1, eu-south-1, us-gov-east-1, me-central-1, ca-central-1, eu-central-1, us-iso-west-1, us-west-1, us-west-2, af-south-1, eu-north-1, eu-west-3, eu-west-2, eu-west-1, ap-northeast-3, ap-northeast-2, ap-northeast-1, me-south-1, sa-east-1, ap-east-1, cn-north-1, us-gov-west-1, ap-southeast-1, ap-southeast-2, us-iso-east-1, ap-southeast-3, us-east-1, us-east-2, cn-northwest-1, us-isob-east-1, aws-global, aws-cn-global, aws-us-gov-global, aws-iso-global, aws-iso-b-global.

  • accessKey (Access Key). The access key obtained from AWS. Type: string.

  • autoCreateBucket (Autocreate Bucket). Specifies whether to automatically create the S3 bucket. Type: boolean. Default: false.

  • delay (Delay). The number of milliseconds before the next poll of the selected bucket. Type: integer. Default: 500.

  • deleteAfterRead (Auto-delete Objects). Specifies whether to delete objects after consuming them. Type: boolean. Default: true.

  • forcePathStyle (Force Path Style). Forces path-style addressing when accessing AWS S3 buckets. Type: boolean. Default: false.

  • ignoreBody (Ignore Body). If true, the S3 Object body is ignored; setting this to true overrides any behavior defined by the includeBody option. If false, the S3 object is put in the body. Type: boolean. Default: false.

  • maxMessagesPerPoll (Max Messages Per Poll). The maximum number of messages to poll per polling cycle. Use 0 or a negative number for no limit. Type: integer. Default: 10.

  • overrideEndpoint (Endpoint Overwrite). Select this option to override the endpoint URI. To use this option, you must also provide a URI for the uriEndpointOverride option. Type: boolean. Default: false.

  • prefix (Prefix). The AWS S3 bucket prefix to consider while searching. Type: string. Example: folder/

  • secretKey (Secret Key). The secret key obtained from AWS. Type: string.

  • uriEndpointOverride (Overwrite Endpoint URI). The overriding endpoint URI. To use this option, you must also select the overrideEndpoint option. Type: string.

  • useDefaultCredentialsProvider (Default Credentials Provider). If true, the S3 client loads credentials through a default credentials provider. If false, it uses the basic authentication method (access key and secret key). Type: boolean. Default: false.
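As a sketch, several of the optional settings can be combined in a Pipe source's properties block, for example to keep objects after reading, restrict polling to a prefix, and point the client at an S3-compatible endpoint. The bucket name, region, and endpoint URI below are placeholders.

properties:
  bucketNameOrArn: my-bucket
  region: eu-west-1
  prefix: folder/
  deleteAfterRead: false
  overrideEndpoint: true
  uriEndpointOverride: http://my-s3-compatible-host:9000
  forcePathStyle: true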

Dependencies

At runtime, the aws-s3-source Kamelet relies upon the presence of the following dependencies:

  • camel:core

  • camel:aws2-s3

  • mvn:org.apache.camel.kamelets:camel-kamelets-utils:4.4.2-SNAPSHOT

  • camel:kamelet

Camel JBang usage

Prerequisites

  • You’ve installed JBang.

  • You have executed the following command:

jbang app install camel@apache/camel

Supposing you have a file named route.yaml with this content:

- route:
    from:
      uri: "kamelet:timer-source"
      parameters:
        period: 10000
        message: 'test'
      steps:
        - to:
            uri: "kamelet:log-sink"

You can now run it directly with the following command:

camel run route.yaml
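The same approach works for this Kamelet. As a sketch, a route.yaml that consumes objects from S3 and forwards them to the log sink could look like this, with placeholder values for the bucket name and region:

- route:
    from:
      uri: "kamelet:aws-s3-source"
      parameters:
        bucketNameOrArn: my-bucket
        region: eu-west-1
      steps:
        - to:
            uri: "kamelet:log-sink"

Run it the same way with camel run route.yaml.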

Camel K Environment Usage

This section describes how you can use the aws-s3-source.

Knative source

You can use the aws-s3-source Kamelet as a Knative source by binding it to a Knative object.

aws-s3-source-pipe.yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-s3-source-pipe
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-source
    properties:
      bucketNameOrArn: The Bucket Name
      region: The AWS Region
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

Prerequisite

You have Camel K installed on the cluster.

Procedure for using the cluster CLI

  1. Save the aws-s3-source-pipe.yaml file to your local drive, and then edit it as needed for your configuration.

  2. Run the source by using the following command:

    kubectl apply -f aws-s3-source-pipe.yaml

Procedure for using the Kamel CLI

Configure and run the source by using the following command:

kamel bind channel:mychannel -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=The AWS Region" aws-s3-source

This command creates the Kamelet Pipe in the current namespace on the cluster.

Kafka source

You can use the aws-s3-source Kamelet as a Kafka source by binding it to a Kafka topic.

aws-s3-source-pipe.yaml
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: aws-s3-source-pipe
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-source
    properties:
      bucketNameOrArn: The Bucket Name
      region: The AWS Region
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

Prerequisites

  • You’ve installed Strimzi.

  • You’ve created a topic named my-topic in the current namespace.

  • You have Camel K installed on the cluster.

Procedure for using the cluster CLI

  1. Save the aws-s3-source-pipe.yaml file to your local drive, and then edit it as needed for your configuration.

  2. Run the source by using the following command:

    kubectl apply -f aws-s3-source-pipe.yaml

Procedure for using the Kamel CLI

Configure and run the source by using the following command:

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=The AWS Region" aws-s3-source

This command creates the Kamelet Pipe in the current namespace on the cluster.