Contributing to Camel K

We love contributions!

The main project is written in Go and contains some parts written in Java for the integration runtime. Camel K is built on top of Kubernetes through Custom Resource Definitions (CRDs).

How can I contribute?

As with the rest of the Camel community, there are many ways you can contribute to Camel K beyond software development:

  • Contribute actively to development (see the section below)

  • Use it and report any feedback, improvements or bugs you find via GitHub, the mailing list or chat.

  • Contribute by writing missing documentation or blog posts about the features around Camel K

  • Tweet, like and share Camel K on your preferred social network

  • Enjoy the talks that contributors give at various conferences around the world


Requirements

In order to build the project, you need to comply with the following requirements:

  • Go version 1.16+: needed to compile and test the project. Refer to the Go website for the installation.

  • GNU Make: used to define composite build actions. It should already be installed, or available as a package, on most Linux and macOS systems.

  • MinGW: needed to compile the project on Windows. Refer to the MinGW website for the installation.

  • Windows Subsystem for Linux (WSL): for running Linux binary executables natively on Windows. Refer to WSL Website for installation. Alternatively, you can use Cygwin or Git Bash.

macOS users need gnu-sed to successfully run the Make build scripts (e.g. for generating the Camel K bundle). Install gnu-sed on your machine (e.g. brew install gnu-sed) and set your PATH so that gnu-sed is used: export PATH="/usr/local/opt/gnu-sed/libexec/gnubin:$PATH"

The Camel K Java runtime (camel-k-runtime) requires:

  • Java 11: needed for compilation

  • Maven: needed for building

Running checks

Checks rely on golangci-lint being installed; to install it, follow the Local Installation instructions on the golangci-lint website.

You can run checks via make lint. Alternatively, you can have the checks run from a Git pre-commit hook: install pre-commit, then install the hooks by running:

$ pre-commit install

Checking Out the Sources

You can create a fork of this project from GitHub, then clone your fork with the git command line tool.


This is a high-level overview of the project structure:

Table 1. Structure
Path	Content

/build
Contains the Docker and Maven build configuration.

/cmd
Contains the entry points (the main functions) for the camel-k binary (manager) and the kamel client tool.

/config
Contains Kubernetes resource files, specifically for use with the operator-sdk, that are used by the kamel client during installation. The /pkg/resources/resources.go file is kept in sync with the content of the directory (make build-resources), so that resources can be used from within the Go code.

/docs
Contains the documentation website based on Antora.

/e2e
Includes integration tests to ensure that the software interacts correctly with Kubernetes and OpenShift.

/examples
Various examples of Camel K usage.

/pkg
This is where the code resides. The code is divided into multiple subpackages.

/script
Contains scripts used during make operations for building the project.

To build the whole project you now need to run:

make build


This executes a full build of the Go code. If you need to build the components separately you can execute:

  • make build-kamel: to build the kamel client tool only.

Currently the build is not entirely supported on Windows. If you’re building on a Windows system, here’s a temporary workaround:

  1. Copy the script/Makefile to the root of the project.

  2. Run make -f script/Makefile.

  3. If the above command fails, run make build-kamel.

  4. Rename the kamel binary in the root to kamel.exe.

After a successful build, if you’re connected to a Docker daemon, you can build the operator Docker image by running:

make images

The above command produces a camel-k image named apache/camel-k. Sometimes you may need to produce camel-k images to push to a custom repository; to do that, pass the STAGING_IMAGE_NAME parameter (set to your target image name) to make as shown below:

make STAGING_IMAGE_NAME='' images-push-staging


Unit tests are executed automatically as part of the build. They use the standard Go testing framework.

Integration tests (aimed at ensuring that the code integrates correctly with Kubernetes and OpenShift) need special care. They all live in the /e2e directory.

For more detail on integration testing, refer to the integration testing documentation.


If you want to install everything you have in your source code and see it running on Kubernetes, follow the instructions below for your cluster type.

For Red Hat CodeReady Containers (CRC)

  • You need to have Docker installed and running (or connected to a Docker daemon)

  • You need to set up Docker daemon to trust CRC’s insecure Docker registry which is exposed by default through the route default-route-openshift-image-registry.apps-crc.testing. One way of doing that is to instruct the Docker daemon to trust the certificate:

    • oc extract secret/router-ca --keys=tls.crt -n openshift-ingress-operator: to extract the certificate

    • sudo cp tls.crt /etc/docker/certs.d/default-route-openshift-image-registry.apps-crc.testing/ca.crt: to copy the certificate for Docker daemon to trust

    • docker login -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps-crc.testing: to test that the certificate is trusted

  • Run make install-crc: to build the project and install it in the current namespace on CRC

  • You can specify a different namespace with make install-crc project=myawesomeproject

  • To uninstall Camel K, run kamel uninstall --all --olm=false

The commands assume you have a running CRC instance and are logged in correctly.

For Minikube

First remove any Camel K operator you may have previously installed, otherwise it will conflict with the new one you are about to build and install.

  • Enable the registry minikube addon: minikube addons enable registry

  • Point your shell at the internal minikube Docker daemon, so images are built there: eval $(minikube docker-env)

  • Run make images: to build the project and install the image in the internal minikube registry

  • Install camel-k-operator: ./kamel install

For remote Kubernetes/OpenShift clusters

If you have changed anything locally and want to apply the changes to a remote cluster, first push your camel-k image to a custom repository (see Building) and run the following command (the image name should be changed accordingly):

kamel install --operator-image-pull-policy=Always --olm=false

Note --olm=false is necessary as otherwise the OLM bundle version is preferred.

Local Helm installation

If you want to test the Helm installation:

  • Build the Helm chart: make release-helm

  • Build the project and the image: make images

  • Set the internal registry address: export REGISTRY_ADDRESS=$(kubectl -n kube-system get service registry -o jsonpath='{.spec.clusterIP}')

  • Install with Helm (use the chart version produced by make release-helm):

 helm install camel-k-dev docs/charts/camel-k-2.1.0-SNAPSHOT.tgz --set platform.build.registry.address=${REGISTRY_ADDRESS} --set platform.build.registry.insecure=true --set operator.image=apache/camel-k:2.1.0-SNAPSHOT

To uninstall: helm uninstall camel-k-dev


Now you can play with Camel K:

./kamel run examples/

To add additional dependencies to your routes:

./kamel run -d camel:dns examples/dns.js

Local development environment

If you need to develop and test your Camel K operator locally, you can follow the local development procedure.

Debugging and Running from IDE

Sometimes it’s useful to debug the code from the IDE when troubleshooting.

Debugging the kamel binary

It should be straightforward: just execute the /cmd/kamel/main.go file from the IDE (e.g. GoLand) in debug mode.

Debugging the operator

It is a bit more complex (but not so much).

You are going to run the operator code outside OpenShift from your IDE, so first of all you need to stop the operator running inside the cluster:

// use kubectl in plain Kubernetes
oc scale deployment/camel-k-operator --replicas 0

You can scale it back up to 1 when you are done and have updated the operator image.

You can set up the IDE (e.g. GoLand) to execute the /cmd/manager/main.go file in debug mode with operator as the argument.

When configuring the IDE task, make sure to add all required environment variables in the IDE task configuration screen:

  • Set the KUBERNETES_CONFIG environment variable to point to your Kubernetes configuration file (usually <homedir>/.kube/config).

  • Set the WATCH_NAMESPACE environment variable to a Kubernetes namespace you have access to.

  • Set the OPERATOR_NAME environment variable to camel-k.

After you set up the IDE task (making sure Java 11+ is used by default), you can run and debug the operator process.

The operator can be fully debugged in CRC, because it uses OpenShift S2I binary builds under the hood. The build phase cannot be (currently) debugged in Minikube because the Kaniko builder requires that the operator and the publisher pod share a common persistent volume.

Building Metadata for Publishing the Operator in Operator Hub

Publishing to an operator hub requires the creation and submission of metadata in a specific format. The operator-sdk provides tools to help with creating this metadata.


The bundle is the latest packaging format used for deploying the operator to an OLM registry. Generating the bundle creates a CSV (ClusterServiceVersion) and related metadata files in a directory named bundle. That directory contains a Dockerfile for building the bundle into a single image, and it is this image that is submitted to the OLM registry.

To generate the bundle for camel-k, use the following command:

make bundle

The bundle directory is created at the root of the camel-k project filesystem.