Gardener extension controller for the Cilium CNI network plugin
This controller operates on the `Network` resource in the `extensions.gardener.cloud/v1alpha1` API group. It manages those objects that request Cilium networking configuration, for example with Hubble enabled (`enabled: true`) or a specific identity store (`store: kubernetes`).
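A `Network` object handled by this controller might look like the following sketch. The metadata, CIDRs, and the provider config kind are illustrative assumptions; see the `example` folder for an authoritative manifest:

```yaml
apiVersion: extensions.gardener.cloud/v1alpha1
kind: Network
metadata:
  name: network
  namespace: shoot--foo--bar   # illustrative namespace
spec:
  type: cilium
  podCIDR: 10.244.0.0/16       # illustrative CIDR
  serviceCIDR: 10.96.0.0/24    # illustrative CIDR
  providerConfig:
    apiVersion: cilium.networking.extensions.gardener.cloud/v1alpha1
    kind: NetworkConfig        # kind name as assumed here
    hubble:
      enabled: true
    # store: kubernetes
```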
Please find a concrete example in the `example` folder. All Cilium-specific configuration should go into the `providerConfig` section. If additional configuration is required, it should be added to the networking-cilium chart in `controllers/networking-cilium/charts/internal/cilium/values.yaml`, and the corresponding code parts should be adapted.
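As a sketch, introducing a new configuration value in that values file might look roughly like this; the key names below are hypothetical and would also need to be wired into the chart templates:

```yaml
# controllers/networking-cilium/charts/internal/cilium/values.yaml
# (hypothetical excerpt — key names are illustrative)
global:
  myNewFeature:
    enabled: false
```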
Once the `Network` resource is applied, the networking-cilium controller creates all the necessary managed resources. These are picked up by the gardener-resource-manager, which then applies the network extension resources to the shoot cluster.
Finally, after successful reconciliation, the status of the `Network` resource should report output similar to:

```yaml
description: Successfully reconciled network
```
How to start using or developing this extension controller locally
You can run the controller locally on your machine by executing `make start`. Please make sure that your kubeconfig points to the cluster you want to connect to.
Static code checks and tests can be executed by running `make verify`. We use Go modules for dependency management and Ginkgo/Gomega for testing.
Feedback and Support
Feedback and contributions are always welcome. Please report bugs or suggestions as GitHub issues or join our Slack channel #gardener (please invite yourself to the Kubernetes workspace here).
Please find further resources about our project here:
1 - Usage As End User
Using the Networking Cilium extension with Gardener as end-user
The `core.gardener.cloud/v1beta1.Shoot` resource declares a `networking` field that is meant to contain network-specific configuration. In this document we describe what this configuration looks like for Cilium and provide an example `Shoot` manifest with minimal configuration that you can use to create a cluster.
Hubble is a fully distributed networking and security observability platform built on top of Cilium and BPF. It is optional and is deployed to the cluster when enabled in the shoot's `providerConfig`.
If the dashboard is not externally exposed, you can access it locally via port-forwarding:

```shell
kubectl port-forward -n kube-system deployment/hubble-ui 8081
```
The `NetworkingConfig` for the Cilium extension looks as follows:
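The original example is not reproduced here; a sketch covering the fields described below might look like the following, assuming the provider config kind is `NetworkConfig` in the `cilium.networking.extensions.gardener.cloud/v1alpha1` group (field nesting for `psp` is an assumption, values are illustrative):

```yaml
apiVersion: cilium.networking.extensions.gardener.cloud/v1alpha1
kind: NetworkConfig          # kind name as assumed here
debug: false                 # run Cilium in debug mode
psp:
  enabled: true              # nesting assumed; deploy pod security policies
hubble:
  enabled: true              # deploy Hubble into the cluster
tunnel: vxlan                # node-to-node encapsulation mode
store: kubernetes            # identity store backend
```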
- The `hubble.enabled` field describes whether Hubble should be deployed into the cluster or not (disabled by default).
- The `debug` field describes whether Cilium runs in debug mode or not (disabled by default); change this value to `true` to use debug mode.
- The `psp` field describes whether `cilium-agent` shall be deployed with pod security policies or not (the default).
- The `tunnel` field describes the encapsulation mode for communication between nodes. Possible values are `vxlan` (the default), `geneve`, or `disabled`.
- The `store` field describes which backend is used to store the identities: either `etcd` (kvstore) or `kubernetes` (CRD, the default).
Please find below an example `Shoot` manifest with Cilium networking configuration:
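The original manifest is not reproduced here; a minimal sketch could look like the following, where the cluster name, namespace, CIDRs, and the provider config kind are illustrative assumptions:

```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-cilium-shoot      # illustrative name
  namespace: garden-dev      # illustrative namespace
spec:
  networking:
    type: cilium
    providerConfig:
      apiVersion: cilium.networking.extensions.gardener.cloud/v1alpha1
      kind: NetworkConfig    # kind name as assumed here
      hubble:
        enabled: true
    pods: 100.96.0.0/11      # illustrative CIDRs
    nodes: 10.250.0.0/16
    services: 100.64.0.0/13
  # ... remaining provider-specific Shoot fields omitted
```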
If you would like to see a provider-specific `Shoot` example, please check out the documentation of the well-known extensions. A list of them can be found here.