
The web UI for managing your projects and clusters

Gardener Dashboard



Gardener Demo


Gardener Dashboard Documentation


The following SAP developers contributed to this project until this initial contribution was published as open source.

| contributor | commits (%) | +lines | -lines | first commit | last commit |
| --- | --- | --- | --- | --- | --- |
| Holger Koser | 313 (42%) | 57878 | 18562 | 2017-07-13 | 2018-01-23 |
| Andreas Herz | 307 (41%) | 13666 | 11099 | 2017-07-14 | 2017-10-27 |
| Peter Sutter | 99 (13%) | 4838 | 3967 | 2017-11-07 | 2018-01-23 |
| Gross, Lukas | 31 (4%) | 400 | 267 | 2018-01-10 | 2018-01-23 |

It is derived from the historical, internal gardener-ui repository at commit eeb623d60c86e6037c0e1dc2bdd9e54663bf41a8.


Apache License 2.0

Copyright 2020 The Gardener Authors

1 - Architecture

Dashboard Architecture Overview


The dashboard frontend is a Single Page Application (SPA) built with Vue.js. The dashboard backend is a web server built with Express and Node.js. The backend serves the bundled frontend as static content. The dashboard uses Socket.IO to enable real-time, bidirectional, event-based communication between the frontend and the backend. Communication from the backend to the different kube-apiservers uses the HTTP/2 network protocol. Authentication at the apiserver of the garden cluster is done via JWT tokens. These can either be an ID Token issued by an OpenID Connect provider or the token of a Kubernetes service account.


The dashboard frontend consists of many Vue.js single file components that manage their state via a centralized store. The store defines mutations to modify the state synchronously. If several mutations have to be combined or the state in the backend has to be modified at the same time, the store provides asynchronous actions to do this job. The synchronization of the data with the backend is done by plugins that also use actions.
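The mutation / action split described above can be illustrated with a small, hand-rolled store. This is a simplified sketch of the pattern only, not the dashboard's actual store code; all names below are made up for the example.

```javascript
// Hand-rolled sketch of the centralized store pattern (illustrative only).
const store = {
  state: { shoots: [], loading: false },
  // mutations modify the state synchronously
  mutations: {
    setLoading (state, value) { state.loading = value },
    setShoots (state, shoots) { state.shoots = shoots }
  },
  commit (name, payload) { this.mutations[name](this.state, payload) },
  // actions combine several mutations and may be asynchronous,
  // e.g. when data has to be synchronized with the backend
  actions: {
    async fetchShoots ({ commit }) {
      commit('setLoading', true)
      const shoots = await Promise.resolve([{ name: 'my-shoot' }]) // stands in for a backend call
      commit('setShoots', shoots)
      commit('setLoading', false)
    }
  },
  dispatch (name) { return this.actions[name]({ commit: this.commit.bind(this) }) }
}
```

A component would commit mutations for synchronous state changes and dispatch actions for anything that also touches the backend.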


The backend is currently a monolithic Node.js application, but it performs several tasks that are actually independent.

  • Static web server for the frontend single page application
  • Forwarding real-time events of the apiserver to the frontend
  • Providing an HTTP API
  • Bootstrapping shoot and seed clusters to support web terminals
  • Initiating and managing the end-user login flow in order to obtain an ID Token
  • Bidirectional integration with the GitHub issue management

It is planned to split the backend into several independent containers to increase stability and performance.


The following diagram shows the authorization code flow in the Gardener dashboard. When the user clicks the login button, they are redirected to the authorization endpoint of the OpenID Connect provider. In the case of the Dex IDP, authentication is delegated to the connected IDP. After successful login, the OIDC provider redirects back to the dashboard backend with a one-time authorization code. With this code, the dashboard backend can request an ID token for the logged-in user. The ID token is encrypted and stored as a secure httpOnly session cookie.

2 - Concepts

2.1 - Webterminals



Architecture Overview


We want to give garden operators and “regular” users of the Gardener dashboard an easy way to have a preconfigured shell directly in the browser.

This has several advantages:

  • no need to set up any tools locally
  • no need to download / store kubeconfigs locally
  • Each terminal session will have its own “access” service account created. This makes it easier to see “who” did “what” when using the web terminals.
  • The “access” service account is deleted when the terminal session expires
  • Easy “privileged” access to a node (privileged container, hostPID and hostNetwork enabled, host root filesystem mounted) in case of node troubleshooting, if allowed by the PodSecurityPolicy.

How it’s done - TL;DR

On the host cluster, we schedule a pod to which the dashboard frontend attaches (similar to kubectl attach). Usually the ops-toolbelt image is used, which contains all relevant tools like kubectl. The pod has a kubeconfig secret mounted with the necessary privileges for the target cluster - usually cluster-admin.

Target types

There are currently three targets to which a user can open a terminal session:

  • The (virtual) garden cluster - Currently operator only
  • The shoot cluster
  • The control plane of the shoot cluster - operator only


The dashboard chooses the host cluster (and namespace) based on several factors:

  • The host is chosen depending on the selected target and the role of the user (operator or “regular” user).
  • For performance / low latency reasons, we want to place the “terminal” pods as near as possible to the target kube-apiserver.

For example, the user wants to have a terminal for a shoot cluster. The kube-apiserver of the shoot is running in the seed-shoot-ns on the seed.

  • If the user is an operator, we place the “terminal” pod directly in the seed-shoot-ns on the seed.
  • However, if the user is a “regular” user, we don’t want to have “untrusted” workload scheduled on the seeds, that’s why the “terminal” pod is scheduled on the shoot itself, in a temporary namespace that is deleted afterwards.
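The placement rules above can be summarized as a small decision function. This is illustrative pseudologic only, not actual dashboard code; the return values are descriptive placeholders.

```javascript
// Where does the "terminal" pod for a shoot target run?
// Illustrative summary of the placement rules, not dashboard code.
function chooseShootTerminalHost (isOperator) {
  if (isOperator) {
    // low latency: run next to the shoot's kube-apiserver
    return { cluster: 'seed', namespace: 'seed-shoot-ns' }
  }
  // "regular" users: keep untrusted workload off the seeds
  return { cluster: 'shoot', namespace: 'temporary namespace on the shoot' }
}
```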

Lifecycle of a Web Terminal Session

1. Browser / Dashboard Frontend - Open Terminal

The user chooses the target and clicks the Open Terminal button in the browser. A POST request is made to the dashboard backend to request a new terminal session.

2. Dashboard Backend - Create Terminal Resource

According to the privileges of the user (operator / end user) and the selected target, the dashboard backend creates a terminal resource on behalf of the user in the (virtual) garden cluster and responds with a handle to the terminal session.

3. Browser / Dashboard Frontend

The frontend makes another POST request to the dashboard backend to fetch the terminal session. The backend waits until the terminal resource is in a “ready” state (timeout 10s) before sending a response to the frontend. More on that later.

4. Terminal Resource

The terminal resource, among other things, holds the information of the desired host and target cluster. The credentials to these clusters are declared as references (secretRef / serviceAccountRef). The terminal resource itself doesn’t contain sensitive information.

5. Admission

A validating webhook is in place to ensure that the user who created the terminal resource has permission to read the referenced credentials. There is also a mutating webhook in place. Both admission configurations have failurePolicy: Fail.

6. Terminal-Controller-Manager - Apply Resources on Host & Target Cluster

Sidenote: the terminal-controller-manager has no knowledge of Gardener, its shoots, or its seeds. In that sense, it can be considered independent of Gardener.

The terminal-controller-manager watches terminal resources and ensures the desired state on the host and target cluster. The terminal-controller-manager needs permission to read all secrets / service accounts in the virtual garden. As an additional safety net, the terminal-controller-manager ensures that the terminal resource was not created before the admission configurations were created.

The terminal-controller-manager then creates the necessary resources in the host and target cluster.

  • Target Cluster:
    • “Access” service account + (cluster)rolebinding, usually to the cluster-admin cluster role
      • used from within the “terminal” pod
  • Host Cluster:
    • “Attach” service account + rolebinding to the “attach” cluster role (privileges to attach to and get a pod)
      • will be used by the browser to attach to the pod
    • Kubeconfig secret, containing the “access” token from the target cluster
    • The “terminal” pod itself, with the kubeconfig secret mounted

7. Dashboard Backend - Responds to Frontend

As mentioned in step 3, the dashboard backend waits until the terminal resource is “ready”. It then reads the “attach” token from the host cluster on behalf of the user. It responds with:

  • attach token
  • hostname of the host cluster’s API server
  • name of the pod and its namespace

8. Browser / Dashboard Frontend - Attach to Pod

The dashboard frontend attaches to the pod on the host cluster by opening a WebSocket connection using the provided parameters and credentials. As long as the terminal window is open, the dashboard regularly annotates the terminal resource (heartbeat) to keep it alive.
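For illustration, the endpoint the frontend connects to is the kube-apiserver's pods/attach subresource. A sketch of how such a URL could be assembled from the parameters returned in step 7 follows; the function and parameter names are assumptions, not the dashboard's actual code.

```javascript
// Builds a WebSocket URL for the kube-apiserver pod attach subresource.
// Illustrative only; names are made up for this example.
function buildAttachUrl ({ hostname, namespace, podName, containerName }) {
  const query = [
    'stdin=true',
    'stdout=true',
    'tty=true',
    `container=${encodeURIComponent(containerName)}`
  ].join('&')
  return `wss://${hostname}/api/v1/namespaces/${namespace}/pods/${podName}/attach?${query}`
}
```

The “attach” token from step 7 would be sent as the credential when opening this connection.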

9. Terminal-Controller-Manager - Cleanup

When there is no heartbeat on the terminal resource for a certain amount of time (default is 5m), the created resources in the host and target cluster are cleaned up and the terminal resource is deleted.
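The cleanup decision boils down to comparing the last heartbeat timestamp against the configured TTL. A minimal sketch follows (hypothetical helper; the real terminal-controller-manager implements this differently):

```javascript
// A terminal is considered expired when its last heartbeat is older
// than the configured TTL (default 5m). Illustrative helper only.
const DEFAULT_TTL_SECONDS = 5 * 60

function isExpired (lastHeartbeat, now, ttlSeconds = DEFAULT_TTL_SECONDS) {
  return (now.getTime() - lastHeartbeat.getTime()) / 1000 > ttlSeconds
}
```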

Browser Trusted Certificates for Kube-Apiservers


The dashboard frontend opens up a secure WebSocket connection to the kube-apiserver. The certificate presented by the kube-apiserver must be browser trusted, otherwise the connection can’t be established (rejected by browser policy). Most kube-apiservers have self-signed certificates from a custom Root CA.


Preferred Solution

There is an issue on the gardener component to have browser-trusted certificates for shoot kube-apiservers using SNI and cert-manager. However, this would solve the issue for shoots and shooted seeds, but not for soil and plant kube-apiservers and potentially others.

Current Solution

We had to work around this by creating ingress resources for the kube-apiservers and letting cert-manager (or the new shoot cert service) request browser-trusted certificates.

3 - Deployment

3.1 - Access Restrictions

Access Restrictions

The dashboard can be configured with access restrictions.

Access restrictions are shown for regions that have a matching label in the CloudProfile

  regions:
  - name: pangaea-north-1
    zones:
    - name: pangaea-north-1a
    - name: pangaea-north-1b
    - name: pangaea-north-1c
    labels:
      seed.gardener.cloud/eu-access: "true"
  • If the user selects the access restriction, spec.seedSelector.matchLabels[key] will be set.
  • When selecting an option, metadata.annotations[optionKey] will be set.

The value that is set depends on the configuration. See 2. under Configuration section below.

apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  annotations:
    support.gardener.cloud/eu-access-for-cluster-addons: "true"
    support.gardener.cloud/eu-access-for-cluster-nodes: "true"
spec:
  seedSelector:
    matchLabels:
      seed.gardener.cloud/eu-access: "true"

In order for the shoot (with enabled access restriction) to be scheduled on a seed, the seed needs to have the label set. E.g.

apiVersion: core.gardener.cloud/v1beta1
kind: Seed
metadata:
  labels:
    seed.gardener.cloud/eu-access: "true"

Configuration

As Gardener administrator:

  1. you can control the visibility of the chips with the accessRestriction.items[].display.visibleIf and accessRestriction.items[].options[].display.visibleIf properties. E.g. in this example the access restriction chip is shown if the value is true and the option is shown if the value is false.
  2. you can control the value of the input field (switch / checkbox) with the accessRestriction.items[].input.inverted and accessRestriction.items[].options[].input.inverted properties. Setting the inverted property to true will invert the value. That means that when selecting the input field the value will be 'false' instead of 'true'.
  3. you can configure the text that is displayed when no access restriction options are available by setting accessRestriction.noItemsText

Example values.yaml:

accessRestriction:
  noItemsText: No access restriction options available for region {region} and cloud profile {cloudProfile}
  items:
  - key: seed.gardener.cloud/eu-access
    display:
      visibleIf: true
      # title: foo # optional title, if not defined key will be used
      # description: bar # optional description displayed in a tooltip
    input:
      title: EU Access
      description: |
        This service is offered to you with our regular SLAs and 24x7 support for the control plane of the cluster. 24x7 support for cluster add-ons and nodes is only available if you meet the following conditions:
    options:
    - key: support.gardener.cloud/eu-access-for-cluster-addons
      display:
        visibleIf: false
        # title: bar # optional title, if not defined key will be used
        # description: baz # optional description displayed in a tooltip
      input:
        title: No personal data is used as name or in the content of Gardener or Kubernetes resources (e.g. Gardener project name or Kubernetes namespace, configMap or secret in Gardener or Kubernetes)
        description: |
          If you can't comply, only third-level/dev support at usual 8x5 working hours in EEA will be available to you for all cluster add-ons such as DNS and certificates, Calico overlay network and network policies, kube-proxy and services, and everything else that would require direct inspection of your cluster through its API server
        inverted: true
    - key: support.gardener.cloud/eu-access-for-cluster-nodes
      display:
        visibleIf: false
      input:
        title: No personal data is stored in any Kubernetes volume except for container file system, emptyDirs, and persistentVolumes (in particular, not on hostPath volumes)
        description: |
          If you can't comply, only third-level/dev support at usual 8x5 working hours in EEA will be available to you for all node-related components such as Docker and Kubelet, the operating system, and everything else that would require direct inspection of your nodes through a privileged pod or SSH
        inverted: true

3.2 - Theming



Gardener landscape administrators should be able to change the appearance of the Gardener Dashboard via configuration, without the need to touch the code.


Gardener Dashboard has been built with Vuetify. We use Vuetify’s built-in theming support to centrally configure colors that are used throughout the web application. Colors can be configured for both light and dark themes. Configuration is done via the helm chart, see the respective theme section there. Colors can be specified as HTML color code (e.g. #FF0000 for red) or by referencing a color from Vuetify’s Material Design Color Pack.

The following colors can be configured:

| color | used for |
| --- | --- |
| primary | icons, chips, buttons, popovers, etc. |
| main-background | main navigation, login page |
| main-navigation-title | text color on main navigation |
| toolbar-background | background color for toolbars in cards, dialogs, etc. |
| toolbar-title | text color for toolbars in cards, dialogs, etc. |
| action-button | buttons in tables and cards, e.g. cluster details page |
| info | Snotify info popups |
| warning | Snotify warning popups, warning texts |
| error | Snotify error popups, error texts |

If you use the helm chart, you can configure those with frontendConfig.themes.light for the light theme and frontendConfig.themes.dark for the dark theme.


frontendConfig:
  themes:
    light:
      primary: '#0b8062'
      anchor: '#0b8062'
      main-background: 'grey.darken3'
      main-navigation-title: 'shades.white'
      toolbar-background: '#0b8062'
      toolbar-title: 'shades.white'
      action-button: 'grey.darken4'

Logos and Icons

It is also possible to exchange the Dashboard logo and icons. When using the helm chart, you can replace the contents of the assets folder via the frontendConfig.assets map.

Attention: You need to set values for all files as mapping the volume will overwrite all files. It is not possible to exchange single files.

The files have to be encoded as base64 for the chart - to generate the encoded files for the values.yaml of the helm chart, you can use the following shorthand with bash or zsh on Linux systems. If you use macOS, install coreutils with brew (brew install coreutils) or remove the -w0 parameter.

cat << EOF
frontendConfig:
  assets:
    favicon-16x16.png: |
      $(cat frontend/public/static/assets/favicon-16x16.png | base64 -w0)
    favicon-32x32.png: |
      $(cat frontend/public/static/assets/favicon-32x32.png | base64 -w0)
    favicon-96x96.png: |
      $(cat frontend/public/static/assets/favicon-96x96.png | base64 -w0)
    favicon.ico: |
      $(cat frontend/public/static/assets/favicon.ico | base64 -w0)
    logo.svg: |
      $(cat frontend/public/static/assets/logo.svg | base64 -w0)
EOF

Then, swap in the base64 encoded version of your files where needed.

4 - Development

4.1 - Local Setup

Local development


Develop new features and fix bugs on the Gardener Dashboard.


  • Yarn. For the required version, refer to .engines.yarn in package.json.
  • Node.js. For the required version, refer to .engines.node in package.json.


1. Clone repository

Clone the gardener/dashboard repository

git clone https://github.com/gardener/dashboard.git

2. Install dependencies

Run yarn at the repository root to install all dependencies.

cd dashboard
yarn

3. Configuration

Place the Gardener Dashboard configuration under ${HOME}/.gardener/config.yaml or alternatively set the path to the configuration file using the GARDENER_CONFIG environment variable.

A local configuration example for minikube and dex could look like this:

port: 3030
logLevel: debug
logFormat: text
apiServerUrl: https://minikube   # garden cluster kube-apiserver url
sessionSecret: c2VjcmV0          # symmetric key used for encryption
oidc:
  issuer: https://minikube:32001
  client_id: dashboard
  client_secret: c2VjcmV0       # oauth client secret
  redirect_uri: http://localhost:8080/auth/callback
  scope: 'openid email profile groups audience:server:client_id:dashboard audience:server:client_id:kube-kubectl'
  clockTolerance: 15
frontend:
  dashboardUrl:
    pathname: /api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
  defaultHibernationSchedule:
    evaluation:
    - start: 00 17 * * 1,2,3,4,5
    development:
    - start: 00 17 * * 1,2,3,4,5
      end: 00 08 * * 1,2,3,4,5
    production: ~

4. Run it locally

The Gardener Dashboard backend server requires a kubeconfig for the Garden Cluster. You can set it e.g. by using the KUBECONFIG environment variable.

Concurrently run the backend server (port 3030) and the frontend server (port 8080), both with hot reload enabled, in two separate terminals.

In the first terminal, start the backend server:

cd backend
export KUBECONFIG=/path/to/garden/cluster/kubeconfig.yaml
yarn serve

In the second terminal, start the frontend server:

cd frontend
yarn serve

You can now access the UI on http://localhost:8080/


Build docker image locally.

make build


Push docker image to Google Container Registry.

make push

This command expects a valid gcloud configuration named gardener.

gcloud config configurations describe gardener
is_active: true
name: gardener
    project: johndoe-1008

4.2 - Testing



We use the Jest JavaScript Testing Framework.

  • Jest can collect code coverage information
  • Jest supports snapshot testing out of the box
  • All-in-one solution. Replaces Mocha, Chai, Sinon and Istanbul
  • It works with Vue.js and Node.js projects

To execute all tests, simply run

yarn workspaces foreach --all run test

or to include test coverage generation

yarn workspaces foreach --all run test-coverage

You can also run tests for frontend, backend and charts directly inside the respective folder via

yarn test


We use ESLint for static code analysis.

To execute, run

yarn workspaces foreach --all run lint

5 - Usage

5.1 - Connect Kubectl

Connect kubectl

In Kubernetes, the configuration for access to your cluster is in a format known as kubeconfig, normally stored as a file. It contains details such as cluster API server addresses and user access credentials. Treat it as sensitive data. Tools like kubectl use the kubeconfig to connect and authenticate to a cluster and perform operations on it. Learn more about kubeconfig and kubectl on kubernetes.io.


  • You are logged on to the Gardener Dashboard.
  • You have created a cluster and its status is operational.


Downloading kubeconfig for a cluster

  1. Select your project from the dropdown on the left, then choose CLUSTERS and locate your cluster in the list. Choose the key icon to bring up a dialog with the access options.

    In the Kubeconfig section, you can download, copy or view the kubeconfig for the cluster. The same options are also available in the Access section on the cluster details screen. To find it, choose a cluster from the list.

  2. Choose the download icon to download the kubeconfig as a file on your local system.

Connecting to the cluster

In the following command, replace <path-to-kubeconfig> with the actual path of the file where you stored the kubeconfig downloaded in the previous steps.

$ kubectl --kubeconfig=<path-to-kubeconfig> get namespaces

The command connects to the cluster and lists its namespaces.

Exporting KUBECONFIG environment variable

Since many kubectl commands will be used, it’s a good idea to shorten the expressions wherever possible. The kubectl tool has a fallback strategy for looking up a kubeconfig to work with. For example, it looks for the KUBECONFIG environment variable, whose value is the path to the kubeconfig file meant to be used. Export the variable:

$ export KUBECONFIG=<path-to-file>

In the previous snippet, make sure to replace <path-to-file> with the path to the kubeconfig of the cluster that you want to connect to.


5.2 - Custom Fields

Custom Shoot Fields

The Dashboard supports custom shoot fields that can be defined per project by specifying the metadata.annotations["dashboard.gardener.cloud/shootCustomFields"] annotation. The fields can be configured to be displayed on the cluster list and cluster details page. Custom fields do not show up on the ALL_PROJECTS page.

| field | type | default | required | description |
| --- | --- | --- | --- | --- |
| name | String | | ✔️ | Name of the custom field |
| path | String | | ✔️ | Path in shoot resource, of which the value must be of primitive type (no object / array). Use lodash get path syntax, e.g. metadata.labels["shoot.gardener.cloud/status"] or spec.networking.type |
| icon | String | | | MDI icon for field on the cluster details page. See materialdesignicons.com for available icons. Must be in the format: mdi-<icon-name>. |
| tooltip | String | | | Tooltip for the custom field that appears when hovering with the mouse over the value |
| defaultValue | String/Number | | | Default value, in case there is no value for the given path |
| showColumn | Bool | true | | Field shall appear as column in the cluster list |
| columnSelectedByDefault | Bool | true | | Indicates if field shall be selected by default on the cluster list (not hidden by default) |
| weight | Number | 0 | | Defines the order of the column. The standard columns start with weight 100 and continue in 100 increments (200, 300, ..) |
| sortable | Bool | true | | Indicates if column is sortable on the cluster list |
| searchable | Bool | true | | Indicates if column is searchable on the cluster list |
| showDetails | Bool | true | | Field shall appear in a dedicated card (Custom Fields) on the cluster details page |
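To illustrate how a custom field value is looked up, here is a simplified resolver for plain dot paths such as spec.networking.type. The dashboard itself uses lodash get, which additionally supports the bracket syntax shown above; this helper exists only for illustration.

```javascript
// Simplified path resolver (illustrative only -- the dashboard uses lodash get).
// Walks a dot-separated path through the shoot object and falls back to
// defaultValue when any segment is missing.
function resolveCustomField (shoot, path, defaultValue) {
  const value = path.split('.').reduce(
    (obj, key) => (obj == null ? undefined : obj[key]),
    shoot
  )
  return value === undefined ? defaultValue : value
}
```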

As there is currently no way to configure the custom shoot fields for a project in the Gardener dashboard, you have to use kubectl to update the project resource. See /docs/dashboard/usage/project-operations/#download-kubeconfig-for-a-user for how to get a kubeconfig for the garden cluster in order to edit the project.

The following is an example project yaml:

apiVersion: core.gardener.cloud/v1beta1
kind: Project
metadata:
  annotations:
    dashboard.gardener.cloud/shootCustomFields: |
      {
        "shootStatus": {
          "name": "Shoot Status",
          "path": "metadata.labels[\"shoot.gardener.cloud/status\"]",
          "icon": "mdi-heart-pulse",
          "tooltip": "Indicates the health status of the cluster",
          "defaultValue": "unknown",
          "showColumn": true,
          "columnSelectedByDefault": true,
          "weight": 950,
          "searchable": true,
          "sortable": true,
          "showDetails": true
        },
        "networking": {
          "name": "Networking Type",
          "path": "spec.networking.type",
          "icon": "mdi-table-network",
          "showColumn": false
        }
      }

5.3 - Project Operations

Project Operations

This section demonstrates how to use the standard Kubernetes tool for cluster operations, kubectl, for common operations on Gardener resources. For more information on kubectl, see kubectl on kubernetes.io.


  • You’re logged on to the Gardener Dashboard.
  • You’ve created a cluster and its status is operational.

It’s recommended that you get acquainted with the resources in the Gardener API.

Downloading kubeconfig for remote project operations

The kubeconfig for project operations is different from the one for cluster operations. It has a larger scope and allows a different set of operations that are applicable for a project administrator role, such as lifecycle control on clusters and managing project members.

Depending on your goal, you create a service account suitable for automation and download its kubeconfig, or you can get a user-specific kubeconfig. The difference is the identity on behalf of which the operations are performed.

Download kubeconfig for a user

Kubernetes doesn’t offer a resource type of its own for human users that access the API server. Instead, you either have to manage unique user strings or use an OpenID Connect (OIDC) compatible Identity Provider (IDP) to do the job.

Once the latter is set up, each Gardener user can use the kubelogin plugin for kubectl to authenticate against the API server:

  1. Set up kubelogin if you don’t have it yet. More information: kubelogin setup.

  2. Open the menu at the top right of the screen, then choose MY ACCOUNT.

    Show account details

  3. On the Access card, choose the arrow to see all options for the personalized command-line interface access.

    Show details of OIDC login

    The personal bearer token that is also offered here only provides access for a limited amount of time for one-time operations, for example, in curl commands. The kubeconfig provided for the personalized access is used by kubelogin to grant the user permanent access to the Gardener API by using a refresh token.

  4. Check that the right Project is chosen and keep the settings otherwise. Download the kubeconfig file and add its path to the KUBECONFIG environment variable.

You can now execute kubectl commands on the garden cluster using the identity of your user.

Download kubeconfig for a Service Account

  1. Go to a service account and choose Download.

    Download service account kubeconfig

  2. Add the downloaded kubeconfig to your configuration.

You can now execute kubectl commands on the garden cluster using the technical service account.

List Gardener API resources

  1. Using a kubeconfig for project operations, you can list the Gardener API resources using the following command:

    kubectl api-resources | grep garden

    The response looks like this:

    backupbuckets                     bbc               false        BackupBucket
    backupentries                     bec               true         BackupEntry
    cloudprofiles                     cprofile,cpfl            false        CloudProfile
    controllerinstallations           ctrlinst            false        ControllerInstallation
    controllerregistrations           ctrlreg            false        ControllerRegistration
    plants                            pl                true         Plant
    projects                                            false        Project
    quotas                            squota            true         Quota
    secretbindings                    sb                true         SecretBinding
    seeds                                               false        Seed
    shoots                                              true         Shoot
    shootstates                                         true         ShootState
    terminals                                      true         Terminal
    clusteropenidconnectpresets       coidcps        false        ClusterOpenIDConnectPreset
    openidconnectpresets              oidcps        true         OpenIDConnectPreset
  2. Enter the following command to view the Gardener API versions:

    kubectl api-versions | grep garden

    The response looks like this:

Check your permissions

  1. The operations on project resources are limited by the role of the identity that tries to perform them. To get an overview of your permissions, use the following command:

    kubectl auth can-i --list | grep garden

    The response looks like this:

    shoots.core.gardener.cloud                      []                       []                      [create delete deletecollection get list patch update watch]
    secretbindings.core.gardener.cloud              []                       []                      [create delete deletecollection get list patch update watch]
    projects.core.gardener.cloud                    []                       [flowering]             [get patch update delete]
    namespaces                                      []                       [garden-flowering]      [get]
  2. Try to execute an operation that you aren’t allowed to perform, for example:

    kubectl get projects

    You receive an error message like this:

    Error from server (Forbidden): projects.core.gardener.cloud is forbidden: User "system:serviceaccount:garden-flowering:robot" cannot list resource "projects" in API group "core.gardener.cloud" at the cluster scope

Working with projects

  1. You can get the details for a project where you (or the service account) are a member.

    kubectl get project flowering

    The response looks like this:

    NAME        NAMESPACE          STATUS   OWNER                    CREATOR                         AGE
    flowering   garden-flowering   Ready    [PROJECT-ADMIN]@domain   [PROJECT-ADMIN]@domain system   45m

    For more information, see Project in the API reference.

  2. To query the names of the members of a project, use the following command:

    kubectl get project flowering -o jsonpath='{.spec.members[*].name}'

    The response looks like this:

    [PROJECT-ADMIN]@domain system:serviceaccount:garden-flowering:robot

    For more information, see members in the API reference.

Working with clusters

The Gardener domain object for a managed cluster is called Shoot.

List project clusters

To query the clusters in a project:

kubectl get shoots

The output looks like this:

geranium   aws            1.18.3    aws-eu1   geranium.flowering.shoot.<truncated>   Awake         Succeeded   100        True        True      True    True     74m

Create a new cluster

To create a new cluster using the command line, you need a YAML definition of the Shoot resource.

  1. To get started, copy the following YAML definition to a new file, for example, daffodil.yaml (or copy file shoot.yaml to daffodil.yaml) and adapt it to your needs.

    apiVersion: core.gardener.cloud/v1beta1
    kind: Shoot
    metadata:
      name: daffodil
      namespace: garden-flowering
    spec:
      secretBindingName: trial-secretbinding-gcp
      cloudProfileName: gcp
      region: europe-west1
      purpose: evaluation
      provider:
        type: gcp
        infrastructureConfig:
          apiVersion: gcp.provider.extensions.gardener.cloud/v1alpha1
          kind: InfrastructureConfig
          networks:
            workers: 10.250.0.0/16
        controlPlaneConfig:
          apiVersion: gcp.provider.extensions.gardener.cloud/v1alpha1
          zone: europe-west1-c
          kind: ControlPlaneConfig
        workers:
        - name: cpu-worker
          maximum: 2
          minimum: 1
          maxSurge: 1
          maxUnavailable: 0
          machine:
            type: n1-standard-2
            image:
              name: coreos
              version: 2303.3.0
          volume:
            type: pd-standard
            size: 50Gi
          zones:
          - europe-west1-c
      networking:
        type: calico
      maintenance:
        timeWindow:
          begin: 220000+0100
          end: 230000+0100
        autoUpdate:
          kubernetesVersion: true
          machineImageVersion: true
      hibernation:
        enabled: true
        schedules:
        - start: '00 17 * * 1,2,3,4,5'
          location: Europe/Kiev
      kubernetes:
        allowPrivilegedContainers: true
        kubeAPIServer:
          enableBasicAuthentication: false
        kubeControllerManager:
          nodeCIDRMaskSize: 24
        kubeProxy:
          mode: IPTables
        version: 1.18.3
      addons:
        nginx-ingress:
          enabled: false
        kubernetes-dashboard:
          enabled: false
  2. In your new YAML definition file, replace the value of field metadata.namespace with your namespace following the convention garden-[YOUR-PROJECTNAME].

  3. Create a cluster using this manifest (with flag --wait=false the command returns immediately, otherwise it doesn’t return until the process is finished):

    kubectl apply -f daffodil.yaml --wait=false

    The response looks like this: shoot.core.gardener.cloud/daffodil created
  4. It takes 5–10 minutes until the cluster is created. To watch the progress, get all shoots and use the -w flag.

    kubectl get shoots -w

For a more extended example, see Gardener example shoot manifest.

Delete cluster

To delete a shoot cluster, you must first annotate the shoot resource to confirm the operation with "true":

  1. Add the annotation to your manifest (daffodil.yaml in the previous example):

      kind: Shoot
      apiVersion: core.gardener.cloud/v1beta1
      metadata:
        annotations:
          confirmation.gardener.cloud/deletion: "true"
        name: daffodil
        namespace: garden-flowering
  2. Apply your changes of daffodil.yaml.

    kubectl apply -f daffodil.yaml

    The response looks like this:

    shoot.core.gardener.cloud/daffodil configured
  3. Trigger the deletion.

    kubectl delete shoot daffodil --wait=false

    The response looks like this:

    shoot.core.gardener.cloud "daffodil" deleted
  4. It takes 5–10 minutes to delete the cluster. To watch the progress, get all shoots and use the -w flag.

    kubectl get shoots -w
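Instead of editing the manifest, the confirmation annotation can also be set directly with kubectl annotate. A sketch using the example names from above:

```shell
# Confirm deletion by setting the required annotation directly on the shoot
# (--overwrite updates the annotation if it is already present),
# then trigger the deletion; --wait=false returns immediately
kubectl annotate shoot daffodil -n garden-flowering \
  confirmation.gardener.cloud/deletion=true --overwrite
kubectl delete shoot daffodil -n garden-flowering --wait=false
```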

Get kubeconfig for a cluster

To get the kubeconfig for a cluster:

kubectl get secrets daffodil.kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d

The response looks like this:

apiVersion: v1
kind: Config
current-context: shoot--flowering--daffodil
clusters:
- name: shoot--flowering--daffodil
  cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDR <truncated>
    server: https://api.daffodil.flowering.shoot.<truncated>
contexts:
- name: shoot--flowering--daffodil
  context:
    cluster: shoot--flowering--daffodil
    user: shoot--flowering--daffodil-token
users:
- name: shoot--flowering--daffodil-token
  user:
    token: HbjYIMuR9hmyb9 <truncated>

The name of the Secret containing the kubeconfig is in the form <cluster-name>.kubeconfig, that is, in this example: daffodil.kubeconfig
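Putting the naming convention to use, a sketch that saves the kubeconfig to a file and points kubectl at the new cluster (the output file name is an arbitrary choice; garden cluster access is assumed):

```shell
CLUSTER=daffodil
NAMESPACE=garden-flowering
# The secret is named <cluster-name>.kubeconfig in the project namespace
kubectl get secret "${CLUSTER}.kubeconfig" -n "${NAMESPACE}" \
  -o jsonpath='{.data.kubeconfig}' | base64 -d > "${CLUSTER}-kubeconfig.yaml"
# Use the saved kubeconfig for subsequent commands against the shoot cluster
kubectl --kubeconfig="${CLUSTER}-kubeconfig.yaml" get nodes
```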

See also:

  • Working with Service Accounts
  • Authenticating with an Identity Provider

5.4 - Terminal Shortcuts

Terminal Shortcuts

As a user or Gardener administrator, you can configure terminal shortcuts, which are preconfigured terminals for frequently used views.

You can launch the terminal shortcuts directly on the shoot details screen.

You can view the definition of a terminal shortcut by clicking on the eye icon.

When creating a new terminal, you can now also directly alter its configuration.

With expanded configuration

On the Create Terminal Session dialog you can choose one or multiple terminal shortcuts.

Project-specific terminal shortcuts (created by a member of the project) have a project icon badge and are listed as Unverified.

Before a project-specific terminal shortcut is run, a warning message is displayed informing the user about the risks.

How to create a project specific terminal shortcut

Disclaimer: “Project-specific terminal shortcuts” are an experimental feature and may change in future releases (we plan to introduce a dedicated custom resource).

You need to create a secret with the name terminal.shortcuts within your project namespace, containing your terminal shortcut configurations. Under data.shortcuts you add a list of terminal shortcuts (base64 encoded). Example terminal.shortcuts secret:

apiVersion: v1
kind: Secret
metadata:
  name: terminal.shortcuts
  namespace: garden-myproject
type: Opaque
data:
  shortcuts: LS0tCi0gdGl0bGU6IE5ldHdvcmtEZWxheVRlc3RzCiAgZGVzY3JpcHRpb246IFNob3cgbmV0d29ya21hY2hpbmVyeS5pbydzIE5ldHdvcmtEZWxheVRlc3RzCiAgdGFyZ2V0OiBzaG9vdAogIGNvbnRhaW5lcjoKICAgIGltYWdlOiBxdWF5LmlvL2RlcmFpbGVkL2s5czpsYXRlc3QKICAgIGFyZ3M6CiAgICAtIC0taGVhZGxlc3MKICAgIC0gLS1jb21tYW5kPW5ldHdvcmtkZWxheXRlc3QKICBzaG9vdFNlbGVjdG9yOgogICAgbWF0Y2hMYWJlbHM6CiAgICAgIGZvbzogYmFyCi0gdGl0bGU6IFNjYW4gQ2x1c3RlcgogIGRlc2NyaXB0aW9uOiBTY2FucyBsaXZlIEt1YmVybmV0ZXMgY2x1c3RlciBhbmQgcmVwb3J0cyBwb3RlbnRpYWwgaXNzdWVzIHdpdGggZGVwbG95ZWQgcmVzb3VyY2VzIGFuZCBjb25maWd1cmF0aW9ucwogIHRhcmdldDogc2hvb3QKICBjb250YWluZXI6CiAgICBpbWFnZTogcXVheS5pby9kZXJhaWxlZC9rOXM6bGF0ZXN0CiAgICBhcmdzOgogICAgLSAtLWhlYWRsZXNzCiAgICAtIC0tY29tbWFuZD1wb3BleWU=
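To produce the base64 value for data.shortcuts, you can encode a plain YAML shortcuts file. A sketch (shortcuts.yaml is an assumed file name; the shortcut shown mirrors the K9s examples used elsewhere on this page):

```shell
# Write a single shortcut definition to a local file
cat > shortcuts.yaml <<'EOF'
- title: Cluster Overview
  description: Quick overview of the cluster status using the K9s pulses view
  target: shoot
  container:
    image: quay.io/derailed/k9s:latest
    args:
    - --headless
    - --command=pulses
EOF
# Encode it for use as the data.shortcuts value of the secret
base64 -w0 shortcuts.yaml
```

Note that `-w0` (disable line wrapping) is a GNU coreutils option; on macOS, `base64 < shortcuts.yaml | tr -d '\n'` achieves the same.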

How to configure the dashboard with terminal shortcuts

Example values.yaml:

frontendConfig:
  features:
    terminalEnabled: true
    projectTerminalShortcutsEnabled: true # members can create a `terminal.shortcuts` secret containing the project specific terminal shortcuts
  terminal:
    shortcuts:
    - title: "Control Plane Pods"
      description: Using K9s to view the pods of the control plane for this cluster
      target: cp
      container:
        image: quay.io/derailed/k9s:latest
        args:
        - "--headless"
        - "--command=pods"
    - title: "Cluster Overview"
      description: This gives a quick overview about the status of your cluster using K9s pulse feature
      target: shoot
      container:
        image: quay.io/derailed/k9s:latest
        args:
        - "--headless"
        - "--command=pulses"
    - title: "Nodes"
      description: View the nodes for this cluster
      target: shoot
      container:
        image: quay.io/derailed/k9s:latest
        command:
        - bin/sh
        - -c
        - sleep 1 && while true; do k9s --headless --command=nodes; done
#      shootSelector:
#        matchLabels:
#          foo: bar
terminal: # is generally required for the terminal feature
  containerImageDescriptions:
    - image: /.*/ops-toolbelt:.*/
      description: Run `ghelp` to get information about installed tools and packages
  gardenTerminalHost:
    seedRef: my-soil
  garden:
    operatorCredentials:
      serviceAccountRef:
        name: dashboard-terminal-admin
        namespace: garden
  bootstrap:
    disabled: false
    shootDisabled: false
    seedDisabled: false
    gardenTerminalHostDisabled: true
    apiServerIngress:
      annotations: # managed nginx HTTPS

5.5 - Using Terminal

Using the Dashboard Terminal

The dashboard features an integrated web-based terminal to your clusters. It allows you to use kubectl without having to supply a kubeconfig. This page describes the several ways to access the terminal.


Prerequisites:

  • You are logged on to the Gardener Dashboard.
  • You have created a cluster and its status is operational.
  • The landscape administrator has enabled the terminal feature.
  • The cluster you want to connect to is reachable from the dashboard.


Open from cluster list

  1. Choose your project from the menu on the left and choose CLUSTERS.

  2. Locate a cluster for which you want to open a Terminal and choose the key icon.

  3. In the dialog, choose the icon on the right of the Terminal label.

Open from cluster details page

  1. Choose your project from the menu on the left and choose CLUSTERS.

  2. Locate a cluster for which you want to open a Terminal and choose to display its details.

  3. In the Access section, choose the icon on the right of the Terminal label.


Opening up the terminal in either of the ways discussed here results in the following screen:

It provides a bash environment with a range of useful tools, including an installed and configured kubectl (with alias k) that you can use right away with your cluster.

Try to list the namespaces in the cluster.

$ k get ns

You get a list of the namespaces in the cluster.

5.6 - Working With Projects

Working with Projects

Projects are used to group clusters, to onboard the IaaS resources they utilize, and to organize access control. To work with clusters, you need to create a project that they’ll belong to.


Prerequisites:

  • You have access to the Gardener dashboard and have permissions to create projects.


  1. Log on to the Gardener Dashboard and choose CREATE YOUR FIRST PROJECT.

  2. Provide a project Name and, optionally, a Description and a Purpose, then choose CREATE.

    Note: You will not be able to change the project Name later. The rest of the details are editable.

    The result is similar to the following:

    If you need to create more projects, expand the projects list dropdown on the left. When expanded, it reveals a CREATE PROJECT button that brings up the same dialog as above.

    When you need to delete your project, go to ADMINISTRATION, choose the trash bin icon, and confirm the operation.

5.7 - Working With Service Accounts

Working with Service Accounts


The cluster operations that are performed manually in the dashboard or via kubectl can be automated using the Gardener API. You need a service account to be authorized to perform them.

The service account of a project has access to all Kubernetes resources in the project.

Create a Service Account

  1. Select your project and choose MEMBERS from the menu on the left.

  2. Locate the section Service Accounts and choose +.

    Add service account

  3. Enter the service account details.

    Enter service account details

    The following Roles are available:

    Role                     Granted Permissions
    Admin                    Fully manage resources inside the project, except for member management. The delete/modify permissions for ServiceAccounts are deprecated for this role and will be removed in a future version of Gardener; use the Service Account Manager role instead.
    Viewer                   Read all resources inside the project, except secrets.
    UAM                      Manage human users or groups in the project member list. Service accounts can only be managed by admins.
    Service Account Manager  Fully manage service accounts inside the project namespace and request tokens for them; refer to the Service Account Manager documentation. For security reasons, this role should not be assigned to service accounts; in particular, a service account should not be able to refresh tokens for itself.
  4. Choose CREATE.

Use the Service Account

To use the service account, download or copy its kubeconfig.

Download service account kubeconfig
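The downloaded kubeconfig can then drive automation against the Gardener API. A minimal sketch (the kubeconfig file name is an assumption):

```shell
# List all shoots in the project using the service account's kubeconfig
kubectl --kubeconfig=service-account.kubeconfig get shoots -n garden-flowering
```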

Delete the Service Account

Choose Delete Service Account to delete it.

Delete service account