
Provider KubeVirt

Gardener Extension for KubeVirt provider

Project Gardener implements the automated management and operation of Kubernetes clusters as a service. Its main principle is to leverage Kubernetes concepts for all of its tasks.

Recently, most of the vendor specific logic has been developed in-tree. However, the project has grown to a size where it is very hard to extend, maintain, and test. With GEP-1 we have proposed how the architecture can be changed in a way to support external controllers that contain their very own vendor specifics. This way, we can keep Gardener core clean and independent.

This extension implements Gardener’s extension contract for the KubeVirt provider. It includes KubeVirt-specific controllers for Infrastructure, ControlPlane, and Worker resources, as well as KubeVirt-specific control plane webhooks. Unlike other provider extensions, it does not include controllers for BackupBucket and BackupEntry resources, since KubeVirt as a technology is not concerned with backup storage. Use the Gardener extension for your respective cloud provider to back up and restore your ETCD data; on OpenShift clusters, use the Gardener extension for the OpenShift provider.

For more information about Gardener integration with KubeVirt see this gardener.cloud blog post.

An example for a ControllerRegistration resource that can be used to register the controllers of this extension with Gardener can be found here.

Please find more information regarding the extensibility concepts and a detailed proposal here.

Supported Kubernetes versions

This extension supports the following Kubernetes versions:

Version         | Support    | Conformance test results
----------------|------------|-------------------------
Kubernetes 1.19 | not tested | N/A
Kubernetes 1.18 | 1.18.0+    | N/A
Kubernetes 1.17 | 1.17.0+    | N/A
Kubernetes 1.16 | not tested | N/A
Kubernetes 1.15 | not tested | N/A

Please take a look here to see which versions are supported by Gardener in general.


How to start using or developing this extension locally

You can run the extension locally on your machine by executing make start.

Static code checks and tests can be executed by running make verify. We are using Go modules for Golang package dependency management and Ginkgo/Gomega for testing.
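
For example (it is assumed here that the KUBECONFIG environment variable points to the cluster the extension should work against while you develop):

make start     # run the extension controllers locally
make verify    # static code checks and Ginkgo/Gomega tests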

Feedback and Support

Feedback and contributions are always welcome. Please report bugs or suggestions as GitHub issues or join our Slack channel #gardener (please invite yourself to the Kubernetes workspace here).

Learn more!

Please find further resources about our project here:

1 - Dev Setup

Development Setup

This document describes the recommended development setup for the KubeVirt provider extension. Following the guidelines presented here would allow you to test the full Gardener reconciliation and deletion flows with the KubeVirt provider extension and the KubeVirt MCM extension.

In this setup, only Gardener itself is running in your local development cluster. All other components, as well as KubeVirt VMs, are deployed and run on external clusters, which avoids high CPU and memory load on your local laptop.

Prerequisites

Follow the steps outlined in Setting up a local development environment for Gardener in order to install all needed prerequisites and enable running gardener-apiserver, gardener-controller-manager, and gardenlet locally. You can use either minikube, kind, or the nodeless cluster as your local development cluster.

Before continuing, copy all files from docs/development/manifests and docs/development/scripts to your dev directory and adapt them as needed. The sections that follow assume that you have already done this and all needed manifests and scripts can be found in your dev directory.

Creating the ControllerRegistrations

Before you register seeds or create shoots, you need to register all needed extensions using ControllerRegistration resources. The easiest way to manage ControllerRegistrations is via gem.

After installing gem, create a requirements.yaml file similar to requirements.yaml. The example file contains only the extensions needed for the development setup described here, but you could add any other Gardener extensions you may need.

In your requirements.yaml file you can refer to a released extension version, or to a revision (commit) from a Gardener repo or your fork of it. This version or revision is used to find the correct controller-registration.yaml file for the extension.

You can generate or update the controller-registrations.yaml file out of your requirements.yaml file by running:

gem ensure --requirements dev/requirements.yaml --controller-registrations dev/controller-registrations.yaml

After generating or updating the controller-registrations.yaml file, review it and make sure all versions are the ones you want to use for your tests. For example, if you are working on a PR for the KubeVirt provider extension, in addition to specifying the revision in your fork in requirements.yaml, you may need to change the version from 0.1.0-dev to something unique to you or your PR, e.g. 0.1.0-dev-johndoe. You can also add pullPolicy: Always to ensure that if you push a new extension image with that version and delete the corresponding pod, the new image will always be pulled when the pod is recreated.
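
For illustration, the fragment of the generated registration that you would typically adjust could look roughly like this (the exact structure of the deployment values depends on the extension chart and Gardener version, so treat the field names as assumptions; only the tag and pullPolicy values are the point):

values:
  image:
    tag: 0.1.0-dev-johndoe
    pullPolicy: Always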

Once you are satisfied with your controller registrations, apply the controller-registrations.yaml to your local Gardener:

kubectl apply -f dev/controller-registrations.yaml

Registering the Seed Cluster

Create or choose an external cluster, different from your local development cluster, to register as a seed in your local Gardener. This can be any cluster; it can even be the same cluster as your provider cluster, although using a different one is recommended to avoid confusion between the two. If you want to use your provider cluster as a seed, first create it as described below and then return to this step.

To register your cluster as a seed, create the secret containing the kubeconfig for your seed cluster, the secret containing the credentials for your cloud provider (e.g. GCP), and the seed resource itself. See the following files as examples:

kubectl apply -f dev/secret-gcp1-kubeconfig.yaml
kubectl apply -f dev/secret-seed-operator-gcp.yaml
kubectl apply -f dev/seed-gcp1.yaml

Creating the Project

At this point, you should create a dev project in your local Gardener.

Create the project resource for your local dev project, see project-dev as an example.

kubectl apply -f dev/project-dev.yaml

Creating the DNS Domain Secrets

At this point, you should create the domain secrets used by the DNS extension.

If you want to use an external DNS provider (e.g. route53), create default and internal domain secrets similar to secret-default-domain.yaml and secret-internal-domain.yaml.

kubectl apply -f dev/secret-default-domain.yaml
kubectl apply -f dev/secret-internal-domain.yaml

Alternatively, if you don’t want to use an external DNS provider and use nip.io addresses instead, create just an internal domain secret similar to 10-secret-internal-domain-unmanaged.yaml. For more information, see Prepare the Gardener.

Creating the Provider Cluster

Create or choose an external cluster, different from your local development cluster, to use as a provider cluster. The only requirement for this cluster is that virtualization extensions are supported on its nodes. You can check whether this is the case as described in Easy install using Cloud Providers, by executing the command egrep 'svm|vmx' /proc/cpuinfo and checking for non-empty output.
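
For example, on a node of the candidate provider cluster:

# non-empty output means the node supports hardware virtualization extensions
egrep 'svm|vmx' /proc/cpuinfo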

Creating an OS Image with Nested Virtualization Enabled

Before you can create such a cluster, you need to ensure that nested virtualization is enabled for its instances by using an appropriate OS image. To create such an image in GCP, follow the steps described in Enabling nested virtualization for VM instances. For example, to create a custom Ubuntu image with nested virtualization enabled based on Ubuntu 18.04, execute the following commands:

gcloud compute disks create ubuntu-disk1 \
  --image-project ubuntu-os-cloud \
  --image ubuntu-1804-bionic-v20200916 \
  --zone us-central1-b
gcloud compute images create ubuntu-1804-bionic-v20200916-vmx-enabled \
  --source-disk ubuntu-disk1 \
  --source-disk-zone us-central1-b \
  --licenses "https://compute.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
gcloud compute images list | grep ubuntu

Once the image has been created, you can use any Kubernetes provisioning tool, including of course Gardener itself, to create the provider cluster using this image.

Creating the Provider Cluster Using Gardener

To create the provider cluster using Gardener, simply create a shoot in the seed you registered previously using a custom GCP cloud profile that contains the above image, such as cloudprofile-gcp.yaml. To do this, follow these steps:

  1. Create the custom GCP cloud profile, for example cloudprofile-gcp.yaml.

    kubectl apply -f dev/cloudprofile-gcp.yaml
    
  2. Create the shoot secret binding. You could bind to the seed-operator-gcp secret you created previously for your seed; see secretbinding-shoot-operator-gcp.yaml as an example.

    kubectl apply -f dev/secretbinding-shoot-operator-gcp.yaml
    
  3. Create the GCP shoot itself. See shoot-gcp-vmx.yaml as an example. Note that this shoot should use the image with name ubuntu and version 18.4.20200916-vmx from the custom GCP cloud profile you created previously. Also, please rename the shoot to contain a unique prefix such as your GitHub username, e.g. johndoe-gcp-vmx, to avoid naming conflicts in GCP.

    kubectl apply -f dev/shoot-gcp-vmx.yaml
    

    During the reconciliation by your local gardenlet, you may want to connect to the seed to monitor the shoot namespace shoot--dev--<prefix>-gcp-vmx.

  4. Once the shoot is successfully reconciled by your local gardenlet, get its kubeconfig by executing:

    kubectl get secret <prefix>-gcp-vmx.kubeconfig -n garden-dev -o jsonpath={.data.kubeconfig} | base64 -d > dev/kubeconfig-gcp-vmx.yaml
    

Installing KubeVirt, CDI, and Multus in the Provider Cluster

Once the provider cluster has been created (with Gardener or any other provisioning tool), you should install KubeVirt, CDI, and optionally Multus in it so that it can serve its purpose as a provider cluster.

  1. Install KubeVirt and CDI in this cluster by executing the install-kubevirt.sh script:

    export KUBECONFIG=dev/kubeconfig-gcp-vmx.yaml
    hack/kubevirt/install-kubevirt.sh
    
  2. Optionally, to use networking features, install Multus CNI as described in its documentation, or by applying the provided multus.yaml manifest.

    export KUBECONFIG=dev/kubeconfig-gcp-vmx.yaml
    kubectl apply -f hack/kubevirt/multus.yaml
    

    Note: In order to use any additional CNI plugins, the plugin binaries must be present in the /opt/cni/bin directory of the provider cluster nodes. For testing purposes, they can be installed manually by downloading a containernetworking/plugins release and copying the needed plugins to the /opt/cni/bin directory of each provider cluster node.
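
    As an illustration (the plugins release version, the local paths, and the way you access the nodes are assumptions), such a manual installation could look like this:

    curl -LO https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz
    mkdir -p cni-plugins && tar -xzf cni-plugins-linux-amd64-v0.8.7.tgz -C cni-plugins
    # copy the required plugin binaries to each provider cluster node, e.g. via scp
    scp cni-plugins/bridge cni-plugins/firewall cni-plugins/host-local <node>:/opt/cni/bin/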

Testing the Gardener Reconciliation Flow

To test the Gardener reconciliation flow with the KubeVirt provider extensions, create the KubeVirt shoot cluster in your local dev project, by following these steps:

  1. Create the KubeVirt cloud profile, for example cloudprofile-kubevirt.yaml.

    kubectl apply -f dev/cloudprofile-kubevirt.yaml
    

    Note: The example cloud profile is intentionally rather simple and does not take advantage of some of the features supported by the KubeVirt provider extension. To test these features, modify the cloud profile manifest accordingly. For more information, see Using the KubeVirt provider extension with Gardener as operator.

  2. Create the shoot secret and secret binding. You should create a secret containing the kubeconfig for your provider cluster, and a corresponding secret binding:

    kubectl create secret generic kubevirt-credentials -n garden-dev --from-file=kubeconfig=dev/kubeconfig-gcp-vmx.yaml
    kubectl apply -f dev/secretbinding-kubevirt-credentials.yaml
    
  3. Create the KubeVirt shoot itself. See shoot-kubevirt.yaml as an example. Note that the nodes CIDR for this shoot must be the same range as the pods CIDR of your provider cluster.

    kubectl apply -f dev/shoot-kubevirt.yaml
    

    Note: The example shoot is intentionally very simple and does not take advantage of many of the features supported by the KubeVirt provider extension. To test these features, modify the shoot manifest accordingly. For more information, see Using the KubeVirt provider extension with Gardener as end-user.

  4. During the shoot reconciliation by your local gardenlet, you may want to:

    • Monitor the gardenlet logs in your local console where gardenlet is running.
    • Connect to the seed to monitor the shoot namespace shoot--dev--kubevirt and the logs of the KubeVirt provider extension in the extension-provider-kubevirt-* namespace.
    • Connect to the provider cluster to monitor the default namespace where VMs and VMIs are being created.
  5. Once the shoot has been successfully reconciled, get its kubeconfig by executing:

    kubectl get secret kubevirt.kubeconfig -n garden-dev -o jsonpath={.data.kubeconfig} | base64 -d > dev/kubeconfig-kubevirt.yaml
    

    At this point, you may want to connect to the KubeVirt shoot and check if it’s usable.
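
    For a quick smoke test (the commands are only an illustration):

    export KUBECONFIG=dev/kubeconfig-kubevirt.yaml
    kubectl get nodes -o wide
    kubectl get pods --all-namespaces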

Testing the Gardener Deletion Flow

To test the Gardener deletion flow with the KubeVirt provider extensions, delete the KubeVirt shoot cluster in your local dev project, by following these steps:

  1. Delete the KubeVirt shoot itself, either using the delete script or manually by annotating and deleting it:

    kubectl annotate shoot kubevirt -n garden-dev confirmation.gardener.cloud/deletion=1
    kubectl delete shoot kubevirt -n garden-dev
    
  2. During the shoot deletion by your local gardenlet, you may want to:

    • Monitor the gardenlet logs in your local console where gardenlet is running.
    • Connect to the seed to monitor the shoot namespace shoot--dev--kubevirt and the logs of the KubeVirt provider extension in the extension-provider-kubevirt-* namespace.
    • Connect to the provider cluster to monitor the default namespace where VMs and VMIs are being deleted.

2 - Local Setup Admission

admission-kubevirt

admission-kubevirt is an admission webhook server which is responsible for validating the cloud provider (KubeVirt in this case) specific fields and resources. The Gardener API server is cloud provider agnostic, so it would not be able to perform such validation itself.

Follow the steps below to run the admission webhook server locally.

  1. Start the Gardener API server.

    For details, check the Gardener local setup.

  2. Start the webhook server

    Make sure that the KUBECONFIG environment variable is pointing to the local garden cluster.

    make start-admission
    
  3. Setup the ValidatingWebhookConfiguration.

    hack/dev-setup-admission-kubevirt.sh will configure the webhook Service which will allow the kube-apiserver of your local cluster to reach the webhook server. It will also apply the ValidatingWebhookConfiguration manifest.

    ./hack/dev-setup-admission-kubevirt.sh
    

You are now ready to experiment with the admission-kubevirt webhook server locally.
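
To double-check that the webhook has been registered, you can list the validating webhook configurations in your local garden cluster:

kubectl get validatingwebhookconfigurations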

3 - Usage As End User

Using the KubeVirt provider extension with Gardener as end-user

The core.gardener.cloud/v1beta1.Shoot resource declares a few fields that are meant to contain provider-specific configuration.

This document describes what this configuration looks like for KubeVirt and provides an example Shoot manifest with minimal configuration that you can use to create a KubeVirt shoot cluster (without the landscape-specific information such as cloud profile names, secret binding names, etc.).

Provider Secret Data

Every shoot cluster references a SecretBinding which itself references a Secret, and this Secret contains the kubeconfig of your KubeVirt provider cluster. This is the cluster where KubeVirt itself is installed and that hosts the KubeVirt virtual machines used as shoot worker nodes. This Secret must look as follows:

apiVersion: v1
kind: Secret
metadata:
  name: provider-cluster-kubeconfig
  namespace: garden-dev
type: Opaque
data:
  kubeconfig: base64(kubeconfig)

Permissions

All KubeVirt resources (VirtualMachines, DataVolumes, etc.) are created in the namespace of the current context of the above kubeconfig, that is my-shoot in the example below:

...
current-context: provider-cluster
contexts:
- name: provider-cluster
  context:
    cluster: provider-cluster
    namespace: my-shoot
    user: provider-cluster-token
...

If no namespace is specified, the default namespace is assumed. You can use the same namespace for multiple shoots. The user specified in the kubeconfig must have permissions to read and write KubeVirt and Kubernetes core resources in this namespace.
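
A minimal sketch of a namespaced Role granting such permissions (the exact resource list below is an assumption and may need to be adjusted for your setup):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-shoot-provider
  namespace: my-shoot
rules:
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachines", "virtualmachineinstances"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["network-attachment-definitions"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["secrets", "configmaps", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]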

InfrastructureConfig

The infrastructure configuration can contain additional networks used by the shoot worker nodes. If this configuration is empty, all KubeVirt virtual machines used as shoot worker nodes use only the pod network of the provider cluster.

An example InfrastructureConfig for the KubeVirt extension looks as follows:

apiVersion: kubevirt.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  sharedNetworks:
  # Reference to the network defined by the NetworkAttachmentDefinition default/net-conf
  - name: net-conf
    namespace: default
  tenantNetworks:
  - name: network-1
    # Configuration for the CNI plugins bridge and firewall
    config: |
      {
        "cniVersion": "0.4.0",
        "name": "bridge-firewall",
        "plugins": [
          {
            "type": "bridge",
            "isGateway": true,
            "isDefaultGateway": true,
            "ipMasq": true,
            "ipam": {
              "type": "host-local",
              "subnet": "10.100.0.0/16"
            }
          },
          {
            "type": "firewall"
          }
        ]
      }      
    # Don't attach the pod network at all, instead use this network as default
    default: true

A non-empty infrastructure configuration can contain:

  • References to pre-existing, shared networks that can be shared between multiple shoots. These networks must exist in the provider cluster prior to shoot creation.
  • CNI configurations for tenant networks that are created, updated, and deleted together with the shoot. If one of these networks is marked as default: true, it becomes the default network instead of the pod network of the provider cluster. This can be used to achieve a higher level of network isolation, since the networks of the different shoots can be isolated from each other, and in some cases better performance.

Both shared and tenant networks are maintained in the provider cluster via Multus CNI NetworkAttachmentDefinition resources. For shared networks, these resources must be created in advance, while for tenant networks they are managed by the shoot reconciliation process.
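
For illustration, a shared network such as default/net-conf referenced in the example above could be backed by a NetworkAttachmentDefinition similar to the following (the CNI configuration itself is only an example):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: net-conf
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "net-conf",
      "type": "bridge",
      "bridge": "br-shared",
      "ipam": {
        "type": "host-local",
        "subnet": "10.200.0.0/16"
      }
    }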

In order to use any additional CNI plugins in a tenant network configuration, such as bridge or firewall in the above example, the plugin binaries must be present in the /opt/cni/bin directory of the provider cluster nodes. They can be installed manually by downloading a containernetworking/plugins release (not recommended except for testing a new configuration). Alternatively, they can be installed via a specially prepared daemon set that ensures the existence of the plugin binaries on each provider cluster node.

Note: Although it is possible to update the network configuration in InfrastructureConfig, any such changes will result in recreating all KubeVirt VMs so that the new network configuration is properly taken into account. This is done automatically by the MCM via a rolling update.

ControlPlaneConfig

The control plane configuration contains options for the KubeVirt-specific control plane components. Currently, the only component deployed by the KubeVirt extension is the KubeVirt Cloud Controller Manager (CCM).

An example ControlPlaneConfig for the KubeVirt extension looks as follows:

apiVersion: kubevirt.provider.extensions.gardener.cloud/v1alpha1
kind: ControlPlaneConfig
cloudControllerManager:
  featureGates:
    CustomResourceValidation: true

The cloudControllerManager.featureGates contains a map of explicitly enabled or disabled feature gates. For production usage it is not recommended to use this field at all, as you could enable alpha features or disable beta/stable features, potentially impacting cluster stability. If you don’t want to configure anything for the CCM, simply omit the key in the YAML specification.

WorkerConfig

The KubeVirt extension supports specifying additional data volumes per machine in the worker pool. For each data volume, you must specify a name and a type.

Below is an example Shoot resource snippet with root volume and data volumes:

spec:
  provider:
    workers:
    - name: cpu-worker
      ...
      volume:
        type: default
        size: 20Gi
      dataVolumes:
      - name: volume-1
        type: default
        size: 10Gi

Note: The additional data volumes will be attached as blank disks to the KubeVirt VMs. These disks must be formatted and mounted manually inside the VM before they can be used.
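
For example, inside the VM (the device name /dev/vdb and the mount point below are assumptions; use lsblk to identify the blank data disk first):

lsblk
sudo mkfs.ext4 /dev/vdb        # format the blank data disk
sudo mkdir -p /mnt/data
sudo mount /dev/vdb /mnt/data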

The KubeVirt extension does not currently support encryption for volumes.

Additionally, it is possible to specify additional KubeVirt-specific options for configuring the worker pools. They can be specified in .spec.provider.workers[].providerConfig and are evaluated by the KubeVirt worker controller when it reconciles the shoot machines.

An example WorkerConfig for the KubeVirt extension looks as follows:

apiVersion: kubevirt.provider.extensions.gardener.cloud/v1alpha1
kind: WorkerConfig
devices:
  # disks allow customizing the disks attached to the KubeVirt VM
  # check [link](https://kubevirt.io/user-guide/#/creation/disks-and-volumes?id=disks-and-volumes) for full specification and options
  disks:
  # name must match defined dataVolume name
  # to modify root volume the name must be equal to 'root-disk'
  - name: root-disk # modify root-disk
    # disk type, check [link](https://kubevirt.io/user-guide/#/creation/disks-and-volumes?id=disks) for more types
    disk:
      # bus indicates the type of disk device to emulate.
      bus: virtio
    # set disk device cache
    cache: writethrough
    # dedicatedIOThread indicates this disk should have an exclusive IO Thread
    dedicatedIOThread: true
  - name: volume-1 # modify dataVolume named volume-1
    disk: {}
  # whether to have a random number generator from the host
  rng: {}
  # whether or not to enable virtio multi-queue for block devices
  blockMultiQueue: true
  # if specified, virtual network interfaces configured with a virtio bus will also enable the vhost multiqueue feature
  networkInterfaceMultiQueue: true
# cpu allows setting the CPU topology of the VMI
# See https://kubevirt.io/api-reference/master/definitions.html#_v1_cpu
cpu:
  # number of cores inside the VMI
  cores: 1
  # number of sockets inside the VMI
  sockets: 2
  # number of threads inside the VMI
  threads: 1
  # models specifies the CPU model of the VMI
  # list of available models https://github.com/libvirt/libvirt/tree/master/src/cpu_map.
  # and options https://libvirt.org/formatdomain.html#cpu-model-and-topology
  model: "host-model"
  # features specifies the CPU features list inside the VMI
  features:
  - "pcid"
  # dedicatedCPUPlacement requests the scheduler to place the VirtualMachineInstance on a node
  # with dedicated pCPUs and pin the vCPUs to it.
  dedicatedCpuPlacement: false
  # isolateEmulatorThread requests one more dedicated pCPU to be allocated for the VMI to place the emulator thread on it.
  isolateEmulatorThread: false
# memory configuration for KubeVirt VMs; allows setting the 'hugepages' and 'guest' options.
# See https://kubevirt.io/api-reference/master/definitions.html#_v1_memory
memory:
  # hugepages requires appropriate feature gate to be enabled, take a look at the following links for more details:
  # * k8s - https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/
  # * okd - https://docs.okd.io/latest/scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.html
  hugepages:
    pageSize: "2Mi"
  # guest allows specifying the amount of memory which is visible inside the Guest OS. It must lie between requests and limits.
  # Defaults to the requested memory in the machineTypes.
  guest: "1Gi"
# overcommitGuestOverhead informs the scheduler to not take the guest-management overhead into account. Instead
# put the overhead only into the container's memory limit. This can lead to crashes if
# all memory is in use on a node. Defaults to false.
# For more details take a look at https://kubevirt.io/user-guide/#/usage/overcommit?id=overcommit-the-guest-overhead
overcommitGuestOverhead: true
# DNS policy for KubeVirt VMs. Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'.
# Defaults to 'ClusterFirst'.
# See https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
dnsPolicy: ClusterFirst
# DNS configuration for KubeVirt VMs, merged with the generated DNS configuration based on dnsPolicy.
# See https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
dnsConfig:
  nameservers:
  - 8.8.8.8
# Disable using pre-allocated data volumes. Defaults to 'false'.
disablePreAllocatedDataVolumes: true

Currently, these KubeVirt-specific options may include:

  • The CPU topology and memory configuration of the KubeVirt VMs. For more information, see CPU.v1 and Memory.v1.
  • The DNS policy and DNS configuration of the KubeVirt VMs. For more information, see DNS for Services and Pods.
  • Whether to use pre-allocated data volumes with KubeVirt VMs. With pre-allocated data volumes (the default), a data volume is created in advance for each machine class, the OS image is imported into this volume only once, and actual KubeVirt VM data volumes are cloned from this data volume. Typically, this significantly speeds up the data volume creation process. You can disable this feature by setting the disablePreAllocatedDataVolumes option to true.

Region and Zone Support

Nodes in the provider cluster may belong to provider-specific regions and zones, and Kubernetes would then use this information to spread pods across zones as described in Running in multiple zones. You may want to take advantage of these capabilities in the shoot cluster as well.

To achieve this, the KubeVirt provider extension ensures that the region and zones specified in the Shoot resource are taken into account when creating the KubeVirt VMs used as shoot cluster nodes.

Below is an example Shoot resource snippet with region and zones:

spec:  
  region: europe-west1
  provider:
    ...
    workers:
    - name: cpu-worker
      ...
      zones:
      - europe-west1-c
      - europe-west1-d

The shoot region and zones must correspond to the region and zones of the provider cluster. A KubeVirt VM designated for a specific region and zone will only be scheduled on provider cluster nodes belonging to that region and zone. If there are no such nodes, or if they have insufficient resources, the KubeVirt VM may remain in Pending state for a longer period and the shoot reconciliation may fail. Therefore, always make sure that the provider cluster contains nodes for all zones specified in the shoot.
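
To check which regions and zones your provider cluster nodes belong to, you can list the corresponding node labels (older clusters may still carry the legacy failure-domain.beta.kubernetes.io/* labels instead of the topology.kubernetes.io/* ones):

kubectl get nodes -L topology.kubernetes.io/region,topology.kubernetes.io/zone
# on older clusters:
kubectl get nodes -L failure-domain.beta.kubernetes.io/region,failure-domain.beta.kubernetes.io/zone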

If multiple zones are specified for a worker pool, the KubeVirt VMs will be equally distributed over these zones in the specified order.

If your provider cluster is not region and zone aware, or if it contains nodes that don’t belong to any region or zone, you can use default as a region or zone name in the Shoot resource to target such nodes.

Note that the region and zones are mandatory fields in the Shoot resource, so you must specify either a concrete region / zone or default.

Once the KubeVirt VMs are scheduled on the correct provider cluster nodes, the KubeVirt Cloud Controller Manager (CCM) mentioned above will label the shoot worker nodes with the appropriate region and zone labels, propagating them from the provider cluster nodes, so that Kubernetes multi-zone capabilities are also available in the shoot cluster.

Example Shoot Manifest

Please find below an example Shoot manifest for one availability zone:

apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: johndoe-kubevirt
  namespace: garden-dev
spec:
  cloudProfileName: kubevirt
  secretBindingName: provider-cluster-kubeconfig
  region: europe-west1
  provider:
    type: kubevirt
#   infrastructureConfig:
#     apiVersion: kubevirt.provider.extensions.gardener.cloud/v1alpha1
#     kind: InfrastructureConfig
#     networks:
#       tenantNetworks:
#       - name: network-1
#         config: "{...}"
#         default: true
#   controlPlaneConfig:
#     apiVersion: kubevirt.provider.extensions.gardener.cloud/v1alpha1
#     kind: ControlPlaneConfig
#     cloudControllerManager:
#       featureGates:
#         CustomResourceValidation: true
    workers:
    - name: cpu-worker
      machine:
        type: standard-1
        image:
          name: ubuntu
          version: "18.04"
      minimum: 1
      maximum: 2
      volume:
        type: default
        size: 20Gi
#     dataVolumes:
#     - name: volume-1
#       type: default
#       size: 10Gi
#     providerConfig:
#       apiVersion: kubevirt.provider.extensions.gardener.cloud/v1alpha1
#       kind: WorkerConfig
#       disablePreAllocatedDataVolumes: true
      zones:
      - europe-west1-c
  networking:
    type: calico
    pods: 100.96.0.0/11
    # Must match the IPAM subnet of the default tenant network, if present.
    # Otherwise, must be the same as the provider cluster pod network range.
    nodes: 10.225.128.0/17 # 10.100.0.0/16
    services: 100.64.0.0/13
  kubernetes:
    version: 1.17.8
  maintenance:
    autoUpdate:
      kubernetesVersion: true
      machineImageVersion: true
  addons:
    kubernetesDashboard:
      enabled: true
    nginxIngress:
      enabled: true

4 - Usage As Operator

Using the KubeVirt provider extension with Gardener as operator

The core.gardener.cloud/v1beta1.CloudProfile resource declares a providerConfig field that is meant to contain provider-specific configuration. The core.gardener.cloud/v1beta1.Seed resource is structured in a similar way. Additionally, it allows configuring settings for the backups of the main etcd data of shoot cluster control planes running in this seed cluster.

This document explains what is necessary to configure for this provider extension.

CloudProfile resource

This section describes what the configuration for CloudProfiles looks like for KubeVirt and provides an example CloudProfile manifest with minimal configuration that you can use to allow creating KubeVirt shoot clusters.

CloudProfileConfig

The cloud profile configuration contains information about the machine image source URLs. You have to map every version that you specify in .spec.machineImages[].versions here so that the KubeVirt extension can find the source URL for every version you want to offer.

An example CloudProfileConfig for the KubeVirt extension looks as follows:

apiVersion: kubevirt.provider.extensions.gardener.cloud/v1alpha1
kind: CloudProfileConfig
machineImages:
- name: ubuntu
  versions:
  - version: "18.04"
    sourceURL: https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
# machineTypes extend the cloud profile's spec.machineTypes entries with KubeVirt provider-specific config
machineTypes:
# name is used as a reference to the machineType object
- name: standard-1
  # limits are equivalent to the resource limits of a pod, see
  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container
  limits:
    cpu: "2"
    memory: 8Gi

Example CloudProfile manifest

Please find below an example CloudProfile manifest:

apiVersion: core.gardener.cloud/v1beta1
kind: CloudProfile
metadata:
  name: kubevirt
spec:
  type: kubevirt
  providerConfig:
    apiVersion: kubevirt.provider.extensions.gardener.cloud/v1alpha1
    kind: CloudProfileConfig
    machineImages:
    - name: ubuntu
      versions:
      - version: "18.04"
        sourceURL: https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
  kubernetes:
    versions:
    - version: 1.18.5
    - version: 1.17.8
  machineImages:
  - name: ubuntu
    versions:
    - version: "18.04"
  machineTypes:
  - name: standard-1
    cpu: "1"
    gpu: "0"
    memory: 4Gi
  volumeTypes:
  - name: default
    class: default
  regions:
  - name: europe-west1
    zones:
    - name: europe-west1-b
    - name: europe-west1-c
    - name: europe-west1-d

Seed resource

This provider extension does not support any provider configuration for the Seed’s .spec.provider.providerConfig field.