Provider Openstack

Gardener extension controller for the OpenStack cloud provider

Gardener Extension for OpenStack provider

Project Gardener implements the automated management and operation of Kubernetes clusters as a service. Its main principle is to leverage Kubernetes concepts for all of its tasks.

Recently, most of the vendor-specific logic has been developed in-tree. However, the project has grown to a size where it is very hard to extend, maintain, and test. With GEP-1 we have proposed how the architecture can be changed to support external controllers that contain their very own vendor specifics. This way, we can keep the Gardener core clean and independent.

This controller implements Gardener’s extension contract for the OpenStack provider.

An example for a ControllerRegistration resource that can be used to register this controller to Gardener can be found here.

Please find more information regarding the extensibility concepts and a detailed proposal here.

Supported Kubernetes versions

This extension controller supports the following Kubernetes versions:

Version         | Support  | Conformance test results
Kubernetes 1.31 | 1.31.0+  | Gardener v1.31 Conformance Tests
Kubernetes 1.30 | 1.30.0+  | Gardener v1.30 Conformance Tests
Kubernetes 1.29 | 1.29.0+  | Gardener v1.29 Conformance Tests
Kubernetes 1.28 | 1.28.0+  | Gardener v1.28 Conformance Tests
Kubernetes 1.27 | 1.27.0+  | Gardener v1.27 Conformance Tests
Kubernetes 1.26 | 1.26.0+  | Gardener v1.26 Conformance Tests
Kubernetes 1.25 | 1.25.0+  | Gardener v1.25 Conformance Tests

Please take a look here to see which versions are supported by Gardener in general.


Compatibility

The following lists known compatibility issues of this extension controller with other Gardener components.

OpenStack Extension | Gardener  | Action | Notes
< v1.12.0           | > v1.10.0 | Please update the provider version to >= v1.12.0 or disable the feature gate MountHostCADirectories in the Gardenlet. | Applies if feature flag MountHostCADirectories in the Gardenlet is enabled. This is to prevent duplicate volume mounts to /usr/share/ca-certificates in the Shoot API Server.

How to start using or developing this extension controller locally

You can run the controller locally on your machine by executing make start.

Static code checks and tests can be executed by running make verify. We are using Go modules for Golang package dependency management and Ginkgo/Gomega for testing.

Feedback and Support

Feedback and contributions are always welcome. Please report bugs or suggestions as GitHub issues or join our Slack channel #gardener (please invite yourself to the Kubernetes workspace here).

Learn more!

Please find further resources about our project here:

1 - Deployment

Deployment of the OpenStack provider extension

Disclaimer: This document is NOT a step-by-step installation guide for the OpenStack provider extension; it only contains some configuration specifics regarding the installation of different components via the Helm charts residing in the OpenStack provider extension repository.

gardener-extension-admission-openstack

Authentication against the Garden cluster

There are several authentication possibilities depending on whether or not the concept of Virtual Garden is used.

Virtual Garden is not used, i.e., the runtime Garden cluster is also the target Garden cluster.

Automounted Service Account Token
The easiest way to deploy the gardener-extension-admission-openstack component is to not provide a kubeconfig at all. In this case, the in-cluster configuration and an automounted service account token will be used. The drawback of this approach is that the automounted token will not be rotated automatically.

Service Account Token Volume Projection
Another option is to use Service Account Token Volume Projection combined with a kubeconfig referencing a token file (see the example below).

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <CA-DATA>
    server: https://default.kubernetes.svc.cluster.local
  name: garden
contexts:
- context:
    cluster: garden
    user: garden
  name: garden
current-context: garden
users:
- name: garden
  user:
    tokenFile: /var/run/secrets/projected/serviceaccount/token

This will allow for automatic rotation of the service account token by the kubelet. The configuration can be achieved by setting both .Values.global.serviceAccountTokenVolumeProjection.enabled: true and .Values.global.kubeconfig in the respective chart’s values.yaml file.
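
For illustration, a minimal values.yaml fragment for this setup could look like the following sketch (only the two keys named above are assumed; please consult the chart's values.yaml for the authoritative structure):

global:
  serviceAccountTokenVolumeProjection:
    enabled: true
  kubeconfig: |
    # paste the kubeconfig from the example above here; it must authenticate via
    # tokenFile: /var/run/secrets/projected/serviceaccount/token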

Virtual Garden is used, i.e., the runtime Garden cluster is different from the target Garden cluster.

Service Account
The easiest way to set up the authentication is to create a service account in the target cluster and bind the respective roles to it. Then use the generated service account token and craft a kubeconfig, which will be used by the workload in the runtime cluster. This approach does not provide a solution for the rotation of the service account token. However, this setup can be achieved by setting .Values.global.virtualGarden.enabled: true and following these steps:

  1. Deploy the application part of the charts in the target cluster.
  2. Get the service account token and craft the kubeconfig.
  3. Set the crafted kubeconfig and deploy the runtime part of the charts in the runtime cluster.

Client Certificate
Another option is to bind the roles in the target cluster to a User subject instead of a service account and to use a client certificate for authentication. This approach does not provide a solution for client certificate rotation. However, this setup can be achieved by setting both .Values.global.virtualGarden.enabled: true and .Values.global.virtualGarden.user.name, then following these steps:

  1. Generate a client certificate for the target cluster for the respective user.
  2. Deploy the application part of the charts in the target cluster.
  3. Craft a kubeconfig using the already generated client certificate.
  4. Set the crafted kubeconfig and deploy the runtime part of the charts in the runtime cluster.

Projected Service Account Token
This approach requires an already deployed and configured oidc-webhook-authenticator for the target cluster. Additionally, the runtime cluster should be registered as a trusted identity provider in the target cluster. Then, projected service account tokens from the runtime cluster can be used to authenticate against the target cluster. The needed steps are as follows:

  1. Deploy OWA and establish the needed trust.
  2. Set .Values.global.virtualGarden.enabled: true and .Values.global.virtualGarden.user.name. Note: the username value will depend on the trust configuration, e.g., <prefix>:system:serviceaccount:<namespace>:<serviceaccount>.
  3. Set .Values.global.serviceAccountTokenVolumeProjection.enabled: true and .Values.global.serviceAccountTokenVolumeProjection.audience. Note: the audience value will depend on the trust configuration, e.g., <client-id-from-trust-config>.
  4. Craft a kubeconfig (see example below).
  5. Deploy the application part of the charts in the target cluster.
  6. Deploy the runtime part of the charts in the runtime cluster.

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <CA-DATA>
    server: https://virtual-garden.api
  name: virtual-garden
contexts:
- context:
    cluster: virtual-garden
    user: virtual-garden
  name: virtual-garden
current-context: virtual-garden
users:
- name: virtual-garden
  user:
    tokenFile: /var/run/secrets/projected/serviceaccount/token
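
For completeness, the corresponding values for this scenario might look like the following sketch (key names as listed in the steps above; the concrete user name and audience depend on your trust configuration):

global:
  virtualGarden:
    enabled: true
    user:
      name: <prefix>:system:serviceaccount:<namespace>:<serviceaccount>
  serviceAccountTokenVolumeProjection:
    enabled: true
    audience: <client-id-from-trust-config>
  kubeconfig: |
    # paste the kubeconfig from the example above here; it reads the projected
    # token from /var/run/secrets/projected/serviceaccount/token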

2 - Local Setup

admission-openstack

admission-openstack is an admission webhook server which is responsible for validating the cloud-provider-specific (OpenStack in this case) fields and resources. The Gardener API server is cloud-provider agnostic and would not be able to perform such validation itself.

Follow the steps below to run the admission webhook server locally.

  1. Start the Gardener API server.

    For details, check the Gardener local setup.

  2. Start the webhook server

    Make sure that the KUBECONFIG environment variable is pointing to the local garden cluster.

    make start-admission
    
  3. Setup the ValidatingWebhookConfiguration.

    hack/dev-setup-admission-openstack.sh will configure the webhook Service which will allow the kube-apiserver of your local cluster to reach the webhook server. It will also apply the ValidatingWebhookConfiguration manifest.

    ./hack/dev-setup-admission-openstack.sh
    

You are now ready to experiment with the admission-openstack webhook server locally.

3 - Operations

Using the OpenStack provider extension with Gardener as operator

The core.gardener.cloud/v1beta1.CloudProfile resource declares a providerConfig field that is meant to contain provider-specific configuration.

In this document we describe what this configuration looks like for OpenStack and provide an example CloudProfile manifest with minimal configuration that you can use to allow creating OpenStack shoot clusters.

CloudProfileConfig

The cloud profile configuration contains information about the real machine image IDs in the OpenStack environment (image names). You have to map every version that you specify in .spec.machineImages[].versions here such that the OpenStack extension knows the image ID for every version you want to offer.

It also contains optional default values for DNS servers that shall be used for shoots. In the dnsServers[] list you can specify IP addresses that are used as DNS configuration for created shoot subnets.

Also, you have to specify the keystone URL of your environment in the keystoneURL field.

Additionally, you can influence the HTTP request timeout when talking to the OpenStack API via the requestTimeout field. This may help, for example, when you have a long list of load balancers in your environment.

If your OpenStack system uses Octavia for network load balancing, you have to set the useOctavia field to true so that the cloud-controller-manager for OpenStack gets configured correctly (it defaults to false).

Some hypervisors (especially VMware-based ones) don't automatically propagate a new volume size to the Linux kernel when a volume is resized while in use. For those hypervisors, you can enable the storage plugin interacting with Cinder to tell the SCSI block device to refresh its information and report its updated size to the kernel. You might need to enable this behavior depending on the underlying hypervisor of your OpenStack installation. The rescanBlockStorageOnResize field controls this. Please note that it only applies to Kubernetes versions where CSI is used.

Some OpenStack configurations do not allow attaching more than a specific number of volumes to a single node. To tell the Kubernetes scheduler not to over-schedule volumes on a node, you can set nodeVolumeAttachLimit, which defaults to 256. Some OpenStack configurations use different names for volume and compute availability zones, which might cause pods to stay in a Pending state because no nodes are available in the detected volume AZ. To ignore the volume AZ when scheduling pods, you can set ignoreVolumeAZ to true (it defaults to false). See the CSI Cinder driver.

The cloud profile config also contains constraints for floating pools and load balancer providers that can be used in shoots.

If your OpenStack system supports server groups, the serverGroupPolicies property will enable your end-users to create shoots with workers where the nodes are managed by Nova’s server groups. Specifying serverGroupPolicies is optional and can be omitted. If enabled, the end-user can choose whether or not to use this feature for a shoot’s workers. Gardener will handle the creation of the server group and node assignment.

To enable this feature, an operator should:

  • specify the allowed policy values (e.g. affinity, anti-affinity) in this section. Only the policies in the allow-list will be available for end-users.
  • make sure that your OpenStack project has enough server group capacity. Otherwise, shoot creation will fail.

If your OpenStack system has multiple volume types, the storageClasses property enables the creation of Kubernetes StorageClasses for shoots. Set storageClasses[].parameters.type to map it to an OpenStack volume type. Specifying storageClasses is optional and can be omitted.

An example CloudProfileConfig for the OpenStack extension looks as follows:

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: CloudProfileConfig
machineImages:
- name: coreos
  versions:
  - version: 2135.6.0
    # Fallback to image name if no region mapping is found
    # Only works for amd64 and is strongly discouraged. Prefer image IDs!
    image: coreos-2135.6.0
    regions:
    - name: europe
      id: "1234-amd64"
      architecture: amd64 # optional, defaults to amd64
    - name: europe
      id: "1234-arm64"
      architecture: arm64
    - name: asia
      id: "5678-amd64"
      architecture: amd64
# keystoneURL: https://url-to-keystone/v3/
# keystoneURLs:
# - region: europe
#   url: https://europe.example.com/v3/
# - region: asia
#   url: https://asia.example.com/v3/
# dnsServers:
# - 10.10.10.11
# - 10.10.10.12
# requestTimeout: 60s
# useOctavia: true
# useSNAT: true
# rescanBlockStorageOnResize: true
# ignoreVolumeAZ: true
# nodeVolumeAttachLimit: 30
# serverGroupPolicies:
# - soft-anti-affinity
# - anti-affinity
# resolvConfOptions:
# - rotate
# - timeout:1
# storageClasses:
# - name: example-sc
#   default: false
#   provisioner: cinder.csi.openstack.org
#   volumeBindingMode: WaitForFirstConsumer
#   parameters:
#     type: storage_premium_perf0
constraints:
  floatingPools:
  - name: fp-pool-1
#   region: europe
#   loadBalancerClasses:
#   - name: lb-class-1
#     floatingSubnetID: "1234"
#     floatingNetworkID: "4567"
#     subnetID: "7890"
# - name: "fp-pool-*"
#   region: europe
#   loadBalancerClasses:
#   - name: lb-class-1
#     floatingSubnetID: "1234"
#     floatingNetworkID: "4567"
#     subnetID: "7890"
# - name: "fp-pool-eu-demo"
#   region: europe
#   domain: demo
#   loadBalancerClasses:
#   - name: lb-class-1
#     floatingSubnetID: "1234"
#     floatingNetworkID: "4567"
#     subnetID: "7890"
# - name: "fp-pool-eu-dev"
#   region: europe
#   domain: dev
#   nonConstraining: true
#   loadBalancerClasses:
#   - name: lb-class-1
#     floatingSubnetID: "1234"
#     floatingNetworkID: "4567"
#     subnetID: "7890"
  loadBalancerProviders:
  - name: haproxy
# - name: f5
#   region: asia
# - name: haproxy
#   region: asia

Please note that it is possible to configure a region mapping for keystone URLs, floating pools, and load balancer providers. Additionally, floating pools can be constrained to a keystone domain by specifying the domain field. Floating pool names may also contain simple wildcard expressions, like *, fp-pool-*, or *-fp-pool. Please note that the * must either stand alone or appear at the beginning or at the end of the name. Consequently, fp-*-pool is not possible/allowed.

The default behavior is that, if found, the regional (and/or domain-restricted) entry is taken. If no entry exists for the given region, the fallback value is the best-matching entry (w.r.t. wildcard matching) in the list without a region field (or the keystoneURL value for the keystone URLs). If an additional floating pool should be selectable for a region and/or domain, you can mark it as non-constraining by setting the optional field nonConstraining to true.

Multiple loadBalancerProviders can be specified in the CloudProfile. Each provider may specify a region constraint for where it can be used. If at least one region-specific entry exists in the CloudProfile, the shoot's specified loadBalancerProvider must adhere to the list of available providers of that region. Otherwise, one of the non-regional providers should be used. Each entry in loadBalancerProviders must be uniquely identified by its name and, if applicable, its region.

The loadBalancerClasses field is an optional list of load balancer classes which can be used when the corresponding floating pool network is chosen. The load balancer classes can be configured in the same way as in the ControlPlaneConfig in the Shoot resource; see here for more details.

Some OpenStack environments don’t need these regional mappings, hence, the region and keystoneURLs fields are optional. If your OpenStack environment only has regional values and it doesn’t make sense to provide a (non-regional) fallback then simply omit keystoneURL and always specify region.

If Gardener creates and manages the router of a shoot cluster, it is additionally possible to specify that the enable_snat field is set to true via useSNAT: true in the CloudProfileConfig.

On some OpenStack environments, there may be a need to set options in the file /etc/resolv.conf on worker nodes. If the field resolvConfOptions is set, a systemd service will be installed which copies /run/systemd/resolve/resolv.conf to /etc/resolv.conf on every change and appends the given options.

Example CloudProfile manifest

Please find below an example CloudProfile manifest:

apiVersion: core.gardener.cloud/v1beta1
kind: CloudProfile
metadata:
  name: openstack
spec:
  type: openstack
  kubernetes:
    versions:
    - version: 1.27.3
    - version: 1.26.8
      expirationDate: "2022-10-31T23:59:59Z"
  machineImages:
  - name: coreos
    versions:
    - version: 2135.6.0
      architectures: # optional, defaults to [amd64]
      - amd64
      - arm64
  machineTypes:
  - name: medium_4_8
    cpu: "4"
    gpu: "0"
    memory: 8Gi
    architecture: amd64 # optional, defaults to amd64
    storage:
      class: standard
      type: default
      size: 40Gi
  - name: medium_4_8_arm
    cpu: "4"
    gpu: "0"
    memory: 8Gi
    architecture: arm64
    storage:
      class: standard
      type: default
      size: 40Gi
  regions:
  - name: europe-1
    zones:
    - name: europe-1a
    - name: europe-1b
    - name: europe-1c
  providerConfig:
    apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
    kind: CloudProfileConfig
    machineImages:
    - name: coreos
      versions:
      - version: 2135.6.0
        # Fallback to image name if no region mapping is found
        # Only works for amd64 and is strongly discouraged. Prefer image IDs!
        image: coreos-2135.6.0
        regions:
        - name: europe
          id: "1234-amd64"
          architecture: amd64 # optional, defaults to amd64
        - name: europe
          id: "1234-arm64"
          architecture: arm64
        - name: asia
          id: "5678-amd64"
          architecture: amd64
    keystoneURL: https://url-to-keystone/v3/
    constraints:
      floatingPools:
      - name: fp-pool-1
      loadBalancerProviders:
      - name: haproxy

4 - Usage

Using the OpenStack provider extension with Gardener as end-user

The core.gardener.cloud/v1beta1.Shoot resource declares a few fields that are meant to contain provider-specific configuration.

In this document we describe what this configuration looks like for OpenStack and provide an example Shoot manifest with minimal configuration that you can use to create an OpenStack cluster (modulo the landscape-specific information like cloud profile names, secret binding names, etc.).

Provider Secret Data

Every shoot cluster references a SecretBinding or a CredentialsBinding which itself references a Secret, and this Secret contains the provider credentials of your OpenStack tenant. This Secret must look as follows:

apiVersion: v1
kind: Secret
metadata:
  name: core-openstack
  namespace: garden-dev
type: Opaque
data:
  domainName: base64(domain-name)
  tenantName: base64(tenant-name)
  
  # either use username/password
  username: base64(user-name)
  password: base64(password)

  # or application credentials
  #applicationCredentialID: base64(app-credential-id)
  #applicationCredentialName: base64(app-credential-name) # optional
  #applicationCredentialSecret: base64(app-credential-secret)

Please look up https://docs.openstack.org/keystone/pike/admin/identity-concepts.html as well.

For authentication with username/password, see Keystone username/password.

Alternatively, for authentication with application credentials, see Keystone Application Credentials.

⚠️ Depending on your API usage it can be problematic to reuse the same provider credentials for different Shoot clusters due to rate limits. Please consider spreading your Shoots over multiple credentials from different tenants if you are hitting those limits.

InfrastructureConfig

The infrastructure configuration mainly describes the network layout that is used to create the shoot worker nodes in a later step, i.e., it prepares everything relevant for creating VMs, load balancers, volumes, etc.

An example InfrastructureConfig for the OpenStack extension looks as follows:

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
floatingPoolName: MY-FLOATING-POOL
# floatingPoolSubnetName: my-floating-pool-subnet-name
networks:
# id: 12345678-abcd-efef-08af-0123456789ab
# router:
#   id: 1234
  workers: 10.250.0.0/19

# shareNetwork:
#   enabled: true

The floatingPoolName is the name of the floating pool you want to use for your shoot. If you don't know which floating pools are available, look them up in the respective CloudProfile.

With floatingPoolSubnetName you can explicitly define the subnet in the floating pool network (defined via floatingPoolName) to which the router should be attached.

networks.id is an optional field. If it is set, it specifies the UUID of an existing private Neutron network (created manually, by other tooling, …) that should be reused. A new subnet for the shoot will be created in it.

If networks.id is given and Calico shoot clusters are created without a network overlay within the same network, make sure that the pod CIDR specified in shoot.spec.networking.pods does not overlap with any other pod CIDR used in that network. Overlapping pod CIDRs will lead to dysfunctional shoot clusters.
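
When reusing an existing network via networks.id, a sketch of the relevant Shoot fields could look as follows (IDs and CIDRs are examples; choose a pods range that does not collide with any other pod CIDR in the shared network):

spec:
  provider:
    type: openstack
    infrastructureConfig:
      apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
      kind: InfrastructureConfig
      floatingPoolName: MY-FLOATING-POOL
      networks:
        id: 12345678-abcd-efef-08af-0123456789ab   # existing Neutron network to reuse
        workers: 10.250.0.0/19
  networking:
    type: calico
    nodes: 10.250.0.0/16
    pods: 100.96.0.0/11   # example; must not overlap with other pod CIDRs in that network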

The networks.router section describes whether you want to create the shoot cluster in an already existing router or whether to create a new one:

  • If networks.router.id is set, it must contain the ID of an existing router that was created by other means (manually, other tooling, …). If you want a fresh router for the shoot, simply omit the networks.router field.

  • In any case, the shoot cluster will be created in a new subnet.

The networks.workers section describes the CIDR for a subnet that is used for all shoot worker nodes, i.e., VMs which later run your applications.

You can freely choose these CIDRs and it is your responsibility to properly design the network layout to suit your needs.

Apart from the router and the worker subnet the OpenStack extension will also create a network, router interfaces, security groups, and a key pair.

The optional networks.shareNetwork.enabled field controls the creation of a share network. This is only needed if shared file system storage (like NFS) should be used. Note that, in this case, the ControlPlaneConfig needs additional configuration, too.

ControlPlaneConfig

The control plane configuration mainly contains values for the OpenStack-specific control plane components. Today, the only component deployed by the OpenStack extension is the cloud-controller-manager.

An example ControlPlaneConfig for the OpenStack extension looks as follows:

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: ControlPlaneConfig
loadBalancerProvider: haproxy
loadBalancerClasses:
- name: lbclass-1
  purpose: default
  floatingNetworkID: fips-1-id
  floatingSubnetName: internet-*
- name: lbclass-2
  floatingNetworkID: fips-1-id
  floatingSubnetTags: internal,private
- name: lbclass-3
  purpose: private
  subnetID: internal-id
# cloudControllerManager:
#   featureGates:
#     SomeKubernetesFeature: true
# storage:
#   csiManila:
#     enabled: true

The loadBalancerProvider is the provider name you want to use for load balancers in your shoot. If you don't know which types are available, look them up in the respective CloudProfile.

The loadBalancerClasses field contains an optional list of load balancer classes which will be available in the cluster. Each entry can have the following fields:

  • name to select the load balancer class via the Kubernetes Service annotation loadbalancer.openstack.org/class=name (see the example below)
  • purpose with values default or private
    • The configuration of the default load balancer class will be used as the default for all other Kubernetes LoadBalancer Services without a class annotation
    • The configuration of the private load balancer class will also be set as the global load balancer configuration of the cluster, but will be overridden by the default purpose
  • floatingNetworkID can be specified to receive an IP from a floating/external network; additionally, the subnet in this network can be selected via
    • floatingSubnetName, either a full subnet name or a regex/glob to match the subnet name
    • floatingSubnetTags, a comma-separated list of subnet tags
    • floatingSubnetID, the ID of a specific subnet
  • subnetID can be specified to receive an IP from an internal subnet (it will not have an effect in combination with a floating/external network configuration)
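
To illustrate class selection from a user's perspective, a Service of type LoadBalancer might reference one of the configured classes via the annotation named above (class name taken from the example ControlPlaneConfig; sketch only):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    loadbalancer.openstack.org/class: lbclass-2   # selects the load balancer class by name
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080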

The cloudControllerManager.featureGates contains a map of explicitly enabled or disabled feature gates. For production usage it’s not recommended to use this field at all as you can enable alpha features or disable beta/stable features, potentially impacting the cluster stability. If you don’t want to configure anything for the cloudControllerManager simply omit the key in the YAML specification.

The optional storage.csiManila.enabled field is used to enable the deployment of the CSI Manila driver to support NFS persistent volumes. In this case, please make sure to set networks.shareNetwork.enabled=true in the InfrastructureConfig, too. Additionally, if the CSI Manila driver is enabled, an NFS StorageClass named csi-manila-nfs-<zone> will be created on the shoot for each availability zone.
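
Putting the two settings together, the relevant fragments could look like the following sketch (for a shoot using zone europe-1a, the resulting StorageClass would then be named csi-manila-nfs-europe-1a):

# InfrastructureConfig
apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
floatingPoolName: MY-FLOATING-POOL
networks:
  workers: 10.250.0.0/19
  shareNetwork:
    enabled: true
---
# ControlPlaneConfig
apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: ControlPlaneConfig
loadBalancerProvider: haproxy
storage:
  csiManila:
    enabled: true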

WorkerConfig

Each worker group in a shoot may contain provider-specific configurations and options. These are contained in the providerConfig section of a worker group and can be configured using a WorkerConfig object. An example of a WorkerConfig looks as follows:

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: WorkerConfig
serverGroup:
  policy: soft-anti-affinity
# nodeTemplate: # (to be specified only if the node capacity would be different from cloudprofile info during runtime)
#   capacity:
#     cpu: 2
#     gpu: 0
#     memory: 50Gi
# machineLabels:
#  - name: my-label
#    value: foo
#  - name: my-rolling-label
#    value: bar
#    triggerRollingOnUpdate: true # means any change of the machine label value will trigger rolling of all machines of the worker pool

ServerGroups

When you specify the serverGroup section in your worker group configuration, a new server group will be created with the configured policy for each worker group that enables this setting, and all machines managed by this worker group will be assigned as members of the created server group.

For users to have access to the server group feature, it must be enabled on the CloudProfile by your operator. Existing clusters can take advantage of this feature by updating the server group configuration of their respective worker groups. Worker groups that are already configured with server groups can update their setting to change the policy used, or remove it altogether at any time.

Users must be aware that any change to the server group settings will result in a rolling deployment of new nodes for the affected worker group.

Please note the following restrictions when deploying workers with server groups:

  • The serverGroup section is optional, but if it is included in the worker configuration, it must contain a valid policy value.
  • The available policy values that can be used are defined in the provider-specific section of the CloudProfile by your operator.
  • Certain policy values may induce further constraints. Using the affinity policy is only allowed when the worker group utilizes a single zone.
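
As an illustration of the last point, a worker pool entry (as it would appear under spec.provider.workers) using the affinity policy would be pinned to exactly one zone (names and sizes are examples):

workers:
- name: worker-affinity
  machine:
    type: medium_4_8
  minimum: 2
  maximum: 2
  zones:
  - europe-1a   # the affinity policy requires a single zone
  providerConfig:
    apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
    kind: WorkerConfig
    serverGroup:
      policy: affinity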

MachineLabels

The machineLabels section in the worker group configuration allows you to specify additional machine labels. These labels are added to the machine instances only, but not to the node objects. Additionally, each label has an optional triggerRollingOnUpdate field. If it is set to true, changing the label value will trigger a rolling of all machines of this worker pool.

Node Templates

Node templates allow users to override the capacity of the nodes as defined by the server flavor specified in the CloudProfile's machineTypes. This is useful for certain dynamic scenarios, as it allows users to customize the cluster-autoscaler's behavior for these worker groups with the provided values.
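
A sketch of such an override in a worker pool's providerConfig could look as follows (capacity values are illustrative and only needed if they differ from the machine type's values in the CloudProfile):

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: WorkerConfig
nodeTemplate:
  capacity:
    cpu: 2
    gpu: 0
    memory: 50Gi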

Example Shoot manifest (one availability zone)

Please find below an example Shoot manifest for one availability zone:

apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: johndoe-openstack
  namespace: garden-dev
spec:
  cloudProfile:
    name: openstack
  region: europe-1
  secretBindingName: core-openstack
  provider:
    type: openstack
    infrastructureConfig:
      apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
      kind: InfrastructureConfig
      floatingPoolName: MY-FLOATING-POOL
      networks:
        workers: 10.250.0.0/19
    controlPlaneConfig:
      apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
      kind: ControlPlaneConfig
      loadBalancerProvider: haproxy
    workers:
    - name: worker-xoluy
      machine:
        type: medium_4_8
      minimum: 2
      maximum: 2
      zones:
      - europe-1a
  networking:
    nodes: 10.250.0.0/16
    type: calico
  kubernetes:
    version: 1.28.2
  maintenance:
    autoUpdate:
      kubernetesVersion: true
      machineImageVersion: true
  addons:
    kubernetesDashboard:
      enabled: true
    nginxIngress:
      enabled: true

CSI volume provisioners

Every OpenStack shoot cluster will be deployed with the OpenStack Cinder CSI driver. It is compatible with the legacy in-tree volume provisioner that was deprecated by the Kubernetes community and will be removed in future versions of Kubernetes. End-users might want to update their custom StorageClasses to the new cinder.csi.openstack.org provisioner.
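
For reference, a custom StorageClass using the CSI provisioner could look like the following sketch (name and volume type are examples; the type parameter must match a volume type available in your OpenStack environment):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc
provisioner: cinder.csi.openstack.org
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: storage_premium_perf0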

Kubernetes Versions per Worker Pool

This extension supports gardener/gardener’s WorkerPoolKubernetesVersion feature gate, i.e., having worker pools with overridden Kubernetes versions since gardener-extension-provider-openstack@v1.23.

Shoot CA Certificate and ServiceAccount Signing Key Rotation

This extension supports gardener/gardener’s ShootCARotation and ShootSARotation feature gates since gardener-extension-provider-openstack@v1.26.