# Documentation
Disclaimer: This document is NOT a step-by-step installation guide for the vSphere provider extension; it only contains some configuration specifics regarding the installation of different components via the Helm charts residing in the vSphere provider extension repository.
There are several authentication possibilities depending on whether or not the concept of a *Virtual Garden* is used.

### The `runtime` Garden cluster is also the `target` Garden cluster

#### Automounted Service Account Token
The easiest way to deploy the `gardener-extension-validator-vsphere` component is to not provide a `kubeconfig` at all. This way, the in-cluster configuration and an automounted service account token will be used. The drawback of this approach is that the automounted token will not be automatically rotated.
#### Service Account Token Volume Projection
Another option is to use Service Account Token Volume Projection combined with a `kubeconfig` referencing a token file (see the example below).
```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <CA-DATA>
    server: https://kubernetes.default.svc.cluster.local
  name: garden
contexts:
- context:
    cluster: garden
    user: garden
  name: garden
current-context: garden
users:
- name: garden
  user:
    tokenFile: /var/run/secrets/projected/serviceaccount/token
```
This will allow for automatic rotation of the service account token by the `kubelet`. The configuration can be achieved by setting both `.Values.global.serviceAccountTokenVolumeProjection.enabled: true` and `.Values.global.kubeconfig` in the respective chart's `values.yaml` file.
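For illustration, here is a minimal sketch of the corresponding `values.yaml` fragment, assuming the chart exposes the keys exactly as named above (the kubeconfig content is the example shown earlier):

```yaml
# values.yaml (sketch) -- keys as referenced in the text above
global:
  serviceAccountTokenVolumeProjection:
    enabled: true
  kubeconfig: |
    apiVersion: v1
    kind: Config
    # ... kubeconfig referencing the projected token file, see the example above
```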
### The `runtime` Garden cluster is different from the `target` Garden cluster

#### Service Account
The easiest way to set up authentication is to create a service account in the `target` cluster and bind the respective roles to it. Then use the generated service account token to craft a `kubeconfig`, which will be used by the workload in the `runtime` cluster. This approach does not provide a solution for the rotation of the service account token. However, this setup can be achieved by setting `.Values.global.virtualGarden.enabled: true` and following these steps:
1. Deploy the `application` part of the charts in the `target` cluster.
2. Get the service account token and craft the `kubeconfig` (see the sketch below).
3. Set the crafted `kubeconfig` and deploy the `runtime` part of the charts in the `runtime` cluster.
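A minimal sketch of such a crafted `kubeconfig`, assuming a static service account token is used (all values are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <CA-DATA-OF-TARGET-CLUSTER>
    server: https://<target-cluster-api-server>
  name: garden
contexts:
- context:
    cluster: garden
    user: garden
  name: garden
current-context: garden
users:
- name: garden
  user:
    token: <service-account-token>
```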
#### Client Certificate

Another option is to bind the roles in the `target` cluster to a `User` subject instead of a service account and to use a client certificate for authentication. This approach does not provide a solution for client certificate rotation. However, this setup can be achieved by setting both `.Values.global.virtualGarden.enabled: true` and `.Values.global.virtualGarden.user.name`, then following these steps:
1. Generate a client certificate for the `target` cluster for the respective user.
2. Deploy the `application` part of the charts in the `target` cluster.
3. Craft a `kubeconfig` using the already generated client certificate.
4. Set the crafted `kubeconfig` and deploy the `runtime` part of the charts in the `runtime` cluster.

#### Projected Service Account Token
This approach requires an already deployed and configured oidc-webhook-authenticator for the `target` cluster. The `runtime` cluster should also be registered as a trusted identity provider in the `target` cluster. Then projected service account tokens from the `runtime` cluster can be used to authenticate against the `target` cluster. The needed steps are as follows:
1. Set `.Values.global.virtualGarden.enabled: true` and `.Values.global.virtualGarden.user.name`. Note: the username value will depend on the trust configuration, e.g., `<prefix>:system:serviceaccount:<namespace>:<serviceaccount>`.
2. Set `.Values.global.serviceAccountTokenVolumeProjection.enabled: true` and `.Values.global.serviceAccountTokenVolumeProjection.audience`. Note: the audience value will depend on the trust configuration, e.g., `<client-id-from-trust-config>`.
3. Craft a `kubeconfig` referencing the projected token (see the example below).
4. Deploy the `application` part of the charts in the `target` cluster.
5. Deploy the `runtime` part of the charts in the `runtime` cluster.

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <CA-DATA>
    server: https://virtual-garden.api
  name: virtual-garden
contexts:
- context:
    cluster: virtual-garden
    user: virtual-garden
  name: virtual-garden
current-context: virtual-garden
users:
- name: virtual-garden
  user:
    tokenFile: /var/run/secrets/projected/serviceaccount/token
```
`admission-vsphere` is an admission webhook server which is responsible for the validation of the cloud-provider-specific (vSphere in this case) fields and resources. The Gardener API server is cloud-provider agnostic and would not be able to perform similar validation.
Follow the steps below to run the admission webhook server locally.
1. Start the Gardener API server. For details, check the Gardener local setup.
2. Start the webhook server. Make sure that the `KUBECONFIG` environment variable is pointing to the local garden cluster, then run `make start-admission`.
3. Set up the `ValidatingWebhookConfiguration`. The `hack/dev-setup-admission-vsphere.sh` script will configure the webhook Service which allows the kube-apiserver of your local cluster to reach the webhook server. It will also apply the `ValidatingWebhookConfiguration` manifest. Run `./hack/dev-setup-admission-vsphere.sh`.
You are now ready to experiment with the admission-vsphere
webhook server locally.
Several preparatory steps are necessary for VMware vSphere and NSX-T before this extension can be used to create Gardener shoot clusters.

The main version target of this extension is vSphere 7.x together with NSX-T 3.x. The recommended environment is a system set up with VMware Cloud Foundation (VCF) 4.1. Older versions like vSphere 6.7U3 with NSX-T 2.5 or 3.0 should still work, but are not tested extensively.
This extension needs credentials for both the vSphere/vCenter and the NSX-T endpoints. This section guides you through the creation of appropriate roles and users.
The vCenter/vSphere user used for this provider should have been assigned to a role including these permissions (use the vCenter/vSphere Client, menu *Administration / Access Control / Role*, to define a role and assign it to the user with *Global Permissions*).
The NSX-T API is accessed from the infrastructure controller of the vsphere-provider for setting up the network infrastructure resources, and from the cloud-controller-manager for managing load balancers. Currently, the NSX-T user must have the `Enterprise Admin` role.
Two folders need to be created:
- a folder which will contain the VMs of the shoots (cloud profile `spec.providerConfig.folder`)
- a folder containing the templates (used by cloud profile `spec.providerConfig.machineImages[*].versions[*].path`)
In the vSphere Client, upload the gardenlinux OVA or flatcar OVA templates: in the context menu of the templates folder (e.g., `gardener/templates`), choose *Deploy OVF Template…*

This step has to be done regardless of whether you actually have more than a single region and zone or not!
Two labels need to be defined in the cloud profile (section `spec.providerConfig.failureDomainLabels`):

```yaml
failureDomainLabels:
  region: k8s-region
  zone: k8s-zone
```
A Kubernetes region can either be a vCenter or one of its datacenters. Zones must be sub-resources of it. If the region is a complete vCenter, the zone must specify a datacenter and either a compute cluster or a resource pool. Otherwise, i.e. if the region is a datacenter, the zone must specify either a compute cluster or a resource pool.

In the following steps it is assumed that:
- the region is specified by a datacenter
- the zone is specified by a compute cluster or one of its resource pools
Create a resource pool for every zone:
The region and each zone must be tagged with a tag of the category defined by the corresponding label in the cloud profile (`spec.providerConfig.failureDomainLabels`).

Assuming that the region is a datacenter and the region label is `k8s-region`:

Assuming that the zones are specified by resource pools and the zone label is `k8s-zone`:
Each zone can have separate storage. In this case, a storage policy is needed that is compatible with all zone storages. For each zone, tag the storage with the corresponding `k8s-zone` tag.
1. From the *Menu* in the vSphere Client toolbar, choose *Policies and Profiles*.
2. In the *Policies and Profiles* list, select *VM Storage Policies*.
3. Create or clone an existing storage policy:
   a) Set the name, e.g. the value used for `spec.providerConfig.defaultClassStoragePolicyName` in the cloud profile.
   b) On the page *Policy structure*, check only the checkbox *Enable tag based placement rules*.
   c) On the page *Tag based placement*, press the *ADD TAG RULE* button.
   d) For *Rule 1*, select *Tag category* = `k8s-zone`, *Usage option* = *Use storage tagged with*, *Tags* = all zone tags.
   e) Validate the compatible storages on the page *Storage compatibility*.
   f) Press *FINISH* on the *Review and finish* page.
IMPORTANT: Repeat steps 1-3 and create a second storage policy with the name `garden-etcd-fast-main`. This will be used by Gardener to provision the shoots' etcd PVCs.
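For reference, the name of the first storage policy is what the cloud profile later points to; a minimal sketch (the policy name is an example):

```yaml
# cloud profile providerConfig excerpt (sketch)
defaultClassStoragePolicyName: "my-gardener-storage-policy"  # name of the policy created in step 3a
```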
A shared NSX-T is needed for all zones of a region. External IP address ranges are needed for SNAT and load balancers. Besides, the edge cluster must be sized large enough to deal with the load balancers of all shoots.
Two IP pools are needed for external IP addresses:
- SNAT IP pool (`spec.providerConfig.regions[*].snatIPPool`): each shoot cluster needs one SNAT IP address for outgoing traffic.
- Load balancer IP pool (`spec.providerConfig.constraints.loadBalancerConfig.classes[*].ipPoolName`): an IP address is needed for every port of every Kubernetes service of type `LoadBalancer`.

To create them, use the NSX-T Manager UI in the web browser.
Each shoot cluster needs one IP address for SNAT and at least two IP addresses for load balancer VIPs (kube-apiserver and the Gardener shoot-seed VPN). A third IP address may be needed for ingress. Depending on the payload of a shoot cluster, there may be additional services of type `LoadBalancer`; an IP address is needed for every port of every Kubernetes service of type `LoadBalancer`.
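Once the two IP pools exist in NSX-T, they are referenced by name in the cloud profile; a minimal sketch, with pool names taken from the example cloud profile later in this document:

```yaml
# cloud profile providerConfig excerpt (sketch)
regions:
- name: region1
  snatIpPool: "my-snat-ip-pool"      # pool used for the per-shoot SNAT IP
constraints:
  loadBalancerConfig:
    classes:
    - name: default
      ipPoolName: gardener_lb_vip    # pool used for load balancer VIPs
```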
For load-balancer-related configuration limitations of NSX-T, please see the VMware Configuration Maximums web pages. The link shows the limitations for NSX-T 3.1; if you have another version, please select it from the left panel under *Select Version* and press the *VIEW LIMITS* button to update the view.
By default, each shoot cluster has its own T1 gateway and its own LB service (instance) of "T-shirt" size `SMALL`.
Examples of limitations on NSX-T 3.1 using Large Edge Nodes and `SMALL` load balancer instances:
- There is a limit of 40 small LB instances per edge cluster (for HA, 40 per pair of edge nodes)
  => maximum number of shoot clusters = 40 * (number of edge nodes) / 2
- For `SMALL` load balancers, there is a maximum of 20 virtual servers. A virtual server is needed for every port of a service of type `LoadBalancer`
  => maximum number of service/port pairs = 20 * (number of edge nodes) / 2
- The load balancer "T-shirt" size can be set at the cloud profile level (`spec.providerConfig.constraints.loadBalancerConfig.size`) or in the shoot manifest (`spec.provider.controlPlaneConfig.loadBalancerSize`); see the sketch after this list.
- The number of pool members is limited to 7,500. For every K8s service port, every worker node is a pool member.
  => If every shoot cluster has an average of 15 worker nodes, there can be 500 service/port pairs across all shoot clusters per pair of edge nodes.
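For illustration, a minimal sketch of overriding the size in a shoot manifest (the size value is an example):

```yaml
# Shoot manifest excerpt (sketch)
spec:
  provider:
    controlPlaneConfig:
      apiVersion: vsphere.provider.extensions.gardener.cloud/v1alpha1
      kind: ControlPlaneConfig
      loadBalancerSize: MEDIUM   # one of SMALL, MEDIUM, LARGE
```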
This step is only needed if there are several VDS (virtual distributed switches) for each zone. In this case, their UUIDs need to be fetched and set in the cloud profile at `spec.providerConfig.regions[*].zones[*].switchUuid`. Unfortunately, they are not displayed in the vSphere Client.
Here the command line tool `govc` is used to look them up:
1. `govc find / -type DistributedVirtualSwitch` to get the full path of all vds/dvs
2. `govc dvs.portgroup.info <switch-path> | grep DvsUuid`
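The reported `DvsUuid` values then go into the cloud profile; a minimal sketch (the UUID is a placeholder):

```yaml
# cloud profile providerConfig excerpt (sketch)
regions:
- name: region1
  zones:
  - name: zone1
    switchUuid: <DvsUuid-reported-by-govc>
```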
For Gardener, a Tanzu Kubernetes "guest" cluster is used. See the vSphere documentation Provisioning Tanzu Kubernetes Clusters.

For Gardener, the minimum Virtual Machine Class must be set to `best-effort-large`.

For the deployment, it is possible to provision the cluster with a minimal set of configuration parameters. It is recommended to set the parameters *Default Pod CIDR* and *Default Services CIDR* to values which fit your environment. The `storageClass` parameter should be defined to avoid problems during deployment.
Example:
```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1      # TKG API endpoint
kind: TanzuKubernetesCluster                   # required parameter
metadata:
  name: tkg-cluster-1                          # cluster name, user defined
  namespace: ns1                               # supervisor namespace
spec:
  distribution:
    version: v1.24                             # resolved kubernetes version
  topology:
    controlPlane:
      count: 1                                 # number of control plane nodes
      class: best-effort-small                 # vmclass for control plane nodes
      storageClass: vsan-default-storage-policy  # storageclass for control plane
    workers:
      count: 3                                 # number of worker nodes
      class: best-effort-large                 # vmclass for worker nodes
      storageClass: vsan-default-storage-policy  # storageclass for worker nodes
  settings:
    network:
      cni:
        name: calico
      services:
        cidrBlocks: ["198.51.100.0/12"]        # cannot overlap with Supervisor Cluster
      pods:
        cidrBlocks: ["192.0.2.0/16"]           # cannot overlap with Supervisor Cluster
```
The `core.gardener.cloud/v1beta1.Shoot` resource declares a few fields that are meant to contain provider-specific configuration.

In this document, we describe what this configuration looks like for VMware vSphere and provide an example `Shoot` manifest with minimal configuration that you can use to create a vSphere cluster (modulo the landscape-specific information like cloud profile names, secret binding names, etc.).
Every shoot cluster references a `SecretBinding`, which itself references a `Secret`, and this `Secret` contains the provider credentials of your vSphere tenant. It contains two authentication sets: one for the vSphere host and another for the NSX-T host, which is needed to set up the network infrastructure. This `Secret` must look as follows:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: core-vsphere
  namespace: garden-dev
type: Opaque
data:
  vspherePassword: base64(vsphere-password)
  vsphereUsername: base64(vSphere-UserName)
  vsphereInsecureSSL: base64("true"|"false")
  nsxtPassword: base64(NSX-T-password)
  nsxtUserName: base64(NSX-T-UserName)
  nsxtInsecureSSL: base64("true"|"false")
```
Here, `base64(...)` is only a placeholder for the Base64-encoded values.
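For completeness, a minimal sketch of a `SecretBinding` referencing this `Secret` (names match the example above; the exact fields may vary with your Gardener version):

```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: SecretBinding
metadata:
  name: core-vsphere
  namespace: garden-dev
provider:
  type: vsphere
secretRef:
  name: core-vsphere
  namespace: garden-dev
```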
### InfrastructureConfig

The infrastructure configuration is used for advanced scenarios only. Nodes in all zones use IP addresses from the common nodes network, as the network is managed by NSX-T. The infrastructure controller will create several network objects using NSX-T: a network segment used as the subnet for the VMs (nodes), a tier-1 gateway, a DHCP server, and SNAT for the nodes.

An example `InfrastructureConfig` for the vSphere extension looks as follows. You only need to specify it if you either want to use an existing Tier-1 gateway and load balancer service pair, or if you want to overwrite the automatic selection of the NSX-T version.
```yaml
infrastructureConfig:
  apiVersion: vsphere.provider.extensions.gardener.cloud/v1alpha1
  kind: InfrastructureConfig
  #overwriteNSXTInfraVersion: '1'
  #networks:
  #  tier1GatewayPath: /infra/tier-1s/tier1gw-b8213651-9659-4180-8bfd-1e16228e8dcb
  #  loadBalancerServicePath: /infra/lb-services/708c5cb1-e5d0-4b16-906f-ec7177a1485d
```
By default, the infrastructure controller creates a separate Tier-1 gateway for each shoot cluster, and the cloud controller manager (`vsphere-cloud-provider`) creates a load balancer service. If an existing Tier-1 gateway should be used, you can specify its 'path'. In this case, there must also be a load balancer service defined for this Tier-1 gateway, and its 'path' needs to be specified, too. In the NSX-T Manager UI, the path of the Tier-1 gateway can be found at *Networking / Tier-1 Gateways*. Then select *Copy path to clipboard* from the context menu of the Tier-1 gateway (click on the three vertical dots on the left of the row). Do the same with the corresponding load balancer at *Networking / Load balancing / Tab Load Balancers*.
For security reasons, the referenced Tier-1 gateway in NSX-T must have a tag with scope `authorized-shoots`, and its tag value consists of a comma-separated list of the allowed shoot names in the format `shoot--<project>--<name>` (optionally with wildcard `*`). Additionally, it must have a tag with scope `garden` set to the garden ID.
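For illustration, a sketch of how such tags might look on the Tier-1 gateway, shown here in the scope/tag form used by the NSX-T policy API (the shoot and garden values are placeholders):

```yaml
# tags on the shared Tier-1 gateway (sketch)
tags:
- scope: authorized-shoots
  tag: shoot--myproject--myshoot01,shoot--myproject--team-*
- scope: garden
  tag: <garden-id>
```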
Example:
```yaml
infrastructureConfig:
  apiVersion: vsphere.provider.extensions.gardener.cloud/v1alpha1
  kind: InfrastructureConfig
  networks:
    tier1GatewayPath: /infra/tier-1s/tier1gw-b8213651-9659-4180-8bfd-1e16228e8dcb
    loadBalancerServicePath: /infra/lb-services/708c5cb1-e5d0-4b16-906f-ec7177a1485d
```
Please ensure that the worker nodes CIDR (shoot manifest `spec.networking.nodes`) does not overlap with other existing segments of the selected Tier-1 gateway.
The option `overwriteNSXTInfraVersion` can be used to change the network objects created during the initial infrastructure creation. By default, the infra-version is automatically selected according to the NSX-T version. The infra-version `'1'` is used for NSX-T 2.5, and infra-version `'2'` for NSX-T versions >= 3.0. The difference is the creation of the logical DHCP server. For NSX-T 2.5, only the DHCP server of the "Advanced API" is usable. For NSX-T >= 3.0, the new DHCP server is the default, but for special purposes infra-version `'1'` is also allowed.
### ControlPlaneConfig

The control plane configuration mainly contains values for the vSphere-specific control plane components. Today, the only component deployed by the vSphere extension is the `cloud-controller-manager`.

An example `ControlPlaneConfig` for the vSphere extension looks as follows:
```yaml
apiVersion: vsphere.provider.extensions.gardener.cloud/v1alpha1
kind: ControlPlaneConfig
loadBalancerClasses:
- name: mypubliclbclass
- name: myprivatelbclass
  ipPoolName: pool42 # optional overwrite
loadBalancerSize: SMALL
cloudControllerManager:
  featureGates:
    CustomResourceValidation: true
```
The optional `loadBalancerClasses` field defines the load balancer classes to be used. The specified names must be defined in the constraints section of the cloud profile. If the list contains a load balancer class named "default", it is used as the default load balancer class; otherwise the first one is the default. If no classes are specified, the default load balancer class is used as defined in the cloud profile constraints section.
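For example, a sketch that makes the default choice explicit (class names are examples and must exist in the cloud profile constraints):

```yaml
# ControlPlaneConfig excerpt (sketch)
loadBalancerClasses:
- name: default          # used as the default class
- name: myprivatelbclass
```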
If the `ipPoolName` is overwritten, the corresponding IP pool object in NSX-T must have a tag with scope `authorized-shoots`, and its tag value consists of a comma-separated list of the allowed shoot names in the format `shoot--<project>--<name>` (optionally with wildcard `*`). Additionally, it must have a tag with scope `garden` set to the garden ID.
The `loadBalancerSize` is optional and overwrites the default value specified in the cloud profile config. It must be one of the values `SMALL`, `MEDIUM`, or `LARGE`. `SMALL` can manage 10 service ports, `MEDIUM` 100, and `LARGE` 1000.
The `cloudControllerManager.featureGates` field contains an optional map of explicitly enabled or disabled feature gates. For production usage it is not recommended to use this field at all, as you can enable alpha features or disable beta/stable features, potentially impacting cluster stability. If you don't want to configure anything for the `cloudControllerManager`, simply omit the key in the YAML specification.
### `Shoot` manifest (one availability zone)

Please find below an example `Shoot` manifest for one availability zone:
```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: johndoe-vsphere
  namespace: garden-dev
spec:
  cloudProfileName: vsphere
  region: europe-1
  secretBindingName: core-vsphere
  provider:
    type: vsphere
    #infrastructureConfig:
    #  apiVersion: vsphere.provider.extensions.gardener.cloud/v1alpha1
    #  kind: InfrastructureConfig
    #  overwriteNSXTInfraVersion: '1'
    controlPlaneConfig:
      apiVersion: vsphere.provider.extensions.gardener.cloud/v1alpha1
      kind: ControlPlaneConfig
      # loadBalancerClasses:
      # - name: mylbclass
    workers:
    - name: worker-xoluy
      machine:
        type: std-04
      minimum: 2
      maximum: 2
      zones:
      - europe-1a
  networking:
    nodes: 10.250.0.0/16
    type: calico
  kubernetes:
    version: 1.24.3
  maintenance:
    autoUpdate:
      kubernetesVersion: true
      machineImageVersion: true
  addons:
    kubernetesDashboard:
      enabled: true
    nginxIngress:
      enabled: true
```
This extension supports `gardener/gardener`'s `WorkerPoolKubernetesVersion` feature gate, i.e., having worker pools with overridden Kubernetes versions, since `gardener-extension-provider-vsphere@v0.12`.
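For illustration, a minimal sketch of a worker pool that overrides the cluster's Kubernetes version (the versions are examples and must be offered by the cloud profile):

```yaml
# Shoot manifest excerpt (sketch)
spec:
  kubernetes:
    version: 1.24.3
  provider:
    workers:
    - name: worker-xoluy
      kubernetes:
        version: 1.23.4   # overridden version for this worker pool only
```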
### ServiceAccount Signing Key Rotation

This extension supports `gardener/gardener`'s `ShootCARotation` feature gate since `gardener-extension-provider-vsphere@v0.13` and the `ShootSARotation` feature gate since `gardener-extension-provider-vsphere@v0.14`.
The `core.gardener.cloud/v1beta1.CloudProfile` resource declares a `providerConfig` field that is meant to contain provider-specific configuration.

In this document, we describe what this configuration looks like for VMware vSphere and provide an example `CloudProfile` manifest with minimal configuration that you can use to allow creating vSphere shoot clusters.
### CloudProfileConfig

The cloud profile configuration contains information about the real machine image paths in the vSphere environment (image names). You have to map every version that you specify in `.spec.machineImages[].versions` here such that the vSphere extension knows the image ID for every version you want to offer.

It also contains optional default values for DNS servers that shall be used for shoots.
In the `dnsServers[]` list you can specify IP addresses that are used as DNS configuration for created shoot subnets.

The `dhcpOptions` list allows to specify DHCP options. See BOOTP Vendor Extensions and DHCP Options for valid codes (tags) and details about values. The code `15` (domain name) is only allowed when using NSX-T 2.5. For NSX-T >= 3.0, use `119` (search domain).
The `dockerDaemonOptions` field allows to adjust the Docker daemon configuration:
- With `dockerDaemonOptions.httpProxyConf`, the content of the proxy configuration file can be set. See Docker HTTP/HTTPS proxy for more details.
- With `dockerDaemonOptions.insecureRegistries`, insecure registries can be specified. This should only be used for development or evaluation purposes.

Also, you have to specify the names of several NSX-T objects in the constraints.
An example `CloudProfileConfig` for the vSphere extension looks as follows:
```yaml
apiVersion: vsphere.provider.extensions.gardener.cloud/v1alpha1
kind: CloudProfileConfig
namePrefix: my_gardener
defaultClassStoragePolicyName: "vSAN Default Storage Policy"
folder: my-vsphere-vm-folder
regions:
- name: region1
  vsphereHost: my.vsphere.host
  vsphereInsecureSSL: true
  nsxtHost: my.vsphere.host
  nsxtInsecureSSL: true
  transportZone: "my-tz"
  logicalTier0Router: "my-tier0router"
  edgeCluster: "my-edgecluster"
  snatIpPool: "my-snat-ip-pool"
  datacenter: my-vsphere-dc
  zones:
  - name: zone1
    computeCluster: my-vsphere-computecluster1
    # resourcePool: my-resource-pool1 # provide either computeCluster or resourcePool or hostSystem
    # hostSystem: my-host1 # provide either computeCluster or resourcePool or hostSystem
    datastore: my-vsphere-datastore1
    #datastoreCluster: my-vsphere-datastore-cluster # provide either datastore or datastoreCluster
  - name: zone2
    computeCluster: my-vsphere-computecluster2
    # resourcePool: my-resource-pool2 # provide either computeCluster or resourcePool or hostSystem
    # hostSystem: my-host2 # provide either computeCluster or resourcePool or hostSystem
    datastore: my-vsphere-datastore2
    #datastoreCluster: my-vsphere-datastore-cluster # provide either datastore or datastoreCluster
constraints:
  loadBalancerConfig:
    size: MEDIUM
    classes:
    - name: default
      ipPoolName: gardener_lb_vip
# optional DHCP options like 119 (search domain), 42 (NTP), 15 (domain name (only NSX-T 2.5))
#dhcpOptions:
#- code: 15
#  values:
#  - foo.bar.com
#- code: 42
#  values:
#  - 136.243.202.118
#  - 80.240.29.124
#  - 78.46.53.8
#  - 162.159.200.123
dnsServers:
- 10.10.10.11
- 10.10.10.12
machineImages:
- name: flatcar
  versions:
  - version: 3139.2.3
    path: gardener/templates/flatcar-3139.2.3
    guestId: other4xLinux64Guest
#dockerDaemonOptions:
#  httpProxyConf: |
#    [Service]
#    Environment="HTTPS_PROXY=https://proxy.example.com:443"
#  insecureRegistries:
#  - myregistrydomain.com:5000
#  - blabla.mycompany.local
```
### `CloudProfile` manifest

Please find below an example `CloudProfile` manifest:
```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: CloudProfile
metadata:
  name: vsphere
spec:
  type: vsphere
  providerConfig:
    apiVersion: vsphere.provider.extensions.gardener.cloud/v1alpha1
    kind: CloudProfileConfig
    namePrefix: my_gardener
    defaultClassStoragePolicyName: "vSAN Default Storage Policy"
    folder: my-vsphere-vm-folder
    regions:
    - name: region1
      vsphereHost: my.vsphere.host
      vsphereInsecureSSL: true
      nsxtHost: my.vsphere.host
      nsxtInsecureSSL: true
      transportZone: "my-tz"
      logicalTier0Router: "my-tier0router"
      edgeCluster: "my-edgecluster"
      snatIpPool: "my-snat-ip-pool"
      datacenter: my-vsphere-dc
      zones:
      - name: zone1
        computeCluster: my-vsphere-computecluster1
        # resourcePool: my-resource-pool1 # provide either computeCluster or resourcePool or hostSystem
        # hostSystem: my-host1 # provide either computeCluster or resourcePool or hostSystem
        datastore: my-vsphere-datastore1
        #datastoreCluster: my-vsphere-datastore-cluster # provide either datastore or datastoreCluster
      - name: zone2
        computeCluster: my-vsphere-computecluster2
        # resourcePool: my-resource-pool2 # provide either computeCluster or resourcePool or hostSystem
        # hostSystem: my-host2 # provide either computeCluster or resourcePool or hostSystem
        datastore: my-vsphere-datastore2
        #datastoreCluster: my-vsphere-datastore-cluster # provide either datastore or datastoreCluster
    constraints:
      loadBalancerConfig:
        size: MEDIUM
        classes:
        - name: default
          ipPoolName: gardener_lb_vip
    dnsServers:
    - 10.10.10.11
    - 10.10.10.12
    machineImages:
    - name: coreos
      versions:
      - version: 3139.2.3
        path: gardener/templates/flatcar-3139.2.3
        guestId: other4xLinux64Guest
  kubernetes:
    versions:
    - version: 1.23.4
    - version: 1.24.0
    - version: 1.24.1
  machineImages:
  - name: flatcar
    versions:
    - version: 3139.2.3
  machineTypes:
  - name: std-02
    cpu: "2"
    gpu: "0"
    memory: 8Gi
    usable: true
  - name: std-04
    cpu: "4"
    gpu: "0"
    memory: 16Gi
    usable: true
  - name: std-08
    cpu: "8"
    gpu: "0"
    memory: 32Gi
    usable: true
  regions:
  - name: region1
    zones:
    - name: zone1
    - name: zone2
```
This extension targets Kubernetes >= v1.20 and vSphere 6.7 U3 or later.
- vSphere Container Storage Plug-in (CSI driver): vSphere 6.7 U3 or later, and Kubernetes >= v1.16 (see VMware vSphere Container Storage Plug-in for more details)
- vSphere Cloud Provider Interface (CPI): vSphere 6.7 U3 or later, and Kubernetes >= v1.11 (see cloud-provider-vsphere CPI - Cloud Provider Interface)

Currently, only Gardenlinux and Flatcar (CoreOS fork) are supported. Virtual Machine Hardware must be version 15 or higher, but images are upgraded automatically if their hardware has an older version.