vSphere / NSX-T Preparation for Gardener Extension “vSphere Provider”
Several preparatory steps are necessary in VMware vSphere and NSX-T before this extension can be used to create Gardener shoot clusters.
The main version target of this extension is vSphere 7.x together with NSX-T 3.x. The recommended environment is a system setup with VMware Cloud Foundation (VCF) 4.1. Older versions like vSphere 6.7U3 with NSX-T 2.5 or 3.0 should still work, but are not tested extensively.
vSphere Preparation
User and Role Creation
This extension needs credentials for both the vSphere/vCenter and the NSX-T endpoints. This section guides through the creation of appropriate roles and users.
vCenter/vSphere
The vCenter/vSphere user used for this provider should be assigned to a role that includes the permissions listed below. Use the vCenter/vSphere Client (Menu > Administration > Access Control > Roles) to define the role and assign it to the user with Global Permissions.
- Datastore
  - Allocate space
  - Browse datastore
  - Low level file operations
  - Remove file
  - Update virtual machine files
  - Update virtual machine metadata
- Global
  - Cancel task
  - Manage custom attributes
  - Set custom attribute
- Network
  - Assign network
- Resource
  - Assign virtual machine to resource pool
- Tasks
  - Create task
  - Update task
- vApp
  - Add virtual machine
  - Assign resource pool
  - Assign vApp
  - Clone
  - Power off
  - Power on
  - View OVF environment
  - vApp application configuration
  - vApp instance configuration
  - vApp managedBy configuration
  - vApp resource configuration
- Virtual machine
  - Change Configuration
    - Acquire disk lease
    - Add existing disk
    - Add new disk
    - Add or remove device
    - Advanced configuration
    - Change CPU count
    - Change Memory
    - Change Settings
    - Change Swapfile placement
    - Change resource
    - Configure Host USB device
    - Configure Raw device
    - Configure managedBy
    - Display connection settings
    - Extend virtual disk
    - Modify device settings
    - Query Fault Tolerance compatibility
    - Query unowned files
    - Reload from path
    - Remove disk
    - Rename
    - Reset guest information
    - Set annotation
    - Toggle disk change tracking
    - Toggle fork parent
    - Upgrade virtual machine compatibility
  - Edit Inventory
    - Create from existing
    - Create new
    - Move
    - Register
    - Remove
    - Unregister
  - Guest operations
    - Guest operation alias modification
    - Guest operation alias query
    - Guest operation modifications
    - Guest operation program execution
    - Guest operation queries
  - Interaction
    - Power off
    - Power on
    - Reset
  - Provisioning
    - Allow disk access
    - Allow file access
    - Allow read-only disk access
    - Allow virtual machine files upload
    - Clone template
    - Clone virtual machine
    - Customize guest
    - Deploy template
    - Mark as virtual machine
    - Modify customization specification
    - Promote disks
    - Read customization specifications
NSX-T
The NSX-T API is accessed by the infrastructure controller of the vsphere-provider for setting up the network infrastructure resources, and by the cloud-controller-manager for managing load balancers. Currently, the NSX-T user must have the Enterprise Admin role.
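Both sets of credentials are later supplied to Gardener via the infrastructure secret referenced by the shoot's SecretBinding. The following is only a hedged sketch; the secret name and namespace are placeholders, and the key names are assumptions that should be verified against the extension's usage documentation:
apiVersion: v1
kind: Secret
metadata:
  name: my-vsphere-credentials        # placeholder name
  namespace: my-project-namespace     # placeholder namespace
type: Opaque
stringData:
  vsphereUsername: "<vCenter user created above>"          # assumed key name
  vspherePassword: "<vCenter user password>"               # assumed key name
  nsxtUsername: "<NSX-T user with Enterprise Admin role>"  # assumed key name
  nsxtPassword: "<NSX-T user password>"                    # assumed key name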
Create Folders
Two folders need to be created:
- a folder which will contain the VMs of the shoots (cloud profile spec.providerConfig.folder)
- a folder containing the VM templates (used by cloud profile spec.providerConfig.machineImages[*].versions[*].path)
In vSphere client:
- From the Menu in the vSphere Client toolbar choose VMs and Templates
- Select the vSphere Datacenter of the workload vCenter in the browser
- From the context menu select New Folder > New VM and Template Folder, set folder name to e.g. “gardener”
- From the context menu of the new folder gardener select New Folder, set folder name to “templates”
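For orientation, the two folders map to the cloud profile roughly as sketched below; the image name, version, and template path are placeholders, not prescriptions, so adjust them to your setup:
spec:
  providerConfig:
    folder: gardener                    # folder that will contain the shoot VMs
    machineImages:
    - name: gardenlinux                 # placeholder image name
      versions:
      - version: 1.0.0                  # placeholder version
        path: gardener/templates/gardenlinux-template   # placeholder path of the uploaded template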
Upload VM Templates for Worker Nodes
Upload Garden Linux OVA or Flatcar OVA templates.
- From the context menu of the folder gardener/templates choose Deploy OVF Template…
- Adjust the name if needed
- Select any compute cluster as compute resource
- Select a storage (e.g. VSAN)
- Select any network (not important)
- No need to customize the template
- After the deployment has finished, select Template > Convert To Template from the context menu of the newly deployed VM
Prepare for Kubernetes Zones and Regions
This step has to be done even if you actually have only a single region and zone!
Two labels need to be defined in the cloud profile (section spec.providerConfig.failureDomainLabels):
failureDomainLabels:
  region: k8s-region
  zone: k8s-zone
A Kubernetes region can either be a vCenter or one of its datacenters; zones must be sub-resources of it. If the region is a complete vCenter, the zone must specify a datacenter and either a compute cluster or a resource pool. Otherwise, i.e. if the region is a datacenter, the zone must specify either a compute cluster or a resource pool.
In the following steps it is assumed (as sketched below) that
- the region is specified by a datacenter
- the zone is specified by a compute cluster or one of its resource pools
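Under these assumptions, the region/zone part of the cloud profile could look roughly like this; all names are placeholders, and the exact field names should be checked against the provider's CloudProfileConfig reference:
spec:
  providerConfig:
    regions:
    - name: region-a                  # value of the k8s-region tag assigned below
      datacenter: my-datacenter       # placeholder datacenter name
      zones:
      - name: zone-1                  # value of the k8s-zone tag assigned below
        computeCluster: my-cluster-1  # placeholder compute cluster
      - name: zone-2
        resourcePool: my-pool-2       # placeholder resource pool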
Create Resource Pool(s)
Create a resource pool for every zone:
- From the Menu in the vSphere Client toolbar choose Hosts and Clusters
- From the context menu of the compute cluster select New Resource Pool… and provide the name of the zone. CPU and Memory settings are optional.
Tag Regions and Zones
The region and each zone must be tagged with a tag of the category defined by the corresponding label in the cloud profile (spec.providerConfig.failureDomainLabels.region or spec.providerConfig.failureDomainLabels.zone, respectively).
Assuming that the region is a datacenter and the region label is k8s-region:
- From the Menu in the vSphere Client toolbar choose Hosts and Clusters
- Select the region’s datacenter in the browser
- In the Summary tab there is a sub-window titled Tags. Click the Assign… link.
- In the Assign Tag dialog select the ADD TAG link above the table
- In the Create Tag dialog choose the k8s-region category. If it is not defined, click the Create New Category link to create the category.
- Enter the Name of the region.
- Back in the Assign Tag dialog, mark the checkbox of the region tag you have just created.
- Click the ASSIGN button
Assuming that the zones are specified by resource pools and the zone label is k8s-zone:
- From the Menu in the vSphere Client toolbar choose Hosts and Clusters
- Select the zone’s Compute Cluster in the browser
- In the Summary tab there is a sub-window titled Tags. Click the Assign… link.
- In the Assign Tag dialog select the ADD TAG link above the table
- In the Create Tag dialog choose the k8s-zone category. If it is not defined, click the Create New Category link to create the category.
- Enter the Name of the zone.
- Back in the Assign Tag dialog, mark the checkbox of the zone tag you have just created.
- Click the ASSIGN button
Storage policies
Each zone can have a separate storage. In this case, a storage policy is needed that is compatible with all zone storages.
Tag Zone Storages
For each zone, tag its storage with the corresponding k8s-zone tag.
- From the Menu in the vSphere Client toolbar choose Storage
- Select the zone’s storage in the browser
- In the Summary tab there is a sub-window titled Tags. Click the Assign… link.
- In the Assign Tag dialog select the ADD TAG link above the table
- In the Create Tag dialog choose the k8s-zone category. If it is not defined, click the Create New Category link to create the category.
- Enter the Name of the zone.
- Back in the Assign Tag dialog, mark the checkbox of the zone tag you have just created.
- Click the ASSIGN button
Create or clone VM Storage Policy
1. From the Menu in the vSphere Client toolbar choose Policies and Profiles
2. In the Policies and Profiles list select VM Storage Policies
3. Create or clone an existing storage policy
   a) Set the name, e.g. “Storage Policy” (it will be needed later in the cloud profile at spec.providerConfig.defaultClassStoragePolicyName)
   b) On the page Policy structure check only the checkbox Enable tag based placement rules
   c) On the page Tag based placement press the ADD TAG RULE button
   d) For Rule 1 select Tag category = k8s-zone, Usage option = Use storage tagged with, Tags = all zone tags
   e) Validate the compatible storages on the page Storage compatibility
   f) Press FINISH on the Review and finish page
IMPORTANT: Repeat steps 1-3 and create a second storage policy named garden-etcd-fast-main. It will be used by Gardener to provision the shoots’ etcd PVCs.
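The policy name from step 3a is then referenced in the cloud profile, roughly as follows (a sketch only; the value must match the name of the storage policy created above):
spec:
  providerConfig:
    defaultClassStoragePolicyName: "Storage Policy"   # name of the VM storage policy created above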
NSX-T Preparation
A shared NSX-T is needed for all zones of a region. External IP address ranges are needed for SNAT and load balancers. In addition, the edge cluster must be sized large enough to handle the load balancers of all shoots.
Create IP pools
Two IP pools are needed for external IP addresses:
- IP pool for SNAT: the IP pool name needs to be specified in the cloud profile at spec.providerConfig.regions[*].snatIPPool. Each shoot cluster needs one SNAT IP address for outgoing traffic.
- IP pool(s) for the load balancers: the IP pool name(s) need to be specified in the cloud profile at spec.providerConfig.constraints.loadBalancerConfig.classes[*].ipPoolName. An IP address is needed for every port of every Kubernetes service of type LoadBalancer.
To create them, follow these steps in the NSX-T Manager UI in the web browser:
- From the toolbar at the top of the page choose Networking
- From the left side list choose IP Address Pools below the IP Management
- Press the ADD IP ADDRESS POOL button
- Enter Name
- Enter at least one subnet by clicking on Sets
- Press the Save button
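The names of the created pools are then referenced in the cloud profile, roughly as sketched below (pool and class names are placeholders):
spec:
  providerConfig:
    constraints:
      loadBalancerConfig:
        classes:
        - name: default              # placeholder load balancer class name
          ipPoolName: lb-ip-pool     # IP pool created for load balancer VIPs
    regions:
    - name: region-a
      snatIPPool: snat-ip-pool       # IP pool created for SNAT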
Sizing the IP pools
Each shoot cluster needs one IP address for SNAT and at least two IP addresses for load balancer VIPs (kube-apiserver and the Gardener shoot-seed VPN). A third IP address may be needed for ingress.
Depending on the workload of a shoot cluster, there may be additional services of type LoadBalancer. An IP address is needed for every port of every Kubernetes service of type LoadBalancer. For example, with an average of three load balancer IP addresses per shoot, 50 shoot clusters need at least 50 addresses in the SNAT pool and 150 addresses in the load balancer pool(s).
Check edge cluster sizing
For load-balancer-related configuration limits of NSX-T, please see the VMware Configuration Maximums web pages. The linked page shows the limits for NSX-T 3.1; if you use another version, select it in the left panel under Select Version and press the VIEW LIMITS button to update the view.
With default settings, each shoot cluster has its own T1 gateway and its own LB service (instance) of “T-shirt” size SMALL.
Examples for limits on NSX-T 3.1 using Large Edge Nodes and SMALL load balancer instances:
- There is a limit of 40 small LB instances per edge cluster (for HA, 40 per pair of edge nodes)
  => maximum number of shoot clusters = 40 * (number of edge nodes) / 2
- For SMALL load balancers, there is a maximum of 20 virtual servers. A virtual server is needed for every port of a service of type LoadBalancer
  => maximum number of service/port pairs = 20 * (number of edge nodes) / 2
- The load balancer “T-shirt” size can be set on cloud profile level (spec.providerConfig.constraints.loadBalancerConfig.size) or in the shoot manifest (spec.provider.controlPlaneConfig.loadBalancerSize)
- The number of pool members is limited to 7,500. For every Kubernetes service port, every worker node is a pool member.
  => If every shoot cluster has an average of 15 worker nodes, there can be 500 service/port pairs over all shoot clusters per pair of edge nodes
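For reference, the two settings mentioned above could be sketched as follows (SMALL is just the example size used in this section):
Cloud profile:
spec:
  providerConfig:
    constraints:
      loadBalancerConfig:
        size: SMALL
Shoot manifest:
spec:
  provider:
    controlPlaneConfig:
      loadBalancerSize: SMALL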
Get VDS UUIDs
This step is only needed if there are several VDS (virtual distributed switches) for each zone.
In this case, their UUIDs need to be fetched and set in the cloud profile at spec.providerConfig.regions[*].zones[*].switchUuid.
Unfortunately, they are not displayed in the vSphere Client. Here the command line tool govc is used to look them up.
- Run govc find / -type DistributedVirtualSwitch to get the full paths of all VDS/DVS
- For each switch run govc dvs.portgroup.info <switch-path> | grep DvsUuid
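The UUID reported as DvsUuid then goes into the cloud profile per zone, roughly like this (region/zone names and the UUID value are placeholders):
spec:
  providerConfig:
    regions:
    - name: region-a
      zones:
      - name: zone-1
        switchUuid: "<DvsUuid from govc output>"   # placeholder value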