Shoot
1 - Access Restrictions
Access Restrictions
Access restrictions can be configured in the CloudProfile, Seed, and Shoot APIs.
They can be used to implement access restrictions for seed and shoot clusters (e.g., if you want to ensure “EU access”-only or similar policies).
CloudProfile
The .spec.regions list contains all regions that can be selected by Shoots.
Operators can configure them with a list of access restrictions that apply for each region, for example:
spec:
  regions:
  - name: europe-central-1
    accessRestrictions:
    - name: eu-access-only
  - name: us-west-1
This configuration means that Shoots selecting the europe-central-1 region can configure an eu-access-only access restriction.
Shoots running in other regions cannot configure this access restriction in their specification.
Seed
The Seed specification also allows configuring access restrictions that apply to this specific seed cluster, for example:
spec:
  accessRestrictions:
  - name: eu-access-only
This configuration means that this seed cluster can host shoot clusters that also have the eu-access-only access restriction.
In addition, this seed cluster can also host shoot clusters without any access restrictions at all.
Shoot
If the CloudProfile allows configuring access restrictions for the selected .spec.region in the Shoot (see above), then they can also be provided in the Shoot specification, for example:
spec:
  region: europe-central-1
  accessRestrictions:
  - name: eu-access-only
  # options:
  #   support.gardener.cloud/eu-access-for-cluster-addons: "false"
  #   support.gardener.cloud/eu-access-for-cluster-nodes: "true"
In addition, it is possible to specify arbitrary options (key-value pairs) for the access restriction.
These options are not interpreted by Gardener, but can be helpful when evaluated by other tools (e.g., gardenctl implements some of them).
The above configuration means that the Shoot shall only be accessible by operators in the EU.
When configured for
- a newly created Shoot, gardener-scheduler will automatically filter for Seeds also supporting this access restriction. All other Seeds are not considered for scheduling.
- an existing Shoot, gardener-apiserver will allow removing access restrictions, but adding them is only possible if the currently selected Seed supports them. If it does not support them, the Shoot must first be migrated to another eligible Seed before they can be added.
- an existing Shoot that is migrated, gardener-apiserver will only allow the migration in case the targeted Seed also supports the access restrictions configured on the Shoot.
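For illustration, adding an access restriction to an existing Shoot could look like the following kubectl patch (shoot and namespace names are placeholders; the request is only accepted if the currently selected Seed supports the restriction):

kubectl patch shoot my-shoot -n garden-my-namespace --type=merge \
  -p '{"spec":{"accessRestrictions":[{"name":"eu-access-only"}]}}'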
❗ Important
There is no technical enforcement of these access restrictions - they are purely informational. Hence, it is the responsibility of the operator to ensure that they enforce the configured access restrictions.
2 - Accessing Shoot Clusters
Accessing Shoot Clusters
After creation of a shoot cluster, end-users require a kubeconfig to access it. There are several options available to get such a kubeconfig.
shoots/adminkubeconfig Subresource
The shoots/adminkubeconfig subresource allows users to dynamically generate temporary kubeconfigs that can be used to access shoot clusters with cluster-admin privileges. The credentials associated with this kubeconfig are client certificates which have a very short validity and must be renewed before they expire (by calling the subresource endpoint again).
The username associated with such a kubeconfig will be the same which is used for authenticating to the Gardener API. Apart from this advantage, the created kubeconfig will not be persisted anywhere.
In order to request such a kubeconfig, you can run the following commands (targeting the garden cluster):
export NAMESPACE=garden-my-namespace
export SHOOT_NAME=my-shoot
export KUBECONFIG=<kubeconfig for garden cluster> # can be set using "gardenctl target --garden <landscape>"
kubectl create \
    -f <(printf '{"spec":{"expirationSeconds":600}}') \
    --raw /apis/core.gardener.cloud/v1beta1/namespaces/${NAMESPACE}/shoots/${SHOOT_NAME}/adminkubeconfig | \
    jq -r ".status.kubeconfig" | \
    base64 -d
You can also use the controller-runtime client (>= v0.14.3) to create such a kubeconfig from your Go code like so:
// client is a controller-runtime client.Client for the garden cluster,
// and shoot is the Shoot object for which the kubeconfig is requested.
expiration := 10 * time.Minute
expirationSeconds := int64(expiration.Seconds())
adminKubeconfigRequest := &authenticationv1alpha1.AdminKubeconfigRequest{
  Spec: authenticationv1alpha1.AdminKubeconfigRequestSpec{
    ExpirationSeconds: &expirationSeconds,
  },
}
err := client.SubResource("adminkubeconfig").Create(ctx, shoot, adminKubeconfigRequest)
if err != nil {
  return err
}
config = adminKubeconfigRequest.Status.Kubeconfig
In Python, you can use the native kubernetes client to create such a kubeconfig like this:
# This script first loads an existing kubeconfig from your system, and then sends a request to the Gardener API to create a new kubeconfig for a shoot cluster.
# The received kubeconfig is then decoded and a new API client is created for interacting with the shoot cluster.

import base64
import json

from kubernetes import client, config
import yaml

# Set configuration options
shoot_name="my-shoot" # Name of the shoot
project_namespace="garden-my-namespace" # Namespace of the project

# Load kubeconfig from default ~/.kube/config
config.load_kube_config()
api = client.ApiClient()

# Create kubeconfig request
kubeconfig_request = {
    'apiVersion': 'authentication.gardener.cloud/v1alpha1',
    'kind': 'AdminKubeconfigRequest',
    'spec': {
        'expirationSeconds': 600
    }
}

response = api.call_api(resource_path=f'/apis/core.gardener.cloud/v1beta1/namespaces/{project_namespace}/shoots/{shoot_name}/adminkubeconfig',
                        method='POST',
                        body=kubeconfig_request,
                        auth_settings=['BearerToken'],
                        _preload_content=False,
                        _return_http_data_only=True,
                       )

decoded_kubeconfig = base64.b64decode(json.loads(response.data)["status"]["kubeconfig"]).decode('utf-8')
print(decoded_kubeconfig)

# Create an API client to interact with the shoot cluster
shoot_api_client = config.new_client_from_config_dict(yaml.safe_load(decoded_kubeconfig))
v1 = client.CoreV1Api(shoot_api_client)
Note: The gardenctl-v2 tool simplifies targeting shoot clusters. It automatically downloads a kubeconfig that uses the gardenlogin kubectl auth plugin. This transparently manages authentication and certificate renewal without containing any credentials.
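As a sketch (landscape, project, and shoot names are placeholders), targeting a shoot with gardenctl-v2 and pointing the current shell at it could look like this:

# Target the shoot; gardenctl downloads a gardenlogin-based kubeconfig
gardenctl target --garden my-landscape --project my-project --shoot my-shoot

# Export KUBECONFIG for the current shell
eval $(gardenctl kubectl-env bash)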
shoots/viewerkubeconfig Subresource
The shoots/viewerkubeconfig subresource works similar to the shoots/adminkubeconfig subresource.
The difference is that it returns a kubeconfig with read-only access for all APIs except the core/v1.Secret API and the resources which are specified in the spec.kubernetes.kubeAPIServer.encryptionConfig field in the Shoot (see this document).
In order to request such a kubeconfig, you can run almost the same code as above - the only difference is that you need to use the viewerkubeconfig subresource.
For example, in bash this looks like this:
export NAMESPACE=garden-my-namespace
export SHOOT_NAME=my-shoot
kubectl create \
    -f <(printf '{"spec":{"expirationSeconds":600}}') \
    --raw /apis/core.gardener.cloud/v1beta1/namespaces/${NAMESPACE}/shoots/${SHOOT_NAME}/viewerkubeconfig | \
    jq -r ".status.kubeconfig" | \
    base64 -d
The examples for other programming languages are similar to the above and can be adapted accordingly.
OpenID Connect
Note: OpenID Connect is deprecated in favor of Structured Authentication configuration. Setting OpenID Connect configurations is forbidden for clusters with Kubernetes version >= 1.32.
The kube-apiserver of shoot clusters can be provided with OpenID Connect configuration via the Shoot spec:
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
...
spec:
  kubernetes:
    kubeAPIServer:
      oidcConfig:
        ...
It is the end-user’s responsibility to incorporate the OpenID Connect configurations in the kubeconfig for accessing the cluster (i.e., Gardener will not automatically generate the kubeconfig based on these OIDC settings).
The recommended way is using the kubectl plugin called kubectl oidc-login for OIDC authentication.
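A minimal sketch of a kubeconfig user entry for kubectl oidc-login (the issuer URL and client ID are placeholders that must match your OIDC provider and the Shoot's oidcConfig):

users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://issuer.example.com
      - --oidc-client-id=my-client-id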
If you want to use the same OIDC configuration for all your shoots by default, then you can use the ClusterOpenIDConnectPreset and OpenIDConnectPreset API resources. They allow defaulting the .spec.kubernetes.kubeAPIServer.oidcConfig fields for newly created Shoots such that you don’t have to repeat yourself every time (similar to PodPreset resources in Kubernetes).
ClusterOpenIDConnectPreset specified OIDC configuration applies to Projects and Shoots cluster-wide (hence, only available to Gardener operators), while OpenIDConnectPreset is Project-scoped.
Shoots have to “opt-in” for such defaulting by using the oidc=enable label.
For further information on (Cluster)OpenIDConnectPreset, refer to ClusterOpenIDConnectPreset and OpenIDConnectPreset.
For shoots with Kubernetes version >= 1.30, which have the StructuredAuthenticationConfiguration feature gate enabled (enabled by default), it is advised to use Structured Authentication instead of configuring .spec.kubernetes.kubeAPIServer.oidcConfig.
If oidcConfig is configured, it is translated into an AuthenticationConfiguration file to use for the Structured Authentication configuration.
Structured Authentication
For shoots with Kubernetes version >= 1.30, which have the StructuredAuthenticationConfiguration feature gate enabled (enabled by default), the kube-apiserver of shoot clusters can be provided with Structured Authentication configuration via the Shoot spec:
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
...
spec:
  kubernetes:
    kubeAPIServer:
      structuredAuthentication:
        configMapName: name-of-configmap-containing-authentication-config
The configMapName references a user-created ConfigMap in the project namespace containing the AuthenticationConfiguration in its config.yaml data field.
Here is an example of such a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-configmap-containing-authentication-config
  namespace: garden-my-project
data:
  config.yaml: |
    apiVersion: apiserver.config.k8s.io/v1beta1
    kind: AuthenticationConfiguration
    jwt:
    - issuer:
        url: https://issuer1.example.com
        audiences:
        - audience1
        - audience2
      claimMappings:
        username:
          expression: 'claims.username'
        groups:
          expression: 'claims.groups'
        uid:
          expression: 'claims.uid'
      claimValidationRules:
      - expression: 'claims.hd == "example.com"'
        message: "the hosted domain name must be example.com"
The user is responsible for the validity of the configured JWTAuthenticators.
Be aware that changing the configuration in the ConfigMap will be applied in the next Shoot reconciliation, but this is not automatically triggered.
If you want the changes to roll out immediately, trigger a reconciliation explicitly.
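For example, an explicit reconciliation can be triggered via the gardener.cloud/operation annotation on the Shoot in the garden cluster (names are placeholders):

kubectl annotate shoot my-shoot -n garden-my-project gardener.cloud/operation=reconcile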
Structured Authorization
For shoots with Kubernetes version >= 1.30, which have the StructuredAuthorizationConfiguration feature gate enabled (enabled by default), the kube-apiserver of shoot clusters can be provided with Structured Authorization configuration via the Shoot spec:
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
...
spec:
  kubernetes:
    kubeAPIServer:
      structuredAuthorization:
        configMapName: name-of-configmap-containing-authorization-config
        kubeconfigs:
        - authorizerName: my-webhook
          secretName: webhook-kubeconfig
The configMapName references a user-created ConfigMap in the project namespace containing the AuthorizationConfiguration in its config.yaml data field.
Here is an example of such a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-configmap-containing-authorization-config
  namespace: garden-my-project
data:
  config.yaml: |
    apiVersion: apiserver.config.k8s.io/v1beta1
    kind: AuthorizationConfiguration
    authorizers:
    - type: Webhook
      name: my-webhook
      webhook:
        timeout: 3s
        subjectAccessReviewVersion: v1
        matchConditionSubjectAccessReviewVersion: v1
        failurePolicy: Deny
        matchConditions:
        - expression: request.resourceAttributes.namespace == 'kube-system'
In addition, it is required to provide a Secret for each authorizer.
This Secret should contain a kubeconfig with the server address of the webhook server, and optionally credentials for authentication:
apiVersion: v1
kind: Secret
metadata:
  name: webhook-kubeconfig
  namespace: garden-my-project
data:
  kubeconfig: <base64-encoded-kubeconfig-for-authz-webhook>
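Such a Secret can be created, for instance, directly from a kubeconfig file on disk (file name and namespace are placeholders):

kubectl create secret generic webhook-kubeconfig -n garden-my-project \
  --from-file=kubeconfig=./webhook-kubeconfig.yaml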
The user is responsible for the validity of the configured authorizers.
Be aware that changing the configuration in the ConfigMap will be applied in the next Shoot reconciliation, but this is not automatically triggered.
If you want the changes to roll out immediately, trigger a reconciliation explicitly.
ℹ️ Note
- You can have one or more authorizers of type Webhook (no other types are supported).
- You are not allowed to specify the authorizers[].webhook.connectionInfo field. Instead, as mentioned above, provide a kubeconfig file containing the server address (and optionally, credentials that can be used by kube-apiserver in order to authenticate with the webhook server) by creating a Secret containing the kubeconfig (in the .data.kubeconfig key). Reference this Secret by adding it to .spec.kubernetes.kubeAPIServer.structuredAuthorization.kubeconfigs[] (choose the proper authorizerName, see example above).
Be aware that all webhook authorizers are added only after the RBAC/Node authorizers.
Hence, if RBAC already allows a request, your webhook authorizer might not get called.
Static Token Kubeconfig
Note: Static token kubeconfig is not available for Shoot clusters using Kubernetes version >= 1.27. The shoots/adminkubeconfig subresource should be used instead.
This kubeconfig contains a static token and provides cluster-admin privileges.
It is created by default and persisted in the <shoot-name>.kubeconfig secret in the project namespace in the garden cluster.
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
...
spec:
  kubernetes:
    enableStaticTokenKubeconfig: true
...
It is not the recommended method to access the shoot cluster, as the static token kubeconfig has some security flaws associated with it:
- The static token in the kubeconfig doesn’t have any expiration date. Read Credentials Rotation for Shoot Clusters to learn how to rotate the static token.
- The static token doesn’t have any user identity associated with it. The user in that token will always be system:cluster-admin, irrespective of the person accessing the cluster. Hence, it is impossible to audit the events in the cluster.
When the enableStaticTokenKubeconfig field is not explicitly set in the Shoot spec:
- for Shoot clusters using Kubernetes version < 1.26, the field is defaulted to true.
- for Shoot clusters using Kubernetes version >= 1.26, the field is defaulted to false.
Note: Starting with Kubernetes 1.27, the enableStaticTokenKubeconfig field will be locked to false.
3 - Shoot Cluster Purposes
Shoot Cluster Purpose
The Shoot resource contains a .spec.purpose field indicating how the shoot is used, whose allowed values are as follows:
- evaluation (default): Indicates that the shoot cluster is for evaluation scenarios.
- development: Indicates that the shoot cluster is for development scenarios.
- testing: Indicates that the shoot cluster is for testing scenarios.
- production: Indicates that the shoot cluster is for production scenarios.
- infrastructure: Indicates that the shoot cluster is for infrastructure scenarios (only allowed for shoots in the garden namespace).
Behavioral Differences
The following lists the differences in the way the shoot clusters are set up based on the selected purpose:
- testing shoot clusters do not get a monitoring or a logging stack as part of their control planes.
- for production and infrastructure shoot clusters, auto-scaling scale-down of the main ETCD is disabled.
There are also differences with respect to how testing shoots are scheduled after creation; please consult the Scheduler documentation.
Future Steps
We might introduce more behavioral differences depending on the shoot purpose in the future. As of today, there are no plans yet.
4 - Shoot Hibernation
Shoot Hibernation
Clusters are only needed 24 hours a day if they run productive workloads. So whenever you do development in a cluster, or just use it for tests or demo purposes, you can save a lot of money if you scale down your Kubernetes resources whenever you don’t need them. However, scaling them down manually can become time-consuming the more resources you have.
Gardener offers a clever way to automatically scale down all resources to zero: cluster hibernation. You can either hibernate a cluster by pushing a button, or by defining a hibernation schedule.
To save costs, it’s recommended to define a hibernation schedule before the creation of a cluster. You can hibernate your cluster or wake up your cluster manually even if there’s a schedule for its hibernation.
What Is Hibernation?
When a cluster is hibernated, Gardener scales down the worker nodes and the cluster’s control plane to free resources at the IaaS provider. This affects:
- Your workload, for example, pods, deployments, custom resources.
- The virtual machines running your workload.
- The resources of the control plane of your cluster.
What Isn’t Affected by the Hibernation?
To scale up everything where it was before hibernation, Gardener doesn’t delete state-related information, that is, information stored in persistent volumes. The cluster state as persisted in etcd is also preserved.
Hibernate Your Cluster Manually
The .spec.hibernation.enabled field specifies whether the cluster needs to be hibernated or not. If the field is set to true, the cluster’s desired state is to be hibernated. If it is set to false or not specified at all, the cluster’s desired state is to be awakened.
To hibernate your cluster, you can run the following kubectl command:
$ kubectl patch shoot -n $NAMESPACE $SHOOT_NAME -p '{"spec":{"hibernation":{"enabled": true}}}'
Wake Up Your Cluster Manually
To wake up your cluster, you can run the following kubectl command:
$ kubectl patch shoot -n $NAMESPACE $SHOOT_NAME -p '{"spec":{"hibernation":{"enabled": false}}}'
Create a Schedule to Hibernate Your Cluster
You can specify a hibernation schedule to automatically hibernate/wake up a cluster.
Let’s have a look into the following example:
hibernation:
  enabled: false
  schedules:
  - start: "0 20 * * *"               # Start hibernation every day at 8PM
    end: "0 6 * * *"                  # Stop hibernation every day at 6AM
    location: "America/Los_Angeles"   # Specify a location for the cron to run in
The above section configures a hibernation schedule that hibernates the cluster every day at 08:00 PM and wakes it up at 06:00 AM. The start or end fields can be omitted, though at least one of them has to be specified. Hence, it is possible to configure a hibernation schedule that only hibernates or only wakes up a cluster. The location field is the time location used to evaluate the cron expressions.
5 - Shoot Info Configmap
Shoot Info ConfigMap
Overview
The gardenlet maintains a ConfigMap inside the Shoot cluster that contains information about the cluster itself. The ConfigMap is named shoot-info and located in the kube-system namespace.
Fields
The following fields are provided:
apiVersion: v1
kind: ConfigMap
metadata:
  name: shoot-info
  namespace: kube-system
data:
  domain: crazy-botany.core.my-custom-domain.com # .spec.dns.domain field from the Shoot resource
  extensions: foobar,foobaz                      # List of extensions that are enabled
  kubernetesVersion: 1.25.4                      # .spec.kubernetes.version field from the Shoot resource
  maintenanceBegin: 220000+0100                  # .spec.maintenance.timeWindow.begin field from the Shoot resource
  maintenanceEnd: 230000+0100                    # .spec.maintenance.timeWindow.end field from the Shoot resource
  nodeNetwork: 10.250.0.0/16                     # .spec.networking.nodes field from the Shoot resource
  podNetwork: 100.96.0.0/11                      # .spec.networking.pods field from the Shoot resource
  projectName: dev                               # .metadata.name of the Project
  provider: <some-provider-name>                 # .spec.provider.type field from the Shoot resource
  region: europe-central-1                       # .spec.region field from the Shoot resource
  serviceNetwork: 100.64.0.0/13                  # .spec.networking.services field from the Shoot resource
  shootName: crazy-botany                        # .metadata.name from the Shoot resource
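To inspect these fields on a running cluster, you can read the ConfigMap with kubectl, using a kubeconfig that targets the shoot cluster:

kubectl get configmap shoot-info -n kube-system -o yaml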
6 - Shoot Maintenance
Shoot Maintenance
Shoots configure a maintenance time window in which Gardener performs certain operations that may restart the control plane, roll out the nodes, result in higher network traffic, etc. A summary of what was changed in the last maintenance time window in the shoot specification is kept in the shoot status .status.lastMaintenance field.
This document outlines what happens during a shoot maintenance.
Time Window
Via the .spec.maintenance.timeWindow field in the shoot specification, end-users can configure the time window in which maintenance operations are executed.
Gardener runs one maintenance operation per day in this time window:
spec:
  maintenance:
    timeWindow:
      begin: 220000+0100
      end: 230000+0100
The offset (+0100) is considered with respect to UTC time.
The minimum time window is 30m and the maximum is 6h.
⚠️ Please note that there is no guarantee that a maintenance operation that, e.g., starts a node roll-out will finish within the time window. Especially for large clusters, it may take several hours until a graceful rolling update of the worker nodes succeeds (also depending on the workload and the configured pod disruption budgets/termination grace periods).
Internally, Gardener is subtracting 15m from the end of the time window to (best-effort) try to finish the maintenance until the end is reached, however, this might not work in all cases.
If you don’t specify a time window, then Gardener will randomly compute it. You can change it later, of course.
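If you want to adjust the time window later, a simple kubectl patch against the garden cluster is sufficient (shoot and namespace names are placeholders):

kubectl patch shoot my-shoot -n garden-my-namespace --type=merge \
  -p '{"spec":{"maintenance":{"timeWindow":{"begin":"220000+0100","end":"230000+0100"}}}}'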
Automatic Version Updates
The .spec.maintenance.autoUpdate field in the shoot specification allows you to control how/whether automatic updates of Kubernetes patch and machine image versions are performed.
Machine image versions are updated per worker pool.
spec:
  maintenance:
    autoUpdate:
      kubernetesVersion: true
      machineImageVersion: true
During the daily maintenance, the Gardener Controller Manager updates the Shoot’s Kubernetes and machine image version if any of the following criteria applies:
- There is a higher version available and the Shoot opted-in for automatic version updates.
- The currently used version is expired.
The target version for machine image upgrades is controlled by the updateStrategy field for the machine image in the CloudProfile. Allowed update strategies are patch, minor and major.
Gardener (gardener-controller-manager) populates the lastMaintenance field in the Shoot status with the maintenance results.
Last Maintenance:
  Description:     "All maintenance operations successful. Control Plane: Updated Kubernetes version from 1.26.4 to 1.27.1. Reason: Kubernetes version expired - force update required"
  State:           Succeeded
  Triggered Time:  2023-07-28T09:07:27Z
Additionally, Gardener creates events with the type MachineImageVersionMaintenance or KubernetesVersionMaintenance on the Shoot describing the action performed during maintenance, including the reason why an update has been triggered.
LAST SEEN TYPE REASON OBJECT MESSAGE
30m Normal MachineImageVersionMaintenance shoot/local Worker pool "local": Updated image from 'gardenlinux' version 'xy' to version 'abc'. Reason: Automatic update of the machine image version is configured (image update strategy: major).
30m Normal KubernetesVersionMaintenance shoot/local Control Plane: Updated Kubernetes version from "1.26.4" to "1.27.1". Reason: Kubernetes version expired - force update required.
15m Normal KubernetesVersionMaintenance shoot/local Worker pool "local": Updated Kubernetes version '1.26.3' to version '1.27.1'. Reason: Kubernetes version expired - force update required.
If at least one maintenance operation fails, the lastMaintenance field in the Shoot status is set to Failed:
Last Maintenance:
  Description:     "(1/2) maintenance operations successful: Control Plane: Updated Kubernetes version from 1.26.4 to 1.27.1. Reason: Kubernetes version expired - force update required, Worker pool x: 'gardenlinux' machine image version maintenance failed. Reason for update: machine image version expired"
  FailureReason:   "Worker pool x: either the machine image 'gardenlinux' is reaching end of life and migration to another machine image is required or there is a misconfiguration in the CloudProfile."
  State:           Failed
  Triggered Time:  2023-07-28T09:07:27Z
Please refer to the Shoot Kubernetes and Operating System Versioning in Gardener topic for more information about Kubernetes and machine image versions in Gardener.
Cluster Reconciliation
Gardener administrators/operators can configure the gardenlet in a way that it only reconciles shoot clusters during their maintenance time windows. This behaviour is not controllable by end-users but might make sense for large Gardener installations. Concretely, your shoot will be reconciled regularly during its maintenance time window. Outside of the maintenance time window it will only reconcile if you change the specification or if you explicitly trigger it, see also Trigger Shoot Operations.
Confine Specification Changes/Updates Roll Out
Via the .spec.maintenance.confineSpecUpdateRollout field you can control whether you want to make Gardener roll out changes/updates to your shoot specification only during the maintenance time window.
It is false by default, i.e., any change to your shoot specification triggers a reconciliation (even outside of the maintenance time window).
This is helpful if you want to update your shoot but don’t want the changes to be applied immediately. One example use-case would be a Kubernetes version upgrade that you want to roll out during the maintenance time window.
Any update to the specification will not increase the .metadata.generation of the Shoot, which is something you should be aware of.
Also, even if Gardener administrators/operators have not enabled the “reconciliation in maintenance time window only” configuration (as mentioned above), your shoot will only be reconciled in the maintenance time window.
The reason is that Gardener cannot differentiate between create/update/reconcile operations.
⚠️ If confineSpecUpdateRollout=true, please note that if you change the maintenance time window itself, then it will only be effective after the upcoming maintenance.
⚠️ As exceptions to the above rules, manually triggered reconciliations and changes to the .spec.hibernation.enabled field trigger immediate rollouts.
I.e., if you hibernate or wake-up your shoot, or you explicitly tell Gardener to reconcile your shoot, then Gardener gets active right away.
Shoot Operations
In case you would like to perform a shoot credential rotation or a reconcile operation during your maintenance time window, you can annotate the Shoot with

maintenance.gardener.cloud/operation=<operation>

This will execute the specified <operation> during the next maintenance reconciliation.
Note that Gardener will remove this annotation after it has been performed in the maintenance reconciliation.
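For instance, confining a reconcile operation to the maintenance time window could look like this (shoot and namespace names are placeholders):

kubectl annotate shoot my-shoot -n garden-my-namespace \
  maintenance.gardener.cloud/operation=reconcile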
⚠️ This is skipped when the Shoot’s .status.lastOperation.state=Failed. Make sure to retry your shoot reconciliation beforehand.
Special Operations During Maintenance
The shoot maintenance controller triggers special operations that are performed as part of the shoot reconciliation.
Infrastructure and DNSRecord Reconciliation
The reconciliation of the Infrastructure and DNSRecord extension resources is only demanded during the shoot’s maintenance time window.
The rationale behind it is to prevent sending too many requests against the cloud provider APIs, especially on large landscapes or if a user has many shoot clusters in the same cloud provider account.
Restart Control Plane Controllers
Gardener operators can make Gardener restart/delete certain control plane pods during a shoot maintenance. This feature helps to automatically solve service denials of controllers due to stale caches, dead-locks or starving routines.
Please note that these are exceptional cases but they are observed from time to time.
Gardener, for example, takes this precautionary measure for kube-controller-manager pods.
See Shoot Maintenance to see how extension developers can extend this behaviour.
Restart Some Core Addons
Gardener operators can make Gardener restart some core addons (at the moment only CoreDNS) during a shoot maintenance.
CoreDNS benefits from this feature as it automatically solves problems with clients stuck to a single replica of the deployment and thus overloading it. Please note that these are exceptional cases but they are observed from time to time.
7 - Shoot Scheduling Profiles
Shoot Scheduling Profiles
This guide describes the available scheduling profiles and how they can be configured in the Shoot cluster. It also clarifies how a custom scheduling profile can be configured.
Scheduling Profiles
The scheduling process in the kube-scheduler happens in a series of stages. A scheduling profile allows configuring the different stages of the scheduling.
As of today, Gardener supports two predefined scheduling profiles:
balanced (default)
Overview
The balanced profile attempts to spread Pods evenly across Nodes to obtain a more balanced resource usage. This profile provides the default kube-scheduler behavior.
How does it work?
The kube-scheduler is started without any profiles. In such a case, by default, one profile with the scheduler name default-scheduler is created. This profile includes the default plugins. If a Pod doesn’t specify the .spec.schedulerName field, kube-apiserver sets it to default-scheduler. Then, the Pod gets scheduled by the default-scheduler accordingly.
bin-packing
Overview
The bin-packing profile scores Nodes based on the allocation of resources. It prioritizes Nodes with the most allocated resources. By favoring the Nodes with the most allocation, some of the other Nodes become under-utilized over time (because new Pods keep being scheduled to the most allocated Nodes). Then, the cluster-autoscaler identifies such under-utilized Nodes and removes them from the cluster. In this way, this profile provides a greater overall resource utilization (compared to the balanced profile).
Note: The decision of when to remove a Node is a trade-off between optimizing for utilization or the availability of resources. Removing under-utilized Nodes improves cluster utilization, but new workloads might have to wait for resources to be provisioned again before they can run.
How does it work?
The kube-scheduler is configured with the following bin-packing profile:
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: bin-packing-scheduler
  pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: MostAllocated
  plugins:
    score:
      disabled:
      - name: NodeResourcesBalancedAllocation
To impose the new profile, a MutatingWebhookConfiguration is deployed in the Shoot cluster. The MutatingWebhookConfiguration intercepts CREATE operations for Pods and sets the .spec.schedulerName field to bin-packing-scheduler. Then, the Pod gets scheduled by the bin-packing-scheduler accordingly. Pods that specify a custom scheduler (i.e., having .spec.schedulerName different from default-scheduler and bin-packing-scheduler) are not affected.
Configuring the Scheduling Profile
The scheduling profile can be configured via the .spec.kubernetes.kubeScheduler.profile field in the Shoot:
spec:
  # ...
  kubernetes:
    kubeScheduler:
      profile: "balanced" # or "bin-packing"
Custom Scheduling Profiles
The kube-scheduler’s component config allows configuring custom scheduling profiles to match the cluster needs. As of today, Gardener supports only two predefined scheduling profiles. The profile configuration in the component config is quite expressive and it is not possible to easily define profiles that would match the needs of every cluster. Because of these reasons, there are no plans to add support for new predefined scheduling profiles. If a cluster owner wants to use a custom scheduling profile, then they have to deploy (and maintain) a dedicated kube-scheduler deployment in the cluster itself.
8 - Shoot Status
Shoot Status
This document provides an overview of the ShootStatus.
Conditions
The Shoot status consists of a set of conditions. A Condition has the following fields:
Field name | Description |
---|---|
type | Name of the condition. |
status | Indicates whether the condition is applicable, with possible values True , False , Unknown or Progressing . |
lastTransitionTime | Timestamp for when the condition last transitioned from one status to another. |
lastUpdateTime | Timestamp for when the condition was updated. Usually changes when reason or message in condition is updated. |
reason | Machine-readable, UpperCamelCase text indicating the reason for the condition’s last transition. |
message | Human-readable message indicating details about the last status transition. |
codes | Well-defined error codes in case the condition reports a problem. |
Currently, the available Shoot condition types are:
- APIServerAvailable
- ControlPlaneHealthy
- EveryNodeReady
- ObservabilityComponentsHealthy
- SystemComponentsHealthy
The Shoot conditions are maintained by the shoot care reconciler of the gardenlet. Find more information in the gardenlet documentation.
Sync Period
The condition checks are executed periodically at an interval which is configurable in the GardenletConfiguration (.controllers.shootCare.syncPeriod, defaults to 1m).
Condition Thresholds
The GardenletConfiguration also allows configuring condition thresholds (controllers.shootCare.conditionThresholds). A condition threshold is the amount of time to consider a condition as Progressing on condition status changes.
Let’s check the following example to get a better understanding. Let’s say that the APIServerAvailable condition of our Shoot has status True. If the next condition check fails (for example, kube-apiserver becomes unreachable), then the condition first goes to the Progressing state. Only if this state remains for the condition threshold amount of time is the condition finally updated to False.
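A minimal sketch of the corresponding GardenletConfiguration fields (the threshold value is an example, not a recommendation):

apiVersion: gardenlet.config.gardener.cloud/v1alpha1
kind: GardenletConfiguration
controllers:
  shootCare:
    syncPeriod: 1m
    conditionThresholds:
    - type: APIServerAvailable
      duration: 1m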
Constraints
Constraints represent conditions of a Shoot’s current state that constrain some operations on it. The current constraints are:
HibernationPossible:
This constraint indicates whether a Shoot is allowed to be hibernated.
The rationale behind this constraint is that a Shoot can have ValidatingWebhookConfigurations or MutatingWebhookConfigurations acting on resources that are critical for waking up a cluster.
For example, if a webhook has rules for CREATE/UPDATE Pods or Nodes and failurePolicy=Fail, the webhook will block joining Nodes and creating critical system component Pods and thus block the entire wakeup operation, because the server backing the webhook is not running.
Even if the failurePolicy is set to Ignore, high timeouts (>15s) can lead to blocking requests of control plane components.
That’s because most control-plane API calls are made with a client-side timeout of 30s, so if a webhook has timeoutSeconds=30 the overall request might still fail as there is overhead in communication with the API server and potential other webhooks.
Generally, it’s best practice to specify low timeouts in WebhookConfigs.
As an effort to correct this common problem, the webhook remediator has been created. This is enabled by setting .controllers.shootCare.webhookRemediatorEnabled=true in the gardenlet’s configuration. This feature simply checks whether webhook configurations in shoot clusters match a set of rules described here. If at least one of the rules matches, it will set status=False for the .status.constraints of type HibernationPossible and MaintenancePreconditionsSatisfied in the Shoot resource. In addition, the failurePolicy in the affected webhook configurations will be set from Fail to Ignore. Gardenlet will also add an annotation to make it visible to end-users that their webhook configurations were mutated and should be fixed/adapted according to the rules and best practices.
In most cases, you can avoid this by simply excluding the kube-system namespace from your webhook via the namespaceSelector:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
  namespaceSelector:
    matchExpressions:
    - key: gardener.cloud/purpose
      operator: NotIn
      values:
      - kube-system
  rules:
  - operations: ["*"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    scope: "Namespaced"
However, some other resources (some of them cluster-scoped) might still trigger the remediator, namely:
- endpoints
- nodes
- clusterroles
- clusterrolebindings
- customresourcedefinitions
- apiservices
- certificatesigningrequests
- priorityclasses
If one of the above resources triggers the remediator, the preferred solution is to remove that particular resource from your webhook’s rules. You can also use the objectSelector to reduce the scope of your webhook’s rules. However, in special cases where a webhook is absolutely needed for the workload, it is possible to add the remediation.webhook.shoot.gardener.cloud/exclude=true label to your webhook so that the remediator ignores it. This label should not be used to silence an alert, but rather to confirm that a webhook won’t cause problems. Note that all of this is not a perfect solution and is just done on a best-effort basis, and only the owner of the webhook can know whether it indeed is problematic and configured correctly.
In a special case, if a webhook has a rule for CREATE/UPDATE lease resources in the kube-system namespace, its timeoutSeconds is updated to 3 seconds. This is required to ensure the proper functioning of the leader election of essential control plane controllers.
You can also find more help from the Kubernetes documentation
MaintenancePreconditionsSatisfied:
This constraint indicates whether all preconditions for a safe maintenance operation are satisfied (see Shoot Maintenance for more information about what happens during a shoot maintenance).
As of today, the same checks as in the HibernationPossible constraint are being performed (user-deployed webhooks that might interfere with potential rolling updates of shoot worker nodes).
There is no further action being performed on this constraint’s status (maintenance is still being performed).
It is meant to make the user aware of potential problems that might occur due to their configurations.
CACertificateValiditiesAcceptable:
This constraint indicates that there is at least one CA certificate which expires in less than 1y.
It will not be added to the .status.constraints if there is no such CA certificate.
However, if it’s visible, then a credentials rotation operation should be considered.
CRDsWithProblematicConversionWebhooks:
This constraint indicates that there is at least one CustomResourceDefinition in the cluster which has multiple stored versions and a conversion webhook configured. This could break the reconciliation flow of a Shoot cluster in some cases. See https://github.com/gardener/gardener/issues/7471 for more details.
It will not be added to the .status.constraints if there is no such CRD.
However, if it’s visible, then you should consider upgrading the existing objects to the current stored version. See Upgrade existing objects to a new stored version for detailed steps.
Last Operation
The Shoot status holds information about the last operation that is performed on the Shoot. The last operation field reflects overall progress and the tasks that are currently being executed. Allowed operation types are Create, Reconcile, Delete, Migrate, and Restore. Allowed operation states are Processing, Succeeded, Error, Failed, Pending, and Aborted. An operation in Error state is an operation that will be retried for a configurable amount of time (the controllers.shoot.retryDuration field in GardenletConfiguration, defaults to 12h). If the operation cannot complete successfully for the configured retry duration, it will be marked as Failed. An operation in Failed state is an operation that won’t be retried automatically (to retry such an operation, see Retry failed operation).
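For example, a failed operation can be retried via the gardener.cloud/operation annotation on the Shoot in the garden cluster (names are placeholders):

kubectl annotate shoot my-shoot -n garden-my-namespace gardener.cloud/operation=retry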
Last Errors
The Shoot status also contains information about the last occurred error(s) (if any) during an operation. A LastError consists of the identifier of the task that returned the error, a human-readable message of the error, and error codes (if any) associated with the error.
Error Codes
Known error codes and their classification are:
Error code | User error | Description |
---|---|---|
ERR_INFRA_UNAUTHENTICATED | true | Indicates that the last error occurred due to the client request not being completed because it lacks valid authentication credentials for the requested resource. It is classified as a non-retryable error code. |
ERR_INFRA_UNAUTHORIZED | true | Indicates that the last error occurred due to the server understanding the request but refusing to authorize it. It is classified as a non-retryable error code. |
ERR_INFRA_QUOTA_EXCEEDED | true | Indicates that the last error occurred due to infrastructure quota limits. It is classified as a non-retryable error code. |
ERR_INFRA_RATE_LIMITS_EXCEEDED | false | Indicates that the last error occurred due to exceeded infrastructure request rate limits. |
ERR_INFRA_DEPENDENCIES | true | Indicates that the last error occurred due to dependent objects on the infrastructure level. It is classified as a non-retryable error code. |
ERR_RETRYABLE_INFRA_DEPENDENCIES | false | Indicates that the last error occurred due to dependent objects on the infrastructure level, but the operation should be retried. |
ERR_INFRA_RESOURCES_DEPLETED | true | Indicates that the last error occurred due to depleted resource in the infrastructure. |
ERR_CLEANUP_CLUSTER_RESOURCES | true | Indicates that the last error occurred due to resources in the cluster that are stuck in deletion. |
ERR_CONFIGURATION_PROBLEM | true | Indicates that the last error occurred due to a configuration problem. It is classified as a non-retryable error code. |
ERR_RETRYABLE_CONFIGURATION_PROBLEM | true | Indicates that the last error occurred due to a retryable configuration problem. “Retryable” means that the occurred error is likely to be resolved in an ungraceful manner after a given period of time. |
ERR_PROBLEMATIC_WEBHOOK | true | Indicates that the last error occurred due to a webhook not following the Kubernetes best practices. |
Please note: Errors classified as User error: true do not require a Gardener operator to resolve but can be remediated by the user (e.g., by refreshing expired infrastructure credentials).
Even though ERR_INFRA_RATE_LIMITS_EXCEEDED and ERR_RETRYABLE_INFRA_DEPENDENCIES are mentioned as User error: false, the operator can’t provide any resolution because they are related to cloud provider issues.
Status Label
Shoots will be automatically labeled with the shoot.gardener.cloud/status label.
Its value might either be healthy, progressing, unhealthy or unknown, depending on the .status.conditions, .status.lastOperation, and .status.lastErrors of the Shoot.
This can be used as an easy filter method to find shoots based on their “health” status.
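For example, listing all unhealthy shoots in a project namespace (namespace name is a placeholder):

kubectl get shoots -n garden-my-namespace -l shoot.gardener.cloud/status=unhealthy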
9 - Shoot Supported Architectures
Supported CPU Architectures for Shoot Worker Nodes
Users can create shoot clusters with worker groups having virtual machines of different architectures. The CPU architecture of each worker pool can be specified in the Shoot specification as follows:
Example Usage in a Shoot
spec:
  provider:
    workers:
    - name: cpu-worker
      machine:
        architecture: <some-cpu-architecture> # optional
If no value is specified for the architecture field, it defaults to amd64. For a valid shoot object, a machine type should be present in the respective CloudProfile with the same CPU architecture as specified in the Shoot yaml. Also, a valid machine image should be present in the CloudProfile that supports the required architecture specified in the Shoot worker pool.
Example Usage in a CloudProfile
spec:
  machineImages:
  - name: test-image
    versions:
    - architectures: # optional
      - <architecture-1>
      - <architecture-2>
      version: 1.2.3
  machineTypes:
  - architecture: <some-cpu-architecture>
    cpu: "2"
    gpu: "0"
    memory: 8Gi
    name: test-machine
Currently, Gardener supports two of the most widely used CPU architectures:
- amd64
- arm64
10 - Shoot Worker Nodes Settings
Shoot Worker Nodes Settings
Users can configure settings affecting all worker nodes via .spec.provider.workersSettings in the Shoot resource.
SSH Access
SSHAccess indicates whether the sshd.service should be running on the worker nodes. This is ensured by a systemd service called sshd-ensurer.service which runs every 15 seconds on each worker node. When set to true, the systemd service ensures that the sshd.service is unmasked, enabled and running. If it is set to false, the systemd service ensures that sshd.service is disabled, masked and stopped. This also terminates all established SSH connections on the host. In addition, when this value is set to false, existing Bastion resources are deleted during Shoot reconciliation and new ones are prevented from being created, SSH keypairs are not created/rotated, SSH keypair secrets are deleted from the Garden cluster, and the gardener-user.service is not deployed to the worker nodes.
sshAccess.enabled is set to true by default.
Example Usage in a Shoot
spec:
  provider:
    workersSettings:
      sshAccess:
        enabled: false
11 - Workerless `Shoot`s
Workerless Shoots
Starting from v1.71, users can create a Shoot without any workers, known as a “workerless Shoot”. Previously, worker nodes always had to be included even if users only needed the Kubernetes control plane. With workerless Shoots, Gardener will not create any worker nodes or anything related to them.
Here’s an example manifest for a local workerless Shoot:
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: local
  namespace: garden-local
spec:
  cloudProfile:
    name: local
  region: local
  provider:
    type: local
  kubernetes:
    version: 1.26.0
⚠️ It’s important to note that a workerless Shoot cannot be converted to a Shoot with workers or vice versa.
As part of the control plane, the following components are deployed in the seed cluster for workerless Shoots:
- etcds
- kube-apiserver
- kube-controller-manager
- gardener-resource-manager
- logging and monitoring components
- extension components (if they support workerless Shoots, see here)