# Accessing Shoot Clusters
After creation of a shoot cluster, end-users require a `kubeconfig` to access it. There are several options available to obtain such a `kubeconfig`.
## `shoots/adminkubeconfig` Subresource
The `shoots/adminkubeconfig` subresource allows users to dynamically generate temporary `kubeconfig`s that can be used to access shoot clusters with `cluster-admin` privileges. The credentials associated with this `kubeconfig` are client certificates which have a very short validity and must be renewed before they expire (by calling the subresource endpoint again).
The username associated with such a `kubeconfig` will be the same as the one used for authenticating to the Gardener API. Apart from this advantage, the created `kubeconfig` will not be persisted anywhere.
In order to request such a `kubeconfig`, you can run the following commands (targeting the garden cluster):

```bash
export NAMESPACE=garden-my-namespace
export SHOOT_NAME=my-shoot
export KUBECONFIG=<kubeconfig for garden cluster> # can be set using "gardenctl target --garden <landscape>"
kubectl create \
    -f <(printf '{"spec":{"expirationSeconds":600}}') \
    --raw /apis/core.gardener.cloud/v1beta1/namespaces/${NAMESPACE}/shoots/${SHOOT_NAME}/adminkubeconfig | \
    jq -r ".status.kubeconfig" | \
    base64 -d
```
You can also use the controller-runtime `client` (>= v0.14.3) to create such a kubeconfig from your Go code like so:

```go
expiration := 10 * time.Minute
expirationSeconds := int64(expiration.Seconds())
adminKubeconfigRequest := &authenticationv1alpha1.AdminKubeconfigRequest{
	Spec: authenticationv1alpha1.AdminKubeconfigRequestSpec{
		ExpirationSeconds: &expirationSeconds,
	},
}
err := client.SubResource("adminkubeconfig").Create(ctx, shoot, adminKubeconfigRequest)
if err != nil {
	return err
}
config = adminKubeconfigRequest.Status.Kubeconfig
```
In Python, you can use the native `kubernetes` client to create such a kubeconfig like this:

```python
# This script first loads an existing kubeconfig from your system and then sends a request
# to the Gardener API to create a new kubeconfig for a shoot cluster.
# The received kubeconfig is then decoded, and a new API client is created for interacting
# with the shoot cluster.
import base64
import json

import yaml
from kubernetes import client, config

# Set configuration options
shoot_name = "my-shoot"                    # Name of the shoot
project_namespace = "garden-my-namespace"  # Namespace of the project

# Load kubeconfig from default ~/.kube/config
config.load_kube_config()
api = client.ApiClient()

# Create kubeconfig request
kubeconfig_request = {
    'apiVersion': 'authentication.gardener.cloud/v1alpha1',
    'kind': 'AdminKubeconfigRequest',
    'spec': {
        'expirationSeconds': 600
    }
}

response = api.call_api(
    resource_path=f'/apis/core.gardener.cloud/v1beta1/namespaces/{project_namespace}/shoots/{shoot_name}/adminkubeconfig',
    method='POST',
    body=kubeconfig_request,
    auth_settings=['BearerToken'],
    _preload_content=False,
    _return_http_data_only=True,
)

decoded_kubeconfig = base64.b64decode(json.loads(response.data)["status"]["kubeconfig"]).decode('utf-8')
print(decoded_kubeconfig)

# Create an API client to interact with the shoot cluster
shoot_api_client = config.new_client_from_config_dict(yaml.safe_load(decoded_kubeconfig))
v1 = client.CoreV1Api(shoot_api_client)
```
> Note: The `gardenctl-v2` tool simplifies targeting shoot clusters. It automatically downloads a kubeconfig that uses the `gardenlogin` kubectl auth plugin. This transparently manages authentication and certificate renewal without containing any credentials.
## `shoots/viewerkubeconfig` Subresource
The `shoots/viewerkubeconfig` subresource works similarly to `shoots/adminkubeconfig`. The difference is that it returns a `kubeconfig` with read-only access for all APIs except the `core/v1.Secret` API and the resources which are specified in the `spec.kubernetes.kubeAPIServer.encryptionConfig` field in the Shoot (see this document).
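For orientation, the `encryptionConfig` field referred to above lists additional resources to be encrypted at rest in etcd. A minimal sketch of how it might appear in the Shoot spec (the resource names here are illustrative):

```yaml
spec:
  kubernetes:
    kubeAPIServer:
      encryptionConfig:
        resources:
        - configmaps
        - statefulsets.apps
```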
In order to request such a `kubeconfig`, you can run almost the same code as above; the only difference is that you need to use the `viewerkubeconfig` subresource. For example, in bash this looks like this:

```bash
export NAMESPACE=garden-my-namespace
export SHOOT_NAME=my-shoot
kubectl create \
    -f <(printf '{"spec":{"expirationSeconds":600}}') \
    --raw /apis/core.gardener.cloud/v1beta1/namespaces/${NAMESPACE}/shoots/${SHOOT_NAME}/viewerkubeconfig | \
    jq -r ".status.kubeconfig" | \
    base64 -d
```
The examples for other programming languages are similar to the above and can be adapted accordingly.
## OpenID Connect
> Note: OpenID Connect is deprecated in favor of Structured Authentication configuration. Setting OpenID Connect configurations is forbidden for clusters with Kubernetes version >= 1.32.
The `kube-apiserver` of shoot clusters can be provided with OpenID Connect configuration via the Shoot spec:

```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
...
spec:
  kubernetes:
    kubeAPIServer:
      oidcConfig:
        ...
```
It is the end-user’s responsibility to incorporate the OpenID Connect configurations in the `kubeconfig` for accessing the cluster (i.e., Gardener will not automatically generate the `kubeconfig` based on these OIDC settings). The recommended way is to use the `kubectl` plugin called `kubectl oidc-login` for OIDC authentication.
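With `kubectl oidc-login`, the OIDC flow is usually wired into the kubeconfig as an exec credential plugin. A minimal sketch of the user entry (issuer URL and client ID are placeholders):

```yaml
users:
- name: my-shoot-oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://issuer.example.com
      - --oidc-client-id=my-client-id
```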
If you want to use the same OIDC configuration for all your shoots by default, then you can use the `ClusterOpenIDConnectPreset` and `OpenIDConnectPreset` API resources. They allow defaulting the `.spec.kubernetes.kubeAPIServer.oidcConfig` fields for newly created `Shoot`s such that you don’t have to repeat yourself every time (similar to `PodPreset` resources in Kubernetes).
OIDC configuration specified by a `ClusterOpenIDConnectPreset` applies to `Project`s and `Shoot`s cluster-wide (hence, it is only available to Gardener operators), while `OpenIDConnectPreset` is `Project`-scoped.
Shoots have to “opt in” for such defaulting by using the `oidc=enable` label.
For further information on `(Cluster)OpenIDConnectPreset`, refer to ClusterOpenIDConnectPreset and OpenIDConnectPreset.
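As a sketch, a Project-scoped `OpenIDConnectPreset` might look like the following (all values are placeholders); note the shoot selector matching the `oidc: enable` label:

```yaml
apiVersion: settings.gardener.cloud/v1alpha1
kind: OpenIDConnectPreset
metadata:
  name: my-oidc-preset
  namespace: garden-my-namespace
spec:
  shootSelector:
    matchLabels:
      oidc: enable
  server:
    clientID: my-client-id
    issuerURL: https://issuer.example.com
  weight: 90
```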
For shoots with Kubernetes version >= 1.30 which have the `StructuredAuthenticationConfiguration` feature gate enabled (enabled by default), it is advised to use Structured Authentication instead of configuring `.spec.kubernetes.kubeAPIServer.oidcConfig`. If `oidcConfig` is configured, it is translated into an `AuthenticationConfiguration` file to use for the Structured Authentication configuration.
## Structured Authentication
For shoots with Kubernetes version >= 1.30 which have the `StructuredAuthenticationConfiguration` feature gate enabled (enabled by default), the `kube-apiserver` of shoot clusters can be provided with Structured Authentication configuration via the Shoot spec:

```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
...
spec:
  kubernetes:
    kubeAPIServer:
      structuredAuthentication:
        configMapName: name-of-configmap-containing-authentication-config
```
The `configMapName` references a user-created `ConfigMap` in the project namespace containing the `AuthenticationConfiguration` in its `config.yaml` data field. Here is an example of such a `ConfigMap`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-configmap-containing-authentication-config
  namespace: garden-my-project
data:
  config.yaml: |
    apiVersion: apiserver.config.k8s.io/v1beta1
    kind: AuthenticationConfiguration
    jwt:
    - issuer:
        url: https://issuer1.example.com
        audiences:
        - audience1
        - audience2
      claimMappings:
        username:
          expression: 'claims.username'
        groups:
          expression: 'claims.groups'
        uid:
          expression: 'claims.uid'
      claimValidationRules:
      - expression: 'claims.hd == "example.com"'
        message: "the hosted domain name must be example.com"
```
The user is responsible for the validity of the configured `JWTAuthenticator`s.
Be aware that changes to the configuration in the `ConfigMap` are applied during the next `Shoot` reconciliation, which is not automatically triggered. If you want the changes to roll out immediately, trigger a reconciliation explicitly (e.g., by annotating the `Shoot` with `gardener.cloud/operation=reconcile`).
## Structured Authorization
For shoots with Kubernetes version >= 1.30 which have the `StructuredAuthorizationConfiguration` feature gate enabled (enabled by default), the `kube-apiserver` of shoot clusters can be provided with Structured Authorization configuration via the Shoot spec:

```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
...
spec:
  kubernetes:
    kubeAPIServer:
      structuredAuthorization:
        configMapName: name-of-configmap-containing-authorization-config
        kubeconfigs:
        - authorizerName: my-webhook
          secretName: webhook-kubeconfig
```
The `configMapName` references a user-created `ConfigMap` in the project namespace containing the `AuthorizationConfiguration` in its `config.yaml` data field. Here is an example of such a `ConfigMap`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-configmap-containing-authorization-config
  namespace: garden-my-project
data:
  config.yaml: |
    apiVersion: apiserver.config.k8s.io/v1beta1
    kind: AuthorizationConfiguration
    authorizers:
    - type: Webhook
      name: my-webhook
      webhook:
        timeout: 3s
        subjectAccessReviewVersion: v1
        matchConditionSubjectAccessReviewVersion: v1
        failurePolicy: Deny
        matchConditions:
        - expression: request.resourceAttributes.namespace == 'kube-system'
```
In addition, it is required to provide a `Secret` for each authorizer. This `Secret` should contain a kubeconfig with the server address of the webhook server, and optionally credentials for authentication:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: webhook-kubeconfig
  namespace: garden-my-project
data:
  kubeconfig: <base64-encoded-kubeconfig-for-authz-webhook>
```
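The value under the `data.kubeconfig` key must be the base64-encoded kubeconfig document. A minimal Python sketch for producing it (the webhook server address below is illustrative):

```python
import base64

# Plain-text kubeconfig for the authorization webhook (server address is illustrative).
kubeconfig_text = """\
apiVersion: v1
kind: Config
clusters:
- name: authz-webhook
  cluster:
    server: https://webhook.example.com/authorize
contexts:
- name: authz-webhook
  context:
    cluster: authz-webhook
current-context: authz-webhook
"""

# Secret `data` fields carry base64-encoded bytes; this is the value for `data.kubeconfig`.
encoded = base64.b64encode(kubeconfig_text.encode("utf-8")).decode("ascii")
print(encoded)

# Round-trip check: decoding restores the original document.
assert base64.b64decode(encoded).decode("utf-8") == kubeconfig_text
```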
The user is responsible for the validity of the configured authorizers.
Be aware that changes to the configuration in the `ConfigMap` are applied during the next `Shoot` reconciliation, which is not automatically triggered. If you want the changes to roll out immediately, trigger a reconciliation explicitly (e.g., by annotating the `Shoot` with `gardener.cloud/operation=reconcile`).
> ℹ️ Note
>
> You can have one or more authorizers of type `Webhook` (no other types are supported). You are not allowed to specify the `authorizers[].webhook.connectionInfo` field. Instead, as mentioned above, provide a kubeconfig file containing the server address (and optionally, credentials that can be used by `kube-apiserver` in order to authenticate with the webhook server) by creating a `Secret` containing the kubeconfig (in the `.data.kubeconfig` key). Reference this `Secret` by adding it to `.spec.kubernetes.kubeAPIServer.structuredAuthorization.kubeconfigs[]` (choose the proper `authorizerName`, see the example above).
Be aware that all webhook authorizers are added only after the `RBAC`/`Node` authorizers. Hence, if RBAC already allows a request, your webhook authorizer might not get called.
## Static Token Kubeconfig
> Note: Static token kubeconfig is not available for Shoot clusters using Kubernetes version >= 1.27. The `shoots/adminkubeconfig` subresource should be used instead.
This `kubeconfig` contains a static token and provides `cluster-admin` privileges. It is created by default and persisted in the `<shoot-name>.kubeconfig` secret in the project namespace in the garden cluster.

```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
...
spec:
  kubernetes:
    enableStaticTokenKubeconfig: true
...
```
It is not the recommended method to access the shoot cluster, as the static token `kubeconfig` has some security flaws associated with it:

- The static token in the `kubeconfig` doesn’t have any expiration date. Read Credentials Rotation for Shoot Clusters to learn how to rotate the static token.
- The static token doesn’t have any user identity associated with it. The user in that token will always be `system:cluster-admin`, irrespective of the person accessing the cluster. Hence, it is impossible to audit the events in the cluster.
When the `enableStaticTokenKubeconfig` field is not explicitly set in the Shoot spec:

- for Shoot clusters using Kubernetes version < 1.26, the field is defaulted to `true`.
- for Shoot clusters using Kubernetes version >= 1.26, the field is defaulted to `false`.
> Note: Starting with Kubernetes 1.27, the `enableStaticTokenKubeconfig` field will be locked to `false`.