Gardener can automatically scale a cluster's machines; from the point of view
of scalability, there is no need for manual intervention.
This tutorial is useful for end-users who need specifically configured nodes that are not yet supported
by Gardener, for example an end-user who wants to run a workload that requires runnc instead of runc as the container runtime.
In summer 2018, the Gardener project team asked Kinvolk to execute several penetration tests in its role as third-party contractor. The goal of this ongoing work is to increase the security of
all Gardener stakeholders in the open source community. Following the Gardener architecture, the control plane of a
Gardener managed shoot cluster resides in the corresponding seed cluster. The control plane is thus
separated from the shoot's worker nodes by a network air gap.
Along the way we found various kinds of security issues, for example due to misconfiguration or missing isolation,
as well as two special problems with upstream Kubernetes and Gardener's Control-Plane-as-a-Service architecture.
The efs-provisioner allows you to mount EFS storage as PersistentVolumes in Kubernetes. It consists of a container
that has access to an AWS EFS resource. The container reads a ConfigMap containing the EFS filesystem ID, the
AWS region, and a name identifying the efs-provisioner. This name will be used later when you create a
storage class.
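As a sketch, the ConfigMap read by the efs-provisioner might look like this (the filesystem ID, region, and provisioner name below are placeholder values):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-0123456789abcdef0   # placeholder EFS filesystem ID
  aws.region: eu-west-1                  # placeholder region
  provisioner.name: example.com/aws-efs  # referenced later by the storage class
```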
When you have an application that requires multiple virtual machines to access the same file system at the same time,
AWS EFS is a tool that you can use.
EFS supports encryption.
EFS is SSD-based storage, and its capacity and pricing scale automatically as you add or remove data, so there is no need
for the system administrator to perform additional operations. It can grow to petabyte scale.
EFS now supports NFSv4 lock upgrading and downgrading, so yes, you can use SQLite with EFS.
Easy to set up
Why Not EFS
Sometimes when you think about using a service like EFS, you may also think about vendor lock-in and its downsides.
Making an EFS backup may decrease your production file system's performance; the throughput used by backups counts towards
your total file system throughput.
EFS is expensive compared to EBS (roughly twice the price of EBS storage)
EFS is not the magical solution for all your distributed file system problems; it can be slow in many cases. Test, benchmark,
and measure to ensure EFS is a good solution for your use case.
EFS's distributed architecture results in a latency overhead for each file read/write operation.
If you have the possibility to use a CDN, do so, and use EFS only for the files that can't be served from a CDN.
Don’t use EFS as a caching system; sometimes you could be doing this unintentionally.
Last but not least, even though EFS is a fully managed NFS service, you will face performance problems in many cases,
and resolving them takes time and effort.
Whenever possible, do not run containers as the root user. One could be
tempted to assume that Kubernetes Pods and Nodes are well separated, but the host and the container
share the same kernel. If the container is compromised, a root user can damage the underlying
node. Use RUN groupadd -r anygroup && useradd -r -g anygroup myuser to create a group
and a user in it. Use the USER instruction to switch to this user.
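A minimal Dockerfile sketch of the two instructions above (the base image is an arbitrary choice):

```dockerfile
FROM debian:bookworm-slim

# create a system group and an unprivileged user in it
RUN groupadd -r anygroup && useradd -r -g anygroup myuser

# switch: everything from here on, including the container process, runs as myuser
USER myuser
```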
Storing data or logs in containers
Containers are ideal for stateless applications
and should be transient. This means that no data or logs should be stored in the
container, as they are lost when the container terminates. If persistence is absolutely necessary,
use persistent volumes to keep the data outside the containers.
However, an ELK stack is preferred for storing and processing log files.
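As a minimal sketch, a pod could mount a PersistentVolumeClaim instead of writing into its own filesystem (all names and the image below are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data             # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.25      # stand-in image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app   # data written here survives container restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```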
Storage is definitely the most complex and important part of an application setup; once this part is completed,
one of the most problematic parts is solved.
Mounting an S3 bucket into a pod using FUSE allows you to access data stored in S3 via
the filesystem. The mount is a pointer to an S3 location, so the data is never synced locally. Once mounted, any pod
can read from, or even write to, that directory without the need for explicit keys.
It can be used, for example, to import and parse large amounts of data into a database.
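As an illustration, with the s3fs-fuse tool such a mount could be created like this (the bucket name and mount point are placeholders; credentials here are assumed to come from the instance's IAM role):

```shell
# mount the hypothetical bucket "my-bucket" at /mnt/s3 using s3fs-fuse;
# iam_role=auto picks up the instance profile, allow_other lets other
# users (e.g. the container process) access the mount
s3fs my-bucket /mnt/s3 -o iam_role=auto -o allow_other
```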
You can configure a NetworkPolicy to deny all traffic from other namespaces while allowing all traffic
coming from the namespace the pod is deployed to. There are many reasons why you may choose to configure Kubernetes
network policies:
- Isolate multi-tenant deployments
- Regulatory compliance
- Ensure containers assigned to different environments (e.g. dev/staging/prod) cannot interfere with one another
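A NetworkPolicy implementing the behavior described above (deny ingress from other namespaces, allow it from the pod's own namespace) could look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector: {}            # applies to every pod in this namespace
  ingress:
    - from:
        - podSelector: {}    # allow traffic only from pods in the same namespace
```

Because the `from` clause uses a podSelector (not a namespaceSelector), it only matches pods in the policy's own namespace; everything else is denied by the policy's existence.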
All user accounts should run with as few privileges as possible at all times, and applications
should likewise be launched with as few privileges as possible. If you share a cluster between
different users separated by namespaces, every user has access to all namespaces and
services by default. It can happen that a user accidentally uses and destroys the
namespace of a production application or the namespace of another developer.
Keep in mind: by default, namespaces don't provide:
- Network isolation
- Access control
- Audit logging at the user level
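Access control, for example, has to be added explicitly with RBAC. A sketch of scoping a user to a single namespace (the namespace, user, and role names below are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: team-a          # hypothetical namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments", "services"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: team-a
subjects:
  - kind: User
    name: jane               # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

With this binding, the user can work inside team-a but has no access to other namespaces.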
One thing that always bothered me was that I couldn't get the logs of several pods at once with kubectl. A simple
tail -f <path-to-logfile> isn't possible. You can certainly use kubectl logs -f <pod-id>, but it doesn't
help if you want to monitor more than one pod at a time.
This is something you need a lot, at least if you run several instances of a pod behind a deployment
and you haven't set up a log viewer service like Kibana.
kubetail comes to the rescue: it is a small bash script that allows you to aggregate the log files of several pods at
the same time in a simple way. The script is called kubetail and is available on GitHub.
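The core idea can be sketched in a few lines of bash; local files stand in here for the per-pod log streams that kubetail reads via kubectl logs:

```shell
# minimal sketch of what kubetail automates: interleave several log
# streams, prefixing each line with its source pod
tmp=$(mktemp -d)
echo "GET /healthz 200" > "$tmp/web-1.log"
echo "GET /healthz 200" > "$tmp/web-2.log"

combined=$(
  for f in "$tmp"/*.log; do
    pod=$(basename "$f" .log)
    sed "s/^/[$pod] /" "$f"   # kubetail does this per "kubectl logs -f" stream
  done
)
echo "$combined"
# prints:
# [web-1] GET /healthz 200
# [web-2] GET /healthz 200
```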
Microservices tend to use smaller runtimes, but you can bring along what you have today, and this can be
a problem in Kubernetes.
Switching your architecture from a monolith to microservices has many advantages, both in the
way you write software and the way it is used throughout its lifecycle. In this post, my attempt is to
cover one problem which does not get as much attention and discussion: the size of the technology stack.
General purpose technology stack
There is a tendency to generalize in development and to apply the same pattern to all services: a
homogeneous technology stack feels good because it is the same for all services.
One forgets, however, that a large percentage of the integrated infrastructure is not used by all services in
the same way, and is therefore only a burden. As a result, resources are wasted and the entire application becomes
expensive to operate and scales very badly.
Light technology stack
Due to the lightweight nature of your services, you can run more containers on a physical server or virtual
machine. The result is higher resource utilization.
Additionally, microservices are developed and deployed as containers independently of one another. This means that a development
team can develop, optimize, and deploy a microservice without impacting other subsystems.
Kubernetes is available in Docker for Mac 17.12 CE
Kubernetes is only available in Docker for Mac 17.12 CE and higher on the Edge channel. Kubernetes
support is not included in Docker for Mac Stable releases. To find out more about the Stable and Edge channels
and how to switch between them, see the Docker for Mac documentation.
Docker for Mac 17.12 CE (and higher) Edge includes a standalone Kubernetes server that runs on Mac,
so that you can test deploying your Docker workloads on Kubernetes.
The Kubernetes client command, kubectl, is included and configured to connect to the local Kubernetes server.
If you have kubectl already installed and pointing to some other environment, such as minikube or a GKE cluster,
be sure to change context so that kubectl is pointing to docker-for-desktop:
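At the time of this release, the local context is named docker-for-desktop, so the switch looks like this:

```shell
# list the available contexts, then point kubectl at the local cluster
kubectl config get-contexts
kubectl config use-context docker-for-desktop
```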