The weekly kube


Hibernate a Cluster to save money

Updated by Andreas Herz

Do you want to experiment with Kubernetes, or have you set up a customer scenario, but you don’t want to run the cluster 24/7 for cost reasons?

Gardener gives you the ability to scale your cluster down to zero nodes.
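Hibernation can even be put on a schedule in the cluster’s Shoot manifest. A minimal sketch, assuming the core.gardener.cloud/v1beta1 Shoot API; the API version and field names may differ in your Gardener release:

    apiVersion: core.gardener.cloud/v1beta1
    kind: Shoot
    metadata:
      name: my-cluster           # placeholder cluster name
    spec:
      hibernation:
        enabled: false           # set to true to hibernate right away
        schedules:
        - start: "0 19 * * 1-5"  # hibernate at 19:00 on weekdays (cron syntax)
          end: "0 7 * * 1-5"     # wake up at 07:00 on weekdays
          location: "Europe/Berlin"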

..read some more on Hibernate a Cluster.


ReadWriteMany - Dynamically Provisioned Persistent Volumes Using Amazon EFS

Updated by Andreas Herz

The efs-provisioner allows you to mount EFS storage as PersistentVolumes in Kubernetes. It consists of a container that has access to an AWS EFS resource. The container reads a ConfigMap containing the EFS filesystem ID, the AWS region, and a name identifying the efs-provisioner. This name is used later when you create a storage class.
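To give an idea of the wiring, here is a minimal sketch based on the efs-provisioner example from kubernetes-incubator/external-storage; the filesystem ID, region, and all names are placeholders:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: efs-provisioner
    data:
      file.system.id: fs-12345678            # placeholder EFS filesystem ID
      aws.region: eu-central-1               # region the filesystem lives in
      provisioner.name: example.com/aws-efs  # referenced by the StorageClass below
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: aws-efs
    provisioner: example.com/aws-efs         # must match provisioner.name above
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: efs
      annotations:
        volume.beta.kubernetes.io/storage-class: "aws-efs"
    spec:
      accessModes:
        - ReadWriteMany                      # shared access across nodes
      resources:
        requests:
          storage: 1Mi                       # EFS is elastic; the size is nominal

Pods on any node can then mount the efs claim, and several pods can write to it at the same time.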

Why EFS

  1. When you have applications running on multiple nodes that require shared access to a file system.
  2. When you have an application that requires multiple virtual machines to access the same file system at the same time.
  3. EFS supports encryption.
  4. EFS is SSD-based storage whose capacity and pricing scale automatically with demand, so the system administrator does not need to provision or resize volumes. It can grow to petabyte scale.
  5. EFS now supports NFSv4 lock upgrading and downgrading, so you can even use SQLite with EFS.
  6. Easy to set up.

Why Not EFS

  1. Using a managed service like EFS brings vendor lock-in, with all its downsides.
  2. Making an EFS backup may decrease your production file system performance; the throughput used by the backup counts towards your total file system throughput.
  3. EFS is expensive compared to EBS (roughly twice the price of EBS storage).
  4. EFS is not a magical solution for all your distributed file system problems; it can be slow in many cases. Test, benchmark, and measure to ensure that EFS is a good solution for your use case.
  5. EFS’s distributed architecture adds latency overhead to each file read/write operation.
  6. If you can use a CDN, do so; reserve EFS for the files that can’t be served from a CDN.
  7. Don’t use EFS as a caching system; be aware that you could be doing this unintentionally.
  8. Last but not least, even though EFS is a fully managed NFS service, you will face performance problems in many cases, and resolving them takes time and effort.

..read some more on ReadWriteMany.


Anti-Patterns

Updated by Andreas Herz

Running as root user

Whenever possible, do not run containers as the root user. One could be tempted to assume that Kubernetes Pods and Nodes are well separated, but the host and the container share the same kernel: if the container is compromised, a root user can damage the underlying node. Use RUN groupadd -r anygroup && useradd -r -g anygroup myuser to create a group and a user in it, and use the USER instruction to switch to this user.
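Kubernetes can also enforce this on its side. A minimal sketch of a Pod securityContext that refuses to run root containers; the image and names are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: non-root-example
    spec:
      securityContext:
        runAsNonRoot: true    # kubelet refuses to start a container running as root
        runAsUser: 1000       # run the process as UID 1000
      containers:
      - name: app
        image: myapp:latest   # placeholder image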

Storing data or logs in containers

Containers are ideal for stateless applications and should be transient. This means that no data or logs should be stored in the container, as they are lost when the container terminates. If absolutely necessary, you can use persistent volumes to keep them outside the containers. However, an ELK stack is preferred for storing and processing log files.
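A minimal sketch of keeping data outside the container by mounting an existing PersistentVolumeClaim; the claim name, image, and path are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: stateful-example
    spec:
      containers:
      - name: app
        image: myapp:latest        # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/app  # writes here outlive the container
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data      # an existing claim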

..read some more on Common Kubernetes Antipattern.


Frontend HTTPS

Updated by Andreas Herz

For encrypted communication between the client and the load balancer, you need to specify a TLS private key and certificate to be used by the ingress controller.

Create a secret in the namespace of the ingress containing the TLS private key and certificate. Then configure the secret name in the TLS configuration section of the ingress specification.
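A minimal sketch, assuming the networking.k8s.io/v1 Ingress API; hosts, names, and the truncated base64 data are placeholders (the secret can also be created with kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key):

    apiVersion: v1
    kind: Secret
    metadata:
      name: tls-secret
      namespace: my-namespace        # same namespace as the ingress
    type: kubernetes.io/tls
    data:
      tls.crt: LS0tLS1CRUdJTi...     # base64-encoded certificate (truncated)
      tls.key: LS0tLS1CRUdJTi...     # base64-encoded private key (truncated)
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-ingress
      namespace: my-namespace
    spec:
      tls:
      - hosts:
        - myapp.example.com
        secretName: tls-secret       # references the secret above
      rules:
      - host: myapp.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service     # placeholder backend service
                port:
                  number: 80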

..read on HTTPS - Self Signed Certificates how to configure it.


Shared storage with S3 backend

Updated by Andreas Herz

Storage is definitely the most complex and important part of an application setup. Once this part is completed, one of the most problematic parts of the setup is solved.

Mounting an S3 bucket into a pod using FUSE allows you to access data stored in S3 via the filesystem. The mount is a pointer to an S3 location, so the data is never synced locally. Once mounted, any pod can read from or even write to that directory without the need for explicit keys.

It can also be used, for example, to import and parse large amounts of data into a database.
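A rough sketch of the pattern with an s3fs-fuse sidecar; the image name and bucket are assumptions, and FUSE inside a container needs privileged access plus mount propagation:

    apiVersion: v1
    kind: Pod
    metadata:
      name: s3-fuse-example
    spec:
      containers:
      - name: s3fs
        image: example/s3fs            # hypothetical image with s3fs-fuse installed
        securityContext:
          privileged: true             # FUSE needs /dev/fuse and mount rights
        command: ["s3fs", "my-bucket", "/mnt/s3", "-f", "-o", "iam_role=auto"]
        volumeMounts:
        - name: s3
          mountPath: /mnt/s3
          mountPropagation: Bidirectional    # propagate the FUSE mount to the host
      - name: app
        image: busybox
        command: ["sh", "-c", "ls /data && sleep 3600"]
        volumeMounts:
        - name: s3
          mountPath: /data
          mountPropagation: HostToContainer  # receive the mount from the host
      volumes:
      - name: s3
        emptyDir: {}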

..read on Shared S3 Storage how to configure it.


Namespace Isolation

Updated by Andreas Herz

…or DENY all traffic from other namespaces

You can configure a NetworkPolicy to deny all traffic from other namespaces while allowing all traffic coming from the same namespace the pod is deployed to. There are many reasons why you may choose to configure Kubernetes network policies:

  • Isolate multi-tenant deployments
  • Regulatory compliance
  • Ensure containers assigned to different environments (e.g. dev/staging/prod) cannot interfere with one another
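A minimal sketch of such a policy; the namespace name is a placeholder, and an empty podSelector matches every pod in that namespace:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: deny-from-other-namespaces
      namespace: my-namespace   # the namespace to isolate
    spec:
      podSelector: {}           # apply to all pods in this namespace
      ingress:
      - from:
        - podSelector: {}       # allow traffic only from pods in the same namespace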

..read on Namespace Isolation how to configure it.


Namespace Scope

Updated by Andreas Herz

Should I use:

  • ❌ one namespace per user/developer?
  • ❌ one namespace per team?
  • ❌ one per service type?
  • ❌ one namespace per application type?
  • 😄 one namespace per running instance of your application?

Apply the Principle of Least Privilege

All user accounts should run with as few privileges as possible at all times, and should launch applications with as few privileges as possible. If you share a cluster among different users separated by namespaces, all users have access to all namespaces and services by default. It can happen that a user accidentally uses and destroys the namespace of a productive application or the namespace of another developer.

Keep in mind that, by default, namespaces don’t provide:

  • Network isolation
  • Access control
  • Audit logging on the user level
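One common way to enforce least privilege is Kubernetes RBAC. A minimal sketch that confines a user to a single namespace; all names are placeholders:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: developer
      namespace: app-dev        # privileges apply only inside this namespace
    rules:
    - apiGroups: [""]
      resources: ["pods", "services", "configmaps"]
      verbs: ["get", "list", "watch", "create", "update", "delete"]
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["get", "list", "watch", "create", "update", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: developer-binding
      namespace: app-dev
    subjects:
    - kind: User
      name: jane                # placeholder user
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: developer
      apiGroup: rbac.authorization.k8s.io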


Watching logs of several pods

Updated by Andreas Herz

One thing that always bothered me was that I couldn’t get logs of several pods at once with kubectl. A simple tail -f <path-to-logfile> isn’t possible. Certainly you can use kubectl logs -f <pod-id>, but it doesn’t help if you want to monitor more than one pod at a time.

This is something you really need a lot, at least if you run several instances of a pod behind a deployment and you haven’t set up a log-viewer service like Kibana.

kubetail comes to the rescue. It is a small bash script that allows you to aggregate the log files of several pods at the same time in a simple way, and it is available at https://github.com/johanhaleby/kubetail.
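For example, running kubetail my-app tails all matching pods in the current namespace at once; see the project README for the exact matching rules and further options.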


Big things come in small packages

Updated by Andreas Herz

Microservices tend to use smaller runtimes, but you can also bring the stack you have today - and that can be a problem in Kubernetes.

Switching your architecture from a monolith to microservices has many advantages, both in the way you write software and the way it is used throughout its lifecycle. In this post, I attempt to cover one problem that does not get much attention and discussion: the size of the technology stack.

General purpose technology stack

There is a tendency to generalize in development and to apply the same pattern to all services. A homogeneous technology stack feels like a good thing because it is the same for all services.

However, one forgets that a large percentage of the integrated infrastructure is not used by all services in the same way and is therefore only a burden. Resources are wasted, the entire application becomes expensive to operate, and it scales badly.

Light technology stack

Due to the lightweight nature of your services, you can run more containers on a physical server or virtual machine. The result is higher resource utilization.

Additionally, microservices are developed and deployed as containers independently of one another. This means that a development team can develop, optimize, and deploy a microservice without impacting other subsystems.


Kubernetes is available in Docker for Mac 17.12 CE

Updated by Andreas Herz


Kubernetes is only available in Docker for Mac 17.12 CE and higher on the Edge channel. Kubernetes support is not included in Docker for Mac Stable releases. To find out more about the Stable and Edge channels and how to switch between them, see the general configuration page.

Docker for Mac 17.12 CE (and higher) Edge includes a standalone Kubernetes server that runs on your Mac, so that you can test deploying your Docker workloads on Kubernetes.

The Kubernetes client command, kubectl, is included and configured to connect to the local Kubernetes server. If you already have kubectl installed and pointing to some other environment, such as minikube or a GKE cluster, be sure to change context so that kubectl is pointing to docker-for-desktop.
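According to the Docker documentation, the context switch is done with kubectl config use-context docker-for-desktop.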

…see more on Docker.com

I recommend setting up your shell to show which KUBECONFIG is active.


What’s New

Updated by Andreas Herz

Report an issue

See a typo? Have a picture to recommend? Want to edit some words, phrases, or sentences? You can simply submit a ticket to request that we make the change. If you are GitHub savvy, submit a pull request. Open GitHub Issue