Container Orchestration


Container technologies are much older than most people expect. One of the earliest ancestors of modern container technologies is the chroot command, introduced in Version 7 Unix in 1979. chroot can be used to isolate a process from the root filesystem, essentially "hiding" files from the process by simulating a new root directory. The isolated environment is a so-called chroot jail: files outside the jail are still present on the system, but the process can no longer access them.
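
The Go sketch below illustrates the same idea (the jail path /tmp/jail is an arbitrary assumption, and the program has to run as root on Linux): after the chroot call, the process only sees files that were placed inside the jail.

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Confine this process to /tmp/jail (assumed to exist and contain some files).
	if err := syscall.Chroot("/tmp/jail"); err != nil {
		fmt.Fprintln(os.Stderr, "chroot failed:", err)
		os.Exit(1)
	}
	// From now on, "/" refers to /tmp/jail on the host.
	if err := os.Chdir("/"); err != nil {
		fmt.Fprintln(os.Stderr, "chdir failed:", err)
		os.Exit(1)
	}
	// Only files inside the jail are visible; the rest of the host
	// filesystem is still there, but unreachable for this process.
	entries, err := os.ReadDir("/")
	if err != nil {
		fmt.Fprintln(os.Stderr, "readdir failed:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```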

To isolate a process even further than chroot can, current Linux kernels provide features like namespaces and cgroups.

Namespaces are used to isolate various resources, for example the network. A network namespace provides a complete abstraction of network interfaces and routing tables, which allows a process to have its own IP address. The Linux kernel provides eight namespaces: pid, net, mnt, ipc, user, uts, cgroup and time.
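
As a rough illustration, the Go sketch below (Linux only, run as root) starts a shell in new uts, pid and net namespaces: inside it, `ip addr` shows only a loopback interface and changing the hostname no longer affects the host. Container runtimes set these flags, among others, when creating a container.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell in its own UTS, PID and network namespaces.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNET,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```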

cgroups are used to organize processes in hierarchical groups and assign them resources like memory and CPU. When you want to limit your application container to, say, 4 GB of memory, cgroups are used under the hood to enforce that limit.
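
A rough sketch of what happens under the hood on a host with cgroup v2, assuming it is run as root; the cgroup name "demo" is arbitrary. Container runtimes create such a group per container and write the configured limits into it.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	// Create a cgroup named "demo" under the cgroup v2 hierarchy.
	cg := "/sys/fs/cgroup/demo"
	if err := os.MkdirAll(cg, 0755); err != nil {
		panic(err)
	}
	// Limit memory to 4 GiB (value in bytes), matching the example above.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("4294967296"), 0644); err != nil {
		panic(err)
	}
	// Moving a PID into cgroup.procs makes the limit apply to that process
	// and everything it spawns.
	pid := strconv.Itoa(os.Getpid())
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(pid), 0644); err != nil {
		panic(err)
	}
	fmt.Println("process", pid, "is now limited to 4 GiB of memory")
}
```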

(Figure: traditional vs. container deployment)

To run industry-standard containers, you don't need to use Docker; you can use any container runtime that implements the OCI runtime-spec. The Open Container Initiative also maintains runC, a reference implementation of such a runtime. This low-level runtime is used by a variety of tools to start containers, including Docker itself.

Container images are what make containers portable and easy to reuse on a variety of systems. Docker describes a container image as follows: “A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.”

Containers started on the same machine always share that machine's kernel. This becomes a risk for the whole system if containers are allowed to call kernel functions, for example to kill other processes or to modify the host network by creating routing rules.

A fairly new attack surface introduced with containers is the use of public images. Two of the most popular public image registries are Docker Hub and Quay. While it's great that they provide publicly accessible images, you have to make sure these images were not modified to include malicious software.

Security in general is not something that can be achieved at the container layer alone. It is a continuous process that needs to be adapted all the time.

The 4C's of Cloud Native security give a rough idea of the layers that need to be protected if you're using containers:

  • Code

  • Container

  • Cluster

  • Cloud

Microservice architecture depends heavily on network communication. Unlike the components of a monolithic application, each microservice implements an interface that is called over the network to make a request. For example, you could have a service that responds with a list of products in an e-commerce application.

Network namespaces allow each container to have its own unique IP address, so multiple applications can open the same network port; for example, you could have multiple containerized web servers that all open port 8080.
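
A minimal sketch of such a products service in Go (the endpoint, port and product data are made up). Each containerized instance binds port 8080 inside its own network namespace, so many copies can run on the same host.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type product struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

func main() {
	// The interface other services call over the network.
	http.HandleFunc("/products", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode([]product{
			{ID: 1, Name: "keyboard"},
			{ID: 2, Name: "monitor"},
		})
	})
	// Every instance can listen on 8080 thanks to per-container network namespaces.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```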

Most modern implementations of container networking are based on the Container Network Interface (CNI). Because networking is such a crucial part of microservices and containers, it can get very complex and opaque for developers and administrators. On top of that, a lot of functionality, like monitoring, access control or encryption of network traffic, is desired when containers communicate with each other.

Instead of implementing all of this functionality in your application, you can start a second container that has it implemented. The software used to manage network traffic this way is called a proxy: a server application that sits between a client and a server and can modify or filter network traffic before it reaches the server. Popular examples are nginx, HAProxy and Envoy.
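
A minimal sketch of such a proxy, using Go's standard library instead of nginx, HAProxy or Envoy (the upstream address and listening port are assumptions): it accepts requests, can log or filter them, and forwards them to the application.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The application server the proxy forwards to (assumed address).
	upstream, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// The proxy sees every request and could also filter, rewrite or encrypt it.
		log.Printf("%s %s from %s", r.Method, r.URL.Path, r.RemoteAddr)
		proxy.ServeHTTP(w, r)
	})

	// Clients talk to the proxy on 9090; the application never sees them directly.
	log.Fatal(http.ListenAndServe(":9090", handler))
}
```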

When a service mesh is used, applications don't talk to each other directly; the traffic is routed through the proxies instead. The most popular service meshes at the moment are Istio and Linkerd. While they differ in implementation, their architecture is the same: the proxies in a service mesh form the data plane, which is where networking rules are implemented and the traffic flow is shaped.

Generally speaking, container images are read-only and consist of different layers that include everything you added during the build phase. That ensures that every time you start a container from an image, you get the same behavior and functionality. When a container is started, a thin read-write layer is placed on top of the image layers, and everything the container writes goes there. The problem is that this read-write layer is lost when the container is deleted, just like the memory of your computer is erased when you shut it down. To persist data, you need to write it to disk.

When you orchestrate a lot of containers, persisting the data on the host where the container was started might not be enough; if the container is rescheduled to another host, the data needs to be accessible there too.

(Figure: storage is provisioned via a central storage system; containers on Server A and Server B share a volume to read and write data.)