Why Kubernetes

Deployment, scaling, monitoring, service discovery, configuration: we may or may not be aware of these problems when we start, but we definitely have to find a solution for them if we want a sustainable infrastructure for building software that can support increasing levels of success.

There are many approaches to tackling the aforementioned situation, and a good first step is to standardize the application representation from the infrastructure point of view. What I mean by that is: «An application is an application, independently of the underlying technology in which it was implemented». Docker is useful for that since it provides a nice abstraction of what an application is. I shared my thoughts on that topic here: Why Docker.

At some point, applications are ready for deployment. For that we need infrastructure (hosts), and at the same time we need an architecture to support deployment. A basic infrastructure architecture might look something like this:

Basic infrastructure architecture.

There are many ways in which we can scale the above infrastructure, but the complexity of management and deployment can grow depending on which solution we pick.

Kubernetes (shortened to k8s) is a tool for automating the deployment, scaling, and management of containers across clusters of hosts. It creates an abstraction on top of those clusters so you can treat them as a single resource. Container operations (deployment, scaling, etc.) on top of the Kubernetes cluster are performed through its API; internally, Kubernetes schedules each container onto a host based on several parameters, such as the resources available on the different hosts of the cluster.

One of the biggest advantages of using Kubernetes is its declarative approach to defining and managing the state of containers/applications (for example, the number of instances). You define a YAML or JSON file with the desired state, call the API to create or update that state, and the actions needed to make it a reality are performed internally. I will explain a little more about that later.
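As a rough sketch of what such a declarative definition can look like, here is a hypothetical Deployment that asks Kubernetes to keep three replicas of an Nginx container running (the name, labels and image are illustrative, not taken from a real project):

    # deployment.yaml: hypothetical example; name, labels and image are illustrative
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3                  # desired state: three instances of the application
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: nginx:1.25    # any container image works here
              ports:
                - containerPort: 80

Submitting this file to the API (for example with kubectl apply -f deployment.yaml) records the desired state; Kubernetes then works internally to make the cluster match it.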

Kubernetes Concepts

API Server

The API Server is the interface you will use for defining the state of the cluster. Internally, the API server stores the desired state in etcd, a highly available data store; the actions needed to reach the desired state are performed by controllers.

Controllers

After you have defined the desired state for the cluster, controllers are in charge of making the cluster match that state.

Standard out-of-the-box controllers ship as part of kube-controller-manager and cloud-controller-manager.

From the Kubernetes documentation: for simplicity, you can think of this as the following loop:

1. What is the current state of the cluster (X)?
2. What is the desired state of the cluster (Y)?
3. X == Y ?
   true - Do nothing.
   false - Perform tasks to get to Y (such as starting or restarting containers, or scaling the number of replicas of a given application).
(Return to 1)
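For instance, taking the hypothetical Deployment sketched earlier and changing only its desired replica count illustrates the loop: after re-applying the file, the controllers notice the mismatch and act on it.

    # Hypothetical edit of the earlier deployment.yaml: only the desired state changes.
    # Current state: 3 running Pods. Desired state: 5. The controllers start 2 more Pods.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 5                  # was 3; nothing else needs to change
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: nginx:1.25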

Kubernetes Objects

The basic Kubernetes objects are:

  • Pod
  • Service
  • Volume
  • Namespace
  • Ingress

These are some of the basic objects, but there are many more.

Pod

A Pod is a set of containers with shared storage and network. Containers within a Pod share an IP address and port space and can find each other via localhost. You can think of them as several processes running on the same OS.
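A minimal Pod definition might look like this (a hypothetical sketch; the name and image are only illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-pod
    spec:
      containers:
        - name: web
          image: nginx:1.25        # illustrative image
          ports:
            - containerPort: 80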

There are interesting applications for Pods beyond serving or running a single application, for example «Sidecar Containers», which enhance the functionality of the main container. An example I created is a Pod running the Hugo static site framework. The Pod contains 3 containers that share a common files volume: git-sync keeps the master branch of the site up to date, ottogiron/hugo regenerates the site when new changes are found, and a standard Nginx image serves the generated static content. You can check the example here: https://github.com/ottogiron/hugo-k8s-test.
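A simplified sketch of that kind of multi-container Pod could look like the following; the image references, volume name and mount paths are illustrative and may differ from the actual repository:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hugo-site
    spec:
      volumes:
        - name: site-files              # shared by all three containers
          emptyDir: {}
      containers:
        - name: git-sync                # sidecar: keeps the repository checkout up to date
          image: registry.k8s.io/git-sync/git-sync:v4.2.3   # illustrative tag; repo/branch config omitted
          volumeMounts:
            - name: site-files
              mountPath: /data
        - name: hugo                    # sidecar: regenerates the site when files change
          image: ottogiron/hugo
          volumeMounts:
            - name: site-files
              mountPath: /data
        - name: nginx                   # main container: serves the generated static content
          image: nginx:1.25
          volumeMounts:
            - name: site-files
              mountPath: /usr/share/nginx/html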

Some other applications are described here: https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/

Service

Pods are created dynamically and have a limited lifespan; that is by design, because you can scale them up or destroy them on demand.

Here comes the Service. A Service is an abstraction that defines a logical set of Pods and a stable way to access them; you can also think of it as what we usually call a microservice.
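As a sketch, a Service that exposes the Pods labeled app: my-app from the earlier hypothetical Deployment could be defined like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      selector:
        app: my-app         # matches the Pods created by the Deployment above
      ports:
        - port: 80          # port exposed by the Service
          targetPort: 80    # port the containers listen on

Pods come and go, but the Service name stays stable, so other applications in the cluster can keep reaching it regardless of which Pods are currently behind it.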

Volume

Files inside containers are ephemeral, which means that if a container (inside a Pod) is destroyed, all of its files are destroyed as well.

Volumes allow us to define a file system volume with an explicit lifetime and an explicit source, using volume types such as a distributed file system like NFS or a cloud bucket on AWS or GCP. The implementation of volume sources is based on plugins, so there are some interesting implementations such as gitRepo. You can check the available types here.
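For example, a Pod mounting an NFS volume might be declared like this (the server address and paths are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nfs-example
    spec:
      volumes:
        - name: shared-data
          nfs:
            server: nfs.example.com    # placeholder NFS server
            path: /exports/data        # placeholder exported path
      containers:
        - name: app
          image: nginx:1.25
          volumeMounts:
            - name: shared-data
              mountPath: /usr/share/nginx/html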

Namespaces

Namespaces are intended for use in environments with many users spread across multiple teams or projects. When you first create a cluster, there is by default a pre-existing namespace called «default».
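As a sketch, creating a namespace and placing an object inside it only takes a few extra lines (the names are illustrative):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a
    ---
    # Any object can then declare which namespace it belongs to.
    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-pod
      namespace: team-a
    spec:
      containers:
        - name: web
          image: nginx:1.25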

Ingress

By default, Services in a cluster are not accessible from the outside; you must explicitly expose them, and there are many ways of achieving that. An Ingress is a collection of rules that allow inbound connections to reach the cluster Services; you can think of it as something like an NGINX configuration that lets you define different rules for accessing the available Services.
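A sketch of an Ingress that routes requests for a hypothetical host to the Service defined earlier might look like this (the host name and path are illustrative):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
    spec:
      rules:
        - host: my-app.example.com      # illustrative host name
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app        # the Service sketched earlier
                    port:
                      number: 80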

Conclusion

When we create a new application, there are implicitly many things we need to take into account in order to expose it as a service to our end users, and not only that, but also to make sure it meets the quality standards it needs to be successful. Networking, load balancing, security, monitoring, orchestration, and service discovery are some of the most important topics. Kubernetes already takes many of these problems into account and solves them for us, letting us focus on what is important: building a great product.

