Let’s be honest: Kubernetes is cool. And to introduce you to its coolness I’ve started this series. Like the Docker series, we’ll start this one by answering the “why” and then have a look at “what” Kubernetes (also called K8s) is and explore a bit of its architecture.
If you’re not familiar with the concept of containerization you might want to read this post first.
To answer “why” K8s exists, you will have to understand a bit about how deployment with containers works.
The simplest way to understand that is to imagine your containers running on computers present “somewhere”. This “somewhere” is generally referred to as “the cloud”, and services like AWS, Azure, and Google Cloud simply provide you access to these computers. These computers can best be thought of as remote hosting machines of our own, where we can install Docker and run containers. Simple as that.
Now there are a few problems with this way:
The containers you run might shut down and will need to be replaced.
If there is greater traffic you might need to spin up more containers.
You might also want to ensure that no single container is doing all the heavy lifting and that the load is distributed evenly among all running instances.
It is all these problems that K8s aims to solve. Those of you who are aware of services like AWS ECS might argue that they play a similar role, so why bother with K8s?
Yes, you’re right in saying that these services can solve this problem, but this, in turn, would mean that you would have to learn that particular service, and if you want to switch to something else in the future, learn that new one too. So why not familiarize yourself with a standardized way that will work regardless of the provider you choose?
That is simply why one would prefer K8s over these services.
You would still need some provider-specific setup with Kubernetes too, but a lot less than what you would need without it.
The official K8s website describes Kubernetes as:
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
This explanation should make a lot of sense now after our discussion on “why” K8s. The gist is that it will make our life easy by helping us with deploying containers, scaling them based on the traffic we receive, and overall management of our containerized application.
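To make this concrete, here is a minimal sketch of a Deployment manifest, the K8s object that covers all three jobs at once. The name `my-app` and the image tag are placeholders I’ve made up for illustration:

```yaml
# deployment.yaml: a minimal sketch; "my-app" and the image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # keep 3 instances running, replacing any that die
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0  # placeholder image
          ports:
            - containerPort: 8080
```

Once applied (e.g. with `kubectl apply -f deployment.yaml`), K8s continuously works to keep three replicas alive, which is exactly the “management” part of the definition above.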
A lot of people, when starting with Kubernetes, get overwhelmed or confused by the way its architecture works. I will try to simplify this as much as possible here. I do recommend you go check the official documentation after reading this, since here my aim is to simplify things rather than be technically exhaustive, so that you can understand the big picture.
We will analyze this chart from right to left.
The rightmost unit in the diagram is a Pod. It can basically be described as the smallest unit in the world of K8s. K8s doesn’t run containers directly; instead, it uses these “pods” to wrap one or more containers. The containers in a pod share the same resources. Pods are created and managed by K8s.
In short: Just imagine a Pod as a wrapper for our container(s).
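To make the wrapper idea concrete, here’s a sketch of a Pod manifest with two containers; every name and image here is a made-up placeholder. Because both containers live in the same pod, they share the pod’s network and can talk to each other over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # placeholder name
spec:
  containers:
    - name: web              # main application container
      image: nginx:1.25
    - name: sidecar          # helper container sharing the pod's network
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
```

In practice you rarely create bare Pods by hand; higher-level objects (like Deployments) create and manage them for you.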
Like we talked about in the “why” section, a K8s cluster is nothing but a network of computers. The term “Node” can be interpreted as a single computer in this network. There are two kinds of Nodes: Worker and Master.
The Worker Nodes host the pods which run the containers, as we talked about above. There can be multiple pods running different containers on the same worker node. This should not be a surprise because, like I already said, a Node is just a computer somewhere on the internet (offered by a cloud provider) with a certain amount of CPU and memory, and so we can, of course, run totally different containers and tasks on it.
Apart from pods, three important things are present in the Worker Node:
Docker: This is a no-brainer since we need Docker to run the application containers.
kubelet: This can be simply understood as an application that is responsible for communication between the Master and the Worker Nodes.
kube-proxy: This can be simplified by understanding its function which is to handle network communications between the pods and network sessions inside or outside the entire K8s cluster.
In short: Just imagine a Worker Node as a computer that has the required tools and pods.
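Since kube-proxy is the piece that routes traffic to pods, it helps to see the object it acts on: a Service. The sketch below assumes pods labeled `app: my-app` exist (a placeholder label); kube-proxy programs the node’s networking so that requests to this Service are spread across all matching pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service       # placeholder name
spec:
  selector:
    app: my-app              # traffic is forwarded to pods with this label
  ports:
    - port: 80               # port the Service exposes inside the cluster
      targetPort: 8080       # port the containers actually listen on
```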
The final thing we need to talk about is the Master Node. The master node hosts the “Control Plane”, which can be understood as the brain of our K8s cluster. The control plane basically ensures that our K8s cluster is working as we configured it to.
A few important things running in the master node are:
API Server: It is the most important service running on the master node and is the counterpart of the kubelet we talked about above. That is, it is responsible for communication with the worker nodes.
Scheduler: It is responsible for watching our pods and choosing the worker nodes on which new pods should be created.
And why would we need new pods?
In case a pod became unhealthy and went down, or because of scaling.
So it is the scheduler that is responsible for telling the API Server “what” to tell the worker nodes.
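One concrete input the Scheduler looks at when choosing a worker node is the pod’s resource requests. In this sketch (names and values are illustrative), the Scheduler would only place the pod on a node that still has at least this much CPU and memory free:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-pod            # placeholder name
spec:
  containers:
    - name: app
      image: my-app:1.0      # placeholder image
      resources:
        requests:
          cpu: "250m"        # a quarter of one CPU core
          memory: "128Mi"
```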
There are some other things present too which you can look at in the official docs. But for now, this would suffice.
In Short: The master node is the brain of our K8s cluster.
This concludes our introduction to the world of Kubernetes. This article is a part of the second series I plan to write in the coming weeks. I recently finished my Demystifying Docker series, where I discussed Docker fundamentals. Reading the previous series isn’t strictly necessary, but it is highly recommended that you go through it.
Thanks for reading! 🙂