Kubernetes (K8s): An Overview of Core Components
02/20/2023
Kubernetes is a powerful open-source platform for managing containerized applications. At its core, it is made up of several key components that work together to provide a scalable, reliable, and efficient platform for deploying and managing those applications.
In this article, we will provide an in-depth overview of the various components that make up Kubernetes, including the master components, node components, add-ons, persistent volumes, and services. We will explore the purpose and functionality of each component, and how they work together to create a robust and scalable container orchestration platform.
A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
The worker node(s) host the Pods that make up the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple machines to provide fault tolerance and high availability, and a cluster usually runs multiple nodes.
This article describes the different parts you need for a complete and functional Kubernetes cluster.
Here's a brief overview of the core components of Kubernetes:
Master Components: These are the components that control the Kubernetes cluster. They include the API server, etcd, the scheduler, and the controller manager.
Node Components: These are the components that run on each node in the cluster. They include kubelet, kube-proxy, and container runtime.
Add-Ons: These are optional components that can be added to Kubernetes to provide additional functionality. Examples of add-ons include the dashboard, DNS service, and load balancer.
Persistent Volumes: This component allows you to store data in a way that is independent of the container and the node it's running on.
Services: This component provides a stable IP address and DNS name for a set of pods, allowing them to be accessed by other pods or external clients.
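To make the Persistent Volumes and Services components concrete, here is a minimal illustrative sketch: a PersistentVolumeClaim requesting storage, and a Service exposing a set of Pods. The names (web-data, web-svc) and the app: web label are hypothetical, chosen for this example only.

```yaml
# Hypothetical example: a claim for 1Gi of storage that survives
# container and Pod restarts, plus a Service giving a stable virtual IP
# and DNS name to all Pods labeled app: web.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web        # matches Pods carrying this label
  ports:
    - port: 80      # port exposed by the Service
      targetPort: 8080  # port the Pods actually listen on
```

Other Pods in the cluster can then reach the workload via the Service name rather than individual, ephemeral Pod IPs.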
Overall, Kubernetes components work together to provide a powerful platform for managing containerized applications and scaling them efficiently.
The components of a Kubernetes cluster
Control plane components make global decisions about the cluster (for example, scheduling), and they detect and respond to cluster events (for example, starting a new pod when a Deployment's replicas field is unsatisfied).
Control plane components can run on any machine in the cluster. For simplicity, however, setup scripts typically start all control plane components on the same machine and do not run user containers on that machine. For an example control plane setup that runs across multiple machines, see Building Highly Available Clusters with kubeadm.
The API server is the component of the Kubernetes control plane that exposes the Kubernetes API. It is the front end of the control plane.
The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, by deploying more instances. You can run several instances of kube-apiserver and balance traffic between them.
etcd is a consistent and highly available key-value store used as the backing store for all cluster data in Kubernetes.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.
The official documentation contains in-depth information about etcd.
kube-scheduler is the control plane component that watches for newly created Pods that have no assigned node, and selects a node for them to run on.
Factors taken into account for scheduling decisions include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
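As an illustration of the constraints the scheduler weighs, here is a hypothetical Pod spec that declares resource requests and a node affinity rule. The disktype=ssd node label is an assumption for this sketch; it would have to exist on your nodes for the Pod to be schedulable.

```yaml
# Hypothetical Pod showing two scheduler inputs: resource requests
# (the node must have this much CPU/memory unreserved) and a hard
# node affinity rule (the node must carry the label disktype=ssd).
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"      # a quarter of one CPU core
          memory: "128Mi"
```

If no node satisfies both constraints, the Pod stays Pending until one does.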
kube-controller-manager is the control plane component that runs controller processes.
Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
Some examples of these controllers include:
Node controller: In charge of identifying and reacting to node failures.
Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
EndpointSlice controller: Populates EndpointSlice objects (to provide a link between Services and Pods).
ServiceAccount controller: Creates default ServiceAccounts for new namespaces.
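For example, the Job controller reacts to Job objects like the hypothetical one below, creating a Pod to run the task to completion and retrying on failure up to the backoff limit. The name and command are illustrative only.

```yaml
# Hypothetical one-off task. The Job controller creates a Pod from the
# template, watches it, and marks the Job complete when the Pod succeeds.
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  completions: 1     # the task must succeed once
  backoffLimit: 3    # retry a failed Pod up to 3 times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: task
          image: busybox:1.36
          command: ["sh", "-c", "echo done"]
```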
The cloud-controller-manager is a Kubernetes control plane component that embeds cloud-specific control logic. It lets you link your cluster to your cloud provider's API, and it separates the components that interact with that cloud platform from the components that only interact with your cluster.
The cloud-controller-manager runs only controllers that are specific to your cloud provider. If you are running Kubernetes on your own premises, or in a learning environment on your personal computer, the cluster does not have a cloud controller manager.
The cloud-controller-manager, like the kube-controller-manager, assembles various conceptually separate control loops into a single binary that you run as a single process. To increase performance or make it easier to handle errors, you can scale horizontally (run more than one copy).
The following controllers may be dependent on a cloud provider:
Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
Route controller: For setting up routes in the underlying cloud infrastructure
Service controller: For creating, updating and deleting cloud provider load balancers
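As a sketch of the Service controller's role: on a cloud provider, a Service of type LoadBalancer, like the hypothetical manifest below, prompts the cloud-controller-manager to provision an external load balancer and point it at the matching Pods. The name and label are illustrative.

```yaml
# Hypothetical Service. On a supported cloud, the Service controller
# creates a provider load balancer and records its external IP/hostname
# in the Service's status.
apiVersion: v1
kind: Service
metadata:
  name: public-web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

On a cluster without a cloud controller manager, the same manifest leaves the Service's external IP pending indefinitely.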
Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.
The kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs, provided through various mechanisms, and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.
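Below is a minimal sketch of the kind of PodSpec the kubelet acts on, including a liveness probe it uses to judge container health. The name and image are illustrative; if the probe fails repeatedly, the kubelet restarts the container.

```yaml
# Hypothetical Pod. The kubelet starts the container, then polls the
# HTTP endpoint; repeated probe failures trigger a container restart.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5   # wait before the first probe
        periodSeconds: 10        # probe every 10 seconds
```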
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from sessions inside and outside of your cluster.
kube-proxy uses the operating system's packet filtering layer if one exists and is available. Otherwise, kube-proxy forwards the traffic itself.
The container runtime is the software responsible for running containers.
Kubernetes supports container runtimes such as containerd and CRI-O, as well as any other implementation of the Kubernetes Container Runtime Interface (CRI).
Addons use Kubernetes resources (such as DaemonSet, Deployment, and so on) to implement cluster features. Because they provide cluster-level functionality, namespaced resources for addons belong in the kube-system namespace.
A short list of available addons is described below; for a longer list, please visit Addons.
Although cluster DNS is recommended for all Kubernetes clusters because many examples rely on it, the other addons are not strictly necessary.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, that serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
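To sketch how cluster DNS is used, the hypothetical Pod below resolves a Service name through the cluster's DNS server. The name web-svc.default.svc.cluster.local assumes a Service called web-svc exists in the default namespace and that the cluster uses the default cluster.local domain.

```yaml
# Hypothetical one-shot Pod that queries cluster DNS for a Service's
# record. Inside the cluster, the short name "web-svc" would also
# resolve for Pods in the same namespace.
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  restartPolicy: Never
  containers:
    - name: lookup
      image: busybox:1.36
      command: ["nslookup", "web-svc.default.svc.cluster.local"]
```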
For Kubernetes clusters, Dashboard is a multipurpose, web-based user interface. It enables users to control and debug both the cluster itself and any running applications.
Container Resource Monitoring offers a UI for exploring general time-series metrics about containers that are recorded in a central database.
A cluster-level logging mechanism saves container logs to a central log store with a search/browsing interface.
At Apprecode, we already use the latest version of Kubernetes for our leading customers. Please contact us for more information.