Understanding Kubernetes Architecture: Components and Communication Flow

06.05.2025

Nazar Zastavnyy

COO

Kubernetes Architecture Overview

Kubernetes (K8s) operates as a distributed system. Its architecture follows a primary/worker design: a control plane (previously called the master) oversees the worker nodes that actually run your applications.

When developers are first introduced to Kubernetes, they typically struggle to grasp how all of the pieces fit together. The Kubernetes structure can look complex at first, but it becomes far more approachable once you dissect it component by component.

At its core, the Kubernetes architecture consists of two main categories:

  1. Control Plane components – The “brains” of the operation
  2. Node components – Running on every worker machine in the cluster

But before diving deeper, let’s acknowledge something: understanding this architecture isn’t just academic. When things break at 3 AM (and they will), knowing how the Kubernetes components communicate can mean the difference between a quick fix and a prolonged outage.

Control Plane Components

The control plane makes global decisions about your cluster. Think of it as air traffic control for your containers. Let’s examine what makes up this critical part of the Kubernetes cluster architecture:

API Server

The API server is the front door to your Kubernetes kingdom. All communications—internal and external—flow through this gatekeeper. When you run a kubectl command, you’re talking to the API server.

What makes it special? The API server validates and processes REST requests, updating the corresponding objects in etcd. It doesn’t just blindly accept commands—it enforces authentication, authorization, and admission control policies.
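To make this concrete, here is a minimal sketch of what "talking to the API server" looks like in practice: a small Pod manifest (the name and image are illustrative) that `kubectl apply` submits to the API server as a REST request. The server authenticates the caller, runs admission checks, and persists the object in etcd.

```yaml
# Illustrative Pod manifest; the name and image are placeholders.
# `kubectl apply -f pod.yaml` sends this object to the API server,
# which validates it, applies admission policies, and stores it in etcd.
apiVersion: v1
kind: Pod
metadata:
  name: hello-api-server
  labels:
    app: demo
spec:
  containers:
    - name: web
      image: nginx:1.27   # example image tag
      ports:
        - containerPort: 80
```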

etcd

If the API server is the front door, etcd is the vault inside. This distributed key-value store holds the entire state of your cluster—your single source of truth.

When something changes in your cluster, that change isn’t real until it’s in etcd. This consistent and highly available store ensures that if your control plane crashes, it can rebuild itself based on the data in etcd.

Scheduler

The scheduler watches for newly created Pods with no assigned node and selects a node for them to run on. This isn’t random—it’s a sophisticated decision-making process considering:

  • Resource requirements
  • Hardware/software constraints
  • Affinity/anti-affinity specifications
  • Data locality
  • Deadlines

The scheduler doesn’t actually place the Pod – it just decides where it should go, updating the Pod definition accordingly.
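As a hedged illustration of the inputs the scheduler weighs, the Pod below declares resource requests and a node-affinity rule. The field names are standard Kubernetes API fields, but the label key and values are assumptions for this sketch (it presumes nodes carry a disktype=ssd label).

```yaml
# Example Pod showing scheduling hints the scheduler considers.
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  containers:
    - name: app
      image: nginx:1.27          # placeholder image
      resources:
        requests:                # resource requirements the scheduler must satisfy
          cpu: "500m"
          memory: "256Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype    # example node label; assumes nodes labeled disktype=ssd
                operator: In
                values:
                  - ssd
```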

Controller Manager

The controller manager runs controller processes that regulate the state of your cluster. These controllers are like little daemon processes, each watching a specific aspect of your cluster:

  • Node Controller: Notices and responds when nodes go down
  • Job Controller: Watches for one-off Job objects and creates Pods to run those jobs
  • Endpoints Controller: Populates the Endpoints object (joins Services & Pods)
  • Service Account & Token Controllers: Create accounts and API access tokens

They all work on a simple principle: observe the current state, compare it to the desired state, and make changes to reach the desired state.
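For example, the Job controller watches for objects like the minimal sketch below (names and command are placeholders) and creates Pods until the Job completes, reconciling observed state toward the declared one.

```yaml
# Minimal Job; the Job controller observes it and creates a Pod to run it.
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task             # illustrative name
spec:
  template:
    spec:
      containers:
        - name: task
          image: busybox:1.36
          command: ["sh", "-c", "echo hello from the Job controller"]
      restartPolicy: Never
  backoffLimit: 2                # retry the Pod at most twice on failure
```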

Cloud Controller Manager

For clusters running on public cloud platforms, the cloud controller manager links your cluster with the cloud provider’s API. This component allows Kubernetes to interact with cloud resources like load balancers, routes, and storage.
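A typical trigger for the cloud controller manager is a Service of type LoadBalancer: on a supported cloud, a manifest like the sketch below (names and ports are illustrative) prompts it to provision an external load balancer and wire it to the Service.

```yaml
# Service of type LoadBalancer; on a cloud provider, the cloud controller
# manager provisions an external load balancer for it.
apiVersion: v1
kind: Service
metadata:
  name: web-lb                   # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: web                     # assumes Pods labeled app=web exist
  ports:
    - port: 80
      targetPort: 8080
```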

Node Components

Now let’s move to where the actual work happens. Each node in your Kubernetes architecture runs several components:

Kubelet

The kubelet is the primary “node agent,” ensuring containers are running in a pod. It takes a set of PodSpecs provided through various mechanisms and ensures the containers described in those PodSpecs are running and healthy.

Interestingly, the kubelet doesn’t manage containers that weren’t created by Kubernetes.
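One way the kubelet "ensures containers are healthy" is by running the probes declared in the PodSpec. The sketch below adds a liveness probe (the path, port, and image are assumptions for this example) that the kubelet polls, restarting the container after repeated failures.

```yaml
# Pod with a liveness probe; the kubelet on the assigned node performs the
# HTTP check and restarts the container if it keeps failing.
apiVersion: v1
kind: Pod
metadata:
  name: kubelet-probe-demo
spec:
  containers:
    - name: app
      image: nginx:1.27          # placeholder image
      livenessProbe:
        httpGet:
          path: /                # assumed health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```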

Container Runtime

Kubernetes does not run containers directly. Rather, it relies on a container runtime to pull container images and start, stop, and manage the networking and storage for containers. Kubernetes can rely on containerd, CRI-O, or Docker (with cri-dockerd).

Kube-proxy

Network communication within and outside your cluster is managed by kube-proxy, a network proxy that runs on every node.

Kube-proxy implements part of the Kubernetes Service concept: it maintains network rules on each node that allow traffic from sessions inside or outside the cluster to reach your Pods.
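A plain ClusterIP Service is a simple illustration of the abstraction kube-proxy implements: kube-proxy programs the node-level rules (iptables or IPVS) that route traffic sent to the Service's virtual IP to the matching Pods. The selector and names below are examples.

```yaml
# ClusterIP Service; kube-proxy maintains the node-level rules that forward
# traffic for this Service's virtual IP to the selected Pods.
apiVersion: v1
kind: Service
metadata:
  name: backend                  # illustrative name
spec:
  type: ClusterIP
  selector:
    app: backend                 # assumes Pods labeled app=backend
  ports:
    - port: 80
      targetPort: 8080
```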

Kubernetes Architecture Diagram and Communication Flow

A Kubernetes architecture diagram helps visualize how these components interact. Let’s mentally walk through what happens when you create a deployment:

  1. You (or a CI/CD system) send a request to the API server to create a Deployment.
  2. The API server validates your request and stores it in etcd.
  3. The Deployment controller notices the new Deployment and creates a ReplicaSet.
  4. The ReplicaSet controller creates Pod objects.
  5. The Scheduler notices unassigned Pods and decides which nodes should run them.
  6. The kubelet on each selected node sees it’s been assigned Pods.
  7. The kubelet tells the container runtime to pull the container images and run them.
  8. The kube-proxy creates the necessary network rules for the Pods.

This flow demonstrates the decentralized yet coordinated approach of the Kubernetes cluster architecture. No single component does everything; each has specific responsibilities, and they communicate through the API server.
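To tie the steps together, here is the kind of Deployment manifest that would kick off that flow when applied with kubectl. Everything about it (name, image, replica count) is illustrative.

```yaml
# Applying this Deployment starts the flow above: API server -> etcd ->
# Deployment controller -> ReplicaSet -> Scheduler -> kubelet -> runtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # desired state the controllers converge on
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27      # placeholder image
          ports:
            - containerPort: 80
```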

How Kubernetes Achieves High Availability and Scalability

The Kubernetes architecture explained above naturally supports high availability and scalability:

Control Plane High Availability

In production environments, the control plane typically runs multiple instances across different physical machines. In a typical highly available setup:

  • API server: Run multiple instances behind a load balancer
  • etcd: Operate as a distributed cluster
  • Scheduler and controller manager: Run in active-passive configuration (only one active at a time)

This redundancy ensures your cluster can survive the loss of individual control plane components.
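As a rough sketch of how this is expressed with kubeadm (assuming a load balancer at lb.example.com fronting several API server instances), the cluster configuration points every node at the shared endpoint rather than at a single API server. The address and version below are examples.

```yaml
# Hedged kubeadm sketch: controlPlaneEndpoint is the load-balanced address
# in front of multiple API server instances (address is an example).
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0           # example version
controlPlaneEndpoint: "lb.example.com:6443"
etcd:
  local:
    dataDir: /var/lib/etcd
```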

Node Scalability

In the k8s architecture, nodes scale horizontally. Adding more nodes to your cluster is relatively straightforward:

  1. Provision a new machine
  2. Install node components
  3. Connect it to the control plane

Kubernetes will automatically start scheduling workloads on the new node if resources are needed there.

Pod Scaling

Within the Kubernetes architecture, Pods can scale in two ways:

  • Horizontally: Running more instances of a Pod (via Deployments/ReplicaSets)
  • Vertically: Increasing resources allocated to Pods

The Horizontal Pod Autoscaler can automatically adjust the number of Pods based on observed metrics like CPU utilization.
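Here is a minimal HorizontalPodAutoscaler sketch targeting the hypothetical "web" Deployment from the earlier example; the replica bounds and CPU threshold are arbitrary example values.

```yaml
# HPA that scales the "web" Deployment between 2 and 10 replicas,
# targeting ~70% average CPU utilization (all values are examples).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```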

Real-world Considerations for Kubernetes Architecture

When implementing Kubernetes in production, several factors need attention:

Network Topology

Your Kubernetes components need to communicate efficiently. Consider:

  • Network latency between control plane and nodes
  • Network policies to secure Pod-to-Pod communication
  • Service mesh implementations for advanced traffic management

A proper network setup requires understanding your infrastructure and application requirements. Many organizations leverage DevOps services and solutions to properly architect and implement their Kubernetes networking.
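For instance, a NetworkPolicy like the sketch below (the labels and port are assumptions) restricts Pod-to-Pod traffic so that only Pods labeled app=frontend can reach the backend Pods. Enforcement requires a CNI plugin that supports NetworkPolicy.

```yaml
# Example NetworkPolicy: only Pods labeled app=frontend may reach Pods
# labeled app=backend on port 8080 (requires a NetworkPolicy-capable CNI).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```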

Security Architecture

The Kubernetes cluster architecture requires robust security:

  • TLS encryption for all control plane communications
  • RBAC for API access control
  • Pod Security Standards, enforced via Pod Security Admission (the successor to PodSecurityPolicies)
  • Network policies
  • Secret management

Security should be layered throughout your cluster. Consider implementing managed cloud security services for continuous monitoring and protection.
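One layered-security example: enforcing the restricted Pod Security Standard on a namespace through labels read by the Pod Security Admission controller. The namespace name is illustrative.

```yaml
# Namespace labeled so the Pod Security Admission controller enforces the
# "restricted" Pod Security Standard for every Pod created in it.
apiVersion: v1
kind: Namespace
metadata:
  name: payments                      # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```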

Monitoring and Observability

The distributed nature of the k8s architecture necessitates comprehensive monitoring:

  • Control plane component health
  • Node metrics
  • Pod resource utilization
  • Application performance

Implementing proper application performance monitoring tools provides visibility into your entire stack.

Final Thoughts

Grasping the Kubernetes architecture components is not only helpful for passing a certification exam; it is the foundational knowledge you need to deploy containerized workloads correctly in production.

The beauty of the architecture is its separation of concerns. Each component has a job to perform, and they are designed to work together with well-defined interfaces. This leads to a built-for-cloud architecture that is inherently resilient, scalable, and maintainable.

For organizations starting on their Kubernetes path, partnering with experienced cloud managed services providers can speed up adoption and help confirm that the implementation is performing correctly.

Kubernetes architecture may change over time, but its core tenets will remain fairly stable. By establishing these foundations, you will be better equipped to deploy, troubleshoot, and refine the operation of your containerized applications.

Frequently Asked Questions

What is etcd and why is it critical to Kubernetes?

Etcd is the cluster's database, holding configuration and state information; it is critical because it is the single source of truth for the entire cluster. If etcd fails or becomes corrupted, you lose all information about your deployments, services, and other resources. That's why production clusters typically run etcd in a highly available configuration with multiple nodes and take regular backups.

Can Kubernetes run without kube-proxy?

While it is technically feasible to run Kubernetes without kube-proxy, doing so constrains your networking at a fundamental level. Kubernetes Services rely on kube-proxy to implement the service abstraction and to create and maintain the network rules that route traffic to service endpoints. Some CNI plugins can replace kube-proxy outright, and service meshes (like Istio) provide similar abstractions, but meshes typically run alongside kube-proxy rather than replacing it.

How do control plane components stay in sync?

The control plane components stay in sync through the API server and etcd. They do not communicate with one another directly; instead, each component reads and watches cluster state through the API server, which in turn reads from and writes to etcd. This indirection keeps the control plane loosely coupled, since the components never have to know about each other directly.

What happens when a node goes offline in the cluster?

When a node goes offline, several mechanisms activate:

  1. The node controller notices the node hasn’t sent a heartbeat
  2. After a grace period, it marks the node as unhealthy
  3. The Pod eviction process begins
  4. If the Pods are managed by a Deployment, the ReplicaSet controller creates replacement Pods to maintain the desired number of replicas
  5. The scheduler assigns the replacement Pods to other healthy nodes

This self-healing process is a key benefit of the Kubernetes cluster architecture.
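The grace periods involved are tunable per Pod. The sketch below uses tolerations for the standard not-ready and unreachable taints to control how long a Pod stays bound to a lost node before eviction; 60 seconds here overrides the usual 300-second default, and the name and image are placeholders.

```yaml
# Pod tolerations controlling eviction timing when its node becomes
# unreachable; 60s overrides the default 300s toleration (values are examples).
apiVersion: v1
kind: Pod
metadata:
  name: fast-failover-demo
spec:
  containers:
    - name: app
      image: nginx:1.27            # placeholder image
  tolerations:
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 60
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 60
```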

How do Kubernetes components handle authentication and RBAC?

Authentication in the Kubernetes architecture happens at the API server level. The API server supports multiple authentication methods (certificates, tokens, OIDC, etc.). Once authenticated, authorization occurs through Role-Based Access Control (RBAC).

RBAC defines who can access which resources through:

  • Roles and ClusterRoles (what actions can be performed on which resources)
  • RoleBindings and ClusterRoleBindings (which users get which roles)

The control plane components themselves typically use service account credentials with appropriate permissions to perform their functions.
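A minimal sketch of the two halves: a Role granting read access to Pods in a namespace, and a RoleBinding granting that Role to a user. The names, namespace, and user are illustrative.

```yaml
# Role: allows reading Pods in the "demo" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding: grants the pod-reader Role to an example user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
  - kind: User
    name: jane@example.com         # illustrative user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```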
