Kubernetes Cluster Architecture

Focus: Control Plane vs Worker Nodes + Ingress + CNI Networking. Key areas: NGINX Ingress Controller, Kubernetes Ingress API, cert-manager (optional).

Use this as a block diagram of the system when explaining architecture.


Prompt

Kubernetes cluster architecture diagram illustrating the distinction between the Control Plane and Worker Nodes. The Control Plane must include the kube-apiserver, the etcd key-value store, kube-scheduler, and kube-controller-manager. Depict multiple Worker Nodes, each containing a kubelet, kube-proxy, and container runtime hosting multiple Pods. Show the networking layer with an Ingress Controller managing external access and a CNI plugin handling pod-to-pod communication.
Highlights
  • Layer details · Control Plane (Cluster Management): Modules include kube-apiserver, etcd (Key-Value Store), kube-scheduler, kube-controller-manager.
  • Layer details · External Access & Networking Edge: Modules include Ingress Controller, Service Networking (ClusterIP/NodePort/LoadBalancer), CNI Pod Network (Pod-to-Pod).
  • Module responsibilities · External Access & Networking Edge / Ingress Controller: Manage external HTTP/HTTPS access into the cluster; Route requests to the correct Kubernetes Service; Enforce ingress policies and TLS settings

Overview

Kubernetes Cluster Architecture (Control Plane vs Worker Nodes + Ingress + CNI Networking) has 4 layers: External Access & Networking Edge, Control Plane (Cluster Management), Worker Nodes (Compute Plane), Observability & Cluster Services (Optional Supporting Plane).

Layer details

  • External Access & Networking Edge: Modules include Ingress Controller, Service Networking (ClusterIP/NodePort/LoadBalancer), CNI Pod Network (Pod-to-Pod).
  • Control Plane (Cluster Management): Modules include kube-apiserver, etcd (Key-Value Store), kube-scheduler, kube-controller-manager.
  • Worker Nodes (Compute Plane): Modules include Worker Node A, Worker Node B, Worker Node C.
  • Observability & Cluster Services (Optional Supporting Plane): Modules include DNS & Service Discovery, Metrics & Logging (Optional).
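As a concrete illustration of the Service Networking module, a minimal ClusterIP Service might look like the sketch below. All names, labels, and ports here are assumptions for illustration, not part of the diagram itself:

```yaml
# Hypothetical ClusterIP Service: gives Pods labeled app=web a stable
# virtual IP and DNS name, hiding Pod IP churn from clients.
apiVersion: v1
kind: Service
metadata:
  name: web              # assumed name; resolvable as web.<namespace>.svc.cluster.local
spec:
  type: ClusterIP        # swap for NodePort or LoadBalancer to expose it outside the cluster
  selector:
    app: web             # assumed Pod label
  ports:
    - port: 80           # Service port clients connect to
      targetPort: 8080   # container port on the backing Pods
```

Changing `type` is how the same abstraction moves between the ClusterIP, NodePort, and LoadBalancer variants named in the layer above.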

Module responsibilities

  • External Access & Networking Edge / Ingress Controller: Manage external HTTP/HTTPS access into the cluster; Route requests to the correct Kubernetes Service; Enforce ingress policies and TLS settings
  • External Access & Networking Edge / Service Networking (ClusterIP/NodePort/LoadBalancer): Provide stable service discovery and load balancing; Abstract Pod IP churn behind a Service VIP; Enable internal east-west traffic routing
  • External Access & Networking Edge / CNI Pod Network (Pod-to-Pod): Provide flat L3 networking for Pods; Enable cross-node pod-to-pod communication; Apply network policies when configured
  • Control Plane (Cluster Management) / kube-apiserver: Expose the Kubernetes API for clients and components; Validate and persist desired state to etcd; Enforce authentication, authorization, and admission policies
  • Control Plane (Cluster Management) / etcd (Key-Value Store): Persist all cluster configuration and desired state; Provide strongly consistent reads/writes for control plane; Enable recovery via snapshot backups
  • Control Plane (Cluster Management) / kube-scheduler: Assign Pods to suitable Worker Nodes; Optimize placement based on resources and policies; Respect constraints (affinity, topology, taints)
  • Control Plane (Cluster Management) / kube-controller-manager: Reconcile actual cluster state to desired state; Perform self-healing (restart/replace); Manage lifecycle of Kubernetes resources
  • Worker Nodes (Compute Plane) / Worker Node A: Run Pods and report node/pod status; Enforce desired pod spec via kubelet; Provide service forwarding via kube-proxy
  • Worker Nodes (Compute Plane) / Worker Node B: Host scheduled workloads (Pods); Maintain node health and heartbeats; Participate in pod networking via CNI
  • Worker Nodes (Compute Plane) / Worker Node C: Execute containers and manage pod lifecycle; Attach pod network interfaces via CNI; Expose node-level metrics/logs (optional)
  • Observability & Cluster Services (Optional Supporting Plane) / DNS & Service Discovery: Resolve Service/Pod names to IPs; Enable service-to-service connectivity via DNS; Support internal discovery patterns
  • Observability & Cluster Services (Optional Supporting Plane) / Metrics & Logging (Optional): Provide cluster-wide observability; Detect and alert on failures and saturation; Support troubleshooting and capacity planning
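The Ingress Controller responsibilities above (external HTTP/HTTPS access, routing to Services, TLS enforcement) map onto an Ingress resource. A minimal sketch, assuming an NGINX Ingress Controller and, optionally, cert-manager; the host, Service, secret, and issuer names are all hypothetical:

```yaml
# Hypothetical Ingress: routes external HTTPS traffic for example.com
# to a backing Service. Host, Service name, secret, and issuer are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # only if cert-manager is installed
spec:
  ingressClassName: nginx            # matches the NGINX Ingress Controller
  tls:
    - hosts:
        - example.com
      secretName: web-tls            # TLS certificate stored (or issued) here
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web            # hypothetical backing Service
                port:
                  number: 80
```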

Key flows

  • Control loop: users and controllers submit desired state to kube-apiserver; the API server validates requests and persists state in etcd, while kube-controller-manager continuously reconciles resources to match the desired state.
  • Scheduling and execution: kube-scheduler watches for unscheduled Pods via the API server and assigns them to Worker Nodes; kubelet on the target node pulls images via the container runtime (containerd/CRI-O) and starts containers to form Pods, reporting status back to the API server.
  • Networking and ingress: the CNI plugin configures Pod networking for pod-to-pod communication across nodes; kube-proxy programs Service forwarding rules (iptables/ipvs), and the Ingress Controller provides external access by routing north-south traffic to Services and their backing Pods.
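The control-loop and scheduling flows above start from a declared desired state. A Deployment is the canonical example of that desired state; this sketch uses assumed names, labels, and an assumed image:

```yaml
# Hypothetical Deployment: the "desired state" submitted to kube-apiserver.
# kube-controller-manager keeps 3 replicas running; kube-scheduler places each
# Pod on a Worker Node; kubelet starts the containers via the container runtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired Pod count the control loop reconciles toward
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0  # assumed image
          ports:
            - containerPort: 8080
```

Deleting a Pod from this Deployment demonstrates the reconcile loop: the controller notices actual state (2 replicas) no longer matches desired state (3) and creates a replacement, which the scheduler then assigns to a node.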