GridFix Labs Reference Series | Cloud & Software-Defined
CNI: Networking in Kubernetes
The Overlay Architecture of Containers
GridFix Technical Team | Last Updated: January 31, 2026
In a Nutshell
Kubernetes networking is notoriously complex because it operates on a 'flat', IP-per-Pod model. The Container Network Interface (CNI) is the standard that allows different networking providers (Calico, Flannel, Cilium) to plug into Kubernetes to handle pod-to-pod and pod-to-service communication. This article explores how overlay networks like VXLAN bridge the gap between virtual pods and physical nodes.
The Pod-to-Pod Mandate
In Kubernetes, every Pod gets its own unique IP address. The fundamental networking requirement is that any Pod must be able to communicate with any other Pod on any Node without using Network Address Translation (NAT).
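To see the no-NAT rule in practice, here is a minimal Go sketch (the port and message are arbitrary choices, not taken from any real cluster): an HTTP handler that echoes the peer address it observes. Run inside one Pod and called from another, it reports the caller's actual Pod IP rather than a translated node address.

// podinfo.go -- minimal sketch: echo the source address a Pod sees.
// If Pod A calls this server running in Pod B, the reported address is
// Pod A's own Pod IP, because Kubernetes forbids NAT between Pods.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.RemoteAddr is the "ip:port" of the peer as seen by this Pod.
		fmt.Fprintf(w, "you reached me from %s\n", r.RemoteAddr)
	})
	log.Fatal(http.ListenAndServe(":8080", nil)) // port 8080 is an arbitrary choice
}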
How CNI Works
When a Pod is created, the Kubernetes node agent (the Kubelet) calls a CNI plugin. The plugin is responsible for (a minimal sketch of this contract follows the list):
Assigning an IP address to the Pod.
Updating the routing table on the host node.
Establishing the tunnel (if using an overlay) to other nodes.
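Production plugins such as bridge, Calico, or Cilium implement this contract through the CNI library's helper packages; the stdlib-only Go sketch below only mimics the shape of an ADD call to show how data flows: the container runtime passes the network configuration on stdin along with CNI_* environment variables, and the plugin replies on stdout with a JSON result describing the assigned IP. The config fields are trimmed down and the hard-coded address stands in for a real IPAM allocation.

// cni-add-sketch.go -- a toy "CNI plugin" that handles only the ADD command.
// Real plugins build on the github.com/containernetworking/cni skel package;
// this stdlib-only version just illustrates the stdin/env/stdout contract.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// NetConf is a trimmed-down version of the network config the runtime sends on stdin.
type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

// IPConfig and Result are simplified forms of the success result printed on stdout.
type IPConfig struct {
	Address string `json:"address"`
	Gateway string `json:"gateway"`
}

type Result struct {
	CNIVersion string     `json:"cniVersion"`
	IPs        []IPConfig `json:"ips"`
}

func main() {
	if os.Getenv("CNI_COMMAND") != "ADD" {
		return // DEL and CHECK are omitted in this sketch
	}

	var conf NetConf
	if err := json.NewDecoder(os.Stdin).Decode(&conf); err != nil {
		fmt.Fprintln(os.Stderr, "invalid network config:", err)
		os.Exit(1)
	}

	// A real plugin would now: create a veth pair, move one end into the Pod's
	// network namespace (CNI_NETNS), attach the other end to the node bridge,
	// ask its IPAM plugin for an address, and install host routes.
	res := Result{
		CNIVersion: conf.CNIVersion,
		IPs:        []IPConfig{{Address: "10.244.1.2/24", Gateway: "10.244.1.1"}}, // illustrative, not real IPAM
	}
	json.NewEncoder(os.Stdout).Encode(res)
}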
Pod Networking Visualizer
[Diagram: CNI Data Plane & Encapsulation Logic. Worker Node A (10.1.0.10) hosts Pod A (10.244.1.2) and Pod B (10.244.1.3), each wired through an eth0/veth pair into the cbr0 bridge; Worker Node B (10.1.0.11) hosts Pod C (10.244.2.2) behind its own cbr0. The Pod IP space rides on top of the Node IP space (the underlay).]
Local Communication: When Pod A talks to Pod B on the same node, the traffic never leaves the Linux internal bridge (cbr0). It's purely virtual switching via veth pairs.
Overlay Networking (VXLAN): To cross nodes, the Pod's packet is "encapsulated": wrapped inside an ordinary UDP packet (VXLAN uses destination port 4789) sent from Node A to Node B. The inner headers still carry the Pod IPs while the outer headers carry the Node IPs, which is why a capture on the underlay shows a Pod IP nested inside a Node IP.
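To make the packet-inside-a-packet idea concrete, here is a Go sketch that builds the 8-byte VXLAN header (flags plus a 24-bit VNI) and sends a pretend inner frame inside a node-to-node UDP datagram on port 4789. The VNI, the placeholder inner frame, and the node address are illustrative assumptions; on a real cluster the kernel's vxlan device performs this encapsulation for every cross-node packet.

// vxlan-encap-sketch.go -- illustrate VXLAN encapsulation: an inner Pod-to-Pod
// frame rides inside a UDP datagram exchanged between Node IPs.
package main

import (
	"encoding/binary"
	"fmt"
	"net"
)

// vxlanHeader builds the 8-byte VXLAN header: a flags byte (I bit = "VNI present"),
// reserved bytes, and the 24-bit VXLAN Network Identifier.
func vxlanHeader(vni uint32) []byte {
	h := make([]byte, 8)
	h[0] = 0x08                                // flags: I bit set
	binary.BigEndian.PutUint32(h[4:8], vni<<8) // VNI occupies bytes 4-6
	return h
}

func main() {
	// Inner frame: in reality this is the full Ethernet frame carrying
	// Pod A (10.244.1.2) -> Pod C (10.244.2.2); here just a placeholder payload.
	inner := []byte("inner frame: 10.244.1.2 -> 10.244.2.2")

	// Outer packet: Node A -> Node B on UDP port 4789 (the IANA VXLAN port).
	conn, err := net.Dial("udp", "10.1.0.11:4789")
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	payload := append(vxlanHeader(42), inner...) // VNI 42 is an arbitrary example
	if _, err := conn.Write(payload); err != nil {
		fmt.Println("send:", err)
		return
	}
	fmt.Printf("sent %d-byte VXLAN-encapsulated payload\n", len(payload))
}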
Service Networking & Kube-Proxy
Pods are ephemeral (they die and restart with new IPs). We use Services to provide a stable virtual IP (the ClusterIP). The magic of mapping a Service IP to a Pod IP happens via kube-proxy, which programs iptables or IPVS rules on every node.
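kube-proxy itself only writes kernel rules; the Go sketch below imitates their effect in user space so the mapping is visible: connections aimed at one stable Service address are spread across whatever Pod endpoints currently exist. The Service address and endpoint list are made-up example values.

// service-vip-sketch.go -- user-space imitation of what kube-proxy's
// iptables/IPVS rules do: map one stable Service IP to many Pod IPs.
package main

import (
	"fmt"
	"sync/atomic"
)

// Service is a stable virtual IP fronting a changing set of Pod endpoints.
type Service struct {
	ClusterIP string
	Endpoints []string // Pod IP:port pairs; updated as Pods come and go
	next      uint64
}

// Pick chooses a backend Pod for one connection (round-robin here; the
// iptables mode actually uses random, probability-based rules).
func (s *Service) Pick() string {
	n := atomic.AddUint64(&s.next, 1)
	return s.Endpoints[int(n)%len(s.Endpoints)]
}

func main() {
	web := &Service{
		ClusterIP: "10.96.0.15", // stable: clients only ever see this address
		Endpoints: []string{ // ephemeral: these change when Pods restart
			"10.244.1.2:8080",
			"10.244.2.2:8080",
		},
	}

	for i := 0; i < 4; i++ {
		fmt.Printf("connection to %s -> forwarded to Pod %s\n", web.ClusterIP, web.Pick())
	}
}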
Conclusion
Kubernetes networking is the ultimate abstraction. It hides the complexity of physical routing from the application developer, but it requires the platform engineer to deeply understand the tunnels and interfaces that make that abstraction possible.