Cloud-Native Networking
Ingress, Mesh, and the Death of the Static IP
1. North-South: The Ingress Controller
North-South traffic refers to communication between the outside world (the internet) and the services inside your cluster. In Kubernetes, a bare LoadBalancer Service provisions one cloud load balancer per exposed Service, an expensive and unscalable approach. The Ingress resource solves this by routing all external HTTP(S) traffic through a single intelligent proxy.
- Ingress: Acts as the entry point. It handles SSL termination, URL-based routing (e.g., pingdo.net/api vs. pingdo.net/app), and load balancing.
- Gateway API: The modern successor to Ingress, providing role-based configuration (infrastructure vs. developer) and more granular control for multi-cloud deployments.
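As an illustrative sketch, a minimal Ingress that performs the path-based routing described above might look like the following. The host is taken from the example; the backend Service names (`api-svc`, `app-svc`) and the TLS Secret name are assumptions for illustration.

```yaml
# Minimal Ingress (networking.k8s.io/v1) routing by URL path.
# TLS termination happens at the ingress controller using the
# certificate stored in the referenced Secret.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pingdo-ingress
spec:
  tls:
    - hosts:
        - pingdo.net
      secretName: pingdo-tls   # illustrative Secret name
  rules:
    - host: pingdo.net
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc   # illustrative backend Service
                port:
                  number: 80
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: app-svc   # illustrative backend Service
                port:
                  number: 80
```

Both paths share one load balancer and one certificate; adding a third application means adding a path rule, not provisioning new cloud infrastructure.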
2. East-West: The Service Mesh
East-West traffic refers to microservices talking to each other inside the cluster. As applications grow to hundreds of services, managing cross-service communication becomes a serious operational burden. Each service team would need to independently implement retries, timeouts, circuit breaking, and mutual authentication — creating inconsistency and security gaps.
A Service Mesh (like Istio or Linkerd) solves this by injecting a tiny proxy (Sidecar) next to every application container. The sidecar intercepts all inbound and outbound traffic, applying policy without requiring any application code changes.
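With Istio, for instance, sidecar injection is usually enabled by labeling a namespace; the mesh's mutating admission webhook then adds the Envoy container to every pod scheduled there. A sketch (the namespace name is illustrative, and Istio's default injection webhook is assumed to be installed):

```yaml
# Labeling a namespace opts its pods into automatic sidecar injection.
# The webhook adds the Envoy proxy container at pod-creation time;
# the application image and manifest are unchanged.
apiVersion: v1
kind: Namespace
metadata:
  name: payments            # illustrative namespace name
  labels:
    istio-injection: enabled
```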
Service Mesh & Sidecar Lab: L7 Traffic Policies and Identity-Based Security
Insecure-channel warning: without a mesh, traffic traverses the network in cleartext, and anyone with access to the cluster network can sniff headers.
The Envoy proxy is "injected" into the pod. The application thinks it's talking to a database, but it's actually talking to the sidecar, which then negotiates the secure connection.
Implementing TLS in code is hard. Implementing it at the mesh level is zero-code. The mesh handles certificate rotation and encryption automatically.
Every time a packet moves through a proxy, it adds latency, typically a fraction of a millisecond per hop. In high-frequency trading, this matters; in standard web apps, the security gains far outweigh a delay on the order of 0.5 ms.
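The "zero-code" claim is concrete in Istio's API: cluster-wide mTLS enforcement is a single policy object rather than application changes. A sketch, assuming Istio is installed with its default root namespace:

```yaml
# Require mTLS for every workload in the mesh. Applied in the root
# namespace (istio-system by default), this makes sidecars reject
# any plaintext service-to-service connection; certificates are
# issued and rotated by the mesh's control plane.
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```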
Benefits of a Service Mesh
- mTLS (Mutual TLS): Automatically encrypts every service-to-service connection without changing any code. Each sidecar presents a SPIFFE-based X.509 certificate, enabling cryptographic identity verification — not just network-level trust.
- Observability: Provides a real-time "map" of which services are talking and where the latency is occurring. Distributed traces (via Jaeger or Zipkin) are automatically generated for every request, even across multiple microservices.
- Traffic Splitting (Canary): Allows you to send 1% of traffic to a new version of a service to test it before a full roll-out, based on weighted routing rules — not random luck.
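Using Istio's traffic-management API as one concrete example, a 99/1 canary split is a pair of weighted routes. The service name and subset labels below are illustrative; the subsets themselves would be defined in a companion DestinationRule (not shown):

```yaml
# Weighted routing: 99% of requests go to the stable subset,
# 1% to the canary. Weights are deterministic routing rules,
# not random chance.
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: checkout            # illustrative service name
spec:
  hosts:
    - checkout
  http:
    - route:
        - destination:
            host: checkout
            subset: v1      # stable version
          weight: 99
        - destination:
            host: checkout
            subset: v2      # canary version
          weight: 1
```

Promoting the canary is then a matter of shifting the weights (e.g., 50/50, then 0/100) rather than redeploying anything.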
3. The eBPF Revolution: Ambient Mesh
The sidecar model has a significant drawback: every pod gets an additional Envoy container, consuming CPU and memory even when the pod is idle. For clusters with hundreds or thousands of pods, this overhead becomes substantial.
Ambient Mesh (Istio's new architecture) and Cilium address this by moving the mesh data plane into the kernel using eBPF (Extended Berkeley Packet Filter). eBPF programs run in a sandboxed environment within the Linux kernel, intercepting and processing packets at wire speed without the overhead of a user-space proxy.
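Cilium, for example, expresses L7 policy as a CRD enforced in the datapath with no per-pod sidecar container. A sketch of an HTTP-aware policy (the labels, port, and path pattern are illustrative):

```yaml
# HTTP-aware network policy enforced by Cilium's kernel datapath,
# with no sidecar injected into the pods. Labels, port, and path
# are illustrative.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-allow-get
spec:
  endpointSelector:
    matchLabels:
      app: api              # policy applies to these pods
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend   # only the frontend may connect
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/api/.*"   # read-only access to the API
```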
Conclusion
Ingress manages the entrance; Service Mesh manages the interior. Together, they create a "Zero Trust" network where every packet is authenticated and every connection is monitored, allowing developers to focus on features instead of connectivity. The evolution from sidecar proxies toward eBPF-based ambient mesh signals that the cloud-native networking stack is maturing: the goal is to make the security and observability guarantees invisible to application developers while remaining fully programmable for platform engineers.