TCP Congestion Control
The Math Behind the Internet's Speed
1. The Sliding Window: Flow Control
Before managing traffic across the network, TCP must manage the individual connection. The Receiver Window (rwnd) tells the sender how much data the receiver can buffer at once. If the receiver's buffer is small, the sender must slow down, regardless of how fast the network path is. In practice, the sender's usable window is the smaller of rwnd and the congestion window (cwnd) introduced in the next section.
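A minimal sketch of that interaction (the byte counts here are illustrative, not from any particular TCP stack):

```python
def effective_window(cwnd_bytes: int, rwnd_bytes: int) -> int:
    """The sender may keep at most min(cwnd, rwnd) bytes unacknowledged."""
    return min(cwnd_bytes, rwnd_bytes)

# Illustrative values: even on a fast network (large cwnd), a small
# receiver buffer caps the usable window via flow control.
print(effective_window(cwnd_bytes=512_000, rwnd_bytes=64_000))  # -> 64000
```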
2. The Congestion Phases
A TCP session constantly "feels out" the network, cycling through three phases (sketched in code after this list):
- Slow Start: the congestion window (cwnd) roughly doubles every round-trip time (1, 2, 4, 8... segments) until it reaches the slow-start threshold (ssthresh).
- Congestion Avoidance: once ssthresh is reached, cwnd grows linearly, roughly one segment per RTT (additive increase).
- Fast Recovery: when loss is detected, cwnd is cut in half (multiplicative decrease) and linear growth resumes from the reduced window.
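Here is a toy, per-RTT simulation of these phases for a Reno-style sender. It counts cwnd in whole segments and takes losses as externally supplied events, which is a deliberate simplification of what real stacks do per ACK:

```python
def simulate_reno(rtts: int, initial_ssthresh: int, loss_rtts: set[int]) -> list[int]:
    """Toy per-RTT model of Reno-style congestion control (cwnd in segments)."""
    cwnd, ssthresh = 1, initial_ssthresh
    history = []
    for rtt in range(rtts):
        history.append(cwnd)
        if rtt in loss_rtts:
            # Fast recovery: halve the window and resume growth from there.
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2            # Slow start: exponential growth per RTT.
        else:
            cwnd += 1            # Congestion avoidance: additive increase.
    return history

# Example: losses at RTTs 8 and 14 produce the classic sawtooth shape.
print(simulate_reno(rtts=20, initial_ssthresh=16, loss_rtts={8, 14}))
```

Plotting the returned history gives exactly the sawtooth described below: exponential ramp-up, a gentle linear climb, a sharp 50% drop on each loss.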
[Interactive TCP congestion control simulator: visualizes window-scaling algorithms (cwnd). The standard sawtooth pattern shows linear growth during Congestion Avoidance with a 50% window cut on loss.]
3. CUBIC vs. BBR: A New Philosophy
For decades, TCP algorithms like Reno and CUBIC were Loss-Based: they treat packet loss as the signal that the network is congested and shrink the window in response. CUBIC refines the idea by growing the window along a cubic curve centered on the window size where the last loss occurred, rather than growing linearly.
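A sketch of that growth curve, using the constants from RFC 8312 (this shows the shape of the curve only, not a full CUBIC implementation):

```python
# Sketch of CUBIC's window growth curve (RFC 8312). w_max is the window
# size (in segments) at the last loss; t is seconds since that loss.
C = 0.4           # scaling constant from the RFC
BETA_CUBIC = 0.7  # multiplicative-decrease factor

def cubic_window(t: float, w_max: float) -> float:
    # K: time for the curve to climb back to w_max if no further loss occurs.
    k = (w_max * (1 - BETA_CUBIC) / C) ** (1 / 3)
    return C * (t - k) ** 3 + w_max

# The curve rises quickly, flattens near w_max (cautious around the old
# loss point), then accelerates again to probe for new bandwidth.
for t in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(t, round(cubic_window(t, w_max=100.0), 1))
```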
Google's BBR (Bottleneck Bandwidth and RTT) is Model-Based. Instead of reacting to loss, it largely ignores stray packet drops and continuously estimates the bottleneck bandwidth and the minimum RTT of the path, pacing traffic to stay at the "sweet spot" where throughput is maximized and queuing latency is minimized.
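A back-of-the-envelope illustration of BBR's core model (the link numbers are hypothetical, and this is not the real implementation): the sweet spot is reached when roughly one bandwidth-delay product of data is in flight.

```python
# Hypothetical illustration of BBR's core model: keep the data in flight
# near the bandwidth-delay product (BDP) of the bottleneck link.
def bdp_bytes(bottleneck_bw_bps: float, min_rtt_s: float) -> float:
    """Bandwidth-delay product: bytes the path can hold without queuing."""
    return bottleneck_bw_bps / 8 * min_rtt_s

# Example: a 100 Mbit/s bottleneck with a 40 ms minimum RTT.
bdp = bdp_bytes(bottleneck_bw_bps=100e6, min_rtt_s=0.040)
print(f"Target inflight ~ {bdp / 1e3:.0f} kB")  # ~500 kB

# Less in flight than the BDP wastes capacity; more only builds a queue
# at the bottleneck, adding latency without adding throughput.
```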
Conclusion
Congestion control is the primary reason the internet survived its transition from megabits to terabits. As we move toward 6G and satellite links (Starlink) with highly variable latency, the mathematics of BBR and other advanced controllers will become even more critical to the user experience.