Bandwidth vs. Throughput
The Engineering Reality of Data Transmission
How Do We Measure Raw Capacity? The Shannon-Hartley Theorem
Before we can analyze the flow of data, we must understand the container. Bandwidth is fundamentally a measurement of the range of frequencies (spectrum) available for transmission. The capacity that a given slice of spectrum can carry is defined by the Shannon-Hartley Theorem:

C = B · log₂(1 + S/N)

where C is the channel capacity in bits per second, B is the bandwidth in hertz, and S/N is the signal-to-noise ratio expressed as a linear ratio (not in dB).
This formula teaches us that even with infinite bandwidth, the noise floor will eventually clamp the effective capacity. However, even in a noise-free environment, the Bandwidth-Delay Product (BDP) often acts as the primary bottleneck.
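To make the theorem concrete, here is a minimal sketch that evaluates the capacity formula for a hypothetical 20 MHz channel at a few signal-to-noise ratios (the channel width and SNR values are illustrative, not from the text):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Channel capacity in bits/s per the Shannon-Hartley theorem."""
    snr_linear = 10 ** (snr_db / 10)          # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical 20 MHz channel (roughly a Wi-Fi-sized slice of spectrum)
for snr_db in (10, 20, 30):
    c = shannon_capacity(20e6, snr_db)
    print(f"SNR {snr_db:2d} dB -> capacity {c / 1e6:.1f} Mbps")
```

Note that each additional 10 dB of SNR adds a roughly constant increment of capacity, which is why "cleaner" links matter as much as wider ones.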
1. The BDP Calculation
The BDP represents the maximum number of bits that can be "in flight" on the wire at any given moment. For a 1 Gbps link with a 100 ms RTT:

BDP = 1,000,000,000 bit/s × 0.1 s = 100,000,000 bits ≈ 12.5 MB
If your TCP window is smaller than 12.5 MB, the sender will stop and wait for an ACK, causing the link to sit idle—effectively reducing your throughput below the available bandwidth.
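A short sketch of both sides of this relationship: the BDP a link demands, and the throughput ceiling a too-small window imposes (the 64 KB window is a hypothetical default-sized value, not from the text):

```python
def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bandwidth-Delay Product: bits in flight on the link, expressed in bytes."""
    return link_bps * rtt_s / 8

def window_limited_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Ceiling imposed by the TCP window: at most one window per round trip."""
    return window_bytes * 8 / rtt_s

# The 1 Gbps / 100 ms example from the text
bdp = bdp_bytes(1e9, 0.100)
print(f"BDP: {bdp / 1e6:.1f} MB")

# A hypothetical 64 KB window on the same link
tput = window_limited_throughput_bps(64 * 1024, 0.100)
print(f"64 KB window caps throughput at {tput / 1e6:.2f} Mbps")
```

With a 64 KB window, the sender can use only about 5 Mbps of the 1 Gbps link, regardless of how fast the wire is.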
2. What defines Effective Throughput? The Impact of Header Tax
Throughput is the rate of *successful* message delivery. Even if every packet arrives perfectly, your application will never see the full bandwidth because of Protocol Overhead.
| Layer | Protocol | Overhead (Bytes) |
|---|---|---|
| Data Link | Ethernet II | 38 Bytes (Preamble 8, Header 14, CRC 4, IFG 12) |
| Network | IPv4 | 20 Bytes (Minimum) |
| Transport | TCP | 20 Bytes (Minimum) |
For a standard 1500-byte MTU, the TCP payload (MSS) is 1460 bytes after subtracting the 20-byte IPv4 and 20-byte TCP headers. Adding the 38 bytes of Ethernet framing, each frame occupies 1538 bytes on the wire, so the Efficiency Factor can be calculated as:

Efficiency = 1460 / 1538 ≈ 94.9%
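The per-layer tax from the table can be folded into one small calculation. This sketch assumes the overhead values shown above (standard Ethernet II, minimum IPv4 and TCP headers):

```python
def ethernet_tcp_efficiency(mtu: int = 1500,
                            eth_overhead: int = 38,  # preamble 8 + header 14 + CRC 4 + IFG 12
                            ip_header: int = 20,
                            tcp_header: int = 20) -> float:
    """Fraction of on-the-wire bytes that are application payload."""
    payload = mtu - ip_header - tcp_header   # the MSS: 1460 for a 1500-byte MTU
    wire_bytes = mtu + eth_overhead          # bytes the frame occupies on the wire
    return payload / wire_bytes

print(f"Efficiency: {ethernet_tcp_efficiency():.1%}")
```

The same function also shows why jumbo frames help: a 9000-byte MTU amortizes the fixed headers over six times as much payload.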
How does Congestion affect Network Stability?
When demand (load) exceeds capacity (bandwidth), packets must wait in queues. As we explored in our guide to Jitter, these queues introduce delay variation. Oversized buffers let those queues grow unchecked, a condition known as Bufferbloat; and when buffers finally overflow, widespread loss and simultaneous retransmissions can trigger a catastrophic drop in throughput known as "Congestion Collapse."
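The nonlinearity of queueing delay is worth seeing numerically. This sketch uses an idealized single-queue (M/M/1) model, where mean time in the system grows as 1/(1 − utilization); the 12 µs service time approximates serializing one 1500-byte frame on a 1 Gbps port, and the load points are illustrative:

```python
def queueing_delay_s(service_time_s: float, utilization: float) -> float:
    """Mean time in system for an idealized M/M/1 queue."""
    assert 0 <= utilization < 1, "at or past 100% load the queue grows without bound"
    return service_time_s / (1 - utilization)

service = 12e-6  # ~12 us to serialize a 1500-byte frame at 1 Gbps
for load in (0.5, 0.9, 0.99):
    print(f"load {load:.0%}: mean delay {queueing_delay_s(service, load) * 1e6:.0f} us")
```

Going from 50% to 99% load multiplies delay by fifty, which is why links feel fine right up until they suddenly do not.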
RFC 6349: Measuring True Throughput
Most throughput tests are performed incorrectly. Simply running a file transfer and measuring speed with a stopwatch measures many things other than the link capacity — including disk I/O, CPU overhead, and TCP slow-start behavior. The IETF's RFC 6349 Framework for TCP Throughput Testing defines a rigorous methodology:
- Measure RTT first using ICMP ping with large packets to determine the BDP.
- Configure the TCP Window to at least the BDP value to prevent window-limited throughput.
- Use multiple parallel streams if testing a high-bandwidth link, as a single TCP stream may not fill the pipe due to congestion window dynamics.
- Measure during steady-state, excluding the slow-start ramp-up period, to get a true measure of maximum achievable throughput.
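The sizing steps above can be sketched as a pre-test calculation. This is a rough helper in the spirit of RFC 6349, not the RFC's own tooling; the 10 Gbps / 40 ms path and the 4 MB per-socket window cap are hypothetical values:

```python
import math

def required_window_bytes(bottleneck_bps: float, rtt_s: float) -> int:
    """Step 2: the TCP window must cover the BDP to keep the link from idling."""
    return math.ceil(bottleneck_bps * rtt_s / 8)

def streams_needed(bottleneck_bps: float, rtt_s: float, max_window_bytes: int) -> int:
    """Step 3: parallel streams when a single window cannot cover the BDP."""
    return math.ceil(required_window_bytes(bottleneck_bps, rtt_s) / max_window_bytes)

# Hypothetical 10 Gbps path with a 40 ms RTT and a 4 MB per-socket window cap
print(f"Required window: {required_window_bytes(10e9, 0.040) / 1e6:.0f} MB")
print(f"Parallel streams needed: {streams_needed(10e9, 0.040, 4 * 1024 * 1024)}")
```

Running the numbers before the test tells you immediately whether a single-stream result can be trusted or is simply window-limited.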
Conclusion
Bandwidth is the road; throughput is the traffic that actually moves. Every physical characteristic of the network — noise, distance, cable quality — reduces your headroom from Shannon's theoretical limit. Every protocol layer adds an additional tax. Understanding these dimensions allows an engineer to systematically identify which constraint is the binding one, rather than randomly upgrading hardware hoping the problem resolves.