In a Nutshell

In modern packet-switched networks, the **throughput of a router** is rarely limited by its raw bit rate. The true bottleneck is its **packets-per-second (PPS) capacity**. As packet sizes shrink, the fixed cost of header processing in the switch ASIC or firewall CPU grows disproportionately relative to the data being moved. This "Small Packet Paradox" means a 100Gbps router may struggle to move just 10Gbps when the traffic consists primarily of 64-byte voice or ACK frames. This article provides a clinical model for calculating the **IMIX Efficiency Curve** and explores the relationship between packet size, serialization delay, and hardware saturation.


Packet Size & IMIX Modeler

A precision simulator for network component throughput. Model the relationship between packet size distributions and your hardware's processing limits.

Packet Overhead Calculator

Example calculation for a 1024-byte TCP payload carried over IPv4 and Ethernet II:

| Layer | Protocol | Bytes |
|---|---|---|
| L2 Ethernet | Ethernet II (header + FCS) | 18 |
| L3 Network | IPv4 | 20 |
| L4 Transport | TCP | 20 |
| Payload | | 1024 |
| **Total Overhead** | | **58** |
| **Total Frame Size** | | **1082 (94.6% payload efficiency)** |
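The calculator's arithmetic can be reproduced in a few lines. This sketch assumes the standard minimum header sizes shown above (no IP/TCP options, no VLAN tag):

```python
# Frame efficiency for a TCP/IPv4 payload over Ethernet II.
ETH_HEADER = 14   # dst MAC + src MAC + EtherType
ETH_FCS = 4       # frame check sequence
IPV4_HEADER = 20  # minimum IPv4 header (no options)
TCP_HEADER = 20   # minimum TCP header (no options)

def frame_efficiency(payload: int) -> tuple[int, int, float]:
    """Return (overhead_bytes, frame_bytes, payload_efficiency_pct)."""
    overhead = ETH_HEADER + ETH_FCS + IPV4_HEADER + TCP_HEADER
    frame = payload + overhead
    return overhead, frame, round(100 * payload / frame, 1)

print(frame_efficiency(1024))  # matches the calculator: (58, 1082, 94.6)
```

Note how efficiency collapses for small payloads: the same 58 bytes of overhead against a 64-byte payload yields only ~52% efficiency.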

1. The IMIX Model: Real-World Traffic Distribution

A single packet size never exists in isolation on a live network. To test for "real-world" performance, engineers use an **Internet Mix (IMIX)** profile.

Standard 7-Packet IMIX Profile

7x 64-Byte Packets (ACKs/VoIP)
4x 576-Byte Packets (Medium Data)
1x 1518-Byte Packet (Large Data)
Average Packet Size: ~356 Bytes

This mix represents the "Average" internet user. For a database backup, the mix shifts heavily toward 1518B. For a PUBG server, it shifts heavily toward 64B.
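The weighted average of the profile above is a one-line calculation; this sketch uses the simple 7:4:1 IMIX ratios:

```python
# Weighted average packet size for the simple 7:4:1 IMIX profile.
IMIX = [(7, 64), (4, 576), (1, 1518)]  # (packet count, frame bytes)

def imix_average(profile) -> float:
    packets = sum(n for n, _ in profile)
    total_bytes = sum(n * size for n, size in profile)
    return total_bytes / packets

print(round(imix_average(IMIX)))  # (7*64 + 4*576 + 1518) / 12 ≈ 356 bytes
```

Swapping in a backup-heavy or gaming-heavy profile only requires changing the `(count, size)` tuples.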

2. The PPS Envelope: ASIC Processing Limits

Every packet requires a "Header Processing Cycle" (L2/L3 lookups, ACL checks, NAT). If your processing limit is 10M PPS, you hit a wall at 10 million packets per second regardless of link speed.

$$PPS_{Max} = \frac{Throughput_{Bits}}{(Packet_{Size} + L1_{Overhead}) \cdot 8}$$

Small Packet Crisis (64B)

On a 10Gbps link saturated with 64-byte frames, you are pushing 14.88 million PPS. If your CPU or ASIC can only handle 10 million, you will see roughly 33% packet loss even though the forwarded "bandwidth" utilization is only about 67%.

Large Packet Peak (1518B)

On the same 10Gbps link, you only need about 813,000 PPS. The hardware is nearly idle, comfortably moving the same amount of data with roughly 18x less processing work.
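The formula above can be checked directly. This sketch assumes the standard 20 bytes of L1 overhead per Ethernet frame (8-byte preamble/SFD plus 12-byte inter-frame gap):

```python
# Maximum packet rate for a link, per the PPS formula above.
L1_OVERHEAD = 20  # preamble + SFD (8B) + inter-frame gap (12B)

def pps_max(link_bps: float, frame_bytes: int) -> float:
    """Packets per second a link can carry at a given frame size."""
    return link_bps / ((frame_bytes + L1_OVERHEAD) * 8)

print(f"{pps_max(10e9, 64):,.0f}")    # 64B frames on 10G: 14,880,952 pps
print(f"{pps_max(10e9, 1518):,.0f}")  # 1518B frames on 10G: 812,744 pps
```

Dividing the two results gives the ~18x gap in processing load between minimum-size and maximum-size frames.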

3. Serialization Delay: The Speed of the Bit

While large packets are efficient for throughput, they are terrible for latency. This is the **Serialization Delay**.

Wire Time (at 100Mbps)

1. **1518B Packet**: Takes 121μs to put on the wire.
2. **64B Packet**: Takes 5.1μs to put on the wire.
3. **Impact**: On a low-speed link (e.g., T1 or satcom), a large packet queued ahead of a voice packet can introduce jitter that exceeds the tolerance for VoIP.
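The wire times listed above follow from bits divided by line rate. A minimal sketch:

```python
# Serialization delay: time to clock a frame's bits onto the wire.
def serialization_us(frame_bytes: int, link_bps: float) -> float:
    """Wire time in microseconds for one frame at a given link speed."""
    return frame_bytes * 8 / link_bps * 1e6

print(round(serialization_us(1518, 100e6), 1))  # 1518B at 100Mbps: 121.4 us
print(round(serialization_us(64, 100e6), 2))    # 64B at 100Mbps:   5.12 us
print(round(serialization_us(1518, 1.544e6)))   # 1518B on a T1:    ~7867 us
```

On a T1, the same large frame occupies the wire for nearly 8ms, which is why low-speed links need fragmentation or link-layer interleaving to protect voice traffic.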

4. Fragmentation: The Invisible CPU killer

If a packet is too large for a link's MTU, it is fragmented. This is worse than simple overhead: every fragment repeats the IP header, the receiving host must buffer and reassemble the pieces in software, and the loss of any single fragment discards the entire original datagram.
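The fragment count grows in steps as the datagram outgrows the MTU. This sketch assumes IPv4 with a minimal 20-byte header; per RFC 791, every fragment except the last must carry a payload that is a multiple of 8 bytes:

```python
import math

# IPv4 fragmentation: each fragment repeats the 20-byte IP header, and
# all fragment payloads except the last are multiples of 8 bytes.
def fragment_count(ip_payload: int, mtu: int, ip_header: int = 20) -> int:
    """Number of fragments needed to carry ip_payload bytes across a link."""
    per_frag = (mtu - ip_header) // 8 * 8  # usable payload per fragment
    return math.ceil(ip_payload / per_frag)

# A 4000-byte datagram (3980B payload) over a 1500-byte MTU link:
print(fragment_count(3980, 1500))  # 3 fragments instead of 1 packet
```

Three fragments means three header-processing cycles and three reassembly buffer operations for what was logically one packet.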


Technical Standards & References

1. Bradner, S. & McQuaid, J. (IETF), RFC 2544: Benchmarking Methodology for Network Interconnect Devices
2. Spirent Communications, Understanding Internet Mix (IMIX) Traffic Profiles
3. Cisco Support, The Physics of Serialization and Propagation Delay
4. NVIDIA Networking, ASIC Architecture and PPS Limits in High-Radix Switches
Mathematical models are derived from standard engineering protocols. Not for use in safety-critical systems without redundant validation.
