author:    Eric Dumazet <edumazet@google.com>  2014-01-09 14:12:19 -0800
committer: David S. Miller <davem@davemloft.net>  2014-01-13 11:43:46 -0800
commit:    600adc18eba823f9fd8ed5fec8b04f11dddf3884 (patch)
tree:      c165e3f973dd98b731fbdd771c5eb751af475b77 /net/sched/act_csum.c
parent:    e6a767582942d6fd9da0ddea673f5a7017a365c7 (diff)
net: gro: change GRO overflow strategy
The GRO layer holds at most 8 flows in its GRO list, for
performance reasons.
When a packet arrives for a flow not yet in the list and the
list is full, we immediately hand the packet to the upper
stacks, lowering aggregation performance.
With TSO auto sizing and the FQ packet scheduler, this situation
happens more often.
This patch changes the strategy to simply evict the oldest flow
in the list. This works better because of the nature of the packet
trains for which GRO is efficient. It also lowers GRO latency
when many flows are competing.
Tested:
Used a 40Gbps NIC with 4 RX queues and 200 concurrent TCP_STREAM
netperf sessions.
Before the patch, the aggregate rate is 11Gbps (while a single flow
can reach 30Gbps).
After the patch, line rate is reached.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jerry Chu <hkchu@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>