author	Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>	2008-11-24 21:12:28 -0800
committer	David S. Miller <davem@davemloft.net>	2008-11-24 21:12:28 -0800
commit	e8bae275d9354104f7ae24a48a90d1a6286e7bd9 (patch)
tree	90f4bb2abd9eb31b3faa6393f1e164ac48b57238
parent	e1aa680fa40e7492260a09cb57d94002245cc8fe (diff)
tcp: more aggressive skipping
I knew already when rewriting the sacktag that this condition was too conservative; change it now, since it prevents a lot of useless work (especially in the sack shifter decision code that is being added by a later patch). This shouldn't change anything really, just save some processing regardless of the shifter.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
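The "too conservative" condition comes down to the boundary case of TCP's wraparound-safe sequence comparisons. Below is a minimal standalone sketch (not part of the patch) using the same before()/after() semantics as the kernel's include/net/tcp.h helpers; it shows the one case where the old and new conditions differ: an skb whose end_seq equals skip_to_seq no longer stops the skip walk.

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Wraparound-safe sequence comparisons, same semantics as include/net/tcp.h. */
	static bool before(uint32_t seq1, uint32_t seq2)
	{
		return (int32_t)(seq1 - seq2) < 0;
	}
	#define after(seq2, seq1)	before(seq1, seq2)

	int main(void)
	{
		uint32_t skip_to_seq = 1000;
		/* Boundary case: an skb that ends exactly at skip_to_seq. */
		uint32_t end_seq = 1000;

		/* Old condition: the walk stops as soon as end_seq >= skip_to_seq,
		 * so this skb is re-examined even though it lies entirely below
		 * the sequence range we want to reach. */
		printf("old condition stops walk: %d\n",
		       !before(end_seq, skip_to_seq));	/* prints 1 */

		/* New condition: the walk only stops once end_seq > skip_to_seq,
		 * so the boundary skb is skipped as well. */
		printf("new condition stops walk: %d\n",
		       after(end_seq, skip_to_seq));	/* prints 0 */

		return 0;
	}

For any skb strictly below or strictly above skip_to_seq the two conditions agree, which is why the patch saves processing without changing behaviour.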
-rw-r--r--	net/ipv4/tcp_input.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 8085704863fb..3f26599ddc88 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1402,7 +1402,7 @@ static struct sk_buff *tcp_sacktag_skip(struct sk_buff *skb, struct sock *sk,
 		if (skb == tcp_send_head(sk))
 			break;
 
-		if (!before(TCP_SKB_CB(skb)->end_seq, skip_to_seq))
+		if (after(TCP_SKB_CB(skb)->end_seq, skip_to_seq))
 			break;
 
 		*fack_count += tcp_skb_pcount(skb);