author     Tom Herbert <therbert@google.com>        2011-04-04 22:30:30 -0700
committer  David S. Miller <davem@davemloft.net>    2011-04-04 22:30:30 -0700
commit     c6e1a0d12ca7b4f22c58e55a16beacfb7d3d8462 (patch)
tree       6955c20538050329d0bdffdf24a787507ae6fdf1 /include/net/sock.h
parent     14f98f258f1936e0dba77474bd7eda63f61a9826 (diff)
net: Allow no-cache copy from user on transmit
This patch uses __copy_from_user_nocache on transmit to bypass the data
cache for a performance improvement. skb_add_data_nocache and
skb_copy_to_page_nocache can be called by sendmsg functions to use
this feature; initial support is in tcp_sendmsg. The functionality is
configurable per device using ethtool.
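For orientation, the sketch below shows how a sendmsg-style fast path
might use the two helpers as drop-in replacements for skb_add_data()
and skb_copy_to_page(). It is a hypothetical fragment: the real call
sites added by this patch are in net/ipv4/tcp.c (outside this
sock.h-limited view), and the function and variable names here are
illustrative. On later kernels the per-device toggle is exposed as the
ethtool feature flag tx-nocache-copy (e.g. "ethtool -K eth0
tx-nocache-copy off", where eth0 is likewise illustrative).

/* Sketch (not from this patch): a sendmsg-style path choosing between
 * the skb's linear area and a paged fragment, using the new helpers.
 */
#include <net/sock.h>

static int sendmsg_copy_sketch(struct sock *sk, struct sk_buff *skb,
                               char __user *from, struct page *page,
                               int off, int copy)
{
        if (skb_tailroom(skb) > 0) {
                /* Room left in the skb head: append there. */
                if (copy > skb_tailroom(skb))
                        copy = skb_tailroom(skb);
                return skb_add_data_nocache(sk, skb, from, copy);
        }

        /* No tailroom left: copy into the current paged fragment. */
        return skb_copy_to_page_nocache(sk, from, skb, page, off, copy);
}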
Presumably, this feature would only be useful when the driver does
not touch the data. The feature is turned on by default if a device
indicates that it does some form of checksum offload; it is off by
default for devices that do no checksum offload or that indicate no
checksum is necessary. In the former case a copy-and-checksum would
probably be done anyway; in the latter case the device is likely
loopback, where the no-cache copy is probably not beneficial.
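The default described above is applied when the device registers. The
fragment below is a hedged reconstruction of that logic: the
corresponding hunk of this patch lives in net/core/dev.c and is not
shown in this sock.h-limited diffstat, and the helper name
netdev_default_nocache_copy() is hypothetical (the real logic sits
inline in register_netdevice()).

/* Sketch: a device advertising any checksum offload gets the no-cache
 * copy enabled by default; ethtool can still flip it either way later.
 */
#include <linux/netdevice.h>

static void netdev_default_nocache_copy(struct net_device *dev)
{
        /* Always allow the feature to be toggled from ethtool. */
        dev->hw_features |= NETIF_F_NOCACHE_COPY;

        /* Enable by default only when HW does the checksumming. */
        if (dev->features & NETIF_F_ALL_CSUM) {
                dev->wanted_features |= NETIF_F_NOCACHE_COPY;
                dev->features |= NETIF_F_NOCACHE_COPY;
        }
}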
This patch was tested using 200 instances of netperf TCP_RR with a
1400-byte request and a one-byte reply. The platform was a 16-core AMD
x86 system.
No-cache copy disabled:
672703 tps, 97.13% utilization
50/90/99% latency: 244.31 484.205 1028.41

No-cache copy enabled:
702113 tps, 96.16% utilization
50/90/99% latency: 238.56 467.56 956.955
Using 14000-byte request and response sizes demonstrates the effects
more dramatically:
No-cache copy disabled:
79571 tps, 34.34% utilization
50/90/95% latency: 1584.46 2319.59 5001.76

No-cache copy enabled:
83856 tps, 34.81% utilization
50/90/95% latency: 2508.42 2622.62 2735.88
Note especially the effect on the latency tail (95th percentile).
This seems to provide a nice performance improvement and was
consistent across the tests I ran. Presumably, it would provide the
greatest benefit in the presence of an application workload that
stresses the cache combined with a high volume of transmit data.
Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/net/sock.h')
-rw-r--r--   include/net/sock.h   53
1 file changed, 53 insertions(+), 0 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index da0534d3401c..43bd515e92fd 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -52,6 +52,7 @@
 #include <linux/mm.h>
 #include <linux/security.h>
 #include <linux/slab.h>
+#include <linux/uaccess.h>
 
 #include <linux/filter.h>
 #include <linux/rculist_nulls.h>
@@ -1389,6 +1390,58 @@ static inline void sk_nocaps_add(struct sock *sk, int flags)
 	sk->sk_route_caps &= ~flags;
 }
 
+static inline int skb_do_copy_data_nocache(struct sock *sk, struct sk_buff *skb,
+                                           char __user *from, char *to,
+                                           int copy, int offset)
+{
+        if (skb->ip_summed == CHECKSUM_NONE) {
+                int err = 0;
+                __wsum csum = csum_and_copy_from_user(from, to, copy, 0, &err);
+                if (err)
+                        return err;
+                skb->csum = csum_block_add(skb->csum, csum, offset);
+        } else if (sk->sk_route_caps & NETIF_F_NOCACHE_COPY) {
+                if (!access_ok(VERIFY_READ, from, copy) ||
+                    __copy_from_user_nocache(to, from, copy))
+                        return -EFAULT;
+        } else if (copy_from_user(to, from, copy))
+                return -EFAULT;
+
+        return 0;
+}
+
+static inline int skb_add_data_nocache(struct sock *sk, struct sk_buff *skb,
+                                       char __user *from, int copy)
+{
+        int err, offset = skb->len;
+
+        err = skb_do_copy_data_nocache(sk, skb, from, skb_put(skb, copy),
+                                       copy, offset);
+        if (err)
+                __skb_trim(skb, offset);
+
+        return err;
+}
+
+static inline int skb_copy_to_page_nocache(struct sock *sk, char __user *from,
+                                           struct sk_buff *skb,
+                                           struct page *page,
+                                           int off, int copy)
+{
+        int err;
+
+        err = skb_do_copy_data_nocache(sk, skb, from,
+                                       page_address(page) + off, copy, skb->len);
+        if (err)
+                return err;
+
+        skb->len += copy;
+        skb->data_len += copy;
+        skb->truesize += copy;
+        sk->sk_wmem_queued += copy;
+        sk_mem_charge(sk, copy);
+        return 0;
+}
+
 static inline int skb_copy_to_page(struct sock *sk, char __user *from,
                                    struct sk_buff *skb, struct page *page,
                                    int off, int copy)
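For intuition about the mechanism, __copy_from_user_nocache() on x86
uses non-temporal (movnti-family) stores so that bulk payload does not
evict the application's working set from the data cache. The
standalone userspace sketch below illustrates the same idea with SSE2
intrinsics; the function name and the simplified alignment handling
are assumptions of this sketch, not kernel code.

/* Userspace sketch of a cache-bypassing copy using SSE2 non-temporal
 * stores.  Streamed stores go around the data cache, so a large
 * transmit payload does not displace useful cache lines.
 * Simplified: an unaligned destination falls back to plain memcpy.
 */
#include <emmintrin.h>
#include <stdint.h>
#include <string.h>

static void copy_nocache(void *dst, const void *src, size_t len)
{
        char *d = dst;
        const char *s = src;

        if (((uintptr_t)d & 15) == 0) {
                /* 16-byte non-temporal stores for the bulk of the copy. */
                while (len >= 16) {
                        __m128i v = _mm_loadu_si128((const __m128i *)s);
                        _mm_stream_si128((__m128i *)d, v);
                        d += 16;
                        s += 16;
                        len -= 16;
                }
                /* Order the streaming stores before later accesses. */
                _mm_sfence();
        }

        /* Tail (and the unaligned-destination fallback): cached copy. */
        memcpy(d, s, len);
}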