| author | Chen, Kenneth W <kenneth.w.chen@intel.com> | 2006-03-22 00:09:03 -0800 |
| --- | --- | --- |
| committer | Linus Torvalds <torvalds@g5.osdl.org> | 2006-03-22 07:54:04 -0800 |
| commit | d5d4b0aa4e1430d73050babba999365593bdb9d2 (patch) | |
| tree | 67199d156f61217f9493d31aa4a9bfbb9c97412e /mm/vmscan.c | |
| parent | bba1e9b2111b14625f670bd07e57fd7ed57ce804 (diff) | |
[PATCH] optimize follow_hugetlb_page
follow_hugetlb_page() walks a range of user virtual addresses and fills in a
list of struct page pointers into an array passed in via the argument list.
It also takes a reference on each page via get_page(). For a compound page,
get_page() actually traverses back to the head page via the page_private()
macro and then increments the reference count on the head page. Since we are
doing a virt-to-pte lookup, the kernel already has a struct page pointer to
the head page. So instead of descending into the small constituent page
struct and then following a link back to the head page, optimize this by
incrementing the reference count directly on the head page.
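For illustration, a minimal sketch of the resulting shape of the
follow_hugetlb_page() loop body follows; the variable names (head, vaddr,
pte, pages, i, pfn_offset) are assumed from the hugetlb code of that era,
and this is not the literal diff:

```c
/*
 * Sketch only: approximates the loop body described above;
 * names and details are illustrative, not the exact patch.
 */
struct page *head = pte_page(*pte);	/* huge PTE maps the head page */
unsigned long pfn_offset = (vaddr & ~HPAGE_MASK) >> PAGE_SHIFT;

if (pages) {
	/*
	 * Old scheme: take the reference on the constituent page;
	 * get_page() on a tail page chases page_private() back to
	 * the head page, costing an extra cache miss:
	 *
	 *	pages[i] = head + pfn_offset;
	 *	get_page(pages[i]);
	 *
	 * New scheme: the virt-to-pte lookup already gave us the
	 * head page, so bump its refcount directly and still hand
	 * the caller the constituent page it asked for.
	 */
	get_page(head);
	pages[i] = head + pfn_offset;
}
```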
The benefit is that we don't take a cache miss on accessing the page struct
for the corresponding user address and, more importantly, we don't pollute
the cache with a "not very useful" round trip of pointer chasing. This gives
a moderate performance gain on an I/O-intensive database transaction
workload.
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm/vmscan.c')
0 files changed, 0 insertions, 0 deletions