path: root/mm
Commit message [Author, Age, Files, Lines -/+]
...
* | | mm/mmap.c: eliminate the ret variable from mm_take_all_locks() [Kautuk Consul, 2011-10-31, 1 file, -6/+3]
* | | mm-add-comment-explaining-task-state-setting-in-bdi_forker_thread-fix [Andrew Morton, 2011-10-31, 1 file, -3/+2]
* | | ksm: fix the comment of try_to_unmap_one() [Wanlong Gao, 2011-10-31, 1 file, -1/+1]
* | | mm/vmalloc.c: report more vmalloc failures [Joe Perches, 2011-10-31, 1 file, -3/+8]
* | | kswapd: assign new_order and new_classzone_idx after wakeup in sleeping [Alex Shi, 2011-10-31, 1 file, -0/+2]
* | | mm/memblock.c: small function definition fixes [Jonghwan Choi, 2011-10-31, 1 file, -1/+1]
* | | kswapd: avoid unnecessary rebalance after an unsuccessful balancing [Alex Shi, 2011-10-31, 1 file, -3/+11]
* | | debug-pagealloc: add support for highmem pages [Akinobu Mita, 2011-10-31, 1 file, -34/+10]
* | | mm: neaten warn_alloc_failed [Joe Perches, 2011-10-31, 2 files, -7/+13]
* | | thp: mremap support and TLB optimization [Andrea Arcangeli, 2011-10-31, 2 files, -4/+63]
* | | mremap: avoid sending one IPI per page [Andrea Arcangeli, 2011-10-31, 1 file, -6/+9]
* | | mremap: check for overflow using deltas [Andrea Arcangeli, 2011-10-31, 1 file, -2/+3]
* | | memblock: add NO_BOOTMEM config symbol [Sam Ravnborg, 2011-10-31, 1 file, -0/+3]
* | | memblock: add memblock_start_of_DRAM() [Sam Ravnborg, 2011-10-31, 1 file, -0/+6]
* | | mm: avoid null pointer access in vm_struct via /proc/vmallocinfo [Mitsuo Hayasaka, 2011-10-31, 1 file, -17/+48]
* | | mm/debug-pagealloc.c: use memchr_inv [Akinobu Mita, 2011-10-31, 1 file, -5/+3]
* | | lib/string.c: introduce memchr_inv() [Akinobu Mita, 2011-10-31, 1 file, -45/+2]
* | | mm/debug-pagealloc.c: use plain __ratelimit() instead of printk_ratelimit() [Akinobu Mita, 2011-10-31, 1 file, -1/+3]
* | | vmscan: count pages into balanced for zone with good watermark [Shaohua Li, 2011-10-31, 1 file, -0/+2]
* | | mm: vmscan: immediately reclaim end-of-LRU dirty pages when writeback completes [Mel Gorman, 2011-10-31, 2 files, -2/+10]
* | | mm: vmscan: throttle reclaim if encountering too many dirty pages under write... [Mel Gorman, 2011-10-31, 1 file, -3/+39]
* | | mm: vmscan: do not writeback filesystem pages in kswapd except in high priority [Mel Gorman, 2011-10-31, 1 file, -5/+8]
* | | mm: vmscan: remove dead code related to lumpy reclaim waiting on pages under ... [Mel Gorman, 2011-10-31, 1 file, -16/+5]
* | | mm: vmscan: do not writeback filesystem pages in direct reclaim [Mel Gorman, 2011-10-31, 2 files, -0/+10]
* | | mm: vmscan: drop nr_force_scan[] from get_scan_count [Johannes Weiner, 2011-10-31, 1 file, -24/+12]
* | | mm: output a list of loaded modules when we hit bad_page() [Dave Jones, 2011-10-31, 1 file, -0/+1]
* | | oom: fix race while temporarily setting current's oom_score_adj [David Rientjes, 2011-10-31, 3 files, -2/+22]
* | | oom: remove oom_disable_count [David Rientjes, 2011-10-31, 1 file, -18/+5]
* | | oom: avoid killing kthreads if they assume the oom killed thread's mm [David Rientjes, 2011-10-31, 1 file, -2/+3]
* | | oom: thaw threads if oom killed thread is frozen before deferring [David Rientjes, 2011-10-31, 1 file, -1/+5]
* | | mm/page-writeback.c: document bdi_min_ratio [Johannes Weiner, 2011-10-31, 1 file, -1/+3]
* | | vmscan: add block plug for page reclaim [Shaohua Li, 2011-10-31, 1 file, -0/+3]
* | | mm: migration: clean up unmap_and_move() [Minchan Kim, 2011-10-31, 1 file, -35/+40]
* | | mm: zone_reclaim: make isolate_lru_page() filter-aware [Minchan Kim, 2011-10-31, 1 file, -2/+18]
* | | mm: compaction: make isolate_lru_page() filter-aware [Minchan Kim, 2011-10-31, 2 files, -2/+8]
* | | mm: change isolate mode from #define to bitwise type [Minchan Kim, 2011-10-31, 3 files, -19/+24]
* | | mm: compaction: trivial clean up in acct_isolated() [Minchan Kim, 2011-10-31, 1 file, -13/+5]
* | | Cross Memory Attach [Christopher Yeoh, 2011-10-31, 2 files, -1/+498]
| |/ |/|
* | Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/hch/... [Linus Torvalds, 2011-10-28, 1 file, -0/+3]
|\ \
| * | vfs: iov_iter: have iov_iter_advance decrement nr_segs appropriately [Jeff Layton, 2011-10-28, 1 file, -0/+3]
| | |
| \ \
*-. \ \ Merge branches 'slab/next' and 'slub/partial' into slab/for-linus [Pekka Enberg, 2011-10-26, 2 files, -178/+399]
|\ \ \ \
| | * | | slub: Discard slab page when node partial > minimum partial number [Alex Shi, 2011-09-27, 1 file, -1/+1]
| | * | | slub: correct comments error for per cpu partial [Alex Shi, 2011-09-27, 1 file, -1/+1]
| | * | | slub: Code optimization in get_partial_node() [Alex Shi, 2011-09-13, 1 file, -4/+2]
| | * | | slub: per cpu cache for partial pages [Christoph Lameter, 2011-08-19, 1 file, -47/+292]
| | * | | slub: return object pointer from get_partial() / new_slab(). [Christoph Lameter, 2011-08-19, 1 file, -60/+73]
| | * | | slub: pass kmem_cache_cpu pointer to get_partial() [Christoph Lameter, 2011-08-19, 1 file, -15/+15]
| | * | | slub: Prepare inuse field in new_slab() [Christoph Lameter, 2011-08-19, 1 file, -3/+2]
| | * | | slub: Remove useless statements in __slab_alloc [Christoph Lameter, 2011-08-19, 1 file, -4/+0]
| | * | | slub: free slabs without holding locks [Christoph Lameter, 2011-08-19, 1 file, -13/+13]