author     Dmitry Torokhov <dmitry.torokhov@gmail.com>  2015-07-20 10:08:17 -0700
committer  Dmitry Torokhov <dmitry.torokhov@gmail.com>  2015-07-20 10:08:17 -0700
commit     c57d5621d2f2dc238f4b9c4d00b2a54187a75445 (patch)
tree       ece13738a44545fb110e5d73adbf2625bc7a1ea6 /mm/percpu.c
parent     6ccfe64c770139675a080ee5029ded7d89d9ea0d (diff)
parent     52721d9d3334c1cb1f76219a161084094ec634dc (diff)
Merge tag 'v4.2-rc3' into next
Sync up with Linux 4.2-rc3 to bring in infrastructure (OF) pieces.
Diffstat (limited to 'mm/percpu.c')
-rw-r--r--  mm/percpu.c  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/percpu.c b/mm/percpu.c
index 73c97a5f4495..2dd74487a0af 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1030,7 +1030,7 @@ area_found:
 		memset((void *)pcpu_chunk_addr(chunk, cpu, 0) + off, 0, size);
 
 	ptr = __addr_to_pcpu_ptr(chunk->base_addr + off);
-	kmemleak_alloc_percpu(ptr, size);
+	kmemleak_alloc_percpu(ptr, size, gfp);
 	return ptr;
 
 fail_unlock:
@@ -1310,7 +1310,7 @@ bool is_kernel_percpu_address(unsigned long addr)
  * and, from the second one, the backing allocator (currently either vm or
  * km) provides translation.
  *
- * The addr can be tranlated simply without checking if it falls into the
+ * The addr can be translated simply without checking if it falls into the
  * first chunk. But the current code reflects better how percpu allocator
  * actually works, and the verification can discover both bugs in percpu
  * allocator itself and per_cpu_ptr_to_phys() callers. So we keep current
@@ -1762,7 +1762,7 @@ early_param("percpu_alloc", percpu_alloc_setup);
  * and other parameters considering needed percpu size, allocation
  * atom size and distances between CPUs.
  *
- * Groups are always mutliples of atom size and CPUs which are of
+ * Groups are always multiples of atom size and CPUs which are of
  * LOCAL_DISTANCE both ways are grouped together and share space for
  * units in the same group. The returned configuration is guaranteed
  * to have CPUs on different nodes on different groups and >=75% usage
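The first hunk reflects kmemleak_alloc_percpu() gaining a gfp parameter on the 4.2-rc side of this merge, so pcpu_alloc() now forwards its caller's GFP flags to kmemleak rather than leaving kmemleak to assume a sleeping context. Below is a minimal, hypothetical caller sketch, not part of this commit (foo_stats, foo_init_stats and foo_exit_stats are made-up names), illustrating the kind of allocation whose flags are now propagated:

/*
 * Hypothetical caller sketch, not from this commit: a per-cpu allocation
 * made with explicit GFP flags.  With the change above, pcpu_alloc()
 * passes these same flags on to kmemleak_alloc_percpu() when it registers
 * the object for leak tracking.
 */
#include <linux/types.h>
#include <linux/percpu.h>
#include <linux/gfp.h>
#include <linux/errno.h>

struct foo_stats {			/* made-up example type */
	u64	packets;
	u64	bytes;
};

static struct foo_stats __percpu *foo_stats;

static int foo_init_stats(void)
{
	/* GFP_ATOMIC is now visible to kmemleak's own bookkeeping as well */
	foo_stats = alloc_percpu_gfp(struct foo_stats, GFP_ATOMIC);
	if (!foo_stats)
		return -ENOMEM;
	return 0;
}

static void foo_exit_stats(void)
{
	free_percpu(foo_stats);		/* also unregisters the object from kmemleak */
}

The remaining two hunks are spelling fixes inside comments and do not change behavior.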