author     Christoph Lameter <cl@linux.com>    2013-01-23 21:45:48 +0000
committer  Pekka Enberg <penberg@kernel.org>   2013-04-05 14:23:06 +0300
commit     7cccd80b4397699902aced1ad3d692d384aaab77 (patch)
tree       010bad7b7e3d3969f6050406b448fbcbc57cdca0
parent     4d7868e6475d478172581828021bd8a356726679 (diff)
slub: tid must be retrieved from the percpu area of the current processor
As Steven Rostedt has pointed out: rescheduling could occur on a different
processor after the determination of the per cpu pointer and before the tid
is retrieved. This could result in allocation from the wrong node in
slab_alloc(). The effect is much more severe in slab_free(), where we could
free to the freelist of the wrong page. The window for something like that
occurring is pretty small, but it is possible.

Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
 mm/slub.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)
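The window the commit message describes can be pictured as follows. This is a
simplified sketch rather than the literal mm/slub.c code; the scheduling
annotations mark where the problematic reschedule is assumed to happen:

	/* Pre-patch sequence in the alloc/free fast paths: */
	c = __this_cpu_ptr(s->cpu_slab);	/* per cpu data of cpu A */
						/* <-- reschedule onto cpu B can happen here */
	tid = c->tid;				/* tid read from cpu A's area while running on cpu B */

	/*
	 * The tid is meant to tie the freelist/page sampled next to the cpu
	 * that later performs the cmpxchg. If the pointer and the tid are
	 * sampled across a reschedule, that tie is weakened: slab_alloc()
	 * can allocate from the wrong node, and slab_free() can push the
	 * object onto the freelist of the wrong page.
	 *
	 * The patch closes the window by sampling both on one processor:
	 */
	preempt_disable();
	c = __this_cpu_ptr(s->cpu_slab);
	tid = c->tid;
	preempt_enable();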
diff --git a/mm/slub.c b/mm/slub.c
index 8b1b99d399cb..4df2c0c337fb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2332,13 +2332,18 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	s = memcg_kmem_get_cache(s, gfpflags);
 
 redo:
-
 	/*
 	 * Must read kmem_cache cpu data via this cpu ptr. Preemption is
 	 * enabled. We may switch back and forth between cpus while
 	 * reading from one cpu area. That does not matter as long
 	 * as we end up on the original cpu again when doing the cmpxchg.
+	 *
+	 * Preemption is disabled for the retrieval of the tid because that
+	 * must occur from the current processor. We cannot allow rescheduling
+	 * on a different processor between the determination of the pointer
+	 * and the retrieval of the tid.
 	 */
+	preempt_disable();
 	c = __this_cpu_ptr(s->cpu_slab);
 
 	/*
@@ -2348,7 +2353,7 @@ redo:
 	 * linked list in between.
 	 */
 	tid = c->tid;
-	barrier();
+	preempt_enable();
 
 	object = c->freelist;
 	page = c->page;
@@ -2595,10 +2600,11 @@ redo:
 	 * data is retrieved via this pointer. If we are on the same cpu
 	 * during the cmpxchg then the free will succedd.
 	 */
+	preempt_disable();
 	c = __this_cpu_ptr(s->cpu_slab);
 
 	tid = c->tid;
-	barrier();
+	preempt_enable();
 
 	if (likely(page == c->page)) {
 		set_freepointer(s, object, c->freelist);
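For orientation, the tid sampled in the hunk above is consumed a few lines
further down by the lockless cmpxchg that the comments refer to. The sketch
below is not part of this patch and is reconstructed from the surrounding
code, so treat the exact arguments as approximate:

	if (likely(page == c->page)) {
		set_freepointer(s, object, c->freelist);

		/*
		 * Only succeeds if this cpu's freelist and tid still match
		 * the values sampled above; a migration or an interleaved
		 * alloc/free changes the tid and forces a retry.
		 */
		if (unlikely(!this_cpu_cmpxchg_double(
				s->cpu_slab->freelist, s->cpu_slab->tid,
				c->freelist, tid,
				object, next_tid(tid))))
			goto redo;
	} else
		__slab_free(s, page, x, addr);	/* take the slow path */

Because the cmpxchg re-validates the tid on the cpu it actually runs on, it
is sufficient that the pointer and the tid were sampled together on one cpu;
preemption can therefore be re-enabled before the cmpxchg itself.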