author    Ben Blum <bblum@andrew.cmu.edu>    2011-11-02 13:38:05 -0700
committer Ziyan <jaraidaniel@gmail.com>      2016-01-08 10:43:05 +0100
commit    eadda7dcfbee8bce9d892211e6649f6329f991b3 (patch)
tree      f7a33965ac441983ba4cd60765b08e8ee3d14c7d /kernel
parent    798fc122bcec00eb7e1841e5353246fba2a7f259 (diff)
cgroups: more safe tasklist locking in cgroup_attach_proc
Fix unstable tasklist locking in cgroup_attach_proc.
According to this thread - https://lkml.org/lkml/2011/7/27/243 - RCU is
not sufficient to guarantee the tasklist is stable w.r.t. de_thread and
exit. Taking tasklist_lock for reading, instead of rcu_read_lock, ensures
proper exclusion.
Signed-off-by: Ben Blum <bblum@andrew.cmu.edu>
Acked-by: Paul Menage <paul@paulmenage.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'kernel')
-rw-r--r--    kernel/cgroup.c    6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 1b15cf2..c3cebf2 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -2041,7 +2041,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
 		goto out_free_group_list;
 
 	/* prevent changes to the threadgroup list while we take a snapshot. */
-	rcu_read_lock();
+	read_lock(&tasklist_lock);
 	if (!thread_group_leader(leader)) {
 		/*
 		 * a race with de_thread from another thread's exec() may strip
@@ -2050,7 +2050,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
 		 * throw this task away and try again (from cgroup_procs_write);
 		 * this is "double-double-toil-and-trouble-check locking".
 		 */
-		rcu_read_unlock();
+		read_unlock(&tasklist_lock);
 		retval = -EAGAIN;
 		goto out_free_group_list;
 	}
@@ -2075,7 +2075,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
 	} while_each_thread(leader, tsk);
 	/* remember the number of threads in the array for later. */
 	group_size = i;
-	rcu_read_unlock();
+	read_unlock(&tasklist_lock);
 
 	/*
 	 * step 1: check that we can legitimately attach to the cgroup.