author	Brian King <brking@linux.vnet.ibm.com>	2010-09-10 09:03:21 +0200
committer	Jens Axboe <jaxboe@fusionio.com>	2010-09-10 09:03:21 +0200
commit	be14eb619108fa8b7120eb2c42d66d5f623ae10e (patch)
tree	cfc8a496c62a429a97c4bc2c5e638c39374f560b /block
parent	edce6820a9fdda85521211cb334a183e34cc455e (diff)
block: Range check cpu in blk_cpu_to_group
While testing CPU DLPAR, the following problem was discovered. We were DLPAR removing the first CPU, which in this case was logical CPUs 0-3. CPUs 0-2 were already marked offline and we were in the process of offlining CPU 3. After marking the CPU inactive and offline in cpu_disable, but before the cpu was completely idle (cpu_die), we ended up in __make_request on CPU 3. There we looked at the topology map to see which CPU to complete the I/O on and found no CPUs in the cpu_sibling_map. This resulted in the block layer setting the completion cpu to be NR_CPUS, which then caused an oops when we tried to complete the I/O.

Fix this by sanity checking the value we return from blk_cpu_to_group to be a valid cpu value.

Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
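For illustration only, not part of the patch: a minimal sketch of the guard pattern the fix introduces. cpumask_first() returns a value >= nr_cpu_ids when the mask is empty, which is what happens here once the departing CPU has been cleared from its sibling/coregroup masks, so the lookup result has to be range checked before it is used as a CPU index. The helper name below is hypothetical:

/*
 * Hypothetical sketch of the guard pattern: fall back to the
 * submitting CPU when the topology mask is empty during hotplug.
 */
static inline int pick_group_cpu(int cpu, const struct cpumask *mask)
{
	int group = cpumask_first(mask);	/* >= nr_cpu_ids if mask is empty */

	if (likely(group < nr_cpu_ids))
		return group;
	return cpu;				/* submitting CPU is always a valid id */
}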
Diffstat (limited to 'block')
-rw-r--r--	block/blk.h	8
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/block/blk.h b/block/blk.h
index 6e7dc87..d6b911a 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -142,14 +142,18 @@ static inline int queue_congestion_off_threshold(struct request_queue *q)
 static inline int blk_cpu_to_group(int cpu)
 {
+	int group = NR_CPUS;
 #ifdef CONFIG_SCHED_MC
 	const struct cpumask *mask = cpu_coregroup_mask(cpu);
-	return cpumask_first(mask);
+	group = cpumask_first(mask);
 #elif defined(CONFIG_SCHED_SMT)
-	return cpumask_first(topology_thread_cpumask(cpu));
+	group = cpumask_first(topology_thread_cpumask(cpu));
 #else
 	return cpu;
 #endif
+	if (likely(group < NR_CPUS))
+		return group;
+	return cpu;
 }
 
 /*
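A rough usage sketch (the function name below is hypothetical; the commit message says the real consumer in this path is __make_request): with the range check in place, the completion CPU recorded at submission time is always a valid CPU id, even while the submitting CPU is being offlined:

/* Hypothetical caller illustrating how the helper's result is consumed. */
static void set_completion_cpu(struct request *req)
{
	int cpu = get_cpu();			/* pin to the submitting CPU */

	req->cpu = blk_cpu_to_group(cpu);	/* now guaranteed < NR_CPUS */
	put_cpu();
}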