author | Tejun Heo <tj@kernel.org> | 2010-08-25 10:33:56 +0200 |
---|---|---|
committer | Tejun Heo <tj@kernel.org> | 2010-08-25 10:33:56 +0200 |
commit | 8a2e8e5dec7e29c56a46ba176c664ab6a3d04118 (patch) | |
tree | 57da96451bead4986dfcd82aadf47ba2c05745ac /sound/sh | |
parent | e41e704bc4f49057fc68b643108366e6e6781aa3 (diff) | |
workqueue: fix cwq->nr_active underflow
cwq->nr_active is used to keep track of how many work items are active
for the cpu workqueue, where 'active' is defined as either pending on
the global worklist or executing. This is used to implement the
max_active limit and workqueue freezing. If a work item is queued
after nr_active has already reached max_active, the work item doesn't
increment nr_active; it is put on the delayed queue instead and gets
activated later as previous active work items retire.
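As a rough sketch of that queueing decision (simplified for illustration,
not the actual __queue_work() code; the struct layout and the gcwq
back-pointer are assumed here):

	/*
	 * Simplified sketch of the queueing decision described above.
	 * Not the real kernel code; field names follow the commit message.
	 */
	static void queue_work_sketch(struct cpu_workqueue_struct *cwq,
				      struct work_struct *work)
	{
		struct list_head *worklist;

		if (cwq->nr_active < cwq->max_active) {
			/* counts as active: goes on the global worklist */
			cwq->nr_active++;
			worklist = &cwq->gcwq->worklist;
		} else {
			/* over the limit: parked on the per-cwq delayed list */
			worklist = &cwq->delayed_works;
		}

		list_add_tail(&work->entry, worklist);
	}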
try_to_grab_pending(), which is used in the cancellation path,
unconditionally decremented nr_active regardless of whether the work
item being cancelled was currently active or delayed, so cancelling a
delayed work item made nr_active underflow. This breaks max_active
enforcement and triggers the BUG_ON() in destroy_workqueue() later on.
This patch fixes the bug by adding a flag, WORK_STRUCT_DELAYED, which
is set while a work item is on the delayed list, and by making
try_to_grab_pending() decrement nr_active iff the work item is
currently active.
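A minimal sketch of the two sides of the fix, reusing the simplified
structures from the sketch above (again illustrative, not the actual
try_to_grab_pending() code):

	/* When a work item is parked on the delayed list, mark it as such. */
	static void queue_delayed_sketch(struct cpu_workqueue_struct *cwq,
					 struct work_struct *work)
	{
		set_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
		list_add_tail(&work->entry, &cwq->delayed_works);
	}

	/* In the cancellation path, only an active item contributed to nr_active. */
	static void grab_pending_sketch(struct cpu_workqueue_struct *cwq,
					struct work_struct *work)
	{
		bool delayed = test_bit(WORK_STRUCT_DELAYED_BIT,
					work_data_bits(work));

		list_del_init(&work->entry);
		if (!delayed)
			cwq->nr_active--;	/* undo the active contribution */
	}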
The addition of the flag enlarges cwq alignment to 256 bytes (the cwq
pointer shares work->data with the flag bits, so each extra flag bit
doubles the required alignment), which is getting a bit too large.
It's scheduled to be reduced back to 128 bytes by merging
WORK_STRUCT_PENDING and WORK_STRUCT_CWQ in the next devel cycle.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Johannes Berg <johannes@sipsolutions.net>
Diffstat (limited to 'sound/sh')
0 files changed, 0 insertions, 0 deletions