path: root/block/ll_rw_blk.c
Commit log for this file, most recent first. Each entry lists the commit
message, author, date, files touched, and line counts (-deleted/+added).
* Merge mulgrave-w:git/linux-2.6 (James Bottomley, 2006-09-23; 1 file, -0/+12)
  Conflicts: include/linux/blkdev.h
  Trivial merge to incorporate tag prototypes.
* Add a real API for dealing with blk_congestion_wait() (Trond Myklebust, 2006-09-22; 1 file, -0/+12)
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* [SCSI] block: add support for shared tag maps (James Bottomley, 2006-08-31; 1 file, -21/+88)
  The current block queue implementation already contains most of the
  machinery for shared tag maps. The only remaining piece is a way to
  allocate and destroy a tag map independently of the queues, so that
  the maps can be managed on the life cycle of the overseeing entity.
  Acked-by: Jens Axboe <axboe@kernel.dk>
  Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
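  A rough sketch of how a driver might consume a standalone shared tag
  map under this scheme; the helper names (blk_init_tags(),
  blk_free_tags()) and exact signatures are assumptions to check against
  the tree:

      /* host brings up one tag map shared by all of its queues */
      struct blk_queue_tag *tags = blk_init_tags(depth);
      if (!tags)
              return -ENOMEM;

      /* each device queue attaches to the shared map */
      if (blk_queue_init_tags(sdev->request_queue, depth, tags))
              goto fail;

      /* on host teardown, once every queue is gone */
      blk_free_tags(tags);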
* [PATCH] Fix current_io_context() vs set_task_ioprio() race (Oleg Nesterov, 2006-08-21; 1 file, -0/+2)
  I know nothing about the io scheduler, but I suspect set_task_ioprio()
  is not safe. current_io_context() initializes "struct io_context", then
  sets ->io_context. set_task_ioprio() running on another cpu may see the
  changes out of order, so ->set_ioprio(ioc) may use an io_context which
  was not initialized properly.
  Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
  Signed-off-by: Jens Axboe <axboe@suse.de>
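  Given the -0/+2 diff, the fix is plausibly a single write barrier (plus
  its comment) placed before publishing ->io_context; a hedged sketch,
  with the initialized fields treated as illustrative:

      /* ... finish initializing the new io_context ... */
      ret->set_ioprio = NULL;
      /* make sure set_task_ioprio() sees the settings above */
      smp_wmb();
      tsk->io_context = ret;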
* [PATCH] Only the first two bits in bio->bi_rw and rq->flags match (Jens Axboe, 2006-07-06; 1 file, -2/+2)
  Not three, as assumed. This causes the barrier bit to be needlessly set
  for some IO.
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] lockdep: annotate on-stack completions (Ingo Molnar, 2006-07-03; 1 file, -1/+1)
  lockdep needs to have the waitqueue lock initialized for on-stack
  waitqueues implicitly initialized by DECLARE_COMPLETION(). Annotate
  on-stack completions accordingly. Has no effect on non-lockdep kernels.
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
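  For a one-line change, the annotation presumably swaps the declaration
  macro on a stack-allocated completion (assumed to be the wait
  completion in blk_execute_rq()):

      /* before: lockdep cannot tell this completion lives on the stack */
      DECLARE_COMPLETION(wait);

      /* after: explicitly annotated for lockdep */
      DECLARE_COMPLETION_ONSTACK(wait);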
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial (Linus Torvalds, 2006-06-30; 1 file, -1/+0)
  * git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial:
      Remove obsolete #include <linux/config.h>
      remove obsolete swsusp_encrypt
      arch/arm26/Kconfig typos
      Documentation/IPMI typos
      Kconfig: Typos in net/sched/Kconfig
      v9fs: do not include linux/version.h
      Documentation/DocBook/mtdnand.tmpl: typo fixes
      typo fixes: specfic -> specific
      typo fixes in Documentation/networking/pktgen.txt
      typo fixes: occuring -> occurring
      typo fixes: infomation -> information
      typo fixes: disadvantadge -> disadvantage
      typo fixes: aquire -> acquire
      typo fixes: mecanism -> mechanism
      typo fixes: bandwith -> bandwidth
      fix a typo in the RTC_CLASS help text
      smb is no longer maintained
  Manually merged trivial conflict in arch/um/kernel/vmlinux.lds.S
* Remove obsolete #include <linux/config.h> (Jörn Engel, 2006-06-30; 1 file, -1/+0)
  Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [PATCH] Light weight event counters (Christoph Lameter, 2006-06-30; 1 file, -2/+2)
  The remaining counters in page_state after the zoned VM counter patches
  have been applied are all just for show in /proc/vmstat. They have no
  essential function for the VM.

  We use a simple increment of per cpu variables. In order to avoid the
  most severe races we disable preempt. Preempt does not prevent the race
  between an increment and an interrupt handler incrementing the same
  statistics counter. However, that race is exceedingly rare; we may only
  lose one increment or so, and there is no requirement (at least not in
  the kernel) that the vm event counters have to be accurate.

  In the non-preempt case this results in a simple increment for each
  counter. For many architectures this will be reduced by the compiler to
  a single instruction. This single instruction is atomic for i386 and
  x86_64, and therefore even the rare race condition in an interrupt is
  avoided for both architectures in most cases.

  The patchset also adds an off switch for embedded systems that allows
  building Linux kernels without these counters.

  The implementation of these counters is through inline code that
  hopefully results in only a single increment instruction being emitted
  (i386, x86_64) or in the increment being hidden through instruction
  concurrency (EPIC architectures such as ia64 can get that done).

  Benefits:
  - VM event counter operations usually reduce to a single inline
    instruction on i386 and x86_64.
  - No interrupt disable, only preempt disable for the preempt case.
    Preempt disable can also be avoided by moving the counter into a
    spinlock.
  - Handling is similar to zoned VM counters.
  - Simple and easily extendable.
  - Can be omitted to reduce memory use for embedded use.

  References:
  RFC     http://marc.theaimsgroup.com/?l=linux-kernel&m=113512330605497&w=2
  RFC     http://marc.theaimsgroup.com/?l=linux-kernel&m=114988082814934&w=2
  local_t http://marc.theaimsgroup.com/?l=linux-kernel&m=114991748606690&w=2
  V2      http://marc.theaimsgroup.com/?t=115014808400007&r=1&w=2
  V3      http://marc.theaimsgroup.com/?l=linux-kernel&m=115024767022346&w=2
  V4      http://marc.theaimsgroup.com/?l=linux-kernel&m=115047968808926&w=2

  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
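  A minimal sketch of the increment described above; the identifiers
  follow the vm-event counters this series introduces, but treat them as
  assumptions:

      /* per-cpu event counter bumped under preempt protection only */
      static inline void count_vm_event(enum vm_event_item item)
      {
              preempt_disable();
              __get_cpu_var(vm_event_states).event[item]++;
              preempt_enable();
      }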
* [PATCH] cpu hotplug: use hotplug version of cpu notifier in appropriate places (Chandra Seetharaman, 2006-06-27; 1 file, -3/+1)
  Make use of the newly defined hotplug version of the cpu_notifier
  functionality wherever appropriate.
  Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
  Cc: Ashok Raj <ashok.raj@intel.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
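  In ll_rw_blk.c this plausibly boils down to swapping the registration
  helper; a hedged before/after:

      /* before: plain registration, needing CONFIG_HOTPLUG_CPU guards */
      register_cpu_notifier(&blk_cpu_notifier);

      /* after: hotplug-aware helper that compiles away when unused */
      register_hotcpu_notifier(&blk_cpu_notifier);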
* [PATCH] cpu hotplug: revert initdata patch submitted for 2.6.17 (Chandra Seetharaman, 2006-06-27; 1 file, -1/+1)
  This patch reverts the notifier_block changes made in 2.6.17.
  Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
  Cc: Ashok Raj <ashok.raj@intel.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* spelling fixes (Andreas Mohr, 2006-06-26; 1 file, -2/+2)
  acquired (aquired), contiguous (contigious), successful (succesful,
  succesfull), surprise (suprise), whether (weather), and some other
  misspellings.
  Signed-off-by: Andreas Mohr <andi@lisas.de>
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [BLOCK] Fix bounce limit address check (Andi Kleen, 2006-06-23; 1 file, -1/+1)
  Do a safer check for when to enable DMA. Currently we enable ISA DMA
  for cases that do not need it, resulting in OOM conditions when
  ZONE_DMA runs out of space.
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] Kill PF_SYNCWRITE flag (Jens Axboe, 2006-06-23; 1 file, -0/+3)
  A process flag to indicate whether we are doing sync io is incredibly
  ugly. It also causes performance problems when one does a lot of async
  io and then proceeds to sync it. Part of the io will go out as async,
  and the other part as sync. This causes a disconnect between the
  previously submitted io and the synced io. For io schedulers such as
  CFQ, this will cause lost merges and suboptimal behaviour in
  scheduling.

  Remove PF_SYNCWRITE completely from the fsync/msync paths, and let the
  O_DIRECT path just directly indicate that the writes are sync by using
  WRITE_SYNC instead.
  Signed-off-by: Jens Axboe <axboe@suse.de>
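  A hedged illustration of the replacement: the sync hint rides on the io
  itself rather than on the submitting task. The macro shape is an
  assumption to verify against include/linux/fs.h of this era:

      /* assumed shape: a normal write plus a per-bio synchronous hint */
      #define WRITE_SYNC      (WRITE | (1 << BIO_RW_SYNC))

      /* O_DIRECT write path tags its own io as sync */
      submit_bio(WRITE_SYNC, bio);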
* [PATCH] blk_start_queue() must be called with irq disabled - add warning (Paolo 'Blaisorblade' Giarrusso, 2006-06-23; 1 file, -1/+4)
  The queue lock can be taken from interrupts, so it must always be taken
  with irq-disabling primitives. Some primitives already verify this.
  blk_start_queue() is called under this lock, so interrupts must be
  disabled. Also document this requirement clearly in blk_init_queue(),
  where the queue spinlock is set.
  Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Jens Axboe <axboe@suse.de>
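  The warning itself is presumably a one-liner at the top of
  blk_start_queue(); a minimal sketch:

      void blk_start_queue(request_queue_t *q)
      {
              /* caller must hold q->queue_lock with interrupts off */
              WARN_ON(!irqs_disabled());
              clear_bit(QUEUE_FLAG_STOPPED, &q->queue_flags);
              /* ... restart logic elided ... */
      }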
* [PATCH] list: use list_replace_init() instead of list_splice_init() (Oleg Nesterov, 2006-06-23; 1 file, -3/+2)
  list_splice_init(list, head) does unneeded work if it is known that
  list_empty(head) == 1. We can use list_replace_init() instead.
  Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
  Acked-by: David S. Miller <davem@davemloft.net>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
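  The pattern, sketched with generic names (the destination is known to
  be empty, e.g. a freshly declared local list):

      LIST_HEAD(local);

      /* before: splice src onto the known-empty local list */
      list_splice_init(&src, &local);

      /* after: local simply takes over src's nodes; src is reinitialized */
      list_replace_init(&src, &local);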
* [PATCH] blk: fix gendisk->in_flight accounting during barrier sequence (Jens Axboe, 2006-05-23; 1 file, -1/+6)
  While executing the barrier sequence, the bar_rq which carries the
  actual write was accounted as normal IO on completion, while it wasn't
  on queueing. This caused gendisk->in_flight to be decremented by 1
  after each barrier, thus messing up statistics.

  This patch stops accounting bar_rq as normal IO. As the containing
  barrier request as a whole is accounted, part of it shouldn't be.
  Signed-off-by: Tejun Heo <htejun@gmail.com>
  Signed-off-by: Jens Axboe <axboe@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [BLOCK] limit request_fn recursion (Jens Axboe, 2006-05-11; 1 file, -2/+15)
  Don't recurse back into the driver even if the unplug threshold is met,
  when the driver asks for a requeue. This is both silly from a logical
  point of view (requeues typically happen due to driver/hardware
  shortage), and also dangerous since we could hit an endless request_fn
  -> requeue -> unplug -> request_fn loop and crash on stack overrun.

  Also limit blk_run_queue() to one level of recursion, similar to how
  blk_start_queue() works.

  This patch fixed a real problem with SLES10 and lpfc, and it could hit
  any SCSI lld that returns non-zero from its ->queuecommand() handler.
  Signed-off-by: Jens Axboe <axboe@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
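  One plausible shape of the blk_run_queue() guard, using a reentry flag;
  the flag name and the deferral path are assumptions:

      void blk_run_queue(request_queue_t *q)
      {
              unsigned long flags;

              spin_lock_irqsave(q->queue_lock, flags);
              blk_remove_plug(q);
              if (!elv_queue_empty(q)) {
                      if (!test_and_set_bit(QUEUE_FLAG_REENTER,
                                            &q->queue_flags)) {
                              q->request_fn(q);       /* one level only */
                              clear_bit(QUEUE_FLAG_REENTER, &q->queue_flags);
                      } else {
                              /* already inside request_fn: defer, don't recurse */
                              blk_plug_device(q);
                              kblockd_schedule_work(&q->unplug_work);
                      }
              }
              spin_unlock_irqrestore(q->queue_lock, flags);
      }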
* [PATCH] Remove __devinitdata from notifier block definitions (Chandra Seetharaman, 2006-04-26; 1 file, -1/+1)
  A few of the notifier_chain_register() callers use __devinitdata in the
  definition of the notifier_block data structure. This is incorrect, as
  the data structure should be available after initialization (they do
  not unregister the blocks during initialization). It was leading to an
  oops when those callback chains were invoked after initialization.
  This patch fixes all such usages to _not_ place the notifier_block
  data structure in the init data section.
  Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [patch] cleanup: use blk_queue_stopped (Coywolf Qi Hunt, 2006-04-20; 1 file, -2/+2)
  This cleans up the source to use blk_queue_stopped().
  Signed-off-by: Coywolf Qi Hunt <qiyong@freeforge.net>
  Signed-off-by: Jens Axboe <axboe@suse.de>
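  Presumably the usual open-coded flag test becomes the helper; a sketch:

      /* before: open-coded queue-flag test */
      if (test_bit(QUEUE_FLAG_STOPPED, &q->queue_flags))
              return;

      /* after: the helper says the same thing more readably */
      if (blk_queue_stopped(q))
              return;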
* Documentation: fix minor kernel-doc warnings (Martin Waitz, 2006-04-02; 1 file, -1/+1)
  This patch updates the comments to match the actual code.
  Signed-off-by: Martin Waitz <tali@admingilde.org>
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
* Merge branch 'cfq-merge' of git://brick.kernel.dk/data/git/linux-2.6-block (Linus Torvalds, 2006-03-28; 1 file, -6/+15)
  * 'cfq-merge' of git://brick.kernel.dk/data/git/linux-2.6-block:
      [BLOCK] cfq-iosched: seek and async performance fixes
      [PATCH] ll_rw_blk: fix 80-col offender in put_io_context()
      [PATCH] cfq-iosched: small cfq_choose_req() optimization
      [PATCH] [BLOCK] cfq-iosched: change cfq io context linking from list to tree
* [PATCH] ll_rw_blk: fix 80-col offender in put_io_context() (Jens Axboe, 2006-03-28; 1 file, -1/+3)
  This makes akpm happier.
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] [BLOCK] cfq-iosched: change cfq io context linking from list to tree (Jens Axboe, 2006-03-28; 1 file, -6/+13)
  On setups with many disks, we spend a considerable amount of time
  looking up the process-disk mapping for each queued io. Testing with a
  NULL-based block driver, this cost a 40-50% reduction in throughput
  for 1000 disks.
  Signed-off-by: Jens Axboe <axboe@suse.de>
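  Moving from a linked list to an rbtree makes the per-disk lookup
  O(log n); a hedged sketch of such a lookup, with the field names
  (cic_root, cic->key) treated as assumptions:

      struct rb_node *n = ioc->cic_root.rb_node;

      while (n) {
              struct cfq_io_context *cic;

              cic = rb_entry(n, struct cfq_io_context, rb_node);
              if (key < cic->key)     /* key: per-disk cfq_data pointer */
                      n = n->rb_left;
              else if (key > cic->key)
                      n = n->rb_right;
              else
                      return cic;     /* context for this process/disk pair */
      }
      return NULL;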
* [PATCH] for_each_possible_cpu: fixes for generic part (KAMEZAWA Hiroyuki, 2006-03-28; 1 file, -1/+1)
  Replaces for_each_cpu() with for_each_possible_cpu().
  Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* Merge branch 'for-linus' of git://brick.kernel.dk/data/git/linux-2.6-block (Linus Torvalds, 2006-03-27; 1 file, -7/+7)
  * 'for-linus' of git://brick.kernel.dk/data/git/linux-2.6-block:
      [PATCH] Don't make debugfs depend on DEBUG_KERNEL
      [PATCH] Fix blktrace compile with sysfs not defined
      [PATCH] unused label in drivers/block/cciss.
      [BLOCK] increase size of disk stat counters
      [PATCH] blk_execute_rq_nowait-speedup
      [PATCH] ide-cd: quiet down GPCMD_READ_CDVD_CAPACITY failure
      [BLOCK] ll_rw_blk: kmalloc -> kzalloc conversion
      [PATCH] kzalloc() conversion in drivers/block
      [PATCH] update max_sectors documentation
* [PATCH] blk_execute_rq_nowait-speedup (Andrew Morton, 2006-03-27; 1 file, -3/+5)
  Both elv_add_request() and generic_unplug_device() grab the queue lock
  and disable interrupts; do that locally and use the __ variants.
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Jens Axboe <axboe@suse.de>
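  A hedged sketch of the resulting shape: one lock round-trip covers both
  operations:

      /* before: each helper took the lock and disabled irqs on its own */
      elv_add_request(q, rq, where, 1);
      generic_unplug_device(q);

      /* after: take the lock once, call the lock-held __ variants */
      spin_lock_irq(q->queue_lock);
      __elv_add_request(q, rq, where, 1);
      __generic_unplug_device(q);
      spin_unlock_irq(q->queue_lock);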
* [BLOCK] ll_rw_blk: kmalloc -> kzalloc conversion (Jens Axboe, 2006-03-27; 1 file, -4/+2)
  Signed-off-by: Jens Axboe <axboe@suse.de>
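  The conversion pattern, for reference (a generic sketch rather than the
  exact hunk):

      /* before: allocate, then zero by hand */
      ptr = kmalloc(size, GFP_KERNEL);
      if (ptr)
              memset(ptr, 0, size);

      /* after: one call does both */
      ptr = kzalloc(size, GFP_KERNEL);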
* [PATCH] md: Make sure QUEUE_FLAG_CLUSTER is set properly for md. (NeilBrown, 2006-03-27; 1 file, -0/+2)
  This flag should be set for a virtual device iff it is set for all
  underlying devices.
  Signed-off-by: Neil Brown <neilb@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] Block queue IO tracing support (blktrace) as of 2006-03-23 (Jens Axboe, 2006-03-23; 1 file, -2/+42)
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] fix sysfs interaction and lifetime rules handling for queues (Al Viro, 2006-03-18; 1 file, -25/+58)
* [PATCH] deal with rmmod/put_io_context() races (Al Viro, 2006-03-18; 1 file, -0/+2)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [PATCH] fix locking in queue_requests_store() (Al Viro, 2006-03-18; 1 file, -3/+7)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [PATCH] fix double-free in blk_init_queue_node() (Al Viro, 2006-03-18; 1 file, -5/+5)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [PATCH] block: disable block layer bouncing for most memory on 64bit systems (Andi Kleen, 2006-03-08; 1 file, -14/+19)
  The low level PCI DMA mapping functions should handle it in most cases.
  This should fix problems with depleting the DMA zone early. The old
  code used precious GFP_DMA memory in many cases where it was not
  needed.
  Signed-off-by: Andi Kleen <ak@suse.de>
  Cc: Jens Axboe <axboe@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] block: implement elv_insert and use it (fix ordcolor flipping bug) (Tejun Heo, 2006-02-08; 1 file, -2/+2)
  q->ordcolor must only be flipped on initial queueing of a hardbarrier
  request. Constructing the ordered sequence and requeueing used to pass
  through __elv_add_request(), which flips q->ordcolor when it sees a
  barrier request. This patch separates out elv_insert() from
  __elv_add_request() and uses elv_insert() when constructing the
  ordered sequence and when requeueing. elv_insert() inserts the given
  request at the specified position and does nothing else.
  Signed-off-by: Tejun Heo <htejun@gmail.com>
  Acked-by: Jens Axboe <axboe@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
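  The split, sketched: elv_insert() is pure mechanism, while policy such
  as the ordcolor flip stays in __elv_add_request() (details assumed):

      /* mechanism only: put rq at the given position, nothing else */
      elv_insert(q, rq, ELEVATOR_INSERT_REQUEUE);

      /* policy remains in __elv_add_request(), roughly: */
      if (blk_barrier_rq(rq))
              q->ordcolor ^= 1;       /* flip only on initial queueing */
      elv_insert(q, rq, where);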
* [PATCH] fix ordering on requeued request drainage (Jens Axboe, 2006-02-05; 1 file, -22/+16)
  Previously, if a fs request which was being drained failed and got
  requeued, blk_do_ordered() didn't allow it to be reissued, which caused
  a queue stall. This patch makes blk_do_ordered() use the sequence of
  each request to determine whether a request can be issued or not. This
  fixes the bug and simplifies the code.
  Signed-off-by: Tejun Heo <htejun@gmail.com>
  Acked-by: Jens Axboe <axboe@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] percpu data: only iterate over possible CPUs (Eric Dumazet, 2006-02-05; 1 file, -1/+1)
  percpu_data blindly allocates bootmem memory to store NR_CPUS instances
  of cpudata, instead of allocating memory only for possible cpus.

  As a preparation for changing that, we need to convert various
  0 -> NR_CPUS loops to use for_each_cpu().

  (The above only applies to users of asm-generic/percpu.h. powerpc has
  gone it alone and is presently only allocating memory for present CPUs,
  so it's currently corrupting memory.)
  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: James Bottomley <James.Bottomley@steeleye.com>
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Cc: Jens Axboe <axboe@suse.de>
  Cc: Anton Blanchard <anton@samba.org>
  Acked-by: William Irwin <wli@holomorphy.com>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
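  In ll_rw_blk.c this plausibly touches the per-cpu completion-list
  setup; a hedged before/after (at this point for_each_cpu() iterated
  the possible map; it was renamed for_each_possible_cpu() later, per
  the 2006-03-28 entry above):

      /* before: walks cpu ids that may never exist */
      for (i = 0; i < NR_CPUS; i++)
              INIT_LIST_HEAD(&per_cpu(blk_cpu_done, i));

      /* after: only cpus that can ever come online */
      for_each_cpu(i)
              INIT_LIST_HEAD(&per_cpu(blk_cpu_done, i));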
* [PATCH] device-mapper disk statistics: timing (Jun'ichi "Nick" Nomura, 2006-02-01; 1 file, -0/+2)
  Record I/O timing statistics. The start time is added to struct dm_io,
  an existing structure allocated privately internally within dm and
  attached to each incoming bio. We export disk_round_stats() from
  block/ll_rw_blk.c instead of creating a private clone.
  Signed-off-by: Jun'ichi "Nick" Nomura <j-nomura@ce.jp.nec.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [BLOCK] A few kerneldoc fixups (Jens Axboe, 2006-01-31; 1 file, -0/+2)
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [BLOCK] ll_rw_blk: fix setting of ->ordered on init (Tetsuo Takata, 2006-01-24; 1 file, -0/+1)
  This makes XFS barrier mounts succeed on my SCSI system.
  Signed-off-by: Tetsuo Takata <takatatt@intellilink.co.jp>
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [BLOCK] ll_rw_blk: use preempt-disabling disk_stat_add() in completion (Jens Axboe, 2006-01-24; 1 file, -1/+1)
  It can legally be called with interrupts/preemption enabled.
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [BLOCK] ll_rw_blk: make max_sectors and max_hw_sectors unsigned ints (Jens Axboe, 2006-01-24; 1 file, -1/+1)
  IDE lba48 can support a full 64k request size, which overflows the
  max_hw_sectors variable.
  Signed-off-by: Jens Axboe <axboe@suse.de>
* Merge branch 'blk-softirq' of git://brick.kernel.dk/data/git/linux-2.6-block (Linus Torvalds, 2006-01-09; 1 file, -1/+105)
  Manual merge for trivial #include changes.
* [BLOCK] ll_rw_blk: Enable out-of-order request completions through softirq (Jens Axboe, 2006-01-09; 1 file, -1/+105)
  Request completion can be a quite heavy process, since it needs to
  iterate through the entire request and complete the bios it holds.
  This patch adds blk_complete_request(), which moves this processing
  into a dedicated block softirq.
  Signed-off-by: Jens Axboe <axboe@suse.de>
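  A hedged sketch of the queueing side of such a completion softirq (the
  per-cpu list name and softirq constant are assumptions):

      /* park the finished request on this cpu's done list, kick softirq */
      void blk_complete_request(struct request *req)
      {
              unsigned long flags;

              local_irq_save(flags);
              list_add_tail(&req->donelist, &__get_cpu_var(blk_cpu_done));
              raise_softirq_irqoff(BLOCK_SOFTIRQ);
              local_irq_restore(flags);
      }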
* [BLOCK] Kill blk_attempt_remerge() (Jens Axboe, 2006-01-09; 1 file, -24/+0)
  It's a broken interface; it's done way too late. And apparently it
  triggers slab problems in recent kernels as well (most likely after the
  generic dispatch code was merged). So kill it; ide-cd is the only user
  of it.
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [BLOCK][TRIVIAL] ll_rw_blk: header included twice (Nicolas Kaiser, 2006-01-09; 1 file, -1/+0)
  linux/blkdev.h was included twice.
  Signed-off-by: Nicolas Kaiser <nikai@nikai.net>
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [BLOCK] reimplement handling of barrier request (Tejun Heo, 2006-01-06; 1 file, -141/+243)
  Reimplement handling of barrier requests:
  * Flexible handling to deal with various capabilities of target
    devices.
  * Retry support for falling back.
  * Tagged queues which don't support ordered tags can still do ordered
    sequences.
  Signed-off-by: Tejun Heo <htejun@gmail.com>
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [BLOCK] ll_rw_blk: separate out bio init part from __make_request (Tejun Heo, 2006-01-06; 1 file, -29/+33)
  Separate out the bio initialization part from __make_request. It will
  be used by the following blk_ordered reimplementation.
  Signed-off-by: Tejun Heo <htejun@gmail.com>
  Signed-off-by: Jens Axboe <axboe@suse.de>
* [BLOCK] add @uptodate to end_that_request_last() and @error to rq_end_io_fn() (Tejun Heo, 2006-01-06; 1 file, -7/+15)
  There is no generic way to pass an error code to the request
  completion function, making generic error handling of non-fs requests
  difficult (rq->errors is driver-specific and each driver uses it
  differently). This patch adds @uptodate to end_that_request_last() and
  @error to rq_end_io_fn().

  For fs requests, this doesn't really matter, so just using the same
  uptodate argument used in the last call to end_that_request_first()
  should suffice. IMHO, this can also help the generic command-carrying
  request Jens is working on.
  Signed-off-by: Tejun Heo <htejun@gmail.com>
  Signed-off-by: Jens Axboe <axboe@suse.de>
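  The resulting signatures as the subject describes them (sketched;
  verify against the headers of this era):

      /* completion now carries an explicit status argument */
      void end_that_request_last(struct request *req, int uptodate);

      /* request end_io callbacks receive the error code directly */
      typedef void (rq_end_io_fn)(struct request *rq, int error);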