path: root/drivers/md
Commit message (Author, Age, Files, Lines -/+)
...
| * md: Simplify uuid_equal().  (Andre Noll, 2008-07-11, 1 file, -9/+4)
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * md: sb_equal(): Fix misleading printk.  (Andre Noll, 2008-07-08, 1 file, -1/+1)
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * md: Fix a typo in the comment to cmd_match().  (Andre Noll, 2008-07-08, 1 file, -1/+1)
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * md: Fix typo in array_state comment.  (Andre Noll, 2008-07-08, 1 file, -1/+1)
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * md: sync_speed_show(): Trivial cleanups.  (Andre Noll, 2008-07-08, 1 file, -4/+4)
      - Remove superfluous parentheses.
      - Make format string match the type of the variable that is printed.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * md: do_md_run(): Fix misleading error message.  (Andre Noll, 2008-07-08, 1 file, -2/+3)
      In case pers->run() succeeds but creating the bitmap fails, we print an
      error message stating that pers->run() has failed. Print this message
      only if pers->run() really failed.
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
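A minimal sketch of the corrected reporting order, assuming md.c-style naming (mddev, pers->run(), bitmap_create()); this illustrates the flow and is not the literal do_md_run() hunk:

static int run_personality_sketch(mddev_t *mddev)
{
	int err = mddev->pers->run(mddev);

	if (err) {
		/* only this path may claim that pers->run() failed */
		printk(KERN_ERR "md: pers->run() failed ...\n");
		return err;
	}

	err = bitmap_create(mddev);
	if (err)
		/* previously this fell through to the pers->run() message */
		printk(KERN_ERR "md: failed to create bitmap (%d)\n", err);
	return err;
}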
| * md: md_getgeo(): Move comment to proper position.  (Andre Noll, 2008-07-08, 1 file, -6/+6)
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * md: md_ioctl(): Fix misleading indentation.  (Andre Noll, 2008-07-08, 1 file, -3/+1)
      Signed-off-by: Andre Noll <maan@systemlinux.org>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * Merge branch 'for-neil' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/md into for-next  (Neil Brown, 2008-07-08, 3 files, -16/+29)
| | * md: resolve external metadata handling deadlock in md_allow_write  (Dan Williams, 2008-06-30, 3 files, -16/+29)
      md_allow_write() marks the metadata dirty while holding mddev->lock and
      then waits for the write to complete. For externally managed metadata
      this causes a deadlock, as userspace needs to take the lock to
      communicate that the metadata update has completed.
      Change md_allow_write() in the 'external' case to start the 'mark
      active' operation and then return -EAGAIN. The expected side effects
      while waiting for userspace to write 'active' to 'array_state' are:
      holding off reshape (the code currently handles -ENOMEM), causing some
      'stripe_cache_size' change requests to fail, causing some
      GET_BITMAP_FILE ioctl requests to fall back to GFP_NOIO, and causing
      updates to 'raid_disks' to fail. Except for 'stripe_cache_size'
      changes, these failures can be mitigated by coordinating with mdmon.
      md_write_start() still prevents writes from occurring until the
      metadata handler has had a chance to take action, as it unconditionally
      waits for MD_CHANGE_CLEAN to be cleared.
      [neilb@suse.de: return -EAGAIN, try GFP_NOIO]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
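A hedged sketch of the 'external' branch described above; the field and flag names loosely mirror md.c conventions, and the exact set of flags touched here is an assumption, not the literal patch:

static int md_allow_write_sketch(mddev_t *mddev)
{
	if (!mddev->external) {
		/* internal metadata: mark it dirty and wait, as before */
		md_update_sb(mddev, 0);
		return 0;
	}
	/* external metadata: request the 'mark active' transition, notify
	 * userspace, and let the caller retry (or fall back to GFP_NOIO)
	 * instead of sleeping while holding mddev->lock */
	set_bit(MD_CHANGE_CLEAN, &mddev->flags);
	sysfs_notify(&mddev->kobj, NULL, "array_state");
	return -EAGAIN;
}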
| * | Merge branch 'master' into for-next  (Neil Brown, 2008-07-08, 1 file, -0/+1)
| * | md: rationalize raid5 function names  (Dan Williams, 2008-06-28, 1 file, -36/+40)
      From: Dan Williams <dan.j.williams@intel.com>
      Commit a4456856 refactored some of the deep code paths in raid5.c into
      separate functions. The names chosen at the time do not consistently
      indicate what is going to happen to the stripe. So, update the names,
      and since a stripe is a cache element use cache semantics like fill,
      dirty, and clean. (also, fix up the indentation in fetch_block5)
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | md: handle operation chaining in raid5_run_ops  (Dan Williams, 2008-06-28, 1 file, -8/+7)
      From: Dan Williams <dan.j.williams@intel.com>
      Neil said:
      > At the end of ops_run_compute5 you have:
      >         /* ack now if postxor is not set to be run */
      >         if (tx && !test_bit(STRIPE_OP_POSTXOR, &s->ops_run))
      >                 async_tx_ack(tx);
      > It looks odd having that test there. Would it fit in raid5_run_ops
      > better?
      The intended global interpretation is that raid5_run_ops can build a
      chain of xor and memcpy operations. When MD registers the compute-xor
      it tells async_tx to keep the operation handle around so that another
      item in the dependency chain can be submitted. If we are just computing
      a block to satisfy a read then we can terminate the chain immediately.
      raid5_run_ops gives a better context for this test since it cares about
      the entire chain.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | md: replace R5_WantPrexor with R5_Wantdrain, add 'prexor' reconstruct_states  (Dan Williams, 2008-06-28, 1 file, -60/+29)
      From: Dan Williams <dan.j.williams@intel.com>
      Currently ops_run_biodrain and other locations have extra logic to
      determine which blocks are processed in the prexor and non-prexor
      cases. This can be eliminated if handle_write_operations5 flags the
      blocks to be processed in all cases via R5_Wantdrain. The presence of
      the prexor operation is tracked in sh->reconstruct_state.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | md: replace STRIPE_OP_{BIODRAIN,PREXOR,POSTXOR} with 'reconstruct_states'  (Dan Williams, 2008-06-28, 1 file, -142/+62)
      From: Dan Williams <dan.j.williams@intel.com>
      Track the state of reconstruct operations (recalculating the parity
      block, usually due to incoming writes or as part of array expansion).
      Reduces the scope of the STRIPE_OP_{BIODRAIN,PREXOR,POSTXOR} flags to
      only tracking whether a reconstruct operation has been requested via
      the ops_request field of struct stripe_head_state.
      This is the final step in the removal of
      ops.{pending,ack,complete,count}, i.e. the
      STRIPE_OP_{BIODRAIN,PREXOR,POSTXOR} flags only request an operation and
      do not track the state of the operation.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
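An illustrative sketch of the 'explicit state instead of flag interpolation' idea behind this series; the enumerator and struct names are invented for the example rather than taken from raid5.h:

/* illustrative names; raid5 keeps its own list of reconstruct states */
enum reconstruct_states_sketch {
	reconstruct_idle_sketch = 0,
	reconstruct_prexor_drain_sketch,	/* read-modify-write path requested */
	reconstruct_drain_sketch,		/* reconstruct-write path requested */
	reconstruct_result_sketch,		/* parity computation finished */
};

struct stripe_sketch {
	enum reconstruct_states_sketch reconstruct_state;
};

/* a request becomes one assignment instead of a combination of
 * STRIPE_OP_* pending/ack/complete bits that must be decoded later */
static void request_reconstruct_sketch(struct stripe_sketch *sh, int prexor)
{
	sh->reconstruct_state = prexor ? reconstruct_prexor_drain_sketch
				       : reconstruct_drain_sketch;
}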
| * | md: replace STRIPE_OP_COMPUTE_BLK with STRIPE_COMPUTE_RUN  (Dan Williams, 2008-06-28, 1 file, -47/+18)
      From: Dan Williams <dan.j.williams@intel.com>
      Track the state of compute operations (recalculating a block from all
      the other blocks in a stripe) with a state flag. Reduces the scope of
      the STRIPE_OP_COMPUTE_BLK flag to only tracking whether a compute
      operation has been requested via the ops_request field of struct
      stripe_head_state.
      Note, the compute operation that is performed in the course of doing a
      'repair' operation (check the parity block, recalculate it and write it
      back if the check result is not zero) is tracked separately with the
      'check_state' variable.
      Compute operations are held off while a 'check' is in progress, and by
      moving this check out to handle_issuing_new_read_requests5 the helper
      routine __handle_issuing_new_read_requests5 can be simplified.
      This is another step towards the removal of
      ops.{pending,ack,complete,count}, i.e. STRIPE_OP_COMPUTE_BLK only
      requests an operation and does not track the state of the operation.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | md: replace STRIPE_OP_BIOFILL with STRIPE_BIOFILL_RUN  (Dan Williams, 2008-06-28, 1 file, -21/+12)
      From: Dan Williams <dan.j.williams@intel.com>
      Track the state of read operations (copying data from the stripe cache
      to bio buffers outside the lock) with a state flag. Reduce the scope of
      the STRIPE_OP_BIOFILL flag to only tracking whether a biofill operation
      has been requested via the ops_request field of struct
      stripe_head_state.
      This is another step towards the removal of
      ops.{pending,ack,complete,count}, i.e. STRIPE_OP_BIOFILL only requests
      an operation and does not track the state of the operation.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | md: replace STRIPE_OP_CHECK with 'check_states'  (Dan Williams, 2008-06-28, 1 file, -89/+83)
      From: Dan Williams <dan.j.williams@intel.com>
      The STRIPE_OP_* flags record the state of stripe operations which are
      performed outside the stripe lock. Their use in indicating which
      operations need to be run is straightforward; however, interpolating
      what the next state of the stripe should be based on a given
      combination of these flags is not straightforward, and has led to bugs.
      An easier to read implementation with minimal degrees of freedom is
      needed. Towards this goal, this patch introduces explicit states to
      replace what was previously interpolated from the STRIPE_OP_* flags.
      For now this only converts the handle_parity_checks5 path, removing a
      user of the ops.{pending,ack,complete,count} fields of struct
      stripe_operations.
      This conversion also found a remaining issue with the current code.
      There is a small window for a drive to fail between when we schedule a
      repair and when the parity calculation for that repair completes. When
      this happens we will writeback to 'failed_num' when we really want to
      write back to 'pd_idx'.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | md: unify raid5/6 i/o submission  (Dan Williams, 2008-06-28, 1 file, -61/+2)
      From: Dan Williams <dan.j.williams@intel.com>
      Let the raid6 path call ops_run_io to get pending i/o submitted.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | md: use stripe_head_state in ops_run_io()  (Dan Williams, 2008-06-28, 1 file, -6/+3)
      From: Dan Williams <dan.j.williams@intel.com>
      In handle_stripe, after taking sh->lock we sample some bits into 's'
      (struct stripe_head_state):
          s.syncing = test_bit(STRIPE_SYNCING, &sh->state);
          s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state);
          s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state);
      Use these values from 's' in ops_run_io() rather than re-sampling the
      bits. This ensures a consistent snapshot (as seen under sh->lock) is
      used.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
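A sketch of the 'sample once under the lock, pass the snapshot down' pattern; the three test_bit() lines are quoted from the message above, while the surrounding function and the ops_run_io() prototype are illustrative:

static void handle_stripe_sketch(struct stripe_head *sh)
{
	struct stripe_head_state s = { 0 };

	spin_lock(&sh->lock);
	s.syncing   = test_bit(STRIPE_SYNCING, &sh->state);
	s.expanding = test_bit(STRIPE_EXPAND_SOURCE, &sh->state);
	s.expanded  = test_bit(STRIPE_EXPAND_READY, &sh->state);
	spin_unlock(&sh->lock);

	/* ops_run_io() now reads s.syncing/s.expanding/s.expanded instead of
	 * calling test_bit() again and possibly seeing a newer state */
	ops_run_io(sh, &s);
}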
| * | md: kill STRIPE_OP_IO flag  (Dan Williams, 2008-06-28, 1 file, -27/+5)
      From: Dan Williams <dan.j.williams@intel.com>
      The R5_Want{Read,Write} flags already gate i/o. So, this flag is
      superfluous and we can unconditionally call ops_run_io().
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | md: kill STRIPE_OP_MOD_DMA in raid5 offload  (Dan Williams, 2008-06-28, 1 file, -10/+0)
      From: Dan Williams <dan.j.williams@intel.com>
      This micro-optimization allowed the raid code to skip a re-read of the
      parity block after checking parity. It took advantage of the fact that
      xor-offload-engines have their own internal result buffer and can check
      parity without writing to memory. Remove it for the following reasons:
      1/ It is a layering violation for MD to need to manage the DMA and
         non-DMA paths within async_xor_zero_sum
      2/ Bad precedent to toggle the 'ops' flags outside the lock
      3/ Hard to realize a performance gain as reads will not need an updated
         parity block and writes will dirty it anyways.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | Support changing rdev size on running arrays.  (Chris Webb, 2008-06-28, 1 file, -13/+87)
      From: Chris Webb <chris@arachsys.com>
      Allow /sys/block/mdX/md/rdY/size to change on running arrays, moving
      the superblock if necessary for this metadata version. We prevent the
      available space from shrinking to less than the used size, and allow it
      to be set to zero to fill all the available space on the underlying
      device.
      Signed-off-by: Chris Webb <chris@arachsys.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
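A small userspace sketch of using the attribute named above; the device names md0/rd1 are assumed examples, and writing "0" asks md to use all available space on the underlying device:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/block/md0/md/rd1/size";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror(path);
		return 1;
	}
	if (write(fd, "0", 1) != 1)	/* 0 = fill the underlying device */
		perror("write");
	close(fd);
	return 0;
}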
| * | Make sure all changes to md/dev-XX/state are notified  (Neil Brown, 2008-06-28, 1 file, -1/+13)
      The important state change happens during an interrupt in md_error. So
      just set a flag there and call sysfs_notify later in process context.
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | Make sure all changes to md/degraded are notified.  (Neil Brown, 2008-06-28, 1 file, -1/+5)
      When a device fails, when a spare is activated, when an array is
      reshaped, or when an array is started, the extent to which the array is
      degraded can change.
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | Make sure all changes to md/sync_action are notified.  (Neil Brown, 2008-06-28, 1 file, -6/+28)
      When the 'resync' thread starts or stops, when we explicitly set
      sync_action, or when we determine that there is definitely nothing to
      do, we notify sync_action.
      To stop "sync_action" from occasionally showing the wrong value, we
      introduce a new flag - MD_RECOVERY_RECOVER - to say that a recovery is
      probably needed or happening, and we make sure that we set
      MD_RECOVERY_RUNNING before clearing MD_RECOVERY_NEEDED.
      Signed-off-by: Neil Brown <neilb@suse.de>
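A sketch of the flag-ordering rule from the last paragraph; the flag names come from the text above, while the surrounding function is invented for illustration and is not the literal md.c code:

static void start_recovery_sketch(mddev_t *mddev)
{
	/* mark "running" first so a reader of sync_action never sees the
	 * window where neither RUNNING nor NEEDED is set */
	set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
	clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery);

	/* every transition that can change what sync_action reports is
	 * followed by a notification for pollers of the attribute */
	sysfs_notify(&mddev->kobj, NULL, "sync_action");
}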
| * | Make sure all changes to md/array_state are notified.  (Neil Brown, 2008-06-28, 1 file, -5/+24)
      Changes in md/array_state could be of interest to a monitoring program.
      So make sure all changes trigger a notification.
      Exceptions: changing active_idle to active is not reported because it
      is frequent and not interesting. Changing active to active_idle is only
      reported on arrays with externally managed metadata, as it is not
      interesting otherwise.
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | Don't reject HOT_REMOVE_DISK request for an array that is not yet started.  (Neil Brown, 2008-06-28, 1 file, -3/+0)
      There is really no need for this test here, and there are valid cases
      for selectively removing devices from an array that is not actually
      active.
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | rationalise return value for ->hot_add_disk method.  (Neil Brown, 2008-06-28, 5 files, -20/+21)
      For all array types but linear, ->hot_add_disk returns 1 on success, 0
      on failure. For linear, it returns 0 on success and -errno on failure.
      This doesn't cause a functional problem because the ->hot_add_disk
      function of linear is used quite differently to the others. However it
      is confusing.
      So convert all to return 0 for success or -errno on failure and fix
      call sites to match.
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | Support adding a spare to a live md array with external metadata.  (Neil Brown, 2008-06-28, 5 files, -9/+67)
      i.e. extend the 'md/dev-XXX/slot' attribute so that you can tell a
      device to fill a vacant slot in an md array.
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | Enable setting of 'offset' and 'size' of a hot-added spare.  (Neil Brown, 2008-06-28, 1 file, -2/+2)
      offset_store and rdev_size_store allow control of the region of a
      device which is to be used in an md/raid array. They only allow these
      values to be set when an array is being assembled, as changing them on
      an active array could be dangerous.
      However when adding a spare device to an array, we might need to set
      the offset and size before starting recovery. So allow these values to
      be set also if "->raid_disk < 0", which indicates that the device is
      still a spare.
      Signed-off-by: Neil Brown <neilb@suse.de>
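A hedged sketch of the relaxed guard (not the literal offset_store()/rdev_size_store() code): the write is refused only when the array is running and the device already occupies a slot.

static ssize_t offset_store_sketch(mdk_rdev_t *rdev, const char *buf, size_t len)
{
	char *e;
	unsigned long long offset = simple_strtoull(buf, &e, 10);

	if (e == buf || (*e && *e != '\n'))
		return -EINVAL;
	/* raid_disk < 0 means the device is still a spare, so the offset may
	 * still be changed even though mddev->pers is set */
	if (rdev->mddev->pers && rdev->raid_disk >= 0)
		return -EBUSY;
	rdev->data_offset = offset;
	return len;
}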
| * | Don't try to make md arrays dirty if that is not meaningful.  (Neil Brown, 2008-06-28, 1 file, -0/+2)
      Array personalities such as 'raid0' and 'linear' have no redundancy,
      and so marking them as 'clean' or 'dirty' is not meaningful. So always
      allow write requests without requiring a superblock update.
      Such array types are detected by ->sync_request being NULL. If it is
      not possible to send a sync request we don't need a 'dirty' flag,
      because all a dirty flag does is trigger some sync_requests.
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | Close race in md_probe  (Neil Brown, 2008-06-28, 1 file, -1/+1)
      There is a possible race in md_probe. If two threads call md_probe for
      the same device, then one could exit (having checked that ->gendisk
      exists) before the other has called kobject_init_and_add, thus
      returning an incomplete kobj which will cause problems when we try to
      add children to it.
      So extend the range of protection of disks_mutex slightly to avoid this
      possibility.
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | Allow setting start point for requested check/repair  (Neil Brown, 2008-06-28, 1 file, -5/+42)
      This makes it possible to just resync a small part of an array. e.g.
      if a drive reports that it has questionable sectors, a 'repair' of just
      the region covering those sectors will cause them to be read and, if
      there is an error, re-written with correct data.
      Signed-off-by: Neil Brown <neilb@suse.de>
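A userspace sketch of requesting a partial repair; it assumes the array is md0 and that the range is set through the sync_min/sync_max sysfs attributes (sector offsets), which is an assumption to check against Documentation/md.txt for your kernel:

#include <stdio.h>

static int write_attr(const char *attr, const char *val)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/md0/md/%s", attr);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%s\n", val);
	return fclose(f);
}

int main(void)
{
	write_attr("sync_min", "1000000");	/* assumed start sector of the region */
	write_attr("sync_max", "1008192");	/* assumed end sector of the region */
	return write_attr("sync_action", "repair");
}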
| * | Improve setting of "events_cleared" for write-intent bitmaps.  (Neil Brown, 2008-06-28, 1 file, -5/+24)
      When an array is degraded, bits in the write-intent bitmap are not
      cleared, so that if the missing device is re-added, it can be synced by
      only updating those parts of the device that have changed since it was
      removed.
      To enable this, an 'events_cleared' value is stored. It is the event
      counter for the array the last time that any bits were cleared.
      Sometimes - if a device disappears from an array while it is 'clean' -
      the events_cleared value gets updated incorrectly (there are subtle
      ordering issues between updating events in the main metadata and the
      bitmap metadata), resulting in the missing device appearing to require
      a full resync when it is re-added.
      With this patch, we update events_cleared precisely when we are about
      to clear a bit in the bitmap. We record events_cleared when we clear
      the bit internally, and copy that to the superblock which is written
      out before the bit on storage. This makes it more "obviously correct".
      We also need to update events_cleared when the event_count is going
      backwards (as happens on a dirty->clean transition of a non-degraded
      array).
      Thanks to Mike Snitzer for identifying this problem and testing early
      "fixes".
      Cc: "Mike Snitzer" <snitzer@gmail.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
| * | use bio_endio instead of a call to bi_end_io  (Neil Brown, 2008-06-28, 1 file, -12/+4)
      Turn calls to bi->bi_end_io() into bio_endio(). Apparently bio_endio
      does exactly the same error processing as is hardcoded at these places.
      bio_endio() avoids recursion (or will soon), so it should be used.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
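A before/after sketch of the conversion, assuming the bio_endio(bio, error) calling convention of that kernel era; the helper functions are illustrative, not the actual converted call sites:

static void complete_bio_old(struct bio *bi)
{
	/* open-coded completion: compute the error from BIO_UPTODATE by hand */
	bi->bi_end_io(bi, test_bit(BIO_UPTODATE, &bi->bi_flags) ? 0 : -EIO);
}

static void complete_bio_new(struct bio *bi)
{
	/* bio_endio() does the same BIO_UPTODATE bookkeeping itself and
	 * avoids (or will soon avoid) recursion */
	bio_endio(bi, 0);
}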
| * | linear: correct disk numbering error check  (Nikanth Karthikesan, 2008-06-28, 1 file, -1/+1)
      From: "Nikanth Karthikesan" <knikanth@novell.com>
      Correct disk numbering problem check.
      Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
      Signed-off-by: Neil Brown <neilb@suse.de>
* | | Merge git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6  (Linus Torvalds, 2008-07-15, 9 files, -1666/+96)
      * git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (102 commits)
        [SCSI] scsi_dh: fix kconfig related build errors
        [SCSI] sym53c8xx: Fix bogus sym_que_entry re-implementation of container_of
        [SCSI] scsi_cmnd.h: remove double inclusion of linux/blkdev.h
        [SCSI] make struct scsi_{host,target}_type static
        [SCSI] fix locking in host use of blk_plug_device()
        [SCSI] zfcp: Cleanup external header file
        [SCSI] zfcp: Cleanup code in zfcp_erp.c
        [SCSI] zfcp: zfcp_fsf cleanup.
        [SCSI] zfcp: consolidate sysfs things into one file.
        [SCSI] zfcp: Cleanup of code in zfcp_aux.c
        [SCSI] zfcp: Cleanup of code in zfcp_scsi.c
        [SCSI] zfcp: Move status accessors from zfcp to SCSI include file.
        [SCSI] zfcp: Small QDIO cleanups
        [SCSI] zfcp: Adapter reopen for large number of unsolicited status
        [SCSI] zfcp: Fix error checking for ELS ADISC requests
        [SCSI] zfcp: wait until adapter is finished with ERP during auto-port
        [SCSI] ibmvfc: IBM Power Virtual Fibre Channel Adapter Client Driver
        [SCSI] sg: Add target reset support
        [SCSI] lib: Add support for the T10 (SCSI) Data Integrity Field CRC
        [SCSI] sd: Move scsi_disk() accessor function to sd.h
        ...
| * | | [SCSI] scsi_dh: fix kconfig related build errors  (Chandra Seetharaman, 2008-07-15, 2 files, -1/+7)
      Do not automatically "select" SCSI_DH for dm-multipath. If SCSI_DH
      doesn't exist, just do not allow hardware handlers to be used.
      Handle SCSI_DH being a module also. Make sure it doesn't allow
      DM_MULTIPATH to be compiled in when SCSI_DH is a module.
      [jejb: added comment for Kconfig syntax]
      Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Reported-by: Randy Dunlap <randy.dunlap@oracle.com>
      Reported-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
| * | | [SCSI] scsi_dh: Remove hardware handler infrastructure from dm  (Chandra Seetharaman, 2008-06-05, 4 files, -278/+1)
      This patch just removes infrastructure that provided support for
      hardware handlers in the dm layer, as it is not needed anymore.
      Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Acked-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
| * | | [SCSI] scsi_dh: Remove hardware handlers from dm  (Chandra Seetharaman, 2008-06-05, 5 files, -1315/+0)
      This patch removes the 3 hardware handlers that currently exist under
      dm, as the functionality is moved to the SCSI layer in the earlier
      patches.
      [jejb: removed more makefile hunks and rejection fixes]
      Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Acked-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
| * | | [SCSI] scsi_dh: Remove dm_pg_init_complete  (Chandra Seetharaman, 2008-06-05, 1 file, -41/+0)
      This patch just removes the dm layer's path initialization completion
      routine. This is separated from the other patch (scsi_dh: Use SCSI
      device handler in dm-multipath) just to make that patch more readable.
      Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Acked-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
| * | | [SCSI] scsi_dh: Add a single threaded workqueue for initializing paths  (Chandra Seetharaman, 2008-06-05, 1 file, -7/+34)
      Before this patch set (SCSI hardware handlers), initialization of a
      path was done asynchronously. Doing that requires a workqueue in each
      device/hardware handler module and leads to unnecessary complication in
      the device handler code, making it difficult to read the code and
      follow the state diagram.
      Moving that workqueue to this level makes the device handler code
      simpler. Hence, the workqueue is moved to the dm level.
      A new workqueue is added instead of adding it to the existing workqueue
      (kmpathd) for the following reasons:
      1. Device activation has to happen faster; stacking it along with the
         other workqueue might lead to unnecessary delay in the activation of
         the path.
      2. The effect could be felt the other way too, i.e. the current events
         that are handled by the existing workqueue might get a delayed
         response.
      Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Acked-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
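A sketch of a dedicated single-threaded activation workqueue along the lines described above; the names kmpath_handlerd_sketch/activate_path_sketch are invented for the example and are not the real dm-mpath identifiers:

#include <linux/workqueue.h>

static struct workqueue_struct *kmpath_handlerd_sketch;

/* work item: ask the SCSI device handler to activate/initialize one path */
static void activate_path_sketch(struct work_struct *work)
{
	/* scsi_dh activation call would go here */
}

static DECLARE_WORK(activate_work_sketch, activate_path_sketch);

static int setup_activation_wq_sketch(void)
{
	/* one dedicated, single-threaded queue so path activation is not
	 * stacked behind the existing kmpathd work items */
	kmpath_handlerd_sketch = create_singlethread_workqueue("kmpath_handlerd");
	return kmpath_handlerd_sketch ? 0 : -ENOMEM;
}

static void request_activation_sketch(void)
{
	queue_work(kmpath_handlerd_sketch, &activate_work_sketch);
}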
| * | | [SCSI] scsi_dh: Use SCSI device handler in dm-multipath  (Chandra Seetharaman, 2008-06-05, 2 files, -51/+81)
      This patch converts dm-mpath to use scsi device handlers instead of
      dm's hardware handlers. This patch does not add any new functionality.
      Old behaviors remain and userspace tools work as is, except that
      arguments supplied with the hardware handler are ignored.
      One behavioral exception is: activation of a path is synchronous in
      this patch, as opposed to the older behavior of being asynchronous
      (changed in patch 07: scsi_dh: Add a single threaded workqueue for
      initializing a path).
      Note: There is no need to get a reference for the device handler module
      (as was done in the dm hardware handler case) here, as the reference is
      held when the device was first found. Instead we check and make sure
      that support for the specified device is present at table load time.
      Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
      Acked-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
* | | | Merge branch 'for-linus' of git://git.kernel.dk/linux-2.6-block  (Linus Torvalds, 2008-07-14, 4 files, -19/+26)
      * 'for-linus' of git://git.kernel.dk/linux-2.6-block: (37 commits)
        splice: fix generic_file_splice_read() race with page invalidation
        ramfs: enable splice write
        drivers/block/pktcdvd.c: avoid useless memset
        cdrom: revert commit 22a9189 (cdrom: use kmalloced buffers instead of buffers on stack)
        scsi: sr avoids useless buffer allocation
        block: blk_rq_map_kern uses the bounce buffers for stack buffers
        block: add blk_queue_update_dma_pad
        DAC960: push down BKL
        pktcdvd: push BKL down into driver
        paride: push ioctl down into driver
        block: use get_unaligned_* helpers
        block: extend queue_flag bitops
        block: request_module(): use format string
        Add bvec_merge_data to handle stacked devices and ->merge_bvec()
        block: integrity flags can't use bit ops on unsigned short
        cmdfilter: extend default read filter
        sg: fix odd style (extra parenthesis) introduced by cmd filter patch
        block: add bounce support to blk_rq_map_user_iov
        cfq-iosched: get rid of enable_idle being unused warning
        allow userspace to modify scsi command filter on per device basis
        ...
| * | | | Add bvec_merge_data to handle stacked devices and ->merge_bvec()  (Alasdair G Kergon, 2008-07-03, 4 files, -19/+26)
      When devices are stacked, one device's merge_bvec_fn may need to
      perform the mapping and then call one or more functions for its
      underlying devices. The following bio fields are used:
          bio->bi_sector
          bio->bi_bdev
          bio->bi_size
          bio->bi_rw  (using bio_data_dir())
      This patch creates a new struct bvec_merge_data holding a copy of those
      fields to avoid having to change them directly in the struct bio when
      going down the stack, only to have to change them back again on the way
      back up. (And then when the bio gets mapped for real, the whole
      exercise gets repeated, but that's a problem for another day...)
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Milan Broz <mbroz@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
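An illustrative sketch of the copy-and-pass-down idea; the struct mirrors the bio fields listed above but is not the exact <linux/blkdev.h> definition, and the stacked helper is invented for the example:

struct bvec_merge_data_sketch {
	struct block_device	*bi_bdev;
	sector_t		bi_sector;
	unsigned int		bi_size;
	unsigned long		bi_rw;
};

static int stacked_merge_bvec_sketch(struct bvec_merge_data_sketch *bvm,
				     struct bio_vec *biovec,
				     struct block_device *lower_bdev,
				     sector_t lower_sector,
				     int (*lower_fn)(struct bvec_merge_data_sketch *,
						     struct bio_vec *))
{
	/* remap a private copy, not the caller's bio, so nothing has to be
	 * restored on the way back up the stack */
	struct bvec_merge_data_sketch down = *bvm;

	down.bi_bdev = lower_bdev;
	down.bi_sector = lower_sector;
	return lower_fn(&down, biovec);
}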
* | | Merge branch 'for-2.6.26' of git://neil.brown.name/md  (Linus Torvalds, 2008-07-10, 1 file, -6/+1)
      * 'for-2.6.26' of git://neil.brown.name/md:
        md: ensure all blocks are uptodate or locked when syncing
| * | | md: ensure all blocks are uptodate or locked when syncing  (Dan Williams, 2008-07-10, 1 file, -6/+1)
      Remove the dubious attempt to prefer 'compute' over 'read'. Not only is
      it wrong given commit c337869d (md: do not compute parity unless it is
      on a failed drive), but it can trigger a BUG_ON in
      handle_parity_checks5().
      Cc: <stable@kernel.org>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
* | | Merge git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm  (Linus Torvalds, 2008-07-02, 1 file, -0/+1)
      * git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm:
        dm crypt: use cond_resched
| * | dm crypt: use cond_resched  (Milan Broz, 2008-07-02, 1 file, -0/+1)
      Add cond_resched() to prevent monopolising CPU when processing large
      bios.
      dm-crypt processes encryption of bios in sector units. If the bio
      request is big it can spend a long time in the encryption call.
      Signed-off-by: Milan Broz <mbroz@redhat.com>
      Tested-by: Yan Li <elliot.li.tech@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
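A sketch of the fix: yield between per-sector encryption steps in a long-running loop. cond_resched() is the real kernel helper; the surrounding loop and names are illustrative, not the literal dm-crypt code:

#include <linux/sched.h>

static void crypt_convert_sketch(unsigned int nr_sectors)
{
	unsigned int i;

	for (i = 0; i < nr_sectors; i++) {
		/* encrypt one 512-byte sector here ... */

		/* with very large bios this loop can run for a long time in
		 * kernel context, so give other tasks a chance to run */
		cond_resched();
	}
}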