author | FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> | 2008-04-11 12:56:51 +0200
committer | Jens Axboe <jens.axboe@oracle.com> | 2008-04-21 09:50:08 +0200
commit | afdc1a780ef84a54b613dae6f971407748aab61c
tree | 67f120001c9a0a3f95fbb3ee2dc545f0886440c0 /block
parent | c5dec1c3034f1ae3503efbf641ff3b0273b64797
block: add bio_copy_user_iov support to blk_rq_map_user_iov
With this patch, blk_rq_map_user_iov uses bio_copy_user_iov when a low-level
driver needs padding or a buffer in sg_iovec isn't aligned. That is, it uses
temporary kernel buffers instead of mapping user pages directly.
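In outline, the new selection works like the hedged sketch below: each iovec
base address is tested against queue_dma_alignment(q), and the total length
against q->dma_pad_mask; if either check fails, the copy path is taken. The
helper name blk_rq_iov_needs_copy is hypothetical (the patch performs the
check inline, as the diff further down shows).

```c
#include <linux/blkdev.h>
#include <scsi/sg.h>	/* for struct sg_iovec */

/*
 * Hypothetical helper paraphrasing the check this patch adds inline in
 * blk_rq_map_user_iov(): nonzero means the copy (bounce-buffer) path
 * must be used instead of mapping user pages directly.
 */
static int blk_rq_iov_needs_copy(struct request_queue *q,
				 struct sg_iovec *iov, int iov_count,
				 unsigned int len)
{
	int i;

	for (i = 0; i < iov_count; i++) {
		unsigned long uaddr = (unsigned long)iov[i].iov_base;

		/* user buffer violates the queue's DMA alignment */
		if (uaddr & queue_dma_alignment(q))
			return 1;
	}

	/* the LLD's padding requirement would kick in for this length */
	return (q->dma_pad_mask & len) != 0;
}
```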
When an LLD needs padding, blk_rq_map_sg later needs to extend the last entry
of the scatterlist. bio_copy_user_iov guarantees that there is enough room for
that padding because it uses temporary kernel buffers instead of user pages.
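For illustration only, the padding adjustment amounts to rounding the transfer
length up to the next (q->dma_pad_mask + 1) boundary and growing the last
scatterlist entry by the difference. The sketch below is a simplified
paraphrase of that arithmetic, not the kernel's actual blk_rq_map_sg code; it
is only safe when the copy path has allocated kernel buffers with room for the
extra bytes.

```c
#include <linux/blkdev.h>
#include <linux/scatterlist.h>

/*
 * Simplified sketch (not the actual blk_rq_map_sg() implementation) of
 * padding the last scatterlist entry up to the LLD's dma_pad_mask
 * boundary.  With user pages mapped directly there is no guarantee the
 * extra pad_len bytes exist; with bio_copy_user_iov's kernel buffers
 * there is.
 */
static void pad_last_sg_entry(struct request_queue *q,
			      struct scatterlist *last_sg,
			      unsigned int data_len)
{
	if (data_len & q->dma_pad_mask) {
		unsigned int pad_len = (q->dma_pad_mask & ~data_len) + 1;

		last_sg->length += pad_len;
	}
}
```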
blk_rq_map_user_iov requires the buffers in sg_iovec to be aligned. The
comment in blk_rq_map_user_iov suggests that drivers/scsi/sg.c also requires
aligned buffers in sg_iovec; in fact, drivers/scsi/sg.c handles unaligned
buffers just fine, because it always uses temporary kernel buffers.
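As a hedged user-space illustration (not taken from drivers/scsi/sg.c), the
fragment below builds an sg_iovec whose base address is deliberately
misaligned; before this patch such a vector was rejected by the direct-mapping
path, whereas the copy path bounces it through aligned kernel buffers.

```c
#include <scsi/sg.h>	/* struct sg_iovec */
#include <stddef.h>

/*
 * Hypothetical user-space fragment: an iovec that is deliberately not
 * aligned to the queue's DMA alignment (off by one byte).  The copy
 * path added by this patch handles such buffers by bouncing the data
 * through temporary kernel buffers.
 */
static void fill_unaligned_iov(struct sg_iovec *iov, char *buf, size_t len)
{
	iov->iov_base = buf + 1;	/* misaligned on purpose */
	iov->iov_len  = len - 1;
}
```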
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Diffstat (limited to 'block')
-rw-r--r-- | block/blk-map.c | 22 |
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/block/blk-map.c b/block/blk-map.c
index c07d9c8..ab43533 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -5,6 +5,7 @@
 #include <linux/module.h>
 #include <linux/bio.h>
 #include <linux/blkdev.h>
+#include <scsi/sg.h>		/* for struct sg_iovec */
 
 #include "blk.h"
 
@@ -194,15 +195,26 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
 			struct sg_iovec *iov, int iov_count, unsigned int len)
 {
 	struct bio *bio;
+	int i, read = rq_data_dir(rq) == READ;
+	int unaligned = 0;
 
 	if (!iov || iov_count <= 0)
 		return -EINVAL;
 
-	/* we don't allow misaligned data like bio_map_user() does. If the
-	 * user is using sg, they're expected to know the alignment constraints
-	 * and respect them accordingly */
-	bio = bio_map_user_iov(q, NULL, iov, iov_count,
-				rq_data_dir(rq) == READ);
+	for (i = 0; i < iov_count; i++) {
+		unsigned long uaddr = (unsigned long)iov[i].iov_base;
+
+		if (uaddr & queue_dma_alignment(q)) {
+			unaligned = 1;
+			break;
+		}
+	}
+
+	if (unaligned || (q->dma_pad_mask & len))
+		bio = bio_copy_user_iov(q, iov, iov_count, read);
+	else
+		bio = bio_map_user_iov(q, NULL, iov, iov_count, read);
+
 	if (IS_ERR(bio))
 		return PTR_ERR(bio);