author    | Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> | 2010-10-27 18:23:54 +0900
committer | Avi Kivity <avi@redhat.com> | 2011-01-12 11:28:46 +0200
commit    | 515a01279a187415322a80736800a7d6325876ab (patch)
tree      | 8690a1b26013cb385b9d143c83301bdab758dd48 /virt
parent    | a36a57b1a19bce17b67f5c6f43460baf664ae5fa (diff)
KVM: pre-allocate one more dirty bitmap to avoid vmalloc()
Currently, x86's kvm_vm_ioctl_get_dirty_log() needs to allocate a new bitmap with
vmalloc() for use in the next logging round, and this has been hurting VGA updates
and live migration: vmalloc() consumes extra system time, triggers TLB flushes, etc.
This patch resolves this issue by pre-allocating one more bitmap and switching
between two bitmaps during dirty logging.
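For reference, a minimal sketch of the half-switching idea on the consumer side; the
struct and function names below are illustrative, not the actual arch/x86 code, which
lives in kvm_vm_ioctl_get_dirty_log():

#include <linux/string.h>	/* memset() */

/* Illustrative only: the two halves of one 2x-sized vmalloc() area. */
struct slot_dirty_buffers {
	unsigned long *dirty_bitmap;      /* half currently written by dirty tracking */
	unsigned long *dirty_bitmap_head; /* start of the doubled allocation */
	unsigned long nbytes;             /* size of one half */
};

/*
 * Zero the currently unused half, make it the new active bitmap, and
 * return the old (populated) half so the caller can copy it out to
 * userspace -- no allocation, hence no vmalloc()/TLB-flush cost.
 */
static unsigned long *switch_dirty_bitmap(struct slot_dirty_buffers *s)
{
	unsigned long *old = s->dirty_bitmap;
	unsigned long *fresh = s->dirty_bitmap_head;

	if (old == fresh)
		fresh += s->nbytes / sizeof(unsigned long);

	memset(fresh, 0, s->nbytes);
	s->dirty_bitmap = fresh;
	return old;
}

In the real code the pointer switch also has to be synchronized with the dirty-logging
writers; that part is omitted from the sketch.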
Performance improvement:
I measured performance for the VGA update case with trace-cmd.
The result was 1.5 times faster than the original code.
In the case of live migration, the improvement ratio depends on the workload
and the guest memory size. In general, the larger the guest memory, the greater the
benefit.
Note:
This does not change other architectures' logic, but the allocation size is doubled.
This increases actual memory consumption only when the doubled size changes the number
of pages vmalloc() allocates: e.g. with 4 KB pages, a 1 KB bitmap doubled to 2 KB still
fits in one page, while a 4 KB bitmap doubled to 8 KB needs one extra page.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Diffstat (limited to 'virt')
-rw-r--r-- | virt/kvm/kvm_main.c | 11
1 files changed, 9 insertions, 2 deletions
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 0021c28..27649fd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -449,8 +449,9 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
 	if (!memslot->dirty_bitmap)
 		return;
 
-	vfree(memslot->dirty_bitmap);
+	vfree(memslot->dirty_bitmap_head);
 	memslot->dirty_bitmap = NULL;
+	memslot->dirty_bitmap_head = NULL;
 }
 
 /*
@@ -537,15 +538,21 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
+/*
+ * Allocation size is twice as large as the actual dirty bitmap size.
+ * This makes it possible to do double buffering: see x86's
+ * kvm_vm_ioctl_get_dirty_log().
+ */
 static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
 {
-	unsigned long dirty_bytes = kvm_dirty_bitmap_bytes(memslot);
+	unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
 
 	memslot->dirty_bitmap = vmalloc(dirty_bytes);
 	if (!memslot->dirty_bitmap)
 		return -ENOMEM;
 
 	memset(memslot->dirty_bitmap, 0, dirty_bytes);
+	memslot->dirty_bitmap_head = memslot->dirty_bitmap;
 	return 0;
 }
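Note that dirty_bitmap_head, used by the hunks above, is not declared in this diff
(the diffstat is limited to 'virt'); presumably it sits next to dirty_bitmap in
struct kvm_memory_slot, roughly:

struct kvm_memory_slot {
	/* ... other fields omitted ... */
	unsigned long *dirty_bitmap;      /* half currently in use for dirty tracking */
	unsigned long *dirty_bitmap_head; /* start of the 2x-sized area; the pointer vfree() needs */
};

Freeing through dirty_bitmap_head rather than dirty_bitmap is what keeps
kvm_destroy_dirty_bitmap() correct when dirty_bitmap currently points at the second
half of the allocation.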