path: root/arch/x86/include/asm/svm.h
author    Andre Przywara <andre.przywara@amd.com>  2010-04-11 23:07:28 +0200
committer Avi Kivity <avi@redhat.com>              2010-05-17 12:17:38 +0300
commit    6bc31bdc55cad6609b1610b4cecad312664f2808 (patch)
tree      82a78a5a8ee0b4202b782e695bad3745ef98a65f /arch/x86/include/asm/svm.h
parent    f7a711971edd952352a89698db1d36f469e25f77 (diff)
download  kernel_samsung_tuna-6bc31bdc55cad6609b1610b4cecad312664f2808.zip (also .tar.gz, .tar.bz2)
KVM: SVM: implement NEXTRIPsave SVM feature
On SVM we set the instruction length of skipped instructions to hard-coded, well-known values, which can be wrong when (bogus, but valid) prefixes (REX, segment override) are used. Newer AMD processors (Fam10h 45nm and later, aka PhenomII or AthlonII) have an explicit NEXTRIP field in the VMCB containing the desired information. Since it is cheap to do so, we use this field to override the guessed value on newer processors.

A fix for older CPUs would be rather expensive, as it would require fetching and partially decoding the instruction. Since the problem is not a security issue and needs special, handcrafted code to trigger (no compiler will ever generate such code), I omit a fix for older CPUs. If someone is interested, I have both a patch for these CPUs and demo code triggering this issue: it segfaults under KVM but runs perfectly on native Linux.

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Diffstat (limited to 'arch/x86/include/asm/svm.h')
-rw-r--r--  arch/x86/include/asm/svm.h | 4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index b26a38d..1d91d05 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -81,7 +81,9 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u32 event_inj_err;
u64 nested_cr3;
u64 lbr_ctl;
- u8 reserved_5[832];
+ u64 reserved_5;
+ u64 next_rip;
+ u8 reserved_6[816];
};