author | Alexander Graf <agraf@suse.de> | 2010-01-08 02:58:06 +0100
---|---|---
committer | Marcelo Tosatti <mtosatti@redhat.com> | 2010-03-01 12:35:49 -0300
commit | 021ec9c69f8b7b20f46296cc76cc4cb341b25191 |
tree | 304f086761e7c01fb412c8319b89ff8b6fb2dde7 | /arch/powerpc/xmon
parent | bc90923e27908ef65aa8aaad2f234e18b5273c78 |
KVM: PPC: Call SLB patching code in interrupt safe manner
Currently we're racy when doing the transition from IR=1 to IR=0, from
the module memory entry code to the real mode SLB switching code.
To work around that I took a look at the RTAS entry code, which faces a
similar problem, and did the same thing:
A small helper in linear mapped memory that does mtmsr with IR=0 and
then RFIs into the actual handler.
Thanks to that trick we can safely take page faults in the entry code
and only need to be really careful about what we do from the SLB
switching part onwards.
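
For reference, a minimal sketch of the kind of trampoline the message describes: a routine sitting in the linear mapping that turns translation off with mtmsr(d) and then RFIs into the real mode handler. This is an illustration under assumptions (64-bit Book3S, hypothetical symbol name `sketch_rmcall`, r3/r4 register convention), not the code added by this commit.

```asm
/* Hypothetical sketch only -- not the code added by this commit. */
#include <asm/ppc_asm.h>
#include <asm/reg.h>

/*
 * sketch_rmcall: enter a real mode handler without racing with interrupts.
 *   r3 = real (physical) address of the handler to enter
 *   r4 = MSR value the handler should run with (IR/DR clear)
 */
_GLOBAL(sketch_rmcall)
	mfmsr	r5			/* current MSR */
	li	r6, MSR_IR|MSR_DR	/* instruction/data relocation bits */
	ori	r6, r6, MSR_EE		/* plus external interrupt enable */
	andc	r5, r5, r6
	mtmsrd	r5			/* IR/DR/EE off; we keep executing because
					 * this code sits in the linear mapping */
	sync
	mtspr	SPRN_SRR0, r3		/* where the rfid will land */
	mtspr	SPRN_SRR1, r4		/* MSR the handler will run with */
	rfid				/* "return" straight into the handler */
```

With interrupts and translation already off, nothing can clobber SRR0/SRR1 between the mtspr instructions and the rfid, which is exactly the race the commit message is concerned with.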
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Diffstat (limited to 'arch/powerpc/xmon')
0 files changed, 0 insertions, 0 deletions