From: Jan Beulich
Date: Thu, 12 Jul 2018 08:47:33 +0000 (+0200)
Subject: x86/shadow: fetch CPL just once in sh_page_fault()
X-Git-Tag: archive/raspbian/4.14.0+80-gd101b417b7-1+rpi1^2~63^2~3621
X-Git-Url: https://dgit.raspbian.org/?a=commitdiff_plain;h=94c7b060c072794d9f21755db24db1ac502ceb4b;p=xen.git

x86/shadow: fetch CPL just once in sh_page_fault()

This isn't so much an optimization as a way of avoiding a gcc bug
affecting 5.x ... 7.x, triggered by any asm() placed inside the ad hoc
"rewalk" loop and taking as an (output?) operand a register variable
tied to %rdx (an "rdx" clobber is fine). The issue appears to be a
collision in register use with the modulo operation in vtlb_hash(),
which (with optimization enabled) involves a multiplication of two
64-bit values, where the upper half (in %rdx) of the 128-bit result is
the part of interest.

Such an asm() was originally meant to be introduced implicitly when
converting most indirect calls through the hvm_funcs table to direct
calls (via alternative instruction patching); that model was switched
to clobbers due to further compiler problems, but I think the change
here is worthwhile nevertheless.

Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Reviewed-by: Tim Deegan
---

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index da586c21c7..021ae252e4 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2817,6 +2817,7 @@ static int sh_page_fault(struct vcpu *v,
     uint32_t rc, error_code;
     bool walk_ok;
     int version;
+    unsigned int cpl;
     const struct npfec access = {
         .read_access = 1,
         .write_access = !!(regs->error_code & PFEC_write_access),
@@ -2967,6 +2968,8 @@ static int sh_page_fault(struct vcpu *v,
         return 0;
     }
 
+    cpl = is_pv_vcpu(v) ? (regs->ss & 3) : hvm_get_cpl(v);
+
  rewalk:
 
     error_code = regs->error_code;
@@ -3023,8 +3026,7 @@ static int sh_page_fault(struct vcpu *v,
      * If this corner case comes about accidentally, then a security-relevant
      * bug has been tickled.
      */
-    if ( !(error_code & (PFEC_insn_fetch|PFEC_user_mode)) &&
-         (is_pv_vcpu(v) ? (regs->ss & 3) : hvm_get_cpl(v)) == 3 )
+    if ( !(error_code & (PFEC_insn_fetch|PFEC_user_mode)) && cpl == 3 )
        error_code |= PFEC_implicit;
 
     /* The walk is done in a lock-free style, with some sanity check
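
For concreteness, below is a minimal, self-contained sketch (not part of
the patch; fake_hash() and demo() are invented stand-ins) of the kind of
construct the message describes: a 64x64->128 multiply, as gcc may
generate for vtlb_hash()'s modulo reduction, coexisting in one loop with
an asm() whose output operand is a register variable pinned to %rdx.

#include <stdint.h>

/*
 * Hypothetical stand-in for vtlb_hash()'s modulo: with optimization
 * enabled, gcc can lower "x % size" via a 64x64->128 multiply, whose
 * upper 64 bits are produced in %rdx on x86-64.
 */
static inline unsigned int fake_hash(uint64_t x, uint64_t size)
{
    return (unsigned int)(x % size); /* size must be non-zero */
}

unsigned int demo(uint64_t key, uint64_t size)
{
    unsigned int h = 0;

    for ( unsigned int i = 0; i < 4; i++ )
    {
        /*
         * A local register variable explicitly tied to %rdx, used as an
         * asm() output operand: per the description above, this is the
         * combination that collides with the multiply under gcc
         * 5.x ... 7.x when both sit in the same loop.
         */
        register uint64_t pinned asm("rdx");

        asm ( "xor %0, %0" : "=r" (pinned) );

        h += fake_hash(key + pinned + i, size);
    }

    /*
     * By contrast, a plain clobber names no operand tied to %rdx and,
     * per the message, is fine:
     *     asm volatile ( "" ::: "rdx" );
     */
    return h;
}

Hoisting the CPL fetch above the rewalk: label is what keeps any such
asm() (as hvm_get_cpl() would have gained under the patched direct-call
model) out of the loop body.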