This isn't so much an optimization as a way to avoid a gcc bug affecting
5.x ... 7.x, triggered by any asm() placed inside the ad hoc "rewalk"
loop and taking as an (output?) operand a register variable tied to
%rdx (an "rdx" clobber is fine). The issue is due to an apparent
collision in register use with the modulo operation in vtlb_hash(),
which (with optimization enabled) involves a multiplication of two
64-bit values, where only the upper half (in %rdx) of the 128-bit result
is of interest.
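
For illustration, the problematic shape is roughly the following (a
hypothetical sketch, not code from this patch; the variable name and the
empty asm() body are made up):

    register unsigned long var asm("rdx");   /* tied to %rdx */
    ...
    /* Inside the "rewalk" loop: the affected gcc versions can lose the
     * value held in %rdx here when vtlb_hash()'s 64x64->128
     * multiplication (which writes %rdx) is scheduled around the asm(). */
    asm volatile ( "" : "+r" (var) );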
Such an asm() was originally meant to be implicitly introduced into the
code when converting most indirect calls through the hvm_funcs table to
direct calls (via alternative instruction patching); that model was
switched to clobbers due to further compiler problems, but I think the
change here is worthwhile nevertheless.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Tim Deegan <tim@xen.org>
uint32_t rc, error_code;
bool walk_ok;
int version;
+ unsigned int cpl;
const struct npfec access = {
.read_access = 1,
.write_access = !!(regs->error_code & PFEC_write_access),
return 0;
}
+ cpl = is_pv_vcpu(v) ? (regs->ss & 3) : hvm_get_cpl(v);
+
rewalk:
error_code = regs->error_code;
* If this corner case comes about accidentally, then a security-relevant
* bug has been tickled.
*/
- if ( !(error_code & (PFEC_insn_fetch|PFEC_user_mode)) &&
- (is_pv_vcpu(v) ? (regs->ss & 3) : hvm_get_cpl(v)) == 3 )
+ if ( !(error_code & (PFEC_insn_fetch|PFEC_user_mode)) && cpl == 3 )
error_code |= PFEC_implicit;
/* The walk is done in a lock-free style, with some sanity check