XSETBV is an expensive instruction as, amongst other things, it involves
reconfiguring the instruction decode at the frontend of the pipeline.

We have several paths which reconfigure %xcr0 in quick succession (the context
switch path has 5, including the fpu save/restore helpers), and only a single
caller takes any care to try to skip redundant writes.

Update set_xcr0() to perform amortisation automatically, and simplify the
__context_switch() path as a consequence.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
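---
Not part of the patch proper: a minimal standalone sketch of the
shadow-variable pattern the new set_xcr0() uses.  All names here
(xsetbv_stub, shadow_xcr0, hw_writes) are hypothetical stand-ins for the
real XSETBV and the per-CPU this_cpu(xcr0) variable, with a counting stub
so the elision of redundant writes is observable.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t shadow_xcr0;   /* stands in for per-CPU this_cpu(xcr0) */
static unsigned int hw_writes; /* counts simulated XSETBV executions */

static bool xsetbv_stub(uint64_t val)
{
    (void)val;
    hw_writes++;               /* the expensive instruction */
    return true;
}

static bool set_xcr0(uint64_t val)
{
    if ( shadow_xcr0 != val )  /* cheap compare elides redundant writes */
    {
        if ( !xsetbv_stub(val) )
            return false;      /* shadow stays stale on failure */

        shadow_xcr0 = val;     /* update the shadow only on success */
    }

    return true;
}

int main(void)
{
    set_xcr0(0x7);             /* first write reaches "hardware" */
    set_xcr0(0x7);             /* redundant: satisfied by the shadow */
    set_xcr0(0x3);             /* changed value: one more write */
    printf("XSETBV executions: %u\n", hw_writes); /* prints 2, not 3 */

    return 0;
}

With the shadow in place, back-to-back reconfigurations with an unchanged
value (e.g. the 5 on the context switch path) collapse to a single XSETBV,
and callers no longer need their own get_xcr0() comparison.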
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ ... @@ static void __context_switch(void)
         memcpy(stack_regs, &n->arch.user_regs, CTXT_SWITCH_STACK_BYTES);
         if ( cpu_has_xsave )
         {
-            u64 xcr0 = n->arch.xcr0 ?: XSTATE_FP_SSE;
-
-            if ( xcr0 != get_xcr0() && !set_xcr0(xcr0) )
+            if ( !set_xcr0(n->arch.xcr0 ?: XSTATE_FP_SSE) )
                 BUG();
 
             if ( cpu_has_xsaves && is_hvm_vcpu(n) )
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ ... @@
     return lo != 0;
 }
 
-bool set_xcr0(u64 xfeatures)
+bool set_xcr0(u64 val)
 {
-    if ( !xsetbv(XCR_XFEATURE_ENABLED_MASK, xfeatures) )
-        return false;
-    this_cpu(xcr0) = xfeatures;
+    uint64_t *this_xcr0 = &this_cpu(xcr0);
+
+    if ( *this_xcr0 != val )
+    {
+        if ( !xsetbv(XCR_XFEATURE_ENABLED_MASK, val) )
+            return false;
+
+        *this_xcr0 = val;
+    }
+
     return true;
 }