x86/pv: Optimise the segment context switching paths

Opencode the fs/gs helpers, as the optimiser is unable to rearrange the logic
down to a single X86_CR4_FSGSBASE check. This removes several jumps and
creates bigger basic blocks.
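
As a rough sketch (accessor and variable names assumed, not necessarily the
exact ones in tree), opencoding lets the base accesses share one CR4 check:

    /* Single X86_CR4_FSGSBASE check covering both base reads, instead of a
     * check hidden inside each helper. */
    unsigned long fs_base, gs_base;

    if ( read_cr4() & X86_CR4_FSGSBASE )
    {
        fs_base = rdfsbase();
        gs_base = rdgsbase();
    }
    else
    {
        rdmsrl(MSR_FS_BASE, fs_base);
        rdmsrl(MSR_GS_BASE, gs_base);
    }
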
In load_segments(), optimise GS base handling substantially. The call to
svm_load_segs() already needs gsb/gss the correct way around, so hoist that
selection so the later path can reuse it as well. Swapping the inputs in GPRs is
far more efficient than using SWAPGS.
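
A sketch of the hoisted selection (variable and field names illustrative):

    /* Decide which value is destined for the active GS base (gsb) and which
     * for the inactive/shadow one (gss), based on the mode the vCPU resumes
     * in.  This is a swap of two GPR-held values, not a SWAPGS. */
    unsigned long gsb = n->arch.pv.gs_base_kernel;
    unsigned long gss = n->arch.pv.gs_base_user;

    if ( !(n->arch.flags & TF_kernel_mode) )
    {
        unsigned long tmp = gsb;

        gsb = gss;
        gss = tmp;
    }
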
Previously, there was optionally one SWAPGS from the user/kernel mode check,
two SWAPGSes in write_gs_shadow(), and two WRGSBASEs. Updates to GS (4 or 5
here) in quick succession stall all contemporary pipelines repeatedly. (Intel
Core/Xeon pipelines have segment register renaming[1], so can continue to
speculatively execute with one GS update in flight. Other pipelines cannot
have two updates in flight concurrently, and must stall dispatch of the second
until the first has retired.)
Rewrite the logic to have exactly two WRGSBASEs and one SWAPGS, which removes
two stalls on all contemporary processors.
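
The FSGSBASE-capable path then takes roughly this shape (a sketch, not the
literal patch):

    wrgsbase(gss);              /* gss goes into the active GS base ...     */
    asm volatile ( "swapgs" );  /* ... and is rotated into the shadow slot, */
    wrgsbase(gsb);              /* ... leaving gsb as the active GS base.   */
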
Although modest, the resulting delta is in a common path:

  add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-106 (-106)
  Function                                     old     new   delta
  paravirt_ctxt_switch_from                    235     198     -37
  context_switch                              3582    3513     -69

[1] https://software.intel.com/security-software-guidance/insights/deep-dive-intel-analysis-speculative-behavior-swapgs-and-segment-registers

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>