... for the time being: The mechanism used depends on the domain's use
of the IRET hypercall, which PVH doesn't use. The HVM code (which PVH
shares) would deliver an NMI when it sees v->nmi_pending, but the
temporary vCPU affinity adjustment made for the delivery gets undone
only in the HYPERVISOR_iret handler, and PVH can't invoke that
hypercall.
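
To make that dependency concrete, here is a minimal standalone sketch
of the flow just described. Only the v->nmi_pending field and the
HYPERVISOR_iret hypercall name come from Xen; affinity_pinned merely
stands in for the temporary affinity adjustment, and the function names
are made up for illustration:

#include <stdbool.h>
#include <stdio.h>

struct vcpu_sketch {
    bool nmi_pending;     /* the field the HVM code checks */
    bool affinity_pinned; /* models the temporary affinity change */
};

/* Raising the NMI pins the vCPU to the pCPU that observed it. */
static void raise_nmi(struct vcpu_sketch *v)
{
    v->affinity_pinned = true;
    v->nmi_pending = true;
}

/* HVM (and hence PVH) entry path: injects the NMI when pending. */
static void maybe_inject_nmi(struct vcpu_sketch *v)
{
    if ( v->nmi_pending )
    {
        v->nmi_pending = false;
        printf("NMI injected\n");
    }
}

/* Only the iret hypercall handler restores affinity: a PV guest's NMI
 * handler ends with HYPERVISOR_iret, but a PVH guest never issues it. */
static void hypercall_iret(struct vcpu_sketch *v)
{
    v->affinity_pinned = false;
}

int main(void)
{
    struct vcpu_sketch v = { false, false };

    raise_nmi(&v);
    maybe_inject_nmi(&v);  /* PVH does get the NMI injected ... */
    /* ... but never calls hypercall_iret(), so: */
    printf("affinity still pinned: %d\n", v.affinity_pinned);
    return 0;
}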
Also drop two bogus code lines spotted while going through the involved
code paths: Addresses of per-CPU variables can't possibly be NULL, and
the setting of st->vcpu in send_guest_trap()'s MCE case is redundant
with an earlier cmpxchgptr().
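
The first of those needs no example - &per_cpu(softirq_trap, cpu) is a
statically allocated object's address plus a per-CPU offset, and hence
can never be NULL. For the second, the sketch below models the
cmpxchgptr()-based claiming of the per-CPU slot (which send_guest_trap()
does along the lines of cmpxchgptr(&st->vcpu, NULL, v) before either
case runs) that makes the dropped assignment redundant. A GCC builtin
stands in for Xen's cmpxchgptr() so the sketch compiles on its own, and
the structures are simplified:

#include <stddef.h>
#include <stdio.h>

struct vcpu;                /* opaque in this sketch */

struct softirq_trap_sketch {
    struct vcpu *vcpu;      /* claimed atomically below */
};

/* Install v only if the slot is currently free. */
static int claim_slot(struct softirq_trap_sketch *st, struct vcpu *v)
{
    if ( __sync_val_compare_and_swap(&st->vcpu, (struct vcpu *)NULL, v) )
        return -1;          /* slot already owned */

    /* st->vcpu == v already holds here, so a later "st->vcpu = v;"
     * (the dropped line) only stored the same value again. */
    return 0;
}

int main(void)
{
    struct softirq_trap_sketch st = { NULL };
    struct vcpu *v = (struct vcpu *)0x1;  /* dummy non-NULL pointer */

    printf("first claim:  %d\n", claim_slot(&st, v)); /* 0 (claimed) */
    printf("second claim: %d\n", claim_slot(&st, v)); /* -1 (busy) */
    return 0;
}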
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
     int cpu = smp_processor_id();
     struct softirq_trap *st = &per_cpu(softirq_trap, cpu);
 
-    BUG_ON(st == NULL);
     BUG_ON(st->vcpu == NULL);
 
     /* Set the tmp value unconditionally, so that
 {
     struct domain *d = hardware_domain;
 
-    if ( (d == NULL) || (d->vcpu == NULL) || (d->vcpu[0] == NULL) )
+    if ( !d || !d->vcpu || !d->vcpu[0] || !is_pv_domain(d) /* PVH fixme */ )
         return;
 
     set_bit(reason_idx, nmi_reason(d));
         if ( !test_and_set_bool(v->mce_pending) ) {
             st->domain = d;
-            st->vcpu = v;
             st->processor = v->processor;
 
             /* not safe to wake up a vcpu here */