x86/PV: make PMU MSR handling consistent
author Jan Beulich <jbeulich@suse.com>
Fri, 2 Sep 2016 12:19:29 +0000 (14:19 +0200)
committer Jan Beulich <jbeulich@suse.com>
Fri, 2 Sep 2016 12:19:29 +0000 (14:19 +0200)
So far accesses to Intel MSRs on an AMD system fall through to the
default case, while accesses to AMD MSRs on an Intel system bail (in
the RDMSR case without updating EAX and EDX). Make the "AMD MSRs on
Intel" case match the "Intel MSRs on AMD" one.
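The effect of moving the "break" inside the vendor-check block can be
sketched as follows. This is a simplified, hypothetical model (the names
`pmu_msr_consumed` and the `enum vendor` values are illustrative, not
Xen's actual code): with the patch, the PMU path only consumes the
access when the MSR's vendor matches the CPU vendor; a mismatched access
now falls through to the default handler on both vendors.

```c
#include <stdbool.h>

enum vendor { VENDOR_INTEL, VENDOR_AMD };

/* Hypothetical sketch of the switch in emulate_privileged_op():
 * returns true if the PMU case consumed the access (the in-block
 * "break" after the vpmu call), false if control fell through to
 * the default MSR handling. */
static bool pmu_msr_consumed(enum vendor cpu, enum vendor msr_vendor)
{
    switch ( msr_vendor )
    {
    case VENDOR_AMD:
    case VENDOR_INTEL:
        if ( msr_vendor == cpu )
        {
            /* vpmu_do_rdmsr()/vpmu_do_wrmsr() would run here */
            break;              /* patched: break inside the block */
        }
        /* mismatched vendor: previously only the Intel-MSR-on-AMD
         * path fell through; now both do */
        /* fall through */
    default:
        return false;           /* generic default-case handling */
    }
    return true;
}
```

Before the patch, the "break" sat after the vendor check, so an AMD MSR
access on an Intel CPU left the switch without reaching the default case
(and, for RDMSR, without writing EAX/EDX); the moved "break" makes both
mismatch directions take the fallthrough.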

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
xen/arch/x86/traps.c

index 992ba23f506e63f1dcb17edf10e9c9d8d8ea3c3e..d2f2de4aea79910d0b1dae8616557c4af723eabc 100644 (file)
@@ -2912,8 +2912,8 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
 
                     if ( vpmu_do_wrmsr(regs->ecx, msr_content, 0) )
                         goto fail;
+                    break;
                 }
-                break;
             }
             /*FALLTHROUGH*/
 
@@ -3048,8 +3048,8 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
 
                     regs->eax = (uint32_t)val;
                     regs->edx = (uint32_t)(val >> 32);
+                    break;
                 }
-                break;
             }
             /*FALLTHROUGH*/