Use the smp_ variants, as we're only synchronizing against other CPUs.
Add a write barrier before incrementing the version.
x86's memory ordering rules and the presence of various out-of-unit
function calls mean that this code worked OK before, and the barriers
are mostly decorative.
Signed-off-by: Tim Deegan <tim@xen.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
ASSERT(paging_locked_by_me(d));
+ /* No need for smp_rmb() here; taking the paging lock was enough. */
if ( version == atomic_read(&d->arch.paging.shadow.gtable_dirty_version) )
return 1;
     * will make sure no inconsistent mapping is translated into the
     * shadow page table. */
version = atomic_read(&d->arch.paging.shadow.gtable_dirty_version);
- rmb();
+ smp_rmb();
walk_ok = sh_walk_guest_tables(v, va, &gw, error_code);
#if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
* overlapping with this one may be inconsistent
*/
perfc_incr(shadow_rm_write_flush_tlb);
+ smp_wmb();
atomic_inc(&d->arch.paging.shadow.gtable_dirty_version);
flush_tlb_mask(d->domain_dirty_cpumask);
}