x86/p2m: avoid unnecessary calls of write_p2m_entry_pre() hook
author    Jan Beulich <jbeulich@suse.com>
          Fri, 8 Jan 2021 15:49:23 +0000 (16:49 +0100)
committer Jan Beulich <jbeulich@suse.com>
          Fri, 8 Jan 2021 15:49:23 +0000 (16:49 +0100)
When shattering a large page, we first construct the new page table
page and only then hook it up. The "pre" hook has nothing to do in
this case, as the page starts out all blank. Avoid the resulting 512
calls into shadow code by passing in INVALID_GFN, indicating that the
page being updated is not (yet) associated with any GFN. (The
alternative to this change would be to actually pass in a correct
GFN, which would have to differ on every loop iteration.)
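
As an illustration only (not Xen code: the struct layout, hook
signature, and printf output below are simplified stand-ins), here is
a minimal standalone sketch of the gating pattern the patch
introduces:

#include <stdio.h>

#define INVALID_GFN (~0UL) /* stand-in for Xen's gfn_x(INVALID_GFN) */

struct p2m_domain {
    /* Optional hook; only shadow-mode p2m installs one. */
    void (*write_p2m_entry_pre)(unsigned long gfn, unsigned int level);
};

static void shadow_pre_hook(unsigned long gfn, unsigned int level)
{
    printf("pre hook: gfn %#lx, level %u\n", gfn, level);
}

static void write_entry(struct p2m_domain *p2m, unsigned long gfn,
                        unsigned int level)
{
    /*
     * The crux of the patch: skip the hook when the page being
     * written is not (yet) hooked up anywhere, which the caller
     * signals by passing INVALID_GFN.
     */
    if ( p2m->write_p2m_entry_pre && gfn != INVALID_GFN )
        p2m->write_p2m_entry_pre(gfn, level);

    /* ... the actual PTE update would follow here ... */
}

int main(void)
{
    struct p2m_domain p2m = { .write_p2m_entry_pre = shadow_pre_hook };

    write_entry(&p2m, 0x1000, 1);      /* hook runs */
    write_entry(&p2m, INVALID_GFN, 1); /* hook skipped */
    return 0;
}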

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
xen/arch/x86/mm/p2m-pt.c

index 5fa0d30ce7d23fcfd027f19ed9ff3f8787c9449c..848773e1cde34170c57fa2519880bbb93d4cff2a 100644
@@ -134,7 +134,7 @@ static int write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
 
         paging_lock(d);
 
-        if ( p2m->write_p2m_entry_pre )
+        if ( p2m->write_p2m_entry_pre && gfn != gfn_x(INVALID_GFN) )
             p2m->write_p2m_entry_pre(d, gfn, p, new, level);
 
         oflags = l1e_get_flags(*p);
@@ -290,7 +290,8 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
         {
             new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
                                      flags);
-            rc = write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+            rc = write_p2m_entry(p2m, gfn_x(INVALID_GFN), l1_entry + i,
+                                 new_entry, level);
             if ( rc )
             {
                 unmap_domain_page(l1_entry);
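
For context, a hedged sketch of the shattering flow that makes the
skipped calls safe. This is heavily simplified, with stand-in types,
no locking, and calloc standing in for a fresh page; it is not the
actual p2m_next_level() code. The key point is that all 512 entries
of the freshly allocated page are written before anything points at
the page, so a "pre" hook could only ever observe blank entries:

#include <stdint.h>
#include <stdlib.h>

#define PAGETABLE_ORDER   9
#define PAGETABLE_ENTRIES (1UL << PAGETABLE_ORDER) /* 512 */

typedef struct { uint64_t e; } pte_t; /* simplified PTE */

/*
 * Shatter a large mapping starting at pfn into a new table one
 * level down.  The table starts out all blank and is fully
 * populated before the caller hooks it up.
 */
static pte_t *shatter(uint64_t pfn, unsigned int level)
{
    pte_t *tbl = calloc(PAGETABLE_ENTRIES, sizeof(*tbl));
    unsigned long i;

    if ( !tbl )
        return NULL;

    for ( i = 0; i < PAGETABLE_ENTRIES; i++ )
        /*
         * No GFN maps this page yet, so per the patch each of
         * these writes passes INVALID_GFN and skips the "pre"
         * hook, saving 512 calls into shadow code.
         */
        tbl[i].e = pfn + (i << ((level - 1) * PAGETABLE_ORDER));

    return tbl; /* only now would the caller hook the table up */
}

int main(void)
{
    free(shatter(0x100000, 2));
    return 0;
}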