When freeing a p2m entry, the whole sub-tree behind it will also be
freed. This may include intermediate page-tables or any L3 entry that
requires dropping a reference (e.g. for foreign pages). As soon as the
pages are freed, they may be re-used by Xen or another domain. Therefore
it is necessary to flush *all* the TLBs beforehand.
While CPU TLBs are flushed before the pages are freed, this is not
the case for IOMMU TLBs. This can be solved by moving the IOMMU TLB
flush earlier in the code.
This wasn't considered a security issue, as device passthrough on Arm
is not security supported.
Signed-off-by: Julien Grall <julien.grall@arm.com>
Tested-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Release-acked-by: Juergen Gross <jgross@suse.com>
(cherry picked from commit 671878779741b38c5f2363adceef8de2ce0b3945)
p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
}
+ if ( need_iommu(p2m->domain) &&
+ (lpae_valid(orig_pte) || lpae_valid(*entry)) )
+ rc = iommu_iotlb_flush(p2m->domain, gfn_x(sgfn), 1UL << page_order);
+ else
+ rc = 0;
+
/*
* Free the entry only if the original pte was valid and the base
 * is different (to avoid freeing when permission is changed).
 */
if ( lpae_valid(orig_pte) && entry->p2m.base != orig_pte.p2m.base )
p2m_free_entry(p2m, orig_pte, level);
- if ( need_iommu(p2m->domain) &&
- (lpae_valid(orig_pte) || lpae_valid(*entry)) )
- rc = iommu_iotlb_flush(p2m->domain, gfn_x(sgfn), 1UL << page_order);
- else
- rc = 0;
-
out:
unmap_domain_page(table);