The prior extension of these functions to enable per-device quarantine page
tables didn't add any further locking, but merely left in place what had
been there before. Yet locking really is unnecessary here: we run with
pcidevs_lock held (i.e. multiple invocations of the same function [or
their teardown equivalents] are impossible, and hence there are no
"local" races), while all consumption of the data populated here cannot
race anyway, since it happens strictly sequentially afterwards. Moreover,
unlike ordinary domains' page tables, quarantine ones are never modified
once fully constructed. See also the comment in struct arch_pci_dev.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
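For illustration, here is a minimal toy model of the fill_qpt() pattern the
hunks below rely on. All names, sizes, and the pointer-based table layout
are assumptions for the sketch, not Xen's real definitions: one table per
level is allocated and cached in pgs[], so every slot at a given level
shares the same lower-level table, with the level-0 table standing in for
the shared sink/leaf page. No lock is taken, on the premise stated above:
construction is serialized by the caller (pcidevs_lock in the real code),
and the result is never modified once built.

```c
#include <stdlib.h>

/* Toy model of the quarantine fill_qpt() pattern; illustrative only. */
#define SLOTS  4   /* entries per table; real page tables have 512 */
#define LEVELS 3   /* number of paging levels in this sketch */

typedef struct table {
    struct table *slot[SLOTS];
} table_t;

/*
 * Recursively populate one table per level, caching each in pgs[] so
 * sibling slots share the same lower-level table.  The level-0 table
 * plays the role of the single sink/leaf page.  No locking: callers
 * are assumed serialized, and the tree is immutable once constructed.
 */
table_t *fill_qpt(unsigned int level, table_t *pgs[LEVELS])
{
    if (!pgs[level]) {
        table_t *tbl = calloc(1, sizeof(*tbl));
        unsigned int i;

        if (level)
            for (i = 0; i < SLOTS; ++i)
                tbl->slot[i] = fill_qpt(level - 1, pgs);
        pgs[level] = tbl;
    }
    return pgs[level];
}

/* Build the tree once, then verify every path ends at pgs[0]. */
int all_paths_reach_sink(void)
{
    table_t *pgs[LEVELS] = { 0 };
    table_t *root = fill_qpt(LEVELS - 1, pgs);
    unsigned int i, j;

    for (i = 0; i < SLOTS; ++i)
        for (j = 0; j < SLOTS; ++j)
            if (root->slot[i]->slot[j] != pgs[0])
                return 0;
    return 1;
}
```

This mirrors why dropping the mapping_lock below is safe: the only writes
to the structure occur during this single serialized construction pass.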
union amd_iommu_pte *root;
struct page_info *pgs[IOMMU_MAX_PT_LEVELS] = {};
- spin_lock(&hd->arch.mapping_lock);
-
root = __map_domain_page(pdev->arch.amd.root_table);
rc = fill_qpt(root, level - 1, pgs);
unmap_domain_page(root);
pdev->arch.leaf_mfn = page_to_mfn(pgs[0]);
-
- spin_unlock(&hd->arch.mapping_lock);
}
page_list_move(&pdev->arch.pgtables_list, &hd->arch.pgtables.list);
struct dma_pte *root;
struct page_info *pgs[6] = {};
- spin_lock(&hd->arch.mapping_lock);
-
root = map_vtd_domain_page(pdev->arch.vtd.pgd_maddr);
rc = fill_qpt(root, level - 1, pgs);
unmap_vtd_domain_page(root);
pdev->arch.leaf_mfn = page_to_mfn(pgs[0]);
-
- spin_unlock(&hd->arch.mapping_lock);
}
page_list_move(&pdev->arch.pgtables_list, &hd->arch.pgtables.list);