AMD/IOMMU: correct potentially-UB shifts
Recent changes (likely 5fafa6cf529a ["AMD/IOMMU: have callers specify
the target level for page table walks"]) have made Coverity notice a
shift count in iommu_pde_from_dfn() which might in theory grow too
large. While this isn't a problem in practice, address the concern
nevertheless, so as not to leave latent breakage in case very large
superpages are enabled at some point.
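
As an illustration only (the helper and constants below are made up
for this note and are not the actual Xen code), the hazard is a shift
whose count is derived from a paging level: once the level is large
enough, the count can reach or exceed the width of the promoted left
operand, which is undefined behaviour in C. Forcing a 64-bit operand
keeps the count in range:

    #include <inttypes.h>
    #include <stdio.h>

    #define PTE_PER_TABLE_SHIFT 9 /* 512 entries per table level */

    /*
     * Hypothetical helper: number of 4k pages covered by one entry at
     * the given level.  With a 32-bit "1u" the shift count would reach
     * 45 at level 6, i.e. beyond the width of unsigned int and hence
     * undefined behaviour; UINT64_C(1) keeps the shift within a 64-bit
     * operand.
     */
    static uint64_t pages_per_entry(unsigned int level)
    {
        return UINT64_C(1) << (PTE_PER_TABLE_SHIFT * (level - 1));
    }

    int main(void)
    {
        for ( unsigned int level = 1; level <= 6; ++level )
            printf("level %u: %" PRIu64 " pages per entry\n",
                   level, pages_per_entry(level));
        return 0;
    }

The real functions of course operate on the AMD IOMMU page-table
layout; the point here is only the width of the shifted operand.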
Coverity ID: 1504264
While there, also address a similar issue in set_iommu_ptes_present().
It's not clear to me why Coverity hasn't spotted that one.
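
Purely as a sketch of how such an issue could be caught early (this is
not part of the change, and the level bound is an assumption), a
build-time assertion can reject any configuration where a
level-derived shift count could reach the operand width:

    #include <limits.h>

    #define PTE_PER_TABLE_SHIFT 9 /* 512 entries per table level */
    #define ASSUMED_MAX_LEVEL   6 /* hypothetical upper bound on levels */

    /* Refuses to build if the largest level-derived shift count could
     * reach the width of unsigned long, i.e. before it can turn into
     * undefined behaviour at run time. */
    _Static_assert(PTE_PER_TABLE_SHIFT * ASSUMED_MAX_LEVEL <
                   sizeof(unsigned long) * CHAR_BIT,
                   "level-derived shift can exceed unsigned long width");

    int main(void) { return 0; }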
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>