KVM: x86: hyper-v: Avoid calling kvm_make_vcpus_request_mask() with vcpu_mask==NULL
author Vitaly Kuznetsov <vkuznets@redhat.com>
Fri, 3 Sep 2021 07:51:36 +0000 (09:51 +0200)
committer Salvatore Bonaccorso <carnil@debian.org>
Fri, 12 May 2023 04:08:40 +0000 (05:08 +0100)
Origin: https://git.kernel.org/linus/6470accc7ba948b0b3aca22b273fe84ec638a116
Bug-Debian: https://bugs.debian.org/1035779

In preparation for making kvm_make_vcpus_request_mask() use for_each_set_bit(),
switch kvm_hv_flush_tlb() to calling kvm_make_all_cpus_request() for the
'all cpus' case.

Note: unlike kvm_make_vcpus_request_mask(), kvm_make_all_cpus_request()
currently allocates a cpumask dynamically on each call, which is suboptimal.
Both kvm_make_all_cpus_request() and kvm_make_vcpus_request_mask() are
going to be switched to using pre-allocated per-cpu masks.
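
For context, a minimal sketch of the allocation the note refers to, assuming
the pre-rework shape of kvm_make_all_cpus_request() in virt/kvm/kvm_main.c
(simplified and inlined, not the exact upstream code): a temporary cpumask is
allocated on every call and handed to kvm_make_vcpus_request_mask() purely as
scratch space.

  /*
   * Sketch (assumption, simplified): the per-call allocation the note
   * above calls suboptimal -- a GFP_ATOMIC cpumask is allocated and
   * freed on every request broadcast.
   */
  bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
  {
  	cpumask_var_t cpus;
  	bool called;

  	/* Dynamic allocation on every call (the "suboptimal" part). */
  	zalloc_cpumask_var(&cpus, GFP_ATOMIC);

  	/* NULL vcpu_mask means "all vCPUs"; 'cpus' is only scratch space. */
  	called = kvm_make_vcpus_request_mask(kvm, req, NULL, NULL, cpus);

  	free_cpumask_var(cpus);
  	return called;
  }

The per-cpu mask rework mentioned above removes this per-call allocation for
both helpers.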

Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210903075141.403071-4-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gbp-Pq: Topic bugfix/x86
Gbp-Pq: Name KVM-x86-hyper-v-Avoid-calling-kvm_make_vcpus_request.patch

arch/x86/kvm/hyperv.c

index 09ec1cda2d687c517a3bfe5de1f94899f1b5f8e7..e03e320847cdd9ec8437654d4a3b0783a64aca76 100644
@@ -1562,16 +1562,19 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *current_vcpu, u64 ingpa,
 
        cpumask_clear(&hv_vcpu->tlb_flush);
 
-       vcpu_mask = all_cpus ? NULL :
-               sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask,
-                                       vp_bitmap, vcpu_bitmap);
-
        /*
         * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
         * analyze it here, flush TLB regardless of the specified address space.
         */
-       kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
-                                   NULL, vcpu_mask, &hv_vcpu->tlb_flush);
+       if (all_cpus) {
+               kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH_GUEST);
+       } else {
+               vcpu_mask = sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask,
+                                                   vp_bitmap, vcpu_bitmap);
+
+               kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
+                                           NULL, vcpu_mask, &hv_vcpu->tlb_flush);
+       }
 
 ret_success:
        /* We always do full TLB flush, set rep_done = rep_cnt. */