Building a CentOS 6 kernel with KVM support fails because of MMU


I get the following error when building the latest Linux kernel for CentOS:

  CC [M]  arch/x86/kernel/iosf_mbi.o
  CC      arch/x86/kvm/../../../virt/kvm/kvm_main.o
  CC      arch/x86/kvm/../../../virt/kvm/coalesced_mmio.o
  CC      arch/x86/kvm/../../../virt/kvm/eventfd.o
  CC      arch/x86/kvm/../../../virt/kvm/irqchip.o
  CC      arch/x86/kvm/../../../virt/kvm/vfio.o
  CC      arch/x86/kvm/../../../virt/kvm/async_pf.o
  CC      arch/x86/kvm/x86.o
arch/x86/kvm/x86.c: In function 'kvm_write_tsc':
arch/x86/kvm/x86.c:1290: warning: 'already_matched' may be used uninitialized in this function
  CC      arch/x86/kvm/mmu.o
arch/x86/kvm/mmu.c: In function 'kvm_mmu_pte_write':
arch/x86/kvm/mmu.c:4219: error: unknown field 'cr0_wp' specified in initializer
arch/x86/kvm/mmu.c:4220: error: unknown field 'cr4_pae' specified in initializer
arch/x86/kvm/mmu.c:4220: warning: excess elements in union initializer
arch/x86/kvm/mmu.c:4220: warning: (near initialization for '(anonymous)')
arch/x86/kvm/mmu.c:4221: error: unknown field 'nxe' specified in initializer
arch/x86/kvm/mmu.c:4221: warning: excess elements in union initializer
arch/x86/kvm/mmu.c:4221: warning: (near initialization for '(anonymous)')
arch/x86/kvm/mmu.c:4222: error: unknown field 'smep_andnot_wp' specified in initializer
arch/x86/kvm/mmu.c:4222: warning: excess elements in union initializer
arch/x86/kvm/mmu.c:4222: warning: (near initialization for '(anonymous)')
arch/x86/kvm/mmu.c:4223: error: unknown field 'smap_andnot_wp' specified in initializer
arch/x86/kvm/mmu.c:4223: warning: excess elements in union initializer
arch/x86/kvm/mmu.c:4223: warning: (near initialization for '(anonymous)')
make[2]: *** [arch/x86/kvm/mmu.o] Error 1
make[1]: *** [arch/x86/kvm] Error 2
make: *** [arch/x86] Error 2

I have disabled MMU under the KVM menu, but it still shows up in the build. I have also tried make clean.

Here is the build config: http://sprunge.us/YdcN

Am I missing something?

Answer 1

This is a coding problem in linux-4.0.5/arch/x86/kvm/mmu.c. The lines the compiler complains about only initialize a union kvm_mmu_page_role variable that is used as a mask of page-table role bits, so you can comment out the offending initializer and rewrite it in place without touching the rest of the file. Most likely the real culprit is CentOS 6's GCC 4.4, which does not accept designated initializers that name fields of the anonymous struct inside that union; that is exactly what the "unknown field 'cr0_wp' specified in initializer" errors are pointing at.
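If you only want a minimal change, the sketch below shows the idea. It assumes the failing lines around mmu.c:4219 are the designated-initializer form of the mask that the error output refers to; check your copy of the file first, since line numbers and the exact wording may differ.

/*
 * GCC 4.4 cannot resolve designated initializers such as .cr0_wp that
 * name fields of the anonymous struct inside union kvm_mmu_page_role.
 * Zero the union first, then set the role bits with plain assignments.
 */
union kvm_mmu_page_role mask = { };

mask.cr0_wp = 1;
mask.cr4_pae = 1;
mask.nxe = 1;
mask.smep_andnot_wp = 1;
mask.smap_andnot_wp = 1;

Keep the declaration of mask with the other locals at the top of kvm_mmu_pte_write() and put the assignments right after the declaration block. The mask ends up with the same bits set as the original initializer, so the later sp->role.word comparison behaves exactly as before.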

Alternatively:

You can replace the whole kvm_mmu_pte_write() function definition with the following code to fix the error:

void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
                       const u8 *new, int bytes)
{
    gfn_t gfn = gpa >> PAGE_SHIFT;
    struct kvm_mmu_page *sp;
    LIST_HEAD(invalid_list);
    u64 entry, gentry, *spte;
    int npte;
    bool remote_flush, local_flush, zap_page;

    union kvm_mmu_page_role mask = { };

    /*
     * CentOS 6's GCC 4.4 rejects designated initializers for fields of
     * the anonymous struct inside union kvm_mmu_page_role, so build the
     * role-bit mask with plain assignments instead.
     */
    mask.cr0_wp = 1;
    mask.cr4_pae = 1;
    mask.nxe = 1;
    mask.smep_andnot_wp = 1;
    mask.smap_andnot_wp = 1;

    /*
     * If we don't have indirect shadow pages, it means no page is
     * write-protected, so we can exit simply.
     */
    if (!ACCESS_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
            return;

    zap_page = remote_flush = local_flush = false;

    pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);

    gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, new, &bytes);

    /*
     * No need to care whether the memory allocation succeeded or not,
     * since pte prefetch is skipped if there are not enough objects in
     * the cache.
     */
    mmu_topup_memory_caches(vcpu);

    spin_lock(&vcpu->kvm->mmu_lock);
    ++vcpu->kvm->stat.mmu_pte_write;
    kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);

    for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
            if (detect_write_misaligned(sp, gpa, bytes) ||
                  detect_write_flooding(sp)) {
                    zap_page |= !!kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
                                                 &invalid_list);
                    ++vcpu->kvm->stat.mmu_flooded;
                    continue;
            }

            spte = get_written_sptes(sp, gpa, &npte);
            if (!spte)
                    continue;

            local_flush = true;
            while (npte--) {
                    entry = *spte;
                    mmu_page_zap_pte(vcpu->kvm, sp, spte);
                    if (gentry &&
                          !((sp->role.word ^ vcpu->arch.mmu.base_role.word)
                          & mask.word) && rmap_can_add(vcpu))
                            mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
                    if (need_remote_flush(entry, *spte))
                            remote_flush = true;
                    ++spte;
            }
    }
    mmu_pte_write_flush_tlb(vcpu, zap_page, remote_flush, local_flush);
    kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
    kvm_mmu_audit(vcpu, AUDIT_POST_PTE_WRITE);
    spin_unlock(&vcpu->kvm->mmu_lock);
}

Make the change and compile again; the build should now get past arch/x86/kvm/mmu.o. It worked for me.
