How do I get kubeadm to complete and start the static Pods defined in the /etc/kubernetes manifests using containerd?

I am running Fedora 30 (I also tried Fedora 29) and I cannot get kubeadm init to complete. This is the error I receive:

kstarter]# kubeadm init --ignore-preflight-errors=Swap,Service,Docker,SystemVerification,NumCPU --config=/root/con/cc.yaml
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly 
[preflight] Pulling images required for setting up a Kubernetes cluster 
[preflight] This might take a minute or two, depending on the speed of your internet connection 
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull' 
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" 
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key 
[certs] Generating "etcd/healthcheck-client" certificate and key 
[certs] Generating "apiserver-etcd-client" certificate and key 
[certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [devks8twohundredandthirtyeighthbdhh99suy localhost] and IPs [172.16.6.3 127.0.0.1 ::1 ] 
[certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [devks8twohundredandthirtyeighthbdhh99suy localhost] and IPs [172.16.6.3 127.0.0.1 ::1] 
[certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key 
[certs] apiserver serving cert is signed for DNS names [devks8twohundredandthirtyeighthbdhh99suy kubernetes kubernetes.default kubernetes.defau lt.svc kubernetes.default.svc.cluster.local] and IPs [172.24.0.1 172.16.6.3] 
[certs] Generating "apiserver-kubelet-client" certificate and key 
[certs] Generating "front-proxy-ca" certificate and key    
[certs] Generating "front-proxy-client" certificate and key 
[certs] Generating "sa" key and public key [kubeconfig] Using 
kubeconfig folder "/etc/kubernetes" 
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver" 
[control-plane] Creating static Pod manifest for "kube-controller-manager" 
[control-plane] Creating static Pod manifest for "kube-scheduler" 
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" 
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s 
[kubelet-check] Initial timeout of 40s passed.
     Unfortunately, an error has occurred:
       timed out waiting for the condition
     This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
     If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'
     Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker. Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

I can start containers in a Pod manually with containerd (using crictl) and they work fine. kubeadm does create /etc/kubernetes/manifests, but nothing in it ever gets started. Why is that? I can't see anything in the logs that would help me debug this. Here is the kubelet log

Essentially, I cannot get kubeadm to complete, and looking at the output of crictl, none of the containers/Pods that should exist are ever created: no kube-apiserver Pod, no etcd Pod, no containers at all. It is as if there is no connection between containerd and kubeadm, but I cannot see in the logs what I am doing wrong. A quick way to check that connection is sketched below.
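As a sanity check, here is a minimal diagnostic sketch, assuming containerd's default CRI socket path of /run/containerd/containerd.sock (adjust if yours differs): point crictl directly at containerd while kubeadm init is waiting and see whether any sandboxes or containers appear, then confirm which runtime endpoint was actually handed to the kubelet.

# list pod sandboxes and containers as containerd sees them
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a

# check which CRI socket the kubelet was started with
grep container-runtime /var/lib/kubelet/kubeadm-flags.env

If both lists stay empty while the kubelet service is active, the kubelet is most likely not talking to containerd's socket at all.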

- containerd-1.2.4-1.fc30.x86_64
- kubectl-1.14.1-0.x86_64
- kubeadm-1.14.1-0.x86_64
- kubelet-1.14.1-0.x86_64 (all from the Google repository)

Using an earlier version of Kubernetes (1.10.x) with cri-containerd (rather than containerd), I was able to get kubeadm to create the cluster.

Answer 1

It turned out the problem was here:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
FailSwapOn: false
...
EnableControllerAttachDetach: true
StaticPodPath: "/etc/kubernetes/manifests/"

It should be staticPodPath, not StaticPodPath: KubeletConfiguration keys are camelCase, so the capitalized keys above are ignored and the kubelet never reads /etc/kubernetes/manifests.
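For completeness, the same fragment with the camelCase keys that the kubelet.config.k8s.io/v1beta1 API expects (failSwapOn and enableControllerAttachDetach need the same fix; the elided fields stay as they were):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
...
enableControllerAttachDetach: true
staticPodPath: "/etc/kubernetes/manifests/"

With staticPodPath spelled correctly, the kubelet picks up the static Pod manifests that kubeadm writes to /etc/kubernetes/manifests and kubeadm init can complete.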
