I cannot initialize an HA Kubernetes cluster. Can anyone help me?

I am trying to set up an HA Kubernetes cluster using keepalived + haproxy.

Here is my configuration:

root@k8sHAproxy1:~# cat /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # check the haproxy process
    interval 2                     # every 2 seconds
    weight 2                       # add 2 points if OK
}
vrrp_instance VI_1 {
    interface ens160               # interface to monitor
    state MASTER                   # MASTER on haproxy1, BACKUP on haproxy2
    virtual_router_id 51
    priority 101                   # 101 on haproxy1, 100 on haproxy2
    virtual_ipaddress {
        10.58.118.212/24           # virtual IP address
    }
    track_script {
        chk_haproxy
    }
}
root@k8sHAproxy1:~#
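
For reference, a quick way to confirm that keepalived is healthy and that the VIP is actually bound on the MASTER node (assuming the ens160 interface from the config above):

systemctl status keepalived
ip addr show ens160 | grep 10.58.118.212    # the VIP should appear here on the current MASTER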
root@k8sHAproxy1:~# egrep -v "^#|^$" /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s
frontend apiserver
    bind 10.58.118.212:6443
    mode tcp
    option tcplog
    default_backend apiserver
backend apiserver
    #option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
    server k8smaster1 10.58.118.213:6443 check fall 3 rise 2
    server k8smaster2 10.58.118.214:6443 check fall 3 rise 2
    server k8smaster3 10.58.118.215:6443 check fall 3 rise 2
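
The haproxy configuration itself can be sanity-checked before relying on it, for example:

haproxy -c -f /etc/haproxy/haproxy.cfg    # parse and validate the config, then exit
systemctl status haproxy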

I have also set up the proxy configuration:

root@k8sMaster1:~# cat /usr/lib/systemd/system/containerd.service
...
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Environment="HTTP_PROXY=http://proxy.dev.tsbr.net:8080"
Environment="HTTPS_PROXY=http://proxy.dev.tsbr.net:8080"
Environment="NO_PROXY=10.58.118.211,10.58.118.210,10.58.118.213,10.58.118.214,10.58.118.215,10.58.118.0/24,127.0.0.1,localhost,10.96.0.0/12"

There is connectivity between the master nodes and the HAProxy host:

root@k8sMaster1:~# grep -w k8smaster /etc/hosts
10.58.118.212 k8smaster
root@k8sMaster1:~#
root@k8sMaster1:~# nc -v k8smaster 6443
Connection to k8smaster 6443 port [tcp/*] succeeded!
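
Beyond the raw TCP check, the API server health endpoint can also be probed through the VIP (this only succeeds once an apiserver is actually listening behind haproxy), for example:

curl -k https://k8smaster:6443/healthz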

However, this is the error I get:

root@k8sMaster1:~# kubeadm init --control-plane-endpoint "k8smaster:6443" --upload-certs
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8shaproxy1 k8smaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.58.118.213]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster1 localhost] and IPs [10.58.118.213 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster1 localhost] and IPs [10.58.118.213 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
            timed out waiting for the condition

    This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
            - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'

And now with the --v=10 option:

I0624 16:23:36.523343  235607 round_trippers.go:454] GET https://k8smaster:6443/healthz?timeout=10s  in 8 milliseconds
I0624 16:23:36.523369  235607 round_trippers.go:460] Response Headers:
I0624 16:23:37.015433  235607 round_trippers.go:435] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.21.2 (linux/amd64) kubernetes/092fbfb" 'https://k8smaster:6443/healthz?timeout=10s'
I0624 16:23:37.024138  235607 round_trippers.go:454] GET https://k8smaster:6443/healthz?timeout=10s  in 8 milliseconds
I0624 16:23:37.024330  235607 round_trippers.go:460] Response Headers:
I0624 16:23:37.514991  235607 round_trippers.go:435] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.21.2 (linux/amd64) kubernetes/092fbfb" 'https://k8smaster:6443/healthz?timeout=10s'
I0624 16:23:37.523638  235607 round_trippers.go:454] GET https://k8smaster:6443/healthz?timeout=10s  in 8 milliseconds
I0624 16:23:37.523664  235607 round_trippers.go:460] Response Headers:
I0624 16:23:38.015505  235607 round_trippers.go:435] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.21.2 (linux/amd64) kubernetes/092fbfb" 'https://k8smaster:6443/healthz?timeout=10s'
I0624 16:23:38.024253  235607 round_trippers.go:454] GET https://k8smaster:6443/healthz?timeout=10s  in 8 milliseconds
I0624 16:23:38.024290  235607 round_trippers.go:460] Response Headers:
I0624 16:23:38.514795  235607 round_trippers.go:435] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.21.2 (linux/amd64) kubernetes/092fbfb" 'https://k8smaster:6443/healthz?timeout=10s'
I0624 16:23:38.523677  235607 round_trippers.go:454] GET https://k8smaster:6443/healthz?timeout=10s  in 8 milliseconds
I0624 16:23:38.523702  235607 round_trippers.go:460] Response Headers:
I0624 16:23:39.014634  235607 round_trippers.go:435] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.21.2 (linux/amd64) kubernetes/092fbfb" 'https://k8smaster:6443/healthz?timeout=10s'
I0624 16:23:39.023141  235607 round_trippers.go:454] GET https://k8smaster:6443/healthz?timeout=10s  in 8 milliseconds
I0624 16:23:39.023172  235607 round_trippers.go:460] Response Headers:
I0624 16:23:39.514723  235607 round_trippers.go:435] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.21.2 (linux/amd64) kubernetes/092fbfb" 'https://k8smaster:6443/healthz?timeout=10s'
I0624 16:23:39.523828  235607 round_trippers.go:454] GET https://k8smaster:6443/healthz?timeout=10s  in 9 milliseconds
I0624 16:23:39.523873  235607 round_trippers.go:460] Response Headers:
I0624 16:23:40.015592  235607 round_trippers.go:435] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.21.2 (linux/amd64) kubernetes/092fbfb" 'https://k8smaster:6443/healthz?timeout=10s'
I0624 16:23:40.024868  235607 round_trippers.go:454] GET https://k8smaster:6443/healthz?timeout=10s  in 9 milliseconds
I0624 16:23:40.024893  235607 round_trippers.go:460] Response Headers:
I0624 16:23:40.515581  235607 round_trippers.go:435] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.21.2 (linux/amd64) kubernetes/092fbfb" 'https://k8smaster:6443/healthz?timeout=10s'
I0624 16:23:40.524264  235607 round_trippers.go:454] GET https://k8smaster:6443/healthz?timeout=10s  in 8 milliseconds
I0624 16:23:40.524289  235607 round_trippers.go:460] Response Headers:


        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'

couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:225
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1371
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:225
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1371

Logs:

These are the logs I am getting:

/var/log/haproxy.log (the repeated cD termination state indicates a client-side timeout while the session was in the data phase):


Jun 24 16:22:49 k8sHAproxy1 haproxy[99951]: 10.58.118.213:58050 [24/Jun/2021:16:22:29.478] apiserver apiserver/k8smaster1 1/0/20073 4721 cD 2/2/1/1/0 0/0
Jun 24 16:23:09 k8sHAproxy1 haproxy[99951]: 10.58.118.213:58188 [24/Jun/2021:16:22:49.553] apiserver apiserver/k8smaster1 1/0/20013 1804 cD 2/2/1/1/0 0/0
Jun 24 16:23:29 k8sHAproxy1 haproxy[99951]: 10.58.118.213:58326 [24/Jun/2021:16:23:09.569] apiserver apiserver/k8smaster1 1/0/20016 1804 cD 2/2/1/1/0 0/0
Jun 24 16:23:49 k8sHAproxy1 haproxy[99951]: 10.58.118.213:58466 [24/Jun/2021:16:23:29.586] apiserver apiserver/k8smaster1 1/0/20022 1804 cD 2/2/1/1/0 0/0
Jun 24 16:24:09 k8sHAproxy1 haproxy[99951]: 10.58.118.213:58604 [24/Jun/2021:16:23:49.610] apiserver apiserver/k8smaster1 1/0/20016 1804 cD 2/2/1/1/0 0/0
Jun 24 16:24:29 k8sHAproxy1 haproxy[99951]: 10.58.118.213:58744 [24/Jun/2021:16:24:09.628] apiserver apiserver/k8smaster1 1/0/20015 1804 cD 2/2/1/1/0 0/0
Jun 24 16:24:49 k8sHAproxy1 haproxy[99951]: 10.58.118.213:58882 [24/Jun/2021:16:24:29.645] apiserver apiserver/k8smaster1 1/0/20014 1804 cD 2/2/1/1/0 0/0


root@k8sMaster1:~# journalctl -xeu kubelet | tail
Jun 24 16:27:51 k8sMaster1 kubelet[237054]: I0624 16:27:51.381836  237054 image_gc_manager.go:321] "Attempting to delete unused images"
Jun 24 16:27:51 k8sMaster1 kubelet[237054]: I0624 16:27:51.384563  237054 image_gc_manager.go:375] "Removing image to free bytes" imageID="sha256:a6ebd1c1ad9810239a2885494ae92e0230224bafcb39ef1433c6cb49a98b0dfe" size=52509835
Jun 24 16:27:51 k8sMaster1 kubelet[237054]: I0624 16:27:51.531304  237054 image_gc_manager.go:375] "Removing image to free bytes" imageID="sha256:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899" size=12945155
Jun 24 16:27:51 k8sMaster1 kubelet[237054]: I0624 16:27:51.631451  237054 eviction_manager.go:346] "Eviction manager: able to reduce resource pressure without evicting pods." resourceName="ephemeral-storage"
Jun 24 16:27:56 k8sMaster1 kubelet[237054]: E0624 16:27:56.318840  237054 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 24 16:28:01 k8sMaster1 kubelet[237054]: E0624 16:28:01.319755  237054 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 24 16:28:06 k8sMaster1 kubelet[237054]: E0624 16:28:06.320950  237054 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 24 16:28:11 k8sMaster1 kubelet[237054]: E0624 16:28:11.321569  237054 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 24 16:28:16 k8sMaster1 kubelet[237054]: E0624 16:28:16.322777  237054 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 24 16:28:21 k8sMaster1 kubelet[237054]: E0624 16:28:21.324206  237054 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

Observations

  1. The command kubeadm config images pull works.

  2. When I run a plain kubeadm init, the cluster is created without any problems (see the comparison below).

    ...
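
For clarity, the two invocations being compared are taken from the commands above; the failing one only differs in routing the control-plane endpoint through the haproxy VIP and uploading certs:

kubeadm init                                                             # works
kubeadm init --control-plane-endpoint "k8smaster:6443" --upload-certs    # times out as shown above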

Answer 1

After a few days I found the problem.

The NO_PROXY variable in the /usr/lib/systemd/system/containerd.service settings is meant to accept domains rather than IP ranges. kubectl was trying to reach the API server through the proxy.

The correct way: Environment="NO_PROXY=dev.lab.net"
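
For completeness, a sketch of how the corrected setting could look in the same unit file (dev.lab.net is the domain from above; substitute your own environment), followed by reloading systemd and restarting containerd:

[Service]
Environment="HTTP_PROXY=http://proxy.dev.tsbr.net:8080"
Environment="HTTPS_PROXY=http://proxy.dev.tsbr.net:8080"
Environment="NO_PROXY=dev.lab.net"

systemctl daemon-reload
systemctl restart containerd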
