Kubernetes init timeout

Here is my setup. I am trying to initialize a node as a master, i.e. a control-plane node, and I am using an HAProxy node (an external machine that is not part of this cluster) to route traffic between the master and the worker nodes. That machine sits on the same subnet and is pingable from the master node. I run the following command to initialize:

sudo kubeadm init \
    --cri-socket unix:///var/run/cri-dockerd.sock \
    --pod-network-cidr 172.16.0.0/16 \
    --apiserver-advertise-address 10.199.70.9
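
(As an aside: the official kubeadm HA guide routes traffic through a load balancer by passing --control-plane-endpoint, while --apiserver-advertise-address is expected to be an address on the node itself. A sketch of that style of invocation, reusing the addresses from this setup, is shown below for comparison only; it is not necessarily the fix.)

sudo kubeadm init \
    --cri-socket unix:///var/run/cri-dockerd.sock \
    --pod-network-cidr 172.16.0.0/16 \
    --control-plane-endpoint 10.199.70.9:6443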

So here I am using cri-dockerd.sock for the init process, but when I run it I get the error below.
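
(For completeness, whether the cri-dockerd socket is actually live can be checked with something like the following; the unit names assume cri-dockerd's stock systemd packaging and may differ by install method.)

# confirm the cri-dockerd service and socket units are up
systemctl status cri-docker.service cri-docker.socket
# confirm the socket file that --cri-socket points at exists
ls -l /var/run/cri-dockerd.sock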

[init] Using Kubernetes version: v1.27.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0728 14:43:56.248834   72032 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local l1kub01.corvega.int] and IPs [10.96.0.1 10.199.70.9]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [l1kub01.corvega.int localhost] and IPs [10.199.70.9 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [l1kub01.corvega.int localhost] and IPs [10.199.70.9 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
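
Following the hints in that output, the kubelet state and logs can be pulled like this; crictl pointed at the same CRI socket will also show whether any control-plane containers were started at all:

systemctl status kubelet
journalctl -xeu kubelet
# list all containers through the cri-dockerd socket
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a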

Now, looking at the logs, I can see the actual problem: this master node cannot reach the proxy node, hence the error. haproxy.service is running on the proxy node, and my haproxy.cfg is as follows.

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

frontend kubernetes
        bind 10.199.70.9:6443
        option tcplog
        mode tcp
        default_backend kubernetes-control-plane

backend kubernetes-control-plane
        mode tcp
        balance roundrobin
        option tcp-check
        server L1KUB01 10.199.70.24:6443 fall 3 rise 2
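
(For reference, the proxy side can be sanity-checked on the HAProxy node with something like the following; haproxy's -c flag only validates the config file, it does not reload the service, and the last probe assumes netcat is installed.)

# validate the config syntax
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# confirm the frontend is actually bound on port 6443
sudo ss -tlnp | grep 6443
# probe the backend (the master's apiserver port) from the proxy node
nc -zv 10.199.70.24 6443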

In the haproxy.cfg above, 10.199.70.24 is the master node's IP and 10.199.70.9 is the HAProxy node's IP. Can anyone point out what is going wrong? (A basic reachability probe from the master to the proxy is sketched below.)
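
(ICMP ping working does not guarantee the TCP path is open, so from the master node something like this can confirm port 6443 through the proxy; curl -k skips TLS verification and will only succeed once an apiserver is actually answering behind the proxy.)

# run on the master node
nc -zv 10.199.70.9 6443
curl -k https://10.199.70.9:6443/healthz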

I am installing the latest Kubernetes release, 1.27, on Ubuntu 22.04.
