kube-apiserver exits when joining a control plane to an HA cluster

Environment:
  1. Ubuntu 16.04.5
  2. All operations were performed as the root user
  3. Software versions are as follows:
ii  kubeadm                             1.20.4-00                                  amd64        Kubernetes Cluster Bootstrapping Tool
ii  kubectl                             1.20.4-00                                  amd64        Kubernetes Command Line Tool
ii  kubelet                             1.20.4-00                                  amd64        Kubernetes Node Agent
ii  kubernetes-cni                      0.8.7-00                                   amd64        Kubernetes CNI
ii  containerd.io                       1.2.6-3                                    amd64        An open and reliable container runtime

I am following the guide to create a highly available cluster with kubeadm:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

Let the control-plane nodes be: A (devops1ar01n01, 172.16.80.3), B (devops1ar01n02, 172.16.80.4), C (devops1ar01n03, 172.16.80.5).

I set up the load balancer with kube-vip following the link below. On both A and B I created the files /etc/kube-vip/config.yaml and /etc/kubernetes/manifests/kube-vip.yaml:

https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#kube-vip
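
On each node the name kube-vip needs to resolve to the virtual IP; a plain /etc/hosts entry along the following lines is enough (the address below is only a placeholder, not my actual VIP):

# Make the control-plane endpoint name "kube-vip" resolve to the virtual IP on this node
# (172.16.80.100 is a placeholder address used for illustration)
echo "172.16.80.100 kube-vip" >> /etc/hosts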

I ran the following command to initialize the first control-plane node A (kube-vip listens on port 16443):

kubeadm init --control-plane-endpoint kube-vip:16443 --upload-certs

The output was as follows:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join kube-vip:16443 --token pa0bw2.gn6bqnyjlmh0o7xn \
    --discovery-token-ca-cert-hash sha256:fd7bb5afe0307b8694c218f07c1f3adbf270254d1f37bcec75ed292b7223cc8b \
    --control-plane --certificate-key 44995042d21c87ea5ed4f62443fe665cbfd7c71397485ca9f06d1483548c1883

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kube-vip:16443 --token pa0bw2.gn6bqnyjlmh0o7xn \
    --discovery-token-ca-cert-hash sha256:fd7bb5afe0307b8694c218f07c1f3adbf270254d1f37bcec75ed292b7223cc8b 

Then I ran the commands from the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf

Then I installed the CNI plugin Weave Net on node A by running:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=1.20.4"

Then I checked the pods:

root@devops1ar01n01:~# kubectl get pod -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-s2bh5                  0/1     Running   0          36m
coredns-74ff55c5b-stm2l                  1/1     Running   0          36m
etcd-devops1ar01n01                      1/1     Running   0          36m
kube-apiserver-devops1ar01n01            1/1     Running   0          36m
kube-controller-manager-devops1ar01n01   1/1     Running   0          36m
kube-proxy-bnzpd                         1/1     Running   0          36m
kube-scheduler-devops1ar01n01            1/1     Running   0          36m
kube-vip-devops1ar01n01                  1/1     Running   0          36m
weave-net-8fmf9                          2/2     Running   0          14s

Up to that point everything had gone smoothly, but a problem appeared when node B joined the cluster. I ran the following command on node B (--v=8 for verbose output; --ignore-preflight-errors="DirAvailable--etc-kubernetes-manifests" because /etc/kubernetes/manifests/kube-vip.yaml already exists there):

kubeadm join kube-vip:16443 --token pa0bw2.gn6bqnyjlmh0o7xn \
   --discovery-token-ca-cert-hash sha256:fd7bb5afe0307b8694c218f07c1f3adbf270254d1f37bcec75ed292b7223cc8b \
   --control-plane \
   --certificate-key 44995042d21c87ea5ed4f62443fe665cbfd7c71397485ca9f06d1483548c1883 \
   --ignore-preflight-errors="DirAvailable--etc-kubernetes-manifests" \
   --v=8

Then the following messages appeared (172.16.80.4 is node B's IP):

[kubelet-check] Initial timeout of 40s passed.
I0226 11:12:44.981744   11128 etcd.go:468] Failed to get etcd status for https://172.16.80.4:2379: failed to dial endpoint https://172.16.80.4:2379 with maintenance client: context deadline exceeded
I0226 11:12:52.890038   11128 etcd.go:468] Failed to get etcd status for https://172.16.80.4:2379: failed to dial endpoint https://172.16.80.4:2379 with maintenance client: context deadline exceeded
I0226 11:13:03.915500   11128 etcd.go:468] Failed to get etcd status for https://172.16.80.4:2379: failed to dial endpoint https://172.16.80.4:2379 with maintenance client: context deadline exceeded
I0226 11:13:19.337921   11128 etcd.go:468] Failed to get etcd status for https://172.16.80.4:2379: failed to dial endpoint https://172.16.80.4:2379 with maintenance client: context deadline exceeded

I found that the etcd container had not been created on node B yet:

root@devops1ar01n02:~# docker ps | grep -v pause
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
b0188090c251        ae5eb22e4a9d           "kube-apiserver --ad…"   20 seconds ago      Up 19 seconds                           k8s_kube-apiserver_kube-apiserver-devops1ar01n02_kube-system_50f7004f736896db78d143e1d44bfbb5_4
c8c93ad432e9        7f92d556d4ff           "/usr/bin/launch.sh"     3 minutes ago       Up 3 minutes                            k8s_weave-npc_weave-net-lthlv_kube-system_eac41670-a119-4085-99e7-7cf08185deb7_0
e9946edd52ba        5f8cb769bd73           "kube-scheduler --au…"   3 minutes ago       Up 3 minutes                            k8s_kube-scheduler_kube-scheduler-devops1ar01n02_kube-system_90280dfce8bf44f46a3e41b6c4a9f551_0
4ffe61f78cf5        a00c858e350e           "/kube-vip start -c …"   3 minutes ago       Up 3 minutes                            k8s_kube-vip_kube-vip-devops1ar01n02_kube-system_dd4d116d758ec63efaf78fc4112d63e6_0
7019afbd1497        0a41a1414c53           "kube-controller-man…"   3 minutes ago       Up 3 minutes                            k8s_kube-controller-manager_kube-controller-manager-devops1ar01n02_kube-system_9375c16649f1cd963bdbc6e4125314fc_0
32035400ad9d        c29e6c583067           "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-v5nwr_kube-system_d0c1ce98-b066-4349-89e0-6113b8fa1708_0
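
At this point it is worth checking, from node A, whether node B's etcd member had already been registered even though its container never started. A debugging sketch, assuming the default kubeadm certificate paths and the etcd image shipped with Kubernetes 1.20 (3.4.13-0):

# On node A: list the etcd members to see whether 172.16.80.4 was already added
docker run --rm --net host \
  -v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd \
  k8s.gcr.io/etcd:3.4.13-0 etcdctl \
  --endpoints https://172.16.80.3:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  member list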

When I went back to node A to check the pods, the kubectl command hung and then timed out:

root@devops1ar01n01:~# kubectl get pod -n kube-system
Unable to connect to the server: net/http: TLS handshake timeout

I checked the containers and found that kube-apiserver kept restarting:

root@devops1ar01n01:~# docker ps -a | grep -v pause | grep kube-apiserver
d5f85d72d2dc        ae5eb22e4a9d           "kube-apiserver --ad…"   8 seconds ago        Up 8 seconds                                        k8s_kube-apiserver_kube-apiserver-devops1ar01n01_kube-system_860fed4d3a137b129887eb23f07be1b6_6
a3bd40ba5552        ae5eb22e4a9d           "kube-apiserver --ad…"   About a minute ago   Exited (1) About a minute ago                       k8s_kube-apiserver_kube-apiserver-devops1ar01n01_kube-system_860fed4d3a137b129887eb23f07be1b6_5

I ran docker logs <container_id> on node A to get the logs of the exited kube-apiserver container; the output was as follows:

Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0226 06:22:58.186450       1 server.go:632] external host was not specified, using 172.16.80.3
I0226 06:22:58.187781       1 server.go:182] Version: v1.20.4
I0226 06:22:59.289621       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0226 06:22:59.294839       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0226 06:22:59.294939       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0226 06:22:59.299670       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0226 06:22:59.299772       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0226 06:22:59.318732       1 client.go:360] parsed scheme: "endpoint"
I0226 06:22:59.318985       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0226 06:23:00.290273       1 client.go:360] parsed scheme: "endpoint"
I0226 06:23:00.290377       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
Error: context deadline exceeded
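
The log suggests the apiserver can no longer reach the local etcd at 127.0.0.1:2379. A quick way to ask the local etcd member directly whether it is healthy (a sketch, assuming the default kubeadm certificate paths mounted into the etcd pod):

# On node A: run etcdctl inside the running etcd container and check endpoint health
docker exec $(docker ps -qf name=k8s_etcd_etcd) etcdctl \
  --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  endpoint health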

The output of systemctl status kubelet was as follows:

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2021-02-26 10:31:19 CST; 3h 58min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 19602 (kubelet)
    Tasks: 17
   Memory: 67.0M
      CPU: 20min 47.367s
   CGroup: /system.slice/kubelet.service
           └─19602 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --p

Feb 26 14:29:34 devops1ar01n01 kubelet[19602]: Trace[336098419]: [10.001217223s] [10.001217223s] END
Feb 26 14:29:34 devops1ar01n01 kubelet[19602]: E0226 14:29:34.695789   19602 reflector.go:138] object-"kube-system"/"kube-proxy-token-x5lsv": Failed to watch *v1.Secret: failed to list *v1.Secret: Get "
Feb 26 14:29:36 devops1ar01n01 kubelet[19602]: E0226 14:29:36.056753   19602 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://kube-vip:16443/apis/coordination.k8s.
Feb 26 14:29:40 devops1ar01n01 kubelet[19602]: E0226 14:29:40.068403   19602 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"
Feb 26 14:29:40 devops1ar01n01 kubelet[19602]: E0226 14:29:40.068717   19602 event.go:218] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k
Feb 26 14:29:40 devops1ar01n01 kubelet[19602]: E0226 14:29:40.069126   19602 kubelet_node_status.go:470] Error updating node status, will retry: error getting node "devops1ar01n01": Get "https://kube-vi
Feb 26 14:29:44 devops1ar01n01 kubelet[19602]: I0226 14:29:44.012843   19602 scope.go:111] [topologymanager] RemoveContainer - Container ID: 3ab3a85e785ae39f705ca30aad59a52ec17d12e9f31cbf920695d7af9cf93
Feb 26 14:29:44 devops1ar01n01 kubelet[19602]: E0226 14:29:44.031056   19602 pod_workers.go:191] Error syncing pod 860fed4d3a137b129887eb23f07be1b6 ("kube-apiserver-devops1ar01n01_kube-system(860fed4d3a
Feb 26 14:29:50 devops1ar01n01 kubelet[19602]: E0226 14:29:50.070777   19602 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"
Feb 26 14:29:50 devops1ar01n01 kubelet[19602]: E0226 14:29:50.072142   19602 kubelet_node_status.go:470] Error updating node status, will retry: error getting node "devops1ar01n01": Get "https://kube-vi

I have tried kubeadm reset on both nodes and repeated the steps, but this always happens. How can I debug this further?

Answer 1

I finally solved the problem myself. It was most likely caused by a wrong health-check URL in the load balancer.

I followed this guide https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#kube-vip to create the load balancer for my HA cluster, and I chose the kube-vip approach, which seemed to be the most convenient.

If you use kube-vip, the health-check URL is presumably /healthz, as in the haproxy configuration file described for the second approach (https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#keepalived-and-haproxy). However, I found that the health-check URL used in /etc/kubernetes/manifests/kube-apiserver.yaml is /livez.
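
Both endpoints are served by kube-apiserver itself and can be checked directly on a control-plane node. A quick sketch, assuming the default secure port 6443 and that unauthenticated access to the health paths has not been disabled:

# Check which health endpoints the apiserver answers; -k skips certificate verification
curl -k https://127.0.0.1:6443/healthz
curl -k https://127.0.0.1:6443/livez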

My guess is that the wrong health-check URL was configured in kube-vip, causing the health check to fail, which in turn caused kube-apiserver to keep restarting.

To verify this, and to be able to edit the health-check URL of the load balancer, I switched to the second approach for creating the load balancer (https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#keepalived-and-haproxy) and changed /healthz to /livez in /etc/haproxy/haproxy.cfg.
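
The change itself is a one-liner in the apiserver backend section of /etc/haproxy/haproxy.cfg; the sketch below assumes the "option httpchk GET /healthz" line from the example configuration in the linked guide:

# Switch the haproxy health check for the apiserver backend from /healthz to /livez,
# then restart haproxy to pick up the change
sed -i 's|option httpchk GET /healthz|option httpchk GET /livez|' /etc/haproxy/haproxy.cfg
systemctl restart haproxy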

Then I ran kubeadm init and kubeadm join again following the guide, and everything worked correctly.
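
After the join succeeds, a quick sanity check that all control-plane members came up:

# Confirm that all control-plane nodes joined and the etcd/apiserver pods are running
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide | grep -E 'etcd|kube-apiserver'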
