Kubeadm - unable to join node - request canceled while waiting for connection

I am trying to provision a k8s cluster on 3 Debian 10 VMs with kubeadm.

All VMs have 2 network interfaces: eth0 as the public interface with a static IP, and eth1 as the local interface with a static IP in 192.168.0.0/16:

  • Master: 192.168.1.1
  • Node 1: 192.168.2.1
  • Node 2: 192.168.2.2

All nodes can reach each other over these links.
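A minimal sketch of how that can be verified, assuming the addressing above:

# From the master, confirm the private links to both workers over eth1
ping -c 3 192.168.2.1
ping -c 3 192.168.2.2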

ip a from the master host:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:52:70:53:d5:12 brd ff:ff:ff:ff:ff:ff
    inet XXX.XXX.244.240/24 brd XXX.XXX.244.255 scope global dynamic eth0
       valid_lft 257951sec preferred_lft 257951sec
    inet6 2a01:367:c1f2::112/48 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::252:70ff:fe53:d512/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:95:af:b0:8c:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/16 brd 192.168.255.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::295:afff:feb0:8cc4/64 scope link 
       valid_lft forever preferred_lft forever

The master node was initialized with:

kubeadm init --upload-certs --apiserver-advertise-address=192.168.1.1 --apiserver-cert-extra-sans=192.168.1.1,XXX.XXX.244.240 --pod-network-cidr=10.40.0.0/16 -v=5

Output

But when I try to join a worker node, the kube-api is not reachable:

kubeadm join 192.168.1.1:6443 --token 7bl0in.s6o5kyqg27utklcl --discovery-token-ca-cert-hash sha256:7829b6c7580c0c0f66aa378c9f7e12433eb2d3b67858dd3900f7174ec99cda0e -v=5

Output
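Before digging into kubeadm itself, the raw reachability of the apiserver can be probed from the worker. A minimal sketch (assuming nc and curl are installed on the node):

# Plain TCP: does anything answer on 6443 at all?
nc -zv -w 5 192.168.1.1 6443

# TLS endpoint: -k skips certificate verification, since the cluster
# CA is not trusted on the worker yet; any HTTP response (even 403)
# proves the network path to the apiserver works
curl -k https://192.168.1.1:6443/healthz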

Netstat from the master:

# netstat -tupn | grep :6443
tcp        0      0 192.168.1.1:43332       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:41774       192.168.1.1:6443        ESTABLISHED 5362/kube-proxy     
tcp        0      0 192.168.1.1:41744       192.168.1.1:6443        ESTABLISHED 5236/kubelet        
tcp        0      0 192.168.1.1:43376       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43398       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:41652       192.168.1.1:6443        ESTABLISHED 4914/kube-scheduler 
tcp        0      0 192.168.1.1:43448       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43328       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43452       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43386       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43350       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:41758       192.168.1.1:6443        ESTABLISHED 5182/kube-controlle 
tcp        0      0 192.168.1.1:43306       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43354       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43296       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:43408       192.168.1.1:6443        TIME_WAIT   -                   
tcp        0      0 192.168.1.1:41730       192.168.1.1:6443        ESTABLISHED 5182/kube-controlle 
tcp        0      0 192.168.1.1:41738       192.168.1.1:6443        ESTABLISHED 4914/kube-scheduler 
tcp        0      0 192.168.1.1:43444       192.168.1.1:6443        TIME_WAIT   -                   
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41730       ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41744       ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41738       ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41652       ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 ::1:6443                ::1:42862               ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41758       ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 ::1:42862               ::1:6443                ESTABLISHED 5094/kube-apiserver 
tcp6       0      0 192.168.1.1:6443        192.168.1.1:41774       ESTABLISHED 5094/kube-apiserver 
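Note that netstat -tupn only shows active connections. To confirm the apiserver is actually listening, something like this helps (on Linux, the tcp6 wildcard socket still accepts IPv4 connections as long as bindv6only is off):

# -l: listening sockets only, -t: TCP, -n: numeric, -p: owning process
ss -tlnp | grep 6443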

Pods from the master:

# kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system -o wide
NAME                                              READY   STATUS    RESTARTS   AGE   IP                   NODE                      NOMINATED NODE   READINESS GATES
coredns-558bd4d5db-8qhhl                          0/1     Pending   0          12m   <none>               <none>                    <none>           <none>
coredns-558bd4d5db-9hj7z                          0/1     Pending   0          12m   <none>               <none>                    <none>           <none>
etcd-cloud604486.fastpipe.io                      1/1     Running   0          12m   2a01:367:c1f2::112   cloud604486.fastpipe.io   <none>           <none>
kube-apiserver-cloud604486.fastpipe.io            1/1     Running   0          12m   2a01:367:c1f2::112   cloud604486.fastpipe.io   <none>           <none>
kube-controller-manager-cloud604486.fastpipe.io   1/1     Running   0          12m   2a01:367:c1f2::112   cloud604486.fastpipe.io   <none>           <none>
kube-proxy-dzd42                                  1/1     Running   0          12m   2a01:367:c1f2::112   cloud604486.fastpipe.io   <none>           <none>
kube-scheduler-cloud604486.fastpipe.io            1/1     Running   0          12m   2a01:367:c1f2::112   cloud604486.fastpipe.io   <none>           <none>
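One thing worth cross-checking in this output: the control-plane pods report the node's global IPv6 address, which suggests the kubelet registered with an address from eth0 rather than eth1. A sketch of how to check, and if needed pin, the node IP (using the kubelet's --node-ip flag and the Debian kubeadm package's /etc/default/kubelet env file):

# INTERNAL-IP should be 192.168.1.1, not an eth0 address
kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide

# If it is not, pin the kubelet to the private address
echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.1.1' >> /etc/default/kubelet
systemctl restart kubelet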

All VMs have the following kernel parameters set:

  • { name: 'vm.swappiness', value: '0' }
  • { name: 'net.bridge.bridge-nf-call-iptables', value: '1' }
  • { name: 'net.bridge.bridge-nf-call-ip6tables', value: '1' }
  • { name: 'net.ipv4.ip_forward', value: '1' }
  • { name: 'net.ipv6.conf.all.forwarding', value: '1' }

The br_netfilter kernel module is loaded, and iptables is set to legacy mode (via update-alternatives).
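For reference, a sketch of how this setup is typically persisted on Debian 10 (conventional file paths assumed, not taken from the question):

# Load br_netfilter now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf

# Persist the sysctls listed above
cat <<'EOF' > /etc/sysctl.d/99-kubernetes.conf
vm.swappiness                       = 0
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
net.ipv6.conf.all.forwarding        = 1
EOF
sysctl --system

# Switch iptables to the legacy backend via alternatives
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy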

Am I missing something?

Answer 1

After a week of tinkering, the problem turned out to be a misconfigured network on the hosting provider's side.

For anyone hitting the same issue: check your network's MTU. In my case it defaulted to 1500 instead of the recommended 1450.
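A minimal way to check and test this (values per the answer above; the lowered MTU set this way does not persist across reboots):

# Inspect the current MTU on the private interface
ip link show eth1

# Lower it temporarily for testing
ip link set dev eth1 mtu 1450

# Verify a full-size packet passes with the DF bit set:
# 1422 bytes of ICMP payload + 28 bytes of headers = 1450
ping -M do -s 1422 -c 3 192.168.2.1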
