Calico Node and Kube Proxy permanently crash on a new node

I have a Kubernetes 1.25.0 cluster with a few nodes (Ubuntu servers). I use Calico (https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml). I am now adding a new node. The node is identical to the others, the only exception being that it has a 2.5 Gbit network port instead of a 1 Gbit one. On this node, both calico-node and kube-proxy crash permanently. On all other nodes everything runs fine. Calico Node reports the following reason for the crash:

Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
W0724 00:54:46.157624 73 feature_gate.go:241] Setting GA feature gate ServiceInternalTrafficPolicy=true. It will be removed in a future release.
Back-off restarting failed container

Kube-proxy just crashes with "Back-off restarting failed container".
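(For reference, I collected the crash reason and the previous container's logs with commands along these lines; the pod name is just a placeholder:)

kubectl -n kube-system describe pod calico-node-xxxxx
kubectl -n kube-system logs calico-node-xxxxx -c calico-node --previous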

All the logs look fine, with no errors, not even warnings. Here is part of the log from the calico-node container:

2023-07-24 01:35:34.609 [INFO][115] felix/int_dataplane.go 1893: Received interface update msg=&intdataplane.ifaceStateUpdate{Name:"calico_tmp_B", State:"", Index:76}
2023-07-24 01:35:34.609 [INFO][115] felix/int_dataplane.go 1913: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_B", Addrs:set.Set[string](nil)}
2023-07-24 01:35:34.609 [INFO][115] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_B", Addrs:set.Set[string](nil)}
2023-07-24 01:35:34.609 [INFO][115] felix/int_dataplane.go 1893: Received interface update msg=&intdataplane.ifaceStateUpdate{Name:"calico_tmp_A", State:"", Index:77}
2023-07-24 01:35:34.609 [INFO][115] felix/int_dataplane.go 1913: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_A", Addrs:set.Set[string](nil)}
2023-07-24 01:35:34.609 [INFO][115] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calico_tmp_A", Addrs:set.Set[string](nil)}
2023-07-24 01:35:34.609 [INFO][115] felix/int_dataplane.go 1803: Dataplane updates throttled
2023-07-24 01:35:35.603 [INFO][115] felix/int_dataplane.go 1770: Dataplane updates no longer throttled
bird: device1: Initializing
bird: direct1: Initializing
bird: device1: Starting
bird: device1: Initializing
bird: direct1: Initializing
bird: Mesh_192_168_178_58: Initializing
bird: Mesh_192_168_178_25: Initializing
bird: Mesh_192_168_178_70: Initializing
bird: Mesh_192_168_178_38: Initializing
bird: Mesh_192_168_178_72: Initializing
bird: device1: Starting
bird: device1: Connected to table master
bird: device1: Connected to table master
bird: device1: State changed to feed
bird: device1: State changed to feed
bird: direct1: Starting
bird: direct1: Connected to table master
bird: direct1: State changed to feed
bird: direct1: Starting
bird: Graceful restart started
bird: direct1: Connected to table master
bird: Graceful restart done
bird: direct1: State changed to feed
bird: Started
bird: Mesh_192_168_178_58: Starting
bird: Mesh_192_168_178_58: State changed to start
bird: device1: State changed to up
bird: Mesh_192_168_178_25: Starting
bird: Mesh_192_168_178_25: State changed to start
bird: Mesh_192_168_178_70: Starting
bird: Mesh_192_168_178_70: State changed to start
bird: Mesh_192_168_178_38: Starting
bird: direct1: State changed to up
bird: Mesh_192_168_178_38: State changed to start
bird: Mesh_192_168_178_72: Starting
bird: Mesh_192_168_178_72: State changed to start
bird: Graceful restart started
bird: Started
bird: device1: State changed to up
bird: direct1: State changed to up
bird: Mesh_192_168_178_58: Connected to table master
bird: Mesh_192_168_178_58: State changed to wait
bird: Mesh_192_168_178_25: Connected to table master
bird: Mesh_192_168_178_25: State changed to wait
bird: Mesh_192_168_178_72: Connected to table master
bird: Mesh_192_168_178_72: State changed to wait
bird: Mesh_192_168_178_70: Connected to table master
bird: Mesh_192_168_178_70: State changed to wait
bird: Mesh_192_168_178_38: Connected to table master
bird: Mesh_192_168_178_38: State changed to wait
bird: Graceful restart done
bird: Mesh_192_168_178_58: State changed to feed
bird: Mesh_192_168_178_25: State changed to feed
bird: Mesh_192_168_178_70: State changed to feed
bird: Mesh_192_168_178_38: State changed to feed
bird: Mesh_192_168_178_72: State changed to feed
bird: Mesh_192_168_178_58: State changed to up
bird: Mesh_192_168_178_25: State changed to up
bird: Mesh_192_168_178_70: State changed to up
bird: Mesh_192_168_178_38: State changed to up
bird: Mesh_192_168_178_72: State changed to up
2023-07-24 01:35:41.982 [INFO][115] felix/health.go 336: Overall health status changed: live=true ready=true
+---------------------------+---------+----------------+-----------------+--------+
|         COMPONENT         | TIMEOUT |    LIVENESS    |    READINESS    | DETAIL |
+---------------------------+---------+----------------+-----------------+--------+
| CalculationGraph          | 30s     | reporting live | reporting ready |        |
| FelixStartup              | -       | reporting live | reporting ready |        |
| InternalDataplaneMainLoop | 1m30s   | reporting live | reporting ready |        |
+---------------------------+---------+----------------+-----------------+--------+
2023-07-24 01:36:27.256 [INFO][115] felix/int_dataplane.go 1836: Received *proto.HostMetadataV4V6Update update from calculation graph msg=hostname:"storage-controller" ipv4_addr:"192.168.178.72/24" labels:<key:"beta.kubernetes.io/arch" value:"amd64" > labels:<key:"beta.kubernetes.io/os" value:"linux" > labels:<key:"kubernetes.io/arch" value:"amd64" > labels:<key:"kubernetes.io/hostname" value:"storage-controller" > labels:<key:"kubernetes.io/os" value:"linux" > labels:<key:"specialServerType" value:"storage" > 
2023-07-24 01:36:29.551 [INFO][115] felix/int_dataplane.go 1836: Received *proto.HostMetadataV4V6Update update from calculation graph msg=hostname:"node1" ipv4_addr:"192.168.178.25/24" labels:<key:"beta.kubernetes.io/arch" value:"amd64" > labels:<key:"beta.kubernetes.io/os" value:"linux" > labels:<key:"kubernetes.io/arch" value:"amd64" > labels:<key:"kubernetes.io/hostname" value:"node1" > labels:<key:"kubernetes.io/os" value:"linux" > labels:<key:"node-role.kubernetes.io/control-plane" value:"" > labels:<key:"node.kubernetes.io/exclude-from-external-load-balancers" value:"" > 
2023-07-24 01:36:34.389 [INFO][117] monitor-addresses/autodetection_methods.go 103: Using autodetected IPv4 address on interface enp11s0: 192.168.178.88/24
2023-07-24 01:36:37.850 [INFO][115] felix/summary.go 100: Summarising 20 dataplane reconciliation loops over 1m3.5s: avg=13ms longest=180ms (resync-filter-v4,resync-ipsets-v4,resync-mangle-v4,resync-nat-v4,resync-raw-v4,resync-routes-v4,resync-routes-v4,resync-rules-v4,update-filter-v4,update-ipsets-4,update-mangle-v4,update-nat-v4,update-raw-v4)

I don't understand this at all. I have already rebuilt the entire node, updated calico-node, and also tried other Kubernetes versions (1.25.11). No firewall is installed. Can anyone help me? Thanks

P.S.: I have already tried all the autodetection methods. Currently I am using IP_AUTODETECTION_METHOD=can-reach=8.8.8.8

Output of ifconfig on the node:

 ifconfig -a
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:50:db:52:31  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp11s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.178.88  netmask 255.255.255.0  broadcast 192.168.178.255
        inet6 fe80::67c:16ff:fec8:53e6  prefixlen 64  scopeid 0x20<link>
        inet6 2a02:908:523:bd80:67c:16ff:fec8:53e6  prefixlen 64  scopeid 0x0<global>
        ether 04:7c:16:c8:53:e6  txqueuelen 1000  (Ethernet)
        RX packets 8748  bytes 6634723 (6.6 MB)
        RX errors 0  dropped 242  overruns 0  frame 0
        TX packets 6342  bytes 944277 (944.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 390  bytes 47600 (47.6 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 390  bytes 47600 (47.6 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480
        inet 192.168.53.192  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 39  bytes 13183 (13.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlp12s0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 60:e9:aa:5e:01:95  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Answer 1

Error: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl

The error above usually occurs when the host has multiple network interfaces and the Calico plugin cannot detect the correct one, due to a misconfigured interface or IP address. The common fix is to assign a static IP address or to configure interface autodetection so that Calico binds to the right interface.
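For example, autodetection can be pinned to a specific interface through the IP_AUTODETECTION_METHOD environment variable on the calico-node DaemonSet. A minimal sketch, assuming the manifest-based install in the kube-system namespace and using the interface name enp11s0 from the ifconfig output in the question:

kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=interface=enp11s0

The same variable can also be set in the env list of the calico-node container in calico.yaml before applying the manifest.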

Here are some additional references:

  1. https://github.com/projectcalico/calico/issues/2834#issuecomment-528400727

  2. https://github.com/projectcalico/calico/issues/2042#issuecomment-419655823

Answer 2

I was able to find and fix the problem myself. I had always installed all nodes following this guide:

https://www.edureka.co/blog/install-kubernetes-on-ubuntu

It appears to be outdated and does not set some important kernel parameters. After using this guide instead, Calico now starts correctly:

https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/

Specifically, if I remember correctly, it was this command:

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF 
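
Note that the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, so the linked guide also loads the required modules and then applies the settings without a reboot, roughly like this:

# Load the modules now and persist them across reboots
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

# Apply all sysctl settings from /etc/sysctl.d
sudo sysctl --system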
