Before moving anything to production, I'm trying to stand up a k8s cluster for learning and testing.
I've set up my k8s cluster on bare-metal Debian 11.
After installation I can run:
$ kubectl get nodes -A
NAME   STATUS   ROLES           AGE   VERSION
km1    Ready    control-plane   22m   v1.26.2
kw1    Ready    worker          21m   v1.26.2
That looks good to me. But when I run:
$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS                  RESTARTS        AGE
kube-system   calico-kube-controllers-57b57c56f-rp47v   1/1     Running                 0               12m
kube-system   calico-node-m4bsl                         0/1     Init:CrashLoopBackOff   6 (2m54s ago)   8m39s
kube-system   calico-node-tzcp7                         1/1     Running                 0               12m
kube-system   coredns-787d4945fb-cldh2                  1/1     Running                 0               12m
kube-system   coredns-787d4945fb-pcpx8                  1/1     Running                 0               12m
kube-system   etcd-km1                                  1/1     Running                 44              13m
kube-system   kube-apiserver-km1                        1/1     Running                 46              13m
kube-system   kube-controller-manager-km1               1/1     Running                 41              13m
kube-system   kube-proxy-c7m6b                          1/1     Running                 0               12m
kube-system   kube-proxy-sx4hj                          1/1     Running                 0               12m
kube-system   kube-scheduler-km1                        1/1     Running                 41              13m
I can see that calico-node-m4bsl is not working.

- Is this a problem?
- Did I do something wrong that caused this?

Here is some background, in case it helps:
I fetched calico.yaml with:
$ curl -fLO https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
The only change I made to that file was to uncomment and set the CALICO_IPV4POOL_CIDR variable:

4601             - name: CALICO_IPV4POOL_CIDR
4602               value: "10.2.0.0/16"
I initialized my cluster like this:
$ sudo kubeadm init --control-plane-endpoint=km1.lan:6443 --pod-network-cidr=10.2.0.0/16
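(Side note: the describe output further down reports CALICO_IPV4POOL_CIDR as 10.100.0.0/16 rather than 10.2.0.0/16, so it may be worth cross-checking that the CIDR kubeadm recorded and the pool Calico actually created agree. A sketch, assuming kubectl access; the IPPool query only works once Calico's CRDs are installed, and "default-ipv4-ippool" is the pool name the manifest creates by default:)

```shell
# Pod CIDR the controller-manager was started with (--cluster-cidr flag)
kubectl -n kube-system get pod -l component=kube-controller-manager \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep cluster-cidr

# CIDR of the IP pool Calico actually created
kubectl get ippools.crd.projectcalico.org default-ipv4-ippool \
  -o jsonpath='{.spec.cidr}'
```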
$ kubectl describe pods -n kube-system calico-node-m4bsl
Name: calico-node-m4bsl
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Service Account: calico-node
Node: kw1/192.168.56.60
Start Time: Thu, 09 Mar 2023 13:45:09 -0600
Labels: controller-revision-hash=9889897b6
k8s-app=calico-node
pod-template-generation=1
Annotations: <none>
Status: Pending
IP: 192.168.56.60
IPs:
IP: 192.168.56.60
Controlled By: DaemonSet/calico-node
Init Containers:
upgrade-ipam:
Container ID: containerd://49d885579623eb69e01288cbfbac8ee06e6a168819764fced9d4a83eba4443c7
Image: docker.io/calico/cni:v3.25.0
Image ID: docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977
Port: <none>
Host Port: <none>
Command:
/opt/cni/bin/calico-ipam
-upgrade
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 09 Mar 2023 13:45:10 -0600
Finished: Thu, 09 Mar 2023 13:45:10 -0600
Ready: True
Restart Count: 0
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
KUBERNETES_NODE_NAME: (v1:spec.nodeName)
CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false
Mounts:
/host/opt/cni/bin from cni-bin-dir (rw)
/var/lib/cni/networks from host-local-net-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knm4l (ro)
install-cni:
Container ID: containerd://91271557309b31affd3adc56c8d7ee57c560036d67f787b9e09645926a720b44
Image: docker.io/calico/cni:v3.25.0
Image ID: docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977
Port: <none>
Host Port: <none>
Command:
/opt/cni/bin/install
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 09 Mar 2023 14:06:17 -0600
Finished: Thu, 09 Mar 2023 14:06:18 -0600
Ready: False
Restart Count: 9
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
CNI_CONF_NAME: 10-calico.conflist
CNI_NETWORK_CONFIG: <set to the key 'cni_network_config' of config map 'calico-config'> Optional: false
KUBERNETES_NODE_NAME: (v1:spec.nodeName)
CNI_MTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
SLEEP: false
Mounts:
/host/etc/cni/net.d from cni-net-dir (rw)
/host/opt/cni/bin from cni-bin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knm4l (ro)
mount-bpffs:
Container ID:
Image: docker.io/calico/node:v3.25.0
Image ID:
Port: <none>
Host Port: <none>
Command:
calico-node
-init
-best-effort
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/nodeproc from nodeproc (ro)
/sys/fs from sys-fs (rw)
/var/run/calico from var-run-calico (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knm4l (ro)
Containers:
calico-node:
Container ID:
Image: docker.io/calico/node:v3.25.0
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 250m
Liveness: exec [/bin/calico-node -felix-live -bird-live] delay=10s timeout=10s period=10s #success=1 #failure=6
Readiness: exec [/bin/calico-node -felix-ready -bird-ready] delay=0s timeout=10s period=10s #success=1 #failure=3
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
DATASTORE_TYPE: kubernetes
WAIT_FOR_DATASTORE: true
NODENAME: (v1:spec.nodeName)
CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false
CLUSTER_TYPE: k8s,bgp
IP: autodetect
CALICO_IPV4POOL_IPIP: Always
CALICO_IPV4POOL_VXLAN: Never
CALICO_IPV6POOL_VXLAN: Never
FELIX_IPINIPMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
FELIX_VXLANMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
FELIX_WIREGUARDMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
CALICO_IPV4POOL_CIDR: 10.100.0.0/16
CALICO_DISABLE_FILE_LOGGING: true
FELIX_DEFAULTENDPOINTTOHOSTACTION: ACCEPT
FELIX_IPV6SUPPORT: false
FELIX_HEALTHENABLED: true
Mounts:
/host/etc/cni/net.d from cni-net-dir (rw)
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/sys/fs/bpf from bpffs (rw)
/var/lib/calico from var-lib-calico (rw)
/var/log/calico/cni from cni-log-dir (ro)
/var/run/calico from var-run-calico (rw)
/var/run/nodeagent from policysync (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knm4l (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
var-run-calico:
Type: HostPath (bare host directory volume)
Path: /var/run/calico
HostPathType:
var-lib-calico:
Type: HostPath (bare host directory volume)
Path: /var/lib/calico
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
sys-fs:
Type: HostPath (bare host directory volume)
Path: /sys/fs/
HostPathType: DirectoryOrCreate
bpffs:
Type: HostPath (bare host directory volume)
Path: /sys/fs/bpf
HostPathType: Directory
nodeproc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni-net-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
cni-log-dir:
Type: HostPath (bare host directory volume)
Path: /var/log/calico/cni
HostPathType:
host-local-net-dir:
Type: HostPath (bare host directory volume)
Path: /var/lib/cni/networks
HostPathType:
policysync:
Type: HostPath (bare host directory volume)
Path: /var/run/nodeagent
HostPathType: DirectoryOrCreate
kube-api-access-knm4l:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: :NoSchedule op=Exists
:NoExecute op=Exists
CriticalAddonsOnly op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23m default-scheduler Successfully assigned kube-system/calico-node-m4bsl to kw1
Normal Pulled 23m kubelet Container image "docker.io/calico/cni:v3.25.0" already present on machine
Normal Created 23m kubelet Created container upgrade-ipam
Normal Started 23m kubelet Started container upgrade-ipam
Normal Pulled 22m (x5 over 23m) kubelet Container image "docker.io/calico/cni:v3.25.0" already present on machine
Normal Created 22m (x5 over 23m) kubelet Created container install-cni
Normal Started 22m (x5 over 23m) kubelet Started container install-cni
Warning BackOff 3m50s (x94 over 23m) kubelet Back-off restarting failed container install-cni in pod calico-node-m4bsl_kube-system(80c1a06b-7522-4df6-8c5e-e7e1beb41cd0)
Answer 1
The suspected root cause is that the kubelet is starting multiple instances of portmap, which prevents the install-cni container from copying that executable into place and completing the CNI installation alongside calico-node. The kubelet and Calico appear to be contending for access to the same executable, /home/kubernetes/bin/portmap. See the hostPort support discussion for details.

As you describe it (the Calico init container keeps failing), the calico-node pod cannot recover, so user workloads that depend on pod networking and network policy cannot start either.
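The events in your describe output only show the back-off; the actual error message is usually in the failing init container's log. A quick way to see it (assuming the same pod name as above):

```shell
# Logs from the most recent failed run of the install-cni init container
kubectl -n kube-system logs calico-node-m4bsl -c install-cni --previous
```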
Modify the kube-system calico-node DaemonSet to set UPDATE_CNI_BINARIES="false", i.e. make its YAML include something like:

      env:
        - name: UPDATE_CNI_BINARIES
          value: "false"
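One way to apply that change without hand-editing the manifest (a sketch; it assumes install-cni is the second init container, index 1, which matches the ordering in the v3.25.0 manifest and the describe output):

```shell
# Append UPDATE_CNI_BINARIES=false to the install-cni init container's env
kubectl -n kube-system patch daemonset calico-node --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/initContainers/1/env/-",
   "value": {"name": "UPDATE_CNI_BINARIES", "value": "false"}}
]'
```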
Also check whether you are hitting transient resource overload during spikes in activity. Try raising periodSeconds or timeoutSeconds on the probes to give the process enough time to respond.
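For the probe-timing suggestion, the relevant fields live on the calico-node container's probes; a sketch with loosened values (per the describe output above, the v3.25.0 manifest defaults are period=10s, timeout=10s):

```yaml
livenessProbe:
  exec:
    command: ["/bin/calico-node", "-felix-live", "-bird-live"]
  initialDelaySeconds: 10
  periodSeconds: 30    # default 10s: probe less often
  timeoutSeconds: 20   # default 10s: allow slower responses
  failureThreshold: 6
```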