How to fix ContainerCreating errors when deploying MetalLB?

For testing purposes, I installed Ubuntu 21 on a VMware ESXi server. On that machine, I brought up a Kubernetes cluster on LXC containers by following the steps from this repository. The LXC containers are up and running:

adminuser@testing:~/Desktop$ lxc list
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
|   NAME   |  STATE  |       IPV4        |                     IPV6                      |   TYPE    | SNAPSHOTS |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| kmaster  | RUNNING | 10.8.0.217 (eth0) | fd42:666f:471d:3d53:216:3eff:fe54:dce6 (eth0) | CONTAINER | 0         |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| kworker1 | RUNNING | 10.8.0.91 (eth0)  | fd42:666f:471d:3d53:216:3eff:fee4:480e (eth0) | CONTAINER | 0         |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| kworker2 | RUNNING | 10.8.0.124 (eth0) | fd42:666f:471d:3d53:216:3eff:fede:3c9d (eth0) | CONTAINER | 0         |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+

Then I started deploying MetalLB on this cluster, following the steps mentioned here (link), and applied this ConfigMap for the Layer 2 address pool (`k8s-metallb-configmap.yaml`):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.8.0.240-10.8.0.250
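For reference, this ConfigMap format is what MetalLB v0.10.x (the version in the pod specs below) expects. In MetalLB v0.13 and later the ConfigMap was replaced by CRDs; a roughly equivalent configuration under the newer API would be the following sketch (not needed for v0.10.x):

```yaml
# MetalLB >= v0.13 equivalent of the ConfigMap above (sketch)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 10.8.0.240-10.8.0.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
```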

But the MetalLB pods are not running:

kubectl get pods -n metallb-system
NAME                          READY   STATUS                       RESTARTS   AGE
controller-6b78bff7d9-cxf2z   0/1     ContainerCreating            0          38m
speaker-fpvjt                 0/1     CreateContainerConfigError   0          38m
speaker-mbz7b                 0/1     CreateContainerConfigError   0          38m
speaker-zgz4d                 0/1     CreateContainerConfigError   0          38m

I checked the logs:

kubectl describe pod controller-6b78bff7d9-cxf2z -n metallb-system
Name:           controller-6b78bff7d9-cxf2z
Namespace:      metallb-system
Priority:       0
Node:           kworker1/10.8.0.91
Start Time:     Wed, 14 Jul 2021 20:52:10 +0530
Labels:         app=metallb
                component=controller
                pod-template-hash=6b78bff7d9
Annotations:    prometheus.io/port: 7472
                prometheus.io/scrape: true
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/controller-6b78bff7d9
Containers:
  controller:
    Container ID:  
    Image:         quay.io/metallb/controller:v0.10.2
    Image ID:      
    Port:          7472/TCP
    Host Port:     0/TCP
    Args:
      --port=7472
      --config=config
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      METALLB_ML_SECRET_NAME:  memberlist
      METALLB_DEPLOYMENT:      controller
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j76kg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-j76kg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                 From               Message
  ----     ------                  ----                ----               -------
  Normal   Scheduled               32m                 default-scheduler  Successfully assigned metallb-system/controller-6b78bff7d9-cxf2z to kworker1
  Warning  FailedCreatePodSandBox  32m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a8a6fa54086b9e65c42c8a0478dcac0769e8b278eeafe11eafb9ad5be40d48eb": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  31m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "264ee423734139b712395c0570c888cff0b7b526e5154da0b7ccbdafe5bd9ba3": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  31m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1a3cb9e20a2a015adc7b4924ed21e0b50604ee9f9fae52170c03298dff0d6a78": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  31m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "56dd906cdadc8ef50db3cc725d988090539a0818c2579738d575140cebbec71a": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  31m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8ddcfa704da9867c3a68030f0dc59f7c0d04bdc3a0b598c98a71aa8787585ca6": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  30m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "50431bbc89188799562c48847be90e243bbf49a2c5401eb2219a0c4745cfcfb6": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  30m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "da9ad1d418d3aded668c53f5e3f98ddfac14af638ed7e8142b904e12a99bfd77": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  30m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4dc6109c696ee410c58a0894ac70e5165a56bab99468ee42ffe88b2f5e33ef2f": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  30m                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a8f1cad2ce9f8c278c07c924106a1b6b321a80124504737a574bceea983a0026": open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  2m (x131 over 29m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f5e93b893275afe5309eddd9686c0ecfeb01e91141259164082cb99c1e2c1902": open /run/flannel/subnet.env: no such file or directory
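
The repeated `open /run/flannel/subnet.env: no such file or directory` errors mean the flannel CNI plugin never finished initializing on that node, so no pod sandbox can get a network; MetalLB is only the victim here. A quick way to confirm this, assuming flannel is deployed as a DaemonSet in `kube-system` (the pod name below is an example):

```shell
# Are the flannel pods themselves healthy?
kubectl get pods -n kube-system -o wide | grep flannel

# Why is a failing flannel pod crashing?
kubectl logs -n kube-system kube-flannel-ds-7f5b7

# On the affected node: does the file flannel should have written exist?
ls -l /run/flannel/subnet.env
```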

And the speaker pod:

kubectl describe pod  speaker-zgz4d -n metallb-system
Name:         speaker-zgz4d
Namespace:    metallb-system
Priority:     0
Node:         kmaster/10.8.0.217
Start Time:   Wed, 14 Jul 2021 20:52:10 +0530
Labels:       app=metallb
              component=speaker
              controller-revision-hash=7668c5cdf6
              pod-template-generation=1
Annotations:  prometheus.io/port: 7472
              prometheus.io/scrape: true
Status:       Pending
IP:           10.8.0.217
IPs:
  IP:           10.8.0.217
Controlled By:  DaemonSet/speaker
Containers:
  speaker:
    Container ID:  
    Image:         quay.io/metallb/speaker:v0.10.2
    Image ID:      
    Ports:         7472/TCP, 7946/TCP, 7946/UDP
    Host Ports:    7472/TCP, 7946/TCP, 7946/UDP
    Args:
      --port=7472
      --config=config
    State:          Waiting
      Reason:       CreateContainerConfigError
    Ready:          False
    Restart Count:  0
    Environment:
      METALLB_NODE_NAME:       (v1:spec.nodeName)
      METALLB_HOST:            (v1:status.hostIP)
      METALLB_ML_BIND_ADDR:    (v1:status.podIP)
      METALLB_ML_LABELS:      app=metallb,component=speaker
      METALLB_ML_SECRET_KEY:  <set to the key 'secretkey' in secret 'memberlist'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l2gzm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-l2gzm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    41m                  default-scheduler  Successfully assigned metallb-system/speaker-zgz4d to kmaster
  Warning  FailedMount  41m                  kubelet            MountVolume.SetUp failed for volume "kube-api-access-l2gzm" : failed to sync configmap cache: timed out waiting for the condition
  Warning  Failed       39m (x12 over 41m)   kubelet            Error: secret "memberlist" not found
  Normal   Pulled       78s (x185 over 41m)  kubelet            Container image "quay.io/metallb/speaker:v0.10.2" already present on machine

Pod status after setting the value from null to 0:

kube-apiserver-kmaster            1/1     Running             0          27m
kube-controller-manager-kmaster   1/1     Running             0          27m
kube-flannel-ds-7f5b7             0/1     CrashLoopBackOff    1          76s
kube-flannel-ds-bs9h5             0/1     Error               1          72s
kube-flannel-ds-t9rpf             0/1     Error               1          71s
kube-proxy-ht5fk                  0/1     CrashLoopBackOff    3          76s
kube-proxy-ldhhc                  0/1     CrashLoopBackOff    3          75s
kube-proxy-mwrkc                  0/1     CrashLoopBackOff    3          76s
kube-scheduler-kmaster            1/1     Running             0          2

Answer 1

I solved this by manually creating the secret with the correct name, memberlist instead of metallb-memberlist, as follows:

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
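
A sketch of how to verify the fix, assuming the standard MetalLB manifest (the `component=speaker` label selector comes from it):

```shell
# Confirm the secret exists and carries the expected key
kubectl get secret memberlist -n metallb-system -o jsonpath='{.data.secretkey}' | wc -c

# Recreate the speaker pods so they pick up the new secret
kubectl delete pods -n metallb-system -l component=speaker
```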

Answer 2

I had installed metallb before installing ingress-nginx.

I simply ignored that error; after installing ingress-nginx, the error went away.


Answer 3

I don't have access to a VMware toolset, but I tried to replicate your setup as closely as possible.

In my case, the kube-proxy-* and kube-flannel-ds-* pods were stuck in the CrashLoopBackOff state, failing with:

1 main.go:251] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-7tg89': Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-7tg89": dial tcp 10.96.0.1:443: i/o timeout
1 server.go:489] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied

This prevented the MetalLB pods from starting.


To make it work, I edited the kube-proxy ConfigMap:

# kubectl edit configmap/kube-proxy -n kube-system

and changed

maxPerCore: null

to

maxPerCore: 0

Then I deleted all the kube-proxy and kube-flannel-ds pods, and they were immediately recreated by their DaemonSets.
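
For context, `maxPerCore` sits under the `conntrack` section of the kube-proxy configuration (`config.conf` inside the ConfigMap); after the edit, the relevant fragment looks roughly like this (the other values shown are kube-proxy defaults):

```yaml
# Fragment of the kube-proxy ConfigMap after the edit
conntrack:
  # 0 tells kube-proxy not to touch nf_conntrack_max, which an
  # unprivileged LXC container is not allowed to write anyway
  maxPerCore: 0
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
```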

# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-h5qsh          1/1     Running   0          49m
coredns-558bd4d5db-m86w5          1/1     Running   0          49m
etcd-kmaster                      1/1     Running   0          49m
kube-apiserver-kmaster            1/1     Running   0          49m
kube-controller-manager-kmaster   1/1     Running   0          49m
kube-flannel-ds-87pnx             1/1     Running   0          11m
kube-flannel-ds-jmjtc             1/1     Running   0          11m
kube-flannel-ds-rxbdm             1/1     Running   0          11m
kube-proxy-dcvjs                  1/1     Running   0          12m
kube-proxy-h628j                  1/1     Running   0          12m
kube-proxy-w8jxn                  1/1     Running   0          12m
kube-scheduler-kmaster            1/1     Running   0          49m

Then I deleted all the MetalLB pods, which were also recreated automatically:

root@kmaster:~# kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-6b78bff7d9-btwlr   1/1     Running   0          12m
speaker-kr8lv                 1/1     Running   0          12m
speaker-sqk4d                 1/1     Running   0          12m
speaker-wm5r8                 1/1     Running   0          12m

Now everything seems to be working.


I also manually created the /run/flannel/subnet.env file with the following content:

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

but that may not have been necessary.
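
These values mirror flannel's defaults: 10.244.0.0/16 is the pod network flannel's stock manifest assumes (it must match the `--pod-network-cidr` passed to kubeadm), and the 1450 MTU leaves room for the VXLAN header on a 1500-byte link. If your cluster uses a different pod network, the file has to agree with the `net-conf.json` in flannel's own ConfigMap, which by default contains:

```yaml
# Default net-conf.json inside the kube-flannel-cfg ConfigMap
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```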
