Calico CNI calico-apiserver issue on a kubeadm-based on-premises Kubernetes v1.24.2 cluster

I cannot get the Calico CNI to become fully functional on a kubeadm-based on-premises Kubernetes v1.24.2 cluster.

The calico-apiserver pods (in the calico-apiserver namespace) are stuck in "CrashLoopBackOff".

I installed the Calico CNI following the official documentation at "https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onprem", using the commands below.

  kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/tigera-operator.yaml
  kubectl create -f /tmp/custom-resources.yaml

The contents of "/tmp/custom-resources.yaml" are shown below.


---

  # This section includes base Calico installation configuration.
  # For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
  # "https://projectcalico.docs.tigera.io/networking/ip-autodetection"
  apiVersion: operator.tigera.io/v1
  kind: Installation
  metadata:
    name: default
  spec:
    # Configures Calico networking.
    calicoNetwork:
      # Note: The ipPools section cannot be modified post-install.
      bgp: Disabled # we would like to use VXLAN; see "https://projectcalico.docs.tigera.io/networking/determine-best-networking".
      ipPools:
        -
          #blockSize: 26
          cidr: 172.22.0.0/16
          encapsulation: VXLAN
      nodeAddressAutodetectionV4:
        interface: eth1
        cidrs:
          -
            "192.168.12.0/24"
  
---
  
  # This section configures the Calico API server.
  # For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
  apiVersion: operator.tigera.io/v1
  kind: APIServer 
  metadata: 
    name: default 
  spec: {}
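
A quick way to check how far the tigera-operator has gotten with this configuration is to query the TigeraStatus resources it creates (using the same admin kubeconfig); in my environment that is roughly:

  kubectl get tigerastatus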
  

The configuration file I passed to kubeadm init via the --config parameter contains the following section (this is an abridged version of the file).


  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  networking:
    dnsDomain: cluster.local
    serviceSubnet: 172.21.0.0/16
    podSubnet: 172.22.0.0/16
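
The ipPools cidr in the Calico Installation above (172.22.0.0/16) is meant to match this podSubnet. As a sanity check, the value kubeadm actually stored can be read back from its kubeadm-config ConfigMap, for example:

  kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -A 3 'networking:'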

The following command

  kube_apiserver_node_01="192.168.12.17"
  kubectl \
    --kubeconfig=/home/somebody/kubernetes-via-kubeadm/kubeadm/${kube_apiserver_node_01}/admin.conf \
    get nodes,pods,services,deployments,replicasets,cronjobs,jobs -A \
    -o wide

produces the following report.


NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
node/centos7-03-05   Ready    control-plane   45m   v1.24.2   192.168.12.17   <none>        CentOS Linux 7 (Core)   3.10.0-1160.76.1.el7.x86_64   cri-o://1.24.2
node/centos7-03-06   Ready    control-plane   44m   v1.24.2   192.168.12.18   <none>        CentOS Linux 7 (Core)   3.10.0-1160.76.1.el7.x86_64   cri-o://1.24.2
node/centos7-03-07   Ready    control-plane   42m   v1.24.2   192.168.12.19   <none>        CentOS Linux 7 (Core)   3.10.0-1160.76.1.el7.x86_64   cri-o://1.24.2
node/centos7-03-08   Ready    <none>          41m   v1.24.2   192.168.12.20   <none>        CentOS Linux 7 (Core)   3.10.0-1160.76.1.el7.x86_64   cri-o://1.24.2
node/centos7-03-09   Ready    <none>          41m   v1.24.2   192.168.12.21   <none>        CentOS Linux 7 (Core)   3.10.0-1160.76.1.el7.x86_64   cri-o://1.24.2
node/centos7-03-10   Ready    <none>          40m   v1.24.2   192.168.12.22   <none>        CentOS Linux 7 (Core)   3.10.0-1160.76.1.el7.x86_64   cri-o://1.24.2

NAMESPACE          NAME                                           READY   STATUS             RESTARTS        AGE   IP               NODE            NOMINATED NODE   READINESS GATES
calico-apiserver   pod/calico-apiserver-5dccd9f877-5hxdx          0/1     CrashLoopBackOff   12 (3s ago)     37m   172.22.3.194     centos7-03-09   <none>           <none>
calico-apiserver   pod/calico-apiserver-5dccd9f877-fvxzx          0/1     CrashLoopBackOff   11 (5m1s ago)   37m   172.22.178.194   centos7-03-10   <none>           <none>
calico-system      pod/calico-kube-controllers-6d5b985f7d-cxt77   1/1     Running            1 (38m ago)     45m   172.22.147.132   centos7-03-05   <none>           <none>
calico-system      pod/calico-node-7dcsc                          1/1     Running            0               45m   192.168.12.17    centos7-03-05   <none>           <none>
calico-system      pod/calico-node-b4kbb                          1/1     Running            0               41m   192.168.12.21    centos7-03-09   <none>           <none>
calico-system      pod/calico-node-fvbg5                          1/1     Running            0               44m   192.168.12.18    centos7-03-06   <none>           <none>
calico-system      pod/calico-node-nccpv                          1/1     Running            1 (37m ago)     40m   192.168.12.22    centos7-03-10   <none>           <none>
calico-system      pod/calico-node-vc8vg                          1/1     Running            2 (37m ago)     41m   192.168.12.19    centos7-03-07   <none>           <none>
calico-system      pod/calico-node-zqlhn                          1/1     Running            0               41m   192.168.12.20    centos7-03-08   <none>           <none>
calico-system      pod/calico-typha-5c4b4bc487-5t9s9              1/1     Running            0               41m   192.168.12.20    centos7-03-08   <none>           <none>
calico-system      pod/calico-typha-5c4b4bc487-rc6w7              1/1     Running            0               41m   192.168.12.19    centos7-03-07   <none>           <none>
calico-system      pod/calico-typha-5c4b4bc487-zxzwx              1/1     Running            0               45m   192.168.12.17    centos7-03-05   <none>           <none>
calico-system      pod/csi-node-driver-cq82z                      2/2     Running            0               37m   172.22.121.193   centos7-03-07   <none>           <none>
calico-system      pod/csi-node-driver-dlrcp                      2/2     Running            0               37m   172.22.178.193   centos7-03-10   <none>           <none>
calico-system      pod/csi-node-driver-ksb92                      2/2     Running            0               41m   172.22.147.131   centos7-03-05   <none>           <none>
calico-system      pod/csi-node-driver-pkmzj                      2/2     Running            0               41m   172.22.74.193    centos7-03-06   <none>           <none>
calico-system      pod/csi-node-driver-qj5sm                      2/2     Running            0               37m   172.22.117.193   centos7-03-08   <none>           <none>
calico-system      pod/csi-node-driver-r666v                      2/2     Running            0               37m   172.22.3.193     centos7-03-09   <none>           <none>
kube-system        pod/coredns-6d4b75cb6d-42rt4                   1/1     Running            0               45m   172.22.147.129   centos7-03-05   <none>           <none>
kube-system        pod/coredns-6d4b75cb6d-phk66                   1/1     Running            0               45m   172.22.147.130   centos7-03-05   <none>           <none>
kube-system        pod/kube-apiserver-centos7-03-05               1/1     Running            0               45m   192.168.12.17    centos7-03-05   <none>           <none>
kube-system        pod/kube-apiserver-centos7-03-06               1/1     Running            0               43m   192.168.12.18    centos7-03-06   <none>           <none>
kube-system        pod/kube-apiserver-centos7-03-07               1/1     Running            2 (38m ago)     42m   192.168.12.19    centos7-03-07   <none>           <none>
kube-system        pod/kube-controller-manager-centos7-03-05      1/1     Running            3 (37m ago)     45m   192.168.12.17    centos7-03-05   <none>           <none>
kube-system        pod/kube-controller-manager-centos7-03-06      1/1     Running            0               43m   192.168.12.18    centos7-03-06   <none>           <none>
kube-system        pod/kube-controller-manager-centos7-03-07      1/1     Running            0               41m   192.168.12.19    centos7-03-07   <none>           <none>
kube-system        pod/kube-proxy-624hh                           1/1     Running            0               41m   192.168.12.20    centos7-03-08   <none>           <none>
kube-system        pod/kube-proxy-9g7rp                           1/1     Running            0               44m   192.168.12.18    centos7-03-06   <none>           <none>
kube-system        pod/kube-proxy-hf4ht                           1/1     Running            0               40m   192.168.12.22    centos7-03-10   <none>           <none>
kube-system        pod/kube-proxy-kw6fv                           1/1     Running            0               41m   192.168.12.21    centos7-03-09   <none>           <none>
kube-system        pod/kube-proxy-nqhft                           1/1     Running            0               41m   192.168.12.19    centos7-03-07   <none>           <none>
kube-system        pod/kube-proxy-xphvv                           1/1     Running            0               45m   192.168.12.17    centos7-03-05   <none>           <none>
kube-system        pod/kube-scheduler-centos7-03-05               1/1     Running            3 (37m ago)     45m   192.168.12.17    centos7-03-05   <none>           <none>
kube-system        pod/kube-scheduler-centos7-03-06               1/1     Running            0               43m   192.168.12.18    centos7-03-06   <none>           <none>
kube-system        pod/kube-scheduler-centos7-03-07               1/1     Running            0               42m   192.168.12.19    centos7-03-07   <none>           <none>
ns-test-02         pod/my-alpine                                  1/1     Running            0               19m   172.22.3.195     centos7-03-09   <none>           <none>
ns-test-02         pod/my-nginx                                   1/1     Running            0               19m   172.22.117.194   centos7-03-08   <none>           <none>
tigera-operator    pod/tigera-operator-6bb888d6fc-7whxq           1/1     Running            3 (37m ago)     45m   192.168.12.17    centos7-03-05   <none>           <none>

NAMESPACE          NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
calico-apiserver   service/calico-api                        ClusterIP   172.21.115.199   <none>        443/TCP                  37m   apiserver=true
calico-system      service/calico-kube-controllers-metrics   ClusterIP   172.21.65.7      <none>        9094/TCP                 39m   k8s-app=calico-kube-controllers
calico-system      service/calico-typha                      ClusterIP   172.21.133.86    <none>        5473/TCP                 45m   k8s-app=calico-typha
default            service/kubernetes                        ClusterIP   172.21.0.1       <none>        443/TCP                  45m   <none>
kube-system        service/kube-dns                          ClusterIP   172.21.0.10      <none>        53/UDP,53/TCP,9153/TCP   45m   k8s-app=kube-dns
ns-test-02         service/my-nginx                          ClusterIP   172.21.242.10    <none>        80/TCP                   19m   app=nginx,purpose=learning

NAMESPACE          NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                IMAGES                                      SELECTOR
calico-apiserver   deployment.apps/calico-apiserver          0/2     2            0           37m   calico-apiserver          docker.io/calico/apiserver:v3.24.1          apiserver=true
calico-system      deployment.apps/calico-kube-controllers   1/1     1            1           45m   calico-kube-controllers   docker.io/calico/kube-controllers:v3.24.1   k8s-app=calico-kube-controllers
calico-system      deployment.apps/calico-typha              3/3     3            3           45m   calico-typha              docker.io/calico/typha:v3.24.1              k8s-app=calico-typha
kube-system        deployment.apps/coredns                   2/2     2            2           45m   coredns                   k8s.gcr.io/coredns/coredns:v1.8.6           k8s-app=kube-dns
tigera-operator    deployment.apps/tigera-operator           1/1     1            1           45m   tigera-operator           quay.io/tigera/operator:v1.28.1             name=tigera-operator

NAMESPACE          NAME                                                 DESIRED   CURRENT   READY   AGE   CONTAINERS                IMAGES                                      SELECTOR
calico-apiserver   replicaset.apps/calico-apiserver-5dccd9f877          2         2         0       37m   calico-apiserver          docker.io/calico/apiserver:v3.24.1          apiserver=true,pod-template-hash=5dccd9f877
calico-system      replicaset.apps/calico-kube-controllers-6d5b985f7d   1         1         1       45m   calico-kube-controllers   docker.io/calico/kube-controllers:v3.24.1   k8s-app=calico-kube-controllers,pod-template-hash=6d5b985f7d
calico-system      replicaset.apps/calico-typha-5c4b4bc487              3         3         3       45m   calico-typha              docker.io/calico/typha:v3.24.1              k8s-app=calico-typha,pod-template-hash=5c4b4bc487
kube-system        replicaset.apps/coredns-6d4b75cb6d                   2         2         2       45m   coredns                   k8s.gcr.io/coredns/coredns:v1.8.6           k8s-app=kube-dns,pod-template-hash=6d4b75cb6d
tigera-operator    replicaset.apps/tigera-operator-6bb888d6fc           1         1         1       45m   tigera-operator           quay.io/tigera/operator:v1.28.1             name=tigera-operator,pod-template-hash=6bb888d6fc

NAMESPACE       NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS                             IMAGES                                                                        SELECTOR
calico-system   daemonset.apps/calico-node       6         6         6       6            6           kubernetes.io/os=linux   45m   calico-node                            docker.io/calico/node:v3.24.1                                                 k8s-app=calico-node
calico-system   daemonset.apps/csi-node-driver   6         6         6       6            6           kubernetes.io/os=linux   45m   calico-csi,csi-node-driver-registrar   docker.io/calico/csi:v3.24.1,docker.io/calico/node-driver-registrar:v3.24.1   k8s-app=csi-node-driver
kube-system     daemonset.apps/kube-proxy        6         6         6       6            6           kubernetes.io/os=linux   45m   kube-proxy                             k8s.gcr.io/kube-proxy:v1.24.2                                                 k8s-app=kube-proxy

The following command


  kube_apiserver_node_01="192.168.12.17"
  kubectl \
    --kubeconfig=/home/somebody/kubernetes-via-kubeadm/kubeadm/${kube_apiserver_node_01}/admin.conf \
    describe pod/calico-apiserver-5dccd9f877-5hxdx -n calico-apiserver

produces the following pod description.


Name:         calico-apiserver-5dccd9f877-5hxdx
Namespace:    calico-apiserver
Priority:     0
Node:         centos7-03-09/192.168.12.21
Start Time:   Wed, 31 Aug 2022 13:05:13 +0300
Labels:       apiserver=true
              app.kubernetes.io/name=calico-apiserver
              k8s-app=calico-apiserver
              pod-template-hash=5dccd9f877
Annotations:  cni.projectcalico.org/containerID: b4c32d8b9eff386f005b4191e93844ae41442e0a6390c665118e0da734e736bf
              cni.projectcalico.org/podIP: 172.22.3.194/32
              cni.projectcalico.org/podIPs: 172.22.3.194/32
              hash.operator.tigera.io/calico-apiserver-certs: 0bf8a4762bb60b6cb9b2a162a41421bf03dea6a0
Status:       Running
IP:           172.22.3.194
IPs:
  IP:           172.22.3.194
Controlled By:  ReplicaSet/calico-apiserver-5dccd9f877
Containers:
  calico-apiserver:
    Container ID:  cri-o://f62089d913cce9f23de5a6049341175e367f0c6c979c1ea69a17b61bb3990287
    Image:         docker.io/calico/apiserver:v3.24.1
    Image ID:      docker.io/calico/apiserver@sha256:1eac67b67652da38060ed0f5ece2f4fabf2e911b2df7d2c84c278f96b5e4b324
    Port:          <none>
    Host Port:     <none>
    Args:
      --secure-port=5443
      --tls-private-key-file=/calico-apiserver-certs/tls.key
      --tls-cert-file=/calico-apiserver-certs/tls.crt
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Wed, 31 Aug 2022 13:42:42 +0300
      Finished:     Wed, 31 Aug 2022 13:42:43 +0300
    Ready:          False
    Restart Count:  12
    Liveness:       http-get https://:5443/version delay=90s timeout=1s period=10s #success=1 #failure=3
    Readiness:      exec [/code/filecheck] delay=5s timeout=1s period=10s #success=1 #failure=5
    Environment:
      DATASTORE_TYPE:           kubernetes
      KUBERNETES_SERVICE_HOST:  172.21.0.1
      KUBERNETES_SERVICE_PORT:  443
      MULTI_INTERFACE_MODE:     none
    Mounts:
      /calico-apiserver-certs from calico-apiserver-certs (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4v4ks (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  calico-apiserver-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  calico-apiserver-certs
    Optional:    false
  kube-api-access-4v4ks:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                    From               Message
  ----     ------       ----                   ----               -------
  Normal   Scheduled    39m                    default-scheduler  Successfully assigned calico-apiserver/calico-apiserver-5dccd9f877-5hxdx to centos7-03-09
  Warning  FailedMount  39m                    kubelet            MountVolume.SetUp failed for volume "calico-apiserver-certs" : secret "calico-apiserver-certs" not found
  Normal   Pulling      39m                    kubelet            Pulling image "docker.io/calico/apiserver:v3.24.1"
  Normal   Pulled       38m                    kubelet            Successfully pulled image "docker.io/calico/apiserver:v3.24.1" in 41.273158472s
  Normal   Created      37m (x4 over 38m)      kubelet            Created container calico-apiserver
  Normal   Started      37m (x4 over 38m)      kubelet            Started container calico-apiserver
  Normal   Pulled       37m (x4 over 38m)      kubelet            Container image "docker.io/calico/apiserver:v3.24.1" already present on machine
  Warning  BackOff      4m34s (x169 over 38m)  kubelet            Back-off restarting failed container

In the "Events" section of the output above, we see the following line.


  Warning  FailedMount  39m                    kubelet            MountVolume.SetUp failed for volume "calico-apiserver-certs" : secret "calico-apiserver-certs" not found

However, a secret/calico-apiserver-certs object does exist, as shown below.


  kube_apiserver_node_01="192.168.12.17"
  kubectl \
    --kubeconfig=/home/somebody/kubernetes-via-kubeadm/kubeadm/${kube_apiserver_node_01}/admin.conf \
    describe secret/calico-apiserver-certs -n calico-apiserver

The command above produces the following output.


Name:         calico-apiserver-certs
Namespace:    calico-apiserver
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
tls.crt:  2530 bytes
tls.key:  1675 bytes
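
Since the secret clearly exists, the single FailedMount event presumably just predates the operator creating it, so the crash must have another cause. As a next step (a sketch only; the --kubeconfig flag is omitted for brevity), the certificate inside the secret and the crashed container's own logs can be examined:

  # Show the subject and validity window of the certificate held in the secret.
  kubectl -n calico-apiserver get secret calico-apiserver-certs \
    -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates

  # Fetch the logs of the previous (crashed) container instance.
  kubectl -n calico-apiserver logs calico-apiserver-5dccd9f877-5hxdx --previous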


The command below dumps the kubelet service logs on host 192.168.12.21 (centos7-03-09), the node where pod/calico-apiserver-5dccd9f877-5hxdx runs.


ssh 192.168.12.21 "journalctl -x -u kubelet.service"

It produces log lines such as the following.

Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.417899    2214 manager.go:309] Recovery completed
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.485635    2214 kubelet_node_status.go:352] "Setting node annotation to enable volume controller attach/detach"
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: E0831 13:00:45.491114    2214 kubelet.go:2424] "Error getting node" err="node \"centos7-03-08\" not found"
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.501858    2214 kubelet_node_status.go:563] "Recording event message for node" node="centos7-03-08" event="NodeHasSufficientMemory"
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.501895    2214 kubelet_node_status.go:563] "Recording event message for node" node="centos7-03-08" event="NodeHasNoDiskPressure"
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.501918    2214 kubelet_node_status.go:563] "Recording event message for node" node="centos7-03-08" event="NodeHasSufficientPID"
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.501993    2214 kubelet_node_status.go:70] "Attempting to register node" node="centos7-03-08"
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.512658    2214 kubelet_network_linux.go:76] "Initialized protocol iptables rules." protocol=IPv4
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: E0831 13:00:45.521005    2214 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"centos7-03-08.171065aa8222db47", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"centos7-03-08", UID:"centos7-03-08", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node centos7-03-08 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"centos7-03-08"}, FirstTimestamp:time.Date(2022, time.August, 31, 13, 0, 45, 501881159, time.Local), LastTimestamp:time.Date(2022, time.August, 31, 13, 0, 45, 501881159, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: E0831 13:00:45.521352    2214 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="centos7-03-08"
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: E0831 13:00:45.530639    2214 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"centos7-03-08.171065aa8223472d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"centos7-03-08", UID:"centos7-03-08", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node centos7-03-08 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"centos7-03-08"}, FirstTimestamp:time.Date(2022, time.August, 31, 13, 0, 45, 501908781, time.Local), LastTimestamp:time.Date(2022, time.August, 31, 13, 0, 45, 501908781, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: E0831 13:00:45.545285    2214 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"centos7-03-08.171065aa822392ac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"centos7-03-08", UID:"centos7-03-08", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node centos7-03-08 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"centos7-03-08"}, FirstTimestamp:time.Date(2022, time.August, 31, 13, 0, 45, 501928108, time.Local), LastTimestamp:time.Date(2022, time.August, 31, 13, 0, 45, 501928108, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.553697    2214 kubelet_node_status.go:352] "Setting node annotation to enable volume controller attach/detach"
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.555658    2214 kubelet_node_status.go:563] "Recording event message for node" node="centos7-03-08" event="NodeHasSufficientMemory"
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.555689    2214 kubelet_node_status.go:563] "Recording event message for node" node="centos7-03-08" event="NodeHasNoDiskPressure"
Aug 31 13:00:45 centos7-03-08 kubelet[2214]: I0831 13:00:45.555708    2214 kubelet_node_status.go:563] "Recording event message for node" node="centos7-03-08" event="NodeHasSufficientPID"
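
Because the nodes run cri-o, the failed container can also be inspected directly on the node with crictl; a rough sketch (the <container-id> placeholder must be replaced with an ID reported by crictl ps):

  ssh 192.168.12.21 "sudo crictl ps -a --name calico-apiserver"
  ssh 192.168.12.21 "sudo crictl logs <container-id>"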
