StatefulSet Pod: problem with multiple persistent volume claims

I have 5 shared NFS folders:

   root@k8s-eu-1-master:~# df -h | grep /srv/
   aa.aaa.aaa.aaa:/srv/shared-k8s-eu-1-worker-1 391G 6.1G 365G 2% /mnt/data
   bb.bbb.bbb.bbb:/srv/shared-k8s-eu-1-worker-2 391G 6.1G 365G 2% /mnt/data
   cc.ccc.ccc.cc:/srv/shared-k8s-eu-1-worker-3 391G 6.1G 365G 2% /mnt/data
   dd.ddd.ddd.dd:/srv/shared-k8s-eu-1-worker-4 391G 6.1G 365G 2% /mnt/data
   ee.eee.eee.eee:/srv/shared-k8s-eu-1-worker-5 391G 6.1G 365G 2% /mnt/data
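
(Each export can also be verified from a node with showmount, assuming the NFS client utilities are installed there; the address below is one of the placeholder server addresses from the listing above.)

   # lists the exports offered by the first NFS server
   showmount -e aa.aaa.aaa.aaa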

I added a second volumeMount and its volumeClaimTemplate in cassandra-statefulset.yaml:

  # These volume mounts are persistent. They are like inline claims,
  # but not exactly because the names need to match exactly one of
  # the stateful pod volumes.
  volumeMounts:
  - name: k8s-eu-1-worker-1
    mountPath: /srv/shared-k8s-eu-1-worker-1
  - name: k8s-eu-1-worker-2
    mountPath: /srv/shared-k8s-eu-1-worker-2

  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
  - metadata:
      name: k8s-eu-1-worker-1
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: k8s-eu-1-worker-1
      resources:
        requests:
          storage: 1Gi
  - metadata:
      name: k8s-eu-1-worker-2
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: k8s-eu-1-worker-2
      resources:
        requests:
          storage: 1Gi
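
With two templates, the StatefulSet controller creates one claim per template for every pod, named <template-name>-<pod-name>; so for replica 0 the expected claims are:

   k8s-eu-1-worker-1-cassandra-0
   k8s-eu-1-worker-2-cassandra-0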

  ---
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: k8s-eu-1-worker-1
  provisioner: k8s-sigs.io/k8s-eu-1-worker-1
  parameters:
    type: pd-ssd
  ---
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: k8s-eu-1-worker-2
  provisioner: k8s-sigs.io/k8s-eu-1-worker-2
  parameters:
    type: pd-ssd
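
Each provisioner string must match the name that a running provisioner instance registers itself under, so a quick sanity check (assuming the provisioner deployments live in the default namespace) is:

   kubectl get storageclass
   kubectl get deploy | grep nfs-subdir-external-provisioner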

At first it seemed to go well:

   root@k8s-eu-1-master:~# kubectl apply -f ./cassandraStatefulApp/cassandra-statefulset.yaml 
   statefulset.apps/cassandra created

But the StatefulSet remains in a "not ready" state:

   root@k8s-eu-1-master:~# kubectl get sts
   NAME        READY   AGE
   cassandra   0/3     17m

  root@k8s-eu-1-master:~# kubectl describe sts cassandra
  Name:       cassandra
  Namespace:     default
  CreationTimestamp: Wed, 08 Nov 2023 12:02:10 +0100
  Selector:     app=cassandra
  Labels:      app=cassandra
  Annotations:    <none>
  Replicas:     3 desired | 1 total
  Update Strategy:  RollingUpdate
   Partition:    0
  Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
  Pod Template:
   Labels: app=cassandra
   Containers:
    cassandra:
    Image:   gcr.io/google-samples/cassandra:v13
    Ports:   7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
    Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Limits:
     cpu:  500m
     memory: 1Gi
    Requests:
     cpu:   500m
     memory: 1Gi
     Readiness: exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
     MAX_HEAP_SIZE:     512M
     HEAP_NEWSIZE:      100M
     CASSANDRA_SEEDS:    cassandra-0.cassandra.default.svc.cluster.local
     CASSANDRA_CLUSTER_NAME: K8Demo
     CASSANDRA_DC:      DC1-K8Demo
     CASSANDRA_RACK:     Rack1-K8Demo
     POD_IP:         (v1:status.podIP)
    Mounts:
     /srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1 (rw)
     /srv/shared-k8s-eu-1-worker-2 from k8s-eu-1-worker-2 (rw)
   Volumes: <none>
  Volume Claims:
   Name:     k8s-eu-1-worker-1
   StorageClass: k8s-eu-1-worker-1
   Labels:    <none>
   Annotations: <none>
   Capacity:   1Gi
   Access Modes: [ReadWriteOnce]
   Name:     k8s-eu-1-worker-2
   StorageClass: k8s-eu-1-worker-2
   Labels:    <none>
   Annotations: <none>
   Capacity:   1Gi
   Access Modes: [ReadWriteOnce]
  Events:
   Type    Reason            Age  From                    Message
   ----    ------            ---  ----                    -------
   Normal  SuccessfulCreate  18m  statefulset-controller  create Claim k8s-eu-1-worker-1-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
   Normal  SuccessfulCreate  18m  statefulset-controller  create Claim k8s-eu-1-worker-2-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
   Normal  SuccessfulCreate  18m  statefulset-controller  create Pod cassandra-0 in StatefulSet cassandra successful

And the corresponding pod remains "Pending":

   root@k8s-eu-1-master:~# kubectl get pods
   NAME                               READY STATUS  RESTARTS AGE
   cassandra-0                           0/1  Pending 0     19m
   k8s-eu-1-worker-1-nfs-subdir-external-provisioner-79fff4ff2qx7k 1/1  Running 0     19h

  root@k8s-eu-1-master:~# kubectl describe pod cassandra-0
  Name:      cassandra-0
  Namespace:    default
  Priority:    0
  Service Account: default
  Node:      <none>
  Labels:     app=cassandra
           apps.kubernetes.io/pod-index=0
           controller-revision-hash=cassandra-79d64cd8b
           statefulset.kubernetes.io/pod-name=cassandra-0
  Annotations:   <none>
  Status:     Pending
  IP:       
  IPs:       <none>
  Controlled By:  StatefulSet/cassandra
  Containers:
   cassandra:
    Image:   gcr.io/google-samples/cassandra:v13
    Ports:   7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
    Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Limits:
     cpu:  500m
     memory: 1Gi
    Requests:
     cpu:   500m
     memory: 1Gi
     Readiness: exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
     MAX_HEAP_SIZE:     512M
     HEAP_NEWSIZE:      100M
     CASSANDRA_SEEDS:    cassandra-0.cassandra.default.svc.cluster.local
     CASSANDRA_CLUSTER_NAME: K8Demo
     CASSANDRA_DC:      DC1-K8Demo
     CASSANDRA_RACK:     Rack1-K8Demo
     POD_IP:         (v1:status.podIP)
    Mounts:
     /srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1 (rw)
     /srv/shared-k8s-eu-1-worker-2 from k8s-eu-1-worker-2 (rw)
     /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxx58 (ro)
  Conditions:
   Type     Status
   PodScheduled False 
  Volumes:
   k8s-eu-1-worker-1:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: k8s-eu-1-worker-1-cassandra-0
    ReadOnly: false
   k8s-eu-1-worker-2:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: k8s-eu-1-worker-2-cassandra-0
    ReadOnly: false
   kube-api-access-wxx58:
    Type:          Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName:     kube-root-ca.crt
    ConfigMapOptional:   <nil>
    DownwardAPI:      true
  QoS Class:         Guaranteed
  Node-Selectors:       <none>
  Tolerations:        node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
  Events:
   Type  Reason      Age        From       Message
   ----  ------      ----       ----       -------
   Warning  FailedScheduling  20m                default-scheduler  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling..
   Warning  FailedScheduling  10m (x3 over 20m)  default-scheduler  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling..

The scheduler cannot place the pod while any of its claims is unbound, and indeed only one of the two persistent volume claims is "Bound"; the other is still "Pending":

  root@k8s-eu-1-master:~# kubectl get pvc
  NAME                            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
  k8s-eu-1-worker-1-cassandra-0   Bound     pvc-4f1d877b-8e01-4b76-b4e1-25bc226fd1a5   1Gi        RWO            k8s-eu-1-worker-1   21m
  k8s-eu-1-worker-2-cassandra-0   Pending                                                                        k8s-eu-1-worker-2   21m
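
(A useful check at this point: describing the stuck claim; the events of a Pending claim typically say whether it is still waiting on the external provisioner named in its StorageClass.)

   kubectl describe pvc k8s-eu-1-worker-2-cassandra-0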

What is wrong in my cassandra-statefulset.yaml setup above?

Answer 1

My mistake.

I had not created the second provisioner, k8s-eu-1-worker-2-nfs-subdir-external-provisioner (note that in the kubectl get pods output above, only the worker-1 provisioner pod is running).

Once I created it, the stateful pod went into the Running state.
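
For reference, a second instance can be installed from the provisioner's Helm chart, pointed at the second NFS export. This is only a sketch: the release name and server address are placeholders from my setup, and the value names (nfs.server, nfs.path, storageClass.name, storageClass.provisionerName) should be verified against the chart's values.yaml:

   helm repo add nfs-subdir-external-provisioner \
     https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
   # lets the chart create the k8s-eu-1-worker-2 StorageClass itself,
   # so the hand-written StorageClass above should be dropped first
   helm install k8s-eu-1-worker-2 nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
     --set nfs.server=bb.bbb.bbb.bbb \
     --set nfs.path=/srv/shared-k8s-eu-1-worker-2 \
     --set storageClass.name=k8s-eu-1-worker-2 \
     --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-2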
