Charmed K8s – Ceph storage class not creating volumes


I have deployed a local bare-metal Kubernetes cloud using Charmed Kubernetes, with Ceph as the storage backend.

The cluster deployed without problems, but when I try to create a volume (via a PVC), the claim stays stuck in the Pending state. The logs show an error saying that an operation already exists for this volume ID, followed by a timeout.
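For reference, a minimal PVC manifest that reproduces this situation (reconstructed from the claim description shown below; the requested size and access mode are placeholders, since the original manifest is not shown) looks roughly like:

```yaml
# Reconstructed sketch of the PVC; storage size and access mode are guesses
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myvol
  namespace: pa-cnfdevops-paccard
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-xfs
  resources:
    requests:
      storage: 1Gi
```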

I more or less followed Canonical's tutorial (https://ubuntu.com/tutorials/how-to-build-a-ceph-backed-kubernetes-cluster#1-overview) without success.

Here is some more information:

Ceph pools:

root@infra01:~/k8s-test# juju run-action --wait ceph-mon/leader list-pools
unit-ceph-mon-0:
  UnitId: ceph-mon/0
  id: "28"
  results:
    message: |
      1 device_health_metrics
      2 xfs-pool
      3 ext4-pool
  status: completed
  timing:
    completed: 2021-04-27 14:34:02 +0000 UTC
    enqueued: 2021-04-27 14:34:01 +0000 UTC
    started: 2021-04-27 14:34:01 +0000 UTC

Storage classes:

root@infra01:~# kubectl describe sc
Name:            ceph-ext4
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"cdk-addons":"true"},"name":"ceph-ext4"},"mountOptions":["discard"],"parameters":{"clusterID":"4898f638-a1a7-11eb-a288-5bcd87ddd233","csi.storage.k8s.io/controller-expand-secret-name":"csi-rbd-secret","csi.storage.k8s.io/controller-expand-secret-namespace":"default","csi.storage.k8s.io/fstype":"ext4","csi.storage.k8s.io/node-stage-secret-name":"csi-rbd-secret","csi.storage.k8s.io/node-stage-secret-namespace":"default","csi.storage.k8s.io/provisioner-secret-name":"csi-rbd-secret","csi.storage.k8s.io/provisioner-secret-namespace":"default","imageFeatures":"layering","pool":"ext4-pool"},"provisioner":"rbd.csi.ceph.com","reclaimPolicy":"Delete"}

Provisioner:           rbd.csi.ceph.com
Parameters:            clusterID=4898f638-a1a7-11eb-a288-5bcd87ddd233,csi.storage.k8s.io/controller-expand-secret-name=csi-rbd-secret,csi.storage.k8s.io/controller-expand-secret-namespace=default,csi.storage.k8s.io/fstype=ext4,csi.storage.k8s.io/node-stage-secret-name=csi-rbd-secret,csi.storage.k8s.io/node-stage-secret-namespace=default,csi.storage.k8s.io/provisioner-secret-name=csi-rbd-secret,csi.storage.k8s.io/provisioner-secret-namespace=default,imageFeatures=layering,pool=ext4-pool
AllowVolumeExpansion:  True
MountOptions:
  discard
ReclaimPolicy:      Delete
VolumeBindingMode:  Immediate
Events:             <none>


Name:            ceph-xfs
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"cdk-addons":"true"},"name":"ceph-xfs"},"mountOptions":["discard"],"parameters":{"clusterID":"4898f638-a1a7-11eb-a288-5bcd87ddd233","csi.storage.k8s.io/controller-expand-secret-name":"csi-rbd-secret","csi.storage.k8s.io/controller-expand-secret-namespace":"default","csi.storage.k8s.io/fstype":"xfs","csi.storage.k8s.io/node-stage-secret-name":"csi-rbd-secret","csi.storage.k8s.io/node-stage-secret-namespace":"default","csi.storage.k8s.io/provisioner-secret-name":"csi-rbd-secret","csi.storage.k8s.io/provisioner-secret-namespace":"default","imageFeatures":"layering","pool":"xfs-pool"},"provisioner":"rbd.csi.ceph.com","reclaimPolicy":"Delete"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           rbd.csi.ceph.com
Parameters:            clusterID=4898f638-a1a7-11eb-a288-5bcd87ddd233,csi.storage.k8s.io/controller-expand-secret-name=csi-rbd-secret,csi.storage.k8s.io/controller-expand-secret-namespace=default,csi.storage.k8s.io/fstype=xfs,csi.storage.k8s.io/node-stage-secret-name=csi-rbd-secret,csi.storage.k8s.io/node-stage-secret-namespace=default,csi.storage.k8s.io/provisioner-secret-name=csi-rbd-secret,csi.storage.k8s.io/provisioner-secret-namespace=default,imageFeatures=layering,pool=xfs-pool
AllowVolumeExpansion:  True
MountOptions:
  discard
ReclaimPolicy:      Delete
VolumeBindingMode:  Immediate
Events:             <none>

And the persistent volume claim:

root@infra01:~/k8s-test# kubectl describe pvc myvol
Name:          myvol
Namespace:     pa-cnfdevops-paccard
StorageClass:  ceph-xfs
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rbd.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                Age                    From                                                                                              Message
  ----     ------                ----                   ----                                                                                              -------
  Warning  ProvisioningFailed    34m                    rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-5ddcl_a95b48b7-31eb-4cc5-bcbe-2715dbd43451  failed to provision volume with StorageClass "ceph-xfs": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Normal   Provisioning          32m (x9 over 37m)      rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-5ddcl_a95b48b7-31eb-4cc5-bcbe-2715dbd43451  External provisioner is provisioning volume for claim "pa-cnfdevops-paccard/myvol"
  Warning  ProvisioningFailed    32m (x8 over 34m)      rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-5ddcl_a95b48b7-31eb-4cc5-bcbe-2715dbd43451  failed to provision volume with StorageClass "ceph-xfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-0414bcd6-38ff-4523-b2bd-93e3ef58eea5 already exists
  Normal   ExternalProvisioning  31m (x26 over 36m)     persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "rbd.csi.ceph.com" or manually created by system administrator
  Normal   ExternalProvisioning  28m (x5 over 29m)      persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "rbd.csi.ceph.com" or manually created by system administrator
  Warning  ProvisioningFailed    27m                    rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-9r2nh_11ff8e47-a296-401e-8f08-799f22fd5a02  failed to provision volume with StorageClass "ceph-xfs": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Normal   Provisioning          4m4s (x14 over 30m)    rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-9r2nh_11ff8e47-a296-401e-8f08-799f22fd5a02  External provisioner is provisioning volume for claim "pa-cnfdevops-paccard/myvol"
  Warning  ProvisioningFailed    4m4s (x12 over 27m)    rbd.csi.ceph.com_csi-rbdplugin-provisioner-549c6b54c6-9r2nh_11ff8e47-a296-401e-8f08-799f22fd5a02  failed to provision volume with StorageClass "ceph-xfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-0414bcd6-38ff-4523-b2bd-93e3ef58eea5 already exists
  Normal   ExternalProvisioning  2m53s (x105 over 27m)  persistentvolume-controller                                                                       waiting for a volume to be created, either by external provisioner "rbd.csi.ceph.com" or manually created by system administrator

Does anyone know what is causing this and how to fix it?

Thanks in advance!

Answer 1

I ran into the same problem. In my case, the cause was that pods had no connectivity to the Ceph cluster. I am using the Calico CNI, and because of bug LP#1895547, outbound traffic from the k8s cluster was not being NATed, so the return traffic from the Ceph cluster had nowhere to go.
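A quick way to confirm (or rule out) this kind of connectivity problem is to try reaching a Ceph monitor from one of the CSI plugin pods. A sketch, where the pod name, namespace, and monitor address are placeholders for your environment, and assuming `nc` is available in the plugin image:

```shell
# List the RBD plugin pods (label and namespace may differ in your deployment)
kubectl get pods -A -l app=csi-rbdplugin -o wide

# From one of them, test TCP connectivity to a Ceph mon
# (msgr v1 listens on port 6789, msgr v2 on 3300)
kubectl exec -n <namespace> <csi-rbdplugin-pod> -c csi-rbdplugin -- \
  nc -zv -w 5 <ceph-mon-ip> 6789
```

If the connection times out, the provisioner cannot reach the Ceph monitors, which matches the `DeadlineExceeded` errors in the PVC events.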

My fix was to set net.ipv4.conf.all.rp_filter=1 on all worker nodes. I used the sysconfig charm to do this.
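For anyone wanting to apply the same workaround, this is roughly what it looks like with the sysconfig charm. This is a sketch: the `sysctl` config option syntax should be checked against the charm's current documentation, and you can always fall back to setting the value directly on each worker.

```shell
# Sketch: deploy the sysconfig charm, attach it to the workers,
# and push the rp_filter setting via its sysctl option
juju deploy sysconfig
juju relate sysconfig kubernetes-worker
juju config sysconfig sysctl="{ net.ipv4.conf.all.rp_filter: 1 }"

# Manual alternative, run on each worker node (non-persistent):
sudo sysctl -w net.ipv4.conf.all.rp_filter=1
```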

Hope this helps!
