Kafka in a Kubernetes pod

I am new to setting things up with Kubernetes and am trying to run a Kafka pod with a persistent volume, so that if the pod goes down the stored data is not lost and I can bring up a new cluster from the persisted storage.

The problem is as follows.

Here is the Service and StatefulSet I have tried:

apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
   - port: 9092
  selector:
   app: kafka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
 name: kafka
spec:
 selector:
   matchLabels:
     app: kafka
 serviceName: "kafka"
 template:
   metadata:
     labels:
       app: kafka
   spec:
     terminationGracePeriodSeconds: 10
     containers:
     - name: kafka
       image: bitnami/kafka:latest
       # readinessProbe:
       #   httpGet:
       #     port: 7070
       #     path: /readiness
       #   initialDelaySeconds: 120
       #   periodSeconds: 15
       #   failureThreshold: 1
       # livenessProbe:
       #   httpGet:
       #     port: 7070
       #     path: /liveness
       #   initialDelaySeconds: 360
       #   periodSeconds: 15
       #   failureThreshold: 3
       ports:
       - containerPort: 9092
       volumeMounts:
       - name: kafka
          mountPath: /bitnami/kafka
 volumeClaimTemplates:
 - metadata:
     name: datadir
   spec:
     accessModes: [ "ReadWriteOnce" ]
     resources:
       requests:
         storage: 1Gi

But this does not seem to work. For some reason it goes into an error state with the message:

pod has unbound immediate PersistentVolumeClaims

I am not sure I understand this: isn't the volume only used by this one pod? The pod has not even been restarted yet, so I am confused as to why it does not work.
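
For reference, the claim and the pod can be inspected to see why binding fails. The commands below are only a sketch and assume the resource names produced by the manifest above (pod kafka-0, claim datadir-kafka-0):

# Show the claim created from volumeClaimTemplates and its binding status
kubectl get pvc datadir-kafka-0

# The Events section at the bottom explains why the pod cannot be scheduled
kubectl describe pod kafka-0

# The Events section here shows whether a PersistentVolume was provisioned for the claim
kubectl describe pvc datadir-kafka-0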

PVC config:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: datadir-kafka-0
  namespace: default
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/datadir-kafka-0
  uid: 264204f8-21cc-11ea-8f02-00155de9e001
  resourceVersion: '149105'
  creationTimestamp: '2019-12-18T19:25:28Z'
  labels:
    app: kafka
  annotations:
    control-plane.alpha.kubernetes.io/leader: >-
      {"holderIdentity":"640e4416-2192-11ea-978b-8c1645373373","leaseDurationSeconds":15,"acquireTime":"2019-12-18T19:25:28Z","renewTime":"2019-12-18T19:25:30Z","leaderTransitions":0}
    pv.kubernetes.io/bind-completed: 'yes'
    pv.kubernetes.io/bound-by-controller: 'yes'
    volume.beta.kubernetes.io/storage-provisioner: docker.io/hostpath
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pvc-264204f8-21cc-11ea-8f02-00155de9e001
  storageClassName: hostpath
  volumeMode: Filesystem
status:
  phase: Bound
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi

PV config:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvc-264204f8-21cc-11ea-8f02-00155de9e001
  selfLink: /api/v1/persistentvolumes/pvc-264204f8-21cc-11ea-8f02-00155de9e001
  uid: 264dab2e-21cc-11ea-8f02-00155de9e001
  resourceVersion: '149091'
  creationTimestamp: '2019-12-18T19:25:28Z'
  annotations:
    docker.io/hostpath: >-
      C:\Users\kube\.docker\Volumes\datadir-kafka-0\pvc-264204f8-21cc-11ea-8f02-00155de9e001
    pv.kubernetes.io/provisioned-by: docker.io/hostpath
  finalizers:
    - kubernetes.io/pv-protection
spec:
  capacity:
    storage: 1Gi
  hostPath:
    path: >-
      /host_mnt/c/Users/kube/.docker/Volumes/datadir-kafka-0/pvc-264204f8-21cc-11ea-8f02-00155de9e001
    type: ''
  accessModes:
    - ReadWriteOnce
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: datadir-kafka-0
    uid: 264204f8-21cc-11ea-8f02-00155de9e001
    apiVersion: v1
    resourceVersion: '149073'
  persistentVolumeReclaimPolicy: Delete
  storageClassName: hostpath
  volumeMode: Filesystem
status:
  phase: Bound

Answer 1

It seems that dynamic PersistentVolume provisioners do not support local storage, as described here.

You need to specify a storageClassName that points to your local volume in the datadir volumeClaimTemplates of your Kafka StatefulSet. You can read through an example here.

Try something like this:

 volumeClaimTemplates:
 - metadata:
     name: datadir
   spec:
     accessModes: [ "ReadWriteOnce" ]
     storageClassName: hostpath
     resources:
       requests:
         storage: 1Gi
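
If you are not sure which class name to use, you can list the storage classes available in your cluster; on Docker Desktop the built-in docker.io/hostpath provisioner is normally exposed as a class called hostpath, which matches the PV shown in the question:

kubectl get storageclass

Note that volumeClaimTemplates cannot be changed on an existing StatefulSet, so after editing it you will likely have to delete and recreate the StatefulSet (and, if you want a fresh volume, the old datadir-kafka-0 claim) for the new storageClassName to take effect.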
