Can't rebuild deployment with PersistentVolumeClaim

I want to create a MongoDB deployment with a PersistentVolumeClaim.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  selector:
    matchLabels:
      app: auth-mongo-pod-label
  template:
    metadata:
      labels:
        app: auth-mongo-pod-label
    spec:
      containers:
        - name: auth-mongo-pod
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-volume
              mountPath: /data/db
      volumes:
        - name: auth-mongo-volume
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo-pod-label
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017

Full code

First build after creating the Autopilot cluster:

Starting deploy...
 - Warning: Autopilot set default resource requests for Deployment default/auth-depl, as resource requests were not specified. See http://g.co/gke/autopilot-defaults
 - deployment.apps/auth-depl created
 - service/auth-srv created
 - persistentvolumeclaim/auth-mongo-pvc created
 - Warning: Autopilot set default resource requests for Deployment default/auth-mongo-depl, as resource requests were not specified. See http://g.co/gke/autopilot-defaults
 - deployment.apps/auth-mongo-depl created
 - service/auth-mongo-srv created
 - Warning: Autopilot set default resource requests for Deployment default/react-client-depl, as resource requests were not specified. See http://g.co/gke/autopilot-defaults
 - deployment.apps/react-client-depl created
 - service/react-client-srv created
 - ingress.networking.k8s.io/ingress-service created
Waiting for deployments to stabilize...
 - deployment/auth-depl: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
    - pod/auth-depl-77fd8b57f5-vk8cf: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
 - deployment/auth-mongo-depl: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
    - pod/auth-mongo-depl-7d967468f6-rkh79: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
 - deployment/react-client-depl: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
    - pod/react-client-depl-68dcb844f6-b8fm9: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
 - deployment/auth-depl: Unschedulable: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
    - pod/auth-depl-77fd8b57f5-vk8cf: Unschedulable: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
 - deployment/react-client-depl: Unschedulable: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
    - pod/react-client-depl-68dcb844f6-b8fm9: Unschedulable: 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
 - deployment/react-client-depl is ready. [2/3 deployment(s) still pending]
 - deployment/auth-depl is ready. [1/3 deployment(s) still pending]
 - deployment/auth-mongo-depl is ready.
Deployments stabilized in 2 minutes 31.285 seconds

Mongo pod
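
The warnings in the log above come from Autopilot injecting default resource requests because the manifests specify none, and the "Insufficient cpu, Insufficient memory" lines are what then force new nodes to be provisioned. One way to keep that footprint predictable is to set requests explicitly; a minimal sketch for the mongo container, with illustrative (not tuned) values:

      containers:
        - name: auth-mongo-pod
          image: mongo
          ports:
            - containerPort: 27017
          # Explicit requests so Autopilot does not substitute its own
          # defaults; 250m CPU / 512Mi memory are illustrative values.
          resources:
            requests:
              cpu: 250m
              memory: 512Mi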

If I click "Rebuild" in Google Cloud Build, I get:

Starting deploy...
 - deployment.apps/auth-depl configured
 - service/auth-srv configured
 - persistentvolumeclaim/auth-mongo-pvc unchanged
 - deployment.apps/auth-mongo-depl configured
 - service/auth-mongo-srv configured
 - deployment.apps/react-client-depl configured
 - service/react-client-srv configured
 - ingress.networking.k8s.io/ingress-service unchanged
Waiting for deployments to stabilize...
 - deployment/auth-depl: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
    - pod/auth-depl-666fdb5c64-cqnf6: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
 - deployment/auth-mongo-depl: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
    - pod/auth-mongo-depl-958db4cd5-db5pr: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
 - deployment/react-client-depl: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
    - pod/react-client-depl-54998f6c5b-wswz7: 0/5 nodes are available: 5 Insufficient cpu, 5 Insufficient memory. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.
 - deployment/auth-mongo-depl: Unschedulable: 0/1 nodes available: 1 node is not ready
    - pod/auth-mongo-depl-958db4cd5-db5pr: Unschedulable: 0/1 nodes available: 1 node is not ready
 - deployment/auth-depl is ready. [2/3 deployment(s) still pending]
 - deployment/react-client-depl is ready. [1/3 deployment(s) still pending]
1/3 deployment(s) failed
ERROR
ERROR: build step 0 "gcr.io/k8s-skaffold/skaffold:v2.2.0" failed: step exited with non-zero status: 1

Second pod stuck (pod events)

I'm not sure why it couldn't scale up ("Node scale up in zones us-central1-f associated with this pod failed: GCE quota exceeded. Pod is at risk of not being scheduled."); how can such a simple deployment exceed the quota? It does show "Scheduled" after the "FailedAttachVolume" event, though.

Should I change the skaffold.yml file so that it doesn't even try to rebuild the database deployment? I'm not familiar with taints; should I change the setup so that the same node gets reused?
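
One thing worth checking, offered as a sketch rather than a confirmed fix: a Deployment's default RollingUpdate strategy starts the replacement pod while the old pod still holds the ReadWriteOnce disk, which matches the FailedAttachVolume pattern above. Setting the strategy to Recreate tears the old pod down first, so the disk is free to attach:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  # Recreate deletes the old pod before starting the new one, so the
  # ReadWriteOnce volume is released before the replacement mounts it.
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: auth-mongo-pod-label
  # (template and volumes unchanged from the manifest above)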

I did try ReadWriteMany in a previous cluster, but it didn't work. Pod events
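
For context on why ReadWriteMany likely failed: standard GCE persistent disks only support ReadWriteOnce and ReadOnlyMany, so an RWX claim can't be provisioned from the default storage class. On GKE, RWX generally needs a file backend such as Filestore through its CSI driver; a minimal sketch, assuming the Filestore CSI driver and its standard-rwx storage class are enabled on the cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteMany
  # standard-rwx is the GKE Filestore CSI storage class; this assumes
  # the driver is enabled. Filestore instances have a large minimum size.
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti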

Answer 1

I ended up using a StatefulSet, since it's needed for horizontal scaling anyway.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-set
spec:
  selector:
    matchLabels:
      app: auth-mongo-pod-label
  serviceName: auth-mongo-srv
  template:
    metadata:
      labels:
        app: auth-mongo-pod-label
    spec:
      containers:
        - name: auth-mongo-pod
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-volume
              mountPath: /data/db
      volumes:
        - name: auth-mongo-volume
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo-pod-label
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017

I think each pod now gets its own PVC, so there should be no problem replacing the old pod with a new one (with the Deployment, both pods were using the same PVC).
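
One caveat on that, as written: the StatefulSet above still points every replica at the single shared auth-mongo-pvc, so the replicas would all mount the same volume. Per-pod PVCs come from volumeClaimTemplates rather than a standalone claim; a minimal sketch of that variant, reusing the names above:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-set
spec:
  selector:
    matchLabels:
      app: auth-mongo-pod-label
  serviceName: auth-mongo-srv
  template:
    metadata:
      labels:
        app: auth-mongo-pod-label
    spec:
      containers:
        - name: auth-mongo-pod
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-volume
              mountPath: /data/db
  # Each replica gets its own claim, named
  # auth-mongo-volume-auth-mongo-set-<ordinal>, created on first use.
  volumeClaimTemplates:
    - metadata:
        name: auth-mongo-volume
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Mi

What likely fixed the rebuild either way is that a StatefulSet deletes a pod before creating its replacement, so the ReadWriteOnce disk is released before the new pod tries to attach it.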
