I have deployed an NFS server with the following manifests:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pv-provisioning-demo
  namespace: nfs
  labels:
    role: nfs-server
spec:
  storageClassName: aws-gp3
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
  namespace: nfs
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: registry.k8s.io/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pv-provisioning-demo
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
  namespace: nfs
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
---
But when I use nfs-server.nfs.svc.cluster.local as the server name, it cannot be resolved. When I use the Cluster IP as the service endpoint instead, it works fine.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
  namespace: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.nfs.svc.cluster.local
    # server: 172.20.144.30
    path: "/"
  mountOptions:
    - nfsvers=4.2
Any idea what is wrong with this setup?
Answer 1
This is a known issue in Kubernetes: most likely an ordering problem where the NFS volume is mounted before the mounting component can reach the internal DNS. The mount is performed by the kubelet on the node itself, and nodes typically cannot resolve cluster-internal service names like *.svc.cluster.local.
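You can see the split for yourself (a quick diagnostic sketch; the pod name and busybox tag are just examples): the name resolves from inside a pod via the cluster DNS, but not on the node, where the NFS mount actually happens:

$ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
    -- nslookup nfs-server.nfs.svc.cluster.local

# On the node itself this typically fails, because the node's
# resolver knows nothing about *.svc.cluster.local:
$ nslookup nfs-server.nfs.svc.cluster.local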
A dirty workaround: if you want to keep using nfs-server.nfs.svc.cluster.local, you can edit the /etc/hosts file on every node and add the name next to the Service's IP:
$ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
172.20.144.30 nfs-server.nfs.svc.cluster.local
You can find more information here: https://github.com/kubernetes/minikube/issues/3417, or by googling server: nfs-server.nfs.svc.cluster.local.
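If editing /etc/hosts on every node is too invasive, the alternative you already confirmed works — pointing the PersistentVolume at the Service's Cluster IP — is reasonably safe, since the Cluster IP stays stable for the lifetime of the Service (it only changes if the Service is deleted and recreated, in which case the PV must be updated). You can look it up with something like:

$ kubectl get svc nfs-server -n nfs -o jsonpath='{.spec.clusterIP}'
172.20.144.30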