Error creating a cluster with Kubespray

I want to create a Kubernetes cluster using Kubespray. I have created three nodes: three remote virtual machines running openSUSE. I am following the official documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubespray/

After running the following command:

ansible-playbook -i inventory/local/hosts.yaml -u root --become --become-user=root cluster.yml
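
In case the extra output helps, the same playbook can be rerun with Ansible's verbosity flag to see the full detail of the failing task:

ansible-playbook -i inventory/local/hosts.yaml -u root --become --become-user=root cluster.yml -vvv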

I get the following error:

TASK [etcd : Configure | Ensure etcd is running] ***********************************************************************************************************************************************************************************************************************************************************************
ok: [node1]
ok: [node2]
ok: [node3]
Friday 10 March 2023  11:33:50 +0000 (0:00:00.589)       0:05:23.939 ********** 
Friday 10 March 2023  11:33:50 +0000 (0:00:00.064)       0:05:24.004 ********** 
FAILED - RETRYING: [node1]: Configure | Wait for etcd cluster to be healthy (4 retries left).
FAILED - RETRYING: [node1]: Configure | Wait for etcd cluster to be healthy (3 retries left).
FAILED - RETRYING: [node1]: Configure | Wait for etcd cluster to be healthy (2 retries left).
FAILED - RETRYING: [node1]: Configure | Wait for etcd cluster to be healthy (1 retries left).

TASK [etcd : Configure | Wait for etcd cluster to be healthy] **********************************************************************************************************************************************************************************************************************************************************
fatal: [node1]: FAILED! => {"attempts": 4, "changed": false, "cmd": "set -o pipefail && /usr/local/bin/etcdctl endpoint --cluster status && /usr/local/bin/etcdctl endpoint --cluster health 2>&1 | grep -v 'Error: unhealthy cluster' >/dev/null", "delta": "0:00:05.030601", "end": "2023-03-10 06:34:33.341401", "msg": "non-zero return code", "rc": 1, "start": "2023-03-10 06:34:28.310800", "stderr": "{\"level\":\"warn\",\"ts\":\"2023-03-10T06:34:33.340-0500\",\"logger\":\"etcd-client\",\"caller\":\"[email protected]/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc00031a8c0/192.168.122.233:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"}\nFailed to get the status of endpoint https://192.168.122.120:2379 (context deadline exceeded)", "stderr_lines": ["{\"level\":\"warn\",\"ts\":\"2023-03-10T06:34:33.340-0500\",\"logger\":\"etcd-client\",\"caller\":\"[email protected]/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc00031a8c0/192.168.122.233:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"}", "Failed to get the status of endpoint https://192.168.122.120:2379 (context deadline exceeded)"], "stdout": "https://192.168.122.233:2379, 4dc4060cd0d7d06, 3.5.6, 20 kB, false, false, 2, 7, 7, ", "stdout_lines": ["https://192.168.122.233:2379, 4dc4060cd0d7d06, 3.5.6, 20 kB, false, false, 2, 7, 7, "]}

NO MORE HOSTS LEFT *****************************************************************************************************************************************************************************************************************************************************************************************************

PLAY RECAP *************************************************************************************************************************************************************************************************************************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
node1                      : ok=517  changed=5    unreachable=0    failed=1    skipped=612  rescued=0    ignored=0   
node2                      : ok=483  changed=5    unreachable=0    failed=0    skipped=529  rescued=0    ignored=0   
node3                      : ok=436  changed=5    unreachable=0    failed=0    skipped=507  rescued=0    ignored=0   
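
To reproduce the failing health check by hand, I believe something like the following can be run directly on node1. The environment variables and certificate paths are assumptions based on Kubespray's default etcd TLS layout under /etc/ssl/etcd/ssl/, not something I have confirmed on these nodes:

# On node1; certificate names/paths assume Kubespray's default etcd install
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
export ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem
export ETCDCTL_CERT=/etc/ssl/etcd/ssl/admin-node1.pem
export ETCDCTL_KEY=/etc/ssl/etcd/ssl/admin-node1-key.pem
/usr/local/bin/etcdctl endpoint --cluster status
/usr/local/bin/etcdctl endpoint --cluster health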

Here is my hosts.yaml:

all:
  hosts:
    node1:
      ansible_host: 134.122.85.85
      ip: 134.122.85.85
      access_ip: 134.122.85.85
    node2:
      ansible_host: 134.122.69.63
      ip: 134.122.69.63
      access_ip: 134.122.69.63
    node3:
      ansible_host: 161.35.28.90
      ip: 161.35.28.90
      access_ip: 161.35.28.90
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube_control_plane:
        kube-node:
    calico-rr:
      hosts: {}

There is network connectivity between the hosts.
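
To double-check connectivity on the etcd ports specifically, a quick test along these lines can be run from node1 (IPs taken from the inventory above; assumes nc or a similar tool is installed):

# From node1: probe the etcd client (2379) and peer (2380) ports on the other members
for host in 134.122.69.63 161.35.28.90; do
  nc -zv -w 3 "$host" 2379
  nc -zv -w 3 "$host" 2380
done

I also notice that the etcdctl output above refers to endpoints on 192.168.122.x, while my inventory uses the public IPs, in case that is relevant.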
