Ceph can't find OSDs

I installed a Ceph cluster using cephadm bootstrap.

I can see the disks in the inventory, but they do not show up in the device list. Why is that, and how do I add the disks to the cluster?

root@RX570:~# ceph-volume inventory

Device Path               Size         rotates available Model name
/dev/sdl                  7.28 TB      True    True      USB3.0
/dev/sdm                  7.28 TB      True    True      USB3.0
/dev/sdn                  7.28 TB      True    True      USB3.0
/dev/sdo                  7.28 TB      True    True      USB3.0
/dev/sdp                  7.28 TB      True    True      USB3.0
/dev/nvme0n1              1.82 TB      False   False     Samsung SSD 980 PRO 2TB
/dev/sda                  3.64 TB      False   False     Samsung SSD 860
/dev/sdb                  16.37 TB     True    False     USB3.0
/dev/sdc                  16.37 TB     True    False     USB3.0
/dev/sdd                  16.37 TB     True    False     USB3.0
/dev/sde                  16.37 TB     True    False     USB3.0
/dev/sdf                  16.37 TB     True    False     USB3.0
/dev/sdg                  16.37 TB     True    False     USB3.0
/dev/sdh                  16.37 TB     True    False     USB3.0
/dev/sdi                  16.37 TB     True    False     USB3.0
/dev/sdj                  16.37 TB     True    False     USB3.0
/dev/sdk                  16.37 TB     True    False     USB3.0

root@RX570:~# ceph orch device ls
root@RX570:~# 

root@RX570:~# ceph orch host ls
HOST   ADDR           LABELS  STATUS  
RX570  192.168.1.227  _admin          
1 hosts in cluster

root@RX570:~# docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED              STATUS              PORTS     NAMES
8bee4afbafce   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mon -…"   9 seconds ago        Up 9 seconds                  ceph-2243dcbe-9494-11ed-953a-e14796764522-mon-RX570
e4c133a3b1e8   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About a minute ago   Up About a minute             ceph-b1dee40a-94a7-11ed-a3c1-29bb7e5ec517-crash-RX570
f81e05a1b7d4   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About a minute ago   Up About a minute             ceph-86827f26-94aa-11ed-a3c1-29bb7e5ec517-crash-RX570
a3bb6d078fd5   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About a minute ago   Up About a minute             ceph-ddbfff1c-94ef-11ed-a3c1-29bb7e5ec517-crash-RX570
9615b2f3fd22   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About a minute ago   Up About a minute             ceph-2243dcbe-9494-11ed-953a-e14796764522-crash-RX570
0c717d30704e   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About a minute ago   Up About a minute             ceph-3d0a8c9c-94a2-11ed-a3c1-29bb7e5ec517-crash-RX570
494f07c609d8   quay.io/ceph/ceph-grafana:8.3.5           "/bin/sh -c 'grafana…"   25 minutes ago       Up 25 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-grafana-RX570
9ad68d8eecca   quay.io/prometheus/alertmanager:v0.23.0   "/bin/alertmanager -…"   25 minutes ago       Up 25 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-alertmanager-RX570
f39f9290b628   quay.io/prometheus/prometheus:v2.33.4     "/bin/prometheus --c…"   26 minutes ago       Up 26 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-prometheus-RX570
b0b1713c4200   quay.io/ceph/ceph                         "/usr/bin/ceph-mgr -…"   26 minutes ago       Up 26 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-mgr-RX570-ztegxs
43f2e378e521   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   26 minutes ago       Up 26 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-crash-RX570
b88ecf269889   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mgr -…"   28 minutes ago       Up 28 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-mgr-RX570-whcycj
25c7ac170460   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mon -…"   28 minutes ago       Up 28 minutes                 ceph-9b740ba0-94f2-11ed-a3c1-29bb7e5ec517-mon-RX570
84adac6e89d8   quay.io/prometheus/node-exporter:v1.3.1   "/bin/node_exporter …"   31 minutes ago       Up 31 minutes                 ceph-12bf4064-94f1-11ed-a3c1-29bb7e5ec517-node-exporter-RX570
b7601e5b4611   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   31 minutes ago       Up 31 minutes                 ceph-12bf4064-94f1-11ed-a3c1-29bb7e5ec517-crash-RX570

root@RX570:~# ceph status
  cluster:
    id:     9b740ba0-94f2-11ed-a3c1-29bb7e5ec517
    health: HEALTH_WARN
            Failed to place 1 daemon(s)
            failed to probe daemons or devices
            OSD count 0 < osd_pool_default_size 2
 
  services:
    mon: 1 daemons, quorum RX570 (age 28m)
    mgr: RX570.whcycj(active, since 26m), standbys: RX570.ztegxs
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
root@RX570:~# ceph health
HEALTH_WARN Failed to place 1 daemon(s); failed to probe daemons or devices; OSD count 0 < osd_pool_default_size 2

Answer 1

Make sure you add all hosts to /etc/hosts:

# Ceph
<public_network_ip> ceph-1
<public_network_ip> ceph-2
<public_network_ip> ceph-3
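
After editing the file, a quick sanity check (my suggestion, not part of the original answer) is to confirm each name resolves to the intended public-network address:

getent hosts ceph-1
getent hosts ceph-2
getent hosts ceph-3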

Then add the Ceph nodes to the cluster:

ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@<host_ip>

ceph orch host add <host_name> <host_ip>
ceph orch host label add <host_name> <role>
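
As a concrete end-to-end sketch for one extra node (the hostname ceph-2, the IP 192.168.1.228, and the osd label are placeholders, not taken from the question):

ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@192.168.1.228
ceph orch host add ceph-2 192.168.1.228
ceph orch host label add ceph-2 osd
ceph orch host ls    # the new node should now appear in the host list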

Then add OSD daemons on the disks:

ceph orch daemon add osd ceph-1:/dev/sdm
ceph orch daemon add osd ceph-1:/dev/sdn
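
Assuming the daemons deploy cleanly, you can verify that the devices and OSDs are now visible with the standard status commands:

ceph orch device ls    # devices should now be listed per host
ceph osd tree          # new OSDs should appear as "up"
ceph status            # OSD count should increase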

This also works, but I don't recommend it:

ceph orch apply osd --all-available-devices
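
This tells cephadm to create an OSD on every device it considers available, on every managed host, now and as new disks appear, which is why adding devices explicitly gives more control. For reference (hedged, based on the upstream cephadm documentation), the automatic creation can be paused afterwards with:

ceph orch apply osd --all-available-devices --unmanaged=true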
