Why doesn't Ceph detect the SSD devices on a new node?

I have installed a Ceph cluster (Quincy) that already has 2 nodes and 4 OSDs. I have now added a third host, running Debian (bullseye), to the cluster. The new host is detected correctly and is running a mon.

The problem is that no OSDs are listed on the new host, even though 2 disks should be available there. When I run this command on one of the nodes:

$ sudo ceph orch device ls

I only see the devices of the other nodes; the new node is not listed.
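The orchestrator caches its device inventory, so a freshly added host's disks may not appear right away. One thing that may be worth trying (a sketch; both flags exist on `ceph orch device ls` in Quincy) is forcing a rescan and listing the reject reasons:

```shell
# Force the orchestrator to rescan devices instead of serving its cache,
# then show the wide output, which includes per-device reject reasons.
sudo ceph orch device ls --refresh
sudo ceph orch device ls --wide
```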

lsblk shows two available disks on the new host:

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    1 476.9G  0 disk 
sdb      8:16   1 476.9G  0 disk 
├─sdb1   8:17   1    16G  0 part [SWAP]
├─sdb2   8:18   1     1G  0 part /boot
└─sdb3   8:19   1 459.9G  0 part /
sdc      8:32   1 476.9G  0 disk 

I also tried the ceph-volume command on the new host, but it did not find any disks either:

$ sudo cephadm ceph-volume inventory
Inferring fsid e79............
Device Path               Size         Device nodes    rotates available Model name

I have since removed the new host and reinstalled a fresh operating system on it, but I still don't know why Ceph cannot find any disks.

Does Ceph not allow mixing nodes with SSD/SATA and SSD/NVMe?


The cephadm.log output during the ceph-volume inventory call does not seem to provide any additional information:

2022-12-08 00:15:15,432 7fdca25ac740 DEBUG --------------------------------------------------------------------------------
cephadm ['ceph-volume', 'inventory']
2022-12-08 00:15:15,432 7fdca25ac740 DEBUG Using default config /etc/ceph/ceph.conf
2022-12-08 00:15:16,131 7fee4d4c8740 DEBUG --------------------------------------------------------------------------------
cephadm ['check-host']
2022-12-08 00:15:16,131 7fee4d4c8740 INFO docker (/usr/bin/docker) is present
2022-12-08 00:15:16,131 7fee4d4c8740 INFO systemctl is present
2022-12-08 00:15:16,131 7fee4d4c8740 INFO lvcreate is present
2022-12-08 00:15:16,176 7fee4d4c8740 INFO Unit ntp.service is enabled and running
2022-12-08 00:15:16,176 7fee4d4c8740 INFO Host looks OK
2022-12-08 00:15:16,444 7f370bfbf740 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45', 'ls']
2022-12-08 00:15:20,100 7fdca25ac740 INFO Inferring fsid 0f3cd66c-74e5-11ed-813b-901b0e95a162
2022-12-08 00:15:20,121 7fdca25ac740 DEBUG /usr/bin/docker: stdout quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45|cc65afd6173a|v17|2022-10-18 01:41:41 +0200 CEST
2022-12-08 00:15:22,253 7f6f2e30a740 DEBUG --------------------------------------------------------------------------------
cephadm ['gather-facts']
2022-12-08 00:15:22,482 7f82221ce740 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45', 'list-networks']
2022-12-08 00:15:24,261 7fdca25ac740 DEBUG Using container info for daemon 'mon'
2022-12-08 00:15:24,261 7fdca25ac740 INFO Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 01:41:41 +0200 CEST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45

ceph-volume.log output:

[2022-12-07 23:24:00,496][ceph_volume.main][INFO  ] Running command: ceph-volume  inventory
[2022-12-07 23:24:00,499][ceph_volume.util.system][INFO  ] Executable lvs found on the host, will use /sbin/lvs
[2022-12-07 23:24:00,499][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S  -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2022-12-07 23:24:00,560][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk -P -o NAME,KNAME,PKNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
[2022-12-07 23:24:00,569][ceph_volume.process][INFO  ] stdout NAME="sda" KNAME="sda" PKNAME="" MAJ:MIN="8:0" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Crucial_CT500MX2" SIZE="465.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO  ] stdout NAME="sda1" KNAME="sda1" PKNAME="sda" MAJ:MIN="8:1" FSTYPE="swap" MOUNTPOINT="[SWAP]" LABEL="" UUID="51f95805-2d5f-4cba-a885-775a0c19ad53" RO="0" RM="1" MODEL="" SIZE="32G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO  ] stdout NAME="sda2" KNAME="sda2" PKNAME="sda" MAJ:MIN="8:2" FSTYPE="ext3" MOUNTPOINT="/rootfs/boot" LABEL="" UUID="676438b6-3214-4c05-bc6b-94bd7a88c26f" RO="0" RM="1" MODEL="" SIZE="1G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO  ] stdout NAME="sda3" KNAME="sda3" PKNAME="sda" MAJ:MIN="8:3" FSTYPE="ext4" MOUNTPOINT="/rootfs" LABEL="" UUID="a251c9b0-a91c-4768-bd42-5730e032ce58" RO="0" RM="1" MODEL="" SIZE="432.8G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="sda" PARTLABEL=""
[2022-12-07 23:24:00,570][ceph_volume.process][INFO  ] stdout NAME="sdb" KNAME="sdb" PKNAME="" MAJ:MIN="8:16" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="1" MODEL="Crucial_CT500MX2" SIZE="465.8G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="0" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="4K" DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
[2022-12-07 23:24:00,573][ceph_volume.util.system][INFO  ] Executable pvs found on the host, will use /sbin/pvs
[2022-12-07 23:24:00,573][ceph_volume.process][INFO  ] Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o pv_name,vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
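One detail stands out in the lsblk lines above: every disk is reported with RM="1", i.e. the kernel flags these SATA SSDs as removable (common with hot-swap/AHCI ports), and ceph-volume rejects devices it considers removable. A quick diagnostic sketch to confirm this on the new host (adjust the glob if your device names differ):

```shell
#!/bin/sh
# Print the kernel's "removable" flag for each block device. A value of 1
# here would match the RM="1" seen in the ceph-volume log and would explain
# why the inventory comes back empty.
for f in /sys/block/*/removable; do
    [ -e "$f" ] || continue          # skip if no block devices matched
    printf '%s %s\n' "$f" "$(cat "$f")"
done
```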

Answer 1

After searching for a while and still failing to get the SAS devices in my node detected, I managed to bring my drives up as OSDs by adding them manually with the following commands:

cephadm shell
ceph orch daemon add osd --method raw host1:/dev/sda
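If the raw method works, the new OSD should appear shortly afterwards. Something like the following, run from inside `cephadm shell` (with `host1` standing in for your hostname, as in the commands above), can confirm it came up:

```shell
# Verify the manually added OSD daemon is running and placed in the CRUSH tree.
ceph orch ps --daemon-type osd
ceph osd tree
```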
