I created a small Ceph cluster following the quick start guide, with one exception: instead of using a directory for each OSD I used a dedicated disk. So instead of:
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
I issued:
ceph-deploy osd prepare node2:/dev/sdb node3:/dev/sdb
ceph-deploy osd activate node2:/dev/sdb1 node3:/dev/sdb1
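(For reference only, a rough way to confirm the disk-based OSDs were set up, assuming the default ceph-deploy behaviour where prepare partitions /dev/sdb and activate mounts the data partition under /var/lib/ceph/osd/ on each node:)
lsblk /dev/sdb
df -h | grep ceph
mount | grep ceph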
In the same environment the directory-based approach works fine and the cluster reaches the active+clean state.
I verified that both OSDs show as up and tried to follow the troubleshooting guide, but none of the methods described there seem to help.
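(For reference, the basic status checks one would run in this situation look roughly like the following; exact subcommands can vary between Ceph releases:)
ceph health detail
ceph osd stat
ceph pg dump_stuck unclean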
Here is the output of ceph osd tree, ceph -s and ceph osd dump:
# id weight type name up/down reweight
-1 0 root default
-2 0 host node2
0 0 osd.0 up 1
-3 0 host node3
1 0 osd.1 up 1
cluster 5d7d7a6f-63c9-43c5-aebb-5458fd3ae43e
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
monmap e1: 1 mons at {node1=10.10.10.12:6789/0}, election epoch 1, quorum 0 node1
osdmap e8: 2 osds: 2 up, 2 in
pgmap v15: 192 pgs, 3 pools, 0 bytes data, 0 objects
68476 kB used, 6055 MB / 6121 MB avail
192 active+degraded
epoch 8
fsid 5d7d7a6f-63c9-43c5-aebb-5458fd3ae43e
created 2015-04-04 21:45:58.089596
modified 2015-04-04 23:26:06.840590
flags
pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
max_osd 2
osd.0 up in weight 1 up_from 4 up_thru 4 down_at 0 last_clean_interval [0,0) 10.10.10.13:6800/1749 10.10.10.13:6801/1749 10.10.10.13:6802/1749 10.10.10.13:6803/1749 exists,up 42d5622d-8907-4991-a6b6-869190c21678
osd.1 up in weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval [0,0) 10.10.10.14:6800/1750 10.10.10.14:6801/1750 10.10.10.14:6802/1750 10.10.10.14:6803/1750 exists,up b0a515d3-5f24-4e69-a5b3-1e094617b5b4
Answer 1
After further research it turned out the clue was right there in the osd tree output: the weights were all set to 0. This appears to be a problem with Ceph or the ceph-deploy scripts, since it is 100% reproducible. Resetting the OSD weights in the CRUSH map fixes it. All I had to do was issue the following commands:
ceph osd crush reweight osd.0 6
ceph osd crush reweight osd.1 6
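For what it's worth, the value 6 is fairly arbitrary here; what matters is that the CRUSH weight is no longer 0. By convention the weight usually reflects the OSD's capacity (roughly in TiB), so on a real cluster one would pick a value proportional to the disk size. After reweighting, the PGs should peer and the cluster should settle at active+clean, which can be checked with for example:
ceph osd tree
ceph -s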