“ceph-volume lvm new-db” does not seem to perform as it should

Despite the error, the DB device appears to have been added to the OSD (by ceph-volume lvm new-db). So far I cannot tell whether the DB device was actually added successfully, or whether this is a bug in Ceph version 17.2.7.

Here is what I noticed: initially, ceph-volume lvm list /dev/sdb did not show any DB device. After running ceph-volume lvm new-db (which threw the error), a DB device was shown. Finally, I ran ceph config set osd.X bluestore_block_db_size SIZE, which worked without problems.

Here are the logs of what I described above:

root@nerffs03:/# ceph-volume lvm list /dev/sdb 


====== osd.1 =======

  [block]       /dev/ceph-6ced433b-30ae-4ce3-a07f-435164a45385/osd-block-14743211-392a-454e-a9f8-b9ee5db44f9b

      block device              /dev/ceph-6ced433b-30ae-4ce3-a07f-435164a45385/osd-block-14743211-392a-454e-a9f8-b9ee5db44f9b
      block uuid                agwZ3b-KOtH-HHhV-iZSb-0B2R-UA0M-LFXFnU
      cephx lockbox secret      
      cluster fsid              c369bf61-431b-11ee-bea8-19204043d6b4
      cluster name              ceph
      crush device class        
      encrypted                 0
      osd fsid                  14743211-392a-454e-a9f8-b9ee5db44f9b
      osd id                    1
      osdspec affinity          None
      type                      block
      vdo                       0
      devices                   /dev/sdb
root@nerffs03:/# ceph-volume lvm new-db --osd-id 1 --osd-fsid 14743211-392a-454e-a9f8-b9ee5db44f9b --target cephdb03/cephdb03osd1
--> Making new volume at /dev/cephdb03/cephdb03osd1 for OSD: 1 (/var/lib/ceph/osd/ceph-1)
 stdout: inferring bluefs devices from bluestore path
 stderr: failed to add DB device:(2) No such file or directory
 stderr: 2023-11-16T10:54:47.541+0000 7f1d9217c880 -1 bluestore(/var/lib/ceph/osd/ceph-1/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-1/block: (2) No such file or directory
 stderr: 2023-11-16T10:54:47.541+0000 7f1d9217c880 -1 bluestore(/var/lib/ceph/osd/ceph-1) _open_db_and_around failed to load os-type: (2) No such file or directory
--> failed to attach new volume, error code:254
--> Undoing lv tag set
Failed to attach new volume: cephdb03/cephdb03osd1
root@nerffs03:/# ceph-volume lvm list /dev/sdb 


====== osd.1 =======

  [block]       /dev/ceph-6ced433b-30ae-4ce3-a07f-435164a45385/osd-block-14743211-392a-454e-a9f8-b9ee5db44f9b

      block device              /dev/ceph-6ced433b-30ae-4ce3-a07f-435164a45385/osd-block-14743211-392a-454e-a9f8-b9ee5db44f9b
      block uuid                agwZ3b-KOtH-HHhV-iZSb-0B2R-UA0M-LFXFnU
      cephx lockbox secret      
      cluster fsid              c369bf61-431b-11ee-bea8-19204043d6b4
      cluster name              ceph
      crush device class        
      db device                 /dev/cephdb03/cephdb03osd1
      db uuid                   fUYlTg-1ErX-mRr5-JWsz-ji7b-gUF3-3x1P3f
      encrypted                 0
      osd fsid                  14743211-392a-454e-a9f8-b9ee5db44f9b
      osd id                    1
      osdspec affinity          None
      type                      block
      vdo                       0
      devices                   /dev/sdb
root@nerffs03:/# ceph config set osd.1 bluestore_block_db_size 270000000000

Finally, I verified the setting above with ceph config get, and it looks fine, as shown below:

root@nerffs03:/# ceph config get osd.1
WHO    MASK           LEVEL     OPTION                      VALUE                                                                                      RO
osd.1                 dev       bluestore_block_db_size     270000000000                                                                               * 
osd.1                 basic     container_image             quay.io/ceph/ceph@sha256:c35e50d1b0e75d62b777d0608ed51b4e5d3def11038adb79f6cfe368a7159166  * 
osd    host:nerffs03  basic     osd_memory_target           3115763950                                                                                   
osd                   advanced  osd_memory_target_autotune  true                                                                                         
root@nerffs03:/# 
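
Since ceph-volume lvm list builds its output from LVM tags rather than from the live OSD, the db device entry above may only reflect a tag that survived the rollback. A quick cross-check I can think of, assuming standard LVM tooling and the device names from my logs:

# ceph-volume reads its metadata from ceph.* LV tags; if the attach really
# completed, ceph.db_device should be set on both the block LV and the db LV
lvs -o lv_name,vg_name,lv_tags ceph-6ced433b-30ae-4ce3-a07f-435164a45385
lvs -o lv_name,vg_name,lv_tags cephdb03/cephdb03osd1

# a successfully attached db LV should also carry a BlueStore label
# (its description field reads "bluefs db")
ceph-bluestore-tool show-label --dev /dev/cephdb03/cephdb03osd1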

Does anyone know whether the DB device has been correctly added to the OSD? If not, how can I add the corresponding missing files under /var/lib/ceph/osd/?
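
For what it's worth, the container_image entry above suggests this is a cephadm deployment, so /var/lib/ceph/osd/ceph-1 probably only exists inside the OSD container, which would explain the "No such file or directory" errors when ceph-volume runs outside of it. A sketch of what I would try next, assuming cephadm is available on the host:

# open a shell with osd.1's data directory mounted at /var/lib/ceph/osd/ceph-1
cephadm shell --name osd.1

# inside that shell the path should exist; with the OSD stopped,
# ceph-bluestore-tool can then report whether a separate db device is in use
ls /var/lib/ceph/osd/ceph-1
ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-1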

Thanks!
