How do I define the Ceph OSD disk partition size?
It always creates only 10 GB of usable space.
- Disk size = 3.9 TB
- Partition size = 3.7 TB
- Used ceph-disk prepare and ceph-disk activate (see below)
- The OSD is created, but with only 10 GB instead of 3.7 TB
Commands used
root@proxmox:~# ceph-disk prepare --cluster ceph --cluster-uuid fea02667-f17d-44fd-a4c2-a8e19d05ed51 --fs-type xfs /dev/sda4
meta-data=/dev/sda4 isize=2048 agcount=4, agsize=249036799 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=0, rmapbt=0, reflink=0
data = bsize=4096 blocks=996147194, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=486399, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
root@proxmox:~# ceph-disk activate /dev/sda4
creating /var/lib/ceph/tmp/mnt.jyqJTM/keyring
added entity osd.0 auth auth(auid = 18446744073709551615 key=AQAohgpdjwb3NRAAIrINUiXDWQ5iMWp4Ueah3Q== with 0 caps)
got monmap epoch 3
2019-06-19 19:59:54.006226 7f966e628e00 -1 bluestore(/var/lib/ceph/tmp/mnt.jyqJTM/block) _read_bdev_label failed to open /var/lib/ceph/tmp/mnt.jyqJTM/block: (2) No such file or directory
2019-06-19 19:59:54.006285 7f966e628e00 -1 bluestore(/var/lib/ceph/tmp/mnt.jyqJTM/block) _read_bdev_label failed to open /var/lib/ceph/tmp/mnt.jyqJTM/block: (2) No such file or directory
2019-06-19 19:59:55.668619 7f966e628e00 -1 created object store /var/lib/ceph/tmp/mnt.jyqJTM for osd.0 fsid fea02667-f17d-44fd-a4c2-a8e19d05ed51
Created symlink /run/systemd/system/ceph-osd.target.wants/[email protected] → /lib/systemd/system/[email protected].
# Don't worry about my keys/IDs, it's just a dev environment.
Disk layout
root@proxmox:~# fdisk -l
Disk /dev/sda: 3.9 TiB, 4294967296000 bytes, 8388608000 sectors
OMITTED
Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 419430400 418379777 199.5G Linux LVM
/dev/sda4 419430408 8388607966 7969177559 3.7T Ceph OSD
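To confirm that /dev/sda4 really carries the Ceph OSD partition type applied by sgdisk in the script further below, a quick check looks like this (a minimal sketch; partition number 4 and /dev/sda are taken from the layout above):
sgdisk --info=4 /dev/sda
# Should report the partition GUID code 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Ceph OSD)
# and the partition name "CephOSD" set earlier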
Incorrect Ceph OSD disk size (10 GB instead of 3.7 TB)
root@proxmox:~# ceph status
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 1.00GiB used, 9.00GiB / 10GiB avail
pgs:
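The answer below traces this to the bluestore block file. On a setup like this, the 10 GiB figure can be confirmed directly (a hedged sketch; osd.0 and the path follow the logs above and in the answer):
# Per-OSD capacity as Ceph reports it
ceph osd df
# Size of the file that bluestore created to back the OSD
ls -lh /var/lib/ceph/osd/ceph-0/block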
Full installation details
If you want the details of the Proxmox installation and of creating a Ceph OSD on a partition, read on...
Setup
- Disk size: 2 TB NVMe (/dev/sda)
- The OS (Proxmox) is installed on 200 GB; the rest of the disk (1800 GB) is left empty.
- After booting into the web interface, create a cluster and join two hosts to get a green quorum status
- Then run the script below
Setup script
# Install Ceph
pveceph install
# Configure Network (Just run on Primary Proxmox Server, your LAN network)
pveceph init --network 192.168.6.0/24
# Create Monitor
pveceph createmon
# View Disks Before
sgdisk --print /dev/sda
sgdisk --largest-new=4 --change-name="4:CephOSD" \
--partition-guid=4:4fbd7e29-9d25-41b8-afd0-062c0ceff05d \
--typecode=4:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sda
# View Disks After (Compare)
sgdisk --print /dev/sda
# Reboot for changes to be in effect
reboot
# Note your cluster ID (fsid) at this point from the web interface:
#   Datacenter > Server > Ceph
# Prepare the Ceph OSD Disk, replace cluster-uuid with above fsid
ceph-disk prepare --cluster ceph --cluster-uuid fea02667-f17d-44fd-a4c2-a8e19d05ed51 --fs-type xfs /dev/sda4
# Activate the Ceph OSD Disk
ceph-disk activate /dev/sda4
# Check Ceph OSD Disk Size
ceph status
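As an aside, instead of copying the fsid from the web interface, it can also be read on the node itself once pveceph init has run (a small sketch; on Proxmox /etc/ceph/ceph.conf is, as far as I know, a symlink to /etc/pve/ceph.conf):
# Print the cluster fsid used as --cluster-uuid above
ceph fsid
grep fsid /etc/ceph/ceph.conf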
Note: I have read posts strongly recommending whole disks over partitions because of performance concerns. I understand those warnings, but in my case I am using NVMe SSD storage and accept any risk.
Answer 1
We recently ran into a similar problem where our OSDs were limited to only 10 GB even though the actual disks were much larger, and we found it is related to the bluestore backend in Ceph. When bluestore is backed by a file, it will create that file if it does not already exist:
bluestore(/var/lib/ceph/osd/ceph-0/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-0/block: (2) No such file or directory
bluestore(/var/lib/ceph/osd/ceph-0) mkfs generated fsid db688b70-0f63-4086-ba4a-ef36dc4a3147
and then resize it according to the bluestore_block_size setting:
bluestore(/var/lib/ceph/osd/ceph-0) _setup_block_symlink_or_file resized block file to 10 GiB
For example, in Ceph 14.2.2 the default value of bluestore_block_size is 10 GB [1]:
Option("bluestore_block_size", Option::TYPE_SIZE, Option::LEVEL_DEV)
.set_default(10_G)
.set_flag(Option::FLAG_CREATE)
.set_description("Size of file to create for backing bluestore"),
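To see which value an existing OSD was actually created with, the effective setting can be queried through the OSD's admin socket (a sketch assuming osd.0 is running on this node):
ceph daemon osd.0 config get bluestore_block_size
# the default should come back as "10737418240" (10 GiB)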
You can adjust this via the following setting in ceph.conf:
[global]
bluestore_block_size = <block file size in bytes>
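For example (my own sketch, not part of the original answer), to let the block file cover roughly the 3.7 TB partition from the question, the size is given in bytes. Because the option carries FLAG_CREATE (see the snippet above), it is only honoured when the block file is first created, so an OSD that already came up with 10 GB has to be removed and prepared again:
[global]
bluestore_block_size = 4000000000000   # ~3.6 TiB, adjust to the partition size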
[1] https://github.com/ceph/ceph/blob/v14.2.2/src/common/options.cc#L4338-L4341