ceph-osd: No block devices detected using current configuration

After deploying OpenStack with Juju, the ceph-osd units are blocked:

$: juju status 
ceph-osd/0                blocked   idle       1        10.20.253.197                      No block devices detected using current configuration
ceph-osd/1*               blocked   idle       2        10.20.253.199                      No block devices detected using current configuration
ceph-osd/2                blocked   idle       0        10.20.253.200                      No block devices detected using current configuration

I connected via juju ssh to the first machine, which hosts ceph-osd/0:

$: juju ssh ceph-osd/0

Then I ran the following commands:

$: sudo fdisk -l
Disk /dev/vda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xaa276e23

Device     Boot Start        End    Sectors  Size Id Type
/dev/vda1        2048 1048575966 1048573919  500G 83 Linux


Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CAA6111D-5ECF-48EB-B4BF-9EC58E38AD64

Device     Start        End    Sectors  Size Type
/dev/vdb1   2048       4095       2048    1M BIOS boot
/dev/vdb2   4096 1048563711 1048559616  500G Linux filesystem

$: df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.9G     0  7.9G   0% /dev
tmpfs           1.6G  856K  1.6G   1% /run
/dev/vda1       492G   12G  455G   3% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs           100K     0  100K   0% /var/lib/lxd/shmounts
tmpfs           100K     0  100K   0% /var/lib/lxd/devlxd
tmpfs           1.6G     0  1.6G   0% /run/user/1000  

$: lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    252:0    0  500G  0 disk 
└─vda1 252:1    0  500G  0 part /
vdb    252:16   0  500G  0 disk 
├─vdb1 252:17   0    1M  0 part 
└─vdb2 252:18   0  500G  0 part 
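
For reference, the device paths the charm is actually looking for can be read back from the Juju client (this is the setting the answers below end up changing):

$: juju config ceph-osd osd-devices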

Answer 1

Since our environment was already deployed, I decided to fix this with the following two tasks:

Task 1

$: juju ssh ceph-osd/0 
$: sudo fdisk /dev/vdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): d
Partition number (1,2, default 2): 1

Partition 1 has been deleted.

Command (m for help): d
Selected partition 2
Partition 2 has been deleted.

Command (m for help): w

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
select "d" to delete all partitions and then "w" to write the new change. 

Then:

$: sudo fdisk -l
Disk /dev/vda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2fa2c9a8

Device     Boot Start        End    Sectors  Size Id Type
/dev/vda1        2048 1048575966 1048573919  500G 83 Linux


Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 146912CF-FC27-4FDC-A202-24F05DC00E69

Then:

$: sudo fdisk /dev/vdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition number (1-128, default 1): 
First sector (34-1048575966, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-1048575966, default 1048575966): 

Created a new partition 1 of type 'Linux filesystem' and of size 500 GiB.

Command (m for help): p
Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 146912CF-FC27-4FDC-A202-24F05DC00E69

Device     Start        End    Sectors  Size Type
/dev/vdb1   2048 1048575966 1048573919  500G Linux filesystem

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Then:

$: sudo fdisk -l
Disk /dev/vda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2fa2c9a8

Device     Boot Start        End    Sectors  Size Id Type
/dev/vda1        2048 1048575966 1048573919  500G 83 Linux


Disk /dev/vdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 146912CF-FC27-4FDC-A202-24F05DC00E69

Device     Start        End    Sectors  Size Type
/dev/vdb1   2048 1048575966 1048573919  500G Linux filesystem

I repeated this task on the other machines for ceph-osd/1 and ceph-osd/2.
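
Instead of repeating the interactive fdisk steps by hand, the wipe and the new single partition can also be pushed to every unit from the Juju client. A rough sketch, assuming sgdisk is available on the units and /dev/vdb is the data disk on all of them:

# zap the old table and create one partition spanning the whole disk on each unit
for unit in ceph-osd/0 ceph-osd/1 ceph-osd/2; do
    juju ssh "$unit" "sudo sgdisk --zap-all /dev/vdb && sudo sgdisk --new=1:0:0 /dev/vdb"
done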

Task 2

In the Juju GUI, I changed the osd-devices string from /dev/sdb to /dev/vdb1 on all 3 ceph-osd units, then saved and committed.

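The same change can also be made from the command line instead of the Juju GUI (equivalent to the edit described above):

$: juju config ceph-osd osd-devices='/dev/vdb1'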

Now the status shows the units as active and idle:

$: juju status
Model      Controller             Cloud/Region  Version  SLA          Timestamp
openstack  maas-cloud-controller  maas-cloud    2.4.2    unsupported  13:54:02+02:00

App                    Version        Status  Scale  Charm                  Store       Rev  OS      Notes
ceph-mon               13.2.1+dfsg1   active      3  ceph-mon               jujucharms   26  ubuntu  
ceph-osd               13.2.1+dfsg1   active      3  ceph-osd               jujucharms  269  ubuntu  
ceph-radosgw           13.2.1+dfsg1   active      1  ceph-radosgw           jujucharms  259  ubuntu  
cinder                 13.0.0         active      1  cinder                 jujucharms  273  ubuntu  
cinder-ceph            13.0.0         active      1  cinder-ceph            jujucharms  234  ubuntu  
glance                 17.0.0         active      1  glance                 jujucharms  268  ubuntu  
keystone               14.0.0         active      1  keystone               jujucharms  283  ubuntu  
mysql                  5.7.20-29.24   active      1  percona-cluster        jujucharms  269  ubuntu  
neutron-api            13.0.0         active      1  neutron-api            jujucharms  262  ubuntu  
neutron-gateway        13.0.0         active      1  neutron-gateway        jujucharms  253  ubuntu  
neutron-openvswitch    13.0.0         active      3  neutron-openvswitch    jujucharms  251  ubuntu  
nova-cloud-controller  18.0.0         active      1  nova-cloud-controller  jujucharms  311  ubuntu  
nova-compute           18.0.0         active      3  nova-compute           jujucharms  287  ubuntu  
ntp                    4.2.8p10+dfsg  active      4  ntp                    jujucharms   27  ubuntu  
openstack-dashboard    14.0.0         active      1  openstack-dashboard    jujucharms  266  ubuntu  
rabbitmq-server        3.6.10         active      1  rabbitmq-server        jujucharms   78  ubuntu  

Unit                      Workload  Agent  Machine  Public address  Ports              Message
ceph-mon/0                active    idle   2/lxd/1  10.20.253.216                      Unit is ready and clustered
ceph-mon/1                active    idle   0/lxd/0  10.20.253.95                       Unit is ready and clustered
ceph-mon/2*               active    idle   1/lxd/0  10.20.253.83                       Unit is ready and clustered
ceph-osd/0                active    idle   1        10.20.253.197                      Unit is ready (1 OSD)
ceph-osd/1*               active    idle   2        10.20.253.199                      Unit is ready (1 OSD)
ceph-osd/2                active    idle   0        10.20.253.200                      Unit is ready (1 OSD)
ceph-radosgw/0*           active    idle   3/lxd/0  10.20.253.87    80/tcp             Unit is ready
cinder/0*                 active    idle   0/lxd/1  10.20.253.188   8776/tcp           Unit is ready
  cinder-ceph/0*          active    idle            10.20.253.188                      Unit is ready
glance/0*                 active    idle   2/lxd/0  10.20.253.217   9292/tcp           Unit is ready
keystone/0*               active    idle   1/lxd/1  10.20.253.134   5000/tcp           Unit is ready
mysql/0*                  active    idle   3/lxd/1  10.20.253.96    3306/tcp           Unit is ready
neutron-api/0*            active    idle   0/lxd/2  10.20.253.189   9696/tcp           Unit is ready
neutron-gateway/0*        active    idle   3        10.20.253.198                      Unit is ready
  ntp/3                   active    idle            10.20.253.198   123/udp            Ready
nova-cloud-controller/0*  active    idle   2/lxd/2  10.20.253.218   8774/tcp,8778/tcp  Unit is ready
nova-compute/0            active    idle   1        10.20.253.197                      Unit is ready
  neutron-openvswitch/0*  active    idle            10.20.253.197                      Unit is ready
  ntp/0*                  active    idle            10.20.253.197   123/udp            Ready
nova-compute/1*           active    idle   0        10.20.253.200                      Unit is ready
  neutron-openvswitch/1   active    idle            10.20.253.200                      Unit is ready
  ntp/1                   active    idle            10.20.253.200   123/udp            Ready
nova-compute/2            active    idle   2        10.20.253.199                      Unit is ready
  neutron-openvswitch/2   active    idle            10.20.253.199                      Unit is ready
  ntp/2                   active    idle            10.20.253.199   123/udp            Ready
openstack-dashboard/0*    active    idle   1/lxd/2  10.20.253.13    80/tcp,443/tcp     Unit is ready
rabbitmq-server/0*        active    idle   3/lxd/2  10.20.253.86    5672/tcp           Unit is ready

Machine  State    DNS            Inst id              Series  AZ         Message
0        started  10.20.253.200  fxbapd               bionic  Openstack  Deployed
0/lxd/0  started  10.20.253.95   juju-53dcb3-0-lxd-0  bionic  Openstack  Container started
0/lxd/1  started  10.20.253.188  juju-53dcb3-0-lxd-1  bionic  Openstack  Container started
0/lxd/2  started  10.20.253.189  juju-53dcb3-0-lxd-2  bionic  Openstack  Container started
1        started  10.20.253.197  mqdnxt               bionic  Openstack  Deployed
1/lxd/0  started  10.20.253.83   juju-53dcb3-1-lxd-0  bionic  Openstack  Container started
1/lxd/1  started  10.20.253.134  juju-53dcb3-1-lxd-1  bionic  Openstack  Container started
1/lxd/2  started  10.20.253.13   juju-53dcb3-1-lxd-2  bionic  Openstack  Container started
2        started  10.20.253.199  ysg683               bionic  Openstack  Deployed
2/lxd/0  started  10.20.253.217  juju-53dcb3-2-lxd-0  bionic  Openstack  Container started
2/lxd/1  started  10.20.253.216  juju-53dcb3-2-lxd-1  bionic  Openstack  Container started
2/lxd/2  started  10.20.253.218  juju-53dcb3-2-lxd-2  bionic  Openstack  Container started
3        started  10.20.253.198  scycac               bionic  Openstack  Deployed
3/lxd/0  started  10.20.253.87   juju-53dcb3-3-lxd-0  bionic  Openstack  Container started
3/lxd/1  started  10.20.253.96   juju-53dcb3-3-lxd-1  bionic  Openstack  Container started
3/lxd/2  started  10.20.253.86   juju-53dcb3-3-lxd-2  bionic  Openstack  Container started

If instead the OpenStack deployment still has to be run, then before deploying we must change the osd-devices string from /dev/sdb to /dev/vdb on the 3 ceph-osd units in the Juju UI, and then commit.
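
For a fresh deployment, the option can also be set up front rather than edited afterwards, for example with a bundle overlay. This is only a sketch; the file names are hypothetical and overlays require Juju 2.4 or newer:

$: cat > ceph-osd-overlay.yaml <<'EOF'
applications:
  ceph-osd:
    options:
      osd-devices: /dev/vdb
EOF
$: juju deploy ./openstack-bundle.yaml --overlay ./ceph-osd-overlay.yaml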

Answer 2

The default disk path is currently set to "/dev/sdb". You must set it to the path of your ceph-osd data disk ("/dev/vdb"):

$ juju config ceph-osd osd-devices
/dev/sdb
$ juju config ceph-osd osd-devices='/dev/vdb'

There should be no partitions on the disk when you configure it. After that, the ceph-osd units should become active.
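
A quick way to check both conditions after the change (a minimal sketch; /dev/vdb is assumed to be the data disk):

$ juju config ceph-osd osd-devices      # should now print /dev/vdb
$ juju ssh ceph-osd/0 lsblk /dev/vdb    # the disk should show no child partitions
$ juju status ceph-osd                  # units should go from blocked to active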
