Newly created LVs do not survive a reboot - thin pool check fails

To use gluster snapshots, I am trying to create an LV with LVM, since thin-provisioned logical volumes are required for gluster snapshots to work.

The creation succeeds, but the setup does not survive a reboot, so there must be a mistake somewhere in this process. Here are the steps I used to create the LV:

user@node1:~$ sudo lvs
[sudo] password for user: 
  LV     VG        Attr      LSize Pool Origin Data%  Move Log Copy%  Convert
  root   rabbit-vg -wi-ao--- 8.86g                                           
  swap_1 rabbit-vg -wi-ao--- 5.86g                                           

Show the physical volumes:

user@node1:~$ sudo pvs
  PV         VG        Fmt  Attr PSize  PFree 
  /dev/sda5  rabbit-vg lvm2 a--  14.76g 48.00m
  /dev/sde1            lvm2 a--  20.00g 20.00g

Create the volume group

user@node1:~$ sudo vgcreate gluster /dev/sde1
  Volume group "gluster" successfully created

Create the thin pool

user@node1:~$ sudo lvcreate -L 19.9G -T gluster/mythinpool
  Rounding up size to full physical extent 19.90 GiB
  Rounding up size to full physical extent 20.00 MiB
  Logical volume "mythinpool" created
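(For reference: as I understand it, a thin pool is backed by hidden data and metadata sub-LVs, and the metadata LV - shown as something like [mythinpool_tmeta] - is what gets checked when the pool is activated. They can be listed with lvs -a:)

 sudo lvs -a gluster    # shows the pool plus its hidden _tdata/_tmeta sub-LVs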

Create the thin logical volume

user@node1:~$ sudo lvcreate -V 19.9G -T gluster/mythinpool -n thinv1
  Rounding up size to full physical extent 19.90 GiB
  Logical volume "thinv1" created

Create the filesystem

user@node1:~$ sudo mkfs.ext4 /dev/gluster/thinv1 
mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=16 blocks, Stripe width=16 blocks
1305600 inodes, 5217280 blocks
260864 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8160 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

Show the setup

user@node1:~$ sudo lvscan
  ACTIVE            '/dev/gluster/mythinpool' [19.90 GiB] inherit
  ACTIVE            '/dev/gluster/thinv1' [19.90 GiB] inherit
  ACTIVE            '/dev/rabbit-vg/root' [8.86 GiB] inherit
  ACTIVE            '/dev/rabbit-vg/swap_1' [5.86 GiB] inherit

Mount it

user@node1:~$ sudo mount /dev/gluster/thinv1 /bricks/brick1/

Show the mounted devices

user@node1:~$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/rabbit--vg-root  8.6G  7.4G  839M  90% /
none                         4.0K     0  4.0K   0% /sys/fs/cgroup
udev                         1.5G  4.0K  1.5G   1% /dev
tmpfs                        301M  592K  301M   1% /run
none                         5.0M     0  5.0M   0% /run/lock
none                         1.5G     0  1.5G   0% /run/shm
none                         100M     0  100M   0% /run/user
/dev/sda1                    236M   38M  186M  17% /boot
/dev/sdb1                     15G  4.8G  9.2G  35% /data/mysql
/dev/sdc1                     20G  7.2G   12G  39% /data/gluster
/dev/sdd1                     20G   17G  2.3G  88% /data/files
gs1:/volume1                  20G  7.2G   12G  39% /data/nfs
/dev/mapper/gluster-thinv1    20G   44M   19G   1% /bricks/brick1
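(Side note, in case it matters: the mount above was done by hand, so even once activation works it will only come back after a reboot if there is a matching /etc/fstab entry. A minimal sketch, assuming ext4 with default options:)

 # /etc/fstab entry (mount point and options are assumptions based on the setup above)
 /dev/gluster/thinv1  /bricks/brick1  ext4  defaults  0  2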

Now reboot and check again:

user@node1:~$ sudo lvscan
[sudo] password for user: 
  inactive          '/dev/gluster/mythinpool' [19.90 GiB] inherit
  inactive          '/dev/gluster/thinv1' [19.90 GiB] inherit
ACTIVE            '/dev/rabbit-vg/root' [8.86 GiB] inherit
ACTIVE            '/dev/rabbit-vg/swap_1' [5.86 GiB] inherit

The volumes are inactive, so try to activate them

user@node1:~$ sudo vgchange -ay gluster
/usr/sbin/thin_check: execvp failed: No such file or directory
Check of thin pool gluster/mythinpool failed (status:2). Manual repair required (thin_dump --repair /dev/mapper/gluster-mythinpool_tmeta)!
/usr/sbin/thin_check: execvp failed: No such file or directory
0 logical volume(s) in volume group "gluster" now active

No matter what I do, the volumes stay inactive and I cannot mount them.

What am I doing wrong? Thanks in advance for your help.

Answer 1

You have to install the thin provisioning tools to fix this.

First, install the tools (Ubuntu as an example):

 sudo apt-get install thin-provisioning-tools
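This package provides the thin_check binary that LVM tries to run during activation (on most distributions the path it looks for is set by thin_check_executable in /etc/lvm/lvm.conf). To verify the tool is now in place, for example:

 which thin_check
 thin_check -V    # most builds also accept -V/--version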

Second, activate all volume groups:

 sudo vgchange -a y
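After that the logical volumes should activate, and the brick can be mounted again with the same commands as in the question:

 sudo lvscan                                      # both gluster LVs should now show as ACTIVE
 sudo mount /dev/gluster/thinv1 /bricks/brick1/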
