Unable to access LVM cache volume

I don't usually ask for help with Linux, preferring to figure things out myself, but this time I'm genuinely stuck and don't know who else to turn to.

About a year ago I tried adding lvmcache to one of my LVs. A few days ago I upgraded the kernel (yum update), and after rebooting I can no longer access that LV. It holds important data that I would very much like to get back.

I have two volume groups, SSD and Rust:

  • SSD is a software (MD) RAID 1 array of two 1 TB NVMe devices.
  • Rust is a software (MD) RAID 5 array of four 10 TB hard drives.

For various reasons I didn't want the SSD to be used entirely as a cache and for nothing else, so I put LVM on it and created LVs for the root filesystem, LXC containers and so on. These are the LVs I have:

$ sudo lvs
  WARNING: Device for PV 0hAsMD-LsJf-YsiF-iQ0B-tI23-8hso-cRP93L not found or rejected by a filter.
  Couldn't find device with uuid 0hAsMD-LsJf-YsiF-iQ0B-tI23-8hso-cRP93L.
  LV           VG   Attr       LSize   Pool    Origin         Data%  Meta%  Move Log Cpy%Sync Convert
  Backups      Rust -wi-a-----   1.00t                                                               
  Dxxxxx       Rust -wi-a-----   3.00t                                                               
  Microserver  Rust -wi-a-----   5.00t                                                               
  NAS          Rust -wi-a-----   5.00t                                                               
  Photos       Rust Cwi---C-p-   3.00t [Cache] [Photos_corig]                                        
  Rescued      Rust -wi-a-----   6.00t                                                               
  Video        Rust -wi-a-----   2.00t                                                               
  CentOS7_Root SSD  -wi-ao----  50.00g                                                               
  Containers   SSD  -wi-ao---- 200.00g                                                               
  MD_Journal   SSD  -wi-ao----  16.00g                                                               
  Rust_Cache   SSD  -wi-a----- 256.00g                                                               
  home         SSD  -wi-ao---- 100.00g

Photos is the cache LV that has become inaccessible (note the p for "partial" in its attributes). Setting it up was a bit tricky, because lvmcache insists that the origin LV and the cache LV be in the same VG, and mine weren't (one on Rust, the other on SSD). I worked around that by creating the cache volume on SSD, formatting it as a PV, and adding that PV to the Rust group. These are the setup commands I ran a year ago:

# Create a cache volume (LV) on the SSD device
lvcreate -L 256G -n Rust_Cache /dev/SSD
# Format that volume as a PV:
pvcreate /dev/SSD/Rust_Cache
# Add it to the Rust VG:
vgextend /dev/Rust /dev/SSD/Rust_Cache
# Create the cache data volume on it:
lvcreate -L 100G -n Cache Rust /dev/SSD/Rust_Cache
# Create the cache meta volume on it:
lvcreate -L 4G -n Cache_Meta Rust /dev/SSD/Rust_Cache
# Combine them into a cache pool:
lvconvert --type cache-pool /dev/Rust/Cache --poolmetadata /dev/Rust/Cache_Meta
# Set that pool as the cache of the Photos LV:
lvconvert --type cache /dev/Rust/Photos --cachepool /dev/Rust/Cache
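
For what it's worth, the layering this creates can be inspected with lvs -a, which also shows the hidden sub-LVs ([Cache] and [Photos_corig] in the listing above); the field selection here is just an illustrative one:

$ sudo lvs -a -o lv_name,vg_name,attr,size,pool_lv,origin,devices Rust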

I think the problem is related to the error message shown above:

  WARNING: Device for PV 0hAsMD-LsJf-YsiF-iQ0B-tI23-8hso-cRP93L not found or rejected by a filter.
  Couldn't find device with uuid 0hAsMD-LsJf-YsiF-iQ0B-tI23-8hso-cRP93L.

That is the UUID of the cache PV (/dev/SSD/Rust_Cache), at least according to /etc/lvm/backup/Rust. I can't confirm it, because pvdisplay refuses to show it:

$ sudo pvdisplay /dev/SSD/Rust_Cache 
  Failed to find device for physical volume "/dev/SSD/Rust_Cache".
  WARNING: Device for PV 0hAsMD-LsJf-YsiF-iQ0B-tI23-8hso-cRP93L not found or rejected by a filter.
  Couldn't find device with uuid 0hAsMD-LsJf-YsiF-iQ0B-tI23-8hso-cRP93L.
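
The UUID match came from grepping the metadata backup; in these backup files a device = hint line follows the matching id = line:

$ sudo grep -A1 '0hAsMD-LsJf-YsiF-iQ0B-tI23-8hso-cRP93L' /etc/lvm/backup/Rust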

I don't believe the device is really "not found", because the other volumes on SSD work fine, and they all share the same PV. I think it must be getting filtered out by whatever magic "hides" cache and origin volumes in LVM, but I don't know what that is.
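
If it really is a filter, the settings in effect ought to show it; something like this dumps the relevant options (exactly which of these exist varies with the LVM version):

$ sudo lvmconfig --type full devices/filter devices/global_filter devices/scan_lvs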

If the cache can't be recovered, I'm prepared to accept that I may have lost some of the data on this volume, but I'd rather not lose all of it. Hopefully it's all backed up on S3, but given the amount of data and the number of small files, it's hard to be sure. Any suggestions for what I can do to make it accessible again?

vgchange -ay refuses to activate it, but hints that I could force it, so that might be an option if the cache can't be recovered:

$ sudo vgchange -ay
  WARNING: Device for PV 0hAsMD-LsJf-YsiF-iQ0B-tI23-8hso-cRP93L not found or rejected by a filter.
  Couldn't find device with uuid 0hAsMD-LsJf-YsiF-iQ0B-tI23-8hso-cRP93L.
  Refusing activation of partial LV Rust/Photos.  Use '--activationmode partial' to override.
  6 logical volume(s) in volume group "Rust" now active
  5 logical volume(s) in volume group "SSD" now active
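
For reference, the override it hints at would be something along these lines; I haven't run it yet, since partially activating a cached LV presumably means exposing the origin without whatever was still dirty in the cache:

$ sudo vgchange -ay --activationmode partial Rust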

blkid output on all devices, as requested (note that /dev/mapper/SSD-Rust_Cache does appear, carrying exactly the "missing" PV UUID):

[sudo] password for chris: 
/dev/mapper/SSD-Rust_Cache: UUID="0hAsMD-LsJf-YsiF-iQ0B-tI23-8hso-cRP93L" TYPE="LVM2_member" 
/dev/nvme0n1: PTTYPE="gpt" 
/dev/nvme0n1p1: UUID="29970b49-140d-6862-786d-f33b5edcab6d" UUID_SUB="7cfe4ba4-3062-3ebb-a2a3-0b3f119e2ccc" LABEL="blackbox.qwarx.com:pv00" TYPE="linux_raid_member" PARTUUID="92abf126-4192-4a7d-8381-85a1ad1b2eaf" 
/dev/nvme0n1p2: UUID="a42550e8-adac-3f6e-68ef-41bddb5fa54c" UUID_SUB="cd19178f-2428-2442-0c60-99c4abff3c59" LABEL="blackbox.qwarx.com:Boot" TYPE="linux_raid_member" PARTUUID="3e5c4c02-4483-48fd-82eb-6973bea5674f" 
/dev/nvme0n1p3: UUID="1d3395dc-7e7c-bf69-7e51-e49fcb2d085d" UUID_SUB="ede510d6-1468-b9a9-b4af-d8b77af5cd4e" LABEL="blackbox.qwarx.com:EFI" TYPE="linux_raid_member" PARTUUID="81fdb48b-7cd8-4e55-8358-3f3f8aee1800" 
/dev/nvme1n1: PTTYPE="gpt" 
/dev/nvme1n1p1: UUID="29970b49-140d-6862-786d-f33b5edcab6d" UUID_SUB="dc5e4a8f-3281-cf7e-55a6-b95924872cc6" LABEL="blackbox.qwarx.com:pv00" TYPE="linux_raid_member" PARTUUID="ef1a40a2-c8b2-4428-ad29-fdc1b2640634" 
/dev/nvme1n1p2: UUID="a42550e8-adac-3f6e-68ef-41bddb5fa54c" UUID_SUB="54dab930-9e4e-1009-8618-eb100d5de05c" LABEL="blackbox.qwarx.com:Boot" TYPE="linux_raid_member" PARTUUID="26a6252a-871c-4768-916c-1ce038511874" 
/dev/nvme1n1p3: UUID="1d3395dc-7e7c-bf69-7e51-e49fcb2d085d" UUID_SUB="e752061b-8410-46fa-c871-04b16b0844bf" LABEL="blackbox.qwarx.com:EFI" TYPE="linux_raid_member" PARTUUID="d16924c1-96b3-4f9f-bef0-8e5ce4402bf0" 
/dev/md127: LABEL="Boot" UUID="da2c2ddd-af52-4dfe-92ac-9775cc015234" TYPE="ext4" 
/dev/md126: UUID="AKAeOG-fIdp-DKl7-mDiI-LHcH-y09B-3Vxjqw" TYPE="LVM2_member" 
/dev/sdb: UUID="653729f4-efb2-af46-dd24-6510380b7c35" UUID_SUB="b2664b6b-fae1-76e1-845e-50faa4dfa13d" LABEL="blackbox.qwarx.com:Rust" TYPE="linux_raid_member" 
/dev/sdc: UUID="653729f4-efb2-af46-dd24-6510380b7c35" UUID_SUB="e9defcaf-3cba-2337-17a9-bd247317ecfe" LABEL="blackbox.qwarx.com:Rust" TYPE="linux_raid_member" 
/dev/sdd: UUID="653729f4-efb2-af46-dd24-6510380b7c35" UUID_SUB="121f8be1-5ab4-9de0-238d-6ba64aae8c00" LABEL="blackbox.qwarx.com:Rust" TYPE="linux_raid_member" 
/dev/sda: UUID="653729f4-efb2-af46-dd24-6510380b7c35" UUID_SUB="dbbffe6d-dae0-31dd-a310-913a1cc8a8e8" LABEL="blackbox.qwarx.com:Rust" TYPE="linux_raid_member" 
/dev/mapper/SSD-CentOS7_Root: LABEL="CentOS7_Root" UUID="aa48bf01-c012-4e54-806b-e0a341d548c2" TYPE="xfs" 
/dev/md125: SEC_TYPE="msdos" LABEL="EFI" UUID="F61A-994A" TYPE="vfat" 
/dev/md124: UUID="RUXdND-W4mD-lXdF-YWMk-J7yY-w50S-zgdxOh" TYPE="LVM2_member" 
/dev/mapper/SSD-home: UUID="4031d5d7-dc64-49c4-9ecc-b139709a96ab" TYPE="xfs" 
/dev/mapper/SSD-MD_Journal: UUID="653729f4-efb2-af46-dd24-6510380b7c35" UUID_SUB="b504320e-e625-30cd-0270-77b9b63e2482" LABEL="blackbox.qwarx.com:Rust" TYPE="linux_raid_member" 
/dev/mapper/SSD-Containers: UUID="c9cd5afb-bdd9-4561-bdac-d3793a7b0c1c" TYPE="ext4" 
/dev/mapper/Rust-Rescued: LABEL="Rescued" UUID="9c17d2d8-700c-49c2-98fe-1fb1dde733c6" TYPE="ext4" 
/dev/mapper/Rust-Microserver: LABEL="Microserver" UUID="242d2ddf-0759-4346-a545-8df042af5ebe" TYPE="ext4" 
/dev/mapper/Rust-NAS: LABEL="NAS" UUID="a37875cf-1f5c-46aa-a825-4204cc98e4c9" TYPE="ext4" 
/dev/mapper/Rust-Video: LABEL="Video" UUID="92b9e7e9-874f-433b-aa72-0465546986b7" TYPE="ext4" 
/dev/mapper/Rust-Dxxxxx: LABEL="Dxxxxx" UUID="689947c7-7e62-4a3c-a871-59ea923c4dcf" TYPE="ext4" 
/dev/mapper/Rust-Backups: LABEL="Backups" UUID="9d0534b2-a545-410f-a9d4-eda4f9836bfb" TYPE="ext4" 

Answer 1

I think I've fixed it!

It was really simple in the end. I knew that volume groups have an "available" flag, which you can change with:

vgchange -ay /dev/VG

But until now I didn't know that logical volumes have such a flag too, and that it can be set by the user.

As with volume groups, it seems this flag can sometimes linger after the reason for the unavailability has gone away. So I was able to make the Photos LV accessible like this, with no errors at all, and then mount it:

lvchange -ay /dev/Rust/Photos 
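
The mount itself was then just the usual (the mount point here is only a placeholder):

$ sudo mount /dev/Rust/Photos /mnt/Photos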

I did also find and change the following setting in /etc/lvm/lvm.conf, since it looked like it could be related to this particular device not being activated automatically, and so may have fixed the underlying problem / root cause:

    # Configuration option devices/scan_lvs.
    # Scan LVM LVs for layered PVs, allowing LVs to be used as PVs.
    # When 1, LVM will detect PVs layered on LVs, and caution must be
    # taken to avoid a host accessing a layered VG that may not belong
    # to it, e.g. from a guest image. This generally requires excluding
    # the LVs with device filters. Also, when this setting is enabled,
    # every LVM command will scan every active LV on the system (unless
    # filtered), which can cause performance problems on systems with
    # many active LVs. When this setting is 0, LVM will not detect or
    # use PVs that exist on LVs, and will not allow a PV to be created on
    # an LV. The LVs are ignored using a built in device filter that
    # identifies and excludes LVs.
    scan_lvs = 1
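
After changing it, re-scanning should pick up the layered PV again without a reboot; something like this should serve as a check (pvscan --cache rebuilds LVM's device cache, after which pvs should list the PV without the warning):

$ sudo pvscan --cache
$ sudo pvs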

Answer 2

Have you tried booting from a rescue disk or live CD (such as Knoppix) to see whether you can mount it that way? What about booting into a previous kernel from the GRUB boot menu?
