Device-mapper on RHEL 6 will not create a device for an LVM logical volume

I have a XEN guest running RHEL 6 with a LUN presented to it from Dom0. The LUN contains an LVM volume group named vg_ALHINT (INT stands for integration, ALH is short for the name of its Oracle database). The data is an Oracle 11g database. The VG has been imported and activated, and udev created a mapping for each logical volume.

However, device-mapper did not create a mapping for one of the logical volumes (LVs); for the LV in question it created /dev/dm-2, whose major/minor numbers differ from those of the other LVs.

#  dmsetup table
vg_ALHINT-arch: 0 4300800 linear 202:16 46139392
vg0-lv6: 0 20971520 linear 202:2 30869504
vg_ALHINT-safeset2: 0 4194304 linear 202:16 35653632
vg0-lv5: 0 2097152 linear 202:2 28772352
vg_ALHINT-safeset1: 0 4186112 linear 202:16 54528000
vg0-lv4: 0 524288 linear 202:2 28248064
vg0-lv3: 0 4194304 linear 202:2 24053760
vg_ALHINT-oradata:     **
vg0-lv2: 0 4194304 linear 202:2 19859456
vg0-lv1: 0 2097152 linear 202:2 17762304
vg0-lv0: 0 17760256 linear 202:2 2048
vg_ALHINT-admin: 0 4194304 linear 202:16 41945088

** As you can see above, the table for vg_ALHINT-oradata is empty.

# ls -l /dev/mapper/
total 0
crw-rw---- 1 root root  10, 58 Apr  3 13:43 control
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv0 -> ../dm-0
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv1 -> ../dm-1
lrwxrwxrwx 1 root root       7 Apr  3 14:35 vg0-lv2 -> ../dm-2
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv3 -> ../dm-3
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv4 -> ../dm-4
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv5 -> ../dm-5
lrwxrwxrwx 1 root root       7 Apr  3 13:43 vg0-lv6 -> ../dm-6
lrwxrwxrwx 1 root root       7 Apr  3 13:59 vg_ALHINT-admin -> ../dm-8
lrwxrwxrwx 1 root root       7 Apr  3 13:59 vg_ALHINT-arch -> ../dm-9
brw-rw---- 1 root disk 253,  7 Apr  3 14:37 vg_ALHINT-oradata
lrwxrwxrwx 1 root root       8 Apr  3 13:59 vg_ALHINT-safeset1 -> ../dm-10
lrwxrwxrwx 1 root root       8 Apr  3 13:59 vg_ALHINT-safeset2 -> ../dm-11

The vg_ALHINT-oradata node was not created until I ran dmsetup mknodes.
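
For reference, the state of the stuck mapping can be checked directly with dmsetup before touching anything else (a minimal sketch, commands only, using the LV name from the listing above):

# dmsetup info vg_ALHINT-oradata
# dmsetup table vg_ALHINT-oradata
# dmsetup mknodes

dmsetup info reports the device state (ACTIVE vs. SUSPENDED) and open count, dmsetup table prints the active mapping (empty output means no table was ever activated), and dmsetup mknodes recreates missing /dev/dm-* and /dev/mapper/* nodes without changing the tables themselves.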

# cat /proc/partitions
major minor  #blocks  name

 202        0   26214400 xvda
 202        1     262144 xvda1
 202        2   25951232 xvda2
 253        0    8880128 dm-0
 253        1    1048576 dm-1
 253        2    2097152 dm-2
 253        3    2097152 dm-3
 253        4     262144 dm-4
 253        5    1048576 dm-5
 253        6   10485760 dm-6
 202       16   29360128 xvdb
 253        8    2097152 dm-8
 253        9    2150400 dm-9
 253       10    2093056 dm-10
 253       11    2097152 dm-11

dm-7 should have been vg_ALHINT-oradata, but it was missing. I ran dmsetup mknodes and /dev/dm-7 was created, but it is still missing from /proc/partitions.

# ls -l /dev/dm-7
brw-rw---- 1 root disk 253, 7 Apr  3 13:59 /dev/dm-7

Its major and minor numbers are 253:7, while the LVs in this VG are backed by 202:nn devices.

lvs tells me this LV is suspended:

# lvs
    Logging initialised at Thu Apr  3 14:44:19 2014
    Set umask from 0022 to 0077
    Finding all logical volumes
  LV       VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv0      vg0       -wi-ao----   8.47g
  lv1      vg0       -wi-ao----   1.00g
  lv2      vg0       -wi-ao----   2.00g
  lv3      vg0       -wi-ao----   2.00g
  lv4      vg0       -wi-ao---- 256.00m
  lv5      vg0       -wi-ao----   1.00g
  lv6      vg0       -wi-ao----  10.00g
  admin    vg_ALHINT -wi-a-----   2.00g
  arch     vg_ALHINT -wi-a-----   2.05g
  oradata  vg_ALHINT -wi-s-----  39.95g
  safeset1 vg_ALHINT -wi-a-----   2.00g
  safeset2 vg_ALHINT -wi-a-----   2.00g
    Wiping internal VG cache

This disk was created from a snapshot of our production database. Oracle was shut down and the VG was exported before the snapshot was taken. I should note that I do the same thing, scripted, against hundreds of databases every week. Because this is a snapshot, I took the table from the original device-mapper device and used it to try to recreate the missing one:

0 35651584 linear 202:16 2048
35651584 4087808 linear 202:16 50440192
39739392 2097152 linear 202:16 39847936
41836544 41943040 linear 202:16 58714112

After suspending the device with dmsetup suspend /dev/dm-7, I ran dmsetup load /dev/dm-7 $table.txt.
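
Spelled out, the full sequence I was attempting is the usual table-replacement dance (a sketch, assuming the production table above has been saved to a file called table.txt; the device name and /dev/dm-7 are interchangeable here):

# dmsetup suspend vg_ALHINT-oradata
# dmsetup load vg_ALHINT-oradata table.txt
# dmsetup resume vg_ALHINT-oradata

dmsetup load only stages the new table in the inactive slot, and the resume is what swaps it in. Device-mapper checks each linear segment against the size of the backing device when the new table is brought in, which is what produces the "xvdb too small for target" messages further down.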

Next I tried to resume the device:

# dmsetup resume /dev/dm-7
device-mapper: resume ioctl on vg_ALHINT-oradata failed: Invalid argument
Command failed
#

Any ideas? I am really lost. (Yes, I have rebooted and re-taken the snapshot many times, always hitting the same problem. I have even reinstalled the server and run yum update.)

// Edit

I forgot to add that this is the original dmsetup table from our production environment; its oradata layout is what I tried to load on our integration server, as mentioned above.

#  dmsetup table
vg_ALHPRD-safeset2: 0 4194304 linear 202:32 35653632
vg_ALHPRD-safeset1: 0 4186112 linear 202:32 54528000
vg_ALHPRD-oradata: 0 35651584 linear 202:32 2048
vg_ALHPRD-oradata: 35651584 4087808 linear 202:32 50440192
vg_ALHPRD-oradata: 39739392 2097152 linear 202:32 39847936
vg_ALHPRD-oradata: 41836544 41943040 linear 202:32 58714112
vg_ALHPRD-admin: 0 4194304 linear 202:32 41945088

// Edit

I ran vgscan --mknodes and got:

The link /dev/vg_ALHINT/oradata should have been created by udev but it was not found. Falling back to direct link creation.



# ls -l /dev/vg_ALHINT/oradata
lrwxrwxrwx 1 root root 29 Apr 3 14:50 /dev/vg_ALHINT/oradata -> /dev/mapper/vg_ALHINT-oradata

I still cannot activate it, and I get the following error message:

device-mapper: resume ioctl on failed: Invalid argument Unable to resume vg_ALHINT-oradata (253:7) 
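
For the record, the activation attempt that produces this message is just the ordinary one (a sketch; either form ends up issuing the same resume ioctl on the LV):

# lvchange -ay vg_ALHINT/oradata
# vgchange -ay vg_ALHINT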

// Edit

I can see stack traces in /var/log/messages:

Apr  3 13:58:09 iui-alhdb01 kernel: blkfront: xvdb: barriers disabled
Apr  3 13:58:09 iui-alhdb01 kernel: xvdb: unknown partition table
Apr  3 13:59:35 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c612 02 freq_set kernel 5.242 PPM
Apr  3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c615 05 clock_sync
Apr  3 14:30:13 iui-alhdb01 kernel: device-mapper: table: 253:2: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 14:33:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr  3 14:33:34 iui-alhdb01 kernel:      Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr  3 14:33:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  3 14:33:34 iui-alhdb01 kernel: vi            D 0000000000000000     0  1394   1271 0x00000084
Apr  3 14:33:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr  3 14:33:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr  3 14:33:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr  3 14:33:34 iui-alhdb01 kernel: Call Trace:
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr  3 14:33:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr  3 14:35:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr  3 14:35:34 iui-alhdb01 kernel:      Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr  3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  3 14:35:34 iui-alhdb01 kernel: vi            D 0000000000000000     0  1394   1271 0x00000084
Apr  3 14:35:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr  3 14:35:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr  3 14:35:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr  3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr  3 14:35:34 iui-alhdb01 kernel: INFO: task vgdisplay:1437 blocked for more than 120 seconds.
Apr  3 14:35:34 iui-alhdb01 kernel:      Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr  3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr  3 14:35:34 iui-alhdb01 kernel: vgdisplay     D 0000000000000000     0  1437   1423 0x00000080
Apr  3 14:35:34 iui-alhdb01 kernel: ffff88007da35a18 0000000000000086 ffff88007da359d8 ffffffffa000443c
Apr  3 14:35:34 iui-alhdb01 kernel: 000000000007fff0 0000000000010000 ffff88007da359d8 ffff88007d24d380
Apr  3 14:35:34 iui-alhdb01 kernel: ffff880037c8c5f8 ffff88007da35fd8 000000000000fbc8 ffff880037c8c5f8
Apr  3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c8a9d>] __blockdev_direct_IO_newtrunc+0xb7d/0x1270
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c9207>] __blockdev_direct_IO+0x77/0xe0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5487>] blkdev_direct_IO+0x57/0x60
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811217bb>] generic_file_aio_read+0x6bb/0x700
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fd0>] ? blkdev_get+0x10/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fe0>] ? blkdev_open+0x0/0xc0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118617f>] ? __dentry_open+0x23f/0x360
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4841>] blkdev_aio_read+0x51/0x80
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81188e8a>] do_sync_read+0xfa/0x140
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810ec3f6>] ? rcu_process_dyntick+0xd6/0x120
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b290>] ? autoremove_wake_function+0x0/0x40
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c479c>] ? block_ioctl+0x3c/0x40
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119dc12>] ? vfs_ioctl+0x22/0xa0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119ddb4>] ? do_vfs_ioctl+0x84/0x580
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81226496>] ? security_file_permission+0x16/0x20
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff81189775>] vfs_read+0xb5/0x1a0
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff811898b1>] sys_read+0x51/0x90
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e1e4e>] ? __audit_syscall_exit+0x25e/0x290
Apr  3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr  3 14:39:19 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 14:53:57 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 15:02:42 iui-alhdb01 yum[1544]: Installed: sos-2.2-47.el6.noarch
Apr  3 15:52:29 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr  3 15:59:08 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256

Answer 1

Looking at devices.txt in the kernel documentation: major 202 is the "Xen virtual block device", and major 253 is LVM/device-mapper.

All of your dm-x devices are 253:n; they are just views onto the 202:n devices.

The error message is explicit:

device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256

The XEN device seems to have changed. Your vg_ALHPRD-oradata table cannot be loaded because it tries to address storage on 202:16 that simply isn't there.
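
To put numbers on it: the last segment of the table you tried to load starts at sector 58714112 and is 41943040 sectors long, so the backing device would need at least 58714112 + 41943040 = 100657152 sectors of 512 bytes, roughly 48 GiB. The xvdb presented to this guest is only 58720256 sectors, i.e. exactly 28 GiB (matching the 29360128 KiB shown for xvdb in /proc/partitions), so the table is rejected.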

Answer 2

It looks like multipath on the hypervisor refused to update its map with the LUN's new size.

The LUN was originally 28 GB and was later grown to 48 GB on the storage array.

The VG metadata says it is 48 GB, and the disk really is 48 GB, but multipath would not update and still thought it was 28 GB.
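
A quick way to see the disagreement from the hypervisor side (a sketch; the map name and the path device are the ones from the multipath output below):

# blockdev --getsize64 /dev/mapper/350002acf962421ba
# blockdev --getsize64 /dev/sdt

If the individual SCSI paths (sdt and friends) already report the new size while the multipath map does not, only the map needs to be resized; if the paths themselves still say 28G, the SCSI devices have to be rescanned first.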

Multipath stuck at 28G:

# multipath -l 350002acf962421ba
350002acf962421ba dm-17 3PARdata,VV
size=28G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 8:0:0:22   sdt   65:48    active undef running
  |- 10:0:0:22  sdbh  67:176   active undef running
  |- 7:0:0:22   sddq  71:128   active undef running
  |- 9:0:0:22   sdfb  129:208  active undef running
  |- 8:0:1:22   sdmz  70:432   active undef running
  |- 7:0:1:22   sdoj  128:496  active undef running
  |- 10:0:1:22  sdop  129:336  active undef running
  |- 9:0:1:22   sdqm  132:352  active undef running
  |- 7:0:2:22   sdxh  71:624   active undef running
  |- 8:0:2:22   sdzy  131:704  active undef running
  |- 10:0:2:22  sdaab 131:752  active undef running
  |- 9:0:2:22   sdaed 66:912   active undef running
  |- 7:0:3:22   sdakm 132:992  active undef running
  |- 10:0:3:22  sdall 134:880  active undef running
  |- 8:0:3:22   sdamx 8:1232   active undef running
  `- 9:0:3:22   sdaqa 69:1248  active undef running

The actual disk size on the storage array:

# showvv ALHIDB_SNP_001
                                                                          -Rsvd(MB)-- -(MB)-
  Id Name           Prov Type  CopyOf            BsId Rd -Detailed_State- Adm Snp Usr  VSize
4098 ALHIDB_SNP_001 snp  vcopy ALHIDB_SNP_001.ro 5650 RW normal            --  --  --  49152

Just to make sure I have the right disk:

# showvlun -showcols VVName,VV_WWN| grep -i  0002acf962421ba
ALHIDB_SNP_001          50002ACF962421BA 

The VG thinks it is 48G:

  --- Volume group ---
  VG Name               vg_ALHINT
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  30
  VG Access             read/write
  VG Status             exported/resizable
  MAX LV                0
  Cur LV                5
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               48.00 GiB
  PE Size               4.00 MiB
  Total PE              12287
  Alloc PE / Size       12287 / 48.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               qqZ9Vi-5Ob1-R6zb-YeWa-jDfg-9wc7-E2wsem

When I rescanned the HBAs for new disks and reconfigured multipathing, the disk still showed 28G, so I tried this, but nothing changed:

# multipathd -k'resize map 350002acf962421ba'
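
For completeness, the sequence that normally picks up a grown LUN without downtime is to refresh the capacity of every SCSI path first and only then resize the map (a sketch; sdt is one path from the multipath output above, and each path of the LUN has to be rescanned the same way):

# echo 1 > /sys/block/sdt/device/rescan
# multipathd -k'resize map 350002acf962421ba'

Here nothing got the map past 28G, which is why I fell back to the workaround below.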

Versions:

lvm2-2.02.56-8.100.3.el5
device-mapper-multipath-libs-0.4.9-46.100.5.el5

Workaround

Since I could not come up with a fix, this is what I did. I did not mention earlier that I run OVM 3.2 on top of this, so part of my workaround involves OVM.

i) Shut down the guest on Xen through OVM.
ii) Remove the disk from the guest.
iii) Delete the LUN from OVM.
iv) Remove the now-nonexistent LUN from the hypervisors.
v) Rescan storage in OVM.
vi) Wait 30 minutes ;)
vii) Present my disk to the hypervisors with a different LUN ID.
viii) Rescan storage in OVM.

Now, magically, I see a 48G disk:

# multipath -l 350002acf962421ba
350002acf962421ba dm-18 3PARdata,VV
size=48G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 9:0:0:127  sdt   65:48    active undef running
  |- 9:0:1:127  sdbh  67:176   active undef running
  |- 9:0:2:127  sddo  71:96    active undef running
  |- 9:0:3:127  sdfb  129:208  active undef running
  |- 10:0:3:127 sdmz  70:432   active undef running
  |- 10:0:0:127 sdoh  128:464  active undef running
  |- 10:0:1:127 sdop  129:336  active undef running
  |- 10:0:2:127 sdqm  132:352  active undef running
  |- 7:0:1:127  sdzu  131:640  active undef running
  |- 7:0:0:127  sdxh  71:624   active undef running
  |- 7:0:3:127  sdaed 66:912   active undef running
  |- 7:0:2:127  sdaab 131:752  active undef running
  |- 8:0:0:127  sdakm 132:992  active undef running
  |- 8:0:1:127  sdall 134:880  active undef running
  |- 8:0:2:127  sdamx 8:1232   active undef running
  `- 8:0:3:127  sdaqa 69:1248  active undef running
