How do I start my mdadm RAID-5 array?

  1. How do I start my mdadm RAID-5 array?
  2. How can I make the changes persist?

I rebooted our server last night and found that the RAID array we created about 8 months ago did not come back up, and I cannot access my data.

Some background: a few months ago I added a new disk, /dev/sdh, to the RAID-5 array, which is mounted at /srv/share. Everything seemed to be working fine after that; we had the extra space and have been using it, right up until last night (I am actually not sure we have rebooted at all since then). The RAID-5 was originally created under Ubuntu 18.04 and is now running under Ubuntu 20.04.
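For completeness, a sketch of how the grow would have been done back then (I no longer have the exact commands, and whether the member was /dev/sdh or /dev/sdh1 is an assumption):

$ sudo mdadm /dev/md0 --add /dev/sdh1           # add the new disk as a spare
$ sudo mdadm --grow /dev/md0 --raid-devices=4   # reshape RAID-5 from 3 to 4 members
$ sudo resize2fs /dev/md0                       # then grow the ext4 filesystem

After the reboot I ran a bunch of commands: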

$ cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdf[3](S) sdb[1](S) sda[0](S)
      23441691144 blocks super 1.2
       
unused devices: <none>
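Note: the (S) after each member means the kernel is holding it as an unassembled spare; the array itself is not running. A useful check at this point would have been to compare the members' superblocks and event counts:

$ sudo mdadm --examine /dev/sda /dev/sdb /dev/sdf | grep -E 'Events|Device Role|Array State'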


$ lsblk | grep -v loop
NAME   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda      8:0    0   7.3T  0 disk  
└─md0    9:0    0  21.9T  0 raid5 
sdb      8:16   0   7.3T  0 disk  
└─md0    9:0    0  21.9T  0 raid5 
sdc      8:32   0   4.6T  0 disk  
└─sdc1   8:33   0   4.6T  0 part  /srv/datasets
sdd      8:48   0 298.1G  0 disk  
├─sdd1   8:49   0   190M  0 part  /boot/efi
└─sdd2   8:50   0 297.9G  0 part  /
sde      8:64   0   3.7T  0 disk  
└─sde1   8:65   0   3.7T  0 part  /srv
sdf      8:80   0   7.3T  0 disk  
└─md0    9:0    0  21.9T  0 raid5 
sdg      8:96   0   1.8T  0 disk  
├─sdg1   8:97   0   1.8T  0 part  /home
└─sdg2   8:98   0    47G  0 part  [SWAP]
sdh      8:112  0   7.3T  0 disk  
└─sdh1   8:113  0   7.3T  0 part  


$ sudo fdisk -l | grep sdh
Disk /dev/sdh: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
/dev/sdh1   2048 15628050431 15628048384  7.3T Linux filesystem
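Note that the three devices grouped under md0 (sda, sdb, sdf) are whole disks with no partition table, while sdh is partitioned. If it is unclear whether the fourth member's superblock lives on the whole disk or on the partition, both can be probed:

$ sudo mdadm --examine /dev/sdh
$ sudo mdadm --examine /dev/sdh1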



$ sudo mdadm -Db /dev/md0
INACTIVE-ARRAY /dev/md0 metadata=1.2 name=perception:0 UUID=c8004245:4e163594:65e30346:68ed2791
$ sudo mdadm -Db /dev/md/0
mdadm: cannot open /dev/md/0: No such file or directory



From /etc/mdadm/mdadm.conf:
ARRAY /dev/md/0  metadata=1.2 UUID=c8004245:4e163594:65e30346:68ed2791 name=perception:0
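If there is any doubt that this line still matches the on-disk superblocks, it can be regenerated instead of hand-edited:

$ sudo mdadm --detail --scan    # from the assembled (or inactive) array
$ sudo mdadm --examine --scan   # from the raw member devices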



$ sudo mdadm --detail /dev/md0 
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 3
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 3

              Name : perception:0
              UUID : c8004245:4e163594:65e30346:68ed2791
            Events : 91689

    Number   Major   Minor   RaidDevice

       -       8        0        -        /dev/sda
       -       8       80        -        /dev/sdf
       -       8       16        -        /dev/sdb


$ sudo mdadm --detail /dev/md/0
mdadm: cannot open /dev/md/0: No such file or directory



$ sudo mdadm --assemble --scan
  [does nothing]

$ blkid /dev/md0 [nothing]
$ blkid /dev/md/0 [nothing]

$ blkid | grep raid
/dev/sdb: UUID="c8004245-4e16-3594-65e3-034668ed2791" UUID_SUB="3fefdb86-4c6b-fb76-a35e-3a846075eb54" LABEL="perception:0" TYPE="linux_raid_member"
/dev/sdf: UUID="c8004245-4e16-3594-65e3-034668ed2791" UUID_SUB="d4a58f2c-bc8b-8fd0-6b22-63b047e09c13" LABEL="perception:0" TYPE="linux_raid_member"
/dev/sda: UUID="c8004245-4e16-3594-65e3-034668ed2791" UUID_SUB="afaea924-a15a-c5cf-f9a8-d73075201ff7" LABEL="perception:0" TYPE="linux_raid_member"

The relevant line in /etc/fstab is:

UUID=f495abb3-36e6-4782-8f5e-83c6d3fc78eb /srv/share     ext4    defaults        0       2
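In hindsight, one thing worth spelling out: the UUID in fstab is the ext4 filesystem UUID, which only exists on /dev/md0 once the array is assembled. It is not the mdadm array UUID that blkid reports on the raw members. With md0 running, it should reappear via:

$ sudo blkid /dev/md0
# expected (assuming the filesystem is intact) something like:
# /dev/md0: UUID="f495abb3-36e6-4782-8f5e-83c6d3fc78eb" TYPE="ext4"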


$ sudo mount -a
mount: /srv/share: can't find UUID=f495abb3-36e6-4782-8f5e-83c6d3fc78eb.

I tried changing the UUID in fstab to c8004245:4e163594:65e30346:68ed2791 and remounting:

$ sudo mount -a
mount: /srv/share: can't find UUID=c8004245:4e163594:65e30346:68ed2791.

Then I changed it to c8004245-4e16-3594-65e3-034668ed2791 and remounted:

$ sudo mount -a
mount: /srv/share: /dev/sdb already mounted or mount point busy.

Then I rebooted with the new fstab entry (c8004245-4e16-3594-65e3-034668ed2791), but it still made no difference to any of the commands above.

I tried changing mdadm.conf from:

ARRAY /dev/md/0  metadata=1.2 UUID=c8004245:4e163594:65e30346:68ed2791 name=perception:0

to:

ARRAY /dev/md0  metadata=1.2 UUID=c8004245:4e163594:65e30346:68ed2791 name=perception:0

=> No difference.

Tried stopping and starting with -v:

$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0

$ sudo mdadm --assemble --scan -v                                   
[ excluding all the random loop drive stuff ]
mdadm: /dev/sdb is identified as a member of /dev/md/0, slot 1.
mdadm: /dev/sdf is identified as a member of /dev/md/0, slot 2.
mdadm: /dev/sda is identified as a member of /dev/md/0, slot 0.
mdadm: added /dev/sdb to /dev/md/0 as 1
mdadm: added /dev/sdf to /dev/md/0 as 2
mdadm: no uptodate device for slot 3 of /dev/md/0
mdadm: added /dev/sda to /dev/md/0 as 0
mdadm: /dev/md/0 has been started with 3 drives (out of 4).


$ dmesg
[  988.616710] md/raid:md0: device sda operational as raid disk 0
[  988.616718] md/raid:md0: device sdf operational as raid disk 2
[  988.616721] md/raid:md0: device sdb operational as raid disk 1
[  988.618892] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
[  988.639345] md0: detected capacity change from 0 to 46883371008

cat /proc/mdstat now shows the array is running:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sda[0] sdf[3] sdb[1]
      23441685504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      bitmap: 0/59 pages [0KB], 65536KB chunk
unused devices: <none>
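The [UUU_] shows the array is running degraded with 3 of 4 devices. Re-adding the missing fourth member is a separate step; assuming it is /dev/sdh1 (not verified here), a sketch:

$ sudo mdadm --examine /dev/sdh1    # first confirm it carries the matching array UUID
$ sudo mdadm /dev/md0 --add /dev/sdh1
$ cat /proc/mdstat                  # should then show a recovery in progress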

mount says /srv/share was mounted successfully:

$ sudo mount -a -v
/                        : ignored
/boot/efi                : already mounted
none                     : ignored
/home                    : already mounted
/srv                     : already mounted
/srv/share               : successfully mounted
/srv/datasets            : already mounted

But /srv/share still does not show up in df -h, and I still cannot see my data in /srv/share:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  2.5M  6.3G   1% /run
/dev/sdd2       293G   33G  245G  12% /
tmpfs            32G   96K   32G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/sde1       3.6T  455G  3.0T  14% /srv
/dev/sdd1       188M  5.2M  182M   3% /boot/efi
/dev/sdc1       4.6T  3.6T  768G  83% /srv/datasets
/dev/sdg1       1.8T  1.5T  164G  91% /home
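One way to check whether the mount actually stuck (mount -a claims success, yet df disagrees) is to ask the kernel directly; srv-share.mount is the unit name systemd would use for /srv/share, if systemd is managing it:

$ findmnt /srv/share
$ systemctl status srv-share.mount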

Answer 1

The answer was here: https://unix.stackexchange.com/questions/210416/new-raid-array-will-not-auto-assemble-leads-to-boot-problems

These commands helped:

$ sudo dpkg-reconfigure mdadm    # choose "all" disks to start at boot
$ sudo update-initramfs -u       # updates the existing initramfs
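These work because Ubuntu assembles md arrays from the initramfs at boot: dpkg-reconfigure mdadm rewrites the mdadm boot configuration, and update-initramfs -u bakes the current /etc/mdadm/mdadm.conf into the initramfs image. One way to verify the updated config actually made it in:

$ lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf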
