I have set up soft RAID 1 on Debian using its built-in RAID support. I created the RAID because I had a spare HDD when I built the server and figured, why not. The RAID was configured entirely by the Debian installer when I installed the OS (sorry, not a Linux guy).
Now, however, I could really put that disk to more useful purposes.
Is it possible to stop the RAID easily, without reinstalling the OS, and how would I go about it?
fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000d9640
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   976771071   488384512   fd  Linux raid autodetect
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0009dd99
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   950560767   475279360   83  Linux
/dev/sdb2       950562814   976771071    13104129    5  Extended
Partition 2 does not start on physical sector boundary.
/dev/sdb5       950562816   976771071    13104128   82  Linux swap / Solaris
Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6fa10d6b
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63  3907024064  1953512001    7  HPFS/NTFS/exFAT
Disk /dev/sdd: 7803 MB, 7803174912 bytes
122 heads, 58 sectors/track, 2153 cylinders, total 15240576 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc3072e18
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *        8064    15240575     7616256    b  W95 FAT32
Contents of fstab:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdb1 during installation
UUID=cbc19adf-8ed0-4d20-a56e-13c1a74e9cf0 / ext4 errors=remount-ro 0 1
# swap was on /dev/sdb5 during installation
UUID=f6836768-e2b6-4ccf-9827-99f58999607e none swap sw 0 0
/dev/sda1 /media/usb0 auto rw,user,noauto 0 0
/dev/sdc1 /media/mns ntfs-3g defaults 0 2
Answer 1
The simplest approach (requiring no other changes to your setup) is probably to shrink the RAID down to a single disk. That way you keep the option of adding a disk and re-using the RAID later.
mdadm /dev/mdx --fail /dev/disky1
mdadm /dev/mdx --remove /dev/disky1
mdadm --grow /dev/mdx --raid-devices=1 --force
The result looks like this:
mdx : active raid1 diskx1[3]
62519296 blocks super 1.2 [1/1] [U]
Ta-daa, a single-disk "RAID1".
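Should you want the mirror back later, the shrink above can be reversed; a sketch using the same placeholder names (mdx, disky1), to be adapted to the real device names:

```shell
# Re-attach a second member and grow the array back to a two-disk mirror
# (placeholder device names, as in the commands above).
mdadm /dev/mdx --add /dev/disky1
mdadm --grow /dev/mdx --raid-devices=2

# Watch the resync progress:
cat /proc/mdstat
```

These commands need root and a real array, so they are shown for orientation only.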
If you want to get rid of the RAID layer completely, you need mdadm --examine /dev/diskx1 (to find the data offset), mdadm --zero-superblock (to get rid of the RAID metadata), and parted to move the partition forward by that data offset so it points directly at the filesystem, and then you have to update the bootloader and system configuration to reflect the absence of the RAID...
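The "move the partition by the data offset" step is plain sector arithmetic; a minimal sketch with a hypothetical offset (the real value is the "Data Offset" line printed by mdadm --examine), using /dev/sda1's start sector from the fdisk output above:

```shell
# Hypothetical numbers for illustration; take the real data offset from
# `mdadm --examine` ("Data Offset : N sectors").
part_start=2048        # current partition start in sectors (from fdisk above)
data_offset=262144     # RAID data offset in sectors (from mdadm --examine)

# The filesystem actually begins this many sectors into the disk, so the
# partition must be recreated to start at this sector instead:
new_start=$((part_start + data_offset))
echo "$new_start"
```

With super 1.2 metadata the superblock sits near the start of the member device and the data begins at the data offset, which is why the partition start has to be shifted forward rather than any data being moved.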
Answer 2
Just fail and remove one of your drives:
mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
Afterwards, change your /etc/fstab to use the drive that remains in the RAID.
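The fstab change can be rehearsed safely on a scratch copy first; a sketch where the replacement UUID is a placeholder for whatever blkid reports for the remaining drive:

```shell
# Work on a throwaway copy first; the replacement UUID here is an example only.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
UUID=cbc19adf-8ed0-4d20-a56e-13c1a74e9cf0 / ext4 errors=remount-ro 0 1
EOF

# Point the root entry at the surviving drive's UUID (placeholder value):
sed -i 's/^UUID=cbc19adf-8ed0-4d20-a56e-13c1a74e9cf0/UUID=11111111-2222-3333-4444-555555555555/' "$fstab"

cat "$fstab"
rm -f "$fstab"
```

Once the copy looks right, apply the same edit to /etc/fstab; blkid prints the actual UUID to use.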
Reboot. Then stop the array and wipe its RAID metadata:
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1
Have fun :)