I just moved a 12TB RAID5 (4× 4TB disks) from my QNAP NAS to my Linux server (/dev/md0), and I noticed three "leftover" partitions on each disk (/dev/sd*1, /dev/sd*2, /dev/sd*4), which make up 3 extra RAID arrays (/dev/md4, /dev/md9, /dev/md13).
I would like to:
- get rid of those 3 useless RAID arrays (/dev/md4, /dev/md9, /dev/md13)
- delete the 12 useless partitions (/dev/sda1, /dev/sda2, /dev/sda4, /dev/sdb1, /dev/sdb2, /dev/sdb4, /dev/sdc1, /dev/sdc2, /dev/sdc4, /dev/sdd1, /dev/sdd2, /dev/sdd4)
- resize the useful partitions (/dev/sda3, /dev/sdb3, /dev/sdc3, /dev/sdd3) so they take up the whole disk
- (if it is good practice) fix /dev/md0 afterwards so that it no longer targets the partitions (/dev/sda3, /dev/sdb3, /dev/sdc3, /dev/sdd3) but the whole disks (/dev/sda, /dev/sdb, /dev/sdc, /dev/sdd)
I am very worried about messing this up and losing all my data... How can I do this safely?
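The cleanup part of the plan (stopping the leftover arrays, wiping their superblocks, deleting partitions 1, 2 and 4) could be sketched as below. This is only a dry-run sketch that prints the commands instead of executing them, since every one of them is destructive; device names are taken from the question, but verify them on your own machine before running anything for real, and only after a full backup.

```shell
#!/bin/sh
# Dry-run sketch: print the plan instead of executing it.
run() {
  PLAN="${PLAN}WOULD RUN: $*
"
  echo "WOULD RUN: $*"
}

# 1. Stop the three leftover QNAP arrays.
for md in /dev/md4 /dev/md9 /dev/md13; do
  run mdadm --stop "$md"
done

# 2. Zero the md superblocks on the now-unused member partitions,
#    so nothing tries to re-assemble them on the next boot.
for d in a b c d; do
  for p in 1 2 4; do
    run mdadm --zero-superblock "/dev/sd${d}${p}"
  done
done

# 3. Delete the unused GPT partitions (numbers 1, 2 and 4) with sgdisk.
for d in a b c d; do
  run sgdisk --delete=1 --delete=2 --delete=4 "/dev/sd${d}"
done
```

Once the printed plan looks right, the `run` wrapper can be replaced with the real commands. Remember to also remove the stopped arrays from /etc/mdadm/mdadm.conf if they are listed there.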
Here is the output of a few commands:
$ sudo fdisk -l /dev/sda
Disk /dev/sda: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68W
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 48FF81CA-3ABB-49C7-8488-0932C696B043
Device Start End Sectors Size Type
/dev/sda1 40 1060289 1060250 517.7M Microsoft basic data
/dev/sda2 1060296 2120579 1060284 517.7M Microsoft basic data
/dev/sda3 2120584 7813019969 7810899386 3.6T Microsoft basic data
/dev/sda4 7813019976 7814015999 996024 486.3M Microsoft basic data
$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFAX-68J
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D7C976BE-2F37-482C-9F33-0FD44ED41F97
Device Start End Sectors Size Type
/dev/sdb1 40 1060289 1060250 517.7M Microsoft basic data
/dev/sdb2 1060296 2120579 1060284 517.7M Microsoft basic data
/dev/sdb3 2120584 7813019969 7810899386 3.6T Microsoft basic data
/dev/sdb4 7813019976 7814015999 996024 486.3M Microsoft basic data
$ sudo fdisk -l /dev/sdc
Disk /dev/sdc: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68W
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 38C30A2A-2D9A-4231-95EB-E26DA29C8785
Device Start End Sectors Size Type
/dev/sdc1 40 1060289 1060250 517.7M Microsoft basic data
/dev/sdc2 1060296 2120579 1060284 517.7M Microsoft basic data
/dev/sdc3 2120584 7813019969 7810899386 3.6T Microsoft basic data
/dev/sdc4 7813019976 7814015999 996024 486.3M Microsoft basic data
$ sudo fdisk -l /dev/sdd
Disk /dev/sdd: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFRX-68N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: DF6B4D11-3250-413A-8E31-330D321D848C
Device Start End Sectors Size Type
/dev/sdd1 40 1060289 1060250 517.7M Microsoft basic data
/dev/sdd2 1060296 2120579 1060284 517.7M Microsoft basic data
/dev/sdd3 2120584 7813019969 7810899386 3.6T Microsoft basic data
/dev/sdd4 7813019976 7814015999 996024 486.3M Microsoft basic data
$ sudo parted -l
Model: ATA WDC WD40EFRX-68W (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary msftdata
2 543MB 1086MB 543MB linux-swap(v1) primary msftdata
3 1086MB 4000GB 3999GB ext4 primary msftdata
4 4000GB 4001GB 510MB ext3 primary msftdata
Model: ATA WDC WD40EFAX-68J (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary msftdata
2 543MB 1086MB 543MB linux-swap(v1) primary msftdata
3 1086MB 4000GB 3999GB primary msftdata
4 4000GB 4001GB 510MB ext3 primary msftdata
Model: ATA WDC WD40EFRX-68W (scsi)
Disk /dev/sdc: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary msftdata
2 543MB 1086MB 543MB primary msftdata
3 1086MB 4000GB 3999GB primary msftdata
4 4000GB 4001GB 510MB ext3 primary msftdata
Model: ATA WDC WD40EFRX-68N (scsi)
Disk /dev/sdd: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary msftdata
2 543MB 1086MB 543MB linux-swap(v1) primary msftdata
3 1086MB 4000GB 3999GB ext4 primary msftdata
4 4000GB 4001GB 510MB ext3 primary msftdata
Model: Linux Software RAID Array (md)
Disk /dev/md4: 543MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 543MB 543MB linux-swap(v1)
Model: Linux Software RAID Array (md)
Disk /dev/md0: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 12.0TB 12.0TB ext4
Model: Linux Software RAID Array (md)
Disk /dev/md9: 543MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 543MB 543MB ext3
Model: Linux Software RAID Array (md)
Disk /dev/md13: 470MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 470MB 470MB ext3
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 3.6T 0 disk
├─sda1 8:1 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1
├─sda2 8:2 0 517.7M 0 part
│ └─md4 9:4 0 517.7M 0 raid1
├─sda3 8:3 0 3.6T 0 part
│ └─md0 9:0 0 10.9T 0 raid5 /mnt/raid
└─sda4 8:4 0 486.3M 0 part
└─md13 9:13 0 448.1M 0 raid1
sdb 8:16 0 3.6T 0 disk
├─sdb1 8:17 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1
├─sdb2 8:18 0 517.7M 0 part
│ └─md4 9:4 0 517.7M 0 raid1
├─sdb3 8:19 0 3.6T 0 part
│ └─md0 9:0 0 10.9T 0 raid5 /mnt/raid
└─sdb4 8:20 0 486.3M 0 part
└─md13 9:13 0 448.1M 0 raid1
sdc 8:32 0 3.6T 0 disk
├─sdc1 8:33 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1
├─sdc2 8:34 0 517.7M 0 part
│ └─md4 9:4 0 517.7M 0 raid1
├─sdc3 8:35 0 3.6T 0 part
│ └─md0 9:0 0 10.9T 0 raid5 /mnt/raid
└─sdc4 8:36 0 486.3M 0 part
└─md13 9:13 0 448.1M 0 raid1
sdd 8:48 0 3.6T 0 disk
├─sdd1 8:49 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1
├─sdd2 8:50 0 517.7M 0 part
│ └─md4 9:4 0 517.7M 0 raid1
├─sdd3 8:51 0 3.6T 0 part
│ └─md0 9:0 0 10.9T 0 raid5 /mnt/raid
└─sdd4 8:52 0 486.3M 0 part
└─md13 9:13 0 448.1M 0 raid1
$ sudo mdadm -D /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Sat Nov 30 17:49:29 2013
Raid Level : raid5
Array Size : 11716348608 (10.91 TiB 12.00 TB)
Used Dev Size : 3905449536 (3.64 TiB 4.00 TB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sat Jan 7 19:34:13 2023
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : resync
Name : 0
UUID : ee9fbd79:deea02d8:8b716bd2:ec481317
Events : 106797
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
5 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
4 8 51 3 active sync /dev/sdd3
$ sudo mdadm -D /dev/md4
/dev/md4:
Version : 1.0
Creation Time : Mon Jun 6 18:43:58 2022
Raid Level : raid1
Array Size : 530128 (517.70 MiB 542.85 MB)
Used Dev Size : 530128 (517.70 MiB 542.85 MB)
Raid Devices : 2
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Jan 1 13:40:57 2023
State : clean
Active Devices : 2
Working Devices : 4
Failed Devices : 0
Spare Devices : 2
Consistency Policy : resync
Name : 4
UUID : 88ff890c:38568eb8:3e3e16e4:4b73851e
Events : 62
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
2 8 18 1 active sync /dev/sdb2
3 8 34 - spare /dev/sdc2
4 8 50 - spare /dev/sdd2
$ sudo mdadm -D /dev/md9
/dev/md9:
Version : 0.90
Creation Time : Sat Nov 30 17:33:27 2013
Raid Level : raid1
Array Size : 530048 (517.63 MiB 542.77 MB)
Used Dev Size : 530048 (517.63 MiB 542.77 MB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 9
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Jan 7 20:15:27 2023
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
UUID : 24852f71:56985a76:2842e49e:777267ff
Events : 0.10614
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
$ sudo mdadm -D /dev/md13
/dev/md13:
Version : 0.90
Creation Time : Sat Nov 30 17:33:35 2013
Raid Level : raid1
Array Size : 458880 (448.13 MiB 469.89 MB)
Used Dev Size : 458880 (448.13 MiB 469.89 MB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 13
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Jan 7 20:15:26 2023
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
UUID : a670af11:ad25d8d9:a20e0676:03ceb388
Events : 0.5048
Number Major Minor RaidDevice State
0 8 4 0 active sync /dev/sda4
1 8 36 1 active sync /dev/sdc4
2 8 52 2 active sync /dev/sdd4
3 8 20 3 active sync /dev/sdb4
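Given the layout shown above, the grow step (extend partition 3 to fill each disk, then grow the array and its ext4 filesystem) could be sketched as follows. Again this is only a dry-run sketch under the question's device names, not a tested procedure; note in particular that /dev/md0 uses metadata version 1.0, whose superblock sits at the *end* of each member, so growing members in place needs extra care and a verified backup first.

```shell
#!/bin/sh
# Dry-run sketch: print the plan instead of executing it.
run() {
  PLAN="${PLAN}WOULD RUN: $*
"
  echo "WOULD RUN: $*"
}

# 1. Extend partition 3 to the end of each disk.
for d in a b c d; do
  run parted "/dev/sd${d}" resizepart 3 100%
done

# 2. Make the kernel re-read the partition tables.
for d in a b c d; do
  run partprobe "/dev/sd${d}"
done

# 3. Grow the RAID5 to use the enlarged members, then grow ext4.
run mdadm --grow /dev/md0 --size=max
run resize2fs /dev/md0
```

On the last bullet of the question: building md arrays on whole disks rather than partitions is generally *not* recommended (a partition table protects the members from tools that "helpfully" initialize blank-looking disks), so keeping md0 on the sd*3 partitions is the safer choice.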
Answer 1
What I ended up doing:
- Plugged the old disks into another computer, booted a Linux live CD and mounted the RAID.
- Plugged new disks into the server and prepared a fresh, clean RAID, with LUKS encryption.
- Moved everything across with rsync, over SSH...
The copy took 3-4 days of uninterrupted running, which is probably not the best way to move these files, but in the end I got a new, encrypted RAID (the old one was unencrypted), plus a backup in the form of the 4 old disks.
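The rsync-over-SSH copy described above might look like the sketch below. Host name and paths here are placeholders, not from the answer; printed as a dry run so the block is safe to execute as-is.

```shell
#!/bin/sh
# Dry-run sketch: print the command instead of executing it.
run() {
  PLAN="${PLAN}WOULD RUN: $*
"
  echo "WOULD RUN: $*"
}

# -a preserves permissions/ownership/times, -H hard links, -X xattrs;
# --partial keeps partially transferred files so a multi-day copy can
# be interrupted and resumed; --info=progress2 shows overall progress.
# "user@newserver" and both paths are hypothetical placeholders.
run rsync -aHX --partial --info=progress2 -e ssh \
    /mnt/oldraid/ user@newserver:/mnt/newraid/
```

Re-running the same rsync after an interruption skips files that already arrived, which is what makes a 3-4 day transfer practical.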