RAID 5 hard drives on Hetzner

Hi, I ordered a server from Hetzner and added a 500GB SSD to it. I ran the installimage, but I'm not sure whether the software RAID is active across all three of my drives. How can I add the newly added SSD to the soft RAID as well?

I don't mind reinstalling the server.

The drives I have: 2 x 1TB SATA, 1 x 500GB SSD

This is my configuration:

df -h output

[root@CentOS-610-64-minimal ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        906G  886M  859G   1% /
tmpfs            16G     0   16G   0% /dev/shm
/dev/md1        496M   35M  436M   8% /boot
[root@CentOS-610-64-minimal ~]#

fdisk -l output

[root@CentOS-610-64-minimal ~]# fdisk -l

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xca606b93

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2089    16777216   fd  Linux raid autodetect
/dev/sdb2            2089        2155      524288   fd  Linux raid autodetect
/dev/sdb3            2155       62261   482804056   fd  Linux raid autodetect

Disk /dev/sdc: 512.1 GB, 512110190592 bytes
255 heads, 63 sectors/track, 62260 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8b577ece

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        2089    16777216   fd  Linux raid autodetect
/dev/sdc2            2089        2155      524288   fd  Linux raid autodetect
/dev/sdc3            2155       62261   482804056   fd  Linux raid autodetect

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x595cad86

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        2089    16777216   fd  Linux raid autodetect
/dev/sda2            2089        2155      524288   fd  Linux raid autodetect
/dev/sda3            2155       62261   482804056   fd  Linux raid autodetect

Disk /dev/md1: 536 MB, 536805376 bytes
2 heads, 4 sectors/track, 131056 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/md0: 17.2 GB, 17179738112 bytes
2 heads, 4 sectors/track, 4194272 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/md2: 988.8 GB, 988782002176 bytes
2 heads, 4 sectors/track, 241401856 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000

cat /proc/mdstat output

[root@CentOS-610-64-minimal ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sda3[0] sdb3[1] sdc3[3]
      965607424 blocks super 1.0 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

md0 : active raid1 sda1[0] sdb1[1] sdc1[2]
      16777088 blocks super 1.0 [3/3] [UUU]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2]
      524224 blocks [3/3] [UUU]

unused devices: <none>

mdadm -D /dev/md0 output

[root@CentOS-610-64-minimal ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Sat Oct  6 04:49:31 2018
     Raid Level : raid1
     Array Size : 16777088 (16.00 GiB 17.18 GB)
  Used Dev Size : 16777088 (16.00 GiB 17.18 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Oct  6 06:02:45 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:0
           UUID : b4cf051f:22b30734:e45d5bca:cfff80e8
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1

mdadm -D /dev/md1 output

[root@CentOS-610-64-minimal ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Sat Oct  6 04:49:31 2018
     Raid Level : raid1
     Array Size : 524224 (511.94 MiB 536.81 MB)
  Used Dev Size : 524224 (511.94 MiB 536.81 MB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sat Oct  6 04:53:41 2018
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           UUID : f1fd684a:98b3c1eb:776c2c25:004bd7b2
         Events : 0.23

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2

mdadm -D /dev/md2 output

[root@CentOS-610-64-minimal ~]# mdadm -D /dev/md2
/dev/md2:
        Version : 1.0
  Creation Time : Sat Oct  6 04:49:37 2018
     Raid Level : raid5
     Array Size : 965607424 (920.88 GiB 988.78 GB)
  Used Dev Size : 482803712 (460.44 GiB 494.39 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Oct  6 11:02:41 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : rescue:2
           UUID : 6ebb511f:a7000ca5:c98b1501:4d2b3707
         Events : 1330

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       3       8       35        2      active sync   /dev/sdc3

Hetzner installimage file

DRIVE1 /dev/sda
DRIVE2 /dev/sdb
DRIVE3 /dev/sdc

SWRAID 1
SWRAIDLEVEL 5

PART swap swap 16G
PART /boot ext3 512M
PART / ext4 all

Answer 1

Your output shows that you have 2 x 2TB disks, and that there is one RAID5 array and two RAID1 arrays:

md2 : active raid5 
md0 : active raid1
md1 : active raid1

As mentioned in the comments, one SSD in a RAID5 together with two conventional disks doesn't make much sense.

I would suggest a write-mostly RAID1 built from the SSD and the spinning disks.

You create the RAID1 from the SSD and a 500GB partition on each of the two other disks, with the options --bitmap=internal /dev/ssd --write-mostly --write-behind /dev/disk1 /dev/disk2. See man mdadm for the details.
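
A minimal sketch of that command, assuming the SSD partition is /dev/sdc1 and the two roughly 500GB partitions on the spinning disks are the hypothetical /dev/sda4 and /dev/sdb4 (your device names will differ):

# SSD first as the normal member, then the write-mostly spinning disks
mdadm --create /dev/md3 --level=1 --raid-devices=3 \
      --bitmap=internal --write-behind=256 \
      /dev/sdc1 \
      --write-mostly /dev/sda4 /dev/sdb4

Device order matters here: only the devices listed after --write-mostly receive the write-mostly flag, so the SSD must come before it. --write-behind requires both a bitmap and write-mostly members; 256 outstanding writes is mdadm's default.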

This writes everything to the SSD and, eventually, to the spinning disks. Reads come from the fast SSD; only if the SSD fails are they served from the other disks. So you get fast reads and writes from the SSD, with a mirror on the other disks in case the SSD fails.
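
To verify the setup, write-mostly members are marked with (W) in /proc/mdstat; with the hypothetical device names from the sketch above, the array would show up roughly as:

md3 : active raid1 sdb4[2](W) sda4[1](W) sdc1[0]

mdadm -D /dev/md3 likewise reports "writemostly" in the device state column for the two spinning disks.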

The remaining 1.5TB on the spinning disks can be combined into another RAID1 for data that doesn't need fast access and doesn't fit into the 0.5TB SSD RAID.
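
A sketch for that second array, again with hypothetical partition names (/dev/sda5 and /dev/sdb5 standing in for the remaining ~1.5TB on each spinning disk):

# plain two-disk mirror for the bulk data, no SSD involved
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
mkfs.ext4 /dev/md4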
