Environment
VMware Workstation 17 Pro
CentOS Linux release 7.9.2009 (Core)
- 8 GB memory, 16 cores
- 100 GB system disk
- Four 20 GB data disks
Notes
1. If there is no operating system yet, you can build a software RAID during installation and use it as the system disk.
2. A rebuild puts a heavy load on the CPU with software RAID, so it is not recommended in real production environments.
3. Different partitions of the same disk can also be combined into a software RAID (see the sketch after this list).
4. All disks in this environment are SCSI type and thin provisioned.
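As a minimal sketch of note 3 (not performed in this walkthrough; the partition bounds and the /dev/md9 name are made up for illustration), two partitions of a single disk can serve as RAID members:

# Hypothetical: carve /dev/sdb into two partitions and mirror them.
# This only demonstrates that mdadm accepts partitions as members;
# mirroring two partitions of one physical disk gives no real redundancy.
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 10GiB
parted -s /dev/sdb mkpart primary 10GiB 20GiB
mdadm --create /dev/md9 -a yes -l 1 -n 2 /dev/sdb1 /dev/sdb2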
Creating RAID 0 and RAID 1
Current environment state
# Check the current disk state
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
  ├─centos-root 253:0 0 50G 0 lvm /
  ├─centos-swap 253:1 0 3.9G 0 lvm [SWAP]
  └─centos-home 253:2 0 45.1G 0 lvm /home
sdb 8:16 0 20G 0 disk
sdc 8:32 0 20G 0 disk
sdd 8:48 0 20G 0 disk
sde 8:64 0 20G 0 disk
sr0 11:0 1 4.5G 0 rom /run/media/root/CentOS 7 x86_64
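Before touching the data disks, you can optionally confirm they carry no leftover RAID metadata (an extra check, not in the original steps; on clean disks mdadm reports "No md superblock detected"):

mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde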
Installing mdadm
# If the system has been through yum update, mdadm is most likely already installed
yum -y install mdadm
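To confirm the package is in place (optional check):

rpm -q mdadm      # prints the installed package version
mdadm --version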
Creating the RAID arrays
# Create the RAID 0 array
# Name it /dev/md0; picking another name may produce an error. mdadm -C is the short form of --create.
mdadm --create /dev/md0 \
      -a yes \
      -l 0 \
      -n 2 /dev/sdb /dev/sdc
# -a yes : create the RAID device node automatically
# -l 0   : set the RAID level to RAID 0
# -n 2   : use 2 disks, sdb and sdc

# On success it prints:
#mdadm: Defaulting to version 1.2 metadata
#mdadm: array /dev/md0 started.

# Create the RAID 1 array
mdadm --create /dev/md1 -a yes -l 1 -n 2 /dev/sdd /dev/sde
# Type yes to dismiss the warning prompt
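The new arrays are ordinary block devices, so the natural next step is to format and mount one. A minimal sketch, assuming XFS (the CentOS 7 default) and a made-up mount point; this step is not part of the original walkthrough:

mkfs.xfs /dev/md0       # create a filesystem on the RAID 0 array
mkdir -p /mnt/raid0     # hypothetical mount point
mount /dev/md0 /mnt/raid0
df -h /mnt/raid0        # should report roughly 40G of capacity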
Inspecting the software RAID
# Inspect the software RAID
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
  ├─centos-root 253:0 0 50G 0 lvm /
  ├─centos-swap 253:1 0 3.9G 0 lvm [SWAP]
  └─centos-home 253:2 0 45.1G 0 lvm /home
sdb 8:16 0 20G 0 disk
└─md0 9:0 0 40G 0 raid0
sdc 8:32 0 20G 0 disk
└─md0 9:0 0 40G 0 raid0
sdd 8:48 0 20G 0 disk
└─md1 9:1 0 20G 0 raid1
sde 8:64 0 20G 0 disk
└─md1 9:1 0 20G 0 raid1
sr0 11:0 1 4.5G 0 rom /run/media/root/CentOS 7 x86_64

mdadm --detail /dev/md0
# short form: mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Dec 12 05:41:07 2023
        Raid Level : raid0
        Array Size : 41908224 (39.97 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Dec 12 05:41:07 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : 192.168.8.151:0  (local to host 192.168.8.151)
              UUID : cb7e5ace:f809e250:75079d40:21413521
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
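The 512K chunk size shown above is the default stripe unit; if a workload favors a different value, it can be set at creation time. A sketch only; the 256 KiB figure is an arbitrary example:

# Hypothetical: create the stripe with an explicit chunk size
mdadm --create /dev/md0 -a yes -l 0 -n 2 --chunk=256 /dev/sdb /dev/sdc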
# Check the RAID status
cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sde[1] sdd[0]
      20954112 blocks super 1.2 [2/2] [UU]

md0 : active raid0 sdc[1] sdb[0]
      41908224 blocks super 1.2 512k chunks
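The arrays so far exist only in the running kernel. To have them reassembled under the same names after a reboot, the usual practice (not shown in the original steps) is to record them in /etc/mdadm.conf:

# Append the current array definitions to the config file
mdadm --detail --scan >> /etc/mdadm.conf
# Each array yields a line of the form:
# ARRAY /dev/md0 metadata=1.2 name=192.168.8.151:0 UUID=cb7e5ace:f809e250:75079d40:21413521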
Stopping and starting arrays, adding and removing disks
# Stop the arrays
mdadm --stop /dev/md0
mdadm --stop /dev/md1

# Restart (reassemble) an array
mdadm -A /dev/md1

# Wipe the RAID superblock from the disks that were used
mdadm --misc --zero-superblock /dev/sdb /dev/sdc
# This erases the metadata completely so the disks can be used again in a new array

# Simulate a disk failure
mdadm /dev/md1 -f /dev/sdd

# Check the status
cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sde[1] sdd[0](F)
      20954112 blocks super 1.2 [2/1] [_U]
mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 12 06:43:12 2023
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Dec 12 06:47:50 2023
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

# Remove the failed disk
mdadm --manage /dev/md1 --remove /dev/sdd

# Querying the array again now shows only one disk
mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 12 06:43:12 2023
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Tue Dec 12 06:50:27 2023
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

# Add a healthy disk back in
mdadm --manage /dev/md1 --add /dev/sdc

# Check mdstat again; the array is rebuilding onto the new disk
cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sdc[2] sde[1]
      20954112 blocks super 1.2 [2/1] [_U]
      [=>...................]  recovery =  8.5% (1800192/20954112) finish=1.4min speed=225024K/sec

# Check again once the rebuild has finished
mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 12 06:43:12 2023
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Dec 12 06:56:35 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

# Clean up the environment
mdadm --stop /dev/md1
mdadm --misc --zero-superblock /dev/sdc /dev/sde
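Related to note 2 above: the md driver exposes sysctls that bound per-device rebuild throughput, which is how a rebuild's CPU and I/O load can be throttled. A quick sketch; the 50000 figure is an arbitrary example:

# Current rebuild speed bounds, in KiB/s per device
sysctl dev.raid.speed_limit_min
sysctl dev.raid.speed_limit_max
# Cap rebuild speed at ~50 MB/s (takes effect immediately, not persistent across reboots)
sysctl -w dev.raid.speed_limit_max=50000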
Creating RAID 5
# Create a RAID 5 array
mdadm --create /dev/md0 -a yes -l 5 -n 2 -x 2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# -x sets the number of hot spare disks

# Check the status; you can see the array being built
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 sdc[4] sde[3](S) sdd[2](S) sdb[0]
      20954112 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/1] [U_]
      [==>..................]  recovery = 14.3% (3000192/20954112) finish=1.4min speed=200012K/sec

mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Dec 12 07:05:36 2023
        Raid Level : raid5
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Dec 12 07:06:48 2023
             State : clean, degraded, recovering
    Active Devices : 1
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 3

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 69% complete

              Name : 192.168.8.151:0  (local to host 192.168.8.151)
              UUID : 82e7f291:65e54bf3:d96624ce:964e3637
            Events : 12

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       4       8       32        1      spare rebuilding   /dev/sdc

       2       8       48        -      spare   /dev/sdd
       3       8       64        -      spare   /dev/sde

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
  ├─centos-root 253:0 0 50G 0 lvm /
  ├─centos-swap 253:1 0 3.9G 0 lvm [SWAP]
  └─centos-home 253:2 0 45.1G 0 lvm /home
sdb 8:16 0 20G 0 disk
└─md0 9:0 0 20G 0 raid5
sdc 8:32 0 20G 0 disk
└─md0 9:0 0 20G 0 raid5
sdd 8:48 0 20G 0 disk
└─md0 9:0 0 20G 0 raid5
sde 8:64 0 20G 0 disk
└─md0 9:0 0 20G 0 raid5

# List the loaded kernel modules; the kernel has modules supporting RAID
lsmod | grep raid
raid456 151196 1
async_raid6_recov 17288 1 raid456
async_memcpy 12768 2 raid456,async_raid6_recov
async_pq 13332 2 raid456,async_raid6_recov
raid6_pq 102527 3 async_pq,raid456,async_raid6_recov
async_xor 13127 3 async_pq,raid456,async_raid6_recov
async_tx 13509 5 async_pq,raid456,async_xor,async_memcpy,async_raid6_recov
raid1 44113 0
raid0 18164 0
libcrc32c 12644 4 xfs,raid456,nf_nat,nf_conntrack
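With two hot spares attached to md0, the payoff is automatic failover: when an active member fails, md promotes a spare and starts rebuilding without manual intervention. A sketch of how this could be verified (not part of the original walkthrough):

# Mark an active member as failed; a spare should take over automatically
mdadm /dev/md0 -f /dev/sdb
# Watch the rebuild progress onto the spare
watch -n 1 cat /proc/mdstat
# After the rebuild, drop the failed disk from the array
mdadm /dev/md0 -r /dev/sdb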