Detailed explanation of the mdadm command and experimental process
1. Concept
mdadm is short for "multiple devices admin". It is the standard software RAID management tool on Linux, written by Neil Brown.
2. Features
mdadm can diagnose, monitor and collect detailed information about arrays.
mdadm is a single integrated program rather than a collection of separate utilities, so it uses a common command syntax for all RAID management tasks.
mdadm can perform almost all of its functions without a configuration file (there is no default configuration file). A quick example of the diagnostic functions follows this list.
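As an illustration of these diagnostic functions, an existing array can be inspected as follows (the device name /dev/md0 is only an example; any md device works):
mdadm --query /dev/md0     # one-line summary: is this an md device or a member of one?
mdadm --detail /dev/md0    # full array information: level, state, member devices
cat /proc/mdstat           # the kernel's own view of all active md arrays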
3. Function (reference)
In current Linux systems, software RAID is implemented as MD (Multiple Devices) virtual block devices. Several underlying block devices are combined into a new virtual device; striping spreads data blocks evenly across the disks to improve the read and write performance of the virtual device, and different data redundancy algorithms protect user data from being lost when one block device fails, so that the lost data can be rebuilt onto a new device once the failed one is replaced.
MD currently supports linear, multipath, raid0 (striping), raid1 (mirroring), raid4, raid5, raid6, raid10 and other redundancy levels and layouts. Multiple RAID arrays can also be cascaded into nested arrays such as raid1+0 or raid5+1.
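To illustrate cascading, a raid1+0 array could be built by creating two raid1 mirrors first and then striping across them. This is only a sketch; the partitions /dev/sdb1 through /dev/sde1 are hypothetical:
mdadm -C /dev/md1 -l 1 -n 2 /dev/sdb1 /dev/sdc1    # first mirror (hypothetical partitions)
mdadm -C /dev/md2 -l 1 -n 2 /dev/sdd1 /dev/sde1    # second mirror
mdadm -C /dev/md10 -l 0 -n 2 /dev/md1 /dev/md2     # stripe over the two mirrors -> raid1+0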
4. Experiment
Task: create four 1 GB partitions (standing in for four disks). Build a raid5 array from three of them and use the fourth as a hot spare. Test that the hot spare replaces a failed member of the array and resynchronizes the data. Remove the failed disk and add a new one as the hot spare. Finally, the array must be mounted automatically at boot.
4.1 Create the partitions
[root@xiao ~]# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
First cylinder (10486-13054, default 10486):
Using default value 10486
Last cylinder, +cylinders or +size{K,M,G} (10486-13054, default 13054): +1G

Command (m for help): n
First cylinder (10618-13054, default 10618):
Using default value 10618
Last cylinder, +cylinders or +size{K,M,G} (10618-13054, default 13054): +1G

Command (m for help): n
First cylinder (10750-13054, default 10750):
Using default value 10750
Last cylinder, +cylinders or +size{K,M,G} (10750-13054, default 13054): +1G

Command (m for help): n
First cylinder (10882-13054, default 10882):
Using default value 10882
Last cylinder, +cylinders or +size{K,M,G} (10882-13054, default 13054): +1G

Command (m for help): t
Partition number (1-8): 8
Hex code (type L to list codes): fd
Changed system type of partition 8 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-8): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-8): 6
Hex code (type L to list codes): fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-8): 5
Hex code (type L to list codes): fd
Changed system type of partition 5 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008ed57

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26       10225    81920000   83  Linux
/dev/sda3           10225       10486     2097152   82  Linux swap / Solaris
/dev/sda4           10486       13054    20633279    5  Extended
/dev/sda5           10486       10617     1058045   fd  Linux raid autodetect
/dev/sda6           10618       10749     1060258+  fd  Linux raid autodetect
/dev/sda7           10750       10881     1060258+  fd  Linux raid autodetect
/dev/sda8           10882       11013     1060258+  fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
4.2 Make the kernel recognize the new partitions
[root@xiao ~]# partx -a /dev/sda5 /dev/sda
[root@xiao ~]# partx -a /dev/sda6 /dev/sda
[root@xiao ~]# partx -a /dev/sda7 /dev/sda
[root@xiao ~]# partx -a /dev/sda8 /dev/sda
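partx registers each new partition with the kernel individually. As an alternative (not part of the original steps), the whole partition table can be re-read in one go, depending on what the distribution provides:
partprobe /dev/sda      # ask the kernel to re-read the partition table of /dev/sda
cat /proc/partitions    # confirm that sda5-sda8 are now visible to the kernel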
4.3 Create the raid5 array and its hot spare
[root@xiao ~]# mdadm -C /dev/md0 -l 5 -n 3 -x 1 /dev/sda{5,6,7,8}
mdadm: /dev/sda5 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Dec 17 00:58:24 2014
mdadm: /dev/sda6 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Dec 17 00:58:24 2014
mdadm: /dev/sda7 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Dec 17 00:58:24 2014
mdadm: /dev/sda8 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Wed Dec 17 00:58:24 2014
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
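For readability, the same create command can be written with long options; this equivalent form is shown only to make the intent of each flag explicit:
mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
      /dev/sda5 /dev/sda6 /dev/sda7 /dev/sda8
# --level=5          raid level 5
# --raid-devices=3   three active member devices
# --spare-devices=1  one hot spare (the detail output below shows /dev/sda8 ended up as the spare)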
4.4 The initialization time depends on the size of the array and on any application reading or writing it; use cat /proc/mdstat to query the current reconstruction speed and the expected completion time of the RAID array.
[root@xiao ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md0 : active raid5 sda7[4] sda8[3](S) sda6[1] sda5[0]
      2113536 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=========>...........]  recovery = 45.5% (482048/1056768) finish=0.3min speed=30128K/sec
unused devices: <none>
[root@xiao ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md0 : active raid5 sda7[4] sda8[3](S) sda6[1] sda5[0]
      2113536 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
[root@xiao ~]# mke2fs -t ext3 /dev/md0     // format the raid device
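Instead of re-running cat by hand, the resync progress shown above can be followed continuously with the standard watch utility, for example:
watch -n 1 cat /proc/mdstat    # refresh the rebuild status every second; exit with Ctrl+C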
4.5 Mount the raid device on the /mnt directory and check that it works (seeing lost+found is normal)
[root@xiao ~]# mount /dev/md0 /mnt
[root@xiao ~]# ls /mnt
lost+found
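It is worth writing some test data to the new filesystem now, so that after the failure test in 4.7 the data can be checked again; the file name and content here are arbitrary:
echo "raid5 test data" > /mnt/testfile    # arbitrary test file
md5sum /mnt/testfile                      # note the checksum and compare it again after the rebuild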
4.6 Check the detailed information of the raid array
[root@xiao ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Dec 17 03:38:08 2014
     Raid Level : raid5
     Array Size : 2113536 (2.02 GiB 2.16 GB)
  Used Dev Size : 1056768 (1032.17 MiB 1082.13 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Dec 17 03:55:11 2014
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : xiao:0  (local to host xiao)
           UUID : bce110f2:34f3fbf1:8de472ed:633a374f
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8        6        1      active sync   /dev/sda6
       4       8        7        2      active sync   /dev/sda7

       3       8        8        -      spare   /dev/sda8
4.7 Simulate the failure of one of the disks; here I choose /dev/sda6
[root@xiao ~]# mdadm /dev/md0 --fail /dev/sda6
mdadm: set /dev/sda6 faulty in /dev/md0
4.8 Check the array details again: /dev/sda8 has automatically replaced the failed /dev/sda6 and is rebuilding.
[root@xiao ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Dec 17 03:38:08 2014
     Raid Level : raid5
     Array Size : 2113536 (2.02 GiB 2.16 GB)
  Used Dev Size : 1056768 (1032.17 MiB 1082.13 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Dec 17 04:13:59 2014
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 43% complete

           Name : xiao:0  (local to host xiao)
           UUID : bce110f2:34f3fbf1:8de472ed:633a374f
         Events : 26

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       3       8        8        1      spare rebuilding   /dev/sda8
       4       8        7        2      active sync   /dev/sda7

       1       8        6        -      faulty   /dev/sda6
[root@xiao ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md0 : active raid5 sda7[4] sda8[3] sda6[1](F) sda5[0]
      2113536 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
# Normally this shows [UUU]; if the first disk were damaged it would display [_UU].
4.9 Remove the failed disk
[root@xiao ~]# mdadm /dev/md0 -r /dev/sda6
mdadm: hot removed /dev/sda6 from /dev/md0
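Marking a device faulty and removing it can also be combined into a single invocation; this is an alternative to the two separate steps used above:
mdadm /dev/md0 --fail /dev/sda6 --remove /dev/sda6    # mark faulty and hot-remove in one command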
4.10 Add a new disk as the hot spare
[root@xiao ~]# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
First cylinder (11014-13054, default 11014):
Using default value 11014
Last cylinder, +cylinders or +size{K,M,G} (11014-13054, default 13054): +1G

Command (m for help): t
Partition number (1-9): 9
Hex code (type L to list codes): fd
Changed system type of partition 9 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008ed57

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26       10225    81920000   83  Linux
/dev/sda3           10225       10486     2097152   82  Linux swap / Solaris
/dev/sda4           10486       13054    20633279    5  Extended
/dev/sda5           10486       10617     1058045   fd  Linux raid autodetect
/dev/sda6           10618       10749     1060258+  fd  Linux raid autodetect
/dev/sda7           10750       10881     1060258+  fd  Linux raid autodetect
/dev/sda8           10882       11013     1060258+  fd  Linux raid autodetect
/dev/sda9           11014       11145     1060258+  fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

[root@xiao ~]# partx -a /dev/sda9 /dev/sda
[root@xiao ~]# mdadm /dev/md0 --add /dev/sda9
mdadm: added /dev/sda9
[root@xiao ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Dec 17 03:38:08 2014
     Raid Level : raid5
     Array Size : 2113536 (2.02 GiB 2.16 GB)
  Used Dev Size : 1056768 (1032.17 MiB 1082.13 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Dec 17 04:39:35 2014
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : xiao:0  (local to host xiao)
           UUID : bce110f2:34f3fbf1:8de472ed:633a374f
         Events : 41

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       3       8        8        1      active sync   /dev/sda8
       4       8        7        2      active sync   /dev/sda7

       5       8        9        -      spare   /dev/sda9
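To confirm that the new partition really carries md metadata and is registered as a spare, the following checks can also be used (output not reproduced here):
mdadm -E /dev/sda9    # examine the md superblock written on the new spare
cat /proc/mdstat      # the spare appears with an (S) flag next to sda9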
5. Automatically mount at boot
Edit the /etc/fstab file and add the following line:
/dev/md0 /mnt ext3 defaults 0 0
:wq
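The fstab entry only covers mounting. For the array itself to be assembled reliably as /dev/md0 at every boot, it is also common practice to record its definition in mdadm's configuration file; depending on the distribution the path may be /etc/mdadm.conf or /etc/mdadm/mdadm.conf:
mdadm --detail --scan >> /etc/mdadm.conf    # append an ARRAY line containing the array's UUID
cat /etc/mdadm.conf                         # verify the generated ARRAY definition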
6. mdadm man page reference (translated excerpt)
Basic syntax: mdadm [mode] <raid-device> [options] <component-devices>
There are 7 major [mode]s:
Assemble: assemble the components of a previously created array into an active array.
Build: build an array whose member devices have no superblocks (legacy arrays).
Create: create a new array in which each device has a superblock.
Manage: manage an array, for example adding or removing devices.
Misc: perform operations on individual devices in an array, such as erasing a superblock or stopping an active array.
Follow or Monitor: monitor the status of raid1, raid4, raid5, raid6 and multipath arrays.
Grow: change the capacity of an array or the number of devices in it. (A minimal example command for each mode is sketched after this list.)
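A minimal command for each mode might look like the following; the device names are placeholders, not part of the experiment above:
mdadm -A /dev/md0 /dev/sda5 /dev/sda6 /dev/sda7      # Assemble an existing array from its members
mdadm -B /dev/md1 -l 0 -n 2 /dev/sdf1 /dev/sdg1      # Build a legacy array without superblocks
mdadm -C /dev/md0 -l 5 -n 3 /dev/sd{b,c,d}1          # Create a new array
mdadm /dev/md0 --add /dev/sde1                       # Manage: add a device to a running array
mdadm --stop /dev/md0                                # Misc: stop (terminate) an active array
mdadm --monitor --mail=root --delay=300 /dev/md0     # Follow/Monitor: watch the array and mail alerts
mdadm --grow /dev/md0 --raid-devices=4               # Grow: change the number of active devices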
Available [options]:
-A, --assemble: assemble a previously created array
-B, --build:Build a legacy array without superblocks.
-C, --create: Create a new array
-Q, --query: Check a device to determine whether it is an md device or part of an md array
-D, --detail: print the detailed information of one or more md devices
-E, --examine: Print the contents of the md superblock on the device
-F, --follow, --monitor: Select Monitor mode
-G, --grow: change the size or shape of the array in use
-h, --help: display help; when used after one of the mode options above, it shows help for that mode
--help-options: display more detailed help about command-line options
-V, --version: print version information
-v, --verbose: show details
-b, --brief: print less detail; used with the --detail and --examine options
-f, --force
-c, --config=: specify the configuration file; the default is /etc/mdadm/mdadm.conf
-s, --scan: scan the configuration file (/etc/mdadm/mdadm.conf) or /proc/mdstat for missing information
Options used by create or build:
-c, --chunk=: specify the chunk size in kibibytes. The default is 64.
--rounding=: Specify rounding factor for linear array (==chunk size)
-l, --level=:Set raid level.
--create available: linear, raid0, 0, stripe, raid1,1, mirror, raid4, 4, raid5, 5, raid6, 6, multipath, mp.
--build available: linear, raid0, 0, stripe.
-p, --parity=: set the parity layout for raid5: left-asymmetric, left-symmetric, right-asymmetric, right-symmetric, la, ra, ls, rs. The default is left-symmetric.
--layout=: Similar to --parity
-n, --raid-devices=: specify the number of active devices in the array; this number can only be changed with --grow
-x, --spare-devices=: Specify the number of spare devices in the initial array
-z, --size=: the amount of space (in kibibytes) to use from each device when creating a RAID1/4/5/6 array
--assume-clean: Currently only used with --build option
-R, --run: normally, if a component device appears to belong to another array or to contain a filesystem, mdadm asks for confirmation before continuing; with this option it proceeds without asking.
-f, --force: normally mdadm will not allow creating an array with only one device, and when creating a raid5 array it will initially treat one device as missing (to speed up the first resync); this option overrides that behaviour.
-a, --auto{=no,yes,md,mdp,part,p}{NN}: control whether and how the device file for the array is created (non-partitionable or partitionable).
A combined example using several of these create options is sketched below.
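Putting several of these create options together, a hypothetical raid5 creation that sets the chunk size and parity layout explicitly might look like this (all device names and values are illustrative only):
mdadm -C /dev/md0 -l 5 -n 3 -x 1 -c 256 -p ls /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# -c 256   use a 256 KiB chunk size instead of the default
# -p ls    left-symmetric parity layout (the default for raid5)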