Linux RAID

The main goal of RAID (Redundant Array of Inexpensive Disks) is to combine multiple inexpensive, small disk drives into an array of disks that provides redundancy, lower latency, higher bandwidth, and a better ability to recover from hard disk crashes than a single large, expensive drive. This array of drives appears to the system as a single drive.

RAID can be implemented via hardware devices such as RAID controllers or via software controlled by the Linux kernel. This chapter focuses on software RAID, where the Linux kernel uses the MD driver, which makes the RAID solution hardware independent. Software RAID performance depends directly on the system CPU and load.
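As a quick check (a sketch only; the output depends on which MD modules are loaded on the running system), the RAID levels currently known to the kernel are listed on the 'Personalities' line of /proc/mdstat:

$ cat /proc/mdstat     # the Personalities line lists the RAID levels the running kernel supports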

RAID Levels

Several levels of software RAID are supported by CentOS/RHEL systems. This chapter covers levels 0, 1, 5, and 6.

RAID 0
It requires a minimum of two disks. Read/write access to the array is faster because it is performed in parallel on all array components, and the information is striped across all array members without providing redundancy (parity). The total storage capacity of the array is the sum of the capacities of all members, and if one disk crashes the information it contains is lost. RAID 0 is also known as striping without parity.
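For illustration only (a sketch, assuming two prepared RAID partitions sdb1 and sdc1, and using the same mdadm syntax shown later in this chapter), a RAID 0 array would be created as:

$ mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1     # striping only, no redundancy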

RAID 1
It requires a minimum of two identically sized disks. The same information is written to all array members, so write performance is lower than RAID 0, but in exchange it provides redundancy: if one disk crashes, the information can be recovered from the other disk. The total storage capacity of the array is the capacity of a single member; the other member holds the mirror copy that implements the redundancy. RAID 1 is also known as disk mirroring.

RAID 5
It requires a minimum of three identically sized disks. The parity is striped across all array components, so if one disk crashes the information can be recovered using the parity stored on the remaining disks. If two disks crash, all the array information is lost. The total storage capacity of the array is the sum of the capacities of all members minus the capacity of one disk, which is used to store the parity; for example, three 100M partitions give (3 - 1) x 100M = 200M of usable space. RAID 5 provides the same redundancy as RAID 1 with higher usable capacity. RAID 5 is also known as disk striping with parity.

RAID 6
It requires a minimum of four identically sized disks. It uses two levels of parity, so the information can be recovered even if two array members crash.
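A sketch of the equivalent creation command for RAID 6, assuming four prepared RAID partitions sdb1, sdc1, sdd1 and sde1:

$ mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1     # tolerates two failed members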

Spare disks
In all RAID levels additional disks can be added for failover: the spare disks. When one member of the array fails, it is marked as bad and removed from the array; a spare disk is then automatically added to the array and the rebuild starts immediately, with no downtime.
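A spare can also be added to an array that is already running; the example below is a sketch that assumes an existing array /dev/md0 and an unused RAID partition /dev/sde1:

$ mdadm /dev/md0 --add /dev/sde1     # becomes a hot spare when the array is already complete
$ mdadm --detail /dev/md0            # the new device is listed with the 'spare' state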

RAID Building

* The first step in creating a RAID array is to create the equally sized disk partitions that will be the array members, setting their partition type to RAID ('fd') with the fdisk command. For example, to create a RAID 1 array with two 100M partitions, sdb1 and sdc1:

$ fdisk /dev/sdb

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
First cylinder (1-130, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130): +100M

Command (m for help): t
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Repeat the same operation for disk sdc. The final result is two identical 100M RAID partitions, sdb1 and sdc1, ready to form a RAID array:

$ fdisk -l

Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device      Boot      Start      End      Blocks      Id      System
/dev/sdb1                  1          13      104391        fd      Linux raid autodetect

Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device      Boot      Start      End      Blocks      Id      System
/dev/sdc1                  1          13      104391        fd      Linux raid autodetect
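As an alternative to the interactive fdisk dialogue, the second partition can be created non-interactively; the parted one-liner below is only a sketch (the device name and sizes are assumptions, and mklabel wipes the existing partition table of /dev/sdc):

$ parted -s /dev/sdc mklabel msdos mkpart primary 1MiB 101MiB set 1 raid on
$ fdisk -l /dev/sdc     # verify that the new partition is of type 'Linux raid autodetect'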



* The next step is to create the RAID 1 array from sdb1 and sdc1 using the mdadm command:

$ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: array /dev/md0 started.


* Verify the RAID status:

$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      803136 blocks [2/2] [UU]

unused devices: <none>
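At this point the array can be inspected in more detail and, optionally, recorded in the mdadm configuration file so that it is assembled automatically at boot. This is a sketch; /etc/mdadm.conf is the usual path on CentOS/RHEL, other distributions may use /etc/mdadm/mdadm.conf:

$ mdadm --detail /dev/md0                       # level, member devices, state, spares, UUID
$ mdadm --detail --scan >> /etc/mdadm.conf      # append an ARRAY line for automatic assembly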



* Create a filesystem on the RAID array using the mkfs command:

$ mkfs.ext4 /dev/md0

mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
26104 inodes, 104320 blocks
5216 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
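For instance, those periodic checks can be tuned or disabled with tune2fs (a sketch; whether to disable them is a local policy decision):

$ tune2fs -c 0 -i 0 /dev/md0     # -c 0 disables the mount-count check, -i 0 the time-based check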



* Mount the partition:

$ mount /dev/md0 /mnt
$ df -h
...
/dev/md0 99M 5.6M 89M 6% /mnt
...

/dev/md0 RAID-1 100MB mounted on /mnt
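To make the mount persistent across reboots, an entry can be added to /etc/fstab; the line below is a sketch and the mount point and options are assumptions:

/dev/md0    /mnt    ext4    defaults    0 2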

mdadm howto

The mdadm command can be used to manage the MD devices in a software RAID setup:

* Create a RAID-5 array:

$ mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

* Create a RAID-5 array with one spare partition, sde1:

$ mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

* Stop and remove a RAID array:

$ mdadm --stop /dev/md0

* Mark the sdb1 partition as failed and remove it from the RAID array:

$ mdadm --verbose /dev/md0 -f /dev/sdb1 -r /dev/sdb1

* Add the sdb1 partition back to the RAID array and start the array reconstruction:

$ mdadm --verbose /dev/md0 -a /dev/sdb1
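
Two further operations often accompany the ones above; both lines are sketches, and the mail address is an assumption:

* Monitor all arrays in the background and send a mail report on failure events:

$ mdadm --monitor --scan --daemonise --mail=root@localhost

* Wipe the MD superblock from a partition after its array has been stopped, so the partition can be reused:

$ mdadm --zero-superblock /dev/sdb1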

Questions

1.- RAID-0 tolerates the failure of one of the array partitions (true/false)

2.- RAID-5 requires at least three equal-size partitions in order to provide redundancy (true/false)

3.- A RAID array can be built from partitions whose type is 'Linux' instead of 'Linux raid autodetect' (true/false)

4.- Which command must be used in order to remove the sdc1 partition from the /dev/md0 RAID device?

5.- Which command must be used in order to add the sdc1 partition to the /dev/md0 RAID device?

6.- Only ext4 filesystems can be created on software RAID devices (true/false)

7.- Which command must be used in order to remove the /dev/md0 RAID array?

8.- Which command shows the status of all software RAID arrays?

9.- Which of the following commands can be used to monitor the /dev/md0 RAID array?
A - cat /proc/mdstat
B - mdadm --detail /dev/md0
C - Both of them
D - None of them

10.- Which of the following is not a supported software RAID level ?
A - RAID 3
B - RAID 4
C - RAID 6
D - RAID 10

Labs

1.- Create a RAID-5 array using 400M partitions on disk sdb, with one spare partition. Create an ext4 filesystem on the RAID-5 array, mount it on /mnt and copy the contents of /tmp to /mnt.

2.- Mark one partition of the previous RAID-5 array as failed and remove it from the array. Verify that the spare partition has been added to the array automatically and that no data has been lost.

3.- Add the previously removed partition back to the RAID-5 array and verify the result.
