In RAID 5, data is striped across multiple drives with distributed parity. Striping with distributed parity means that both the data and the parity information are spread across all member disks, which provides good data redundancy.

This RAID level requires a minimum of three hard drives. RAID 5 is widely used in large-scale production environments because it is cost-effective and provides both performance and redundancy.
What is parity?
Parity is the simplest common method of detecting errors in data storage. Parity information is stored on every disk: for example, with four disks, a disk's worth of space is divided across all four disks to hold parity. If any one disk fails, the data can still be rebuilt from the parity information on the remaining disks after the failed disk is replaced.
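As a simple illustration (my own example, not from the original setup): RAID 5 parity is computed with XOR. If one data block holds 1010 and another holds 1100, the parity block stores 1010 XOR 1100 = 0110. If the disk holding 1100 fails, its contents can be recomputed as 1010 XOR 0110 = 1100, which is exactly what the rebuild process does after you replace the failed disk.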
Pros and cons of RAID 5
- Offers better performance.
- Provides redundancy and fault tolerance.
- Supports hot spare options.
- One disk's worth of capacity is lost to parity information (see the capacity example after this list).
- No data is lost if a single disk fails; the array can be rebuilt from parity after the failed disk is replaced.
- Well suited to transaction-oriented environments, since reads are faster.
- Writes are slower because of the parity overhead.
- Rebuilding takes a long time.
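For example, with the three 20 GB disks used later in this guide, the usable capacity is roughly (3 − 1) × 20 GB = 40 GB; the equivalent of one disk is consumed by the distributed parity.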
Requirements
A minimum of 3 hard drives is required to create RAID 5; you can add more disks only if you have a dedicated hardware RAID controller with multiple ports. Here, we are using software RAID and the ‘mdadm‘ package to create the array.
mdadm is a package that allows us to configure and manage RAID devices on Linux. By default there is no configuration file for the RAID; after creating and configuring the array, we must save its configuration manually to a separate file called mdadm.conf.
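Before proceeding, you can quickly confirm that mdadm is available and check its version (a simple sanity check, not part of the original steps):
# mdadm --version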
Before proceeding, I suggest you review the following articles to understand the basics of RAID on Linux.
- RAID Basics on Linux – Part 1
- Creating RAID 0 (Stripe) on Linux – Part 2
- Configuring RAID 1 (Mirroring) on Linux – Part 3

My Server Setup
Operating System : CentOS 6.5 Final
IP Address       : 192.168.0.227
Hostname         : rd5.tecmintlocal.com
Disk 1 [20GB]    : /dev/sdb
Disk 2 [20GB]    : /dev/sdc
Disk 3 [20GB]    : /dev/sdd
This article is Part 4 of a 9-tutorial RAID series. Here we are going to configure software RAID 5 with distributed parity on Linux systems or servers using three 20GB disks named /dev/sdb, /dev/sdc and /dev/sdd.
Step 1: Installing mdadm and verifying drives
1. As we said earlier, we are using the CentOS 6.5 Final release for this RAID configuration, but the same steps can be followed on any Linux-based distribution.
# lsb_release -a
# ifconfig | grep inet
2. If you are following our RAID series, we assume you have already installed the ‘mdadm‘ package; if not, use the appropriate command below for your Linux distribution to install it.
# yum install mdadm        [on RedHat systems]
# apt-get install mdadm    [on Debian systems]
3. After the installation of the ‘mdadm‘ package, let’s list the three 20 GB disks that we have added to our system using the ‘fdisk‘ command.
# fdisk -l | grep sd

4. Now it is time to scan the three connected drives for any existing RAID blocks using the following command.
# mdadm -E /dev/sd[b-d]
or
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd

Note: The output above shows that no superblock has been detected yet, so there is no RAID defined on any of the three drives. Let's start creating one now.
Step 2: Partition the disks for RAID
5. First, we need to partition the disks (/dev/sdb, /dev/sdc and /dev/sdd) before adding them to a RAID, so let's define the partitions using the ‘fdisk’ command before proceeding to the next steps.
# fdisk /dev/sdb
# fdisk /dev/sdc
# fdisk /dev/sdd

Create Partition /dev/sdb
Follow the instructions below to create a partition on the /dev/sdb drive.

- Press ‘n’ to create a new partition.
- Then choose ‘P’ for the primary partition. Here we are choosing Primary because there are no defined partitions yet.
- Then choose ‘1’ to be the first partition. By default, it will be 1.
- Here, for the cylinder size, we don't have to specify a size because we need the entire disk for RAID, so just press Enter twice to accept the default full size.
- Next, press ‘p’ to print the created partition.
- Press ‘t’ to change the type; if you need to see all the available types, press ‘L’.
- Here, we are selecting ‘fd’, the Linux raid autodetect type.
- Then press ‘p’ again to print and review the changes we have made.
- Use ‘w’ to write the changes.
Note: We need to follow the steps mentioned above to create partitions on the sdc and sdd drives as well.

Create Partitions /dev/sdc and /dev/sdd

Now partition the sdc and sdd drives by following the same steps as above.

# fdisk /dev/sdc
# fdisk /dev/sdd
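If you prefer to script this rather than repeat the interactive fdisk session for each disk, a non-interactive tool such as parted should produce an equivalent RAID partition. This is only a sketch (not part of the original article); adjust the device names to your environment and double-check before running it on disks that hold data.
# parted -s /dev/sdc mklabel msdos mkpart primary 0% 100% set 1 raid on
# parted -s /dev/sdd mklabel msdos mkpart primary 0% 100% set 1 raid on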

6. After creating the partitions, check for the changes on all three drives sdb, sdc, and sdd.
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd
or
# mdadm -E /dev/sd[b-d]

Note: In the output above, the partition type is shown as fd, that is, Linux RAID autodetect.
7. Now check for RAID superblocks on the newly created partitions. If no superblocks are detected, we can move forward and create a new RAID 5 configuration on these drives.

Step 3: Create the md Device md0

8. Now create a RAID device ‘md0’ (i.e. /dev/md0) and apply the RAID level to all the newly created partitions (sdb1, sdc1 and sdd1) using the following command.
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
or
# mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1
9. After creating the RAID device, check and verify the array, the included devices, and the RAID level from the mdstat output.
# cat /proc/mdstat

If you want to monitor the current build process, you can use the ‘watch’ command; just run ‘cat /proc/mdstat’ through watch, and the screen will refresh every second.
# watch -n1 cat /proc/mdstat
10. After creating the RAID, verify the member devices using the following command.
# mdadm -E /dev/sd[b-d]1

Note: The output of the above command will be a bit long, as it prints the information for all three drives.
11. Next, check the RAID array to verify that the devices we included are active and that re-syncing has started.
# mdadm --detail /dev/md0

Step 4: Create the file system for md0
12. Create a file system for device ‘md0‘ using ext4 before mounting.
# mkfs.ext4 /dev/md0
13. Now create a directory under ‘/mnt’, mount the newly created file system on /mnt/raid5, and check the files at the mount point; you will see the lost+found directory.
# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5/
# ls -l /mnt/raid5/
14. Create some files under the mount point /mnt/raid5 and add some text to one of the files to verify the contents.
# touch /mnt/raid5/raid5_tecmint_{1..5}
# ls -l /mnt/raid5/
# echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1
# cat /mnt/raid5/raid5_tecmint_1
# cat /proc/mdstat
15. We need to add an entry in fstab, otherwise the mount point will not come back after a system reboot. To add the entry, edit the fstab file and append the following line as shown below. The mount point will vary depending on your environment.
# vim /etc/fstab

/dev/md0                /mnt/raid5              ext4    defaults        0 0
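Optionally (not part of the original steps), you can mount the array by its filesystem UUID instead of the /dev/md0 device name, which is more robust if the md device number ever changes. The UUID below is only a placeholder; take the real value from blkid.
# blkid /dev/md0
UUID=<your-uuid-here>   /mnt/raid5              ext4    defaults        0 0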
16. Next, run the ‘mount -av‘ command to check for errors in the fstab entry.
# mount -av
Step 5: Save the RAID 5 Configuration

17. As mentioned earlier in the requirements section, by default RAID has no configuration file, so we have to save it manually. If this step is not followed, the RAID device will not come up at md0 after a reboot; it will appear under some other random number.
Therefore, we must save the configuration before the system restarts. If the configuration is saved, it will be read during the system reboot and the RAID array will be assembled as well.
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
Note: Saving the configuration keeps the array consistently available as the md0 device across reboots.
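The line appended to /etc/mdadm.conf typically looks like the following ARRAY entry (the name and UUID shown here are only illustrative; yours will differ):
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=rd5.tecmintlocal.com:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1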
Step 6: Add Spare Drives
18. What is the use of adding a spare drive? If any disk in the array fails, the spare drive is activated automatically, the rebuild process starts, and the data is synchronized from the other disks, so redundancy is maintained.
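As a quick sketch (the spare partition /dev/sde1 here is hypothetical; the detailed procedure is covered in the article linked below), a spare can be added to the running array and verified with:
# mdadm --add /dev/md0 /dev/sde1
# mdadm --detail /dev/md0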
For more instructions on how to add a spare drive and check RAID 5 fault tolerance, read Step 6 and Step 7 in the following article.
- Add Spare Drive to Raid 5 Setup
Conclusion
Here in this article, we have seen how to set up RAID 5 using three disks. In upcoming articles, we will look at how to troubleshoot when a disk fails in RAID 5 and how to replace it for recovery.