In this tutorial we will set up a RAID5 array, which stripes data across multiple drives with distributed parity, giving us redundancy. We will use Ubuntu as our Linux distribution, but the technique applies to other Linux distributions as well.
What are we trying to achieve?
We will run a server with one root disk and six extra disks. First we will create our RAID5 array with three disks, then I will show you how to expand the array by adding three more disks.
Things fail all the time, and it's not fun when hard drives break, so we want to do our best to prevent our applications from going down due to hardware failures. To achieve data redundancy, we will use three hard drives in a RAID configuration that provides us:
striping, which is the technique of segmenting logically sequential data so that consecutive segments are stored on different physical storage devices.
distributed parity, where parity data is distributed across the physical disks, with one parity block per stripe. This provides protection against a single physical disk failure; the minimum number of disks is three.
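As a quick sanity check on capacity: a RAID5 array of N disks gives (N - 1) disks' worth of usable space, since one disk's worth of space holds parity. A small shell sketch using the 10G disks from this setup:

```shell
# RAID5 usable capacity = (number_of_disks - 1) * disk_size
disk_size_gb=10

# Initial array of three disks
usable=$(( (3 - 1) * disk_size_gb ))
echo "3 disks: ${usable}G usable"    # 20G

# After growing to six disks
usable=$(( (6 - 1) * disk_size_gb ))
echo "6 disks: ${usable}G usable"    # 50G
```

This is why df reports roughly 50G on the six-disk array at the end of this tutorial.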
This is what a RAID5 array looks like (image from diskpart.com):
Hardware Overview
We will have a Linux server with one root disk and six extra disks:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 10G 0 disk
xvdc 202:32 0 10G 0 disk
xvdd 202:48 0 10G 0 disk
xvde 202:64 0 10G 0 disk
xvdf 202:80 0 10G 0 disk
xvdg 202:96 0 10G 0 disk
Dependencies
We require mdadm to create our raid configuration:
$ sudo apt update
$ sudo apt install mdadm -y
Format Disks
First we will partition the following disks: /dev/xvdb, /dev/xvdc and /dev/xvdd. I will demonstrate the process for one disk; repeat it for the others as well:
$ fdisk /dev/xvdc
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
The old ext4 signature will be removed by a write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x26a2d2f6.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-20971519, default 20971519):

Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
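Instead of repeating the interactive fdisk session for each disk, the same layout can be applied non-interactively. This is a hedged sketch using sfdisk (not part of the original steps), assuming the remaining disks are empty:

```shell
# Create one whole-disk partition of type 'fd' (Linux raid autodetect)
# on each remaining disk. Assumes the disks are blank.
for disk in /dev/xvdb /dev/xvdd; do
  echo 'type=fd' | sudo sfdisk "$disk"
done
```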
Create RAID5 Array
Using mdadm, create the /dev/md0 device by specifying the RAID level and the disks that we want to add to the array:
$ mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/xvdb1 /dev/xvdc1 /dev/xvdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Now that our device has been created, we can monitor the progress of the initial sync:
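A typical way to watch the sync progress is via /proc/mdstat; the output below is illustrative, not from the original run. Before adding the array to /etc/fstab we also need a filesystem on it; ext4 is assumed here, since that is what the fstab entry below uses:

```shell
# Watch the RAID5 initial sync (illustrative output)
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 xvdd1[3] xvdc1[1] xvdb1[0]
      20955136 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

# Create an ext4 filesystem on the array and mount it
$ sudo mkfs.ext4 /dev/md0
$ sudo mount /dev/md0 /mnt
```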
To persist the device across reboots, add it to the /etc/fstab file:
$ cat /etc/fstab
/dev/md0 /mnt ext4 defaults 0 0
Now our filesystem which is mounted at /mnt is ready to be used.
RAID Configuration (across reboots)
By default RAID doesn't have a config file, so we need to save it manually. If this step is skipped, the RAID device may not come back as md0 after a reboot, but as something else (such as md127).
We therefore save the configuration so that it persists across reboots: at boot time it is loaded into the kernel and the RAID array is assembled.
Note: Saving the configuration keeps the RAID level and the md0 device name stable.
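The commands for this step are not shown in the text; a common way to do it on Ubuntu (a hedged sketch, using the paths the mdadm package installs by default) is:

```shell
# Append the current array definition to mdadm's config file
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the config is available at boot
$ sudo update-initramfs -u
```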
Adding Spare Devices
Earlier I mentioned that we have spare disks that we can use to expand our RAID device. After they have been partitioned in the same way as before, we can add them as spare devices to our RAID setup:
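The commands for adding the spares and growing the array are missing from the text; assuming the remaining disks were partitioned as /dev/xvde1, /dev/xvdf1 and /dev/xvdg1, a typical sequence would be:

```shell
# Add the three new partitions to the array (they join as spares)
$ sudo mdadm --add /dev/md0 /dev/xvde1 /dev/xvdf1 /dev/xvdg1

# Grow the array so the spares become active members
$ sudo mdadm --grow /dev/md0 --raid-devices=6
```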
Once we have added the spares and grown our device, we need to run integrity checks, and then we can resize the volume. But first, we need to unmount our filesystem:
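The unmount and check commands themselves are not shown in the text; a typical sequence (hedged) before resizing would be:

```shell
# Unmount the filesystem, then run a forced integrity check
$ sudo umount /mnt
$ sudo e2fsck -f /dev/md0
```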
$ resize2fs /dev/md0
resize2fs 1.45.5 (07-Jan-2020)
Resizing the filesystem on /dev/md0 to 13094400 (4k) blocks.
The filesystem on /dev/md0 is now 13094400 (4k) blocks long.
Then we remount our filesystem:
$ mount /dev/md0 /mnt
After the filesystem has been mounted, we can view the disk size and confirm that the size increased:
$ df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/md0 50G 52M 47G 1% /mnt
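Beyond df, the state of the array itself can be inspected with mdadm; this invocation is not shown in the original but is the usual check:

```shell
# Show array status: level, device count, sync state, member disks
$ sudo mdadm --detail /dev/md0
```

Look for "State : clean" and "Raid Devices : 6" in the output to confirm the grow completed.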
Thank You
Thanks for reading! Feel free to check out my website, subscribe to my newsletter, or follow me at @ruanbekker on Twitter.