Finally I have Ubuntu set up the way I like on the Acer easyStore h340 Windows Home Server hardware. I did not employ any drive pooling software; I tried it on Windows Home Server and didn’t quite like it. This time, I want my files stored in a more standardized, redundant, fault-tolerant setup: RAID10. Here are the steps I went through to set up my 8 x 1TB drives as two separate software RAID10 arrays, creating effectively ~4TB of redundant storage.
Disclaimer: I am new to Linux and do not know enough about what I am doing. If you follow these steps and end up with data loss, I am sorry, but I can’t be held responsible… other than that, I hope this helps you with what you need to accomplish.
I had Ubuntu 10.04 LTS installed on my Acer easyStore h340 hardware previously. I wanted to maximize the number of SATA drive bays available for data: 4 internal to the h340 and 4 in the TowerRAID external SATA enclosure. To preserve them, I ended up installing Ubuntu on an external USB hard drive (200GB). This turned out to be pretty handy, as I was also experimenting with Fedora 13 for a little while using another external USB hard drive. Anyway, after Ubuntu was set up to my satisfaction, there were a couple of additional pieces of software I needed to get from the Ubuntu Software Centre:
mdadm is the multi-disk administration tool used to create and manage the RAID arrays. You will also want the LVM tools (the lvm2 package), since the volume management commands in steps 4 through 6 below (pvcreate, vgcreate, lvcreate) come from it.
Run Ubuntu Software Centre from the Applications menu to search for and install this software.
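Alternatively, if you prefer the terminal, the same software can be installed with apt-get:

> sudo apt-get install mdadm lvm2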
The following steps are based on Kezhong’s blog, updated and expanded to account for the differences in my hardware. In the examples below, a leading “>” indicates a command typed into the terminal; do not type the “>” itself.
Step 1: Prepare the file system on the physical drives
I used Disk Utility to “Format Drive” (create a partition table) and then “Format Volume” (create a file system) on each of the 1TB drives connected to my system. I used a GUID partition table and the ext4 file system on each of my drives. After doing this I had a total of 8 drive devices ready for my RAID10 arrays.
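If you would rather prepare the drives from the terminal, parted can do the same job. Here is a sketch using /dev/sdg as an example (repeat for each drive; strictly speaking the member partitions do not need a file system, since the array itself is formatted later in step 7):

> sudo parted /dev/sdg mklabel gpt
> sudo parted /dev/sdg mkpart primary 0% 100%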
Step 2: Create the multi-disk array
Using the terminal, I elevated my privileges to the super user (root) account for convenience. Otherwise you have to use “sudo” before every command, and Linux will prompt you for the password each time.
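To become root for the session:

> sudo -i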
In this example, I will use these devices to make up my multi-disk array: /dev/sdg1, /dev/sdh1, /dev/sdi1 and /dev/sdj1. These are the formatted devices from step 1 above. I am also naming my multi-disk device “md2”.
Using this command:
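> mdadm --create /dev/md2 -l10 -n4 /dev/sd[ghij]1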
mdadm will manage creating the multi-disk array for you. The above command creates a new multi-disk device called “/dev/md2” with RAID10 (-l10) using 4 devices (-n4) mapped to “/dev/sd[g,h,i,j]1”. This could take a while depending on your hardware. For the easyStore h340 server with the 1TB WD Caviar Green drives attached to the internal SATA connectors, it took about 3 hours for mdadm to finish creating the multi-disk array.
You can check the progress of the creation using the following command:
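> cat /proc/mdstat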
When mdadm finishes, you should see the multi-disk array listed as active in the output.
Step 3: Adding the new array to mdadm.conf
Before editing the mdadm.conf file, you need to find out the UUID of the array. The following command displays detailed information about /dev/md2, including its UUID:
> mdadm --detail /dev/md2
Use gedit to edit the mdadm.conf file. In the terminal, run this command to open the file in the editor:
> gedit /etc/mdadm/mdadm.conf
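Add an ARRAY line for the new multi-disk device at the bottom of the file, filling in the UUID you obtained from the mdadm --detail command above:

ARRAY /dev/md2 level=raid10 num-devices=4 UUID=<your UUID here>

Alternatively, running “mdadm --detail --scan” prints ready-made ARRAY lines for all running arrays, which you can copy into the file.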
Step 4: Create a new physical volume
Use this command to create the physical volume for the multi-disk device:
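> pvcreate /dev/md2

Run it once for each multi-disk device; I ran it against /dev/md1 for my first array as well.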
Step 5: Create a new Volume Group
For my experimental RAID10 array, I used the following command to create the volume group called “RAIDVG”:
> vgcreate RAIDVG /dev/md1
This creates a new volume group called “RAIDVG” and adds the “/dev/md1” multi-disk device to the group.
For my second RAID10 array, I had to use a different command, as I did not want to create yet another volume group. A volume group can contain multiple devices, so this time I extended the existing volume group with the second multi-disk device.
Using this command:
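> vgextend RAIDVG /dev/md2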
This command adds “/dev/md2” to the existing volume group “RAIDVG”.
Step 6: Create a new Logical Volume
After adding the multi-disk device to the volume group, I need to map it to a logical volume. Before doing that, I need to know how much space is available to map.
Use this command:
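> vgdisplay RAIDVG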
Then look for the “Free PE / Size” line. The first number is the number of physical extents available (476934 in this example) and the second is the size available (1.82 TiB). I used the number of free physical extents in the next command to be exact; the size info is rounded for human interpretation and is not precise enough for creating a logical volume.
Use this command:
> lvcreate -n RAID10FS2 -l476934 RAIDVG
This command creates a logical volume named “RAID10FS2” using the number of physical extents specified (-l476934) from the “RAIDVG” volume group.
Step 7: Create the File System
After the logical volume is created, create the file system next.
Use the command:
> mkfs.ext4 /dev/RAIDVG/RAID10FS2
This command creates an ext4 file system on the “RAID10FS2” logical volume in the “RAIDVG” volume group. This will take some time as well.
When it finishes, mkfs prints a summary of the newly created file system.
Next, create a mount point for the newly created file system to mount to:
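> mkdir /RAID10-2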
I created a folder called “RAID10-2”, then used the following command to mount the file system to it:
> mount /dev/RAIDVG/RAID10FS2 /RAID10-2
Now you can access it via the folder “/RAID10-2”. The RAID device will not appear as a drive device the way a CD-ROM drive or an external USB hard drive does; at this point, it appears as a folder off the root: “/RAID10-2”.
Step 8: Ensure it mounts on boot
It is handy to be able to mount it using a command, but you probably want to have it available after rebooting without having to mount it manually. To do that, we need to know how the file system is mapped.
Use the command:
> df -Th
This displays the file system disk space usage, and how the file system is mapped for the logical volumes we created.
I have two mapped file systems: “/dev/mapper/RAIDVG-RAID10FS1” and “/dev/mapper/RAIDVG-RAID10FS2”.
Next, edit the /etc/fstab file to add the mapped file systems so that they will be mounted on reboot.
Use the command to open /etc/fstab in gedit:
> gedit /etc/fstab
Add the mapped file systems to the fstab file. Specify the mapped file system, the mount point, the file system type and the options. I added these lines to my fstab file:
/dev/mapper/RAIDVG-RAID10FS1 /RAID10 ext4 defaults 1 3
/dev/mapper/RAIDVG-RAID10FS2 /RAID10-2 ext4 defaults 1 3
Ensure you type in the info correctly; I had a typo, and Linux complained about a mounting error on reboot. Once you have saved the changes, the RAID volumes will remain accessible via their mount points even after rebooting.
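One way to check the fstab entries without a full reboot (my own habit, not from Kezhong’s post) is to unmount the volumes and then remount everything listed in fstab:

> umount /RAID10 /RAID10-2
> mount -a

If mount -a completes without errors, the entries should survive a reboot.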
From Disk Utility, this is what a RAID10 array looks like. For my setup, each array consists of 4 x 1TB drives, resulting in 2TB of effective redundant storage.
In Nautilus, the RAID array is accessible via the mount point just like a folder from root.
That’s all the steps you need. Hopefully this helps you with your RAID10 setup on Linux. Thanks to Kezhong for providing his excellent blog post on how to manage a RAID10 array in Linux, which helped me kick-start mine.