This page describes the layout of my server and how I have set up my RAID/LVM. Most of the install instructions, however, are taken from the Arch wiki, and all credit should go to the authors there.
The references are listed at the bottom of the page. For deeper study and explanation I suggest you check them out!
MAINBOARD: Asus P3B-F
CPU: 700MHz Pentium 3
RAM: 512MB (4x128MB SD-Ram)
GFX: Matrox G4+ MA32G (MGA G400/450)
NIC: Realtek 8139
CONTROLLER: On-board dual IDE ATA controller, 2-channel SATA150 fakeRAID controller with a Silicon Image SiI 3112 chipset
Storage hardware in the server, and how the RAID system is configured.
Primary Master, IDE (sda) - Western Digital 40GB hard drive (WDC WD400JB-00JJC)
Secondary Slave, IDE (sr0) - LiteOn DVD writer
SATA ch1 (sdb) - Seagate ST3250820AS
SATA ch2 (sdc) - Seagate ST3250820AS
Partition setup for sda
partition     size     mountpoint       filesystem  options
sda1          150MB    /boot            ext2
sda5          4000MB   swap             swap
sda6          30000MB  /                ext3
sda7          1000MB   /home            ext3
sda8          1000MB   /var/lib/pacman  ext3        noatime
(free space)  3800MB
Partition setup for sdb
partition  size      filesystem
sdb1       250000MB  Linux raid autodetect (fd)
Partition setup for sdc
partition  size      filesystem
sdc1       250000MB  Linux raid autodetect (fd)
Software Raid/LVM setup
- First install Arch Linux on sda, then update it to the newest release available.
- To activate software RAID support in the kernel, run
[root@server]$ modprobe raid1
- Create the RAID1 array with
[root@server]$ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
- Activate the device-mapper (used by LVM) with
[root@server]$ modprobe dm-mod
- Create physical LVM volume
[root@server]$ lvm pvcreate /dev/md0
- Create Volume group
[root@server]$ lvm vgcreate array /dev/md0
- Create logical volume (logical drive)
[root@server]$ lvm lvcreate --size 50G --name music array
- Create filesystem on logical volume
[root@server]$ mkfs.ext3 /dev/array/music
Other logical volumes can be created now as well. Just repeat the lvcreate and mkfs.ext3 steps with the appropriate sizes and names.
- To make sure LVM works on the next reboot, edit /etc/rc.conf and enable LVM at boot.
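On Arch installs of that era this meant turning on the USELVM switch in rc.conf; a sketch of the relevant line (check your own rc.conf, the exact layout may differ):

```shell
# /etc/rc.conf - make rc.sysinit activate LVM volume groups at boot
USELVM="yes"
```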
- Save the raid configuration with
[root@server]$ mdadm -D --scan >> /etc/mdadm.conf
How to retrieve information on the raid and LVM setup
To see information about the raid
Info on the physical LVM volume
Info on the LVM volume group
Info on the logical LVM volume
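For reference, the standard commands behind each of these, using the device and volume names from this setup (array, music) - treat this as a sketch:

```shell
# RAID overview and per-member state
cat /proc/mdstat
mdadm --detail /dev/md0

# Physical volume, volume group, and logical volume details
lvm pvdisplay /dev/md0
lvm vgdisplay array
lvm lvdisplay /dev/array/music
```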
Rebuilding the Raid after a failure
This is something I hope you never have to do. But just in case, here is a link to a document that, among other things, describes how to rebuild your RAID.
Alternatively, here is the short version I used when the RAID failed (due to me pulling the power cable on one of the drives! ;-) ).
Check the status on the Raid
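The status check here is presumably mdadm's detail view (a sketch, using md0 as above):

```shell
mdadm --detail /dev/md0
```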
This gives an overview of the RAID; at the bottom you can see that one of the drives is marked as removed and faulty.
I'm not sure if it is necessary, but I unmounted all the partitions before continuing.
Then I ran the following command to add the drive back to the array.
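A sketch of that command, assuming sdc1 was the member that dropped out (substitute your own failed partition):

```shell
# Re-add the failed member; mdadm will start resyncing it from the good disk
mdadm /dev/md0 --add /dev/sdc1
```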
The Raid will begin to resync, and you can follow its progress with
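For example (a sketch):

```shell
# Refreshes every 2 seconds; shows the resync percentage and ETA
watch cat /proc/mdstat
```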
When both drives are marked as "active sync", the rebuild is finished.