Foreword

This page describes the layout of my server and how I have set up my RAID/LVM. Most of the install instructions, however, are taken from the Arch Wiki, and all credit for those should go to the authors there.
The references are listed at the bottom of the page. For deeper study and explanation I suggest you check them out!

Main System

MAINBOARD: Asus P3B-F
CPU: 700MHz Pentium 3
RAM: 512MB (4x128MB SD-Ram)
GFX: Matrox G4+ MA32G (MGA G400/450)
NIC: Realtek 8139
CONTROLLER: On-board dual IDE ATA controller; 2-channel SATA150 (fake)RAID controller with a Silicon Image SiI 3112 chipset

Storage hardware in the server

This section lists the storage hardware in the server and how the RAID system is configured.

Drives

Primary IDE Master (sda) - Western Digital 40GB hard drive (WDC WD400JB-00JJC)
Secondary IDE Slave (sr0) - LiteOn DVD writer

SATA ch1 (sdb) - Seagate ST3250820AS (250GB)
SATA ch2 (sdc) - Seagate ST3250820AS (250GB)

Partition Schemes

Partition setup for sda

partition  size     mountpoint       filesystem  options
sda1        150MB   /boot            ext2
sda5       4000MB   swap             swap
sda6      30000MB   /                ext3
sda7       1000MB   /home            ext3
sda8       1000MB   /var/lib/pacman  ext3        noatime

Remaining free space: 3800MB
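For reference, the matching /etc/fstab entries would look roughly like this. This is a sketch: the mount options other than noatime are assumptions, not copied from the real fstab.

```
/dev/sda1  /boot            ext2  defaults          0 2
/dev/sda5  none             swap  sw                0 0
/dev/sda6  /                ext3  defaults          0 1
/dev/sda7  /home            ext3  defaults          0 2
/dev/sda8  /var/lib/pacman  ext3  defaults,noatime  0 2
```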

Partition setup for sdb

partition  size      mountpoint  partition type
sdb1       250000MB  -           Linux RAID autodetect (fd)

Partition setup for sdc

partition  size      mountpoint  partition type
sdc1       250000MB  -           Linux RAID autodetect (fd)
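The fd-type member partitions can be created non-interactively with sfdisk. The sketch below runs against scratch image files so it is safe to try; for the real thing you would point it at /dev/sdb and /dev/sdc (destructive!).

```shell
# Sketch: create one full-size partition of type "fd" (Linux raid
# autodetect) per disk. Scratch image files stand in for /dev/sdb
# and /dev/sdc so the example is safe to run.
for img in sdb.img sdc.img; do
    truncate -s 16M "$img"
    echo ',,fd' | sfdisk --quiet "$img"
done
sfdisk --dump sdb.img   # the dump should show "type=fd"
```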

Software Raid/LVM setup

  1. First install Arch Linux on sda, then update it to the newest release available.
  2. To activate software RAID in the kernel, load the raid1 module:
    [root@server]$ modprobe raid1
  3. Create the RAID1 array:
    [root@server]$ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  4. Load the device-mapper module (used by LVM):
    [root@server]$ modprobe dm-mod
  5. Create the LVM physical volume:
    [root@server]$ lvm pvcreate /dev/md0
  6. Create the volume group:
    [root@server]$ lvm vgcreate array /dev/md0
  7. Create a logical volume (a logical drive):
    [root@server]$ lvm lvcreate --size 50G --name music array
  8. Create a filesystem on the logical volume:
    [root@server]$ mkfs.ext3 /dev/array/music

    Other logical volumes can be created the same way; just repeat steps 7 and 8 with the appropriate sizes and names.

  9. To make sure LVM is started on the next reboot, edit /etc/rc.conf and set
    USELVM="yes"
  10. Save the RAID configuration:
    [root@server]$ mdadm -D --scan >> /etc/mdadm.conf
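The last command appends one ARRAY line per array to /etc/mdadm.conf. On this setup the line would look roughly like the one below; the UUID is a placeholder, not the real one, and the exact fields vary between mdadm versions.

```
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```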

How to retrieve information on the RAID and LVM setup

To see information about the RAID:

[root@server]$ mdadm --misc --detail /dev/md0

Info on the LVM physical volumes:

[root@server]$ lvm pvscan

Info on the LVM volume groups:

[root@server]$ lvm vgscan

Info on the LVM logical volumes:

[root@server]$ lvm lvscan

Rebuilding the RAID after a failure

This is something I hope you never have to do. But just in case, here is a link to a document that, among other things, describes how you can rebuild your RAID.

Alternatively, here is the short version I used when the RAID failed (caused by me pulling the power cable on one of the drives! ;-) ).

Check the status of the RAID:

[root@server]$ mdadm --detail /dev/md0

This gives an overview of the RAID, and at the bottom you can see that one of the drives is marked as removed and faulty.

I'm not sure if it is necessary, but I unmounted all the partitions before continuing.
Then I ran the following command to add the drive back to the array:

[root@server]$ mdadm --add /dev/md0 /dev/sdc1

The RAID will begin to resync, and you can follow its progress with

[root@server]$ cat /proc/mdstat

Or

[root@server]$ mdadm --detail /dev/md0

When both drives are marked as "active sync", the rebuild is finished.
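For completeness, here is how both checks can be done from a script. The sample outputs below are illustrative stand-ins for the real /proc/mdstat and `mdadm --detail` output, not captured from this server.

```shell
# Illustrative sample of /proc/mdstat during a resync (a stand-in
# for the real file, which only exists on a system with md arrays).
cat > mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdc1[2] sdb1[0]
      244195904 blocks [2/1] [U_]
      [===>.............]  recovery = 17.4% (42650624/244195904) finish=81.1min speed=41408K/sec

unused devices: <none>
EOF

# Illustrative tail of `mdadm --detail /dev/md0` after the rebuild.
cat > detail.sample <<'EOF'
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
EOF

# Progress line while resyncing:
grep -E 'recovery|resync' mdstat.sample

# Rebuild is done once both member lines read "active sync":
grep -c 'active sync' detail.sample    # prints 2
```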

References

Installing_with_Software_RAID_or_LVM
LVM-HOWTO
Linux-Software-RAID-Tutorial
mdadm man page


