Having built a couple of other Debian servers using software RAID 1, but not recalling exactly how I got it to work, I decided to actually document the results here.
So I needed to build up a system that we could dump really large drives into for some customers so they could do offsite backups. We had started doing this using FireWire drives attached to a G4 running Panther Server, but it started to get a bit messy, and sometimes FireWire buses can be a bit finicky.
We had a rackmount system that had 8 hot-swap IDE bays in it, powered by an old AMD board with 6 PCI slots, and it was perfect for what we needed. We had it at the colo for doing backups there, but the RAID card had some issues, so we had pulled it, and it had been sitting on a shelf for the past year.
I took a SATA PCI card (fake raid, don’t get me started) and mounted two 160GB SATA drives (that we had pulled from two different PowerMac G5s) into one of the internal drive cages. This gave me 2 nice big disks to create my boot system with.
Booting from an RC1 businesscard install of Debian Etch, I got to the Partition Disks section of the install. This is the really tricky part: if you don’t do things in the right order, the partitioner will not be able to set things up correctly and produce something you can actually install onto.
Here is the basic outline of what I ended up with in terms of partitions:
/boot
/
swap
However! Not all mounts are created the same.
So, let’s start with our two physical disks, sda and sdb.
On each of these I created two actual partitions:
- one small partition at the beginning (around 64MB) that will be used for the boot mount
- the rest of the disk that will be used for everything else
So I ended up with four partitions: sda1, sda2, sdb1, and sdb2. All four of these partitions are set to the partition type used for software RAID (in the installer, "physical volume for RAID").
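The installer’s partitioner does all of this through menus, but the same layout can be produced by hand. Here’s a minimal sketch using sfdisk, assuming the two disks really are /dev/sda and /dev/sdb and that roughly 64 MB is enough for /boot:

```
# Partition 1: ~64 MB at the start of the disk, type fd ("Linux raid autodetect")
# Partition 2: everything else, also type fd
sfdisk -uM /dev/sda <<EOF
,64,fd
,,fd
EOF

# Copy the identical layout onto the second disk:
sfdisk -d /dev/sda | sfdisk /dev/sdb
```

The dump-and-restore trick on the last line is handy for RAID1 setups, since both disks need matching partition tables anyway.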
Next I created two new software raid1 devices using the corresponding partitions on each disk (the mdadm equivalent is sketched after this list):
- The first raid1 device I formatted as ext3 and designated its mount point as /boot. Nothing more needs to be done with this device.
- The second raid1 device I did NOT format, but designated it to be used as a physical volume for the Logical Volume Manager (LVM). All remaining volumes will be created from this device.
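Under the hood this step is just mdadm. A sketch of the equivalent commands, assuming the installer assigns the usual /dev/md0 and /dev/md1 names:

```
# Mirror the two small partitions; this array becomes /boot:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Mirror the two big partitions; this array is handed to LVM, not formatted:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Only the /boot array gets a filesystem at this point:
mkfs.ext3 /dev/md0
```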
Proceeding into the LVM screens, I did the following (the command-line equivalents are sketched after this list):
- I created a single volume group (VG) using the single raid1 device I made from the sda2 and sdb2 partitions.
- I then created two logical volumes from this one VG: sys (most of the disk) and swap (the last 8 GB of space)
- I formatted the sys LV as ext3 and designated it as the root mount point, /
- I designated the swap LV as (surprise) swap
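For reference, here is roughly what those LVM screens boil down to on the command line. This is a sketch: the volume group name vg0 is one I made up for illustration (the installer asks you to pick one), and the sys size is approximate for a pair of 160 GB disks, so check vgdisplay for the real free space:

```
# Make the big raid1 array an LVM physical volume:
pvcreate /dev/md1

# Build one volume group on top of it:
vgcreate vg0 /dev/md1

# Carve out the two logical volumes: sys gets most of the space,
# swap gets the last 8 GB. Adjust 140G to your actual free space.
lvcreate -L 140G -n sys vg0
lvcreate -L 8G -n swap vg0

# Format each for its role:
mkfs.ext3 /dev/vg0/sys
mkswap /dev/vg0/swap
```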
Once this tree of raid1 devices and VG/LVs was in place, I had no problem installing Debian and continuing on with setting up my big drive box. I will use PureFTPd and netatalk (for reasons I will explain in another article) for the server-side daemons.
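If you want to sanity-check the whole stack once the new system boots, a few read-only commands will show it (again assuming the md0/md1 names from above):

```
# Both mirrors should list two active members, shown as [UU]:
cat /proc/mdstat

# Per-array detail for the LVM-backed mirror:
mdadm --detail /dev/md1

# The LVM stack: physical volume, volume group, logical volumes:
pvs
vgs
lvs
```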