If you are thinking about implementing Linux software raid, here is the most important link to investigate before you start:
Linas Vepstas's raid page: https://linas.org/linux/raid.html
As of this posting (Oct 29/98), the available raid documentation is incomplete and confusing. This posting clears up problems you will encounter implementing raid0 and raid1.
I wanted to implement mirroring over striping. Striping gives good read/write performance increases, and mirroring gives backup plus a read performance increase.
I started with kernel 2.0.30 and implemented raid0 (striping). Then I upgraded my kernel to 2.0.35 and the fun began. After struggling to get raid0 working with 2.0.35, I tackled raid1. Well, guess what: throw everything you learned about raid out the window and start from scratch! A good idea is to start simple; get raid0 up and running, then add raid1. The story begins:
Linear and raid0 (striping) modes have been in the kernel since 2.x. You have to recompile your kernel with multiple device (md) support enabled. I recommend building it into the kernel to start; you will have enough problems without implementing it as a module.
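For reference, these are the options I mean, with the names as I remember them from the 2.0 kernel config (check your own Config.in if they differ):

Multiple devices driver support (CONFIG_BLK_DEV_MD)
Linear (append) mode (CONFIG_MD_LINEAR)
RAID-0 (striping) mode (CONFIG_MD_STRIPED)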
To check whether multiple device support is installed, run dmesg | more and look for lines showing the md driver loaded and raid0 registered (I can't remember the exact phrase - late at night ;-( )
Or type cat /proc/mdstat to see the status of your md devices. You should see /dev/md0 through /dev/md3, all inactive.
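From memory, the output looks something like this (the exact wording varies with the md driver version, so don't worry if yours differs slightly):

Personalities : [1 linear] [2 raid0]
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive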
Strangely, the md tools (mdtools-0.35) are not usually supplied with distributions. These are the tools required for setting up a raid array and for starting and stopping it.
You can find them in the Slackware distribution (23k in size) at:
https://sunsite.unc.edu/pub/Linux/distributions/slackware/slakware/ap1/md.tgz
Download to /usr/local/src then:
cd /
tar -zxvf /usr/local/src/md.tgz
It will put the files in the correct place:
sbin/mdadd
sbin/mdcreate
usr/etc/mdtab
install/doinst.sh
usr/man/man5/mdtab.5.gz
usr/man/man8/mdadd.8.gz
usr/man/man8/mdcreate.8.gz
usr/doc/md/COPYING
usr/doc/md/ChangeLog
usr/doc/md/README
usr/doc/md/md_FAQ

Read through the README file (ignore the warnings, of course). The documentation is quite good for kernel 2.0.30 and linear/raid0 mode. The Linux Journal (June or July 98) has an excellent article on how to implement raid0 (striping). It was what piqued my interest.
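Working from memory (the README is the authority here), the basic sequence with these tools goes something like this: mdcreate writes an entry to /etc/mdtab, mdadd -ar adds and runs everything listed there, and then you make a filesystem on the md device:

mdcreate raid0 /dev/md0 /dev/sda1 /dev/sdb1
mdadd -ar
mke2fs /dev/md0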
The Linux Gazette has another article that helps:
https://linuxgazette.net/issue17/raid.html
You should start the raid array before fsck -a runs at boot, usually in /etc/rc.d/rc.S for Slackware distributions, and should stop it in both /etc/rc.d/rc.0 and rc.6. (BTW, since those are identical files in Slackware, can't we just soft link one to the other and modify only one?) A sketch of the additions follows.
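Something like this (assuming your mdtools installed mdstop alongside mdadd; check what the package's doinst.sh set up):

# in /etc/rc.d/rc.S, before the fsck -a line:
/sbin/mdadd -ar

# in /etc/rc.d/rc.0 and /etc/rc.d/rc.6, after the filesystems are unmounted:
/sbin/mdstop -a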
To check that it is working, type cat /proc/mdstat; it should indicate what state each md device is in (e.g. /dev/md0 running raid0 on /dev/sda1 and /dev/sdb1).
Test, test, test your raid. Shut down, power up, and see if it works the way you expected.
I did some fancy copying with cp -rap to move complete directory structures onto the raid arrays, then modified /etc/fstab to include the new devices.
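For example (a hypothetical layout; substitute your own partitions and mount points), moving /home onto the array would go roughly like this:

mke2fs /dev/md0
mount /dev/md0 /mnt
cp -rap /home/. /mnt/.
umount /mnt

with a matching line such as /dev/md0 /home ext2 defaults 1 1 added to /etc/fstab.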
Swap partitions do not need to be striped with md; the kernel stripes them automatically if they are given equal priorities. Check the Software-RAID mini-HOWTO and its Bonehead questions section for details. It is amazingly simple.
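For example, giving two swap partitions the same priority in /etc/fstab makes the kernel interleave them:

/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0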
If you lose AC line power, you will lose your raid array and any data on it! You should implement a UPS backup power supply. The purpose of the UPS is to keep your system running for a short period during brownouts and power failures. The UPS should inform your system of a power failure through a serial port. A daemon runs in the background monitoring the serial port; when informed of a power failure, it waits a preset period of time (usually 5 minutes) and then performs a system shutdown. The idea is that if the power has been out for 5 minutes, it will probably stay out for a long time.
Most Linux distributions come with the basic UPS daemon powerd. Check "man powerd" for more info. It is a simple daemon, hooked in through the power-failure entries in /etc/inittab. Basically, a dumb UPS just closes a relay contact that is connected to the serial port. powerd watches for the contact to close; when it does, it warns users, can send an email to root, and shuts down the PC after a predetermined time.
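A typical pair of inittab entries for this (the timings and messages here are just examples; your distribution's defaults may differ):

# What to do when the UPS signals a power failure
pf::powerfail:/sbin/shutdown -h +5 "Power failure, shutting down in 5 minutes"
# Cancel the shutdown if the power comes back first
pg::powerokwait:/sbin/shutdown -c "Power restored, shutdown cancelled"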
I used an APC Smart-UPS that communicates through the serial port. There is an excellent daemon called apcupsd that works like a charm; it is located at the address below. Please read the notice and sympathize with the author; he has done an excellent job (kudos to the author!). The installation is painless and the documentation is excellent.
https://www.dyer.vanderbilt.edu/server/apcupsd/
Back to raid. Don't ask me why things changed between kernels - I don't have a clue. I upgraded from 2.0.30 to 2.0.35 because 2.0.35 is the latest stable release.
The mdtools compiled perfectly on my home machine (a testbed running 2.0.30) but would not compile on my work machine (upgraded to 2.0.35). I kept getting an error about MD_Version (can't remember the exact name) not being defined. After a lot of head scratching and searching, I found that md.h in the kernel source (/usr/src/linux/include/linux/md.h) contains the version number of the md driver. With kernel 2.0.30 it was ver 0.35; with 2.0.35 it is ver 0.36. Running mdadd -V will indicate the version of md that mdadd works with. So I had the wrong mdtools version. Here is the location of the correct version:
ftp://ftp.kernel.org/pub/linux/daemons/raid/raidtools-0.41.tar.gz
Download to /usr/local/src, then:
tar -zxvf raidtools-0.41.tar.gz
A new directory will be made, /usr/local/src/raidtools-0.41.
Change to the new directory and read the INSTALL file, then:
./configure
I can't remember if I had to run make and make install after this; I can't duplicate it now that I've upgraded to a newer raid patch.
You should now have new mkraid and mdadd binaries. Type mdadd -V to check that your binaries are updated; it should respond with something like "mdadd 0.3d compiled for raidtools-0.41". Then read QuickStart.RAID for the latest info. For raid0, not much has changed from previous versions.
You must patch the kernel to enable raid levels 1, 4 and 5. The patch is located at:
ftp://ftp.kernel.org/pub/linux/daemons/raid/alpha/raid0145-19981005-c-2.0.35.tz
Copy it to the /usr/src directory and uncompress the patch:
tar -zxvf raid0145-19981005-c-2.0.35.tz
Note that the patch will be looking for a /usr/src/linux-2.0.35 directory. If your 2.0.35 source is installed as /usr/src/linux, you should mv /usr/src/linux /usr/src/linux-2.0.35 and then soft link /usr/src/linux to it:
ln -s /usr/src/linux-2.0.35 /usr/src/linux
To apply the patch, in /usr/src:
patch -p0 < raid0145-19981005-C-2.0.35
(Somewhere along the way the lowercase c got changed to an uppercase C on my system - maybe after tar?)
You now get to recompile the kernel. When you select multiple devices, you will see options for raid 1, 4 and 5. So the steps are:
make menuconfig (or make config or make xconfig)
make clean
make dep
make zImage
make modules (if you are using modules)
make modules_install

Copy the new kernel to wherever your distribution looks for it (/ or /boot). I suggest keeping a base kernel that works without raid as well as the raid kernel; you can modify lilo.conf to let you select which kernel to boot. It's not difficult at all, though at first glance it looks terrifying. Check /usr/lib/lilo for good examples and documentation.
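As a sketch (the device names, paths and labels here are examples; adjust them to your setup), a lilo.conf offering both kernels might look like this:

boot = /dev/sda
prompt
timeout = 50
# known-good kernel without raid
image = /vmlinuz
  label = linux
  root = /dev/sda1
  read-only
# new kernel with the raid patch
image = /vmlinuz-raid
  label = raid
  root = /dev/sda1
  read-only

Run /sbin/lilo after editing, and type the label you want at the LILO prompt.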
Check dmesg | more to see that the md driver is loaded and raid0 and raid1 are registered. Type cat /proc/mdstat to confirm you have the new md driver; you should see 16 md devices instead of 4.
You will have to upgrade your raidtools. mdadd, /etc/mdtab and mdcreate are obsolete, along with a bunch of others; the new tools are raidstart, /etc/raidtab and mkraid. At this point the documentation is well out of date. The new tools are at:
ftp://ftp.kernel.org/pub/linux/daemons/raid/alpha/raidtools-19981005-B-0.90.tar.gz
Download to /usr/local/src, then:
tar -zxvf raidtools-19981005-B-0.90.tar.gz
This will make a new directory, /usr/local/src/raidtools-0.90. Change to it and run:
./configure
Again, I can't remember if I had to run make and make install after this.
To make a raid0 array /dev/md0 using two SCSI drives /dev/sda1 and /dev/sdb1, put this in /etc/raidtab:
# Striping example
# /dev/md0 using /dev/sda1 and /dev/sdb1
raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        nr-spare-disks        0
        chunk-size            32
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1
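Then, as best I recall from QuickStart.RAID (double-check there), you initialize the array with mkraid, make a filesystem, and use raidstart at boot in place of the old mdadd:

mkraid /dev/md0
mke2fs /dev/md0
raidstart /dev/md0    (or raidstart -a to start everything in /etc/raidtab)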
To make a raid1 array /dev/md2 using two striped pairs, /dev/md0 (/dev/sda1 + /dev/sdb1) and /dev/md1 (/dev/sdc1 + /dev/sdd1), put this in /etc/raidtab:
# Striping example
# /dev/md0 using /dev/sda1 and /dev/sdb1
raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        nr-spare-disks        0
        chunk-size            32
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1

# /dev/md1 using /dev/sdc1 and /dev/sdd1
raiddev /dev/md1
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        nr-spare-disks        0
        chunk-size            32
        device                /dev/sdc1
        raid-disk             0
        device                /dev/sdd1
        raid-disk             1

# Mirror example
# /dev/md2 using /dev/md0 and /dev/md1
raiddev /dev/md2
        raid-level            1
        nr-raid-disks         2
        persistent-superblock 1
        nr-spare-disks        0
        chunk-size            32
        device                /dev/md0
        raid-disk             0
        device                /dev/md1
        raid-disk             1
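Then initialize in order - the striped pairs must exist and be running before the mirror can be built on top of them (again, this is my recollection; QuickStart.RAID is the authority):

mkraid /dev/md0
mkraid /dev/md1
mkraid /dev/md2
mke2fs /dev/md2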
Finally, add the mirror to /etc/fstab so it mounts at boot:
/dev/md2 /raidtest ext2 defaults 1 1
Other resources that you may want to look at if you run into trouble: