"Linux Gazette...making Linux just a little more fun!"


Quick and Dirty RAID Information

By Eugene Blanchard


If you are thinking about implementing Linux software raid, here is the most important link that you should investigate before you start:

Linas Vepstas' raid page: https://linas.org/linux/raid.html

As of this writing (Oct 29/98), the available raid documentation is incomplete and confusing. This article clears up problems that you will encounter implementing raid0 and raid1.

I wanted to implement mirroring over striping (raid1 over raid0). The striping gives good read/write performance, and the mirroring gives redundancy plus a read performance increase.

I started with kernel 2.0.30 and implemented raid0 (striping). Then I upgraded my kernel to 2.0.35 and the fun began. After struggling to get raid0 working with 2.0.35, I tackled raid1. Well, guess what, throw everything that you learned about raid out the window and start from scratch! A good idea is to start simple, get raid0 up and running then add raid1. The story begins:

Raid0 (striping) with kernel 2.0.30

Linear and raid0 (striping) have been part of the kernel since the 2.x series. You have to recompile your kernel with multiple devices (md) support enabled. I recommend compiling it into the kernel to start; you will have enough problems without also implementing it as a module.

To check whether you have multiple devices support installed, run dmesg | more and look to see if the md driver was loaded and raid0 registered (I can't remember the exact phrase - late at night ;-( )

Or type cat /proc/mdstat to see the status of your md devices. You should see /dev/md0 to /dev/md3 inactive.

Strangely, the md tools (mdtools-0.35) are not usually supplied with distributions. These are the tools required for setting up the raid array and for starting and stopping it.

You can find them in the Slackware distribution (23k in size) at:

https://sunsite.unc.edu/pub/Linux/distributions/slackware/slakware/ap1/md.tgz

Download to /usr/local/src then:

 
cd /  
tar -zxvf /usr/local/src/md.tgz

Extracting from the root directory puts the files in their correct places:

sbin/mdadd
sbin/mdcreate
usr/etc/mdtab
install/doinst.sh
usr/man/man5/mdtab.5.gz
usr/man/man8/mdadd.8.gz
usr/man/man8/mdcreate.8.gz
usr/doc/md/COPYING
usr/doc/md/ChangeLog
usr/doc/md/README
usr/doc/md/md_FAQ
Read through the README file (ignoring the warnings, of course). The documentation is quite good for kernel 2.0.30 and linear/raid0 mode. The Linux Journal (June or July 98) has an excellent article on how to implement raid0 (striping); it was what piqued my interest.

The Linux Gazette has another article that helps:
https://linuxgazette.net/issue17/raid.html

You should start the raid array before fsck -a (usually located in /etc/rc.d/rc.S for Slackware distributions) and stop the raid array in both /etc/rc.d/rc.0 and rc.6. A sketch of where these lines go follows below. (BTW, since they are identical files in Slackware, can't we just softlink one to the other and modify only one?)
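
For example, here is a minimal sketch of the boot-time addition, assuming the mdadd -ar form described in the mdtools README (add and run every array listed in /etc/mdtab); the matching stop command (mdstop -a in later raidtools) may be named differently in your version, so check the README for the exact commands:

# In /etc/rc.d/rc.S, just before the filesystem check (a sketch, not verbatim):
/sbin/mdadd -ar         # add and run all arrays listed in /etc/mdtab
fsck -a

# In /etc/rc.d/rc.0 and rc.6, after the filesystems are unmounted:
/sbin/mdstop -a         # stop all running md devices (command name may vary)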

To check whether it is working, type cat /proc/mdstat; it should indicate what state each md device is in (for example, /dev/md0 running raid0 using /dev/sda1 and /dev/sdb1).

Test, test, test your raid. Shut down, power up, and see if it works like you expected.

I did some fancy copying using the cp -rap switches to copy complete directory structures to the raid arrays, then modified /etc/fstab to include the new devices.

Swap partitions do not need to be put on the raid: the kernel stripes swap across partitions automatically if they are given equal priority in /etc/fstab. Check the Software-RAID mini-HOWTO and its Bonehead question section for details. It is amazingly simple.
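
As an illustration, here is what equal-priority swap entries might look like in /etc/fstab (the device names below are just placeholders for whatever swap partitions you actually have):

# /etc/fstab - give both swap partitions the same priority so the
# kernel interleaves (stripes) swap across them
/dev/sda2    none    swap    sw,pri=1    0 0
/dev/sdb2    none    swap    sw,pri=1    0 0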

Implement UPS NOW!

If you lose power (AC line), you will lose your raid array and any data that is on it! You should implement a UPS backup power supply. The purpose of the UPS is to keep your system running for a short period of time during brownouts and power failures. The UPS should inform your system that the power has failed through a serial port. A daemon runs in the background and monitors the serial port. When it is informed that there is a power failure, it waits a preset period of time (usually 5 minutes) and then performs a system shutdown. The idea is that after 5 minutes of no power, the power will probably be down for a long time.

Most Linux distributions come with the basic UPS daemon powerd; check "man powerd" for more info. It is a simple daemon that hooks into the /etc/inittab entries for what happens when the power fails. Basically, a dumb UPS just closes a relay contact that is connected to the serial port. powerd monitors the port to see if the contact has closed; if it has, it warns users, can send an email to root, and shuts down the PC after a predetermined time.
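
For reference, the power-failure hooks in /etc/inittab usually look something like the following sketch (the delay and messages are up to you):

# /etc/inittab power-failure entries (a sketch - adjust the delay to taste)
pf::powerfail:/sbin/shutdown -h +5 "Power failure, shutting down in 5 minutes"
pg::powerokwait:/sbin/shutdown -c "Power restored, shutdown cancelled"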

I used an APC Smart-UPS that communicates through the serial port. There is an excellent daemon called apcupsd that works like a charm; it is located at the address below. Please read the notice and sympathize with the author - he has done an excellent job (kudos to the author!). The installation is painless and the documentation is excellent.

https://www.dyer.vanderbilt.edu/server/apcupsd/

RAID0 and 2.0.31 to 2.0.34

Don't have a clue. I upgraded from 2.0.30 to 2.0.35 because 2.0.35 is the latest stable release.

RAID0 and Kernel 2.0.35

The mdtools compiled perfectly on my home machine (a testbed running 2.0.30) but would not compile on my work machine (upgraded to 2.0.35). I kept getting an error about MD_Version (can't remember the exact name) not being defined. After a lot of head scratching and searching, I found that /usr/src/include/md.h contains the version number of the md driver. With kernel 2.0.30 it was ver 0.35; with 2.0.35 it is ver 0.36. If you run "mdadd -V", it will indicate the version of md that mdadd will work with. So I had the wrong mdtools version. Here is the location of the correct version:

ftp://ftp.kernel.org/pub/linux/daemons/raid/raidtools-0.41.tar.gz

Download to /usr/local/src then

 
tar -zxvf raidtools-0.41.tar.gz
A new directory, /usr/local/src/raidtools-0.41, will be created.

Change to the new directory and read the INSTALL file, then

./configure   
I can't remember if I had to then make and make install after this. I can't duplicate this now that I've upgraded to a new raid patch.
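
If a build step turns out to be needed, the conventional sequence for a configure-based source tree would be something like this sketch (your tree may only need the configure step):

cd /usr/local/src/raidtools-0.41
./configure          # generate the Makefile for your system
make                 # build mkraid, mdadd and friends
make install         # install the new binaries (run as root)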

You should now have new mkraid and mdadd binaries. Type mdadd -V to check that your binaries are updated; it should respond with something like "mdadd 0.3d compiled for raidtools-0.41". Then read QuickStart.RAID for the latest info. For raid0, not much has changed from the previous versions.

RAID1 and Kernel 2.0.35

You must patch the kernel to enable RAID 1, 4 and 5. The patch is located at

ftp://ftp.kernel.org/pub/linux/daemons/raid/alpha/raid0145-19981005-c-2.0.35.tz

Copy to /usr/src directory and uncompress the patch:

tar -zxvf raid0145-19981005-c-2.0.35.tz
Note that the patch will be looking for the /usr/src/linux-2.0.35 directory. If you have your 2.0.35 source installed as /usr/src/linux, you should rename it and soft link /usr/src/linux to it:

mv /usr/src/linux /usr/src/linux-2.0.35
ln -s /usr/src/linux-2.0.35 /usr/src/linux

To apply the patch, in /usr/src:

patch -p0 < raid0145-19981005-C-2.0.35 
(Somewhere along the way the lowercase c in the file name got changed to an uppercase C on my system - maybe by tar? Check the actual file name before running patch.)

You now get to recompile the kernel. When you select multiple devices, you will see options for raid 1, 4 and 5 available. The steps are:

make menuconfig (or config or xconfig)
make clean
make dep
make zImage
make modules  		(if you are using modules)
make modules_install
Copy the new kernel to wherever your distribution looks for it (/ or /boot). I suggest that you keep a base kernel that works without raid and then a raid kernel. You can modify lilo.conf to allow you to select which kernel you want to boot. It's not difficult at all, but at first glance it looks terrifying. Check /usr/lib/lilo for good examples and documentation.
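
Here is a rough sketch of what a two-kernel /etc/lilo.conf might look like; the image names, root device and boot device are placeholders for your own setup (remember to rerun lilo after editing):

# /etc/lilo.conf sketch - two selectable kernels
boot=/dev/sda
prompt
timeout=50

image=/vmlinuz            # known-good kernel without raid
    label=linux
    root=/dev/sda1
    read-only

image=/vmlinuz-raid       # new kernel with the raid patch
    label=raid
    root=/dev/sda1
    read-only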

Check dmesg | more to see if you have md drivers loaded and raid0 & 1 registered. Type cat /proc/mdstat to see if you have the new md driver. You should see 16 md devices instead of 4.

You will have to upgrade your raidtools. mdadd, /etc/mdtab and mdcreate are obsolete as well as a bunch of others. The new tools are raidstart, /etc/raidtab and mkraid. At this point the documentation is well out of date.

ftp://ftp.kernel.org/pub/linux/daemons/raid/alpha/raidtools-19981005-B-0.90.tar.gz

Download to /usr/local/src then

 
tar -zxvf raidtools-19981005-B-0.90.tar.gz
This will make a new directory /usr/local/src/raidtools-0.90. Change to it and
./configure 
Again, I can't remember if I had to then make and make install after this.

New Simpler Method for RAID0 and Kernel 2.0.35

Steps to make a raid0 array /dev/md0 using two scsi drives /dev/sda1 and /dev/sdb1:

  1. Partition /dev/sda1 and /dev/sdb1 so that they are identical in size (the same number of blocks).
  2. Set the partition type to 0xfd. This is used by the new kernel to autodetect the raid on startup (see the fdisk sketch after this list).
  3. Modify the /etc/raidtab file as per this example (the examples supplied with the raidtools are missing some important info):
     
    	# Striping example 
    	# /dev/md0 using /dev/sda1 and /dev/sdb1
    	
    	raiddev /dev/md0
    		raid-level		0
    		nr-raid-disks		2
    		persistent-superblocks	1
    		nr-spare-disks		0
    		chunk-size		32
    		device			/dev/sda1
    		raid-disk		0
    		device			/dev/sdb1
    		raid-disk		1
    
  4. Type mkraid -f /dev/md0 IMPORTANT - Read the error message and FOLLOW the directions explicitly!
  5. cat /proc/mdstat to see if the md device was made correctly
  6. Format the new raid device by mke2fs -c /dev/md0
  7. Make a directory to mount to (like /raidtest) just to test if it works.
  8. mount /dev/md0 /raidtest
  9. Copy a file to /raidtest to see if you can. If you have individual LEDs on your hard-drives, you should see both drives working.
  10. Reboot and see if the kernel automatically shuts down the raid device md0. You should see some messages scroll past the screen. (Anyone know how to read these shutdown messages, the way "dmesg" shows the boot messages?)
  11. Check that, on rebooting, the kernel autodetects the raid devices and /dev/md0 comes up as a raid0 array. If not, check that each of the previous steps has been followed, especially steps 2 and 4.
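
For step 2, the partition type is set with fdisk's t command; here is a rough sketch of the interactive session (the partition number is whatever you actually created):

# fdisk /dev/sda          (repeat for /dev/sdb)
#   Command: t            - change a partition's system id
#   Partition number: 1
#   Hex code: fd          - the raid autodetect type
#   Command: w            - write the table and exit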

New Method for RAID1 and Kernel 2.0.35

Steps to make a raid1 array /dev/md2 using two striped pairs /dev/md0 (/dev/sda1 + /dev/sdb1) and /dev/md1 (/dev/sdc1 + /dev/sdd1):

  1. Follow the steps above for /dev/md0 and duplicate them for /dev/md1. IMPORTANT - Do not mount or mke2fs /dev/md0 and /dev/md1 this time; that was only to test whether the raid0 worked!
  2. Modify the /etc/raidtab file as per this example (the examples supplied with the raidtools are missing some important info):
    	# Striping example 
    	# /dev/md0 using /dev/sda1 and /dev/sdb1
    	
    	raiddev /dev/md0
    		raid-level		0
    		nr-raid-disks		2
    		persistent-superblocks	1
    		nr-spare-disks		0
    		chunk-size		32
    		device			/dev/sda1
    		raid-disk		0
    		device			/dev/sdb1
    		raid-disk		1
    
    	# /dev/md1 using /dev/sdc1 and /dev/sdd1
    	
    	raiddev /dev/md1
    		raid-level		0
    		nr-raid-disks		2
    		persistent-superblocks	1
    		nr-spare-disks		0
    		chunk-size		32
    		device			/dev/sdc1
    		raid-disk		0
    		device			/dev/sdd1
    		raid-disk		1
    
    	# Mirror example 
    	# /dev/md2 using /dev/md0 and /dev/md1
    	
    	raiddev /dev/md2
    		raid-level		1
    		nr-raid-disks		2
    		persistent-superblocks	1
    		nr-spare-disks		0
    		chunk-size		32
    		device			/dev/md0
    		raid-disk		0
    		device			/dev/md1
    		raid-disk		1
    
  3. Type "mkraid -f /dev/md2" IMPORTANT - Read the error message and FOLLOW the directions explicitly! This step will take a while as the disks are synced together (over 30 min)
  4. cat /proc/mdstat to see if the md devices were made correctly
  5. Format the new raid device by mke2fs -c /dev/md2
  6. Make a directory to mount to (like /raidtest_mirror)
  7. mount /dev/md2 /raidtest_mirror
  8. Copy a file to /raidtest_mirror to see if you can. If you have individual LEDs on your hard-drives, you should see all drives working.
  9. Add raidstart /dev/md2 to your /etc/rc.d/rc.S file just before fsck -a. A good place is right after swapon -a (see the sketch after this list). At this time, the kernel does not autodetect raid1; this will be added in the next patch.
  10. Modify /etc/fstab to mount /dev/md2 as /raidtest_mirror.
    	/dev/md2	/raidtest_mirror	ext2	defaults	1	1
    
  11. Reboot and see if the kernel automatically shuts down the raid devices md0, md1 and md2. You should see some messages scroll past the screen. (Anyone know how to read these shutdown messages, the way "dmesg" shows the boot messages?)
  12. Check that, on rebooting, the kernel autodetects /dev/md0 and /dev/md1 and both come up as raid0 arrays, and that /dev/md2 comes up (via the raidstart line) as a raid1 array.
  13. cat /proc/mdstat to see if the md devices were made correctly.
You should now have a raid1-over-raid0 array running.
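
For step 9, the relevant fragment of /etc/rc.d/rc.S would look something like this sketch (raidstart reads /etc/raidtab to find the array; the surrounding lines are just context and will differ on your system):

swapon -a                     # enable swap partitions
/sbin/raidstart /dev/md2      # start the raid1 mirror before the filesystem check
fsck -a                       # check filesystems, including /dev/md2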

Other resources that you may want to look at if you run into trouble:

  1. The linux raid archives: https://www.linuxhq.com/lnxlists/linux-raid/
  2. Post a news message to comp.os.linux.setup
  3. Search www.dejanews.com - archive site of the past 5 years of news postings
  4. Absolutely last, if you are really stuck, e-mail the Linux RAID mailing list. To send an enquiry, e-mail linux-raid@vger.rutgers.edu.
    To join the kernel RAID list, e-mail majordomo@vger.rutgers.edu and put "subscribe linux-raid" in the body of the message.
  5. Don't e-mail me, everything I know is recorded here!


Copyright © 1998, Eugene Blanchard
Published in Issue 35 of Linux Gazette, December 1998

