The RAID patches can be applied to kernels v2.0.29 and higher, and to 2.1.26 through 2.1.62 (2.1.x versions newer than this come with the RAID code built in). Please avoid kernel 2.0.30; it has serious memory-management, TCP/IP masquerading and ISDN problems (earlier and later versions should be OK). Mirroring-over-striping is supported, as are other combinations: RAID-1, -4 and -5 can be put on top of other RAID-1, -4 or -5 devices, or over the linear or striped personalities. Linear and striping over RAID-1, -4 or -5 are not supported.
Please note that many of the 2.1.x series development kernels have problems, and thus the latest RAID patches from the ALPHA directory at http://ftp.kernel.org/pub/linux/daemons/raid/ need to be applied.
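Roughly, applying the patch and rebuilding goes as follows; this is only a sketch, the patch file name below is illustrative, and whether you need -p0 or -p1 depends on how the particular patch was rolled:

    cd /usr/src/linux
    # patch file name is an example only; use the one matching your kernel
    patch -p1 < /usr/src/raid-patch-for-your-kernel
    make menuconfig      # enable the RAID personalities under the multiple-devices (md) block-device options
    make dep
    make zImage
    make modules
    make modules_install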
Hard-drive errors (which typically occur when a disk is failing or has failed) are written to the syslog message daemon. You need to configure a system-log-checking utility (such as 'logcheck') to extract the important messages from the chaff of other system activity. I don't know of any package that will keep statistics or show any 'frequency of intermittent failures' report.
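As a rough example (the log path depends on your syslogd configuration, and the exact message text varies by driver, so treat the patterns below as assumptions to be adjusted), something like this can be run periodically to pull disk-error lines out of the noise:

    # path and patterns are assumptions; adjust for your syslog setup and drivers
    egrep -i 'i/o error|scsi.*error' /var/log/messages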
You can view the status of RAID by performing a cat /proc/mdstat. You can fiddle with IDE parameters (at the risk of making your system unusable) with hdparm; you can run IDE diagnostics with ide-smart. Benchmarks include bonnie, among others. Volume management can be done with the LVM tools. But there is no complete, unified, graphical suite that I know of.
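A few typical invocations (the device name /dev/hda is only an example):

    cat /proc/mdstat            # show the state of all md devices
    hdparm -t /dev/hda          # time buffered reads from an IDE drive
    hdparm -c1 -d1 /dev/hda     # enable 32-bit I/O and DMA (this is the part that can make a system unusable)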
Manufacturer: DPT
Model Number: PM3334UW/2 (two-channel "SmartRAID IV")
Number of disks, total size, raid-config: Two RAID-5 groups, one on each SCSI channel, each array consisting of nine non-hot-swappable 9 GB disk drives. The ninth drive on each RAID group is designated as a "hot spare". One channel also includes a Quantum DLT 7000 backup tape device.
On-controller memory: 64 MB, as four non-parity, non-ECC, non-EDO 60 ns 16 MB single-sided 72-pin SIMMs
Months in use: 10 months in heavy use.
OS kernel version, vendor and vendor version: 2.0.32, RedHat Linux 5.0
Use (news spool, file server, web server?): File server (directories for developers)
Support (1=bad, 5=excellent, or n/a didn't try support): 3
Performance (1=very dissatisfied, 5=highly satisfied, or n/a): 4
Reliability (1=terrible, 5=great, or n/a no opinion): 4
Installation (1=hard, 5=very easy; includes s/w install issues): 3
Overall satisfaction (1 to 5): 4
Comments:
Regarding DPT support:
Try DPT's troubleshooting web pages first.
DPT's support staff does respond to e-mail, typically within
one to two working days, and they do sometimes come up with
interesting suggestions for work-arounds to try.
But in my admittedly limited experience with DPT support staff
as an end-user, unless you're truly stuck you're more likely
to find a work-around to your problems before they do.
Regarding DPT documentation: The SmartRAID IV User's Manual is better than some documentation I've seen, but like most documentation it's nearly useless if you encounter a problem. The documentation does not appear to be completely accurate as regards hot-spare allocation. And unsurprisingly, the printed documentation does not cover Linux.

Regarding DPT PM3334UW/2 installation: The following combinations of SCSI adapters and motherboards did not work for us:

Symptoms of non-working combinations may include the Windows-based DPT Storage Manager application reporting "Unable to connect to DPT Engine" or "There are no DPT HBAs on the local machine".

Regarding the DPT Storage Manager application: The Windows-based DPT Storage Manager application version 1.P6 must have all "options" installed, or it cannot run. Some variant of this application is required in order to build RAID groups. The DPT Storage Manager application is dangerous: if you click on the wrong thing, the application may immediately wipe out a RAID group, without confirmation and without hope of recovery. If you are adding a RAID group, you are advised to physically disconnect any other RAID groups on which you do not plan to operate, until you have finished working with the Storage Manager application. There is no Linux version of the Storage Manager application (or any other DPT run-time goodies) available at present.

Regarding Michael Neuffer's version 2.70a eata_dma.o driver for Linux: The eata_dma driver does appear to work, with the following minor problems:

Miscellaneous issues: If a hot spare is available, experiments appear to show that it is not possible to detect when the hot spare has been deployed automatically as the result of a drive failure. If a hot spare is not available, then an audible alarm sounds (earsplittingly) when a drive fails.
Author (your name and email, or anonymous): Jerry Sweet ([email protected])
Date: November 10, 1998
Manufacturer: DPT
Model Number: 3334UW
Number of disks, total size, raid-config: 3 x 9 GB => 17 GB (RAID 5)
On-controller memory: 64 MB parity, non-ECC
Months in use: 3 months, 2 weeks in heavy use
OS kernel version, vendor and vendor version: 2.0.30, RedHat Linux 4.2
Use (news spool, file server, web server?): File server (home directories)
Support (1=bad, 5=excellent, or n/a didn't try support): n/a
Performance (1=very dissatisfied, 5=highly satisfied, or n/a): 4
Reliability (1=terrible, 5=great, or n/a no opinion): 4
Installation (1=hard, 5=very easy; includes s/w install issues): 3
Overall satisfaction (1 to 5): 4
Comments:
Works nicely, and installation was easy enough in DOS; they even have a Linux icon included now. What I would really benefit from is dynamic partitioning a la AIX, but that is a file-system matter as well.
If the kernel crashes on mkfs.ext2 right after boot, try generating some traffic on the disk (dd if=/dev/sdb of=/dev/null bs=512 count=100) before making the file system. (Thanks Mike!) (Ed. note: this is a well-known Linux 2.0.30 bug; try using 2.0.29 instead.)
Author (your name and email, or anonymous): Oskari Jääskeläinen ([email protected])
Date: October 1997
Linux 2.1.58 with an Adaptec 2940 UW card, two IBM DCAS drives and the DiLog 2XFR:

              -------Sequential Output-------- ---Sequential Input--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block---
              K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU
               8392 99.0 13533 61.2  5961 48.9  8124 96.4 15433 54.3

Same conditions, one drive only:

              -------Sequential Output-------- ---Sequential Input--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block---
              K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU
               6242 72.2  7248 32.4  3491 25.1  7356 84.2  7864 25.2
The following are comparisons of hardware and software RAID performance. The test machine is a dual-P2, 300MHz, with 512MB RAM, a BusLogic Ultra-wide SCSI controller, a DPT 3334UW SmartRAID IV controller w/64MB cache, and a bunch of Seagate Barracuda 4G wide-SCSI disks.
These are very impressive figures, highlighting the strength of software RAID!
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU

(DPT hardware RAID5, 3 disks)
DPT3x4G  1000  1914 20.0  1985  2.8  1704  6.5  5559 86.7 12857 15.6  97.1  1.8

(Linux soft RAID5, 3 disks)
SOF3x4G  1000  7312 76.2 10908 15.5  5757 20.2  5434 86.4 14728 19.9  69.3  1.5

(DPT hardware RAID5, 6 disks)
DPT6x4G  1000  2246 23.4  2371  3.4  1890  7.1  5610 87.3  9381 10.9 112.1  1.9

(Linux soft RAID5, 6 disks)
SOF6x4G  1000  7530 76.8 16991 32.0  7861 39.9  5763 90.7 23246 49.6 145.4  3.7

(I didn't test DPT RAID5 w/8 disks because the disks kept failing, even though it was the exact same SCSI chain as the soft RAID5, which returned no errors; please interpolate!)

(Linux soft RAID5, 8 disks)
SOF8x4G  1000  7642 77.2 17649 33.0  8207 41.5  5755 90.6 22958 48.3 160.5  3.7

(Linux soft RAID0, 8 disks)
SOF8x4G  1000  8506 86.1 27122 54.2 11086 58.9  6077 95.9 27436 62.9 185.3  4.9
Here's the output of the Bonnie program for a DPT 2144 UW with 16 MB of cache and three 9 GB disks in a RAID-5 setup. The machine is a dual-processor Pentium Pro running Linux 2.0.32. For comparison, the Bonnie results for the IDE drive on that machine are also given, along with some hardware-RAID figures for a Mylex controller on a DEC OSF/1 machine (KSPAC) with a RAID array of twelve 9 GB disks. (Note that the test size is rather small at 100 MB, so it measures memory performance as well as disk performance.)
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
           MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU   Machine
          100  3277 32.0  6325 23.5  2627 18.3  4818 44.8 59697 88.0  575.9 16.3  IDE
          100  9210 96.8  1613  5.9   717  5.8  3797 36.1 90931 96.8 4648.2 159.2 DPT RAID
          100  5384 42.3  5780 18.7  5287 42.1 12438 87.2 62193 83.9 4983.0 65.5  Mylex RAID
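For reference, figures like the above come from the classic Bonnie benchmark. A minimal invocation looks like the following; the mount points, test sizes and machine label are only illustrative, not the reviewers' exact command lines:

    bonnie -d /mnt/raid -s 1000 -m SOF6x4G   # 1000 MB test file on the RAID file system, labelled for the report
    bonnie -d /tmp -s 100                    # 100 MB test; small enough to be partly served from RAM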