# problem (?) with promise fasttrak tx4 + 2*WD400BB

## tacki

Hi there,

I have a Promise FastTrak100 TX4 controller (yep, the one with four channels) and two Western Digital WD400BB disks attached to IDE1 and IDE3.

/dev/hde is the first disk, /dev/hdi the second... I created a software RAID and everything works perfectly. Today I checked hdparm on both disks and got a strange result:

```
# hdparm -tT /dev/hde

/dev/hde:
 Timing buffer-cache reads:   128 MB in  0.86 seconds = 149.54 MB/sec
 Timing buffered disk reads:  64 MB in  2.37 seconds =  27.04 MB/sec

# hdparm -tT /dev/hdi

/dev/hdi:
 Timing buffer-cache reads:   128 MB in  0.86 seconds = 149.03 MB/sec
 Timing buffered disk reads:  64 MB in  1.39 seconds =  45.98 MB/sec

# hdparm -tT /dev/md0

/dev/md0:
 Timing buffer-cache reads:   128 MB in  0.85 seconds = 149.88 MB/sec
 Timing buffered disk reads:  64 MB in  1.20 seconds =  53.13 MB/sec
```

Why is the buffered disk read of the first disk so damn slow? 

I also noticed some /var/log/messages entries when the disks are under heavy load:

```
Jun 29 17:29:57 maggie kernel: hde: dma_intr: bad DMA status (dma_stat=35)
Jun 29 17:29:57 maggie kernel: hde: dma_intr: status=0x50 { DriveReady SeekComplete }
Jun 29 17:29:59 maggie kernel: hdi: dma_intr: bad DMA status (dma_stat=35)
Jun 29 17:29:59 maggie kernel: hdi: dma_intr: status=0x50 { DriveReady SeekComplete }
```

Can anyone explain what this means? 

thnx

----------

## delta407

Hmm... it seems your two drives are either different hardware or were detected with different settings. Compare the drives using:

```
# hdparm /dev/hde /dev/hdi
```

You may have to emerge hdparm first, but this will show you what the kernel thinks it's doing with your disks.

----------

## rommel

What kernel are you using, and did you enable all the options under the Promise section when you configured your kernel? You know what's strange... the Western Digital is supposed to be slow, yet one drive shows you're getting 45 MB/s, which is at least 8 MB/s above what it should be, while the other is off by about the same amount in the other direction.

----------

## tacki

hdparm /dev/hde /dev/hdi shows that both drives are configured exactly the same:

```
/dev/hde:
 multcount    = 16 (on)
 I/O support  =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 77545/16/63, sectors = 78165360, start = 0
 busstate     =  1 (on)
 acoustic     =  0 (128=quiet ... 254=fast)

/dev/hdi:
 multcount    = 16 (on)
 I/O support  =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 77545/16/63, sectors = 78165360, start = 0
 busstate     =  1 (on)
 acoustic     =  0 (128=quiet ... 254=fast)
```

I'm using the 2.4.19-gentoo-r5 kernel and activated both Promise features (Special UDMA, Special FastTrak).
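For reference, those two features correspond to config options under the IDE section; a sketch, assuming the standard 2.4-series option names (verify against your own .config):

```
# Assumed 2.4-series .config entries for the Promise controller:
CONFIG_BLK_DEV_PDC202XX=y   # Promise PDC202xx chipset support
CONFIG_PDC202XX_BURST=y     # "Special UDMA Feature"
CONFIG_PDC202XX_FORCE=y     # "Special FastTrak Feature"
```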

----------

## rommel

What is the actual partition layout like? On mine I have three SCSI drives: sda1 is /boot, sdb1 is swap, and sdc1 is also swap (each of these is 512 MB); the remainder of each drive holds an extended partition with one logical partition each (sda5, sdb5, and sdc5), and those make up my /dev/md0. Try running hdparm on a partition that is not part of the RAID, for comparison, if that's possible. Do you have a non-RAID partition on each of the drives?

----------

## delta407

Try booting off the install CD and running hdparm -Tt on both disks... then swap the jumpers/cables/whatever and try again. That would tell you whether it's the drive, the controller, or something else...

----------

## tacki

I bought two additional WD400BBs.

After some searching and testing, I found a small difference between those four disks:

```
root@maggie tacki # cat /proc/ide/hde/model
WDC WD400BB-00CXA0
root@maggie tacki # cat /proc/ide/hdg/model
WDC WD400BB-00CAA0
root@maggie tacki # cat /proc/ide/hdi/model
WDC WD400BB-00CAA0
root@maggie tacki # cat /proc/ide/hdk/model
WDC WD400BB-00CAA0
```

well...

I created a RAID5 and got terrible results (28 MB/s buffered disk reads).

----------

## Forge

I have four WD400BBs on a FastTrak TX4 (stay far the hell away from i8** P4 mobos with this card). I get ~45 MB/s on both disks on chip #1 and around 30 MB/s on both disks on chip #2, and horrible performance if a RAID uses both chips at once. I worked around this GLARING GAPING FLAW in Promise's TX4 by putting one RAID0 on each chip and not sharing any RAID across the two.

Card: (wince now, most bb's don't do ASCII art well) (edit: somewhat repaired)

---------------------

|`IDE3````IDE1`|

|`IDE4````IDE2`|

|`CHIP2``CHIP1`|

|`````DEC*`````|

----------------------

* The Intel/DEC PCI-to-PCI bridge (THE PROBLEM)

The current kernel driver doesn't set up DMA correctly on chip #2 and its connectors. The versions that did set DMA up right caused other nasty, nasty errors. I'd pester Andre Hedrick and Arjan van de Ven (very nicely), mention you have the card, and just wait for better support.

The other issue, that hit your RAID5, is the DEC/Intel PCI-to-PCI bridge. Do not build RAIDs that use both chips at once. You have been warned.
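A sketch of that workaround with the 2.4-era raidtools /etc/raidtab, assuming (from tacki's device list) that hde/hdg sit on chip #1 and hdi/hdk on chip #2; the single data partition per disk is hypothetical, so adjust to your own layout before running mkraid:

```
# /etc/raidtab sketch: one RAID0 per Promise chip, no array spans both chips
raiddev /dev/md0            # array on chip #1 only (hde, hdg)
    raid-level              0
    nr-raid-disks           2
    chunk-size              32
    persistent-superblock   1
    device                  /dev/hde1
    raid-disk               0
    device                  /dev/hdg1
    raid-disk               1

raiddev /dev/md1            # array on chip #2 only (hdi, hdk)
    raid-level              0
    nr-raid-disks           2
    chunk-size              32
    persistent-superblock   1
    device                  /dev/hdi1
    raid-disk               0
    device                  /dev/hdk1
    raid-disk               1
```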

The kernel ataraid driver often gives about the same performance as md RAID, and with a different and more entertaining set of errata/bugs. Think about it.

----------

## delta407

 *Forge wrote:*   

> (wince now, most bb's don't do ASCII art well) (edit: somewhat repaired)
> 
> ---------------------
> 
> |`IDE3````IDE1`|
> ...

 

You could have put it in a [ code ] block...

----------

## rommel

Well, he has hopefully answered the issue, "CODE BLOCK" or not... try what Forge is saying, tacki. It's worth a shot and simple enough, although a single array won't really make use of more than two channels of a 4-channel card... that kinda bites. I had terrible results with SMP and SCSI lsr RAID... on a single-processor machine the results were better, -T giving up 297 MB/s and -t at around 80 MB/s... I'd like to see it higher though... lol

edit: hey tacki, just for the hell of it, set up two RAID0 arrays using both chips and see if the differences reported in the drives make any difference in RAID performance... would be interesting to note, I guess... but maybe a lot of trouble.

----------

## Forge

Apparently someone heard my whining.  :Smile: 

----------

