# DMA Problems with VIA VT82C586A/B/VT82C686/A/B/VT823x/A/C

## hifi

Hello, 

I've encountered problems with heavy disc load on my system. 

dmesg output: 

```
hda: dma_timer_expiry: dma status == 0x20
hda: DMA timeout retry
hda: timeout waiting for DMA
hda: status error: status=0x58 { DriveReady SeekComplete DataRequest }
ide: failed opcode was: unknown
hda: drive not ready for command
hda: status error: status=0x50 { DriveReady SeekComplete }
ide: failed opcode was: unknown
hda: no DRQ after issuing MULTWRITE
hda: status error: status=0x50 { DriveReady SeekComplete }
ide: failed opcode was: unknown
hda: no DRQ after issuing MULTWRITE
hda: status timeout: status=0xd0 { Busy }
ide: failed opcode was: unknown
hda: no DRQ after issuing MULTWRITE
ide0: reset: success
```

Yes, I know this looks like a hardware problem, but I'm quite sure it isn't: I tried other disks and got the same problems, and the same disks work fine in other machines. The problems also first occurred after a kernel update. 

My system is:

```
Linux vdr 2.6.12-gentoo-r6 #7 Mon Aug 22 11:56:52 CEST 2005 i686 VIA Nehemiah CentaurHauls GNU/Linux
```

It's a VIA EPIA M10000 or thereabouts.

This is my kernel config: 

http://www.sbox.tugraz.at/home/h/hifi/.config

Maybe one of you gurus knows a solution to my problem.  :Wink: 

cu Soon Robert

----------

## hifi

So I've done some testing, and my hardware seems to be definitely OK. 

Absolutely no problems with a Knoppix CD running kernel 2.4: DMA is enabled and I could copy a 10 GB file at about 30 MB/s. 

FreeBSD doesn't seem to have problems either. 

But 2.6.12-gentoo-r9 and 2.6.13 vanilla both show the same problem. 

Anyone here who could help?

Maybe there are some things I should deactivate?

cu Robert

----------

## hifi

Hi there,

I did some testing yesterday, and by coincidence I realized that the DMA errors only occur if speedfreqd is running. 

I will do some testing to check whether the problem is longhaul/speedfreq or the via82cxx driver. 

You will be informed.

----------

## nic0000

 *hifi wrote:*   

> I will do some testing to check whether the problem is longhaul/speedfreq or the via82cxx driver. 
> 
> You will be informed.

 

I have problems with my M1000 too.

Have you solved it yet?

----------

## marjag

Hi 

I have a SP8000 board and its totally impossible to use the longhaul driver. Whenever I try to run it I get hangings and strange problems. I guess you can remove the longahaul module and try without it to see if the problems disappear.

----------

## nic0000

 *marjag wrote:*   

>  I suggest you remove the longhaul module and try without it, to see if the problems disappear.

 

I removed the longhaul driver, then ACPI, then the whole power management subsystem from the kernel, without any result.

I have massive DMA problems on /dev/hd[c,d].

Syslog:

```
Apr  5 03:46:11 via-epia hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Apr  5 03:46:11 via-epia hdc: dma_intr: error=0x84 { DriveStatusError BadCRC }
Apr  5 03:46:11 via-epia ide: failed opcode was: unknown
Apr  5 03:46:13 via-epia hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Apr  5 03:46:13 via-epia hdc: dma_intr: error=0x84 { DriveStatusError BadCRC }
Apr  5 03:46:13 via-epia ide: failed opcode was: unknown
Apr  5 03:46:27 via-epia hdc: dma_intr: status=0xd0 { Busy }
Apr  5 03:46:27 via-epia ide: failed opcode was: unknown
Apr  5 03:46:27 via-epia ide1: reset: success
Apr  5 03:46:43 via-epia hdc: dma_intr: status=0x70 { DriveReady DeviceFault SeekComplete }
Apr  5 03:46:43 via-epia ide: failed opcode was: unknown
Apr  5 03:46:43 via-epia hdc: DMA disabled
Apr  5 03:46:43 via-epia ide1: reset: success
```

It ends with DMA being disabled on this drive. 

But I need the secondary IDE channel for my RAID 1. Without it the M10000 is useless for me.

Thanks for answering, but I have already tried your tip.

PS

I have updated my BIOS to beta1 and beta2:

http://www.viaarena.com/Guide/dmatest.bin

http://www.viaarena.com/Guide/i010t117.bin

but it did not help.

----------

## agent_jdh

You could try a couple of things; I've got an old server box with a 686A chipset and these seemed to help:

Recompile your kernel with the "Use multi-mode by default" option enabled - this helped with a Maxtor drive that kept giving me DriveReady/SeekComplete errors.

It looks like DMA mode is being disabled after an IDE bus reset.  You can use hdparm (and add it to your default runlevel if you've not already done so) with the -k1 (or -K1, I can't recall - I'm on SATA here at the moment) option to retain your settings after a bus reset.  Edit /etc/conf.d/hdparm to set per-drive options.
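In /etc/conf.d/hdparm that might look something like this (drive names and the -k vs. -K choice are assumptions; check `man hdparm` on your box):

```shell
# /etc/conf.d/hdparm -- Gentoo per-drive hdparm options
# -d1  enable DMA
# -k1  keep these settings over an IDE bus reset
hda_args="-d1 -k1"
hdc_args="-d1 -k1"
```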

_however_

This

```
Apr  5 03:46:11 via-epia hdc: dma_intr: error=0x84 { DriveStatusError BadCRC }
Apr  5 03:46:11 via-epia ide: failed opcode was: unknown
```

and this

```
Apr  5 03:46:43 via-epia hdc: dma_intr: status=0x70 { DriveReady DeviceFault SeekComplete }
Apr  5 03:46:43 via-epia ide: failed opcode was: unknown
```

do not look good.  It looks like that's the reason for your bus reset.  I'd get the manufacturer's diagnostic utility for whatever drive you have and run a thorough test on your secondary master hard drive.  What make is it?  It looks like that unit has some serious problems.

EDIT - Shouldn't have to add, but back everything up ASAP, serious problems tend to be terminal w.r.t. hard drives.

EDIT #2 - Do you have a different cable you can try?  Or at least disconnect and reconnect the drive to ensure the cables are properly seated?  Trouble with DMA transfers might be a cable issue, but the CRC fail is still a concern, because that is all done internally in the drive.

----------

## nic0000

Thanks a lot for the fast answer, agent_jdh,

but the problems I'm having are definitely VIA EPIA specific.

I have used different drives, different cables, different kernels, different distros...

On the official site the DMA problem is known, but they just say: 

"update your BIOS, this solves the problem".

In many forums I read: "I've waited 3 years for a way to solve this problem. VIA ignores us."

Some people don't use a second drive, so they can ignore the problem or never see it, but I cannot.

I bought this stuff to replace my old home server, because it was too slow (450 MHz) and supports only 120 GB drives on UDMA33. 

I want to have my $HOME on a RAID 1 and export it via NFS to my workstations.

I am so frustrated  :Sad: 

----------

## agent_jdh

 *nic0000 wrote:*   

> Thanks a lot for the fast answer, agent_jdh,
> 
> but the problems I'm having are definitely VIA EPIA specific.
> 
> I have used different drives, different cables, different kernels, different distros...
> ...

 

But this problem only happens on /dev/hdc?  No problems with Pri. Master /dev/hda? You could try slaving your 2nd drive to hda to make it /dev/hdb.  It could just be that the DMA issues are with the secondary controller.

A workaround would obviously be to buy a cheap PCI IDE controller, not ideal I know, but it would be cheaper than buying a new motherboard.

ps, 450MHz and ATA/33 are plenty fast enough for an nfs server (assuming you use 100Mbit ethernet and not gigabit).  Also, newer Linux kernels can work around the 120GB BIOS limit - as I said, I've got an old VIA board with a 600MHz P3 on it as my server, and I recently upgraded one of the drives to a 200GB one.  The BIOS only 'sees' it as a 120GB drive, but once Linux boots, the full 200GB is available to me.

----------

## nic0000

 *agent_jdh wrote:*   

> But this problem only happens on /dev/hdc?  No problems with Pri. Master /dev/hda?

 

I am not sure, but I installed Gentoo on this drive without noticing any problems. I installed onto a RAID1/LVM2 combo, but set the hdc drive to "missing", so I only noticed the problems later. It is a good suggestion, though; I will test it.
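For reference, this is roughly how a RAID1 gets created degraded with one member set to "missing" (a sketch; the device names and partitions are illustrative, not taken from my actual setup):

```shell
# Create a two-disk RAID1 with only one member present;
# the literal word "missing" reserves the slot for the second drive
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda3 missing

# Later, add the second drive's partition to complete the mirror
mdadm --add /dev/md0 /dev/hdc3
```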

 *agent_jdh wrote:*   

> You could try slaving your 2nd drive to hda to make it /dev/hdb.  It could just be that the DMA issues are with the secondary controller.

 

I think you're right, but that doesn't help me. I need two IDE channels for a RAID: 

1) performance

2) safety - when a drive dies on a bus, it can take down the second drive on the same cable too.

 *agent_jdh wrote:*   

> A workaround would obviously be to buy a cheap PCI IDE controller, not ideal I know, but it would be cheaper than buying a new motherboard.

 

The cost is not so important; my time is dearer.  :Wink: 

I have an IDE controller, but I can also use it in my old server, which has more PCI/EIDE slots, for other stuff ;-)

 *agent_jdh wrote:*   

> ps, 450MHz and ATA/33 are plenty fast enough for an nfs server (assuming you use 100MBit ethernet and not gigabit) - also, newer linux kernels are capable of working around the 120GB limit - I've got (as I said) an old VIA board with a 600MHz P3 on it as my server, and I recently upgraded one of the drives to a 200GB one - the bios only 'sees' it as a 120GB drive, but once linux boots, the full 200GB is available to me.

 

Yes, I know about the performance issues with NFS, but I won't have more than 2 workstations on my network. The old server also ran a DB and mldonkey, so performance was not so good when copying large files; KDE would then freeze for a while on my workstations, which was not really fine.

About the 120 GB limit: my motherboard freezes early in the boot process when the drive is bigger than 120 GB. 

I experimented with hdx=stroke, and then had this "great" idea to save time and buy modern hardware with fast DDR memory and many other features.
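For anyone else hitting the BIOS size limit: the `hdx=stroke` override is passed on the kernel command line, for example in a GRUB legacy config (paths and partition here are examples only):

```shell
# /boot/grub/grub.conf -- append hdX=stroke to the kernel line
# to make the IDE driver use the drive's full native capacity
# instead of what the BIOS reports
kernel /boot/vmlinuz root=/dev/hda3 hda=stroke
```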

Now I'm standing in the same place as before *damn*

I think I'll buy a 1-1.2 GHz Celeron or a Mobile Athlon 4 with a nice small motherboard.

Thanks for your help  :Wink: 

----------

## durilka

Just in case this is still an open question, or other people need it:

I have a similar problem, and yes, it's DMA. The only thing I've found is to disable DMA for the drive in trouble, i.e. hdparm -d0 /dev/hdd in my case. From googling, I guess the root of the problem is that we all have hyper-modern preemptive (whatever that means  :Wink: ) kernels and our old drives simply cannot work that fast. 

So edit your /etc/conf.d/hdparm and put -d0 in there for your old CD-ROM.
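That could look like this in /etc/conf.d/hdparm (the drive name is an example; use the device from your own logs):

```shell
# /etc/conf.d/hdparm -- turn DMA off only for the troublesome drive
hdd_args="-d0"
```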

----------

