# Disk activity spikes the load

## spupy

I'm not sure if this is how it is supposed to be. Extensive disk activity (updatedb, copying big/many files, etc.) brings the load up to 7-8, sometimes even above 11. What could be the reason? Please, any ideas? ;_;

Some info:

Kernel version: 2.6.24.

Filesystem: ext3.

I don't know what other info might be needed.

----------

## Hypnos

In what units are you reporting these "load" figures?  CPU %?

If so, 11% CPU load doesn't seem bad for heavy disk activity.

If you want to explore this further, try this howto.
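Whatever the metric turns out to be, the kernel keeps a cumulative iowait counter that separates time spent waiting on the disk from real CPU work. A minimal sketch (field layout per proc(5); the "cpu" line is user nice system idle iowait ...):

```shell
# Print the fraction of all CPU time spent in iowait since boot.
# Fields of the "cpu" line: user nice system idle iowait irq softirq ...
awk '/^cpu /{tot=0; for(i=2;i<=NF;i++) tot+=$i; print $6/tot}' /proc/stat
```

A large fraction here while updatedb runs would point at disk wait rather than CPU.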

----------

## spupy

 *Hypnos wrote:*   

> In what units are you reporting these "load" figures?  CPU %?

 

Load as in the load average reported by uptime and htop.  A load over 1 (on a single-core machine) means processes are queuing to run. So 8 is VERY BAD.
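One wrinkle worth noting: on Linux the load average also counts tasks in uninterruptible sleep (state D, almost always blocked on disk I/O), not just runnable ones, so heavy I/O can push the load to 8 even while the CPU sits mostly idle. A quick way to see who is blocked:

```shell
# List tasks currently in uninterruptible sleep (state "D");
# Linux counts these in the load average alongside runnable tasks.
ps -eo stat,pid,comm | awk '$1 ~ /^D/ {print $2, $3}'

# The raw 1-, 5- and 15-minute load averages:
cut -d' ' -f1-3 /proc/loadavg
```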

----------

## Hypnos

Ah, you mean this.

That is bad -- I wouldn't have believed it!  That means your computer is practically unusable during disk I/O.

How long has this behavior been going on?  Has it been this way since you built your last kernel?  Please post your kernel .config ...

----------

## Hu

To provide the kernel configuration and other potentially relevant information, please provide the output of:

```
sfdisk -l | nl ; nl /etc/fstab ; nl /proc/mounts ; zgrep -E '^[^#]' /proc/config.gz
```

----------

## HeissFuss

Also, post 'hdparm -I' output for the device in question.  It may be that DMA isn't on (if this is a PATA device).
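For reference, a minimal check might look like this (the device name /dev/hda is an assumption; substitute the actual disk, and run as root):

```shell
# On a PATA drive, "using_dma = 1 (on)" is what you want to see:
hdparm -d /dev/hda

# -I lists the supported UDMA modes; the active one is starred:
hdparm -I /dev/hda | grep -i dma
```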

----------

## spupy

 *Hypnos wrote:*   

> Ah, you mean this.
> 
> That is bad -- I wouldn't have believed it!  That means your computer is practically unusable during disk I/O.
> 
> How long has this behavior been going on?  Has it been this way since you built your last kernel?  Please post your kernel .config ...

 

It is possible, although I haven't changed my kernel config in quite a while.

 *Hu wrote:*   

> To provide the kernel configuration and other potentially relevant information, please provide the output of sfdisk -l | nl ; nl /etc/fstab ; nl /proc/mounts ; zgrep -E '^[^#]' /proc/config.gz.

 

http://pastebin.com/m7450bee3

 *HeissFuss wrote:*   

> Also, 'hdparm -I' from the device in question.  It may be that DMA isn't on (if this is a PATA device).

 

http://pastebin.com/m308acc2d

----------

## Hypnos

How full are your partitions?  I.e., what is the output of "df -hl" ?

If a partition gets very full (> 90% ?) performance really drops ...

----------

## spupy

 *Hypnos wrote:*   

> How full are your partitions?  I.e., what is the output of "df -hl" ?
> 
> If a partition gets very full (> 90% ?) performance really drops ...

 

Looks kind of full?

```
$ df -lh

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda1              15G   14G  715M  95% /

udev                   10M   76K   10M   1% /dev

/dev/sda3              75G   63G  9.9G  87% /home

shm                   949M     0  949M   0% /dev/shm
```

----------

## Hypnos

Well, I would create about 2GB of free space in "/" and see how things go ...
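To find candidates to delete, something like this (a sketch; `sort -h` needs a reasonably recent GNU coreutils, and the distfiles path is the usual Gentoo default):

```shell
# Largest directories on the root filesystem only (-x stays on
# one filesystem), biggest first:
du -xh / 2>/dev/null | sort -rh | head -n 15

# Old source tarballs are a common space hog on Gentoo:
du -sh /usr/portage/distfiles
```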

----------

## spupy

By the way, it is possible that this all started after I upgraded from 512MB to 2GB RAM.

----------

## Hypnos

 *spupy wrote:*   

> By the way, it is possible that this all started after I upgraded from 512MB to 2GB RAM.

 

That's easy to test -- go back to 512MB and see how it performs.
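Rather than physically pulling a DIMM, you can cap the memory the kernel sees for one boot with the mem= parameter (GRUB legacy syntax shown; the kernel image path and root device are assumptions):

```
# /boot/grub/grub.conf -- temporary entry for the test
title Gentoo (512MB test)
root (hd0,0)
kernel /boot/vmlinuz-2.6.24 root=/dev/sda1 mem=512M
```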

----------

## richard.scott

Are you on a 32bit Gentoo install or 64bit?

In my experience disk performance sucks under 64bit   :Crying or Very sad: 

----------

## Hypnos

 *richard.scott wrote:*   

> Are you on a 32bit Gentoo install or 64bit?
> 
> In my experience disk performance sucks under 64bit  

 

I've never heard of this problem, so it's probably an issue with your setup.

Perhaps you have the same problem as the OP here:

* Are your partitions full?

* How much RAM?

----------

## richard.scott

lol, so you've not seen this thread then:

AMD64 system slow/unresponsive during disk access...

I'm on a VIA chipset tho, not Nvidia.

My partitions aren't full:

```
# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/md2               73G   23G   50G  32% /

udev                   10M  160K  9.9M   2% /dev

/dev/md0               92M   17M   70M  20% /boot

shm                   215M     0  215M   0% /dev/shm

```

I have 1GB of RAM and the same amount of swap space.

I have a 32bit install of Gentoo on a different pair of disks (of the exact same model and brand) and it runs like a dream in the same chassis!

I've never figured out what it is.

----------

## Hypnos

Well, it's not a problem with 64-bit itself (since it works flawlessly for most); it's a problem with your setup when you happen to run 64-bit.  I understand that it affects many people, but that's still quite a minority.

So the variables:

[X] Full Partitions

[ ] RAM

[ ] 64-bit

The last would require 32-bit and 64-bit LiveCDs.

----------

## richard.scott

Yes, my install is 64bit and I have 512MB of RAM.... 7MB is free and 0MB of swap space used  :Smile: 

The machine doesn't do much, as it's a storage server.

I've just tried to re-compile my kernel and I get this as my load average:

```
12:35:57 up 30 min,  2 users,  load average: 2.24, 2.24, 1.93
```

I have heard that switching off "optimise for size" in the kernel may help, so I'm trying that.
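For reference, the option in question should be CONFIG_CC_OPTIMIZE_FOR_SIZE ("Optimize for size", i.e. compiling the kernel with -Os); switched off, it shows up in .config as:

```
# General setup  --->
#   [ ] Optimize for size
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
```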

Rich.

----------

## richard.scott

Using the same config, my 2.6.25-hardened-r13 64bit system runs really smoothly.

My HDs don't gurgle like they do with the 2.6.28 kernels; they are almost silent.

----------

## Hypnos

Well, good luck bisecting the kernel commits to find the regression  :Smile: 

Poking around on Google, it seems that BIOS settings having to do with the disk controller can affect performance.  For example, some have an option to set "Performance" or "Bypass", the former causing problems that sound like yours.

----------

## tgR10

Change your I/O scheduler to anticipatory; CFQ didn't work well for me either on x86_64.

```
# CONFIG_IOSCHED_DEADLINE is not set

# CONFIG_IOSCHED_CFQ is not set

CONFIG_DEFAULT_IOSCHED="anticipatory"
```
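If the other schedulers are still compiled in, you can also switch per disk at runtime instead of rebuilding (the sda path is an assumption; repeat for each disk):

```shell
# The active scheduler for sda is shown in brackets:
cat /sys/block/sda/queue/scheduler

# Switch this queue to anticipatory on the fly (as root):
echo anticipatory > /sys/block/sda/queue/scheduler
```

To make it the default for every disk without touching .config, boot with elevator=as on the kernel command line.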

----------

## richard.scott

I'd read that somewhere before and set it to anticipatory.

I'm just re-compiling now with cfq and deadline removed entirely to see if it helps.

----------

## szczerb

Works for me, thanks  :Smile:  But CFQ used to work fine for me on amd64 from .25 or .26 (that's when I did my amd64 install) up to probably .28.

----------

