# [SOLVED] GSATA vs. MDADM + RAID performance issue

## vitoriung

Hi,

I got a new machine: an i7-920 on a Gigabyte GA-EX58-UD5 motherboard, 12 GB RAM, and 2x 500 GB WD SATA drives in RAID0 connected to the GSATA2 controller.

This PC is meant to run QEMU-KVM guests.

I installed just a base x64 system, kernel 2.6.33, without a graphics environment, and after installing the first Windows 2003 guest I found very poor performance.

The guest randomly gets stuck for a few seconds, then runs fine for a few moments, so I started investigating the IO performance.

Running vmstat gives the following output:

```

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0     12    142    144   2169    0    0     0   246 10176 24845  4  3 92  2
 2  1     12    141    144   2169    0    0     0    25 10549 25325  4  3 92  1
 0  6     12    141    144   2169    0    0     0    52 8144 17052 10  1 66 22
 1  7     12    141    144   2169    0    0     0    20 3875 7761  1  1 75 24
 1  6     12    141    144   2169    0    0     0     5 4719 9940  1  1 72 26
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 0  6     12    140    145   2169    0    0     0     0 3982 7701  1  1 73 24
 1  6     12    140    145   2169    0    0     0    31 3908 7669  1  2 64 34
 0  6     12    140    145   2169    0    0     0     0 4102 7364  2  2 61 35
 0  7     12    140    145   2169    0    0     0    11 3967 7795  1  2 59 38
 1  8     12    141    145   2169    0    0     0    25 8282 15407  7  2 48 43
 1  8     12    141    145   2169    0    0     0    18 5441 7613 13  1 50 37
 1  8     12    141    145   2169    0    0     0   111 5611 7996 13  1 49 38
 1  9     12    141    145   2169    0    0     0     4 5490 7788 13  1 37 49
 2  9     12    141    145   2169    0    0     0     0 5443 7592 13  1 37 49
 1  8     12    141    145   2169    0    0     0    25 7081 11611 12  1 40 47

```

Watching the running guest, the stalls typically come when column b in procs (the number of processes in uninterruptible sleep) gets higher than 0.

Running iostat -x 5 during an HDD performance test on the guest gives the following:

```

[Guest is running doing nothing]

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          10.98    0.00    1.00    0.15    0.00   87.87

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    5.80     0.00    36.80     6.34     0.01    2.76   1.66   0.96
dm-0              0.00     0.00    0.00    5.00     0.00    33.60     6.72     0.01    3.16   1.88   0.94
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

[Write test (goes up to)]:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.91    0.00    1.88   10.47    0.00   83.74

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda              89.80     1.80   67.40 1303.40  5017.60 166335.20   125.00     0.85    0.62   0.58  79.98
dm-0              0.00     0.00  158.00 1304.60  5056.00 166309.60   117.17     0.96    0.66   0.55  80.14
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

[Read and random R/W]

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.12    0.00    0.02   22.99    0.00   76.88

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.20    0.40    1.00    12.80    14.40    19.43     2.19 1041.00 714.29 100.00
dm-0              0.00     0.00    0.80    0.80    25.60    16.00    26.00     2.46  910.88 625.00 100.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.53    0.00    0.17   21.92    0.00   75.38

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     1.00   46.20   38.40  1478.40  1195.20    31.60     2.44   40.89  11.81  99.88
dm-0              0.00     0.00   45.80   39.00  1465.60  1193.60    31.36     2.53   45.14  11.78  99.88
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

```

The whole iostat test is here: http://pastebin.com/QvJvKFZy

I wanted to test with smartctl instead, but cannot enable it:

```

/etc/init.d/smartd start

messages:
May 19 14:27:59 kvm1srv smartd[7608]: smartd version 5.38 [x86_64-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
May 19 14:27:59 kvm1srv smartd[7608]: Home page is http://smartmontools.sourceforge.net/
May 19 14:27:59 kvm1srv smartd[7608]: Opened configuration file /etc/smartd.conf
May 19 14:27:59 kvm1srv smartd[7608]: Drive: DEVICESCAN, implied '-a' Directive on line 23 of file /etc/smartd.conf
May 19 14:27:59 kvm1srv smartd[7608]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
May 19 14:27:59 kvm1srv smartd[7608]: Problem creating device name scan list
May 19 14:27:59 kvm1srv smartd[7608]: Device: /dev/sda, opened
May 19 14:27:59 kvm1srv smartd[7608]: Device /dev/sda: using '-d sat' for ATA disk behind SAT layer.
May 19 14:27:59 kvm1srv smartd[7608]: Device: /dev/sda, opened
May 19 14:27:59 kvm1srv smartd[7608]: Device: /dev/sda, not found in smartd database.
May 19 14:27:59 kvm1srv smartd[7608]: Device: /dev/sda, can't monitor Current Pending Sector count - no Attribute 197
May 19 14:27:59 kvm1srv smartd[7608]: Device: /dev/sda, can't monitor Offline Uncorrectable Sector count  - no Attribute 198
May 19 14:27:59 kvm1srv smartd[7608]: Device: /dev/sda, appears to lack SMART Self-Test log; disabling -l selftest (override with -T permissive Directive)
May 19 14:27:59 kvm1srv smartd[7608]: Device: /dev/sda, appears to lack SMART Error log; disabling -l error (override with -T permissive Directive)
May 19 14:27:59 kvm1srv smartd[7608]: Device: /dev/sda, is SMART capable. Adding to "monitor" list.
May 19 14:27:59 kvm1srv smartd[7608]: Monitoring 0 ATA and 1 SCSI devices
May 19 14:27:59 kvm1srv smartd[7617]: smartd has fork()ed into background mode. New PID=7617.
May 19 14:27:59 kvm1srv smartd[7617]: file /var/run/smartd.pid written containing PID 7617

# smartctl -i /dev/sda
smartctl version 5.38 [x86_64-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Device Model:     Performance
Serial Number:    7HTXHBD0ERM9NWCXY0XA
Firmware Version: 0953
User Capacity:    1,000,056,291,328 bytes
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   7
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Wed May 19 16:48:55 2010 BST
SMART support is: Available - device has SMART capability.
SMART support is: Disabled

# smartctl -s on /dev/sda
smartctl version 5.38 [x86_64-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF ENABLE/DISABLE COMMANDS SECTION ===
SMART Enabled.

# smartctl -a -d ata /dev/sda
smartctl version 5.38 [x86_64-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Device Model:     Performance
Serial Number:    7HTXHBD0ERM9NWCXY0XA
Firmware Version: 0953
User Capacity:    1,000,056,291,328 bytes
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   7
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Wed May 19 16:50:17 2010 BST
SMART support is: Available - device has SMART capability.
SMART support is: Disabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Total time to complete Offline 
data collection:                 (   0) seconds.
Offline data collection
capabilities:                    (0x00) Offline data collection not supported.
SMART capabilities:            (0x0000) Automatic saving of SMART data
                                        is not implemented.
Error logging capability:        (0x00) Error logging NOT supported.
                                        No General Purpose Logging support.

SMART Attributes Data Structure revision number: 5
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
194 Temperature_Celsius     0x0022   044   050   000    Old_age   Always       -       44 (0 21 0 0)

Warning: device does not support Error Logging
Error SMART Error Log Read failed: Input/output error
Smartctl: SMART Error Log Read Failed
Warning: device does not support Self Test Logging
Error SMART Error Self-Test Log Read failed: Input/output error
Smartctl: SMART Self Test Log Read Failed
Device does not support Selective Self Tests/Logging

```

An hdparm test gives this:

```

# hdparm -i /dev/sda

/dev/sda:
 Model=Performance, FwRev=0953, SerialNo=7HTXHBD0ERM9NWCXY0XA
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=1, MultSect=1
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=1953234944
 IORDY=on/off, tPIO={min:240,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4 
 DMA modes:  mdma0 mdma1 mdma2 
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: Unspecified:  ATA/ATAPI-2,3,4,5,6,7

# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   17824 MB in  2.00 seconds = 8922.68 MB/sec
 Timing buffered disk reads:  408 MB in  3.00 seconds = 135.79 MB/sec
```

I am kind of stuck now; I don't know if this is a hardware configuration issue or whether something is messed up in my system.

Any help is highly appreciated.

Last edited by vitoriung on Mon Sep 13, 2010 9:26 am; edited 2 times in total

----------

## erik258

Hi, 

Is there a filesystem that the guest's files reside on?  If so, is it ext4?  Ext4 seems to enjoy stuttering.

----------

## krinn

I'm sorry, I don't really understand what your problem is there.

The root of your problem is that your VMs are slow, and you think it comes from the HDD?

I don't think 135 MB/s is slow for an HDD.

I would check the VM system / the scheduler or the preemption model, things like that.

----------

## vitoriung

As I am learning KT troubleshooting technique in my job now, I will Separate and Clarify a bit if you don't mind  :Smile:  :

 *erik258 wrote:*   

> Hi, 
> 
> Is there a filesystem that the guest's files reside on?  If so, is it ext4?  Ext4 seems to enjoy stuttering.

 

Yes, it's ext4, so I will try converting to ext3(?)

 *krinn wrote:*   

> 
> 
> i'm sorry i don't really know what's your problem there.
> 
> The root key of your problem is that your vm are slow, and you think it comes from the hdd ?
> ...

 

Well, I always thought it was a VM issue, and I had a lot of communication with the KVM people, who helped me find out that the cache='none' switch for the virtual hard drive helps reduce the stuttering. However, nobody else has had a similar issue, and nobody could reproduce it, even on much slower HW, so today I am convinced that it can't be just a VM issue; it must at least be a combination with some issue in the system.
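For anyone hitting the same thing: the cache='none' switch sits on the disk's driver element in the libvirt domain XML. A minimal sketch (the image path and target device are made-up placeholders, not my actual config):

```xml
<disk type='file' device='disk'>
  <!-- cache='none' makes QEMU bypass the host page cache for this disk -->
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/win2k3.img'/>
  <target dev='hda' bus='ide'/>
</disk>
```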

It is definitely not normal: when the issue hits, the whole system cannot be used. I can only type in the terminal, but running any command, e.g. man ps, gets stuck waiting for a few seconds along with the virtual machine, and then both continue working at exactly the same moment.

I would like to test whether the issue exists even without any VM running, but as I can't monitor the disks with smartd, I am not sure what to use to put my RAID under constant pressure while monitoring the situation.
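A crude way to do that, sketched with placeholder sizes and a /tmp path (on the real run I would point TESTFILE at the RAID and use a much larger count), is to hammer the array with dd while watching iostat from a second terminal:

```shell
# Write a test file with an fsync at the end so the data really reaches the
# disks; bs/count here are small illustrative values.
TESTFILE=/tmp/raid_stress.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=16 conv=fsync 2>&1 | tail -n 1
# In another terminal meanwhile:  iostat -x 5   (watch await and %util climb)
stat -c '%s bytes written' "$TESTFILE"
```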

Then I could maybe decide which from the next steps would be the right one:

- Check the VM / the scheduler or the preemption model (not sure how, but I will google)

- Change filesystem from ext4 to -?

- Install older kernel version

- Connect the RAID to the normal SATA ports (non-GSATA2) (doesn't look like it would help, as the hdparm results are satisfying)

- Change distro (the problem is that not many distros recognized all my HW; hopefully that has changed since February, when I first tried)

----------

## krinn

https://forums.gentoo.org/viewtopic-t-828644-highlight-.html

have a look here

----------

## vitoriung

Thanks krinn,

it looks like that guy has an issue with the 2.6.33 kernel inside the guest; I use it on the host, but it seems to have an influence on my case as well.

I went back to 2.6.31 and the results are significantly better, even though the issue has not gone away completely.

Another improvement came after I disabled the VIRTIO driver for the Windows 2k3 guest and rolled back to IDE; the issue is almost unnoticeable now, but it still exists.

I will try to get rid of ext4 to see if it helps the situation, but it will take me some time.

Thanks again

----------

## Mad Merlin

You said you had 2x500G drives, but smartctl reports a single 1T drive. Are you using fake RAID? You'd be better off with mdadm RAID.

----------

## dmpogo

 *Mad Merlin wrote:*   

> You said you had 2x500G drives, but smartctl reports a single 1T drive. Are you using fake RAID? You'd be better off with mdadm RAID.

 

This does not make that much sense. Modern motherboards, like the one in question, support RAID 0, 1, 5, 10 and, as you can see yourself, present the array to the OS as a single disk, and do not require the OS to know about the RAID. Thus, two of the main distinctions the author makes between "fake" and "hardware" RAID are not there. So the difference between RAID done in hardware and RAID done in on-chip firmware is pretty blurred.

Last edited by dmpogo on Fri May 21, 2010 2:01 pm; edited 1 time in total

----------

## krinn

So is the performance: I went from 120 MB/s with software RAID on 2x SATA1 to 160 MB/s with hardware RAID, and my 2x SAS gives me nice performance even using only 2 disks.

Your page is kind of outdated.

----------

## Mad Merlin

 *dmpogo wrote:*   

>  *Mad Merlin wrote:*   You said you had 2x500G drives, but smartctl reports a single 1T drive. Are you using fake RAID? You'd be better off with mdadm RAID. 
> 
> This text does not have that much sense. Modern motherboards, like the one in question, support RAID 0,1,5,10 and, as you see yourself, present array to the OS as a single disk, and does not require OS to know about raid.  Thus, two of the main distinctions the author makes between "fake" and "hardware"  raids are not there.  So the difference whether raid is done in 'hardware' or in on-chip firmware is pretty much blurred.

 

No, it presents the array as an array, you must use device-mapper/dmraid and the correct drivers to access it. It cannot be accessed like a normal SATA drive.

----------

## vitoriung

 *Mad Merlin wrote:*   

>  *dmpogo wrote:*    *Mad Merlin wrote:*   You said you had 2x500G drives, but smartctl reports a single 1T drive. Are you using fake RAID? You'd be better off with mdadm RAID. 
> 
> This text does not have that much sense. Modern motherboards, like the one in question, support RAID 0,1,5,10 and, as you see yourself, present array to the OS as a single disk, and does not require OS to know about raid.  Thus, two of the main distinctions the author makes between "fake" and "hardware"  raids are not there.  So the difference whether raid is done in 'hardware' or in on-chip firmware is pretty much blurred. 
> 
> No, it presents the array as an array, you must use device-mapper/dmraid and the correct drivers to access it. It cannot be accessed like a normal SATA drive.

 

So the recommendation would be to use software RAID instead? I thought that the RAID supported by the mb in the BIOS would be the best solution, as it would present itself as a single SATA disk and could be used in exactly the same way. I was obviously very wrong...   :Rolling Eyes: 

After reading that article (and thank you for it, Mad Merlin), it's much clearer to me, so I understand that I am using fake RAID at the moment.

To use it properly, I would need dmraid and the proper drivers, and then I could use smartctl?

Now I am not sure what would be easier: dmraid, or setting up two single SATA disks in the BIOS and reinstalling the system. It sounds to me like the effort of rebuilding the whole system on software RAID would be the better option.

----------

## dmpogo

 *Mad Merlin wrote:*   

> 
> 
> No, it presents the array as an array, you must use device-mapper/dmraid and the correct drivers to access it. It cannot be accessed like a normal SATA drive.

 

OK, if it does not present itself as a SATA disk, it is indeed a critical distinction.

----------

## krinn

 *vitoriung wrote:*   

> 
> 
> Now I am not sure what will be easier, dmraid or rather setup two single SATA disks in BIOS and reinstall the system. Sounds to me like effort to rebuild whole system on software raid would be better.

 

Sadly, yes.

Software RAID removes the need for an initramfs and other boring stuff like that; it's otherwise the same, just easier to use and more flexible.

----------

## Mad Merlin

 *vitoriung wrote:*   

>  *Mad Merlin wrote:*    *dmpogo wrote:*    *Mad Merlin wrote:*   You said you had 2x500G drives, but smartctl reports a single 1T drive. Are you using fake RAID? You'd be better off with mdadm RAID. 
> 
> This text does not have that much sense. Modern motherboards, like the one in question, support RAID 0,1,5,10 and, as you see yourself, present array to the OS as a single disk, and does not require OS to know about raid.  Thus, two of the main distinctions the author makes between "fake" and "hardware"  raids are not there.  So the difference whether raid is done in 'hardware' or in on-chip firmware is pretty much blurred. 
> 
> No, it presents the array as an array, you must use device-mapper/dmraid and the correct drivers to access it. It cannot be accessed like a normal SATA drive. 
> ...

 

You probably won't be able to use smartctl with fake RAID at all, as SMART isn't normally exposed through fake RAID. I would recommend switching to mdadm RAID, as it will allow you to utilize smartctl and SMART as well as provide all the other benefits mentioned in this thread.
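For reference, the switch is only a handful of commands. Everything here is a sketch with placeholder device names (sdX1/sdY1, /dev/md0), prefixed with an echo guard so it just prints what it would do:

```shell
# Hypothetical two-disk RAID0 on the ICH10R ports; drop RUN=echo to actually
# execute (as root, and only after double-checking the device names).
RUN=echo
$RUN mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 /dev/sdX1 /dev/sdY1
$RUN mkfs.ext3 /dev/md0       # or whichever filesystem you settle on
$RUN mdadm --detail --scan    # append this output to /etc/mdadm.conf
# With md RAID, SMART talks to each physical disk directly:
$RUN smartctl -a /dev/sdX
```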

----------

## dmpogo

On a side note, how did you like the motherboard altogether ? I'm probably getting the same one.  In particular, why did you put drives on GSATA ports, and not on the ports off ICH10R controller ? Is Gigabyte controller better ? Also, as I understand, this board does not have real eSATA, just a backplane extension for 2 SATA ports ?

----------

## vitoriung

 *dmpogo wrote:*   

> On a side note, how did you like the motherboard altogether ? I'm probably getting the same one.  In particular, why did you put drives on GSATA ports, and not on the ports off ICH10R controller ? Is Gigabyte controller better ? Also, as I understand, this board does not have real eSATA, just a backplane extension for 2 SATA ports ?

 

It came from the vendor already connected to the GSATA ports; I haven't changed it. I have read more about it now.

The GSATA ports provide hardware RAID built on a JMicron chipset, and also provide the eSATA port on the rear I/O panel.

Even though it is HW RAID, many people say it's not very good, e.g. here http://www.xtremesystems.org/forums/showthread.php?t=235722, and recommend the ICH10R SATA ports instead.

So I will reconnect the drives to the ICH10R and make a software RAID instead. I have seen a few posts where people had serious trouble bringing their arrays back to life, so it's better to rely on Linux; the hardware is really more likely to fail than the system.

----------

## dmpogo

 *vitoriung wrote:*   

>  *dmpogo wrote:*   On a side note, how did you like the motherboard altogether ? I'm probably getting the same one.  In particular, why did you put drives on GSATA ports, and not on the ports off ICH10R controller ? Is Gigabyte controller better ? Also, as I understand, this board does not have real eSATA, just a backplane extension for 2 SATA ports ? 
> 
> That has come from the vendor connected to GSATA ports already I haven't changed it. I read more about it now.
> 
> GSATA ports provide Hardware RAID build on Jmicron chipset, also provides the E-SATA port on the rear I/O panel.
> ...

 

I did get the same motherboard, but I can't understand the role of the GSATA/JMicron set for eSATA either. The manual does not say which ports the external bracket should be connected to, and it seems it can be connected to an ICH10R port as well (not sure if one would be able to use RAID on the ICH10R for only a subset of devices). Notably, the Gigabyte site does not seem to mention eSATA for this board at all (in principle, eSATA has slightly different electrical parameters from regular SATA).

All in all, the JMicron chips provide SATA functionality, while the Gigabyte chip combines it with PATA. One difference is that this Gigabyte chipset sits on the PCIe bus, the one shared with the network chips, while the ICH10R ports are directly managed by the southbridge. So one would expect the GSATA ports to be a notch slower. They do come with some proprietary backup solution, which is perhaps why your manufacturer put everything there, but I don't think that is of relevance for us.

So I think I'm going to put both my drives, the DVD and the two external SATA ports on the ICH10R, and do software RAID on the pair of drives: RAID0 on the swap partition and a mirror on the rest (if that is possible).

----------

## dmpogo

 *vitoriung wrote:*   

> 
> 
> Even though it is a HW RAID many people says it's not very good eg. here http://www.xtremesystems.org/forums/showthread.php?t=235722 and recommend rather ICH10R SATA ports.
> 
> 

 

It's a bit worse than that. If we look at the board manual, we see that RAID array rebuilding is only initiated by the BIOS but is actually done 'in the operating system' (meaning Windows and the corresponding drivers).

In this sense, this on-board RAID is just useless.

----------

## dmpogo

 *vitoriung wrote:*   

> 
> 
> GSATA ports provide Hardware RAID build on Jmicron chipset, also provides the E-SATA port on the rear I/O panel.
> 
> Even though it is a HW RAID many people says it's not very good eg. here http://www.xtremesystems.org/forums/showthread.php?t=235722 and recommend rather ICH10R SATA ports.
> ...

 

Hm, I looked at the JMB322 chipset in more detail, and based on its description on JMicron site

 *Quote:*   

> 
> 
> http://www.jmicron.com/JMB322.html
> 
> 

 

it is actually capable of providing a real, hardware RAID:

 *Quote:*   

> 
> 
> RAID 
> 
> .Fully hardware-accelerated RAID Engine
> ...

 

It seems motherboard manufacturers are just not using it to full capacity?

----------

## jonnevers

This is tangential to the thread, but the use of RAID0 is questionable: yes, you get performance gains, but the increase in the chance of losing data is very high.

RAID in general, outside of perhaps RAID6, is probably not worth the effort at the small scale.
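To put a rough number on it: a 2-disk RAID0 is lost if either disk fails, so the failure chances compound as 1-(1-p)^n. A quick back-of-the-envelope (the 5% yearly failure rate is purely an illustrative guess):

```shell
# P(array lost) = 1 - (1 - p)^n for n independent disks in RAID0
awk 'BEGIN { p = 0.05; n = 2; printf "single disk: %.4f  raid0 pair: %.4f\n", p, 1 - (1 - p)^n }'
```

So a striped pair is roughly twice as likely to lose your data as a single drive.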

----------

## gerard27

Hi,

The Gigabyte GA-EX58-UD5 contains a JMB362 chip and not JMB322.

Gerard.

----------

## gerard27

Hi,

Sorry pushed submit by accident.

Gerard.

----------

## vitoriung

 *dmpogo wrote:*   

> 
> 
> All in all Jmicron chips provide SATA functionality, while Gigabyte chip combines it with PATA. One difference is that this Gigabyte chipset sits on PCIe bus, the one that is shared with network chips,  while  ICH10R ports are directly managed by SouthBridge chipset.  So one would exect GSATA ports to be a notch slower.  They do come with some probrietary backup solution, so that's why perhaps your manufacturer put everything there, but I don't think it is of relevance for us.
> 
> So I think I'm going to put both my drives, DVD and two external SATA ports on ICH10R, and do software raid on a pair of drive.   0 Raud on swap partition and mirror on the rest (if that is possible)

 

I want to try whether I can use dmraid before I swap to the ICH10R, to see any eventual performance improvement. But I don't know whether using a different driver means I have to rebuild the partitions; I hope not.

I am not sure about your idea of partitioning, even though I haven't figured out my own yet; but according to the Gentoo RAID guide:

 *Quote:*   

> Important: The partition you boot from must not be striped. It may not be raid-5 or raid-0.
> 
> Note: On the one hand, if you want extra stability, consider using raid-1 (or even raid-5) for your swap partition(s) so that a drive failure would not corrupt your swap space and crash applications that are using it. On the other hand, if you want extra performance, just let the kernel use distinct swap partitions as it does striping by default. 

 

I initially thought the best option could be RAID0 for the root filesystem, and then putting the rest (/home, /var, /opt) on the mirror, so I keep the application data safe while the system runs on the faster RAID0. Would it be dangerous when one HDD fails? I assume that if I did a weekly backup of the system onto the mirrored partition, I could simply restore it onto a recreated root partition.

I suppose the configuration in the Gentoo guide is intended for servers that you need online all the time... but that is not my case.

I would not be afraid to use two separate swap partitions and let the kernel do the striping either; if it crashes, it crashes completely, but it can be easily restored, or at least I hope so  :Wink: 

----------

## jonnevers

You cannot restore RAID0: if one drive fails, the data is gone.

If you need the option of rebuilding, use RAID1 or RAID5 (or 6, or 10, etc.).

----------

## dmpogo

 *gerard27 wrote:*   

> Hi,
> 
> The Gigabyte GA-EX58-UD5 contains a JMB362 chip and not JMB322.
> 
> Gerard.

 

Not according to Gigabyte site and manual

http://www.gigabyte.com/products/product-page.aspx?pid=2958#sp

Of course, the JMB322 supports only 2 ports and, therefore, only RAID 0, 1 and JBOD, which are pretty simple. Indeed, most of the processing (and CPU usage, if the RAID is in software) happens when parity needs to be calculated for parity-based levels.

Interestingly, the ASUS P6T has two SATA ports on one JMB322 directly, without an intermediate chip like the Gigabyte SATA. Their manual may be understood as saying that RAID on those two ports can be used directly from the BIOS; however, whether RAID1 reconstruction can be done automatically without the OS is unclear.

PS. You can always edit old posts if you committed anything by mistake.

Last edited by dmpogo on Fri May 28, 2010 4:04 pm; edited 2 times in total

----------

## dmpogo

 *vitoriung wrote:*   

 *Quote:*   

> Important: The partition you boot from must not be striped. It may not be raid-5 or raid-0.
> 
> Note: On the one hand, if you want extra stability, consider using raid-1 (or even raid-5) for your swap partition(s) so that a drive failure would not corrupt your swap space and crash applications that are using it. On the other hand, if you want extra performance, just let the kernel use distinct swap partitions as it does striping by default. 

 

Excellent point, thanks. It seems there is no need to RAID the swap; I forgot about that.

Overall, the idea is that with Linux software RAID, you RAID partitions, not disks.

----------

## erik258

I never came back to answer the ext4 performance question. 

I have always been a fan of reiser. Recently I decided to give ext4 a try on my netbook, which has a 250 GB Seagate drive inside. The ext4 filesystem stuttered, not incessantly but every 30 seconds or so when disk access was high. I switched back to reiser and things went as well as ever. I don't think I'll try ext4 again any time soon.

ext3 is old at this point, and development resources, it seems, have gone into ext4 rather than ext3 improvements. I would recommend staying away from the ext* filesystems in general; I like reiser for system files, which tend to be small, and xfs for filesystems with large files such as database storage and the like. I only use ext2 for boot partitions and other really small filesystems that don't benefit from journaling, and even for those I'm increasingly using reiser.

OK, now on to raid. 

Raid 5 sucks. Ask anyone who has a real need for high performance, busy servers and the like. Sure, it works, but it sucks. http://www.miracleas.com/BAARF/BAARF2.html - it's not just me, it turns out.

I highly recommend using raid 10 instead. Raid 10 performs well, provides redundancy, and the mdadm raid 10 implementation is very robust and goes beyond the typical 1+0 concept.

You can mix raid types on a set of 2 (or more) disks. For example, you could stripe your data partitions, mirror a backup partition, and (just for the sake of example) raid 5 another partition if you had more than 2 disks. Of course, you'll 'share' performance between all the RAIDs in that case, but if your disk contention is not high, that won't matter.

Finally, I've heard that RAID can benefit from partitions aligned to your stripe size.
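The arithmetic for that alignment is just rounding the partition's start sector up to a multiple of the chunk size. For example, with a hypothetical 64 KiB chunk (128 sectors of 512 bytes) and the classic 63-sector DOS offset:

```shell
# Round start sector 63 up to the next multiple of a 128-sector chunk
awk 'BEGIN { chunk = 128; start = 63; print int((start + chunk - 1) / chunk) * chunk }'   # -> 128
```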

----------

## gerard27

You're right dmpogo.

I got that info from "Alternate" a local supplier.

Gerard.

----------

## vitoriung

 *jonnevers wrote:*   

> you cannot restore RAID0. if one fails, the data is gone.
> 
> if you need the option of rebuilding; RAID1 or RAID5 (or 6 or 10,  etc).

 

I know I can't; that is why I would do a regular backup of the system onto a mirrored filesystem. However, I am not sure whether I can back up the system on the fly.

----------

## krinn

Real hardware RAID needs dedicated resources (CPU/memory/battery...) to work, so it implies a price, and most RAID cards cost around $400 or more. So anyone with a motherboard that has RAID on it, where the board costs less than $300, can simply remove any doubt: it must be fakeraid (unless the board manufacturer is a fool).

Some boards come with real hardware RAID on them, but generally they are highly priced and designed for servers.

It's funny, but generally the server boards that would need and might use plenty of HDDs come with some hardware RAID and few SATA ports, while the desktop ones that don't really need it come with a shitload of HDD connectors.

It's cool to have 15 SATA connectors, but it might be hard to find a "common" desktop user with a case that could handle 15 HDDs  :Very Happy: 

That's just a commercial thing: it looks good to have RAID onboard, so they created the fakeraid concept. And it was cool to have a board that could handle plenty of HDDs, so they keep adding SATA connectors everywhere  :Very Happy: 

I think it's the best way to know whether you have fake or hardware RAID; the price should never lie.

----------

## dmpogo

 *krinn wrote:*   

> Real hardware raid need dedicated resource (cpu/memory/battery...) to work,so it imply a price, and most raid card cost +~400$, so anyone with a m/b with a raid on it and that m/b have a price < 300$ can simply remove any doubt : must be fakeraid (except if the board manufacturer is fool).
> 
> Some board came with real hardware raid on them, but generally they are high price and design for servers.
> 
> It's funny but generally, servers board that would need and might use plenty of hdds came with some hardware raid and few sata and desktop ones that don't really need it came with a shitload of hdds connectors
> ...

 

I think you are a notch too negative.   Motherboard raid 5s are pretty useless by the reason you mention,  but raid0 and 1 on a pair of drives have its place in consumer market.   They are also simple, and as the discussion of Jmicron JMB322 chip shows, are implemented practically in hardware, completely on chip. Windows users also get with the motherboard a monitoring/'auto' rebuilding software, which pretty much gives them working stripping/mirroring out of the box,  and a way to copy a harddrive.  I don't think many people outside Linux/BSD community heard of, even less willing to configure,  software raids  :Smile: 

And counting connectors, it's not just HDDs that use them. I have 2 HDDs, a DVD drive, and two external SATA ports, so 5 out of 10 SATA ports are used. If I had just the 6 from the ICH southbridge, I would feel restricted.

----------

## vitoriung

Finally,

I finally got time to rebuild my RAID on the ICH10R ports using software RAID. I followed the Gentoo Software RAID guide completely, so I am using a combination of RAID1 and RAID0 with the ext3 filesystem (ext2 for certain partitions like /var/tmp).

There are some concerns about ReiserFS, and because Red Hat, who develops the virtualization software, uses ext3 by default, I decided to stick with ext3 as well.
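For reference, the array layout can be sketched like this (device names match my setup from later in the thread; which array holds which mount point is my guess, and the commands are printed rather than executed since they would need root and the real disks):

```shell
# Sketch only: mdadm create commands for a mixed RAID1/RAID0 layout.
# Printed instead of run, since they need root and the real disks.
printf '%s\n' \
  "mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # small mirrored array" \
  "mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  # mirrored system array" \
  "mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/sda4 /dev/sdb4  # striped VM storage"
```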

Performance seemed very good at the beginning. However, after a while, when I started more virtual machines, the stuttering of the guests came back. 

Every time it happens I can see the hard drive LED going crazy, so the first thing I did was check the system messages.

Because with this configuration I can use smartd, I now see this every 30 minutes:

```

Jun 11 12:30:45 kvm1srv smartd[16941]: Device: /dev/sdb, 6 Currently unreadable (pending) sectors

Jun 11 12:30:45 kvm1srv smartd[16941]: Device: /dev/sdb, 6 Offline uncorrectable sectors

```

So it looks like my hard drive is on its way out?   :Shocked: 

```

smartctl -a /dev/sdb

smartctl version 5.38 [x86_64-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen

Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===

Model Family:     Western Digital Caviar Second Generation Serial ATA family

Device Model:     WDC WD5000AAKS-00V1A0

Serial Number:    WD-WMAWF0686353

Firmware Version: 05.01D05

User Capacity:    500,107,862,016 bytes

Device is:        In smartctl database [for details use: -P show]

ATA Version is:   8

ATA Standard is:  Exact ATA specification draft version not indicated

Local Time is:    Fri Jun 11 13:56:16 2010 BST

SMART support is: Available - device has SMART capability.

SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===

SMART overall-health self-assessment test result: PASSED

General SMART Values:

Offline data collection status:  (0x82) Offline data collection activity

                                        was completed without error.

                                        Auto Offline Data Collection: Enabled.

Self-test execution status:      (   0) The previous self-test routine completed

                                        without error or no self-test has ever 

                                        been run.

Total time to complete Offline 

data collection:                 (8160) seconds.

Offline data collection

capabilities:                    (0x7b) SMART execute Offline immediate.

                                        Auto Offline data collection on/off support.

                                        Suspend Offline collection upon new

                                        command.

                                        Offline surface scan supported.

                                        Self-test supported.

                                        Conveyance Self-test supported.

                                        Selective Self-test supported.

SMART capabilities:            (0x0003) Saves SMART data before entering

                                        power-saving mode.

                                        Supports SMART auto save timer.

Error logging capability:        (0x01) Error logging supported.

                                        General Purpose Logging supported.

Short self-test routine 

recommended polling time:        (   2) minutes.

Extended self-test routine

recommended polling time:        (  97) minutes.

Conveyance self-test routine

recommended polling time:        (   5) minutes.

SCT capabilities:              (0x303f) SCT Status supported.

                                        SCT Feature Control supported.

                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16

Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       967

  3 Spin_Up_Time            0x0027   141   140   021    Pre-fail  Always       -       3925

  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       58

  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0

  7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0

  9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       3000

 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0

 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0

 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       56

192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       42

193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       15

194 Temperature_Celsius     0x0022   108   103   000    Old_age   Always       -       35

196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0

197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       6

198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       6

199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0

200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       4

SMART Error Log Version: 1

No Errors Logged

SMART Self-test log structure revision number 1

No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1

 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS

    1        0        0  Not_testing

    2        0        0  Not_testing

    3        0        0  Not_testing

    4        0        0  Not_testing

    5        0        0  Not_testing

Selective self-test flags (0x0):

  After scanning selected spans, do NOT read-scan remainder of disk.

If Selective self-test is pending on power-up, resume after 0 minute delay.

```

I did the same for sda and it looks fine.

This is my very first experience with smartd, so I am not sure what to do next. 

I should obviously replace the hard drive and clone it with dd. 

Hopefully it will work fine that way?

I'll have to look into it a little later because I'm a bit overwhelmed at the moment, but any advice would be helpful.
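One thing I could try first is a SMART self-test on the suspect drive (commands sketched rather than run, since they need root and the real disk; the 97 minutes comes from the "Extended self-test routine" line in the output above):

```shell
# Possible next step (sketched, not executed): queue an extended
# self-test on the suspect drive, then read the result log later.
printf '%s\n' \
  "smartctl -t long /dev/sdb      # extended self-test (~97 min on this drive)" \
  "smartctl -l selftest /dev/sdb  # check the self-test log afterwards"
```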

----------

## Mad Merlin

If you find that the Offline_Uncorrectable number is increasing over time, then you definitely want to replace the drive. Sometimes you'll get a burst of them in a short period and then nothing again for years; in that case they wouldn't pose (much of) a problem. Either way, replacing the drive is always the safer approach.

----------

## vitoriung

 *Mad Merlin wrote:*   

> If you find that the Offline_Uncorrectable number is increasing over time, then you definitely want to replace the drive. Sometimes you'll get a burst of them in a short period of time and then nothing again for years, in this case, they wouldn't pose (much of) a problem. Either way, replacing the drive is always the safer approach.

 

That actually did not make me happy, because I was hoping a hard drive change would resolve my stuttering problems. 

However, the issue may not only be on the hard drive side. I have been investigating all the running guests, assuming that maybe the low RAM on my Linux guests makes them use the disk more heavily. But there is nothing wrong when they run by themselves, without the Windows guest, so the problem seems to be there.

I have found the same problem for Windows-based guests that I've had before -> the IDE drive runs in PIO mode. Even though I use the workaround (uninstall the IDE controller and restart the computer), the issue comes back after the 3rd restart. 

I have just found http://winhlp.com/node/10 . Even though I'm talking about a different issue, if that helps I can mark this one as SOLVED.

----------

## Mad Merlin

 *vitoriung wrote:*   

>  *Mad Merlin wrote:*   If you find that the Offline_Uncorrectable number is increasing over time, then you definitely want to replace the drive. Sometimes you'll get a burst of them in a short period of time and then nothing again for years, in this case, they wouldn't pose (much of) a problem. Either way, replacing the drive is always the safer approach. 
> 
> That actually did not make me happy, because I was hoping the hard drive change will resolve my stuttering problems. 
> 
> However the issue must be on hard drive side. I have been investigating all running guests assuming maybe lower RAM on my Linux guest makes them use disk more extensively. But there is nothing wrong when they are running itself, without Windows guest so problem seems to be there.
> ...

 

I didn't mention the performance part of this: it's possible to get (long) hangs from the failing drive as it tries to read/write questionable sectors, and this can translate into stuttering on the host. You could try removing the failing drive from the RAID set and seeing whether the stuttering goes away. If it does, you should probably replace the drive; otherwise you can probably just add it back to the set (and let it resync).

Edit: Grrr, the forums were down briefly and ate my original post.

----------

## vitoriung

 *Mad Merlin wrote:*   

> 
> 
> I didn't mention the performance part of this, it's possible to get (long) hangs from the failing drive as it tries to read/write questionable sectors, and this can translated into stuttering on the host. You could try removing the failing drive from the RAID set and see if the stuttering goes away or not. If it does, you should probably replace the drive, otherwise you can probably just add it back to the set (and let it resync).

 

I hope replacing the drive will help, and I want to buy a third hard drive as well.

This is the first time I am using software RAID, so my experience with it is zero.

You say to let it "resync"; does that mean I don't need to do anything, just connect an unpartitioned drive and it will resync automatically? I guess not, even though that would be nice.

I also wonder, when I add the third drive, how difficult it will be to build it into the array, since I use a combination of RAID0 and RAID1, but I will hopefully figure that out.

EDIT: I actually wanted to ask whether it wouldn't be a better idea to migrate to RAID-5 once I have 3 hard drives?

----------

## Mad Merlin

 *vitoriung wrote:*   

>  *Mad Merlin wrote:*   
> 
> I didn't mention the performance part of this, it's possible to get (long) hangs from the failing drive as it tries to read/write questionable sectors, and this can translated into stuttering on the host. You could try removing the failing drive from the RAID set and see if the stuttering goes away or not. If it does, you should probably replace the drive, otherwise you can probably just add it back to the set (and let it resync). 
> 
> I hope replacing the drive will help and I want to buy a third hard drive as well.
> ...

 

For resyncing, there's a little bit you have to do; it goes something like this: http://docs.hp.com/en/5991-7402/ch21s04.html

The array stays fully online the whole time, and because SATA is hot-pluggable, you don't even need to power off or reboot to replace the drives.

You can't reshape or resize RAID 0 in place; you have to copy all the data off first, then recreate the larger array and copy the data back. Newer versions[1] of mdadm and the kernel support reshaping RAID 1 into RAID 5 directly.
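With a new enough mdadm, that RAID 1 to RAID 5 reshape goes roughly like this (a sketch with example device names, printed rather than executed since it needs root and real arrays):

```shell
# Sketch: convert a 2-disk RAID 1 into a 3-disk RAID 5 in place.
# Device names are examples; commands are printed, not run.
printf '%s\n' \
  "mdadm --grow /dev/md3 --level=5         # RAID 1 -> 2-disk RAID 5" \
  "mdadm /dev/md3 --add /dev/sdc3          # add the new disk as a spare" \
  "mdadm --grow /dev/md3 --raid-devices=3  # reshape across all three disks"
```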

As for RAID 1 vs RAID 5, the performance characteristics of the two are different. RAID 1 can service multiple readers in parallel (one per underlying disk), but RAID 5 will have faster sequential read speeds. Obviously, with RAID 5 you'll have n-1 drives' worth of space instead of 1 drive's worth. It's hard to say which is better in general, but if you don't need the extra space, I would opt for RAID 1 over RAID 5.
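The space difference is easy to see with the three 500 GB drives discussed in this thread:

```shell
# Usable capacity with three 500 GB drives (illustrative arithmetic).
drives=3; size_gb=500
echo "RAID 1 (all drives mirrored): ${size_gb} GB usable"
echo "RAID 5: $(( (drives - 1) * size_gb )) GB usable"
echo "RAID 0: $(( drives * size_gb )) GB usable"
```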

[1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=517731

----------

## dmpogo

 *Mad Merlin wrote:*   

> 
> 
> As for RAID 1 vs RAID 5, the performance characteristics are different for the two. RAID 1 can service multiple readers in parallel (one per underlying disk), but RAID 5 will have faster sequential read speeds. Obviously, with RAID 5 you'll have n-1 drives worth of space instead of 1 drive worth of space. It's hard to say which is better in general, but if you don't need the extra space, I would opt for RAID 1 over RAID 5.
> 
> 

 

One can also look into RAID10, even on just two disks.

----------

## Mad Merlin

 *dmpogo wrote:*   

>  *Mad Merlin wrote:*   
> 
> As for RAID 1 vs RAID 5, the performance characteristics are different for the two. RAID 1 can service multiple readers in parallel (one per underlying disk), but RAID 5 will have faster sequential read speeds. Obviously, with RAID 5 you'll have n-1 drives worth of space instead of 1 drive worth of space. It's hard to say which is better in general, but if you don't need the extra space, I would opt for RAID 1 over RAID 5.
> 
>  
> ...

 

Yes, RAID 10 is definitely preferable from a performance perspective, though I've never tried it with fewer than 4 disks. AFAIK, wouldn't 2-disk RAID 10 technically just be RAID 1? Though, come to think of it, you can pick different layouts with RAID 10 for better sequential read speeds...

----------

## dmpogo

 *Mad Merlin wrote:*   

>  *dmpogo wrote:*    *Mad Merlin wrote:*   
> 
> As for RAID 1 vs RAID 5, the performance characteristics are different for the two. RAID 1 can service multiple readers in parallel (one per underlying disk), but RAID 5 will have faster sequential read speeds. Obviously, with RAID 5 you'll have n-1 drives worth of space instead of 1 drive worth of space. It's hard to say which is better in general, but if you don't need the extra space, I would opt for RAID 1 over RAID 5.
> 
>  
> ...

 

That's right: with 2 disks, RAID10 differs from RAID1 in its layouts. RAID10,n2 is identical to RAID1, while RAID10,f2 can give RAID0 sequential read speeds, at the cost of slower writes and, it seems, some speed issues during recovery. RAID10,o2 does not seem to be that beneficial. Honestly, I was just setting up my 2 disks (on the same GSATA2 controller) and went for RAID1 after toying with the RAID10 idea for a bit.

Here are some, perhaps older, numbers for RAID10 on three (rather than 2) disks:

http://blog.jamponi.net/2007/12/some-raid10-performance-numbers.html
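For the record, the layout is just a --layout flag at creation time (sketch only, with example device names; printed rather than executed):

```shell
# Sketch: a two-disk RAID10 with the "far 2" layout vs the default "near 2".
printf '%s\n' \
  "mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1" \
  "mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=2 /dev/sda1 /dev/sdb1"
```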

----------

## vitoriung

 *Mad Merlin wrote:*   

> 
> 
> For resyncing, there's a little bit you have to do, it goes something like this: http://docs.hp.com/en/5991-7402/ch21s04.html
> 
> The array is fully online the whole time and due to the fact that SATA is hot pluggable, you don't even need to power off or reboot to replace the drives.
> ...

 

I just got the third disk and I am following that HP guide, but I have a problem with the RAID0 array (md4):

```
# mdadm /dev/md4 -f /dev/sdb4

mdadm: set /dev/sdb4 faulty in /dev/md4

# mdadm /dev/md4 -r /dev/sdb4

mdadm: hot remove failed for /dev/sdb4: Device or resource busy

# cat /proc/mdstat

Personalities : [raid0] [raid1] 

md1 : active raid1 sda1[0]

      88256 blocks [2/1] [U_]

      

md3 : active raid1 sda3[0]

      104864192 blocks [2/1] [U_]

      

md4 : active raid0 sdb4[1] sda4[0]

      764742016 blocks 64k chunks
```

I suppose the issue is that I have the partitions mounted; however, before I go ahead, one thing worries me.

What happens to the md4 array when I remove sdb4? Isn't it destroyed then? It can't work at half size, can it?

I have virtual machines there and no space to back them up anywhere in case I lose them. 

That guide works through the same steps on RAID0 with no issue, including a restart, which would suggest it should be possible, so I am confused now.

----------

## dmpogo

 *vitoriung wrote:*   

> 
> 
> What happens to md4 partition when I remove sdb4? Isn't that partition destroyed then? It can't work on half size, can it?
> 
> I have virtual machines there and don't have space to back them up anywhere in the case I would loose them. 
> ...

 

RAID0 is of course gone completely if you remove one of the drives.

----------

## vitoriung

 *dmpogo wrote:*   

>  *vitoriung wrote:*   
> 
> What happens to md4 partition when I remove sdb4? Isn't that partition destroyed then? It can't work on half size, can it?
> 
> I have virtual machines there and don't have space to back them up anywhere in the case I would loose them. 
> ...

 

How come they have 2 RAID0 partitions here, http://docs.hp.com/en/5991-7402/ch21s04.html , and just remove, restart, and add the partitions back, with no word about losing the partition or the data?

----------

## HeissFuss

 *vitoriung wrote:*   

>  *dmpogo wrote:*    *vitoriung wrote:*   
> 
> What happens to md4 partition when I remove sdb4? Isn't that partition destroyed then? It can't work on half size, can it?
> 
> I have virtual machines there and don't have space to back them up anywhere in the case I would loose them. 
> ...

 

That guide looks wrong... it's possible those commands may work, but I think you need to --stop the RAID-0 arrays before removing the drive. In any case, you're going to lose all the data on those RAID-0 arrays, so you may as well just delete and recreate them with the new drive.

The RAID-1 arrays will automatically start resyncing once you add the new partitions to them.
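Something like this (device names are guesses for your setup, and the commands are printed rather than executed, since the RAID-0 data is gone either way):

```shell
# Sketch: stop and recreate the RAID-0 array with the new drive,
# then add the new drive's partitions to the RAID-1 arrays.
printf '%s\n' \
  "mdadm --stop /dev/md4" \
  "mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/sda4 /dev/sdc4" \
  "mdadm /dev/md1 --add /dev/sdc1  # RAID-1 resyncs automatically" \
  "mdadm /dev/md3 --add /dev/sdc3"
```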

----------

## dmpogo

 *vitoriung wrote:*   

>  *dmpogo wrote:*    *vitoriung wrote:*   
> 
> What happens to md4 partition when I remove sdb4? Isn't that partition destroyed then? It can't work on half size, can it?
> 
> I have virtual machines there and don't have space to back them up anywhere in the case I would loose them. 
> ...

 

That just seems wrong. Their error was on a RAID1 partition (/dev/sdb1), and they seem to have gotten carried away, not noticing that the other partitions are on RAID0 (or tacitly assuming that you had backed up that data).

----------

## vitoriung

 *HeissFuss wrote:*   

>  *vitoriung wrote:*    *dmpogo wrote:*    *vitoriung wrote:*   
> 
> What happens to md4 partition when I remove sdb4? Isn't that partition destroyed then? It can't work on half size, can it?
> 
> I have virtual machines there and don't have space to back them up anywhere in the case I would loose them. 
> ...

 

What you're saying makes sense. I was naively hoping there could be a mechanism where all the data would be moved onto the first partition in the RAID (assuming the whole RAID0 array is less than 50% full), the second partition disconnected from the RAID, the new disk connected, a new partition created and attached to the RAID, and everything resynced back. I can hardly believe I can't do that without losing data, but now I understand why it makes sense.

What about using the dd command to clone the faulty disk to the new one? Will mdadm accept that it is a different disk, even though it's an exact copy? I am just trying to figure out the easiest solution for this.

----------

## vitoriung

 *vitoriung wrote:*   

> 
> 
> What about to use dd command, clone the faulty disk to the new one, will mdadm recognize that that is different disk even it's exactly the same one? I am just trying to figure out an easiest solution for this.

 

That worked well. I cloned the sdb4 partition to the new drive, and the whole md4 (RAID-0) stayed solid after a reboot, despite the fact that there were some faulty sectors on that partition. The rest of the partitions just resynced, because they are mirrored.
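For anyone else trying this, the clone was along these lines (device names from my setup; shown rather than run; conv=noerror keeps dd going past the unreadable sectors and sync pads them with zeros):

```shell
# Sketch: clone the failing partition onto the new drive's partition.
printf '%s\n' \
  "dd if=/dev/sdb4 of=/dev/sdc4 bs=1M conv=noerror,sync"
```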

So I have 2 working disks now, but the whole system s(t)ucks the same way as before. I am going to suspect my virtual machines at this point; I will have to run some tests to see whether the disks get overloaded even without the machines running...

----------

## vitoriung

I finally got this resolved. Well, I hope I have  :Smile: 

I figured out that my IO scheduler was not configured as CFQ by default. I also discovered a very useful program called iotop, which helped me identify the processes using too much IO.

It was one of the VM guests, running SUSE Linux with Novell OES and a GroupWise server. This machine was making the whole system unstable. Luckily I could use another great tool called ionice and give that machine "idle" (the lowest) priority. The result is that the VM somehow uses more CPU, but it still works fine, and the host system seems stable at the moment.
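For reference, checking the scheduler and using ionice look like this (the sysfs lines are shown only, since switching the scheduler needs root, and the disk name is an example):

```shell
# Shown only: check which IO scheduler is active (in brackets) and switch it.
printf '%s\n' \
  "cat /sys/block/sda/queue/scheduler" \
  "echo cfq > /sys/block/sda/queue/scheduler"

# ionice can be demonstrated directly: run a command in the idle
# scheduling class (-c 3), the priority I gave the misbehaving VM.
ionice -c 3 echo "running at idle IO priority"
```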

I finally managed to add a third hard drive and expand the RAID-0 array where the VMs reside. I got to these figures:

```
# hdparm -tT /dev/md4

/dev/md4:

 Timing cached reads:   16662 MB in  2.00 seconds = 8340.25 MB/sec

 Timing buffered disk reads:  926 MB in  3.00 seconds = 308.65 MB/sec
```

And that is with 7 VMs running at the moment; when nothing is running I get figures close to 9000 MB/s for cached and 350 MB/s for buffered reads. That's pretty good for this system, I suppose. The important thing is that the stuttering is finally down to a minimum.

So I hope it will finally be fine and I can leave this thread as SOLVED.

Anyway, I am glad I had to go through this issue; I've learnt so many new things: RAID with mdadm, configuring IO schedulers in the kernel, and IO-monitoring utilities like iotop and ionice.

----------

