# SATA questions

## fosstux

Hi!

I'm planning to buy a SATA hard disk and controller. Which controllers work well in Gentoo?

Please help.

Thanks.

----------

## AgentMascan

My onboard Highpoint HPT374 controller seems to work just fine.  The earliest kernel I've used with it is 2.6.7.

----------

## Crucis

My onboard Sil3112 works fine too, with 2.6.9.

----------

## pksings

My 3ware works fine on kernel 2.6.9. The advantage is that the driver is already part of the mainline kernel tree.

I could not get the HighPoint adapter driver to build correctly on 2.6.9.

The 3ware is a true hardware RAID adapter, whereas the HighPoint and SiS are hardware-assisted software RAID.

PK

----------

## jzono1

My Intel ICH5R controller works, and my Promise TX2 thingie works too; both work great.

The ICH5R one works even with old LiveCDs when set to legacy mode.

----------

## UB|K

You should look at the chipsets listed in the kernel config (list from a 2.6.8 kernel here):

```
gzcat /proc/config.gz | grep SATA

CONFIG_SCSI_SATA=y
# CONFIG_SCSI_SATA_SVW is not set
# CONFIG_SCSI_SATA_NV is not set
# CONFIG_SCSI_SATA_PROMISE is not set
# CONFIG_SCSI_SATA_SX4 is not set
# CONFIG_SCSI_SATA_SIL is not set
# CONFIG_SCSI_SATA_SIS is not set
CONFIG_SCSI_SATA_VIA=y
# CONFIG_SCSI_SATA_VITESSE is not set
```

For me, the VIA VT6420 works fine.

----------

## Phk

Sorry, this is not about the current thread, but I bet you can answer a simple question...

What device name does the SATA array (Sil3112) have after the kernel boots?

"/dev/hde"?

I have the root partition on RAID. What is the "root=" device name in grub?

"/dev/hde1"?

thanks.

----------

## fosstux

Normally SATA drives are seen as SCSI hard disks, so it should be /dev/sda...

Hope that helps.

Bye

----------

## lbrtuk

If you have enabled the libata drivers in the SCSI section of the kernel config, they should show up as SCSI drives. However, if you're using the deprecated IDE drivers, they'll appear as IDE drives.
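A quick way to see which naming scheme you ended up with is to parse /proc/partitions. This sketch runs against a hypothetical sample file so the awk line is clear; on a real box you would read /proc/partitions directly (the sample contents are an assumption for illustration):

```shell
# Sample /proc/partitions contents for a hypothetical box with two
# libata-driven SATA disks (hence sd* names; IDE-driver disks show as hd*)
cat > /tmp/partitions.sample <<'EOF'
major minor  #blocks  name

   8     0   80418240 sda
   8     1      40162 sda1
   8    16   80418240 sdb
EOF

# Whole disks are the entries whose names don't end in a digit
awk 'NR > 2 && $4 !~ /[0-9]$/ {print $4}' /tmp/partitions.sample
```

This prints `sda` and `sdb` for the sample above; partitions like sda1 are filtered out by the trailing-digit test.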

----------

## Phk

Errrrm... sda is the first SATA disk and sdb is the second!

So sdc would be the RAID?

I'm getting sooo confused...

----------

## yottabit

Promise S150 TX4 works fine for me. I'm using it to host a software mirror (RAID-1) and a software stripe (RAID-0). The native drives are enumerated as /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. (Then obviously the RAID devices are enumerated as /dev/md0...md3 according to my configuration.)

----------

## lbrtuk

 *Phk wrote:*   

> Errrrm.... sda is the first SATA disk and sdb is the second!
> 
> so sdc would be the raid?

 

"the raid"?

----------

## weingbz

 *yottabit wrote:*   

> Promise S150 TX4 works fine for me. I'm using it to host a software mirror (RAID-1) and a software stripe (RAID-0). The native drives are enumerated as /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. (Then obviously the RAID devices are enumerated as /dev/md0...md3 according to my configuration.)

 

I'm just curious: if they show up as SCSI drives, do hdparm and the smartmontools work? If not, how do you check that the drives are OK? Do SATA drives already have DMA turned on?

----------

## yottabit

From what I've read, SATA is required to auto-negotiate its best settings. (The reason PATA doesn't always do this is that its specifications have been pieced together since IDE was first invented.)

So there's no need to use hdparm. If you just want to check the health of a disk, use badblocks.
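For reference, a read-only badblocks scan looks like this. The scratch file below is just a stand-in so the invocation can be shown safely; on real hardware you would point it at a device such as /dev/sda (assumption: badblocks from e2fsprogs is installed):

```shell
# Create a small scratch file to scan; substitute a real device
# (e.g. /dev/sda) in actual use
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=4 2>/dev/null

# Default mode is a non-destructive read-only test.
# -s shows progress (stderr), -v is verbose; the numbers of any bad
# blocks found (none expected here) go to stdout
badblocks -sv /tmp/scratch.img
```

An empty stdout means no bad blocks were found.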

----------

## Phk

 *yottabit wrote:*   

> Promise S150 TX4 works fine for me. I'm using it to host a software mirror (RAID-1) and a software stripe (RAID-0). The native drives are enumerated as /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. (Then obviously the RAID devices are enumerated as /dev/md0...md3 according to my configuration.)

 

Yes, but what are the partition names? Would the first partition on md0 be "md0p1", for example?

Until now I've used device-mapper to define them and then create a new device... But can I do this at boot? Define and mount the root partition?

thanks

----------

## yottabit

If you have the proper RAID personalities compiled into the kernel (not as modules), the kernel will boot off RAID no problem. There are a few how-to references in the forums (and maybe in the Gentoo docs?) that describe how to boot from RAID.

I think the key is the persistent-superblock option. Anyhow, here are my configs.

```
hal root # fdisk -l /dev/sda

Disk /dev/sda: 82.3 GB, 82348277760 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           5       40162   fd  Linux raid autodetect
/dev/sda2               6          68      506047+  fd  Linux raid autodetect
/dev/sda3              69       10011    79867147+  fd  Linux raid autodetect

hal root # fdisk -l /dev/sdb

Disk /dev/sdb: 82.3 GB, 82348277760 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1           5       40131   fd  Linux raid autodetect
/dev/sdb2               6          68      506047+  fd  Linux raid autodetect
/dev/sdb3              69       10011    79867147+  fd  Linux raid autodetect

hal root # cat /etc/raidtab

# /boot (RAID 1)
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda1
raid-disk               0
device                  /dev/sdb1
raid-disk               1

# / (RAID 1)
raiddev                 /dev/md1
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda3
raid-disk               0
device                  /dev/sdb3
raid-disk               1

# swap (RAID 1)
raiddev                 /dev/md2
raid-level              1
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda2
raid-disk               0
device                  /dev/sdb2
raid-disk               1

hal root # cat /etc/fstab

# /etc/fstab: static file system information.
# $Header: /home/cvsroot/gentoo-src/rc-scripts/etc/fstab,v 1.14 2003/10/13 20:03:38 azarah Exp $
#
# noatime turns off atimes for increased performance (atimes normally aren't
# needed; notail increases performance of ReiserFS (at the expense of storage
# efficiency).  It's safe to drop the noatime options if you want and to
# switch between notail and tail freely.

# <fs>                  <mountpoint>    <type>          <opts>                  <dump/pass>
# NOTE: If your BOOT partition is ReiserFS, add the notail option to opts.
/dev/md0                /boot           ext2            noauto,noatime          0 1
/dev/md1                /               reiserfs        noatime,user_xattr      0 2
/dev/md2                none            swap            sw                      0 0
#/dev/cdroms/cdrom0     /mnt/cdrom      iso9660         noauto,ro               0 0
#/dev/fd0               /mnt/floppy     auto            noauto                  0 0

# NOTE: The next line is critical for boot!
none                    /proc           proc            defaults                0 0
```

----------

## Phk

Errrrm... That's a RAID-1 setup!!!

And you haven't posted your grub.conf (which is my main problem!)  :Razz: 

The problem is that RAID-1 is mirroring, so you can force grub to simply boot from the first disk.

However, with RAID-0, the kernel needs both disks working.

1) How does the kernel know the chunk size (for example) if it is in /etc/raidtab, and that file is on the RAID (root) partition?

2) What's your "root=" option in grub.conf?

Thanks, and sorry once more for asking so many questions...  :Embarassed: 

----------

## Tiger683

Create a separate /boot partition, RAID-1 or non-RAID, for grub/the MBR.

Then just mount it as /boot in fstab.

/ can then be /dev/md0, and your grub.conf should have "root=/dev/md0" on the kernel line.

Remember to compile your RAID level, dm-mod and SATA drivers into the kernel.
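That checklist would roughly correspond to a .config like this (a hedged sketch for a 2.6-era kernel; the Sil3112 controller and RAID-0 level are assumptions, so swap in your own chipset and personality options):

```
CONFIG_SCSI=y              # SCSI core, built in (=y, never =m)
CONFIG_SCSI_SATA=y         # libata SATA support
CONFIG_SCSI_SATA_SIL=y     # Silicon Image 3112 driver (your chipset here)
CONFIG_BLK_DEV_MD=y        # md (software RAID) core
CONFIG_MD_RAID0=y          # the RAID personality you boot from
CONFIG_BLK_DEV_DM=y        # device-mapper (dm-mod)
```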

Cheers,

T

PS: i have Reiser4 under raid0 on all partitions except /boot.

----------

## yottabit

Here is my grub.conf just for info:

```
hal root # cat /boot/grub/grub.conf

default 2
timeout 3
splashimage=(hd0,0)/grub/splash.xpm.gz

title=Gentoo Linux 2.6.9-gentoo
root (hd0,0)
kernel /boot/bzImage-2.6.9 root=/dev/md1

title=Gentoo Linux 2.6.9-gentoo (VPN)
root (hd0,0)
kernel /boot/bzImage-2.6.9-vpn root=/dev/md1

title=Gentoo Linux 2.6.11-mm
root (hd0,0)
kernel /boot/bzImage-2.6.11-mm root=/dev/md1 elevator=deadline
```

----------

## Phk

```
ReiserFs: md0: warning: sh-2006: read_super_block: bread failed (dev md0, block 2, size 4096)
ReiserFs: md0: warning: sh-2006: read_super_block: bread failed (dev md0, block 16, size 4096)
VFS: Cannot open root device "md0" or unknown-block(9,0)
```

:Sad:  What can this be??

It happens while the kernel is loading, after trying "root=/dev/md0".

But at least it recognized my Reiser4 partition type... good?

----------

## Tiger683

You must NOT compile your SATA, dm-mod and whatever your chipset features are as modules.

If you do, that's the error that comes up.

I have a 300GB disk as /dev/md0 and /dev/md1; the first is my / AND Reiser4. I had the same problems you are getting when I compiled some drivers needed at boot time as modules.

Oh, and SCSI has to be built in too...

cheers,

T

PS: if that won't help, add

```
md=0,/dev/sda1,/dev/sdb1
```

to your kernel parameters, given sda1 and sdb1 are the (RAID-autodetect) partitions in your /dev/md0.
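Put together, the whole grub kernel line would look something like this (a sketch only; the kernel image name and partition numbers are assumptions, so adjust them to your layout):

```
kernel /boot/bzImage-2.6.9 root=/dev/md0 md=0,/dev/sda1,/dev/sdb1
```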

----------

## Phk

Thank you thank you, i'll do this in a sec....  :Wink: 

brb

----------

## Phk

Well, the problem remains...

Even with md=0,/dev/sdaX,/dev/sdbX...

I think the problem is: both my disks have

1) a primary NTFS partition

2) an extended partition

3) extended: Linux root partition (Linux raid autodetect)

4) extended: (other)

Fdisking my /dev/sda gives me 1) and 2)... The system knows "/dev/sda, sda1, sda2".

BUT!!!

Fdisking my /dev/sdb gives "no partition table"...

So when I do md=0,/dev/sdaX,/dev/sdbX, I don't know which X I should try!! I've tried 1, {NULL}, and 3.

Please... help me out with this sh**...

----------

## Tiger683

[EDIT]

OK, I read through your post more thoroughly.

Your extended partition also counts as a volume (sda2, even though you can't effectively use it directly), and Linux numbers the logical partitions inside it starting at 5, so your "X" is most probably 5: the root partitions would be sda5 and sdb5 respectively.

[/EDIT]

OK, first some major rules:

1) In a RAID-0, both partitions in the RAID have to be the same size.

2) They had better be corresponding partitions, in the same order, on both SATA disks (so e.g. sda2 AND sdb2).

I also had problems with RAID over logical partitions (those inside the extended one), so I ended up making up to 4 primary partitions for RAID in md0, md1, md2, md3.

My suggestion:

Get the windoze partition onto some other drive or back it up, and make your RAID on primary partitions.

Don't forget you can make up to 4 primary partitions per disk, so you might also try blowing away the extended one and making two/three primaries and doing your RAID there. Given your NTFS is the first partition on disk 1, make a same-sized one on the second, use it for whatever, then you can make your three md's, starting with md0:

sda2+sdb2=md0

sda3+sdb3=md1

sda4+sdb4=md2

I'll check back as soon as I drop into the forums again, but keep in mind I'm not a linux-raid guru, so I can only help as far as I know myself...

cheers,

T
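The sda2+sdb2=md0 layout above would translate into a raidtab stanza like this (a sketch only, mirroring the RAID-1 example posted earlier in the thread; the chunk-size value and partition numbers are assumptions):

```
# /etc/raidtab entry for a two-disk RAID-0 (striped) array
raiddev                 /dev/md0
raid-level              0
nr-raid-disks           2
chunk-size              32
persistent-superblock   1
device                  /dev/sda2
raid-disk               0
device                  /dev/sdb2
raid-disk               1
```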

----------

## Phk

Man, thanks for the help!

I'll try it like you said as soon as I get home.

But first, there's a simple doubt I have:

Why does "fdisk /dev/sdb" show NO PARTITION TABLE?

It's normal that Linux can't mount the RAID, since it doesn't know "sdbX"...

So, if I create all the partitions "by hand" (with fdisk) on each drive (first sda and then sdb), I can do the "md=0,/dev/sda1,/dev/sdb1" trick, but... what about the BIOS RAID? Should I turn it off?

That way I won't see the RAID under Windows, but I'll see both disks, won't I?

Isn't there a way to keep the BIOS RAID active, so I can use a RAID-0 "c:\" partition?

Thanks a lot man; so many guys out there, and no-one spent a second helping me except you!  :Smile: 

----------

## yottabit

Most consumer RAID controllers are driver-based only and don't actually have hardware RAID functionality (otherwise the OS would only see one drive, the array, instead of multiple drives). You can keep the BIOS's RAID settings for use with Windows without interfering with Linux.

If your second drive doesn't show any partitions, you might be hosed... Did you say that Windows still boots on the RAID-0 array? (Sorry, I'm too lazy to read back right now.)

If you're lucky you could recreate the partitions on the second drive EXACTLY as they were before (copy drive 1's settings?) and you'll be back in business... don't forget to set the partition types correctly.

But I've never had to do this, so I'm just repeating what I've heard... if other areas of the disk have been written to, you've likely lost some, if not all, of your data.
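The "copy drive 1's settings" idea can be done with sfdisk's dump format. Here it's demonstrated on scratch image files as an illustration only; on real hardware the devices would be /dev/sda and /dev/sdb, and rewriting sdb's table is exactly the risky step described above:

```shell
# Scratch files standing in for the two disks
dd if=/dev/zero of=/tmp/sda.img bs=1M count=8 2>/dev/null
dd if=/dev/zero of=/tmp/sdb.img bs=1M count=8 2>/dev/null

# Give the first "disk" a single Linux partition
# (script line is start,size,type; defaults fill the disk)
echo ',,L' | sfdisk /tmp/sda.img

# Dump the first disk's partition table and replay it onto the second
sfdisk -d /tmp/sda.img | sfdisk /tmp/sdb.img

# Inspect the copied table
sfdisk -d /tmp/sdb.img
```

Afterwards both images carry identical partition layouts, including the partition type bytes.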

----------

## Phk

No no, I have several partitions (NTFS, FAT32, the Windows root partition) working right here, right now (in Windows).

The RAID works well in Windows (here) and in the Gentoo live CD (through "raid0run" with a configured "/etc/raidtab").

But if I re-partition my sdb (and not my md0), the RAID will probably get corrupted!

Maybe if there were a way to do a "software RAID" under Windows, I could partition my drives manually in Linux, create the array, and then, in Windows, create a software-RAID C:\ (using two of the partitions just created).

But then, would grub be able to boot a Windows RAID-0 partition?

Because I have grub running on a pen drive, and it boots my Linux kernel. However, I don't know how to make it boot this Windows partition, so I just unplug the USB HDD and reboot. (The BIOS detects the RAID and boots from C:\.)

----------

## Tiger683

@Phk: BAD NEWS

Ooh, you didn't say you have a fakeraid driven by the controller's BIOS... : /

I assume you don't have a RAID controller driving a real hardware RAID, otherwise Linux would detect it (I'm sure you don't if it's an onboard controller).

The fakeraid of your controller works similarly to Linux RAID, except that the controller's BIOS does the work the Linux kernel does for linux-mdraid. Now, the controller only sees the array it created, and Windows only sees the array of the controller, which is also known to the Windows driver. Messing around with the single drives under Linux is a little dangerous, because the controller always binds the whole disk into its array (it doesn't do per-partition RAID), and if you, by any chance, manage to screw up the array layout created by the controller BIOS (i.e. overwrite the superblock with array data), you can kiss your Windows goodbye.

Furthermore, Linux can't handle the controller-created array by any means, so it won't be able to handle your drives at all if they're already fully partitioned; and even if they aren't, it will most probably see your disks as fully partitioned and thus won't be able to handle them.

You have a choice:

1) Either you back up your Windows, destroy the controller array, and install Windows on non-RAID. Then you will not have any major problems installing Linux on an md-RAID you create.

2) If you want to keep Windows on RAID, and thus the controller-driven RAID, you have to look for another home for your Linux. Even if you manage to create the partitions for Linux RAID, because the controller takes only as much space on the disks as it needs for its in-array partitions, chances are good that those partitions will overwrite the information the controller's BIOS wrote to mark the end of the fakeraid array you are using under Windows, which also results in kissing your Windows partitions goodbye.

cheers

T

----------

## Phk

Wow... I suspected that, but I didn't want to believe it...  :Crying or Very sad: 

Tell me, if I buy a PCI card to do the work:

1) Would this be possible? (Having the array working for both Windows and Linux?)

2) Would I get better performance?

2) Would i get better performance? 

Thanks a lot Tiger!! You are helping me big time!  :Very Happy: 

----------

## lbrtuk

1: Yes.

2: Unlikely. Hardware RAID systems are usually disappointingly slow and underperform Linux software RAID. Also, with hardware RAID you're trusting your data integrity to one card: if the card burns out you have to go and buy another one of the same model to retrieve your data. Just hope the manufacturer hasn't discontinued that model.

----------

## yottabit

Hardware arrays vary drastically by type and manufacturer. SCSI arrays can be quite high-performance.

And if you use controllers from LSI Logic, they have great interoperability: they will generally recognize and use arrays defined on different LSI controller models.

I don't yet have any experience with *hardware* RAID controllers for SATA drives.

----------

## Phk

Nice... =) Guess I'll go shopping...

But first, I'll post an off-topic thread asking about user experience with hardware RAID, to find the best systems available  :Wink: 

Thank you all, and maybe I'll return to this thread later to ask how to configure the new RAID PCI card  :Very Happy: 

See us around,

Phk

----------

## Phk

[EDIT]

I have just realised that the Love-Sources kernel doesn't have the "dm-raid" patches... No wonder I can't mount root at boot...  :Embarassed: 

Now I just have to find out what name my root partition will have under /dev/mapper/??????... Any ideas?

[/EDIT]

See this!!

https://forums.gentoo.org/viewtopic.php?p=2198104

(comments about my issues should be placed there. Thanks.)

----------

## El_Goretto

Just avoid the buggy combination of Seagate SATA disks + Sil 311x controllers, or you'll get frighteningly poor performance...

----------

