# [SOLVED] Unable to boot: SIL + nv SATA controllers with 6 HDs

## exhuma.twn

The motherboard in question (ASUS A8N SLI Deluxe) has two different SATA controllers on board: an nVidia controller and a Silicon Image (SIL) controller.

I have 6 disks connected as follows:

1. 80GB hard disk for the OS

2. 300GB Volatile data

3. 300GB Disk 1 of Raid array #1 (mirror)

4. 300GB Disk 2 of Raid array #1 (mirror)

5. 300GB Disk 1 of Raid array #2 (mirror)

6. 300GB Disk 2 of Raid array #2 (mirror)

Both raid arrays are used as Physical Volumes for one large LVM storage area.
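For reference, building one large LVM storage area out of two mirror arrays looks roughly like this. This is only a sketch; the `/dev/md0`/`/dev/md1` device names and the `storage`/`data` VG/LV names are made up, not taken from my actual setup:

```shell
# Mark both mirror arrays as LVM physical volumes
# (device names are examples -- adjust to your setup):
pvcreate /dev/md0 /dev/md1

# Pool them into a single volume group...
vgcreate storage /dev/md0 /dev/md1

# ...and carve one big logical volume out of the combined space:
lvcreate -l 100%FREE -n data storage
mkfs.ext3 /dev/storage/data
```

All of these need root and obviously destroy whatever is on the target devices, so double-check the device names first.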

The drives are connected as follows:

OS on nv-SATA port #1

Volatile data on nv-SATA #2

Array 1 on nv-SATA #3 and #4

Array 2 on SIL-SATA #1 and #2

If I only connect the OS disc (and adjust GRUB accordingly), the system boots fine. When I connect all discs, my OS disc suddenly becomes "sde", although it is connected on the first SATA port of the "main" controller. I assume it would become sda again if I connected it to the SIL controller, but I cannot use that as boot device in the BIOS  :Sad: 
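As an aside, to keep the mounted system sane when the kernel's sdX naming shuffles like this, filesystems can be referenced by label in /etc/fstab instead of by device node. A sketch for ext2/ext3 (the device name and label here are made up):

```shell
# Give the root filesystem a stable label (ext2/ext3 only;
# other filesystems have their own labelling tools):
e2label /dev/sde1 osroot

# Then /etc/fstab can use the label instead of the shifting sdX name:
#   LABEL=osroot   /    ext3   noatime   0 1
```

This does not help the BIOS/GRUB stage, of course, but it stops the running system from mounting the wrong disk after a reshuffle.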

Having said all that, the really annoying part is that GRUB does not show up at all after connecting all discs. The system stops right after the BIOS has listed the drives and the PCI devices. It looks as if it should be showing a "NON SYSTEM DISK OR DRIVE ERROR" message, but that message is BIOS dependent, so I assume this BIOS simply sits there with no output.

Screenshot (this is where it hangs)

However, GRUB is installed on the drive that is selected as first boot device in the BIOS. I am perfectly certain about this, as it is the only device with multiple partitions and the only device with less than 100GB  :Wink:  And it works when only that disk is connected.

----------

## BradN

I would install the grub MBR on all disks, and then hopefully one of them would get booted by the BIOS.
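With GRUB legacy that would look roughly like this from a root shell, repeated once per disk. The device names and partition numbers are examples only; the `root` line must point at whichever partition actually holds /boot on that disk:

```shell
# Enter the interactive GRUB shell as root:
grub

# Inside the GRUB shell, for each disk: map a BIOS drive to the
# Linux device, point GRUB at the /boot partition, write the MBR.
# (hd0)/(hd0,0)/dev/sda are examples -- repeat with sdb, sdc, ...
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```

That way, whichever disk the BIOS happens to hand control to should have a working boot sector.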

It might also be helpful to boot grub from a floppy disk and troubleshoot which drives can be seen at that point.

----------

## exhuma.twn

Ok, GRUB is installed on each drive now. I still get the same error.

Next I tried to boot a Linux rescue system from CD. Among the options on this CD there is also one to boot straight into GRUB. But no matter which option I choose, I get the same result: a frozen screen with a blinking cursor at the bottom.

I will maybe try to install LILO now, and see what that gives me.

Still, I would prefer to have GRUB as bootloader. And somehow it should work, as it was working on another distro  :Neutral: 

----------

## exhuma.twn

Damn. LILO gives the same error.

----------

## exhuma.twn

The problem seems in fact to lie with the SIL controller. As soon as I connect *any* HD onto that controller, the whole system refuses to boot.

If I connect all to the nv controller, it works fine.

However, now I only have my volatile data available; the important storage area is still inaccessible because the second LVM physical volume is missing.  :Sad: 

----------

## Rob1n

Have you checked for BIOS updates?

----------

## BradN

It would seem there's some kind of conflict between the BIOS for the two controllers... What happens if you wait until the grub menu to plug in drives to the SIL controller, and then continue booting linux?  (I have to do this sometimes with the add-on IDE on my system with VIA onboard SATA and a Promise EIDE card, otherwise the Promise BIOS locks up, apparently in conflict with the SATA BIOS)

----------

## exhuma.twn

Isn't this kind of "hotplugging" dangerous to the disks?

I will also re-check the BIOS *again* to see if I really did not miss any setting  :Wink: 

----------

## BradN

I've never had anything damaged from doing that with PATA drives (the worst that happens is the system locks up), but if you do it while the disk is being written (or with an OS running in general), it may cause data loss. Then again, I've successfully hotplugged PCI cards on hardware not designed for it (this is more dangerous and I probably wouldn't recommend it)  :Smile: 

At the very least, it's a troubleshooting option to verify that the hardware is working properly and that it's just the various BIOSes involved that won't let it boot. (Alternatively, boot the kernel off a floppy disk, if you can get it to fit on one.)

----------

## exhuma.twn

Finally some results.

Spent one whole day with a mate figuring out what was wrong. Eventually it turned out to be a problem with BIOS interrupt 13h (the disk services): the device listing was wrong. Here is how I got it working:

I now have my system (boot) HD on the SIL controller (!). To make it work I had to create a bogus RAID device on the SIL controller with that disk. I say bogus because it's a RAID array with only 1 device  :Wink:  BE CAREFUL though: the SIL controller stores its meta-info about the arrays at the very end of the disk. So if you have ANY partition that stretches right up to the end of the disk, you should resize it and leave some free space at the end. Otherwise the filesystem might get screwed up (depending on the FS, of course), but reducing its size and leaving some room should keep you safe. I will not take any responsibility for any loss of data; you have been warned  :Wink:  I still kept some backups using the ever-so-nice "partimage" tool  :Wink:  Just to be on the safe side.
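For anyone attempting the shrinking step, here is a rough sketch for an ext2/ext3 filesystem. The device name and sizes are made up, and other filesystems need their own tools; back up first, as above:

```shell
# BACK UP FIRST. Shrink the filesystem to slightly below the
# partition size so the SIL RAID metadata written at the end of
# the disk cannot clobber it (device and sizes are examples):
umount /dev/sda1
e2fsck -f /dev/sda1
resize2fs /dev/sda1 74G    # shrink an ~80GB filesystem, leaving slack

# Then use fdisk (or parted) to recreate the partition so it ends
# a few MB before the end of the disk, keeping the SAME start sector.
```

Keeping the start sector identical is what makes this safe-ish: only unused space at the tail of the partition is given up.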

However: once the disk is marked as a Logical Volume in the SIL BIOS, the machine boots again.

I now have one more disk on the SIL controller, which works normally (no need to put it in a RAID array). The others are running on the nv controller as-is, no RAID there either. However, I do have software RAID set up in Linux, but that has nothing to do with the "hardware" RAID supplied by the nv and SIL controllers.
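For reference, such a Linux software mirror is created roughly like this with mdadm. Device names are examples only; this is entirely independent of the controllers' BIOS "RAID":

```shell
# Build a software (md) RAID-1 mirror from two disks
# (device names are examples -- this wipes their contents):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Watch the initial sync, then use /dev/md0 like any block device,
# e.g. as an LVM physical volume:
cat /proc/mdstat
pvcreate /dev/md0
```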

----------

## doc.twn

wow man, finally!

it was about time that you fixed that box  :Cool: 

----------

## BradN

Go figure... buggy vendor-provided BIOS...

----------

