# Persistent udev hard disk link. Dual hardware environment

## SunHateR

Hello

My hardware configuration is:

Motherboard: Asus M3N-HT Deluxe Mempipe

HDD: 3x1TB SATA2 (2x1TB Stripe RAID for Windows & 1x1TB for Gentoo)

I successfully installed and configured a Gentoo x86_64 system. I have read-write access to the Windows partitions using device-mapper nvraid and the ntfs-3g driver.

Because I'm using the RAID functionality, the Gentoo hard disk must be a RAID array with a single disk. That in itself is not a problem.

The hard disks device files are:

For Windows: /dev/mapper/nvidia_fadgbica

/dev/mapper/nvidia_fadgbica1 - C: drive

/dev/mapper/nvidia_fadgbica2 - D: drive

/dev/mapper/nvidia_fadgbica3 - E: drive

For Linux: /dev/mapper/nvidia_iaahhiff

/dev/mapper/nvidia_iaahhiff1 - boot partition

/dev/mapper/nvidia_iaahhiff2 - swap partition

/dev/mapper/nvidia_iaahhiff3 - root partition

/dev/mapper/nvidia_iaahhiff4 - extended partition

/dev/mapper/nvidia_iaahhiff5 - store partition

/dev/mapper/nvidia_iaahhiff6 - NTFS partition (F: drive)

The Gentoo system works properly, but I also want to run it from Windows through VMware. I have configured Gentoo to work in both hardware environments, but only by using /dev/sda* instead of the /dev/mapper/nvidia_iaahhiff* device files. In that setup I have no access to the Windows RAID array in the native environment.

The main problem is that /etc/fstab would have to differ between the two environments. My idea is to symlink the Linux partitions with udev, but I don't know how.

For the native environment:

/dev/mapper/nvidia_iaahhiff*  ->  /dev/gnt*

For the VMware environment:

/dev/sda*  ->  /dev/gnt*

Then I would use the /dev/gnt* paths in /etc/fstab.

Looking at /etc/udev/rules.d/70-persistent-cd.rules, I tried to do this by creating /etc/udev/rules.d/80-hdd-symlink.rule with this content:

```
SUBSYSTEM=="block", ENV{ID_HDD}=="?*", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0", SYMLINK+="gnt", ENV{GENERATED}="1"
SUBSYSTEM=="block", ENV{ID_HDD}=="?*", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0-part1", SYMLINK+="gnt1", ENV{GENERATED}="1"
SUBSYSTEM=="block", ENV{ID_HDD}=="?*", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0-part2", SYMLINK+="gnt2", ENV{GENERATED}="1"
SUBSYSTEM=="block", ENV{ID_HDD}=="?*", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0-part3", SYMLINK+="gnt3", ENV{GENERATED}="1"
SUBSYSTEM=="block", ENV{ID_HDD}=="?*", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0-part4", SYMLINK+="gnt4", ENV{GENERATED}="1"
SUBSYSTEM=="block", ENV{ID_HDD}=="?*", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0-part5", SYMLINK+="gnt5", ENV{GENERATED}="1"
SUBSYSTEM=="block", ENV{ID_HDD}=="?*", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0-part6", SYMLINK+="gnt6", ENV{GENERATED}="1"
```

without success.
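Two things stand out in the attempt above. First, udev only processes files whose names end in `.rules`, so a file named `80-hdd-symlink.rule` is silently ignored. Second, `ID_HDD` is not a property the stock rules set (the properties a device actually carries can be listed with `udevadm info --query=property --name=/dev/sda1`), so the `ENV{ID_HDD}=="?*"` match likely never succeeds. A minimal sketch with both issues addressed, assuming the same `ID_PATH` values (which may only be present on the plain `sda*` nodes, not the device-mapper ones):

```
# /etc/udev/rules.d/80-hdd-symlink.rules  -- the ".rules" suffix is required
SUBSYSTEM=="block", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0", SYMLINK+="gnt"
SUBSYSTEM=="block", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0-part1", SYMLINK+="gnt1"
SUBSYSTEM=="block", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0-part2", SYMLINK+="gnt2"
SUBSYSTEM=="block", ENV{ID_PATH}=="pci-0000:00:10.0-scsi-0:0:0:0-part3", SYMLINK+="gnt3"
```

The remaining partitions follow the same pattern. `udevadm test /sys/block/sda` shows which rules match without rebooting.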

Can anyone help?

----------

## SunHateR

I solved the problem temporarily by modifying the /etc/init.d/checkroot script:

```
...

start() {
    if /usr/sbin/lspci | grep -q VMware ; then
        ln -s /dev/sda  /dev/gnt
        ln -s /dev/sda1 /dev/gnt1
        ln -s /dev/sda2 /dev/gnt2
        ln -s /dev/sda3 /dev/gnt3
        ln -s /dev/sda4 /dev/gnt4
        ln -s /dev/sda5 /dev/gnt5
        ln -s /dev/sda6 /dev/gnt6
    else
        ln -s /dev/mapper/nvidia_iaahhiff  /dev/gnt
        ln -s /dev/mapper/nvidia_iaahhiff1 /dev/gnt1
        ln -s /dev/mapper/nvidia_iaahhiff2 /dev/gnt2
        ln -s /dev/mapper/nvidia_iaahhiff3 /dev/gnt3
        ln -s /dev/mapper/nvidia_iaahhiff5 /dev/gnt5
        ln -s /dev/mapper/nvidia_iaahhiff6 /dev/gnt6
    fi

...
```
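The same dispatch can be written more compactly by factoring the link creation into a loop. A sketch of that idea, not the actual checkroot hook: `DEVDIR` stands in for `/dev` here (defaulting to a scratch directory so the logic can be tried safely), and the missing `gnt4` link in the native branch is kept as in the script above:

```shell
#!/bin/sh
# Sketch: create gnt* symlinks for whichever environment we are running in.
# DEVDIR would simply be /dev in the real init script.
DEVDIR=${DEVDIR:-$(mktemp -d)}

# Create a whole-disk link plus one link per listed partition number.
make_links() {
    prefix=$1; shift
    ln -sf "$prefix" "$DEVDIR/gnt"
    for n in "$@"; do
        ln -sf "$prefix$n" "$DEVDIR/gnt$n"
    done
}

if lspci 2>/dev/null | grep -q VMware; then
    make_links /dev/sda 1 2 3 4 5 6
else
    # Partition 4 (the extended container) gets no link, as in the original.
    make_links /dev/mapper/nvidia_iaahhiff 1 2 3 5 6
fi
```

Using `ln -sf` instead of `ln -s` also makes the script safe to re-run, since it overwrites stale links instead of failing.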

/etc/fstab:

```
/dev/gnt2  none        swap     sw              0 0
/dev/gnt3  /           ext3     defaults        1 1
/dev/gnt1  /boot       ext2     defaults        1 2
/dev/gnt5  /store      ext4     defaults        0 3
/dev/cdrom /mnt/cdrom  auto     user,noauto,ro  0 0
none       /proc       proc     defaults        0 0
none       /dev/shm    tmpfs    defaults        0 0
```

----------

## Mad Merlin

You could also use labels or UUIDs in your fstab instead of device names. I don't recall whether you can use either on your kernel command line (root=foo), but even if not, you can just add a second GRUB entry or edit the command in GRUB on the fly.
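For example, with the Linux partitions labelled (via `e2label` or at mkfs time), the fstab lines could avoid device paths entirely. A sketch, assuming hypothetical labels named `boot`, `root`, and `store`:

```
# /etc/fstab using filesystem labels instead of device paths (sketch)
LABEL=root   /       ext3   defaults   1 1
LABEL=boot   /boot   ext2   defaults   1 2
LABEL=store  /store  ext4   defaults   0 2
```

Note that the kernel on its own generally cannot resolve `root=LABEL=` or `root=UUID=` without an initramfs, so for the root device the second GRUB entry (or editing `root=` at boot) remains the practical fallback.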

----------

## SunHateR

Labels and UUIDs can't help: they point at the sda* partitions. But if I use the /dev/sda* paths in /etc/fstab, the root partition cannot be mounted read-write. It only works if I use the /dev/mapper/nvidia_iaahhiff* paths:

```
# ls -l /dev/disk/by-label/
total 0
lrwxrwxrwx 1 root root 10 2010-02-21 00:59 Backup -> ../../sda6
lrwxrwxrwx 1 root root 29 2010-02-21 00:59 Buffer -> ../../mapper/nvidia_fadgbica2
lrwxrwxrwx 1 root root 29 2010-02-21 00:59 Data -> ../../mapper/nvidia_fadgbica3
lrwxrwxrwx 1 root root 29 2010-02-21 00:59 System -> ../../mapper/nvidia_fadgbica1
lrwxrwxrwx 1 root root 10 2010-02-21 00:59 boot -> ../../sda1
lrwxrwxrwx 1 root root 10 2010-02-21 00:59 root -> ../../sda3
lrwxrwxrwx 1 root root 10 2010-02-21 00:59 store -> ../../sda5

# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 29 2010-02-21 00:59 3820C20620C1CB58 -> ../../mapper/nvidia_fadgbica3
lrwxrwxrwx 1 root root 10 2010-02-21 00:59 57f73bc4-1f0b-4c24-b9b6-c69dc69063aa -> ../../sda2
lrwxrwxrwx 1 root root 29 2010-02-21 00:59 5E9C1D849C1D57BB -> ../../mapper/nvidia_fadgbica1
lrwxrwxrwx 1 root root 10 2010-02-21 00:59 6418B0F918B0CB76 -> ../../sda6
lrwxrwxrwx 1 root root 10 2010-02-21 00:59 7ceb8b00-4565-4ef3-85b7-da6e519f8c6c -> ../../sda1
lrwxrwxrwx 1 root root 10 2010-02-21 00:59 983d5cd5-7e90-4d8b-8769-199cea13526a -> ../../sda5
lrwxrwxrwx 1 root root 29 2010-02-21 00:59 A4B6BB07B6BAD94E -> ../../mapper/nvidia_fadgbica2
lrwxrwxrwx 1 root root 10 2010-02-21 00:59 cfc871f9-2117-4683-9c79-5db11acfcb66 -> ../../sda3
```
