# Sata port multiplier, identical HDs and udev [SOLVED]

## pdr

I have a SiI3132 card as an eSATA port multiplier to an enclosure with 4 identical Western Digital 1TB drives. When I first installed it they were created as /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde. I partitioned each with a single partition spanning the drive and created a 2.8TB RAID-5 with mdadm at /dev/md1. My /etc/mdadm.conf was configured with:

```
DEVICE /dev/sd[bcde]1

ARRAY /dev/md1 devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
```

Last night we had some storms coming through, so I:

```
umount /media1                           # this is my mount point for the raid drive

cryptsetup luksClose /dev/mapper/media1  # it is encrypted

# probably should have run mdadm /dev/md1 --stop but did not
```

and then shut down the external enclosure.

Later, when the storms had passed and it was movie time, I turned the enclosure back on. Looking in /proc/mdstat, my array was gone. ls /dev/sd* showed the reason: my drives had become /dev/sdg, /dev/sdh, /dev/sdi and /dev/sdj (a USB docking station drive had become /dev/sdf). I ended up assembling a new /dev/md2 manually and using it.

So I wanted to fix this with udev. The problem is that if I run udevinfo -a -p on both /dev/sdg (or /dev/sdg1) and /dev/sdh and diff the outputs, the only differences are the PCI bus specs. Are those going to remain the same? Obviously udev's persistent disk rules couldn't distinguish the drives, since it linked them to different /dev/sd* entries - so how can I make udev rules for them? The sil3132 card is in the only PCIe slot that I have - will the PCI channels remain the same after a reboot?

Since the 4 drives are all "related" in that I want them in my raid array, I actually don't care whether a particular physical drive links to a particular /dev/sd*; that is, if I end up with /dev/raid1, /dev/raid2, /dev/raid3 and /dev/raid4, I don't care if they map to the same particular drives as last reboot - just that all 4 are the drives in the enclosure. Can I play on that with udev rules? I can come up with rules matching any drive in the enclosure (e.g. they all have ATTRS{model}=="WDC WD10EADS-00L" and DRIVERS=="sata_sil24"), but I have no idea if/how to come up with a udev rule that means "if it is one of these drives then map it to /dev/raidN, where N is 1-4 and does not exist yet".

*Last edited by pdr on Sat May 23, 2009 9:33 pm; edited 1 time in total*

----------

## Malvineous

As an alternative, given that the Linux RAID system tags each disk with a unique serial number, you should be able to set the partition type on each disk to "Linux RAID autodetect". This will definitely bring up the array at boot time with no mdadm.conf (I know because that's how my system works; it boots off the software RAID array), but I'm not sure how it would work while the system is running. I *think* once you've plugged all the disks in you'll have to run mdadm and give it one of the disks (partitions) as a parameter, and then it should do a search, find all the other disks and bring up the array.
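A rough sketch of those commands, using the device names from this thread (the `sfdisk --change-id` form is from the older util-linux sfdisk, and the scan behaviour may differ by mdadm version - check your man pages before running anything against real disks):

```
# mark partition 1 on each member disk as type fd ("Linux RAID autodetect")
sfdisk --change-id /dev/sdb 1 fd
sfdisk --change-id /dev/sdc 1 fd
sfdisk --change-id /dev/sdd 1 fd
sfdisk --change-id /dev/sde 1 fd

# after replugging the enclosure, scan the superblocks and assemble:
mdadm --examine --scan    # prints an ARRAY line for any members it finds
mdadm --assemble --scan   # assembles whatever arrays the scan found
```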

----------

## pappy_mcfae

Were the storms very severe? Did you lose power at all? Did you notice spikes that seemed to coincide with the lightning? Do any other electronic devices show any signs of trouble? Do the drives sound as if they're spinning up? Any report from the RAID card as to whether the drives are ready?

Blessed be!

Pappy

----------

## Monkeh

Use UUIDs instead of devices and it won't matter what letter they get assigned.

My current mdadm.conf looks like this (autogenerated): 

```
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=82554e2b:0fcdeb1e:9cf0bf69:a0ec9769

ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=3335947a:001e0aff:95b4af67:bcf289d0

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=de6ac5be:b9875e79:783407ca:4ddc21aa
```

All three arrays consist of pairs of identical drives.
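For what it's worth, that UUID= value lives in each member's md superblock, so it survives any /dev/sd* reshuffle. A small sketch of pulling the UUID field out of an ARRAY line like the ones above (the sample line is copied from this post; the sed pattern is just one way to do it):

```shell
# an ARRAY line as produced by `mdadm --detail --scan`
line='ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=3335947a:001e0aff:95b4af67:bcf289d0'

# extract just the UUID field
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID=\([0-9a-f:]*\).*/\1/p')
echo "$uuid"    # 3335947a:001e0aff:95b4af67:bcf289d0
```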

----------

## pdr

I was thinking of UUIDs - but it looks like I don't get one? When I ls /dev/disk/by-uuid I only see /dev/md1 and /dev/mapper/media1 listed, NOT the drives making up /dev/md1.

I can find the serial number for the drives (hdparm -I, or /dev/disk/by-id), and should be able to use that with ENV{ID_SERIAL} in my udev rule, but I have not worked out a correct rule yet: when I restarted udev I did NOT get my symlinked /dev/raid1, /dev/raid2, /dev/raid3 and /dev/raid4 like I should have.
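Since the /dev/disk/by-id names already embed the serial, one way to grab the short serial is plain shell string stripping. A sketch using a name modeled on the drives in this thread (the `ata-` prefix and model_serial layout are the usual by-id convention, but verify against your actual links):

```shell
# a by-id style name: <bus>-<model>_<serial>
id='ata-WDC_WD10EADS-00L5B1_WD-WCAU47193107'

# strip everything up to and including the last underscore
serial=${id##*_}
echo "$serial"    # WD-WCAU47193107
```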

----------

## Monkeh

The UUID is contained within the RAID superblock. A simple way to create a suitable mdadm.conf when the array is assembled is:

```
mdadm --detail --scan >> /etc/mdadm.conf
```

----------

## pdr

I really wanted to actually identify and symlink the raid drives, and have done so.

First thing was to identify them for udev purposes, even though they seem to be identical to udevinfo. I finally ended up seeing what the heck udev was doing with them by:

```
udevadm control --log_priority=debug

udevadm control --reload_rules

udevadm trigger
```

I didn't muck with log settings in /etc/udev/udev.conf, so debugging was written to /var/log/messages. What I noticed for sure is that udev (or one of its rules files) was setting an ID_ATA_COMPAT env variable; for me the entries looked like:

```
grep -n 'ID_ATA_COMPAT' /var/log/messages

13751:May 23 16:29:15 server udevd-event[1255]: match_rule: set ENV 'ID_ATA_COMPAT=WDC_WD10EADS-00L5B1_WD-WCAU47456521'
13944:May 23 16:29:15 server udevd-event[1279]: match_rule: set ENV 'ID_ATA_COMPAT=WDC_WD10EADS-00L5B1_WD-WCAU47569899'
15287:May 23 16:29:16 server udevd-event[1407]: match_rule: set ENV 'ID_ATA_COMPAT=WDC_WD10EADS-00L5B1_WD-WCAU47586351'
15292:May 23 16:29:16 server udevd-event[1418]: match_rule: set ENV 'ID_ATA_COMPAT=WDC_WD10EADS-00L5B1_WD-WCAU47193107'
23289:May 23 16:29:17 server udevd-event[2108]: match_rule: set ENV 'ID_ATA_COMPAT=WDC_WD800JD-60LSA5_WD-WMAM9WE01602'
```

Lo and behold - the 4 WDC_WD10EADS-... entries are the 4 drives in the raid (the WDC_WD800... is /dev/sda). I was stymied earlier because, from the /dev/disk/by-id links that get made, I was expecting ID_SERIAL and ID_SERIAL_SHORT to be exported.
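To double-check which value a given log line carries without eyeballing it, the quoted value can be cut out with sed. A sketch run against one of the lines above:

```shell
# one of the udevd-event log lines quoted above
logline="May 23 16:29:16 server udevd-event[1418]: match_rule: set ENV 'ID_ATA_COMPAT=WDC_WD10EADS-00L5B1_WD-WCAU47193107'"

# pull out the value between ID_ATA_COMPAT= and the closing quote
compat=$(printf '%s\n' "$logline" | sed -n "s/.*ID_ATA_COMPAT=\([^']*\)'.*/\1/p")
echo "$compat"    # WDC_WD10EADS-00L5B1_WD-WCAU47193107
```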

In case these env vars were being set by rules higher-numbered than mine, I changed my rules file from 10-pdr.rules to 85-pdr.rules. And added:

```
SUBSYSTEM=="block", ENV{ID_ATA_COMPAT}=="WDC_WD10EADS-00L5B1_WD-WCAU47193107", SYMLINK+="raid1"
SUBSYSTEM=="block", ENV{ID_ATA_COMPAT}=="WDC_WD10EADS-00L5B1_WD-WCAU47456521", SYMLINK+="raid2"
SUBSYSTEM=="block", ENV{ID_ATA_COMPAT}=="WDC_WD10EADS-00L5B1_WD-WCAU47569899", SYMLINK+="raid3"
SUBSYSTEM=="block", ENV{ID_ATA_COMPAT}=="WDC_WD10EADS-00L5B1_WD-WCAU47586351", SYMLINK+="raid4"
```

to it. After reloading the rules and triggering, a listing of /dev now shows:

```
ls -l /dev/raid*

lrwxrwxrwx 1 root root 4 May 23 16:32 /dev/raid1 -> sde1
lrwxrwxrwx 1 root root 4 May 23 16:32 /dev/raid2 -> sdb1
lrwxrwxrwx 1 root root 4 May 23 16:32 /dev/raid3 -> sdc1
lrwxrwxrwx 1 root root 4 May 23 16:32 /dev/raid4 -> sdd1
```

Which is EXACTLY what I wanted. I have not done so yet (am in the middle of some work), but I will change my /etc/mdadm.conf to:

```
DEVICE /dev/raid[1234]

ARRAY /dev/md1 devices=/dev/raid1,/dev/raid2,/dev/raid3,/dev/raid4
```
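Once that conf is in place, a quick way to prove it works before the next storm (only with the filesystem unmounted and the LUKS mapping closed, as above) would be something like:

```
mdadm --stop /dev/md1

mdadm --assemble /dev/md1   # should pick up all 4 members via /dev/raid[1234]
```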

----------

## Monkeh

I would recommend using the array UUID instead. If udev breaks, your rules will do nothing. The UUID will never break or change.

----------

## pdr

No thanks. If udev breaks so will my Xorg and I wouldn't be able to watch any of my media anyway...

----------

## Monkeh

 *pdr wrote:*   

> No thanks. If udev breaks so will my Xorg and I wouldn't be able to watch any of my media anyway...

 

That's not what I meant. udev likes to change the syntax for things every other update, usually breaking custom rules.

----------

