# mdadm and sysfs

## irbanur

Hello Community!

I'm having a bit of a problem with the sysfs naming of devices. I'm building a new fileserver using a Zotac G43-ITX board, and there is some strange behaviour when I'm hotswapping drives: the board seems to do some kind of round-robin when it comes to sysfs naming. If I remove a disk from a slot and then re-insert it in the same slot, most of the time it turns up as another device under /dev. This is not a big problem using udev rules; I've written one that maps the devpath to the "correct" /dev name. Not a biggie. But when I use mdadm to create an array, it seems to use the sysfs (kernel) device name:

```
atoz ~ # ls /dev/sd??
/dev/sda1  /dev/sdc1  /dev/sde1  /dev/sde2  /dev/sde3
```

Note: no sdb1

```
atoz ~ # mdadm -C /dev/md0 -l5 -n3 /dev/sda1 /dev/sdc1 /dev/sdd1
mdadm: array /dev/md0 started.

atoz ~ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdd1[2] sda1[1] sdb1[0]
      976767872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
```

Note: sdb1 magically appeared instead of sdc1!

It might seem like a non-problem, but if (when) a disk fails, I want to know which physical disk to remove from my server, so that I don't accidentally destroy the array while it's online and my applications lose access to their data.

So the question is: Is there any way to force a sysfs devpath to map to a specific kernel name in /sys?

Here is my udev rules file for mapping the disks:

```
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0/block/sd?", NAME="sda"
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0/block/sd?/sd??", NAME="sda%n"
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sd?", NAME="sdb"
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sd?/sd??", NAME="sdb%n"
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0/block/sd?", NAME="sdc"
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0/block/sd?/sd??", NAME="sdc%n"
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sd?", NAME="sdd"
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sd?/sd??", NAME="sdd%n"
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host4/target4:0:0/4:0:0:0/block/sd?", NAME="sde"
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host4/target4:0:0/4:0:0:0/block/sd?/sd??", NAME="sde%n"
```
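For reference, the DEVPATH to put in a rule can be read back from a live device with udevadm (the host numbers here are from my board; yours will differ):

```
atoz ~ # udevadm info -q path -n /dev/sda
/devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0/block/sda
```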

----------

## BradN

It's probably not the suggestion you're looking for, but physically labelling the drives with their slot numbers from mdadm's output can help avoid problems reassembling the array if you mis-organize the drives, as well as giving you the info you need in this case. The fact that you'll look at the number when pulling out the drive will make you want to double-check.  :Wink:

----------

## irbanur

True, not the answer I was looking for.  :Wink:   But I used a similar solution for the disk<->slot problem:

udev rule:

```
DEVPATH=="/devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0/block/sd?", SYMLINK="raiddisks/sda_%k"
```

In the case I need to replace a drive, I just look in /dev/raiddisks to see the mapping between the kernel name (used in mdstat) and device name (used in /dev).
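To avoid reading the rules file by eye, the mapping can also be derived directly from the devpath string, since the SCSI host number is fixed per physical slot on this board. A minimal sketch in plain shell (the function name is my own, not part of udev):

```shell
#!/bin/sh
# Extract the SCSI host number from a sysfs devpath,
# e.g. .../host3/target3:0:0/... -> 3
host_for_devpath() {
    path=$1
    rest=${path#*/host}    # drop everything up to and including "/host"
    echo "${rest%%/*}"     # keep only what is left before the next slash
}

host_for_devpath "/devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0/block/sdb"
# -> 3
```

Feed it the output of `udevadm info -q path -n /dev/sdb` and you get the slot number for whatever kernel name mdstat happens to show.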

Still, it's annoying. And there is a part deux in this case:

I'm trying to replicate the behavior of a NAS (Synology?) that one of my colleagues told me about:

Online resizing of the array itself (switching to larger drives).

Let's say I have a raid-5 array consisting of 4*500Gb drives and I've run out of space. It would be neat if I just could:

1. Pop a 2Tb drive in slot 1.

2. Let my perl daemon detect the new drive and add it to the array (already written).

3. Wait for resync (s*it load of hours).

4. Pop a 2Tb drive in slot 2.

... and so on until all four drives are replaced, and then issue "mdadm --grow /dev/md0 --size=max" and wait for resync of the remaining 1.5Tb on each drive.
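For completeness, the per-slot sequence I have in mind would look roughly like this (a sketch only, assuming the array is /dev/md0 and the old member is sda1):

```
atoz ~ # mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
(swap in the 2Tb drive, partition it, then:)
atoz ~ # mdadm /dev/md0 --add /dev/sda1
(repeat for each slot, waiting for resync each time, then finally:)
atoz ~ # mdadm --grow /dev/md0 --size=max
```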

The problem here is that if I remove sda without first issuing "mdadm /dev/md0 -f /dev/sda1 && mdadm /dev/md0 -r /dev/sda1", growing the array won't work if the new drive in slot 1 turns up as sdg, since I can't remove sda from the array without stopping it. It's just a nice-to-have feature, but it would be a really-really-nice-to-have, since it would minimize my effort when upgrading the array (which happens every 12 months or so at the rate I'm going).
