# Persistent block device naming

## Jimini

Hello everyone,

I recently switched from CentOS back to Gentoo, and there is nothing to regret - almost. ;)

The system contains 10 HDDs and 2 SSDs. The HDDs are connected to two RAID controllers, while the SSDs are connected directly to the mainboard. I would like to use the SSDs as sda and sdb - under CentOS, this was the default case.

However, Gentoo seems to recognize all 10 HDDs first and the SSDs afterwards - this leads to the problem that with every new HDD in my RAID, I have to rewrite my initrd script, which boots the system from the SSDs.

According to https://forums.gentoo.org/viewtopic-t-1053170.html, this can possibly be fixed by creating udev rules. But CentOS does not seem to have used any kind of special udev rules - or at least I could not find any. Thus, the big question is: how can I establish persistent naming of my drives?

I use Gentoo kernel 4.14.52, OpenRC 0.34.11 and eudev-3.2.5.

Best regards and thanks in advance,

Jimini

----------

## NeddySeagoon

Jimini,

There are lots of ways, all equally correct. They all involve not using kernel device names to achieve the desired effect.

Filesystems all have a property called Universal Unique IDentifier, or UUID.

Partitions all have a property called a Partition Universal Unique IDentifier, or PARTUUID.

You can use either.

The kernel understands PARTUUID, so you may write root=PARTUUID= on the kernel command line.

If you have an initrd containing the mount command root=UUID= works too.
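As a minimal sketch, such a kernel line might look like this in a GRUB2 config (the kernel path and PARTUUID value here are made-up placeholders - copy your real one out of blkid):

```
# /boot/grub/grub.cfg excerpt - root given by PARTUUID (values are examples)
linux /boot/vmlinuz-4.14.52 root=PARTUUID=8f4d3e2a-01 ro
```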

Either PARTUUID or UUID can be used in /etc/fstab.

```
# now ssd
UUID=cf559dbe-81bb-45b7-bbdd-0bcdc81e066b      /               ext4            noatime,discard,user_xattr          0 1
```

There are some dirty hacks to do with module load order but that sets up a race condition waiting to happen, so it may work for one kernel and change in another.

For other things, the /dev/disk/by-* symbolic links work too, but not in fstab, since fstab may be used before udev has created the symlinks.

```
/sbin/blkid
```

will tell you about your block devices.

----------

## Jimini

Thank you for your reply, NeddySeagoon (once again :) ).

I do not care that much about partitions, since most of my disks are used in a RAID6 on bare disks - /dev/sda, /dev/sdb and so on. I also only have mountpoints for my mapped devices like /dev/md0, /dev/md1 etc.

Thus, UUID or PARTUUID are unfortunately no solution for my problem.

Maybe sketching my setup helps to clarify a bit.

internal ports:

- SSD1

- SSD2

RAID controller 1:

- HDD1

...

- HDD5

RAID controller 2:

- HDD6

...

- HDD10

At the moment, the disks on the RAID controllers are recognized first, which is why the drives on the internal ports are named /dev/sdk and /dev/sdl.

This is not a problem until I add some - let's say two - disks to the RAID controllers: after the next reboot, the drives on the internal ports will be /dev/sdm and /dev/sdn, and my initrd script, which builds a RAID1 from /dev/sdk and /dev/sdl, throws an error. So, with every additional HDD, I have to change my setup.

Kind regards,

Jimini

----------

## P.Kosunen

Any help from /dev/disk/by-* links?

----------

## NeddySeagoon

Jimini,

Raid doesn't matter, except that with no partitions, PARTUUID is not useful.

Raid sets have a UUID that you feed to mdadm to assemble the raid.

```
$ sudo mdadm -E /dev/sda5
/dev/sda5:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 5e3cadd4:cfd2665d:96901ac7:6d8f5a5d
  Creation Time : Sat Apr 11 20:30:16 2009
     Raid Level : raid5
```

and in the initrd

```
# spinney /boot
/sbin/mdadm --assemble /dev/md125 --uuid=9392926d-6408-6e7a-8663-82834138a597
```

that finds the raid members wherever they are.

The UUID is stored with the raid metadata. (That's two different raid sets in those examples)

Having assembled your raid sets using the raid UUID, the filesystems can be mounted by UUID.  

P.Kosunen,

There is a drawback with /dev/disk/by-* links. They are created by udev and may not be available when localmount runs, so it's a really bad idea to have them in /etc/fstab.

----------

## Jimini

NeddySeagoon, thank you for the hint with "--uuid". I changed my initrd script accordingly and am now waiting for the next reboot :)

Kind regards,

Jimini

----------

## Jimini

Hello everyone,

I just stumbled upon my initial question again. First of all - defining the boot device by its UUID works fine and reliably. And my "problem" is not really a problem - I am just interested in how to solve it :)

But I still have the situation that the HDDs, which are attached to the RAID controllers, are initialized before the two SSDs, which are attached directly to the SATA ports.

This leads to the following naming scheme:

/dev/sda -> HDD

/dev/sdb -> HDD

...

/dev/sdk -> SSD

/dev/sdl -> SSD

When I add an additional disk to the HDD RAID, it becomes /dev/sdm of course - but only until the next reboot. Then, /dev/sdk becomes /dev/sdl and /dev/sdl becomes /dev/sdm.

Of course I could use UUIDs instead of the "short device names", but UUIDs are simply complicated to handle.

So I am still looking for a way to have my two SSDs named /dev/sda and /dev/sdb.

One possible solution would be to have the low-level drivers for the RAID controllers implemented as kernel modules, which would then be loaded after the SATA drivers. But this would require enabling module loading...

Another way could be to use the existing initrd - is it possible to include drivers there and load them in a defined order?
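As an aside, udev cannot change the kernel's sdX enumeration itself, but it can add stable symlinks per drive. A sketch of such a rule file, assuming hypothetical serial numbers (check the real ones with `udevadm info` on each device):

```
# /etc/udev/rules.d/60-persistent-ssd.rules (serial numbers are placeholders)
KERNEL=="sd?", ENV{ID_SERIAL_SHORT}=="SSD1SERIAL", SYMLINK+="ssd0"
KERNEL=="sd?", ENV{ID_SERIAL_SHORT}=="SSD2SERIAL", SYMLINK+="ssd1"
```

The same caveat applies as for the other /dev/disk/by-* links: the symlinks only exist once udev has processed the devices.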

Kind regards and thanks in advance,

Jimini

----------

## NeddySeagoon

Jimini,

The problem with kernel modules in the initrd is that the initrd changes with every kernel build.

To my mind that's a mess. My initrd, built in April 2009 (now 11 years old), still works well.

You only have very limited control over module load order anywhere; the only safe split is built-in and modules.

Trying to use the kernel's dynamically assigned names by tricking the kernel into assigning them in the order you want is always going to be fragile.

What's the problem with UUID?

You copy it out of blkid when you need it; you don't use it from memory :)

If you need something memorable, use the filesystem label.

It's your job to keep filesystem labels unique in your namespace. I don't like that, as it's a real mess when you accidentally reuse a label.

Mixing UUIDs and filesystem labels is safe. Come to think of it, that's what I actually do.

I use UUIDs or PARTUUIDs for things that rarely change and labels for USB sticks.
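As a sketch of the label route, assuming an ext4 filesystem on /dev/md0 (the device, label, and mountpoint are examples):

```
# assign a human-readable label to an existing ext4 filesystem
e2label /dev/md0 storage

# then refer to it in /etc/fstab by label instead of device name
LABEL=storage   /mnt/storage   ext4   noatime   0 2
```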

----------

## szatox

 *Quote:*   

>  One possible solution would be to have the low-level drivers for the RAID controllers implemented as kernel modules, which would then be loaded after the SATA drivers. But this would require enabling module loading... 

 Other options are:

- identify filesystems on those disks by labels (arbitrary strings which you can make human-friendly, unlike UUIDs)

- use lvm and identify logical devices by group and volume name (device mapper paths)

Just make sure not to use those two together for a single logical disk. As soon as you create a snapshot, the UUID is not unique anymore.
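A sketch of the LVM route, with hypothetical volume group and logical volume names on top of the raid device:

```
# put LVM on top of the assembled raid device (names below are examples)
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n data vg0

# the resulting device-mapper path is stable regardless of sdX enumeration:
# /dev/vg0/data (equivalently /dev/mapper/vg0-data)
```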

----------

## NeddySeagoon

szatox,

 *szatox wrote:*   

> As soon as you create a snapshot, the UUID is not unique anymore.

 

I keep forgetting that :(

----------

