# HP Proliant Microserver

## cpr

I am in the process of setting up an HP Proliant Microserver for a small office. I have a problem with device names:

- I have upgraded the BIOS and made a 40GB SSD (the sole occupant of a dual-drive caddy that replaced the DVD writer) the first and only boot device

- I have a working kernel that boots fine

- I also have currently 2x2TB SATA drives, they are recognized correctly

BUT when I remove one of them, the SSD's device name changes from the current /dev/sdc to something else -- with the usual catastrophe.

Is an initramfs my only option? The Microserver has no hot-pluggable controller, but its 4 bays make HD changes easy. Of course I could put stickers on the other bays reminding me of /etc/fstab and /boot/grub/grub.conf -- but that does not sound right.

I plan to use that machine for some mostly-idling VirtualBox guests such as a Bacula director & storage daemon, an Alfresco document management instance (Java software), and probably an OpenLDAP server as a replication master.

Right now I have 4GB of RAM; after an upgrade I might put a Firebird ERP database on it.

So the machine is fairly "infrastructure"-like; it must not bail in the heat of a business week just because of an HD change...

----------

## Telemin

Hi,

You can choose to mount by UUID or LABEL instead of by device node; use blkid to get the partition IDs. The fstab syntax is like this:

```
UUID=eeed5310-d745-463d-9e39-12969816c9f6    /        ext4    noatime    0 1
LABEL=bootpartition                          /boot    ext2    noatime    0 2
```
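
For example, blkid output looks something like this (illustrative only: the device names and the /boot UUID below are made up, your values will differ):

```
helm ~ # blkid
/dev/sda1: LABEL="bootpartition" UUID="0f3aeb96-2d93-4bb2-8b0c-6f1a2c9b7d41" TYPE="ext2"
/dev/sda3: UUID="eeed5310-d745-463d-9e39-12969816c9f6" TYPE="ext4"
```

Note that blkid prints quotes around the values, but in fstab you write them without quotes.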

You can also pass root= to the kernel with the same syntax:

```
kernel /vmlinuz-xxxxx root=UUID=eeed5310-d745-463d-9e39-12969816c9f6
```

or

```
kernel /vmlinuz-xxxxx root=LABEL=bootpartition
```

-Telemin-

----------

## NeddySeagoon

cpr,

If you want to mount root by UUID or LABEL, you need an initrd, as the kernel does not understand either. It needs the userspace mount tool.

As you only use 2 of the 5 drive bays, you could put the SSD in a drive bay so that it was always /dev/sda and still have one bay spare.

----------

## cpr

Ah, thanks. I was under the impression that I needed an initramfs for these mountpoints. (I was 'away from sysadmin stuff' for ~4 years, just getting back into it. UDEV was fairly new then... *g*)

Looking further into the UUID stuff I found that `ls /dev/disk/by-id/` shows even the non-partitioned "raw" drives. I will test these when it comes time to create ZFS pool(s).
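
For the record, the by-id names look something like this (the model/serial strings below are made-up placeholders), and ZFS takes them directly for whole-disk pools:

```
~ # ls -l /dev/disk/by-id/
ata-SAMSUNG_HD204UI_SERIAL1 -> ../../sdb
ata-SAMSUNG_HD204UI_SERIAL2 -> ../../sdc
~ # zpool create zpool1 mirror /dev/disk/by-id/ata-SAMSUNG_HD204UI_SERIAL1 /dev/disk/by-id/ata-SAMSUNG_HD204UI_SERIAL2
```

The by-id links stay stable across reboots and bay changes, unlike the sdX names.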

Is it just my impression, or did device naming become fairly "random" with Linux in recent years?

----------

## cpr

My reply above was written before I saw NeddySeagoon's response. I will consider my options...

I would very much like to keep the OS away from the bays, so that I can soon fill the remaining two bays. (ZFS suggests using full drives, not partitions, and I want to have both JBOD _and_ "secure" storage.)

Well, I think I consider myself lucky because the error has not occurred YET!  :Wink:  Probably a Post-It note on the bays will do! *G*

----------

## chithanh

With the default BIOS, I think the optical drive bay connector is driven by pata_atiixp, while the hdd bays are driven by sata_ahci. So if you make the latter a module and the former built-in, then the SSD should always be /dev/sda.

This however doesn't work if you installed the hacked BIOS to switch all SATA ports to ahci.
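
Assuming the default BIOS and the SSD on the ODD connector, a sketch of the relevant .config lines would be:

```
# ODD/eSATA connector driver: built in, so it probes first and the SSD becomes sda
CONFIG_PATA_ATIIXP=y
# Drive bay controller driver: built as a module, loaded later, so the bays become sdb and up
CONFIG_SATA_AHCI=m
```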

----------

## NeddySeagoon

cpr,

IDE drives under the old drivers had fixed names according to interface and master/slave positions.

Under the SCSI stack, device names are allocated in kernel device discovery order. That may not be the same as BIOS device discovery order.

If you need to boot with a varying number of block devices connected under the SCSI stack, and root is not on the first one, or you don't always know which one will be 'first', you need an initrd so you can use LABEL or UUID (UUID preferred) in grub.conf for the real_root= parameter.
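
With a genkernel-style initramfs that would look something like this in grub.conf (the kernel and initramfs file names are placeholders, and the UUID is the one from the fstab example above):

```
title Gentoo Linux
root (hd0,0)
kernel /boot/kernel-genkernel-x86_64 root=/dev/ram0 real_root=UUID=eeed5310-d745-463d-9e39-12969816c9f6
initrd /boot/initramfs-genkernel-x86_64
```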

----------

## cpr

@NeddySeagoon: I see, my hardware know-how is CLEARLY determined by the fact that I was a poor student until 2006 -- super-cheap hardware all the time. Now, a few years into job life, I have to a) catch up and b) can finally afford >1GB RAM!  :Smile:  Thanks for the SCSI hints, will do more research.

@chithanh: Ah, thanks for that, here is dmesg:

```
helm ~ # dmesg | grep pata
[    1.285754] pata_atiixp 0000:00:14.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
[    1.285865] pata_atiixp 0000:00:14.1: setting latency timer to 64
[    1.286832] scsi4 : pata_atiixp
[    1.287305] scsi5 : pata_atiixp
helm ~ # dmesg | grep sata
helm ~ # dmesg | grep ahci
[    1.279666] ahci 0000:00:11.0: version 3.0
[    1.279697] ahci 0000:00:11.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
[    1.279856] ahci 0000:00:11.0: irq 41 for MSI/MSI-X
[    1.279945] ahci 0000:00:11.0: AHCI 0001.0200 32 slots 4 ports 3 Gbps 0xf impl SATA mode
[    1.280066] ahci 0000:00:11.0: flags: 64bit ncq sntf ilck pm led clo pmp pio slum part
[    1.281593] scsi0 : ahci
[    1.282513] scsi1 : ahci
[    1.283306] scsi2 : ahci
[    1.284122] scsi3 : ahci
```

scsi5 must be the eSATA port.

And indeed my .config has

```
helm linux # grep -e 'SCSI=y' -e 'ATII' .config
CONFIG_SCSI=y
CONFIG_PATA_ATIIXP=y
```

I will test that tomorrow, because at first glance

```
-*- SCSI device support
```

is not easily made modular (no [ ] or < > around that entry).

----------

## chithanh

I meant that you can make sata_ahci a module.

----------

## cpr

You mean it's THAT easy?!?

(It's well past midnight and I read your reply on the smartphone -- had to get up and try it. It WORKS (of course)!)

Have been cold-swapping the bays with great pleasure!

```
helm linux # diff .config .config.default  # a copy made after emerge gentoo-sources
(…)
941c953
< CONFIG_SATA_AHCI=m
---
> CONFIG_SATA_AHCI=y
```

----------

## cpr

http://wiki.gentoo.org/wiki/HP_Proliant_Microserver

Good night && thanks for all the fish!

----------

## cpr

Not sure where to put my question, so I rather bump my old thread:

I just restarted the server after some months and updated world (including the kernel from 3.1.6 to 3.2.12).

During the boot process I see

*Quote:*

> error: zfs-fuse failed to start

I read that zfs-fuse has received its last rites. Originally I installed as per the January 2012 version of http://wiki.gentoo.org/wiki/ZFS, which used zfs-fuse.

Now I understand that the new & shiny sys-fs/zfs package is blocked by my currently installed sys-fs/zfs-fuse.

My question is: can I unmerge zfs-fuse and emerge sys-fs/zfs like any blocked package?

Or am I wiping my filesystem information by doing so?

----------

## chithanh

When switching from zfs-fuse to zfs, you may need to adjust fstab (or your initramfs if you mount ZFS during boot).

For further questions best open a new thread.

----------

## cpr

Thanks for the reply!

If it is "just" fstab I might just go ahead -- zfs is only providing storage, nothing system-related.

----------

## cpr

 *cpr wrote:*   

> I might just go ahead

 

Yes, unmerged zfs-fuse and emerged zfs.

Rebooted and ran 

```
zpool import zpool1
```

(zpool1 was the name of my pool.)

Also ran

```
zpool upgrade zpool1
```

and am now on pool version 28!  :Smile:

----------

