# Root over NFS: Sharing the host's OS?

## bastibasti

Hi,

I'm preparing a little NFS root setup; however, this time I don't want separate systems served from a fileserver. Instead, the NFS server's own OS should be used.

The clients will not emerge anything, only use the executables.

Is it achievable with genkernel's initrd? How do I force any temporary folder to be tmpfs?

----------

## NeddySeagoon

bastibasti,

You have always been able to share /usr, as it's easy to make /usr read only.

Sharing a live / is much harder, as the machine's identity is there. You would not want to share its /dev, /sys, /proc and a few other locations.

You may be able to share a copy of / as long as you take great care with the static elements, like security files.

The host ssh key will be common too ... or if it's missing on the shared root, will change every time sshd is started.

I've never used genkernel ... will you PXE boot, so your machines can be diskless, or have a /boot on the individual machines?

What do you mean by this?

*bastibasti wrote:*

> How do I force any temporary folder to be tmpfs?

/tmp in tmpfs is a matter of setting up /etc/fstab.
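For example, a single fstab line is enough to keep /tmp in RAM (the size cap here is an illustrative assumption; tune it to the client's memory):

```
# /etc/fstab fragment: /tmp as tmpfs (size=512M is an assumed cap)
tmpfs   /tmp   tmpfs   nodev,nosuid,size=512M   0 0
```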

/home, for users, is harder. You need to make the shared /home an overlay filesystem, so that users can make changes in RAM only.
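A sketch of that in fstab terms (all paths are assumptions, and the upper/work directories must be created before the overlay mounts, e.g. by an early init script):

```
# Hypothetical /etc/fstab fragment: writable tmpfs overlay on a
# read-only shared /home
tmpfs    /run/home-rw    tmpfs    mode=0755    0 0
overlay  /home  overlay  lowerdir=/mnt/nfs/home,upperdir=/run/home-rw/upper,workdir=/run/home-rw/work  0 0
```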

The LiveCD and LiveDVD do this sort of thing ... you can 'edit' their root filesystems but changes are only in RAM.

The LiveDVD even allows users to run emerge.

----------

## bastibasti

The NFS clients will use a USB stick to boot the kernel, as I cannot control DHCP in this environment.

The idea was to use aufs or something similar over the (ro) NFS root?

----------

## szatox

I have done that and it made a pretty good proof of concept.

In fact, I'm about to set up a binhost this way (controller + distcc workers) sharing the same system.

Several tricks I used were:

1) Runlevels. The master node booted into a different runlevel than the workers, so they could share the same configuration and only invoke the parts they each needed.
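On Gentoo/OpenRC this can be sketched roughly like so (the runlevel name and service list are my assumptions, not szatox's actual configuration):

```
# Create a dedicated runlevel for the workers (run as root)
mkdir /etc/runlevels/worker
rc-update add net.eth0 worker     # network
rc-update add sshd worker         # remote access
rc-update add distccd worker      # the actual work
# Workers then boot with the kernel parameter: softlevel=worker
# while the master keeps booting the default runlevel.
```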

2a) Root on aufs, backed by read-only NFS and a writable tmpfs. Overlayfs is probably a better pick now, since it comes with the mainline kernel (and yes, it does work). Bonus point: NFS being read-only was enforced by the NFS server. The master could still update it, which means new software automagically propagated to the workers; aufs's read-only mode did allow that. There was also a really-read-only mode for running an OS from images that will definitely not change while you're using them.
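In an initramfs, the overlayfs variant of that mount sequence might look roughly like this (server address and paths are assumptions):

```
# Read-only NFS lower layer plus tmpfs upper layer, merged via overlayfs
mount -t nfs -o ro,nolock 192.168.0.1:/export/root /mnt/lower
mount -t tmpfs tmpfs /mnt/rw
mkdir -p /mnt/rw/upper /mnt/rw/work
mount -t overlay overlay \
    -o lowerdir=/mnt/lower,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
    /mnt/root
# ...and finally: exec switch_root /mnt/root /sbin/init
```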

2b) Stuffing that system image into RAM instead, so your slaves only need the master node for bootstrapping. If the master goes down afterwards... well, shit happens, but the slaves don't care. You can save quite a bit of bandwidth this way, possibly allowing you to scale the whole thing out by a factor of dozens without being bothered by disk IO or network throughput. Obviously, this way you can't update software without rebooting the slaves. However, this limitation may be desirable too.
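A rough sketch of that variant (paths and sizes are assumptions): copy the image into tmpfs once, then mount it locally so the NFS mount can be dropped.

```
# Pull the system image into RAM, then stop depending on the master
mount -t tmpfs -o size=2G tmpfs /mnt/ram
cp /mnt/nfs/root.squashfs /mnt/ram/root.squashfs
umount /mnt/nfs                       # master no longer needed
mount -t squashfs -o loop,ro /mnt/ram/root.squashfs /mnt/lower
```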

3) PXE boot. Pretty obvious. Of course, you can use other means to boot your OS, as long as your machines ask the network for their identity rather than kick the door open and introduce themselves with a static configuration.

4) Automagic identity generation. DHCP ensured there were no IP clashes. I just replaced /etc/hostname with a script that would check the machine's IP and append the last octet to a predefined name prefix.
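A minimal sketch of such a script (the `node` prefix and the interface query are assumptions):

```shell
#!/bin/sh
# Build a hostname from a fixed prefix plus the last octet of an IPv4 address.
make_hostname() {
    prefix=$1
    addr=$2
    # ${addr##*.} strips everything up to and including the last dot
    echo "${prefix}${addr##*.}"
}

# In practice the address would come from the interface, e.g.:
#   addr=$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1)
make_hostname node 192.168.0.23    # prints node23
```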

5) Avahi would announce a successful boot to the local network, so the master node could pick it up and delegate some tasks.

Now, back when I did it, Fast Ethernet was clearly a bottleneck for this setup: 2 machines booting in parallel were able to choke it, so the rest failed to get past the PXE phase.

Gigabit Ethernet obviously pushes the limit a bit further, but it may be a good idea to actually export a (compressed) squashfs instead of a ready-to-use filesystem. It offers a decent compression ratio, so you have less data to read from the HDD and less data to transfer over the network. Finally, it's also easier to stuff into RAM, should you decide to bootstrap independent, diskless machines over the network.
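Creating such an image is essentially a one-liner (paths and compressor choice are assumptions):

```
# Pack the exported root into a compressed, read-only squashfs image
mksquashfs /export/root /export/root.squashfs -comp xz -noappend
# Clients then loop-mount it read-only as the overlay's lower layer:
#   mount -t squashfs -o loop,ro root.squashfs /mnt/lower
```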

 *Quote:*   

> Sharing a live / is much harder as the machines identity is there. You would not want to share its /dev /sys /proc and a few other locations.
> 
> You may be able to share a copy of / as long as you take great care with the static elements, like security files. 

An NFS mount doesn't cross mountpoints, so your /dev, /proc, /sys and similar locations are safe.
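Server-side read-only enforcement, as mentioned earlier, is a single line in /etc/exports (subnet and path are assumptions):

```
# /etc/exports fragment: the workers' subnet may only read the shared root
/export/root  192.168.0.0/24(ro,no_subtree_check)
```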

Regarding ssh keys etc., I just decided not to care. From my point of view, all those machines were parts of a single system, even though it was distributed. Being a single system contained in its own local network, I found no harm in sharing both host and user keys, so every machine was able to log in via ssh to any other machine using the same shared private key. You know, the security vs. usability trade-off. In my case security wasn't the biggest concern. Your mileage may vary.

----------

