# Shutdown with root on RAID

## at

I am trying to understand how the shutdown sequence is supposed to happen if I have the root filesystem on LVM on RAID.

It seems that to shut them down, lvm and mdadm would need to be run, but before that, root would need to have been unmounted...

I cannot imagine /sbin/halt doing that - how would it know about all the idiosyncrasies of a particular system?

Or is the stack simply never shut down properly, in the hope that all the disks have been synced and remounted read-only?

----------

## m0p

I remember reading that deactivating LVM volumes doesn't actually change anything on the disk, and I assume the same is true of mdadm. As long as your partitions are synced and mounted read-only (or unmounted), it shouldn't matter. Don't take my word for it, though: I've never worked with either tool; I just remember reading that somewhere, about LVM at least.

----------

## Goverp

My desktop machine has its rootfs on software RAID5. It's not auto-assembled (I pass kernel parameters at boot that define the array), but it would work pretty much the same if it were. I don't use LVM. I am using OpenRC/baselayout-2. I added mdraid to the boot runlevel; I suspect it's irrelevant, as the kernel has already assembled the array, but I've not tried running without it.
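For reference, boot-time assembly with kernel parameters looks something like this (a sketch with illustrative device names; the `md=` syntax for arrays with persistent superblocks is described in the kernel's md documentation):

```
# Kernel command line (sketch): assemble /dev/md0 from three
# partitions carrying persistent superblocks, then use it as root.
md=0,/dev/sda2,/dev/sdb2,/dev/sdc2 root=/dev/md0
```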

At shutdown, when mdraid runs, it unsurprisingly complains that root is still in use. However, the shutdown then continues happily. My guess is that the array gets closed properly as a result of the sync and remount read-only that, AFAIR, come at the end of the shutdown process.

I've not noticed any problems, such as resynchronizing or recovery, when starting up the system in the year since I set it up. I boot the system about once a day. Conversely, I have noticed that if I pull the plug without running shutdown (I don't remember which updates left my system with a dead console, but it's happened a few times in the last year), the next restart works OK, but the system is very sluggish for 30 or 40 minutes while it resynchronizes the array.
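That resync is visible in /proc/mdstat. A minimal sketch of checking for it (the sample text and the `check_mdstat` helper are illustrative; on a live system you would point it at /proc/mdstat itself):

```
#!/bin/sh
# Sketch: detect an md resync/recovery in /proc/mdstat-style output.
check_mdstat() {
    if grep -qE '(resync|recovery)' "$1"; then
        echo "array rebuilding: expect sluggishness until it finishes"
    else
        echo "all arrays clean"
    fi
}

# Demonstration against an embedded sample; on a real system:
#   check_mdstat /proc/mdstat
sample=$(mktemp)
cat > "$sample" <<'EOF'
md231 : active raid5 sda2[0] sdb2[1] sdc2[2]
      [>....................]  recovery =  3.1% (12345/400000)
EOF
check_mdstat "$sample"   # prints: array rebuilding: expect sluggishness until it finishes
rm -f "$sample"
```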

I've vaguely considered tinkering with the rc dependencies for mdraid to try to make it run nearer the end of the shutdown process, but I suspect that is pointless. While it might be possible to explicitly close the array cleanly, if there were a problem, I would want to write an error message somewhere; for that to work, rootfs would have to be mounted, and then mdraid wouldn't run...
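For what it's worth, OpenRC does allow per-service dependency overrides from /etc/rc.conf, so the tinkering could look something like this (an untested assumption about how to influence the ordering; at shutdown, dependencies are processed in reverse):

```
# /etc/rc.conf (sketch): make mdraid depend on localmount, so that on
# shutdown mdraid is stopped only after local filesystems have been
# unmounted. Untested; shown only to illustrate the mechanism.
rc_mdraid_need="localmount"
```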

[EDIT] Got the service name wrong: it's mdraid, not mdadm

----------

## frostschutz

If I remember correctly, the only place where you might actually need to shut down LVM is an obscure clustered setup. On a single, local, standalone box, it's fine to remount read-only, sync, and cut power.
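Spelled out, that end-of-shutdown sequence amounts to something like this (an illustrative fragment of what the init system does, not something to run by hand in a live session; `poweroff -f` stands in for whatever the init system actually invokes):

```
# Final steps of a shutdown on a single local box (sketch):
mount -o remount,ro /   # no further writes can reach the filesystem
sync                    # flush anything still buffered to disk
poweroff -f             # the array and LVs are consistent at this point
```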

----------

## at

Perhaps; I agree that shutting down LVM is unimportant. However, in my opinion, stopping RAID is a completely different story.

During shutdown, I get the following errors:

```
 [ !! ]
 * Closing Luks A ...
Device base is busy.
 [ !! ]
 * Stoping RAIDs ...
mdadm: failed to stop array /dev/md231: Device or resource busy
Perhaps a running process, mounted filesystem or active volume group?
```
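A busy md device usually means something is still stacked on top of it (here, presumably the LVM volumes). One way to see what, via sysfs (a sketch; the base path is a parameter only so it can be demonstrated against a mock directory, and `md231` is taken from the error above):

```
#!/bin/sh
# List what holds a block device open, using /sys/block/<dev>/holders.
holders() {
    sys="$1"; dev="$2"; found=0
    for h in "$sys/$dev/holders"/*; do
        [ -e "$h" ] || continue
        echo "$dev is held open by ${h##*/}"
        found=1
    done
    [ "$found" -eq 1 ] || echo "$dev has no holders"
}

# Demonstration against a mock sysfs tree; on a real system:
#   holders /sys/block md231
mock=$(mktemp -d)
mkdir -p "$mock/md231/holders/dm-0"
holders "$mock" md231   # prints: md231 is held open by dm-0
rm -rf "$mock"
```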

then a little later:

```
 [ ok ]
 *   Unmounting /usr ...
 *   in use but fuser finds nothing
 [ !! ]
 * Shutting down the Logical Volume Manager
 *   Shutting Down logical volumes  ...
  LV lvm-a/root in use: not deactivating
  LV lvm-a/usr in use: not deactivating
 [ !! ]
 *   Shutting Down volume groups  ...
  Can't deactivate volume group "lvm-a" with 2 open logical volume(s)
 [ !! ]
```

And finally, when I restart, I always get a corrupted root file system.

I suppose I should mention that there is no entry in the log saying that root has been remounted read-only. At any rate, I don't think that would have succeeded, since the system even fails to unmount (or remount read-only) the /usr partition.

----------

## frostschutz

Well, there is no reason why it should fail for /usr, so that's the first problem you have to fix.
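When fuser finds nothing, it can still be worth scanning /proc directly for open file descriptors under the mount point. A sketch (`open_under` is a made-up helper, and the demonstration uses a temp directory rather than /usr):

```
#!/bin/sh
# Report processes that hold file descriptors open under a directory,
# by reading /proc/<pid>/fd directly (no fuser or lsof needed).
open_under() {
    dir="$1"
    for pid in /proc/[0-9]*; do
        for fd in "$pid"/fd/*; do
            tgt=$(readlink "$fd" 2>/dev/null) || continue
            case "$tgt" in
                "$dir"/*) echo "pid ${pid##*/} has $tgt open"; break ;;
            esac
        done
    done
}

# Demonstration: hold a file open under a temp dir, then scan for it;
# on a real system you would call: open_under /usr
d=$(mktemp -d)
: > "$d/f"
sleep 3 < "$d/f" &
open_under "$d"   # should report the background sleep holding $d/f
wait
rm -rf "$d"
```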

The LVM script should say that it can't deactivate root, and that it can't deactivate the volume group with an open logical volume, but that is a non-critical error. It's the same for me (encrypted root on LVM, no RAID).

I must admit that I've never used RAID on Gentoo, since I use Gentoo on the desktop only, whereas RAID is server/NAS territory for me. So I'm not familiar with Gentoo's mdadm scripts.

----------

## at

Yes, thank you for your reply.

I agree, the first problem I should look into is not being able to remount /usr read-only.

----------

