# Linux does not unmount /dev/md0 on reboot automatically. Why?

## e3k

I have a RAID 1 disk system set up via mdadm. After adding the second disk and a full sync, I tried to restart and boot from the second HDD. During the shutdown I got errors that /dev/md0 is still in use. What can I do about it?

---

I have / mounted on /dev/md0.

---

mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?

mdraid failed to stop

---

The setup is two 2 TB HDDs in RAID 1 (mirror), partitioned with MBR into 3 primary partitions: 1. boot, 2. swap, 3. /.

Only the partitions 3 (the / partitions) are in the RAID.
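For context, a quick health check of such an array can be done against /proc/mdstat. The sketch below parses a sample of typical output instead of the live file, so it is self-contained; the device names (sda3/sdb3) and block count are made-up examples, not taken from the original post:

```shell
# Sketch: check whether md0 is a healthy two-member RAID 1 by looking for
# the [UU] member-state flags. On the live system you would read
# /proc/mdstat directly; here a sample line stands in for it.
mdstat_sample='md0 : active raid1 sdb3[1] sda3[0]
      1953382400 blocks super 1.2 [2/2] [UU]'

# [UU] = both mirrors up to date; [U_] or [_U] = degraded array.
if printf '%s\n' "$mdstat_sample" | grep -q '\[UU\]'; then
    echo "md0: both mirrors in sync"
else
    echo "md0: array degraded"
fi
```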

----------

## Roman_Gruber

I get a quite similar error with my LUKS/LVM/ext4 rootfs with eudev.

I just ignore it. I have seen similar messages for as long as I have used Linux. It is kind of an old issue that the box complains that stuff is still mounted during the shutdown or reboot process. Usually it waits for those to complete and then goes on.

Why does it do that, was your question?

Well, there is still data that needs to be written to the disk, and I assume it is just a warning that this is not yet completed.

----------

## NeddySeagoon

e3k,

After that message, you should see something about root being remounted read only.

That means the filesystem will be clean, which is what is important.

Provided that fsck does not run on every boot, root is being remounted read-only prior to shutdown.

It's possible that the screen is blanked before you get to see the message.
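One way to verify this after the fact (assuming an ext2/3/4 filesystem on md0) is to look at the superblock state, which reads "clean" after a good shutdown. The sketch below demonstrates it on a small throwaway image file rather than the real array; the image path is arbitrary:

```shell
# Sketch: inspect an ext filesystem's recorded state with tune2fs.
# On the real box the equivalent check would be:
#   tune2fs -l /dev/md0 | grep 'Filesystem state'
PATH="$PATH:/sbin:/usr/sbin"           # e2fsprogs tools often live in sbin
dd if=/dev/zero of=/tmp/fsstate.img bs=1M count=4 2>/dev/null
mke2fs -F -q /tmp/fsstate.img          # a freshly made filesystem is clean
tune2fs -l /tmp/fsstate.img | grep 'Filesystem state'
```

If this ever reports "not clean" after a reboot, the read-only remount did not happen before power-off.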

----------

## e3k

 *NeddySeagoon wrote:*   

> e3k,
> 
> After that message, you should see something about root being remounted read only.
> 
> That means the filesystem will be clean, which is what is important.
> ...

 

The remount ro comes later. Nevertheless, you have a running process which I guess gets killed with -9. As this 'mdraid' process is taking care of the 2 HDDs, I do not want it killed with -9.

It is more a question of how to cleanly shut mdraid down.

----------

## frostschutz

With / on anything (md, luks, lvm, ...) it just means it's busy until the very end... it can only be re-mounted read-only; the underlying storage layers cannot be stopped or cleaned up.

That's usually not a problem, because with the read-only remount all writes are through and done, and "properly shutting off" does not make any more changes on disk, so there is no difference.

Or that's the theory, anyway; of course there are always some possibilities, for example your RAID could be in mid-reshape, so there are writes under the hood; OTOH the RAID is supposed to be able to handle that.

If for whatever reason you must have a 100% clean shutdown to cover even the most obscure corner cases, what you need is a "shutdownfs" that works like an initramfs, i.e. during shutdown you pivot root to a rootfs structure in memory, have it take over your init system, have it umount the now no longer needed / partition (pivoted to /mnt/root) and perform all the final cleanup steps.

But that's just not usually done because it's not really necessary in practice.
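A rough sketch of that "shutdownfs" idea, written as a shell function. Everything here is an assumption for illustration: the paths, the size of the tmpfs, and the availability of statically linked busybox and mdadm binaries. A real implementation would run as the last surviving process from the init system, never from a login shell, so the function is only defined, not called:

```shell
# Sketch: pivot into a tmpfs at the very end of shutdown so the real root
# can be unmounted and md0 stopped cleanly.
shutdownfs_pivot() {
    mount -t tmpfs -o size=16M tmpfs /run/shutdownfs
    mkdir -p /run/shutdownfs/oldroot /run/shutdownfs/bin
    cp /bin/busybox /sbin/mdadm /run/shutdownfs/bin/  # assumes static builds
    cd /run/shutdownfs || return 1
    pivot_root . oldroot            # / is now the tmpfs; old root is /oldroot
    ./bin/busybox umount /oldroot   # nothing holds the old root busy anymore
    ./bin/mdadm --stop /dev/md0     # clean stop, no "cannot get exclusive access"
    ./bin/busybox poweroff -f
}
```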

----------

## e3k

 *frostschutz wrote:*   

> With / on anything (md, luks, lvm, ...) it just means it's busy until the very end... it can only be re-mounted read-only; the underlying storage layers cannot be stopped or cleaned up.
> 
> That's usually not a problem, because with the read-only remount all writes are through and done, and "properly shutting off" does not make any more changes on disk, so there is no difference.
> 
> Or that's the theory, anyway; of course there are always some possibilities, for example your RAID could be in mid-reshape, so there are writes under the hood; OTOH the RAID is supposed to be able to handle that.
> ...

 

I guess the only thing that needs to be done is to shut down the mdraid process before 'remounting ro', but I am not an expert on OpenRC.

Further, here is some reading found via Google: http://unix.stackexchange.com/questions/202620/proper-way-to-stop-software-raid-before-system-shutdown

The answer sounds quite clear. What should I do to try it?

----------

## frostschutz

 *e3k wrote:*   

> I guess the only thing that needs to be done is to shut down the mdraid process before 'remounting ro'

 

If you forcibly stopped an md device before the filesystem on it was remounted read-only, the filesystem would be inconsistent.

That's worse than what you have now. (Or do you actually have any filesystem consistency issues?)

 *Quote:*   

> the answer sounds quite clear what should i do to try it?

 

That's about a RAID in an external enclosure, so it's a bit different from your situation, I guess. The answer is not wrong, but "Unmount all filesystems on the array" simply can't be done if the filesystem in question is the root filesystem mounted at /.
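For completeness, the order from that stackexchange answer does work for an array that is not holding /. A sketch, defined as a function and not executed here; the device /dev/md1 and the mountpoint /mnt/raid are made-up examples:

```shell
# Sketch: cleanly stopping an md array that does NOT contain the root fs.
stop_external_array() {
    umount /mnt/raid          # 1. free the filesystem first
    mdadm --stop /dev/md1     # 2. array is no longer busy, stops cleanly
}
# With / itself on md0, step 1 is impossible while the system is running;
# the closest you can get is: mount -o remount,ro /
```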

----------

## e3k

 *frostschutz wrote:*   

>  *e3k wrote:*   I guess the only thing that needs to be done is to shut down the mdraid process before 'remounting ro' 
> 
> If you forcibly stopped an md device before the filesystem on it was remounted read-only, the filesystem would be inconsistent.
> 
> That's worse than what you have now. (Or do you actually have any filesystem consistency issues?)
> ...

 

There is no inconsistency reported. Do I understand correctly that this is only a message where Linux tries to shut down mdraid, which refuses to go down, and then / is remounted ro with mdraid still running (like NeddySeagoon said)?

If this is how it runs on your boxes without problems, maybe I should also not care about it.

----------

