# Unmount root RAID device

## j5

Hi everyone,

I had a little problem on my new firewall server (software RAID-1 on /):

When I run the shutdown process, it executes /lib/rcscripts/addons/raid-stop.sh and then fails with "Device or resource busy".

I tried putting a ps aux; lsof before the mdadm command, but I found only kernel processes, udevd and the /sbin/rc command...

I also killed udevd before the shutdown process, but the error persists. Then I did some research: Slackware, for example, doesn't execute any raidstop /dev/md0 or mdadm -S /dev/md0 during shutdown.

How did I temporarily solve the problem? rm /lib/rcscripts/addons/*  :Smile: . Very strange though, maybe a design problem? IMHO, that script shouldn't be in /lib... script libraries perhaps, but not a shutdown process itself.

Sorry for my bad English, with best regards...

---

Giovanni F. Moser Frainer

----------

## R!tman

I get the same problem:

```
 * Unmounting filesystems ...                                                [ ok ]
 * Shutting down RAID devices (mdadm) ...
mdadm fail to stop array./dev/md1: Device or resource busy                   [ !! ]
 * Remounting remaining filesystems readonly ...                             [ ok ]
```

No wonder it is busy when it tries to shut down a RAID device while the filesystem mounted on it (the root filesystem) is still mounted.

Can raid devices be shut down at all, if the filesystems are only remounted readonly and not completely unmounted?
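
They can't, in this case: the kernel treats an md array as busy as long as any filesystem on it still appears in the mount table, read-only or not, because even a read-only mount holds an open reference on the block device. A minimal sketch of that check (the mount output below is fabricated for illustration):

```shell
# Fabricated `mount` output as it might look on a root-on-RAID box;
# a real script would read the output of `mount` itself.
mounts='/dev/md1 on / type ext3 (ro)
/dev/md0 on /boot type ext3 (rw)'

# Any md device still present in the mount table is "busy" to the
# kernel, whether it is mounted read-write or read-only:
printf '%s\n' "$mounts" | awk '$1 ~ /^\/dev\/md/ { print $1, "is busy" }'
```

This is also why only the array backing / fails to stop: every other filesystem has already been unmounted by that point in the shutdown sequence.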

Simply removing the files you mentioned seems to be a nasty hack. Isn't there a proper solution to this?

EDIT: I believe this did not occur before the latest version of pam (0.78-r2).

----------

## richard.scott

I had the same problem and also noticed I had both mdadm and raidtools installed.

I uninstalled mdadm and found the problem went away.

I guess it's a one-or-the-other kind of thing?

----------

## R!tman

 *richard.scott wrote:*   

> I had the same problem and also noticed I had both mdadm and raidtools installed. 
> 
> I uninstalled mdadm and found the problem went away.
> 
> I guess it's a one-or-the-other kind of thing?

 

But I would like to keep mdadm....

There must be a better way!

----------

## richard.scott

Have a look at this bug report: https://bugs.gentoo.org/show_bug.cgi?id=83821

They suggest that mdadm and raidtools should not be installed together.

----------

## R!tman

 *richard.scott wrote:*   

> Have a look at this bug report: https://bugs.gentoo.org/show_bug.cgi?id=83821
> 
> They suggest that mdadm and raidtools should not be installed together.

 

Unfortunately, I have only mdadm installed. But what I find very strange is this:

```
qpkg -f /lib/rcscripts/addons/raid-stop.sh
```

This file does not belong to any package, neither mdadm nor raidtools.

I found that out by installing only one of the programs at a time and running the command above. Pretty strange...
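
For what it's worth, qpkg -f (like equery belongs from gentoolkit) answers "who owns this file?" by scanning each installed package's CONTENTS list under /var/db/pkg, so an orphaned file matches nothing. A rough sketch of that lookup against a throwaway database (package path and hash below are made up):

```shell
# Build a throwaway stand-in for /var/db/pkg with one package entry.
db=$(mktemp -d)
mkdir -p "$db/sys-fs/mdadm-1.11.0"
printf 'obj /lib64/rcscripts/addons/raid-stop.sh deadbeef 0\n' \
    > "$db/sys-fs/mdadm-1.11.0/CONTENTS"

# The lookup: which CONTENTS file mentions the path? A leftover copy in
# /lib that no CONTENTS mentions would match nothing, which is exactly
# why the orphaned files show up as unowned.
grep -l 'addons/raid-stop.sh' "$db"/*/*/CONTENTS | sed "s|$db/||;s|/CONTENTS||"
rm -rf "$db"
```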

EDIT: As the files in /lib/rcscripts/addons/ did not belong to any package, I simply deleted the whole directory. The error message is gone now, but I am not convinced this is the correct way to do it.

----------

## richard.scott

You could try unmerging both packages (if any are still installed) and re-emerging mdadm to see if that updates the Portage database of who owns what.

My problem seems to be that the root partition has not been unmounted when it tries to stop the RAID array. I can't see any major problem with not having the raid-stop.sh script, as each boot seems to be fine, with the filesystem being checked at boot up.

----------

## R!tman

I believe I know the cause of the problem now  :Smile: :

On my computer the files raid-start.sh and raid-stop.sh are, for obvious reasons, not in /lib/rcscripts. They are in /lib64/rcscripts.

I believe that was the cause of the error message. Now, I also get this:

```
# qpkg -f /lib64/rcscripts/addons/raid-start.sh
sys-fs/mdadm *
```

These files probably got into /lib instead of /lib64 somehow, I believe during the initial Gentoo installation. With earlier pam versions it maybe did not matter that they were there, as there were no error messages.

I consider this solved  :Smile: . Thanks, richard.scott!

----------

## R!tman

Well, maybe not solved  :Sad: .

Although when I emerge mdadm I get this:

```
>>> Merging sys-fs/mdadm-1.11.0 to /
--- /etc/
--- /etc/init.d/
>>> /etc/init.d/mdadm
>>> /etc/mdadm.conf
--- /usr/
--- /usr/share/
--- /usr/share/doc/
>>> /usr/share/doc/mdadm-1.11.0/
>>> /usr/share/doc/mdadm-1.11.0/ANNOUNCE-1.11.0.gz
>>> /usr/share/doc/mdadm-1.11.0/TODO.gz
>>> /usr/share/doc/mdadm-1.11.0/INSTALL.gz
--- /usr/share/man/
--- /usr/share/man/man4/
>>> /usr/share/man/man4/md.4.gz
--- /usr/share/man/man5/
>>> /usr/share/man/man5/mdadm.conf.5.gz
--- /usr/share/man/man8/
>>> /usr/share/man/man8/mdadm.8.gz
--- /sbin/
>>> /sbin/mdadm
--- /lib64/
--- /lib64/rcscripts/
>>> /lib64/rcscripts/addons/
>>> /lib64/rcscripts/addons/raid-stop.sh
>>> /lib64/rcscripts/addons/raid-start.sh
>>> Regenerating /etc/ld.so.cache...
>>> sys-fs/mdadm-1.11.0 merged.
>>> Recording sys-fs/mdadm in "world" favorites file...
```

there are still these two files in /lib/rcscripts/addons, and neither belongs to mdadm.

Simply deleting the directory gets rid of the error message...

Is that an error in the ebuild?

----------

## richard.scott

ah, this explains things a little more:

https://bugs.gentoo.org/show_bug.cgi?id=99611

In answer to why we now see the errors, the bug says:

 *Quote:*   

> it's always done that
> 
> the difference is that the older version used to pipe the errors to /dev/null
> 
> and ignore the result
> ...
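
What "pipe the errors to /dev/null and ignore the result" amounts to can be sketched with a stand-in command (mdadm_stub below is made up, since no real array is needed to show the redirection):

```shell
# Stand-in for mdadm -S on a busy array: complains on stderr, exits nonzero.
mdadm_stub() {
    echo 'mdadm: fail to stop array /dev/md1: Device or resource busy' >&2
    return 1
}

# The old addon's effective behaviour: stderr discarded, exit status ignored,
# so the shutdown sequence never noticed the failure.
mdadm_stub 2>/dev/null || true
echo "shutdown continues quietly"
```

So the array was always left un-stopped on root-on-RAID boxes; the newer script merely reports it.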

 

----------

## richard.scott

Are you using a 64-bit chip?

I don't get the /lib64 directory listed in my setup. I get this:

```
>>> Completed installing mdadm-1.11.0 into /var/tmp/portage/mdadm-1.11.0/image/
>>> Merging sys-fs/mdadm-1.11.0 to /
--- /etc/
--- /etc/init.d/
>>> /etc/init.d/mdadm
>>> /etc/mdadm.conf
--- /lib/
--- /lib/rcscripts/
--- /lib/rcscripts/addons/
>>> /lib/rcscripts/addons/raid-stop.sh
>>> /lib/rcscripts/addons/raid-start.sh
--- /usr/
--- /usr/share/
--- /usr/share/doc/
>>> /usr/share/doc/mdadm-1.11.0/
>>> /usr/share/doc/mdadm-1.11.0/ANNOUNCE-1.11.0.gz
>>> /usr/share/doc/mdadm-1.11.0/TODO.gz
>>> /usr/share/doc/mdadm-1.11.0/INSTALL.gz
--- /usr/share/man/
--- /usr/share/man/man4/
>>> /usr/share/man/man4/md.4.gz
--- /usr/share/man/man5/
>>> /usr/share/man/man5/mdadm.conf.5.gz
--- /usr/share/man/man8/
>>> /usr/share/man/man8/mdadm.8.gz
--- /sbin/
>>> /sbin/mdadm
>>> Regenerating /etc/ld.so.cache...
>>> sys-fs/mdadm-1.11.0 merged.
>>> Recording sys-fs/mdadm in "world" favorites file...
```

----------

## R!tman

I use the "multilib" USE flag. Maybe that is why I have some libs in lib64.

As for the bug report: do I understand correctly that the error was always there, only its output went to /dev/null?

EDIT: The "WONTFIX" on the bug report is a little disturbing  :Sad: .

----------

## Rusty1973

I had the same problem: my md1 / wouldn't unmount on shutdown.

I just unmerged mdadm and left raidtools on the system, and it works.

Now the only problem I have left is "Disabling barriers, not supported by the underlying device" when XFS mounts the filesystem on md2.
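
That barrier message is unrelated to the shutdown problem: md RAID of that era did not pass write barriers through to the member disks, and XFS simply falls back to running without them, so the message is informational. If it bothers you, it can be silenced by mounting with the nobarrier option (later deprecated); the fstab line below is purely illustrative and assumes md2 is mounted at /data:

```
/dev/md2   /data   xfs   noatime,nobarrier   0 0
```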

----------

## mgbowman

I just recently started exploring the wonderful world of software RAID and have been experiencing the same issues (along with a similar error at bootup). Am I to understand correctly that the community doesn't want to do anything about this? I agree with R!tman that the WONTFIX status is disturbing. I know this is a minor issue, but it's just not right.

--mgb

----------

## SweepingOar

Any action on this issue? I've got the same error on shutdown.

```
Stopping syslog-ng ... [ok]
Deactivating swap ... [ok]
Unmounting filesystems ... [ok]
Shutting down RAID devices (mdadm) ... [ok]
mdadm: stopped /dev/md0
mdadm: fail to stop array /dev/md1: Device or resource busy [!!]
Remounting remaining filesystems readonly ... [ok]
```

----------

## MaDDeePee

Since raidtools is gone, I need a solution.

I don't like error messages...

----------

## andretti

Actually, why do we need to (or should we) stop RAID devices before powering the machine off?

----------

## honeymak

I have LVM2 on RAID-1, and / resides on LVM2:

/dev/md0 - /dev/sda1 + /dev/sdb1 - /boot - ext3 - RAID-1

/dev/md1 - /dev/sda2 + /dev/sdb2 - swap - RAID-1

/dev/md2 - /dev/sda3 + /dev/sdb3 - / - JFS on LVM2 on RAID-1

I didn't try the hard way (piping the error to /dev/null).

I see error messages like you guys when shutting down. After a reboot I can't vgchange/vgdisplay, so something is not clean.

So I just telinit 1 and run jfs_fsck on /. It has something to replay, which means something is not really clean, though it doesn't seem to be at the filesystem level (on every reboot, checkroot reports / is clean). After that fsck, telinit 3, and vgdisplay works normally again.

So that "busy" stop cannot work on /. Is this really a bug?

Is raidtools, or piping to /dev/null, safe? I have my doubts.

Maybe, for the meantime, don't put anything on /? Just rsync? ohoh

 :Rolling Eyes: 

----------

## stuorguk

Any updates on this?  I am using RAID-1 with Reiser4.  Every time I shut down, the root partition is not unmounted, which causes problems on boot up.  :Sad: 

----------

## ferg

I've always seen this error regarding my root partition, but I don't see a problem with the error message.  Remember, the filesystems on the RAID arrays are still being unmounted (or, in the case of root, remounted read-only); it's just that the RAID array holding the root filesystem isn't stopped.  As long as the filesystem is unmounted or read-only, all is well. 

I guess if the error message really annoys you, then do as richard.scott pointed out about earlier incarnations, and edit the file to pipe the error to /dev/null.

Cheers

Ferg

----------

## stuorguk

It's more than just an error message.  Every other boot fails, as the drive was not unmounted cleanly.  The kernel loads and then halts until you press Ctrl-D to continue.  The second time around it's OK, until the next reboot.

----------

## ferg

 *stuorguk wrote:*   

> It's more than just an error message.  Every other boot fails, as the drive was not unmounted cleanly.  The kernel loads and then halts until you press Ctrl-D to continue.  The second time around it's OK, until the next reboot.

 

Hi Stuorguk,

I was talking about the original poster's question.  Do you have exactly the same issue?

I'm not that familiar with ReiserFS, but with ext2/3 filesystems you only get the Ctrl-D single-user prompt when there are fs errors caused by the unclean shutdown.  Otherwise it just checks the filesystem before mounting it rw.  Do you always get this if a ReiserFS filesystem is not unmounted cleanly?

I would reiterate that failing to stop a RAID array is different from, and much less serious than, failing to unmount the filesystem on it.

Cheers

Ferg

----------

## snailhead

I propose this for raid-stop.sh:

```
[ -f /proc/mdstat ] || exit 0

# Stop software RAID with mdadm, skipping any array mounted on /
mdadm_conf="/etc/mdadm/mdadm.conf"
[ -e /etc/mdadm.conf ] && mdadm_conf="/etc/mdadm.conf"

if [ -x /sbin/mdadm -a -f "${mdadm_conf}" ] ; then
   ebegin "Shutting down RAID devices (mdadm)"
   ret=0
   for dev in /dev/md[0-9]* ; do
      [ -b "${dev}" ] || continue
      if mount | grep -q "^${dev} on / " ; then
         echo "${dev} is mounted on root and should *not* be stopped"
         continue
      fi
      # Only try to stop arrays /proc/mdstat lists as active
      if grep -q "^${dev##*/} : active" /proc/mdstat ; then
         output=$(mdadm -S "${dev}" 2>&1) || { ret=$? ; echo "${output}" ; }
      fi
   done
   eend ${ret}
fi
```
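
The heart of that proposal is the "is this array mounted on /" test, which can be exercised on its own with fabricated mount lines (no real arrays needed):

```shell
# Does this mount line say the given device is mounted on root?
# $1 = a line of `mount` output (fabricated here), $2 = device path.
is_on_root() {
    printf '%s\n' "$1" | grep -q "^$2 on / "
}

is_on_root '/dev/md1 on / type ext3 (rw)' /dev/md1 \
    && echo "skip /dev/md1"
is_on_root '/dev/md0 on /boot type ext3 (rw)' /dev/md0 \
    || echo "safe to stop /dev/md0"
```

The trailing space in the pattern matters: it keeps "on / " from matching "on /boot".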

----------

