# Software RAID - strange bootup message?

## devzero_DE

Hey,

I don't know if this is the right place for my post, but I hope so.

I created two RAID1 arrays and one RAID5 array with mdadm.

```
dev0 ~ # cat /etc/mdadm.conf 
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=d01af3da:4fd73b7c:540e0b0b:116c5ecf
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=d943efe5:95e14857:70a20340:82cebf07
ARRAY /dev/md5 level=raid5 num-devices=3 UUID=3566b48e:4634ba25:1febb160:dc90a47c
```

The arrays are up and working after reboot, so there's no problem.

No problem? Well, there is something strange on bootup...

```
* Starting up RAID devices (mdadm) ...     [!!]
mdadm: No arrays found in config file
```

What config file? At this point, the arrays are already up.

----------

## poncio

A look at the /etc/init.d/mdadm script suggests that it is looking for a config file (maybe in /etc/conf.d/) but I could not find it in my install.

Having said that, I also run software raid without the mdadm script with no problems.

----------

## devzero_DE

The mdadm init script isn't started and isn't linked into the default or boot runlevels.

I think the mdadm package is only used to manage the arrays or for status mails.

I don't know which script prints this error.

----------

## poncio

No script that I could find generates that error.

The mdadm command is the only thing that can output that error message.

Could you post at what point in the boot process you get the error?

----------

## piwacet

Hi. I have this problem too. I have two SATA disks, with RAID1 on /dev/md1 (sda1 and sdb1), swap on the next two partitions, and then / on /dev/md3 (sda3 and sdb3). Since upgrading to the mdadm-2.5* series, I get this error at boot, at this point:

```
Finalizing UDEV configuration
Mounting devpts @ /dev/pts...
Starting up RAID devices...
mdadm: no arrays found in config file
checking root filesystem
```

My /etc/mdadm.conf has:

```
ARRAY /dev/md3 level=raid0 num-devices=2 UUID=485d0c26:ff42708b:d5d7cc49:33228f77
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=d22154e5:10c12275:3f5723c9:5c82df05
```

As far as I can tell things still work correctly.

Any thoughts?

----------

## troymc

This is one of those niggling little messages that look bad but really mean nothing.    :Confused: 

Basically, mdadm is scanning your config file and cannot find any arrays that have not already been started.

Why? Because the kernel code has already started all the arrays at boot.

You can reproduce this error by running `mdadm --assemble --scan` at the command line.

```
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] 
md1 : active raid0 sdb4[1] sda4[0]
      97707136 blocks 64k chunks

md0 : active raid1 sdb1[1] sda1[0]
      136448 blocks [2/2] [UU]
      bitmap: 0/17 pages [0KB], 4KB chunk

unused devices: <none>
# cat /etc/mdadm.conf 
DEVICE partitions
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md1 devices=/dev/sda4,/dev/sdb4
MAILADDR root@localhost
# mdadm --assemble --scan
mdadm: No arrays found in config file
```

Don't Panic!  :Shocked: 

My arrays are fine, they're just already running.

troymc

----------

## piwacet

Thanks!

----------

## R!tman

Isn't there a way to at least get rid of the error message? 

Really strange, I have no idea what starts this in the first place.

----------

## intgr

 *R!tman wrote:*   

> Isn't there a way to at least get rid of the error message?

 

Sounds like this would do it:

```
rc-update del mdadm default
```

 *Quote:*   

> Really strange, I have no idea what starts this in the first place.

 

Yeah, the kernel does RAID autodetection these days.
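If you want to confirm that it's the kernel, not the init script, assembling your arrays, a quick check looks something like this (the exact log wording can vary between kernel versions):

```
# The kernel's autodetection leaves messages like "md: Autodetecting RAID arrays."
dmesg | grep -i "md: auto"
# And the arrays should already be listed as active here
cat /proc/mdstat
```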

----------

## R!tman

 *intgr wrote:*   

>  *R!tman wrote:*   Isn't there a way to at least get rid of the error message? 
> 
> Sounds like this would do it:
> 
> ```
> ...

 

No, that's not it. mdadm is not started during boot! Strange, isn't it?

----------

## DNAspark99

I have the same message displayed at boot - still no fix?

also, although all arrays seem fine:

```
Personalities : [raid1] 
md1 : active raid1 sdb1[1] sda1[0]
      56128 blocks [2/2] [UU]

md2 : active raid1 sdb2[1] sda2[0]
      1003968 blocks [2/2] [UU]

md3 : active raid1 sdb5[1] sda5[0]
      15004608 blocks [2/2] [UU]

md4 : active raid1 sdb6[1] sda6[0]
      15004608 blocks [2/2] [UU]

md5 : active raid1 sdb7[1] sda7[0]
      47078336 blocks [2/2] [UU]

unused devices: <none>
```

I do have this somewhat unsettling message in dmesg output:

```
md: Autodetecting RAID arrays.
md: autorun ...
md: considering sdb7 ...
md:  adding sdb7 ...
md: sdb6 has different UUID to sdb7
md: sdb5 has different UUID to sdb7
md: sdb2 has different UUID to sdb7
md: sdb1 has different UUID to sdb7
md:  adding sda7 ...
md: sda6 has different UUID to sdb7
md: sda5 has different UUID to sdb7
md: sda2 has different UUID to sdb7
md: sda1 has different UUID to sdb7
md: created md5
md: bind<sda7>
md: bind<sdb7>
md: running: <sdb7><sda7>
raid1: raid set md5 active with 2 out of 2 mirrors
md: considering sdb6 ...
md:  adding sdb6 ...
md: sdb5 has different UUID to sdb6
md: sdb2 has different UUID to sdb6
md: sdb1 has different UUID to sdb6
md:  adding sda6 ...
md: sda5 has different UUID to sdb6
md: sda2 has different UUID to sdb6
md: sda1 has different UUID to sdb6
md: created md4
md: bind<sda6>
md: bind<sdb6>
md: running: <sdb6><sda6>
raid1: raid set md4 active with 2 out of 2 mirrors
md: considering sdb5 ...
md:  adding sdb5 ...
md: sdb2 has different UUID to sdb5
md: sdb1 has different UUID to sdb5
md:  adding sda5 ...
md: sda2 has different UUID to sdb5
md: sda1 has different UUID to sdb5
md: created md3
md: bind<sda5>
md: bind<sdb5>
md: running: <sdb5><sda5>
raid1: raid set md3 active with 2 out of 2 mirrors
md: considering sdb2 ...
md:  adding sdb2 ...
md: sdb1 has different UUID to sdb2
md:  adding sda2 ...
md: sda1 has different UUID to sdb2
md: created md2
md: bind<sda2>
md: bind<sdb2>
md: running: <sdb2><sda2>
raid1: raid set md2 active with 2 out of 2 mirrors
md: considering sdb1 ...
md:  adding sdb1 ...
md:  adding sda1 ...
md: created md1
md: bind<sda1>
md: bind<sdb1>
md: running: <sdb1><sda1>
raid1: raid set md1 active with 2 out of 2 mirrors
md: ... autorun DONE.
md: Loading md3: /dev/sda5
md: couldn't update array info. -22
md: could not bd_claim sda5.
md: md_import_device returned -16
md: could not bd_claim sdb5.
md: md_import_device returned -16
md: starting md3 failed
```

md3 is my / (root) partition... what's with this error, the failure to start md3? Is this just a convoluted way of saying it's 'already started'?

----------

## intgr

 *DNAspark99 wrote:*   

> 
> 
> ```
> md: Loading md3: /dev/sda5
> 
> ...

 

errno 22 is EINVAL (invalid argument) which usually indicates something dodgy in the (kernel) code. Looking at the source, it doesn't look like anything critical, but there are a lot of paths that can end up with this error.

The other error, 16, is EBUSY, which indicates (surprisingly) that the device is busy. If it's your root partition, have you specified root=/dev/sda5 instead of root=/dev/md3 by any chance? That could be the source of both warnings.

It's odd that md3 actually starts after those messages.

Edit: A Google search indicates that these messages will appear if you have "md=3,/dev/sda5,/dev/sdb5" in your kernel arguments with embedded-superblock md partitions - the RAID will be started once from those parameters and subsequently by the md autodetection (which fails).

----------

## DNAspark99

Aaah, yes, my grub.conf contains:

```
kernel /vmlinuz root=/dev/md3 md=3,/dev/sda5,/dev/sdb5
```

Trimming out the md=3,...  removes the errors as well. Thanks.

----------

## pota

```
* Starting up RAID devices (mdadm) ...     [!!]
mdadm: No arrays found in config file
```

This message is generated by raid-start.sh. If you don't want it, just remove the 'ARRAY /dev/mdX...' lines from mdadm.conf. Be warned, however, that without that configuration you can't start your RAID with 'mdadm -As /dev/mdX'...

----------

## 96140

Thanks, I just got this weird message myself, despite the fact that my RAID1 array is obviously working; checking /proc/mdstat confirms that everything is healthy. We need a smarter relationship between the kernel's autodetection and the rc script, methinks.

----------

## neysx

I guess you could remove raid from RC_VOLUME_ORDER in /etc/conf.d/rc. I haven't tried it, and I'm not sure what problems could occur as a result of not calling mdadm -S to stop your RAID when shutting down. / cannot be shut down anyway, and I've never had any trouble.

Another way is to edit /lib/rcscripts/addons/raid-start.sh so that it calls mdadm -As only when some of the devices defined in /etc/mdadm.conf are not active yet:

```
# Start software raid with mdadm (new school)
if [[ -x /sbin/mdadm && -f /etc/mdadm.conf ]] ; then
        devs=$(awk '/^md[[:digit:]]+ : active/{activ["/dev/" $1]=1} /^[[:space:]]*ARRAY/ {if (!activ[$2])print $2 }' /proc/mdstat /etc/mdadm.conf)
        if [[ -n ${devs} ]] ; then
                create_devs ${devs}
                ebegin "Starting up RAID devices (mdadm)"
                output=$(mdadm -As 2>&1)
                ret=$?
                [[ ${ret} -ne 0 ]] && echo "${output}"
                eend ${ret}
        fi
fi
```

The only change is on the awk line

HTH

----------

## 96140

 *neysx wrote:*   

> I guess you could remove raid from RC_VOLUME_ORDER in /etc/conf.d/rc. I haven't tried and I'm not sure what problems could occur as a result of not calling mdadm -S to stop your raid when shutting down. / cannot be shut down anyway and I've never had any trouble.

 

Removing it from RC_VOLUME_ORDER sounds a little edgy; I'd prefer to let someone else try it out before me. I already get some interesting messages about not being able to stop /dev/md1 (/boot) on shutdown, whether or not it's mounted. Of course, a second later it tells me everything is fine and shut down, and shutdown continues as normal. Again, there's something weird between the kernel's auto-RAID handling and the init-script-provided RAID.

Good to know that md2 (/) can't be shut down anyway; makes sense. If I ever see a related message, now I'll know why.  :Smile: 

----------

## Aysen

 *neysx wrote:*   

> / cannot be shut down anyway and I've never had any trouble.

 Oh, so is this why I always get a

```
mdadm: failed to stop array /dev/md3 Device or resource busy
```

error on a reboot/halt, yet my array is perfectly healthy (at least I believe so   :Laughing:  )?

I'm also getting that mdadm error during boot, good to know it's nothing wrong.

This thread's been very informative for me, thank you.

----------

## tgh

mdadm does not require a config file if you use the newer RAID types.  It puts a special RAID ID on each drive so that it knows which drive goes with which.  However, if you really want to create a config file...

```
mdadm --detail --scan >> /etc/mdadm.conf
```

----------

## Aysen

 *tgh wrote:*   

> mdadm does not require a config file if you use the newer RAID types.  It puts a special RAID ID on each drive so that it knows which drive goes with which.  However, if you really want to create a config file...
> 
> ```
> mdadm --detail --scan >> /etc/mdadm.conf
> ```
> ...

 I knew I didn't actually need that config file, but the HOWTO I followed when setting up mdadm for the first time instructed me to create it (exactly the way you wrote), so I just did. Now, after removing that file, I don't get those errors anymore.

Thank you!

----------

## 96140

Note to all: you can live without the config file and/or init script if and only if you have set your RAID partition type to fd (Linux RAID autodetect); this is the only way to get the kernel to do its autodetect/setup magic.
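To check whether your partitions are set up that way, something like this works (/dev/sda is just an example device; the output format varies a bit between fdisk versions):

```
# RAID member partitions must show type fd ("Linux raid autodetect")
# for the kernel to assemble them at boot
fdisk -l /dev/sda | grep "Linux raid autodetect"
```

To change a partition's type, use fdisk's t command, enter fd, and write the table with w.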

----------

## overkll

 *Aysen wrote:*   

>  *tgh wrote:*   mdadm does not require a config file if you use the newer RAID types.  It puts a special RAID ID on each drive so that it knows which drive goes with which.  However, if you really want to create a config file...
> 
> ```
> mdadm --detail --scan >> /etc/mdadm.conf
> ```
> ...

 

Works here too. The only problem is losing the MAILADDR variable from /etc/mdadm.conf.

So I added

```
-m root@mydomain
```

to MDADM_OPTS in /etc/conf.d/mdadm to ensure I still get notification messages from the mdadm monitor daemon.
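For anyone else dropping the config file, the relevant fragment of /etc/conf.d/mdadm would look something like this (the address is an example; -m is the standard mdadm monitor option for a mail address):

```
# /etc/conf.d/mdadm -- options passed to the mdadm monitor daemon.
# -m replaces the notification address normally read from MAILADDR in mdadm.conf
MDADM_OPTS="-m root@mydomain"
```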

----------

## avieth

This is a little unrelated, but after that mdadm init script starts and prints the reported warning (no arrays found), the init process proceeds to check my filesystems.

EVERY single time I boot, it outputs at least 10 lines of text and pauses on 'replaying journal...' for about 5-8 seconds. This is really annoying, as I like a speedy boot. Any ideas why my reiserfs filesystem is doing this?

----------

## b00zy

I have a RAID setup like this...

```
/dev/md1: /boot
/dev/md2: /swap
/dev/md3: /
```

At the end of the shutdown sequence, /dev/md1 and /dev/md2 are unmounted. However, an error is returned when unmounting /dev/md3: "Device or resource busy."

Everything in boot and normal operation works fine other than that error.

----------

## b00zy

 *Aysen wrote:*   

>  *neysx wrote:*   / cannot be shut down anyway and I've never had any trouble. Oh, so is this why I always get a
> 
> ```
> mdadm: failed to stop array /dev/md3 Device or resource busy
> ```
> ...

 

Is that true? Is there absolutely no way to quell this error?

----------

## Aysen

 *b00zy wrote:*   

> Is that true? Is there absolutely no way to quell this error?

 Read my previous post.

----------

## nixnut

merged above post here.

----------

## richard.scott

 *b00zy wrote:*   

> Is that true? Is there absolutely no way to quell this error?

 

You could edit the following file:

```
/lib/rcscripts/addons/raid-stop.sh
```

and comment out the following line:

```
[[ ${ret} -ne 0 ]] && echo "${output}"
```

This is the line that outputs the error text to screen.

----------

## jexxie

 *b00zy wrote:*   

> I have a RAID setup like this...
> 
> ```
> /dev/md1: /boot
> 
> ...

 

Do you have your dump/pass set to 0 in your /etc/fstab for all your partitions?
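For reference, an /etc/fstab sketch matching b00zy's layout with dump/pass zeroed out; the filesystem types and mount options here are assumptions, only the md device names come from the thread:

```
# <fs>      <mountpoint>   <type>   <opts>     <dump> <pass>
/dev/md1    /boot          ext2     noauto     0      0
/dev/md2    none           swap     sw         0      0
/dev/md3    /              ext3     noatime    0      0
```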

----------

## richard.scott

 *jexxie wrote:*   

> Do you have your dump/pass set to 0 in your /etc/fstab for all your partitions?

 

No, I don't... will that help?

----------

