# Dmesg: mdadm / invalid raid superblock [SOLVED]

## iandoug

Hi

After upgrading the kernel to 4.4.6 I noticed the messages below in dmesg; I rebooted to the previous kernel and they appear there too.

I don't know WHEN they first appeared (I don't often read dmesg), but they weren't always there.

Question: is it something I need to worry about? Or how do I fix it? I could not find a relevant answer in the forums.

Drives are: sda = boot, root, plain ext3, and swap.

sdb1, sdc1 == md1 == 2 mirrored drives with /home

The drives work fine; the nightly health check is clean.

```
[    2.051108] scsi 1:0:0:0: Direct-Access     ATA      WDC WD10EARX-00P AB51 PQ: 0 ANSI: 5
[    2.051824] sd 1:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[    2.051840] sd 1:0:0:0: Attached scsi generic sg0 type 0
[    2.051999] ata3.00: configured for UDMA/133
[    2.052093] scsi 2:0:0:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
[    2.052311] sd 2:0:0:0: Attached scsi generic sg1 type 0
[    2.052320] sd 2:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    2.052432] sd 2:0:0:0: [sdb] Write Protect is off
[    2.052433] sd 2:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    2.052472] sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.052962] ata4.00: configured for UDMA/133
[    2.053038] scsi 3:0:0:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
[    2.053229] sd 3:0:0:0: [sdc] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    2.053243] sd 3:0:0:0: Attached scsi generic sg2 type 0
[    2.053346] sd 3:0:0:0: [sdc] Write Protect is off
[    2.053348] sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[    2.053399] sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.056264] sd 1:0:0:0: [sda] 4096-byte physical blocks
[    2.056489] sd 1:0:0:0: [sda] Write Protect is off
[    2.056686] sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    2.056696] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.059775] scsi 8:0:0:0: CD-ROM            HL-DT-ST BD-RE  BH14NS40  1.00 PQ: 0 ANSI: 5
[    2.059831]  sdb: sdb1
[    2.060189] sd 2:0:0:0: [sdb] Attached SCSI disk
[    2.071972]  sdc: sdc1
[    2.072529] sd 3:0:0:0: [sdc] Attached SCSI disk
[    2.078934] usb 9-2: new low-speed USB device number 2 using ohci-pci
[    2.081296] sr 8:0:0:0: [sr0] scsi3-mmc drive: 24x/48x writer dvd-ram cd/rw xa/form2 cdda tray
[    2.081655] cdrom: Uniform CD-ROM driver Revision: 3.20
[    2.081998] sr 8:0:0:0: Attached scsi CD-ROM sr0
[    2.082044]  sda: sda1 sda2 sda3
[    2.082100] sr 8:0:0:0: Attached scsi generic sg3 type 5
[    2.082835] sd 1:0:0:0: [sda] Attached SCSI disk
[    2.083074] md: Waiting for all devices to be available before autodetect
[    2.083277] md: If you don't use raid, use raid=noautodetect
[    2.098233] md: Autodetecting RAID arrays.
[    2.114311] md: invalid raid superblock magic on sdb1
[    2.114515] md: sdb1 does not have a valid v0.90 superblock, not importing!
[    2.134034] md: invalid raid superblock magic on sdc1
[    2.134269] md: sdc1 does not have a valid v0.90 superblock, not importing!
[    2.134483] md: Scanned 2 and added 0 devices.
[    2.134710] md: autorun ...
[    2.134902] md: ... autorun DONE.
```

```
/dev/md1:
        Version : 1.2
  Creation Time : Mon Feb 27 23:07:41 2012
     Raid Level : raid1
     Array Size : 1953512400 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953512400 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
    Update Time : Mon Jun 13 00:00:54 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           Name : livecd:1
           UUID : 80a5e99c:fb79aebb:0aaafca1:e0c57dbf
         Events : 999

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
```

Thanks, Ian

----------

## NeddySeagoon

iandoug,

```
[    2.098233] md: Autodetecting RAID arrays.
[    2.114311] md: invalid raid superblock magic on sdb1
[    2.114515] md: sdb1 does not have a valid v0.90 superblock, not importing!
[    2.134034] md: invalid raid superblock magic on sdc1
[    2.134269] md: sdc1 does not have a valid v0.90 superblock, not importing!
```

That message has been there for as long as you have had raid autodetect in your kernel.

The key phrase here is "does not have a valid v0.90 superblock". That's correct.

```
/dev/md1:
        Version : 1.2
```

The raid superblock is actually Version 1.2.

It's a harmless warning. You probably have mdadm in a runlevel, and that will assemble your raid set.
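A side note on why the kernel says "does not have a valid v0.90 superblock": the two metadata formats live at different offsets. v0.90 metadata sits near the end of the partition, where the kernel's autodetect looks; v1.2 metadata sits 4 KiB from the start and begins with the md magic 0xa92b4efc, so autodetect never sees it. A minimal sketch of that layout, using a scratch file standing in for a RAID member so no real disks are touched:

```shell
# v1.2 md metadata starts 4 KiB into the member and opens with the
# magic 0xa92b4efc, stored little-endian on disk (fc 4e 2b a9).
# Fake a member with a scratch file -- no root access needed.
img=$(mktemp)
truncate -s 8K "$img"
# write the magic at offset 4096 (octal escapes for printf portability)
printf '\374\116\053\251' | dd of="$img" bs=1 seek=4096 conv=notrunc 2>/dev/null

found=no
if [ "$(od -An -tx1 -j4096 -N4 "$img" | tr -d ' ')" = "fc4e2ba9" ]; then
  found=yes
  echo "v1.2 magic found at 4 KiB offset"
fi
rm -f "$img"
```

On the real devices, `mdadm --examine /dev/sdb1` (run as root) prints the same `Version : 1.2` field if you want to confirm it directly.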

----------

## iandoug

Hi Neddy

Thanks for the reassurance :)

Not sure where mdadm gets loaded, I don't see it in the list but maybe it's camouflaged.

```
trooper ian # rc-update show
           bacula-dir |      default
            bacula-fd |      default
               binfmt | boot
             bootmisc | boot
           consolekit |      default
                cupsd |      default
                devfs |                       sysinit
                dmesg |                       sysinit
                 fsck | boot
             hostname | boot
              hwclock | boot
              keymaps | boot
            killprocs |              shutdown
    kmod-static-nodes |                       sysinit
             lighttpd |      default
           lm_sensors |      default
                local |      default
           localmount | boot
             loopback | boot
              modules | boot
             mount-ro |              shutdown
                 mtab | boot
                mysql |      default
             net.eth0 |      default
               net.lo | boot
             netmount |      default
                nginx |      default
           ntp-client |      default
                 ntpd |      default
              php-fpm |      default
       postgresql-9.3 |      default
               procfs | boot
            pure-ftpd |      default
                 root | boot
                samba |      default
            savecache |              shutdown
             sendmail |      default
                 sshd |      default
                 swap | boot
            swapfiles | boot
               sysctl | boot
                sysfs |                       sysinit
            syslog-ng |      default
         termencoding | boot
         tmpfiles.dev |                       sysinit
       tmpfiles.setup | boot
                 udev |                       sysinit
              urandom | boot
           vixie-cron |      default
               xinetd |      default
```
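If you ever do want the mdadm service explicitly in a runlevel, Gentoo's sys-fs/mdadm package ships OpenRC init scripts (the names below are what that package provided around this era; check /etc/init.d to confirm on your box). A sketch:

```shell
# See which md-related init scripts the package installed
ls /etc/init.d | grep -i md

# "mdraid" (where present) assembles arrays early in boot, before
# localmount needs /home; "mdadm" runs the monitor daemon that sends
# mail on state changes.
rc-update add mdraid boot
rc-update add mdadm default
```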

I actually checked dmesg because I saw a message about some group.sh (or something along those lines) not running/loading, but I can't find it in dmesg now.

oh wait, maybe I need to reboot to the new kernel to see... let me try that.

Thanks, Ian

----------

## iandoug

The message was about a file ????-cgroup.sh not found somewhere in the ???/rc.d/??? path.

The message is not in dmesg.

Thanks, Ian

----------

## NeddySeagoon

iandoug,

Well, mdadm is assembling your raid some time before localmount runs, or /home would not be mounted.

It's clearly not the kernel. mdadm --assemble can be run as a command; it need not be a service.

The mdadm service will send email about state changes in the raid, e.g. a drive being kicked out.

If you have an initrd, mdadm can be called there. I have to do that on my main box, as root is on lvm on top of raid5.
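For the initrd (or any script-driven) case, mdadm is normally pointed at the array via /etc/mdadm.conf. A minimal sketch, reusing the UUID that `mdadm --detail /dev/md1` reported earlier in this thread:

```shell
# /etc/mdadm.conf (sketch)
# Assemble md1 by UUID rather than relying on kernel autodetect,
# which only understands v0.90 metadata.
ARRAY /dev/md1 metadata=1.2 UUID=80a5e99c:fb79aebb:0aaafca1:e0c57dbf
# Where the mdadm monitor sends state-change mail (drive kicked out, etc.)
MAILADDR root
```

With that in place, `mdadm --assemble --scan` brings the array up from an initrd or init script.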

By way of example, my dmesg contains the following, which brings up my /boot raid1:

```
[    3.332964] md: md125 stopped.
[    3.334219] md: bind<sdb1>
[    3.334786] md: bind<sdc1>
[    3.335432] md: bind<sdd1>
[    3.336054] md: bind<sda1>
[    3.336931] md/raid1:md125: active with 4 out of 4 mirrors
```

You should find something similar in dmesg too.

----------

## iandoug

```
trooper ~ # dmesg | grep raid
[    1.791536] md: raid1 personality registered for level 1
[    2.002526] md: If you don't use raid, use raid=noautodetect
[    2.022397] md: invalid raid superblock magic on sdb1
[    2.044121] md: invalid raid superblock magic on sdc1
[   11.123138] md/raid1:md1: active with 2 out of 2 mirrors
```

This stuff is all one step away from black magic for me... :)

Thanks, Ian
