# What version of RAID metadata should I use? [SOLVED]

## robdd

Hello All,

My 1TB drive started giving errors, so I've gone out and bought two WD 2TB Green disks, and I have tried to set up a RAID1 array for my data partition. But it hasn't worked. I've tried searching these forums, and Googling, but my problem is that there's *too much* information, and some of it is surely out of date, but I don't know which is right and which is wrong (However, as any Gentoo addict knows, the latest bleeding edge stuff is ALWAYS the best !).

I have set up sda with sda1 = /, sda2 = swap, sda3 = /usr, sda4 = extended, and sda5 = /work, with partition type FD (RAID autodetect). Ditto with sdb. I plan to back up / and /usr to sdb occasionally, so I can always boot off sdb if sda is sick. The sda5/sdb5 partitions are going to be my RAID1 mirrored partitions. I wasn't sure what version of raid metadata to use, but figured later MUST be better, so went with the default:

```
mdadm --create --verbose /dev/md5 --level=1 --raid-devices=2 /dev/sd[ab]5
```

although I did experiment on a single disk first with

```
mdadm --create --verbose /dev/md3 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb1 /dev/sdb2
```

and that seemed to work fine. Anyway, I went with the first mdadm command, created an ext3 filesystem on /dev/md5, mounted it on /work, and restored some of my data. It took a long while (5-6 hours) to sync the two disks, after which I shut down for the day.

That was yesterday - now I boot up, and /work won't mount. I have a look in /dev, and there's a /dev/md127 instead of the /dev/md5 that I had yesterday, so no wonder mount can't find it. Then I grep dmesg for 'md', and I get (plus some other stuff..):

```
chook ~ $ grep md dmesg.log
[    1.344124] md: raid1 personality registered for level 1
[    1.344195] md: raid10 personality registered for level 10
[    1.344269] md: faulty personality registered for level -5
[    1.397721] md: Waiting for all devices to be available before autodetect
[    1.397796] md: If you don't use raid, use raid=noautodetect
[    1.398094] md: Autodetecting RAID arrays.
[    1.423077] md: invalid raid superblock magic on sdb5
[    1.423159] md: sdb5 does not have a valid v0.90 superblock, not importing!
[    1.445624] md: invalid raid superblock magic on sda5
[    1.445707] md: sda5 does not have a valid v0.90 superblock, not importing!
[    1.445793] md: Scanned 2 and added 0 devices.
[    1.445871] md: autorun ...
[    1.445940] md: ... autorun DONE.
[    2.457455] mdadm: sending ioctl 1261 to a partition!
[    2.457457] mdadm: sending ioctl 1261 to a partition!
[    2.457459] mdadm: sending ioctl 1261 to a partition!
[    2.457566] mdadm: sending ioctl 1261 to a partition!
[    2.457567] mdadm: sending ioctl 1261 to a partition!
[    2.457569] mdadm: sending ioctl 1261 to a partition!
[    2.457652] mdadm: sending ioctl 1261 to a partition!
[    2.457653] mdadm: sending ioctl 1261 to a partition!
[    2.457655] mdadm: sending ioctl 1261 to a partition!
[    2.457656] mdadm: sending ioctl 1261 to a partition!
[    2.460909] md: bind<sdb5>
[    2.598271] md: bind<sda5>
[    2.599303] md/raid1:md127: active with 2 out of 2 mirrors
[    2.599318] md127: detected capacity change from 0 to 1974625845248
[    2.612865]  md127: unknown partition table

chook ~ $ mdadm -V
mdadm - v3.1.4 - 31st August 2010

chook ~ $ uname -a
Linux chook 3.2.12-gentoo #2 SMP Mon Sep 17 21:58:49 EST 2012 x86_64 Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz GenuineIntel GNU/Linux

chook ~ $ cat /proc/mdstat
Personalities : [raid1] [raid10] [faulty]
md127 : active (auto-read-only) raid1 sda5[0] sdb5[1]
      1928345552 blocks super 1.2 [2/2] [UU]

unused devices: <none>
```

And now there's no ext3 partition found on /dev/md127 ?? What the....

So my questions are:

1) What version of RAID metadata should I use?

2) Should I be using RAID autodetect on bootup (which implies I should be using v0.90)?

3) Should I care about setting up/editing /etc/mdadm.conf?

I really would like to set up the RAID partition and restore my data only ONCE more!

So any sage advice on what the *current* best-practice for RAID is would be much appreciated.

Regards from "Rob the Confused".

----------

## NeddySeagoon

robdd,

Here are the rules.

For booting with grub1, /boot must be raid metadata version 0.90.  You can use kernel autoassembly if you like, but grub won't care, as the raid is not assembled when it runs at boot time.  This also means that /boot must be raid1 or not raided at all.

Kernel auto assembly only works with raid metadata 0.90. That's a make-your-mind-up limitation if you want autoassembly.

If root is on raid, it must be assembled before you attempt to mount it. This means that if you use metadata 1.2, you must use an initrd to assemble root.

Further, udev requires that /usr and /var are mounted before it starts.  That's not an issue if they are on / (root), but if you want separate /usr and/or /var, it's a further complication.
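For a hand-built initrd, the mdadm step can be sketched roughly like this. The busybox layout, mount points, and /dev/md3 device name here are my own illustration, not taken from this thread:

```shell
#!/bin/busybox sh
# Illustrative /init for a minimal initramfs that assembles the raid
# array(s) before mounting the real root.  All paths and device names
# below are assumptions for the sketch.
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t devtmpfs devtmpfs /dev

# Assemble every array listed in the initramfs copy of /etc/mdadm.conf
mdadm --assemble --scan

# Mount the real root read-only; the real init remounts it read-write later
mount -o ro /dev/md3 /newroot

# Hand control over to the real init on the assembled root
umount /proc /sys
exec switch_root /newroot /sbin/init
```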

The main limitation with metadata 0.90 is that you can only have 28 elements in a raid set.

Kernel raid autoassembly is going away "soon" as it's a deprecated feature. It's been like that for over two years now.

Your data and filesystem will work if you use mdadm -A ... to assemble the array first.  DO NOT use -C.

-A assembles an existing raid set; -C creates a new one, destroying anything that is there.
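If you're unsure what metadata an existing array carries, you can check a member partition before assembling; a sketch, using the device names from this thread (the grep'd output line is approximate):

```shell
# Inspect the superblock on one member to see the metadata version
mdadm --examine /dev/sda5 | grep -i version
# prints a line along the lines of:  Version : 1.2

# Then assemble the existing array non-destructively (never --create here)
mdadm --assemble /dev/md5 /dev/sda5 /dev/sdb5
```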

Changing metadata version destroys your filesystems.

You will hate those WD 2Tbyte Green disks.  I have 5 in a raid 5 set and I have had two replaced under warranty in less than a year already.

Read up about the idle3 timer and use the WD utility to turn it off. Parking the heads every eight seconds is just silly.
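On Linux, the third-party idle3-tools package (assuming it's available for your system; it's not mentioned in this thread) can read and clear the timer without booting WD's DOS utility; a sketch:

```shell
# Read the current idle3 (head-park) timer on a WD Green drive
idle3ctl -g /dev/sdb

# Disable the timer entirely; takes effect after the drive is power-cycled
idle3ctl -d /dev/sdb
```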

----------

## robdd

Hi Neddy,

Thanks very much for the swift and helpful reply. I thought I'd persist with the version 1.2 metadata, and by simply adding one line to my previously virgin mdadm.conf file the RAID1 array has sprung back into life:

```
ARRAY /dev/md5 devices=/dev/sda5,/dev/sdb5
```
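A device-name-independent alternative is to let mdadm generate the ARRAY line itself, which identifies the array by UUID rather than by /dev names, so it survives drives being reshuffled. The exact output format shown is approximate:

```shell
# Append a UUID-based ARRAY line to mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
# appends something along the lines of:
#   ARRAY /dev/md5 metadata=1.2 name=chook:5 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```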

But I guess these lines in the mdadm.conf file are a bit misleading:

 *Quote:*   

> # mdadm will function properly without the use of a configuration file,
> 
> # but this file is useful for keeping track of arrays and member disks.
> 
> 

 

And thanks not so much for the news about the WD Green drives. I had already heard from a work colleague *after* I had bought them that I should expect problems, because when the drives went to sleep the raid driver thought one drive had failed and kicked it out of the array. So now I can expect unreliability as well. Perfect! When I bought the drives I was told they had a 3-year warranty, so before installation I wrote the purchase date and length of warranty on the drive case with a marker pen. In retrospect that was a good move!

So now I will read up on the idle3 timer, and see what I need to do to make the disks work in a reasonably sane manner.

The 1TB drive that failed was a Samsung, and the Green lemons are WD, so maybe I'll try Seagate disks next. But it's probably just an indication of how quality deteriorates as costs are squeezed. I really do wonder how anyone in the manufacturing and sales chain can be making money on a 1TB drive when it sells for around A$100.

Best Regards,

Rob.

----------

## depontius

 *robdd wrote:*   

> Hi Neddy,
> 
> And thanks not so much for the news about the WD Green drives

 

From what I've heard, this kind of thing is pretty much what you can expect across the board for drives these days.  In the case of Caviar Green it's the spindown, which at least can be defeated.  I've also read of some sort of error policy common in the TB-range drives that can confound RAID, and that you need to buy "RAID-ready" drives, which are of course more expensive.

I haven't done any searching on this lately, though around the end of the year I hope to be rebuilding my servers, and will need to understand this.  I would hope that somewhere someone in RAID-land has this information summarized, along with workarounds wherever possible.

----------

## DaggyStyle

 *NeddySeagoon wrote:*   

> 
> 
> The main limitation with metadata 0.90 is that you can only have 28 elements in a raid set.
> 
> Kernel raid autoassembly is going away "soon" as its a depreciated feature. Its been like that for over two years now.
> ...

 

so this means that initrd will be shoved down our throats the same as initramfs?

----------

## NeddySeagoon

DaggyStyle,

initrd is the old name for initramfs.  I have no idea how soon, "soon" is.

My systems use separate /usr, /var and root, all on lvm (even on one drive systems) on raid, so with ~arch installed everywhere and udev>171, I've had an initrd for a while.

I don't mind the initrd concept too much - what I don't like is the idea of it being set up by black magic, so that when it breaks, I have no idea how to fix it.

My initrds are all kernel-independent, build-once-in-the-life-of-the-machine affairs, all built by hand.

depontius,

It's one of the timeouts you are talking about.  Hitachi drives (today at least) don't have that problem.

----------

## dmpogo

Green drives are simply not designed to work in a continuous 24/7 environment - their power saving is too aggressive, and some now also come with variable rotation speed, which you hardly want in a RAID array.  They are OK for external storage (although some enclosures may have issues losing them as well).  Drives really do live longer the less often they are stopped and started.

For a raid system, stick to drives from the 'enterprise' lineup, whether WD or Seagate - they are not much more expensive.

----------

