# mdadm raid 1 keeps getting deleted [solved]

## johnklug

I seem to be going in circles with my new RAID being wiped from my new disks.

```
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
```

`/proc/mdstat` then shows:

```
Personalities : [raid0] [raid1]
md0 : active raid1 sdb[1] sda[0]
      3906887488 blocks super 1.2 [2/2] [UU]
      bitmap: 0/30 pages [0KB], 65536KB chunk

md127 : active raid1 sdd3[1] sdc3[0]
      29294400 blocks [2/2] [UU]

md3 : active raid0 sdd4[1] sdc4[0]
      2856581760 blocks 64k chunks

md125 : active raid1 sdd1[1] sdc1[0]
      128384 blocks [2/2] [UU]
```

I then used parted to create GPT partitions.

What happens is that when I reboot, the array is gone. Either something has corrupted the disks, or the RAID metadata was never really written to them:

```
# mdadm --examine /dev/sda
/dev/sda:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
localhost operator # mdadm --examine /dev/sdb
/dev/sdb:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
localhost operator # mdadm --assemble /dev/sda /dev/sdb
mdadm: device /dev/sda exists but is not an md array.
localhost operator # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm: /dev/sda appears to be part of a raid array:
       level=raid0 devices=0 ctime=Wed Dec 31 18:00:00 1969
mdadm: partition table exists on /dev/sda but will be lost or
       meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
   ...
```

Then it starts syncing again:

```
localhost ~ # cat /proc/mdstat
Personalities : [raid0] [raid1]
md0 : active raid1 sdb[1] sda[0]
      3906887488 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  3.2% (125130304/3906887488) finish=382.6min speed=164700K/sec
      bitmap: 30/30 pages [120KB], 65536KB chunk
```

And on the next reboot, the RAID is lost again.  Note that the old partition-based RAIDs on the old disks still work, and /dev/md3 is RAID0.

*Last edited by johnklug on Thu Jul 28, 2016 3:39 am; edited 1 time in total*

----------

## NeddySeagoon

johnklug,

What command do you use to start parted?

----------

## frostschutz

Storage is like ogres... err, onions... whatever. Storage works in layers. Each piece of storage (each block device) gets its own layer.

- RAID is a layer.
- A partition table is a layer.
- LVM is a layer.
- LUKS encryption is a layer.
- A filesystem is a layer.

As long as you respect the layers, things work fine. You're trying to put two different things into the same layer. Doesn't work at all.

You can put RAID on /dev/sda, or a partition table on /dev/sda, but not both.

I suggest you put a partition table on /dev/sda, with partitions; that gives you a new layer: /dev/sda1, /dev/sda2, ...

Then you put RAID on /dev/sda1, which gives you a new layer, /dev/md0.

Then you can put LUKS, LVM, or whatever you want on /dev/md0 and... finally a filesystem on /dev/VG/LV. Something like that.

Sometimes you have to do some cleaning before you can put something new on it. `wipefs` can remove old metadata for you.
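Roughly, the layering above looks like this as a command sequence. This is only a sketch: the device names, partition sizes, and the VG/LV names are examples, not anyone's actual setup, and every one of these commands destroys data on the devices it touches.

```shell
# Layer 1: partition table on each raw disk
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart raid 1MiB 100%
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart raid 1MiB 100%

# Layer 2: RAID on the partitions, not on the whole disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Layer 3: LVM on the RAID device
pvcreate /dev/md0
vgcreate VG /dev/md0
lvcreate -L 20G -n LV VG

# Layer 4: filesystem on the logical volume
mkfs.ext4 /dev/VG/LV
```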

----------

## Goverp

IMHO it's silly to have both partitions and LVM.  I'd start with full-device RAID.

----------

## johnklug

The first partition is the UEFI system partition; it exists only for the sake of UEFI.  I could not figure out how to get it to boot with LVM on the whole disk.

I will try creating two raids. They were partitioned with the parted command.

There is no encryption.

LVM is more flexible than partitions.

----------

## johnklug

Looks like the problem is solved.  It apparently does not work to put MD RAID1 on a whole disk and then partition the MD device with GPT.  GPT and MD must conflict.

So it does work to partition both disks identically with GPT, then put MD RAID1 on the individual partitions across the disks.  So I have one RAID for the EFI partition, and a second for LVM.  *****

And both Raids are automatically assembling at boot.
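For the record, a sketch of how one can record the arrays and confirm they assemble at boot. The config file path is the usual Gentoo location; the array names are examples from this thread.

```shell
# Record the assembled arrays so the init scripts/initramfs can find them
mdadm --detail --scan >> /etc/mdadm.conf

# After a reboot, confirm both arrays came up on their own
cat /proc/mdstat
mdadm --detail /dev/md0
```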

===== Addendum =====

***** Note that I later discovered UEFI partitions will not be recognized by the BIOS if they are on RAID.  So I created one UEFI partition on each disk without RAID, ran the genkernel install onto both disks' UEFI partitions, and have separate names for each disk in the UEFI BIOS.   Example:

```
grub2-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=gentoodisk2 --recheck
```

In this case the bootloader ID gentoodisk2 is entered into the UEFI BIOS as a boot parameter.

*Last edited by johnklug on Sun Aug 14, 2016 3:49 am; edited 1 time in total*
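Applied to both disks, the same idea looks roughly like this. The mount point, partition names, and the first bootloader ID are illustrative assumptions; only gentoodisk2 appears in the example above.

```shell
# ESP on the first disk (hypothetical device/ID names)
mount /dev/sda1 /boot
grub2-install --target=x86_64-efi --efi-directory=/boot \
              --bootloader-id=gentoodisk1 --recheck
umount /boot

# ESP on the second disk, with its own ID so the UEFI BIOS
# shows a separate boot entry per disk
mount /dev/sdb1 /boot
grub2-install --target=x86_64-efi --efi-directory=/boot \
              --bootloader-id=gentoodisk2 --recheck
```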

----------

## frostschutz

 *johnklug wrote:*   

> GPT and MD must conflict

 

Not really, but things can be misinterpreted. With 0.90/1.0 metadata (stored at the end of the device), the GPT partition table sits at the start of the disk, just like a regular GPT, and most tools will simply not realize it is supposed to be inside the MD. Alternatively, with 1.1/1.2 metadata, if the RAID extends to the very end of the disk, the backup GPT at the end of the MD is also at the end of the disk (again like a regular GPT), and again some programs might not expect that to be part of the MD rather than the disk. There were several discussions on the linux-raid mailing list recently about people losing their "full disk raid" to exactly this, thanks to some partitioner or other tool "fixing" the GPT for you (overwriting MD metadata in the process).

Always put a partition table on your disks. Then use partitions for your MDs/LVMs/LUKS/filesystems/… (again, each in its own layer).
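If a disk already carries stale metadata from an earlier attempt, a cleanup along these lines avoids that kind of confusion before repartitioning. Device names are examples, and both `mdadm --zero-superblock` and `wipefs --all` are destructive.

```shell
# Show every metadata signature the tools can see on the disk
wipefs /dev/sda

# Remove stale MD superblocks left over from a previous array
mdadm --zero-superblock /dev/sda

# Or wipe all known signatures (partition table, RAID, filesystem) at once
wipefs --all /dev/sda
```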

----------

