# RAID5 with NVRAID - final disk size

## remix

i think i set up 4 1TB disks in a RAID5 array correctly using mdadm --create --verbose /dev/md1 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

my disk space looks correct in fdisk

```
livecd ~ # fdisk /dev/md1

The number of cylinders for this disk is set to 732571872.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/md1: 3000.6 GB, 3000614387712 bytes

2 heads, 4 sectors/track, 732571872 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

Disk identifier: 0x0db7b0d3

    Device Boot      Start         End      Blocks   Id  System

/dev/md1p1               1        8193       32770   83  Linux

/dev/md1p2            8194     1056770     4194308   82  Linux swap / Solaris

/dev/md1p3         1056771    14163971    52428804   83  Linux

/dev/md1p4        14163972   536870911  2090827760   83  Linux

Command (m for help): 

```

however, my df shows only 2TB

```
livecd ~ # df -h

Filesystem            Size  Used Avail Use% Mounted on

tmpfs                 880M  333M  548M  38% /

/dev/sr0              2.7G  2.7G     0 100% /mnt/cdrom

/dev/loop0            2.6G  2.6G     0 100% /mnt/livecd

udev                   10M  148K  9.9M   2% /dev

cachedir              4.0M  160K  3.9M   4% /mnt/livecd/lib64/splash/tmp

tmpfs                 880M  6.1M  874M   1% /mnt/livecd/lib64/firmware

tmpfs                 880M     0  880M   0% /mnt/livecd/usr/portage

/dev/md1p3             50G  4.5G   43G  10% /mnt/gentoo

/dev/md1p1             31M  395K   29M   2% /mnt/gentoo/boot

/dev/md1p4            2.0T   33M  2.0T   1% /mnt/gentoo/home

```

how come it's only 2.0T and not 2.946T?

```
livecd ~ # dmraid -s

*** Set

name   : nvidia_fbddhhfc

size   : 5860575360

stride : 128

type   : raid5_ls

status : ok

subsets: 0

devs   : 4

spares : 0

```

am i doing it wrong?
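One quick way to see what mdadm actually built is to query the array directly (read-only commands; /dev/md1 is the array name used above):

```shell
# Shows the array's level, size, member disks, and whether the
# initial resync has finished.
cat /proc/mdstat
mdadm --detail /dev/md1
```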

----------

## jawsdaws

If you're doing a RAID5, you'll lose some space to the parity data.  I think what you have is about right for a RAID5.

----------

## remix

oh, i thought it would be

1TB space

1TB space

1TB space 

1TB parity

=  3TB

if it's 2TB, then wouldn't it be either RAID1 or RAID0 ?
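That arithmetic is actually right for RAID5: the parity isn't a dedicated disk, but it still costs one disk's worth of space in total, so four 1TB disks should give roughly 3TB usable. A quick sanity check (the per-disk byte count here is a nominal "1 TB" figure for illustration, not read from the array):

```shell
# RAID5 usable capacity is (N - 1) * disk size -- parity is striped
# across all members rather than living on one dedicated drive.
disks=4
disk_bytes=1000204886016      # a nominal "1 TB" drive (hypothetical figure)
usable=$(( (disks - 1) * disk_bytes ))
echo "$usable bytes usable"   # ~3.0 TB decimal, ~2.73 TiB binary
```

So the ~2.9T that fdisk reports for /dev/md1 is the expected RAID5 size; the 2.0T ceiling on the last partition is a separate issue.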

----------

## remix

i found this in dmesg

```
md: bind<sda>

md: bind<sdb>

md: bind<sdc>

md: bind<sdd>

raid5: device sdc operational as raid disk 2

raid5: device sdb operational as raid disk 1

raid5: device sda operational as raid disk 0

raid5: allocated 4270kB for md1

raid5: raid level 5 set md1 active with 3 out of 4 devices, algorithm 2

RAID5 conf printout:

 --- rd:4 wd:3

 disk 0, o:1, dev:sda

 disk 1, o:1, dev:sdb

 disk 2, o:1, dev:sdc

 md1: p1 p2 p3 p4

RAID5 conf printout:

 --- rd:4 wd:3

 disk 0, o:1, dev:sda

 disk 1, o:1, dev:sdb

 disk 2, o:1, dev:sdc

 disk 3, o:1, dev:sdd

md: recovery of RAID array md1

md: minimum _guaranteed_  speed: 1000 KB/sec/disk.

md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.

md: using 128k window, over a total of 976762496 blocks.

```

does this look right?
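For what it's worth, "active with 3 out of 4 devices" is normal right after creation: mdadm builds a new RAID5 as a degraded array and then syncs the fourth disk in the background, which is the "md: recovery" you see. A sketch of how to watch it finish (assumes the array is /dev/md1 as above):

```shell
# The resync shows up in mdstat with a progress bar, percentage, and ETA;
# refresh every 5 seconds until it disappears.
watch -n 5 cat /proc/mdstat
```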

----------

## jawsdaws

I'm sorry.  I read that wrong, thinking you had three 1TB drives.  Nevermind me...

----------

## Mad Merlin

I'm confused, why are you using both mdadm and fake RAID (dmraid)? You only need one or the other, preferably the former.

Also, it's unusual to partition a RAID device with mdadm, as mdadm can RAID partitions instead of just whole disks, so you can mix RAID levels if desired (and if you're booting from it, required... /boot must be RAID 1).

----------

## remix

i honestly didn't know what i was doing, or if dmraid -ay really did anything.

i guess i could start all over. i have only found a few guides on raid that helped. would you be nice enough to share the steps i should take?

i'll start by unmounting them from livecd. then i guess i could fdisk and destroy all partitions?

so now what? was this line correct: mdadm --create --verbose /dev/md1 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

how do i create a RAID1 for boot partition?

thanks for your help.

----------

## cyrillic

The other thing that was not mentioned is that MSDOS partition tables only support partitions up to 2TiB (with 512-byte sectors).

If your drive (or array) is bigger than that, then you will have to use parted instead of fdisk to create such large partitions.
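That 2TiB ceiling falls straight out of the MBR format, which explains the 2.0T you're seeing on /dev/md1p4. A quick check of the limit, with the parted alternative sketched as comments (device name assumed from earlier in the thread; relabeling is destructive, so double-check first):

```shell
# MBR stores partition sizes as 32-bit sector counts: with 512-byte
# sectors that is a hard per-partition ceiling.
max_bytes=$(( 4294967296 * 512 ))    # 2^32 sectors * 512 bytes
echo "$max_bytes bytes"              # 2199023255552 = 2.0 TiB, the df ceiling
# To go past it, give the array a GPT label instead, e.g.:
#   parted --script /dev/md1 mklabel gpt
#   parted --script /dev/md1 mkpart primary 0% 100%
```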

----------

## HeissFuss

If you're not planning on booting Windows, don't use the BIOS raid (dmraid) at all.

Also, you don't want /boot on a RAID5 or RAID0 device.  I suggest you make the first partition on each drive maybe 64MB and put them all in a RAID1 (2 active, 2 spare).  That way you can actually boot off of them.  Then use the rest of each disk for one large RAID5 md device and put LVM on top of it.  LVM will give you more flexibility if you decide later on to make /var, /tmp or /usr/portage separate filesystems.

Please note, however, that you may want to make / its own raw raid device rather than an LVM one if you're not going to use genkernel.  Otherwise it may be a pain to get the initrd to boot to LVM properly.

First, after disabling BIOS raid, you should wipe out the partition tables and first section of the disks to get rid of the dmraid formatting.

Be sure to check that you are picking the correct devices to wipe and format before running any of the following.

```

dd if=/dev/zero of=/dev/sda bs=1M count=1

dd if=/dev/zero of=/dev/sdb bs=1M count=1

dd if=/dev/zero of=/dev/sdc bs=1M count=1

dd if=/dev/zero of=/dev/sdd bs=1M count=1

sync

```

(All partitions should be type fd)

/dev/sda1 - 64MB

/dev/sda2 - rest of disk

/dev/sdb1 - 64MB

/dev/sdb2 - rest of disk

/dev/sdc1 - 64MB

/dev/sdc2 - rest of disk

/dev/sdd1 - 64MB

/dev/sdd2 - rest of disk
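Under those assumptions, the layout above could be scripted with a recent util-linux sfdisk rather than typed into fdisk four times (destructive; double-check the device names first):

```shell
# Two partitions per disk, both type fd (Linux raid autodetect):
# 64MB for the boot RAID1, the rest of the disk for the RAID5.
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
  sfdisk "$d" <<'EOF'
,64M,fd
,,fd
EOF
done
```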

```

# Make the two md raid devices

mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 --spare-devices=2 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

mdadm --create --verbose /dev/md2 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Make the LVM volume (raid5vg can be any name you want, as long as you are consistent later on)

pvcreate /dev/md2

vgcreate raid5vg /dev/md2

# Make your LVs for your filesystems

lvcreate -n swap -L 1G raid5vg

lvcreate -n root -L 30G raid5vg

lvcreate -n home -L 2.5T raid5vg

```

This will give you

/dev/md1 - 64MB - /boot

/dev/raid5vg/swap - 1GB - swap

/dev/raid5vg/root - 30GB - /

/dev/raid5vg/home - 2.5TB - /home

With ~200GB to play with for future LVs, to extend existing ones, or to create snapshots.  You can see how much space is left in your VG with 'vgs'.

If you do make / an LV, make sure to build your genkernel initrd with LVM and md RAID support.  Even if / is a raw RAID5 md device instead, you'll still need md RAID support to boot.
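A sketch of that genkernel invocation (flag names from memory; check `genkernel --help` on your version before relying on them):

```shell
# Build the kernel and an initramfs with md RAID and LVM support included.
genkernel --mdadm --lvm all
```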

Hope this helps.

----------

