# convert RAID-5 to RAID-6 with kernel>=2.6.31 & mdadm>=3.1

## dogshu

Hi all!  The newest versions of the Linux kernel and mdadm support converting between RAID levels:

http://lwn.net/Articles/358682/

A recent disk failure and the subsequent wait for a replacement disk from Newegg left me running for almost a week with no redundancy on my RAID-5 array, so I decided to convert to RAID-6.  Here is my story, which is based on this detailed post by Neil Brown, the mdadm maintainer:

http://neil.brown.name/blog/20090817000931

I am converting a 7-disk RAID-5 array to an 8-disk RAID-6 array.  First I made sure I was running the right kernel and mdadm versions.  I compulsively upgrade my kernel, so I was already running 2.6.32.3.  Gentoo is still on mdadm 3.0, so I put "=sys-fs/mdadm-3.1.1" into /etc/portage/package.keywords/mdadm and emerged the new version.

To add the 8th disk to my existing RAID-5 array, I ran:

```
mdadm --add /dev/md3 /dev/sdh3
```

This added /dev/sdh3 as a hot spare. Then, to convert the array to RAID-6, I ran:

```
mdadm --grow /dev/md3 --level=6 --raid-devices=8 --backup-file=/nfs/media/tmp/md3.backup
```
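Before kicking off a long reshape, it can also help to raise the kernel's rebuild speed limits, which govern reshape throughput as well; the defaults are fairly conservative. A sketch — the sysctl names are the standard md knobs, but the values below are just illustrative, not recommendations:

```
# Raise md's minimum and maximum rebuild/reshape speed (in KB/s per device).
# Defaults are typically 1000 and 200000; these values are only examples.
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=500000

# Equivalent via /proc:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
```

Raising speed_limit_min tells md to keep reshaping at that rate even when the array is seeing other I/O, so use it with care on a busy system.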

I had tried running the --grow command without the --backup-file argument, since Neil's post seems to say that a backup file is not necessary when a hot spare is present. But mdadm wasn't having it; it told me:

```
mdadm level of /dev/md3 changed to raid6
mdadm: /dev/md3: Cannot grow - need backup-file
mdadm: aborting level change
```

With the --backup-file argument everything seems to be working fine. Here's the relevant part of my /proc/mdstat:

```
md3 : active raid6 sdh3[7] sdg3[6] sdf3[5] sde3[4] sda3[0] sdb3[2] sdc3[3] sdd3[1]
      120052224 blocks super 0.91 level 6, 256k chunk, algorithm 18 [8/7] [UUUUUUU_]
      [====>................]  reshape = 23.8% (4763648/20008704) finish=131.3min speed=1934K/sec
```

My next step is to convert my 4-terabyte /dev/md5 to RAID-6. Neil said in his post that the process of converting from RAID-5 to RAID-6 is "very slow."  He wasn't kidding... at the rate /dev/md3 is converting, I estimate it will take 4.5 days to convert /dev/md5 to RAID-6.
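For what it's worth, the finish estimate in /proc/mdstat is just remaining blocks divided by current speed, so it's easy to reproduce (and to scale up for a bigger array). Using the md3 numbers above:

```
# Remaining reshape time = (total - done) / speed.
# Figures taken from the md3 /proc/mdstat output above (KiB and KiB/sec).
awk 'BEGIN {
    done = 4763648; total = 20008704; speed = 1934
    printf "%.1f min remaining\n", (total - done) / speed / 60
}'
# prints "131.4 min remaining"
```

That agrees with mdstat's finish=131.3min to within rounding; plugging in the size of a larger array and the same speed gives the multi-day estimates discussed below.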

----------

## xibo

Hi

It shouldn't be that slow. I converted from RAID-5 to RAID-6 when going from 5 disks to 6. Each disk was 1TB with average specs for the time (Hitachi Ultrastar AK10000(A)), and it was reshaping at about 10-12MB/s per disk... it took about a day and kept one processor core loaded.

Newer kernels have mdraid multithreading support, though I haven't tried that so far.

EDIT: if backing up all the data on the array before changing levels is an option, you might want to back everything up and tweak the chunk/block sizes used by mdraid - the performance gain can be quite noticeable
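With mdadm 3.1 or newer, the chunk size can also be changed in place via --grow rather than by recreating the array; this is a sketch, not a recommendation — the device, the 512K value, and the backup path are just examples, and the change triggers another full reshape:

```
# Change the chunk size of an existing array in place (mdadm >= 3.1).
# /dev/md3 and 512K are illustrative; this reshapes the whole array again,
# so a backup file may once more be required.
mdadm --grow /dev/md3 --chunk=512 --backup-file=/nfs/media/tmp/md3-chunk.backup
```

Either way — recreate or in-place --grow — benchmark your actual workload before and after, since the best chunk size depends on whether your I/O is mostly large sequential transfers (bigger chunks) or small random ones (smaller chunks).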

----------

## Mad Merlin

 *xibo wrote:*   

> EDIT: if backing up all the data on the array before changing levels is an option, you might want to back everything up and tweak the chunk/block sizes used by mdraid - the performance gain can be quite noticeable

 

I've wondered about this before. How should one pick chunk sizes for maximum performance? (Also, could you quantify noticeable?)

----------

## drescherjm

I am getting even worse RAID-5-to-RAID-6 reshape performance on a fast machine, with a ~50GB array that normally resyncs in less than 5 minutes.

```
md1 : active raid6 sdj3[8] sdb3[9](S) sda3[10](S) sdc3[6] sde3[7] sdf3[5] sdk3[1] sdi3[4] sdl3[2] sdm3[3] sdh3[0]
      51447424 blocks super 0.91 level 6, 64k chunk, algorithm 18 [9/8] [UUUUUUUU_]
      [===>.................]  reshape = 15.5% (1142784/7349632) finish=110.1min speed=938K/sec
```

The thing is that CPU usage is low:

```
top - 02:48:30 up 54 min,  1 user,  load average: 1.67, 1.81, 1.46
Tasks: 154 total,   1 running, 153 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.1%us,  0.9%sy,  0.0%ni, 77.5%id, 21.2%wa,  0.0%hi,  0.2%si,  0.0%st
Mem:   4053472k total,  1251924k used,  2801548k free,    88196k buffers
Swap:  3180840k total,        0k used,  3180840k free,   192636k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8229 root      20   0     0    0    0 S    2  0.0   0:15.37 md1_raid5
 8230 root      20   0  5008  960  160 D    1  0.0   0:12.87 mdadm
 8022 root      20   0     0    0    0 D    1  0.0   0:06.02 usb-storage
 8236 root      20   0     0    0    0 S    1  0.0   0:03.68 md1_reshape
 6112 nagios    20   0 27404 1636 1040 S    0  0.0   0:00.11 nrpe
    1 root      20   0  3940  672  564 S    0  0.0   0:00.66 init
```

I think the issue is that I chose a pen drive as the backup device. Also, I have a spare, and mdadm would not let me proceed without the backup file.

Like the OP, I am not worried about this array but about the big one, which will be >14TB when it grows. At this rate that may take several weeks.

----------

## drescherjm

BTW, the big array took a few days with a regular hard drive as the location for the backup file.

From this and other testing, I believe that if you want a fast reshape you must increase the size of the array; otherwise you will need a backup file. It also appears that the entire array gets reshaped by copying all of its data through this backup file. An SSD would probably have sped the process up by at least 10x, but copying terabytes of data would have used quite a few of the SSD's write/erase cycles.
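To put rough numbers on that: if the reshape really is bottlenecked on the backup device, total time is roughly the array size divided by that device's sustained write speed. A back-of-the-envelope sketch — the 10MB/s pen-drive and 100MB/s SSD figures are assumptions for illustration, not measurements:

```
# Rough reshape-time estimate when bottlenecked on the backup device.
# 14TB array; assumed sustained writes: pen drive 10MB/s, SSD 100MB/s.
awk 'BEGIN {
    bytes = 14e12
    printf "pen drive: %.1f days\n", bytes / 10e6  / 86400
    printf "SSD:       %.1f days\n", bytes / 100e6 / 86400
}'
# prints:
# pen drive: 16.2 days
# SSD:       1.6 days
```

Which is consistent with both the "several weeks" estimate for a slow backup device and the roughly 10x improvement an SSD would offer.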

----------

