# convert stripe to raid5? (i think i finally got it)

## bunder

anyone know if it's possible to convert a two disk stripe into a four disk raid5?  all disks are the same make/model, done with mdadm.

i'm guessing "not without losing your data", but i'm hoping to retain the data, and not have to buy a 500gb drive just to make a temporary backup.

thanks

----------

## NeddySeagoon

bunder,

I assume you mean a two disk raid0 to a four disk raid5?

Not directly. You can make the raid5 in degraded mode, which only needs 3 drives, but you can't take one out of the raid0 set directly.

If we are talking kernel raid0 and it's less than half full, you may be able to do some data juggling to go from raid0 to a single drive and put the freed drive into the raid5 set.

----------

## Cyker

 *bunder wrote:*   

> anyone know if it's possible to convert a two disk stripe into a four disk raid5?  all disks are the same make/model, done with mdadm.
> 
> i'm guessing "not without losing your data", but i'm hoping to retain the data, and not have to buy a 500gb drive just to make a temporary backup.
> 
> thanks

 

Theoretically you can, but not the way I suspect you mean (i.e. converting the RAID0 to RAID5 then expanding it onto the 2 extra drives).

The way I know of would go something like this (disclaimer: I haven't tried this!):

First you need to clear out the 'new' two drives and create a RAID5 array on them with something like:

```
mdadm --create /dev/md1 --level=5 -n 2 /dev/hdc1 /dev/hdd1
```

Obviously, this is the time to add things like --chunk=128 or whatever if you want to tweak the underlying properties of the RAID array.

Once that's done, hopefully you will have a working two-disk RAID5 array that you can format, put LVM on it, or whatever, e.g. ...

```
mke2fs -j -O dir_index -L cyServRAID5 -v -b 4096 -E stride=16 /dev/md1
```

(Obviously tweak the settings here: if you used --chunk=128, you'll want stride=32 here, and if you change the -b blocksize, then you need to calculate the stride yourself. Or use whatever is appropriate for xfs/reiser or LVM etc.!)
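
To make the stride arithmetic concrete, here's a quick sketch (hypothetical values; the formula is just RAID chunk size divided by filesystem block size):

```shell
# stride = RAID chunk size / filesystem block size
chunk_kib=128       # from a hypothetical mdadm --chunk=128 (KiB)
block_bytes=4096    # from mke2fs -b 4096
stride=$(( chunk_kib * 1024 / block_bytes ))
echo "stride=${stride}"    # chunk=128 with 4 KiB blocks gives stride=32
```

(With the default 64 KiB chunk and 4 KiB blocks you get the stride=16 used in the mke2fs line above.)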

...and copy your RAID0 data to it: mount it, copy everything over, and TEST THAT YOU CAN READ IT BACK.

--- Start of Danger Zone!  :Shocked:  ---

Then you need to zap the existing RAID0:

```
mdadm --stop /dev/md0
```

Clear it out from the mdadm.conf if you are using it, then set up the freed disks as you normally would prepare them for addition to an array, if needed.

From there it's the standard --add and very lengthy and nail-biting --grow command to expand the array:

```
mdadm /dev/md1 --add /dev/hdc1

mdadm --grow /dev/md1 -n 3 --backup-file=/tmp/mdadm.grow.backup

mdadm /dev/md1 --add /dev/hdd1

mdadm --grow /dev/md1 -n 4 --backup-file=/tmp/mdadm.grow.backup
```

Or, if you feel brave, this might work too:

```
mdadm /dev/md1 --add /dev/hdc1 /dev/hdd1

mdadm --grow /dev/md1 -n 4 --backup-file=/tmp/mdadm.grow.backup
```

NB: Either way, the --backup-file MUST point to a file that isn't on the array, so e.g. if your /tmp is on the array, you will need to specify a different place.

After that finishes (It will take hours, possibly days!  :Shocked: ), you just need to resize the FS to fit into the new space.

With ext3, it is easy:

```
resize2fs -p /dev/md1
```

The -p just displays a completion status; with no other args it defaults to expanding the FS to the full size of the partition.

If you have anything like reiser or xfs, or some exotic LVM thing then you'll have to read up on it yourself, but I'm sure it's not much harder  :Smile: 

Normally I preach about making backups, but I figure if you could you wouldn't be asking how to do it this way...  :Sad: 

----------

## bunder

 *Cyker wrote:*   

> Theoretically you can, but not the way I suspect you mean (i.e. converting the RAID0 to RAID5 then expanding it onto the 2 extra drives).

 

actually, the way i was thinking of doing it was convert the raid 0 to a degraded raid 4 (no parity disk), add the two disks and then convert it to raid 5...  but now that i know you can make a raid5 with two disks, i might be able to do that instead, skipping a step.  

my current config looks like this:

 *Quote:*   

> DEVICE /dev/hdc1 /dev/hdd1
> 
> DEVICE /dev/hdc2 /dev/hdd2
> 
> ARRAY /dev/md0 level=raid0 num-devices=2 devices=/dev/hdc1,/dev/hdd1
> ...

 

all four drives are WD2500JB's and i only plan on retaining one partition on the current array, which contains 395gb of mythtv storage... 

dunno, i'm tired and don't know how to finish this post...   :Laughing:    yeah.   :Laughing: 

----------

## Cyker

Huh, 4 drives? I only count 2 there??

The problem with what you want to do is that I don't think mdadm can do direct RAID level conversions (Can it??).

For the --grow mode, it specifically says that the --level param is not supported in the man page...

----------

## bunder

 *Cyker wrote:*   

> Huh, 4 drives? I only count 2 there??
> 
> The problem with what you want to do is that I don't think mdadm can do direct RAID level conversions (Can it??).
> 
> For the --grow mode, it specifically says that the --level param is not supported in the man page...

 

the other two drives are still in another machine.   :Wink: 

i don't know about conversions between levels and/or grow...  that's why i'm here asking i guess.   :Laughing: 

----------

## NeddySeagoon

bunder,

Take the raid0 set(s) and copy the data to a new single drive. 

Validate the copy as the next step is destructive. The single drive will become one of the raid5 later.

Using the drives from the raid0 set and the other new drive, make a degraded raid5 set with 3 drives. mdadm will hate you for this as it will detect the existing raid0 set.

Copy the data you moved to the single drive to the degraded raid5. Validate the copy again as the next step is destructive.

Add the single drive to the raid5 set. When it's synced, you have a full strength raid5.

----------

## Cyker

 *bunder wrote:*   

>  *Cyker wrote:*   Huh, 4 drives? I only count 2 there??
> 
> The problem with what you want to do is that I don't think mdadm can do direct RAID level conversions (Can it??).
> 
> For the --grow mode, it specifically says that the --level param is not supported in the man page... 
> ...

 

Ah, okay.  :Smile: 

Anyway, I am led to believe that mdadm currently doesn't support changing RAID levels on-the-fly, although there is a possibility it may be added in the future.

Ironically, there is a utility called "raidreconf" which CAN convert between RAID levels but I don't know if you can use it with mdadm RAIDs as it is for the much older raidtab-based arrays.

So right now it looks like your options are:

a) Back up onto a big external 500GB HD and then create the array on the 4 disks (Safest, most expensive if you don't have a 500+GB USB disk), or

b) Create the degraded array etc. in my earlier post. (Costs nothing but lots of time, much more risky)

Neddy's suggestion would be a better and safer bet, but it has the problem that 395GB data won't fit on a 250GB drive  :Laughing: 

----------

## bunder

 *Cyker wrote:*   

> Neddy's suggestion would be a better and safer bet, but it has the problem that 395GB data won't fit on a 250GB drive 

 

yeah, neddy and i were talking about that on irc.    :Laughing: 

but i was thinking about the degraded raid5 on 2 drives though...  neddy was saying something about "for every four bytes you lose a bit", and that i should be able to fit the data on the new array...  so i might actually do that after all.  the machine is on a UPS, so as long as the power doesn't go out and the drives don't clunk, i'm good.  :Very Happy: 

a raid calculator was telling me in the end that 4x210GB=580GB of usable space, so that leaves enough room for now.  i can use the other 4x40GB=111GB for storing backups.  :Very Happy: 

still don't know when i want to do it though.

thanks

----------

## Cyker

Well my thinking was that RAID5 capacity is all drives minus 1, so in your case if you were going for 210GB partitions for each element, it should be about 630GB in total?

(I'm running 4x1TB drives and have about 3TB of usable space).

In RAID5 degraded mode with 2 drives, you should have about 420GB to play with, albeit slowly, and when you add the third drive you'll still have 420GB, but with the redundancy, and then the final drive should push it up to 630GB.
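
As a sanity check on those numbers, assuming 210GB usable per element as above:

```shell
# raid5 usable capacity = (number of member drives - 1) * per-drive size;
# a degraded array still exposes the full (n - 1) capacity.
drive_gb=210
for n in 3 4; do
  echo "${n}-disk raid5: $(( (n - 1) * drive_gb ))GB usable"
done
```

which gives 420GB for the 3-disk case and 630GB for the full 4-disk array.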

I'd leave it 'til you have a free weekend 'tho!  :Wink: 

----------

## NeddySeagoon

Cyker,

That's exactly right.

----------

## bunder

damn, why isn't anything on the internet ever right?   :Laughing: 

as long as i know it'll fit.  that's all that matters.   :Very Happy: 

i should have checked this before, but the current partition size is 228gb...  i'll have to unload some space and shrink the array first.  i've got a spare 60 and some space on my laptop, so despite taking a while it's still looking doable.

cheers

----------

## bunder

so after pulling my hair out to find out that mdadm only loaded one of my arrays, i'm stuck here...

 *Quote:*   

> prescott ~ # cat /proc/mdstat
> 
> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
> 
> md5 : active raid0 hda2[0] hdb2[1]
> ...

 

why is it rebuilding?  i haven't even put filesystems on the new arrays yet.   :Confused: 

----------

## Mad Merlin

 *bunder wrote:*   

> so after pulling my hair out to find out that mdadm only loaded one of my arrays, i'm stuck here...
> 
>  *Quote:*   prescott ~ # cat /proc/mdstat
> 
> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
> ...

 

It needs to initialize the contents on the disks. The same thing happens when you initialize any array with mdadm that has redundancy (RAID 1, 4, 5, 6, and 10).

Edit: You can create a filesystem and start using the array immediately, however.

----------

## bunder

 *Mad Merlin wrote:*   

> It needs to initialize the contents on the disks. The same thing happens when you initialize any array with mdadm that has redundancy (RAID 1, 4, 5, 6, and 10).
> 
> Edit: You can create a filesystem and start using the array immediately, however.

 

oh okay, i thought i broke it.   :Laughing: 

edit: yeah, something looks screwy here...

 *Quote:*   

> prescott mnt # df -h
> 
> Filesystem            Size  Used Avail Use% Mounted on
> 
> /dev/md4              431G  392G   18G  96% /mnt/md4
> ...

 

193G?  that's nowhere near the size we mentioned before...   :Confused: 

 *Quote:*   

> prescott mnt # mdadm --detail /dev/md0
> 
> /dev/md0:
> 
>         Version : 0.90
> ...

 

----------

## think4urs11

correct me if i'm wrong, but you now have a 2-disk raid0 and a 2-disk raid5, and the latter cannot easily be expanded with additional disks, correct?

Wouldn't it be easier to create the new raid5 in degraded mode already?

the following is fully untested...

That way you could add the 2 former raid0 disks to the raid5 array after copying the content to it. Something like 'mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 missing missing /dev/sdc1 /dev/sdd1' should do the trick, and when the content is copied, break the raid0 and do 'mdadm /dev/md0 --add /dev/sda1'...

----------

## bunder

 *Think4UrS11 wrote:*   

> Wouldn't it be easier to create the new raid5 in degraded mode already?

 

i thought that's what i was trying to do.   :Confused: 

edit: oh wait, i needed to add "missing missing"?   :Shocked: 

double edit: no, <nevermind i got it>

----------

## eccerr0r

Interesting that mdadm supports degenerate 2-disk RAID5's  :Surprised: 

Anyway if I'm reading this correctly (2-disk RAID0 to 4-disk RAID5) why couldn't you (or did you? or was this already mentioned?):

1. create a reshapable, new "3"-disk RAID5 array with one member absent (a degraded 3-disk RAID5) by giving mdadm the keyword missing in place of the third disk, using the two new disks, and make a new filesystem on it.  You should see [UU_] or something like that in mdstat, and your original [UU] for the RAID0.  Test that you can actually use the new array; it should work just fine while it waits for another member.

2. copy RAID0 to the degraded RAID5.  These two should be the exact same size so it should fit without any trouble, and you need not delete anything.

At this point you have redundant copies of your data.  Make sure you diff, as next step is destructive and you'll lose your redundancy temporarily.

3. disassemble the raid 0 and wipe it.  Take one of the disks and introduce it to the degraded 3-disk RAID5 as redundancy, and allow it to complete rebuild.  You now have a 3-disk redundant RAID5 system [UUU].

4. and finally, obviously, mdadm --grow to reshape the last disk onto the RAID5.  Then resize your filesystem.

Done, No additional disks needed, your 2-disk RAID0 is now a 4-disk RAID5.  However I would still have that 500G disk handy just in case something goes wrong, but then again you were playing with fire with a RAID0 anyway...
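
The size claim in step 2 is easy to verify, assuming (hypothetically) equal 250GB drives all round:

```shell
# A 2-disk raid0 and a degraded 3-disk raid5 expose the same capacity,
# so the copy in step 2 always fits without deleting anything.
drive_gb=250
raid0_gb=$(( 2 * drive_gb ))         # striping: n * size
raid5_gb=$(( (3 - 1) * drive_gb ))   # raid5: (n - 1) * size, even degraded
echo "raid0=${raid0_gb}GB degraded-raid5=${raid5_gb}GB"
```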

----------

## think4urs11

 *bunder wrote:*   

> edit: oh wait, i needed to add "missing missing"?  

 

bingo

----------

## bunder

finally got it going last night, but i still didn't have enough space.

woke up to this:

 *Quote:*   

> cp: writing `../md0/1056_20080321115900.nuv': No space left on device
> 
> 

 

luckily it's only a small handful of files, and i won't even mention that i accidentally blew away my movie partition.   :Embarassed: 

 *Quote:*   

> md1 : active raid5 hdd2[1] hdc2[0]
> 
>       78236416 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
> 
> md0 : active raid5 hdd1[1] hdc1[0]
> ...

 

(how come they're backwards? eg: d then c)

at any rate, i'll have to find a spot for some of these files.   :Confused: 

edit/update:  i think we're good.   :Very Happy: 

 *Quote:*   

> # cat /proc/mdstat
> 
> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
> 
> md1 : active raid5 hdb2[3] hda2[2] hdd2[1] hdc2[0]
> ...

 

----------

