# Best way to expand a software RAID5 Array?

## Kai Hvatum

I've got an 8x160GB RAID5 array which I'm now looking to expand. I absolutely do not trust the RAID5 expansion tools out there which purport to expand RAID5 arrays without loss of data, so don't even dare suggest that! I tried one once and everything was wiped out.  :Embarassed: 

I'll put everything in list form to make this as painless as possible.  :Smile: 

 *Setup: wrote:*   

> 
> 
> Eight 160GB Seagate IDE Drives
> 
> Two PCI-64/PCI-X IDE Controllers
> ...

 

 *Requirements: wrote:*   

> 
> 
> The finished product must present one contiguous filesystem and mountpoint.
> 
> This must be a software solution - for various reasons I cannot use a hardware RAID card.  
> ...

 

For various reasons I chose XFS earlier, and I believe it is expandable, although someone please correct me if I'm wrong. Basically I just want to expand this filesystem by another 500GB or so. Depending upon the suggestions I receive here I'll decide which drives to buy.
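For what it's worth, growing XFS is a two-step affair: first enlarge the underlying block device, then grow the filesystem into it. A minimal sketch, assuming the array is mounted at /mnt/raid (the mountpoint is made up):

```shell
# After the underlying device (RAID array or LVM logical volume)
# has been enlarged, grow the XFS filesystem to fill it.
# XFS can only grow, and only while mounted; it cannot shrink.
xfs_growfs /mnt/raid

# Confirm the new size
df -h /mnt/raid
```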

----------

## ecosta

Hi Kai,

I'm a little confused.  You say you want to expand your current RAID but don't want to use expansion tools, which I can agree with.  If, as you mention, you want just the one partition of ~1.6T, the only way I can think of is a full backup / re-installation.

You must have a good reason to have your whole system on just the one partition, but damn, that FS check after 30 boots must hurt  :Wink:   If you did change your mind about the one FS, then LVM2 would be of great help and you would just create a new RAID5 array parallel to this one... but something tells me you know that.

Oh well, bottom line in my opinion for your specs is backup then re-install (with more space than you need so you don't do it again in 6 months  :Wink: ).  Maybe take an image of your system too.

Ed.

----------

## ejmiddleton

The problem with XFS is that you can't shrink it, but it does let you resize online.  If you want a contiguous filesystem you need to use LVM2.  How you move to LVM2 will depend on your setup.  The easiest way is to set up a software RAID on your new drives and add the array to an LVM2 volume group.  Create your partition, copy your old files to the new partition, and get this partition set up to boot (do this before erasing your original root partitions).  You will need an initrd to boot from LVM2 on RAID.  Then add your old RAID array to the volume group.  Whether this is all possible will depend on how much space you have used.

If you have used more space than you will be adding, you will have to work in two stages.  First create a volume group from the new hard drives without RAID; this gives you more space to work with.  Copy the files to an LVM2 partition on that non-RAID volume group.  Then add the old RAID to the LVM2 group and remove the new drives from it (you will have to move the files back to your old RAID using pvmove).  Finally, create a RAID on your new drives and add it to the LVM2 group.
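The pvmove shuffle in that second stage might look roughly like this; the device names (/dev/md0 for the old array, /dev/sdi1 and /dev/sdj1 for the temporary non-RAID drives) and the volume group name vg0 are all assumptions:

```shell
# Bring the emptied old array into the volume group
pvcreate /dev/md0
vgextend vg0 /dev/md0

# Migrate allocated extents off the temporary non-RAID drives
# and onto the array
pvmove /dev/sdi1 /dev/md0
pvmove /dev/sdj1 /dev/md0

# Drop the temporary drives from the group so they can be
# rebuilt as a proper RAID5 and re-added afterwards
vgreduce vg0 /dev/sdi1 /dev/sdj1
```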

It is a lot easier if you set up LVM2 from the start.  If you had done that, you could have just created the new RAID array, added it to the LVM2 volume group, and resized your partition.
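The easy path above could be sketched like this; the member partitions /dev/sd[i-p]1, the array name /dev/md1, the volume group vg0, and the logical volume name data are all made up for the example:

```shell
# Build a software RAID5 from the eight new drives
mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/sd[i-p]1

# Put LVM2 on top of the array
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -l 100%FREE -n data vg0

# Make the filesystem, then copy the old data across
mkfs.xfs /dev/vg0/data
mount /dev/vg0/data /mnt/new
```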

----------

## Kai Hvatum

 *ejmiddleton wrote:*   

> The problem with XFS is that you can't shrink it, but it does let you resize online.  If you want a contiguous filesystem you need to use LVM2.  How you move to LVM2 will depend on your setup.  The easiest way is to set up a software RAID on your new drives and add the array to an LVM2 volume group.  Create your partition, copy your old files to the new partition, and get this partition set up to boot (do this before erasing your original root partitions).  You will need an initrd to boot from LVM2 on RAID.  Then add your old RAID array to the volume group.  Whether this is all possible will depend on how much space you have used.
> 
> If you have used more space than you will be adding, you will have to work in two stages.  First create a volume group from the new hard drives without RAID; this gives you more space to work with.  Copy the files to an LVM2 partition on that non-RAID volume group.  Then add the old RAID to the LVM2 group and remove the new drives from it (you will have to move the files back to your old RAID using pvmove).  Finally, create a RAID on your new drives and add it to the LVM2 group.
> 
> It is a lot easier if you set up LVM2 from the start.  If you had done that, you could have just created the new RAID array, added it to the LVM2 volume group, and resized your partition.

 

Ah yes! Sadly I was lazy and didn't feel like dealing with LVM2 at the time I set up this RAID. It was also a few years ago, and I didn't completely trust LVM because it seemed to still be under heavy development at the time. 

Now I've gotten eight 200 GB Seagate hard drives. Thankfully I don't need to deal with setting up an initrd because I have a separate 40 GB Gentoo drive which I back up regularly just in case it fails. This makes maintenance on the RAID much easier than having a jumble of files spread about the partition. So, according to your sage advice I should:

First:   Partition these drives into a software RAID5 array.

Second:  Add this array to an LVM2 volume group. 

Third:   Transfer all of the data over from my old 8x160 array to this new LVM2 volume group.

Fourth:  Clear the old 8x160 array and add it to the LVM2 volume group also. 

Fifth:   Bring the whole thing back onto the network by Monday morning - *Crosses Fingers* 

Does that sound good?
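The fourth step of the plan above, folding the cleared old array back in and growing the filesystem, might reduce to something like this; /dev/md0 for the old array, vg0 for the volume group, data for the logical volume, and /mnt/data for the mountpoint are all assumed names:

```shell
# Once the old 8x160 array has been wiped, add it to the group
pvcreate /dev/md0
vgextend vg0 /dev/md0

# Grow the logical volume into the new free extents,
# then grow XFS to match (XFS grows online, while mounted)
lvextend -l +100%FREE /dev/vg0/data
xfs_growfs /mnt/data
```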

Oh yeah, how can I add my extra IDE and SATA cards to the server without screwing up my already existing drive order and confusing my MD setup? Should I just throw everything in there and sort out the drive letters afterward? Or can I somehow control how Linux will assign drive names and avoid downtime on the array? I'm using a mixture of PCI-X IDE cards and the motherboard's native IDE controller (Tyan Tiger dual Athlon MP).
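On the drive-order question: md identifies array members by the UUID stored in each member's superblock, not by device name, so shuffled drive letters are mostly harmless as long as /etc/mdadm.conf assembles arrays by UUID rather than by listing devices. A sketch of such a config (the UUIDs below are placeholders; the real ones come from `mdadm --detail --scan`):

```shell
# /etc/mdadm.conf -- assemble arrays by superblock UUID so the
# kernel device names (hde, sda, ...) can move around freely
# when controllers are added.
DEVICE partitions
ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md1 UUID=11111111:11111111:11111111:11111111
```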

PS: If anyone wants pictures of this cheap-IDE storage beast just give me a say.  :Smile: 

----------

## Kai Hvatum

Bump ^^   :Very Happy: 

----------

## number_nine

 *Kai Hvatum wrote:*   

> PS: If anyone wants pictures of this cheap-IDE storage beast just give me a say. 

 

Yeah, I'm kind of curious what this all looks like  :Smile:   Pictures aren't necessary, but what case/enclosure are you using to house all those drives?  Do you have a rack system?

I'm slowly building a similar storage beast myself.  I got the Chenbro SR10769 server case, and am fairly happy with it.

Also, what SATA card are you using?  I'm in the market for a 4 (or more) port SATA card that has mature open source drivers.

Sorry I don't have any useful advice for you - I found this thread because I just started learning about RAID, LVM, etc.

----------

## JeffBlair

OK, I know this is a really old thread, but better than starting a new one, right?  :Smile: 

OK, right now I have 2 drives in my server: a 160 Gig boot drive, sda(1/2/3), and a 2T storage drive, sdb(1).

I'm looking to get 2 more of the storage drives and make a RAID5 array. Right now the storage drive has XFS on it, since I know you can expand it and it works well for big files like movies.

I'm sure I'm going to have to backup/restore all the info on the storage drive, but I have a couple of questions before I do that.

-Can I make a RAID0 with just 1 drive, and then later convert it to RAID5?

-What FS type should I make it? LVM?

- If I understand it right, I'd add the LVM disks sdb/c/d into a RAID5, then format as XFS - or is there a better one that I can maybe shrink online as well?

So it'd work something like this:

  Create 1 LVM partition on all disks

  Create RAID5 with all 3 disks

  Format partition with XFS on new RAID5 drive.

sdb/c/d ->LVM->RAID5->xfs?
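The stack above is more commonly layered the other way around: RAID5 at the bottom, LVM on top of the array, XFS on the logical volume (sdb/c/d -> RAID5 -> LVM -> xfs). A sketch under that assumption, using the partition names from the post; mdadm also accepts the keyword `missing` in place of a device, so the array can start degraded before all drives have arrived:

```shell
# RAID5 from the three storage partitions (substitute "missing"
# for one device to start degraded and add the drive later)
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1

# LVM on top of the array, XFS on top of the logical volume
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -l 100%FREE -n storage vg0
mkfs.xfs /dev/vg0/storage
```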

Thanks for the help.

----------

