# Two 500GB HDs as 1TB volume: LVM or RAID 0?

## Satoshi

Hello, guys

So, I've been meaning to buy two 500GB HDs, but I want them to be seen as a single 1TB volume.

I was thinking of RAID 0, but it has been brought to my attention that LVM might be better.

I asked this on #gentoo and I gathered that:

In case one disk fails, the partition will probably be too messed up anyway, so the safety aspect is pretty similar: I'll lose everything should one drive fail.

RAID 0 will at least give me better performance that, even if very slight, is still a step up from LVM, which gives no performance increase at all.

But, well, I want to gather more information about it before making my choice: what do you guys think? LVM or RAID 0?

Why two 500GB and not one 1TB drive, you ask? Because these are 2.5" drives, and it's way cheaper buying two 500GB units than one 1TB unit (in fact, that's the case even with some 3.5" models).

I'm not sure, but most likely my mobo won't have RAID capabilities, so this will be a software RAID.

----------

## eccerr0r

LVM: if you don't care about speed, or perhaps you want to grow your 1T volume group into a 1.5T volume group later (with roughly 1/3 the MTBF of a single drive)... and potentially slightly easier recovery of data.

RAID0: If you want speed.

Not much to it really.

----------

## cach0rr0

 *Satoshi wrote:*   

> Why two 500GB and not one 1TB drive, you ask? Because these are 2.5" drives, and it's way cheaper buying two 500GB units than one 1TB unit (in fact, that's the case even with some 3.5" models).

 

how much are 1TB drives going for down in Brazil? 

Just curious. Up here there is a small difference in price between the two (something like $65 for 1TB Samsung, $50 or thereabouts for 500GB)

----------

## Satoshi

 *cach0rr0 wrote:*   

>  *Satoshi wrote:*   
> 
> Why two 500GB and not one 1TB drive, you ask? Because these are 2.5" drives, and it's way cheaper buying two 500GB units than one 1TB unit (in fact, that's the case even with some 3.5" models).
> 
> ...

 

1TB: R$150-R$250

500: R$100-R$200

Yes, not much of a difference. But it's a different picture with 2.5-inch drives. Those are around R$250 for 500GB and, well, I can't even FIND 1TB notebook HDs, but the 750GB ones are around R$450, almost double the price of the 500GB ones.

(As of this writing, 1 US dollar is worth 1.6 Brazilian reais.)

----------

## madchaz

I'd go with LVM and tell it to stripe the data (you can do that when you create a logical volume; check the man page for lvcreate).

You get the same performance boost as striped RAID 0, with more convenience. 

And to those who say RAID 0 is faster: only if you stripe it instead of making it linear. So both end up with the same result.
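For instance, a striped setup under LVM might look something like this (the device names /dev/sdb1 and /dev/sdc1 and the vg0/data names are just assumptions for the example):

```shell
# Mark both disk partitions as LVM physical volumes
pvcreate /dev/sdb1 /dev/sdc1
# Pool them into one volume group
vgcreate vg0 /dev/sdb1 /dev/sdc1
# -i 2 stripes the logical volume across both PVs (RAID 0-like layout);
# -I 64 sets a 64 KiB stripe size
lvcreate -i 2 -I 64 -l 100%FREE -n data vg0
```

Without `-i`, lvcreate would lay the volume out linearly, which is where the "LVM is slower" impression comes from.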

----------

## Satoshi

 *madchaz wrote:*   

> I'd go with LVM and tell it to stripe the data (you can do that when you create a logical volume; check the man page for lvcreate).
> 
> You get the same performance boost as striped RAID 0, with more convenience. 
> 
> And to those who say RAID 0 is faster: only if you stripe it instead of making it linear. So both end up with the same result.

 

Could you explain what kind of "convenience" that is? I've also heard people claim it was more "manageable", but what do "more manageable" and "more convenient" actually mean in practice?

----------

## NeddySeagoon

Satoshi,

It only matters if you intend to partition the logical volume further.

LVM by default will give you the same as linear RAID 0.  RAID 0 will be striped by default, and LVM can do that too.

You can also partition both logical systems further.

LVM scores here as you can grow and shrink the partitions dynamically, provided you choose filesystems that support those operations too.
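As a sketch of what growing looks like (assuming a volume group vg0 with a logical volume data carrying an ext4 filesystem):

```shell
# Grow the logical volume by 50G, then grow the filesystem to match.
# ext4 can be grown while mounted; shrinking requires unmounting first.
lvextend -L +50G /dev/vg0/data
resize2fs /dev/vg0/data
```

With plain RAID 0 partitions, the same change would mean repartitioning and restoring from backup.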

----------

## Satoshi

 *NeddySeagoon wrote:*   

> Satoshi,
> 
> It only matters if you intend to partition the logical volume further.
> 
> LVM by default will give you the same as linear RAID 0.  RAID 0 will be striped by default, and LVM can do that too.
> ...

 

Oh, so with RAID I can't, for instance, create a new partition whenever I want? That's seriously bad for me. If it means I can't even have different partitions inside the RAID volume, then it's a definite no-go for me.

Also, which do you think is easier to create/maintain? I actually have never messed around with RAID, and I didn't even know LVM existed until like two days ago.

Good resources on LVM (since it seems that's what I'll use) are welcome. Especially the striping part.

Also, which file systems support that? This is a strictly Linux-only setup, so I guess file systems aren't an issue (in fact, I can't wait to get rid of my old FAT32 data partitions I used to use for sharing with Windows).

----------

## 1clue

IMO, I would use LVM no matter what; the only decision would be whether to put RAID 0 under it.

So far you have given no compelling reason to add RAID that I can see.

LVM not only lets you add new volumes, you can do so while the system is using the rest of the volumes.  Even better, you can resize a volume while it's mounted, provided the filesystem type allows resizing.
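For example, carving out a new volume on a running system might look like this (vg0 and the scratch name are assumptions):

```shell
# Create, format, and mount a new logical volume with no downtime
lvcreate -L 20G -n scratch vg0
mkfs.ext4 /dev/vg0/scratch
mkdir -p /mnt/scratch
mount /dev/vg0/scratch /mnt/scratch
```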

----------

## Satoshi

I think you guys made a compelling case towards LVM. However, I see that the boot partition can't be in a Logical Volume, so what strategy should I use for it?

Can I have one partition off of one disk outside the LVM serving as /boot and the rest as the LV?

EDIT: According to http://www.gentoo.org/doc/en/lvm2.xml, yes, I can.

It also suggests I keep the root partition outside the LV. That's cool with me. In fact, that would be one hell of an opportunity to get a 64GB+ SSD to use as my root partition. Too bad they are so damned expensive :(
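So the layout I'm picturing would be roughly this (partition sizes are just placeholders):

```
sda1   ~100M   /boot   (plain partition, outside LVM)
sda2   ~20G    /       (plain partition, outside LVM; no initrd needed)
sda3   rest    LVM PV  \
sdb1   whole   LVM PV  /  both PVs in one volume group
```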

----------

## 1clue

I have 4 identical drives.  /boot is RAID-1 on all 4 drives, so any of them can be /boot.  In my understanding, grub uses just one anyway, since it doesn't really understand RAID.  As long as your /boot is non-striped you're good.

Then I have the drives split into pairs, one volume group per pair.  I tried to set things up so that most of the time the disk accesses would be from opposing volume groups, to help reduce wait time.  Not sure how I would tell if I'm successful, but it seems quick enough.

So what I have is like this:

sd*1 is 512m, RAID autodetect.

sd*2 is SWAP, NOT RAID.  This is a problem if you actually use swap and the computer hibernates.

sd*3 is LVM autodetect, containing the rest of the drive.

----------

## 1clue

Sorry, an addendum to that.

/boot is RAID 0 EXT2 (no lvm), because there is no growing it.

On the other hand, everything managed by lvm2 is a partition type which can be grown while active.  That means no ext3, for example.

My strategy is to make the logical volumes as small as I can, and grow them when I need to.

----------

## NeddySeagoon

1clue

```
/boot is RAID 0
```

that sounds broken.

For grub and a raid1 /boot you must use raid superblock version 0.9.  That is no longer the mdadm default.

Raid superblock version 1.x is in a different location on the volume and prevents grub from booting the system.

Root can be a logical volume, but you need an initrd to run LVM before root can be mounted. If you have a natural aversion to initrds, root must be outside the LVM.
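As a sketch, creating such a /boot would go something like this (device names are assumptions):

```shell
# RAID 1 /boot that legacy grub can read: --metadata=0.90 puts the
# superblock at the end of the device, so the filesystem starts where
# grub expects it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=0.90 /dev/sda1 /dev/sdb1
```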

----------

## 1clue

Oops!  You're right, it's RAID 1.  Mirrored, not striped.

Are you saying my boot partition is going to stop working sometime?  It works fine right now.

----------

## NeddySeagoon

1clue,

Your /boot is fine. However, very recently, the mdadm default raid superblock version was changed from 0.9 to 1.2.

To use grub on a newly created raid1 /boot, mdadm must be explicitly told to make the raid set with version 0.9 raid superblocks.

Existing raid sets are not affected.

Another wrinkle for new raid users, is that the kernel only supports raid auto assembly for version 0.9 raid superblocks.

For root on raid, you either need kernel auto assembly or an initrd to run mdadm to assemble root.

Existing raid sets will continue to work until the kernel drops raid auto assembly; then we will all have to move to mdadm in an initrd.
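Roughly, the initrd would then have to do something like this before the real root can be mounted (sketch only):

```shell
# Assemble all arrays listed in /etc/mdadm.conf (or found by scanning)
mdadm --assemble --scan
# If root is also on LVM, activate the volume groups as well
vgchange -ay
```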

----------

## 1clue

To quote the alien dudes from Mars Attacks:  "Ack!  AckACKACK!"

And this is all better than what we had in what way?

FWIW though, if you're using RAID 1 then it doesn't really matter whether the kernel can auto-assemble, right?  It finds a bootable partition and boots from it, whether it's part of a RAID array or not.  RAID only applies when you update the boot configuration in some way, at which point it uses RAID to ensure that everything is modified the same way.

Or am I totally off base here?

----------

## NeddySeagoon

1clue,

That's exactly right.

Auto assembly (or not) is only an issue for root on raid.

----------

## Mad Merlin

I have a strong aversion to LVM: for all the features that sound useful, I rarely find myself in need of them. What I do always find is that LVM is overly complex, and that's a price you pay every day.

Anyways, I find mdadm does a better job of RAID than LVM does, so that's what I'd use for RAID. You can still use LVM on top of RAID if you wish.
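In other words, something along these lines (devices and names are assumptions): md does the striping, LVM does the volume management on top.

```shell
# Build a RAID 0 array from two partitions, then layer LVM on it
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -l 100%FREE -n data vg0
```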

----------

