# Combining two hard drives

## G-LiTe`

Okay, I'm getting a second 120 gig HD soon, next to my current 80 gig one. I'll probably format the second HD as one big reiserfs filesystem, and I'd like to combine that partition with my current root partition so the system sees it as a single filesystem spread over both HDs. I know this is possible with LVM, and I've done some googling around.

I'm running 2.6.0-test9 atm, though, which uses the new(?) device mapper or whatever it's called, another layer below LVM (or below LVM2, specifically).

I was just wondering if there were other, better alternatives and how safe LVM 2 is. Is there anything I should watch out for?

Because the primary purpose of this box is desktop usage, I was also wondering if this is actually a good idea. Will it slow down performance in any way? I play a bunch of games, if that has anything to do with it.

I'd like to hear any advice on this subject, because I have absolutely zero experience with it.  :Smile: 

Googling around didn't turn up much except for some mailing list discussions, mostly involving development. So I decided to ask here...

anyone?  :Smile: 

----------

## bmichaelsen

 *Quote:*   

> combine that partition with my current root partition so it'll recognize it as one spread over both HDs.

 

How about software RAID0? It will even make disk access faster...

More info about RAID ....

Greetz, Björn

----------

## rommel

Well, RAID0 is fine and all, but you would lose 40 gigs of HD since it would size the array based on the smaller of the two drives... I think you should try spanning or JBOD maybe... but LVM is supposed to be very nice... a little difficult to set up because it requires an initrd to boot, but there is a pkg in Portage that automates it... lvmuser-tools or something.

----------

## CheshireCat

I'm using Linux 2.6.0-test9 with reiserfs on LVM2 right now.  I have a 160GB and an 80GB drive, although the 80GB is not part of my volume group (yet).  I'm only using 100GB right now, with the rest set aside for migration to reiser4 when it's stable enough to trust.  The LVM2 package in portage doesn't come with a tool for making an initrd, but it's pretty easy to do yourself: you just need to make an image with the tools and libraries you need, and write a linuxrc script that initializes LVM2.  It can exit after this and just let the kernel mount the real root.
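As a rough illustration, a minimal linuxrc along those lines might look like the sketch below. The exact device nodes and tool paths are assumptions that depend on how the initrd is laid out; the tmpfs mount is there because LVM2 wants somewhere writable for its state files.

```shell
#!/bin/sh
# Sketch of a linuxrc for an LVM2 root; paths are assumptions, not
# a copy of any particular initrd.
/bin/mount -t proc proc /proc
/bin/mount -t tmpfs tmpfs /var   # LVM2 needs a writable /var for its state
/sbin/vgscan                     # scan all block devices for volume groups
/sbin/vgchange -ay               # activate them; device-mapper creates the nodes
/bin/umount /var
/bin/umount /proc
# Exiting here lets the kernel mount the real root (set via root= or rdev).
```

This is a boot-time fragment, so it only does anything useful from inside an initrd on a machine with LVM2 volumes.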

If you want, I can put a copy of my initrd online, either just an archive of it (if you want to modify it yourself) or the actual image, which is pretty much ready-to-use.  I'm working on making it work w/ initramfs next, but I believe right now that you need to patch your kernel to run anything on the initramfs.  Once it works, though, you won't need any ramdisk or extra filesystem drivers for it, and the "image" will just be a cpio archive.

As far as performance goes, I haven't seen any issues with normal usage.  I doubt it will seriously impact performance in any games you play.

----------

## Kesereti

 *rommel wrote:*   

> Well, RAID0 is fine and all, but you would lose 40 gigs of HD since it would size the array based on the smaller of the two drives... I think you should try spanning or JBOD maybe... but LVM is supposed to be very nice... a little difficult to set up because it requires an initrd to boot, but there is a pkg in Portage that automates it... lvmuser-tools or something.

 

I think that you're thinking of RAID1...unless I'm mistaken, both Linear mode and RAID0 (striped) give you the full capacity of the drives in the array, with the difference being that Linear mode just sticks the drives end-to-end, while RAID0 splits file writes (and as such, reads) between the two drives to enhance performance.  RAID1 (mirrored) gives you the capacity of one of the two drives, but gives you fault tolerance since a copy of every file is kept on each drive.

----------

## CheshireCat

I didn't think raid0 striping could work w/ drives of different sizes.  Btw, LVM can be configured to do striping as well...

----------

## Kesereti

 *CheshireCat wrote:*   

> I didn't think raid0 striping could work w/ drives of different sizes.  Btw, LVM can be configured to do striping as well...

 

It can't, I don't think...I just noticed that he had two different drive sizes =P  But Linear mode will work just fine, and accomplish what he's looking for (combining the two into one single monolithic filesystem) ... I don't know if there's any difference in performance between that or LVM, though, having never used Linear RAID or LVM myself ^_^

----------

## CheshireCat

I've never used any RAID, but LVM2 seems fast enough.  I can't imagine that linear mapping from a logical volume to the underlying devices is a lot of work; once the config is loaded, it amounts to: blocks a-b are on device 1, blocks c-d are on device 2, etc.  The main benefits of LVM are probably flexibility and snapshots.  Snapshots are great for backups: they basically make an existing volume copy-on-write (requiring free space for writes in the volume group, of course) and give you a "frozen" copy of the filesystem that you can mount for your backup.  You have to make sure the FS is in a consistent state when you take the snapshot, but doing this temporarily while a snapshot is set up is much less painful than keeping a volume unmounted or read-only for the whole backup...
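For what it's worth, the snapshot workflow sketched above would look roughly like this; the volume group `vg0`, volume `home`, and mount/backup paths are made-up names, and all of it needs root plus free extents in the volume group:

```shell
# Hypothetical names throughout: vg0/home is the volume to back up.
lvcreate --snapshot --size 1G --name home-snap /dev/vg0/home  # 1G of COW space
mount -o ro /dev/vg0/home-snap /mnt/snap      # the "frozen" copy
tar czf /backup/home.tar.gz -C /mnt/snap .    # back it up at leisure
umount /mnt/snap
lvremove -f /dev/vg0/home-snap                # drop the snapshot when done
```

The `--size` given to the snapshot only has to cover writes made to the origin volume while the snapshot exists, not the whole filesystem.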

----------

## nbensa

 *rommel wrote:*   

> Well, RAID0 is fine and all, but you would lose 40 gigs of HD since it would size the array based on the smaller of the two drives... 

 

Whaaaaaaaaat!!!????... Hey, I'm running RAID0 on different-sized HDs and I get the combined size of both...

```
$ cat /etc/raidtab
raiddev         /dev/md/0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              32
        device                  /dev/discs/disc1/part1
        raid-disk               0
        device                  /dev/discs/disc0/part10
        raid-disk               1
$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md/0             47345568  18595104  28750464  40% /home
$ sudo /sbin/fdisk -l /dev/discs/disc1/disc
/dev/discs/disc1/part1               1        3720    29880868+  fd  Linux raid autodetect
$ sudo /sbin/fdisk -l /dev/discs/disc0/disc
/dev/discs/disc0/part10            324        2498    17470656   fd  Linux raid autodetect
```

----------

## Barkotron

Hmm. I haven't used Linux software RAID yet, but if you're getting the combined size of both disks using different sized disks, then it's either a) not doing the RAID properly or b) not reporting it properly.

RAID 0, using two disks of differing sizes, will ONLY give you twice the size of the smaller disk. E.g. disk 0=40GB, disk 1=60GB, combined size of RAID 0 array = 80GB.

It can't do anything else - RAID 0 is striping the data across the two disks for faster access. What's going to happen if it tries to write on the "extra" 20GB on disk 1? Is it just going to throw the stripe data (which would have been on disk 0 if it was bigger) at /dev/null? Is it going to somehow magically revert to doing normal, un-striped writes to just that section of disk? Whatever, I wouldn't want to be trusting data to a RAID 0 array that acted in an excitingly unpredictable fashion like that. I would suggest, nbensa, that you have a good look at what is actually happening with your drives and make sure that you aren't losing anything.

Edit: oops, looks like I should have read how Linux RAID works - http://tldp.org/HOWTO/Software-RAID-HOWTO-2.html says that it will write normally to the "extra" bit.
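The capacity difference can be worked out with some quick shell arithmetic, using the 40GB + 60GB example from the post above (the "textbook" figure is what strict striping would give; the Linux md figure follows the HOWTO's description of writing the leftover space unstriped):

```shell
small=40; large=60                 # disk sizes in GB
strict_raid0=$((2 * small))        # textbook striping: limited by the smaller disk
linux_raid0=$((small + large))     # Linux md RAID0: the leftover 20GB is used unstriped
linear=$((small + large))          # linear/append mode: always the full total
echo "$strict_raid0 $linux_raid0 $linear"   # prints: 80 100 100
```

So with Linux software RAID, linear mode and RAID0 end up the same size; the difference is only in how the blocks are laid out.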

----------

## CheshireCat

I was thinking... in my case (and probably a lot of others), the larger drive is also the faster one.  I don't think anything supports this sort of mapping right now, but what if you had larger stripes on the larger drive, or interleaved stripes between drives at a ratio other than 1:1?  Think something like: first 4MB on drive 1, next 2MB on drive 2, etc.  This wouldn't work with software RAID (at least not in a way compatible with anything else), but it might not be too hard to fit into LVM...

----------

## G-LiTe`

Wow... Thanks for all the responses.  :Very Happy: 

I've read up a bit on RAID and, well, linear mode software RAID seems to be the same as LVM.

There's actually another problem, the 80 gig HD still has a windows partition (of about 20 gig) that I'm forced to use occasionally.

I know it's possible to keep that partition with LVM, because LVM uses partitions too, not the entire drive, right? Is that possible with RAID too?

And if so, I'll have to ask the inevitable question again of: which one is better?  :Very Happy: 

 *CheshireCat wrote:*   

> I'm using Linux 2.6.0-test9 with reiserfs on LVM2 right now.  I have a 160GB and an 80GB drive, although the 80GB is not part of my volume group (yet).  I'm only using 100GB right now, with the rest set aside for migration to reiser4 when it's stable enough to trust.  The LVM2 package in portage doesn't come with a tool for making an initrd, but it's pretty easy to do yourself: you just need to make an image with the tools and libraries you need, and write a linuxrc script that initializes LVM2.  It can exit after this and just let the kernel mount the real root.
> 
> If you want, I can put a copy of my initrd online, either just an archive of it (if you want to modify it yourself) or the actual image, which is pretty much ready-to-use.  I'm working on making it work w/ initramfs next, but I believe right now that you need to patch your kernel to run anything on the initramfs.  Once it works, though, you won't need any ramdisk or extra filesystem drivers for it, and the "image" will just be a cpio archive.
> 
> As far as performance goes, I haven't seen any issues with normal usage.  I doubt it will seriously impact performance in any games you play.

 

I would appreciate it if you could put your initrd online, because I have no experience with setting up those either.  :Smile: 

What I actually plan on doing right now (if I'm going the LVM way) is to partition the new HD as one large LVM partition and create a swap and a root volume on it, move everything from the old HD over, delete the partitions on the old HD and replace them with another LVM partition, then combine the two LVM partitions and enlarge the root volume.
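That plan could be sketched roughly like this with the LVM2 tools. All the device names and sizes here are assumptions for illustration (the new 120 gig drive as /dev/hdb, the freed partition on the old drive as /dev/hda3), not something tested on this layout:

```shell
# Phase 1: the new drive becomes one big LVM physical volume (partition type 8e).
pvcreate /dev/hdb1
vgcreate vg /dev/hdb1
lvcreate -L 1G   -n swap vg && mkswap /dev/vg/swap
lvcreate -L 100G -n root vg && mkreiserfs /dev/vg/root
# ...copy the old root across and reboot onto the LVM root, then
# Phase 2: reclaim the old drive's Linux space and grow the root volume.
pvcreate /dev/hda3                 # freed partition on the old 80 gig drive
vgextend vg /dev/hda3
lvextend -L +50G /dev/vg/root      # +50G is illustrative
resize_reiserfs /dev/vg/root       # reiserfs can be grown while mounted
```

These commands need root and real block devices, so treat them as a checklist rather than a script to paste.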

I actually wanted to set the old hd to be slave but windows probably won't like that.

Does that sound any good?  :Smile: 

Another question: right now I've got a /boot, root and swap partition. Can grub boot from the /boot partition if it's managed by LVM or RAID?

Thanks again for all the advice, keep it coming.  :Wink: 

----------

## CheshireCat

Okay, put up two files.  http://user.pa.net/~dbblm/initrd.gz is a gzipped romfs filesystem, and should be usable as long as you have romfs and tmpfs built into your kernel.  You don't need to decompress it, that should happen automatically at boot time.  http://user.pa.net/~dbblm/initrd.tar.gz is a tarball of the files in the initrd, in case you want to modify it.  AFAIK grub can't deal with LVM logical volumes, I'm using a tiny ext2 partition for /boot.

If you're interested in hacking it, everything in the initrd is built against uClibc, a small C library originally developed for embedded systems.  You can get the library, and a ready-made development environment for it, at http://www.uclibc.org.  The development environment is very helpful: it's an ext2 image that you can mount on loopback and chroot into, with everything built against uClibc.  The other stuff in the initrd is mostly busybox.  The LVM tools need some minor edits to make them compile against uClibc.
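Using a loopback-mounted dev environment like that amounts to something along these lines; the image filename is an assumption (check uclibc.org for the real one):

```shell
# Hypothetical image name; substitute whatever uclibc.org actually ships.
mkdir -p /mnt/uclibc
mount -o loop uclibc-dev.img /mnt/uclibc   # the ext2 dev environment
chroot /mnt/uclibc /bin/sh                 # build against uClibc from inside
```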

----------

## G-LiTe`

 *CheshireCat wrote:*   

> Okay, put up two files.  http://user.pa.net/~dbblm/initrd.gz is a gzipped romfs filesystem, and should be usable as long as you have romfs and tmpfs built into your kernel.  You don't need to decompress it, that should happen automatically at boot time.  http://user.pa.net/~dbblm/initrd.tar.gz is a tarball of the files in the initrd, in case you want to modify it.  AFAIK grub can't deal with LVM logical volumes, I'm using a tiny ext2 partition for /boot.

 

Sounds great! Thanks  :Smile: 

I'll have to recompile my kernel then, but that shouldn't be a problem. I'm currently using a 7MB ext2 partition for /boot too, so I'll just keep that.  :Smile: 

 *CheshireCat wrote:*   

> If you're interested in hacking it, everything in the initrd is built against uClibc, a small C library originally developed for embedded systems.  You can get the library, and a ready-made development environment for it, at http://www.uclibc.org.  The development environment is very helpful: it's an ext2 image that you can mount on loopback and chroot into, with everything built against uClibc.  The other stuff in the initrd is mostly busybox.  The LVM tools need some minor edits to make them compile against uClibc.

 

I've heard of that, yes. I'll take a look, even if it's just out of interest.  :Smile: 

Thanks again.  :Very Happy: 

----------

## G-LiTe`

Okay, looks like initrd doesn't like me. I get errors like:

```
VFS: Unable to mount root fs on unknown-block(0,0)
```

Romfs and tmpfs are compiled in, and so is initrd support. I put the initrd line in grub.conf, and I am able to mount the image manually. I don't really see what's going wrong.

I've tried several root= parameters on the commandline, such as root=/dev/vg/0, root=fe00, root=/dev/rd/0, root=100, none work.

CheshireCat: Can you show me the lines in grub.conf you use? Or any other idea what is going wrong?

Google doesn't turn anything up. I've asked in several places but nobody can answer me.  :Confused: 

It's not even running the /linuxrc script, there's no output at all. Grub does find the initrd, however. And so does the kernel. (It tells me that just before it panics)

I'd really like to get this to work. It's really frustrating...  :Confused: 

----------

## marchino

 *CheshireCat wrote:*   

> Okay, put up two files.  http://user.pa.net/~dbblm/initrd.gz is a gzipped romfs filesystem, and should be usable as long as you have romfs and tmpfs built into your kernel.  You don't need to decompress it, that should happen automatically at boot time.  http://user.pa.net/~dbblm/initrd.tar.gz is a tarball of the files in the initrd, in case you want to modify it.  AFAIK grub can't deal with LVM logical volumes, I'm using a tiny ext2 partition for /boot.
> 
> 

 

 :Cool:   Really helpful! It made me upgrade my kernel to 2.6 with an LVM root partition! Thanks CheshireCat!

----------

## CheshireCat

G-LiTe`: I had some difficulties like this when I first switched to LVM.  If you specify a root device on the kernel command line, the kernel seems to ignore the initrd.  Two questions:  if you use the initrd without a root= parameter, does the linuxrc script run?  And, if so, does it panic with "unable to mount root fs" after the linuxrc script finishes?

If this is happening, here's what you need to do.  Set up grub.conf with the initrd, without any root= parameter on the kernel command line, then use rdev on the kernel that you'll be booting with LVM.  This is a little tricky, the easiest way to do this is to have LVM set up already (you need to boot from your old root with LVM installed, and with the new hard drive in the machine).  You can then use the following:

```
rdev /dev/volumegroup/logicalvolume /path/to/kernel/image
```

This will set the block device that the kernel image attempts to mount as root after linuxrc finishes.  As an alternative, you can use a comma-separated pair of integers to specify the major and minor numbers of the device, for example "254,0" for the first logical volume of the first volume group.

For reference, here's my grub.conf:

```
default 0
timeout 2
splashimage=(hd0,0)/grub/splash.xpm.gz

title=Linux
root (hd0,0)
kernel /bzImage-2.6.0-test9-smng-reiser4 vga=0x30c pci=noacpi
initrd /initrd.gz
```

Good luck!

----------

## G-LiTe`

 *CheshireCat wrote:*   

> G-LiTe`: I had some difficulties like this when I first switched to LVM.  If you specify a root device on the kernel command line, the kernel seems to ignore the initrd.  Two questions:  if you use the initrd without a root= parameter, does the linuxrc script run?  And, if so, does it panic with "unable to mount root fs" after the linuxrc script finishes?
> 
> If this is happening, here's what you need to do.  Set up grub.conf with the initrd, without any root= parameter on the kernel command line, then use rdev on the kernel that you'll be booting with LVM.  This is a little tricky, the easiest way to do this is to have LVM set up already (you need to boot from your old root with LVM installed, and with the new hard drive in the machine).  You can then use the following:
> 
> ```
> ...

 

I got it working, it wasn't the root= parameter.  :Smile: 

The problem was romfs... guess it just didn't like it. I redid it in ext2 (and upgraded some stuff on it while I was at it  :Wink: ) and it worked flawlessly.  :Smile: 

(Actually, I just upgraded uclibc and devmapper... busybox and lvm failed to compile but your copy worked fine anyways, even with the newer libraries)

Thanks for all the help.  :Smile:  That new hd sure is almost silent. Was looking out the window and didn't notice it was done booting already.  :Very Happy: 

----------

## CheshireCat

 *G-LiTe` wrote:*   

> 
> 
> I got it working, it wasn't the root= parameter. 
> 
> The problem was romfs... guess it just didn't like it. I redid it in ext2 (and upgraded some stuff on it while I was at it ) and it worked flawlessly. 
> ...

 

Great to hear that you got it working!  If you're using it with ext2, you can take out the stuff with the tmpfs mount on /var in the linuxrc script.  This is only necessary because LVM expects a writable /var, and romfs is a read-only filesystem.

----------

