# HOWTO: Mount / in RAM and load apps instantly

## thebigslide

I figured this out based on this post: 

So you want to mount / in RAM for a super-speedy system?

Here's what you need to make your gentoo FLY

/usr must be on its own partition

/home must be on its own partition if it's large or you use it for storage

/root must be on its own partition if you're putting anything big in it

/var must be on its own partition (so we don't fill up the RAM drive with logs and the portage cache)

an empty directory called /newroot

You must have a partition to store the tarballs on (I use the same partition that ends up being /root), and it can't be /usr. Maybe use the partition that was / during the install.

The computer must have a spare 176MB of RAM or so (depending on how much you want to load into RAM).
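How much you need depends on what you load. A quick way to gauge it (a sketch assuming GNU/BusyBox du; adjust the directory list to whatever you actually plan to load into RAM):

```
# Total size, in MB, of the directories that will live in RAM.
# Missing directories are silently skipped; the last line is the total.
du -scm /bin /sbin /lib /usr/bin /usr/sbin /usr/lib 2>/dev/null | tail -n 1
```

Add some headroom on top of the reported total, since tmpfs sizes are hard limits.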

You need ramdisk, initial RAM disk (initrd), and loopback device support built into the kernel, not as modules.

These options are under Device Drivers -> Block devices.
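In .config terms, that means the following must end up set to y (built in), never m. The option names are from mainline 2.6-era kernels; verify them in your own menuconfig:

```
# Device Drivers -> Block devices:
CONFIG_BLK_DEV_RAM=y       # RAM block device (ramdisk) support
CONFIG_BLK_DEV_INITRD=y    # initial RAM disk (initrd) support
CONFIG_BLK_DEV_LOOP=y      # loopback device support
```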

The performance boost from each directory, in decreasing order of magnitude, seems to be:

/usr/lib

/lib

/usr/bin

/bin

/usr/sbin & /sbin 

Step 1

Install as normal

Step 2

Generate the tarballs that will populate our RAM drives.

Put this script in /sbin so you can re-run it whenever you update your system (make sure STORE is mounted first, if applicable!):

```
echo /sbin/update-balls >> /etc/conf.d/local.stop
chmod +x /sbin/update-balls
cat /sbin/update-balls
##############
#!/bin/sh
CURRDIR=`/bin/pwd`
STORE="root"
cd /
# Exclude anything that's on its own partition here
tar cfp ${STORE}/fs.tar * --exclude=usr/* --exclude=root/* --exclude=home/* \
        --exclude=proc/* --exclude=sys/* --exclude=tmp/* --exclude=var/*  \
        --exclude=opt/*
cd /usr/
# rm -fr /usr/bin /usr/sbin /usr/lib
# cp -a /usr/.bin /usr/bin
# cp -a /usr/.sbin /usr/sbin
# cp -a /usr/.lib /usr/lib
cd bin && tar cfp /${STORE}/usr_bin.tar *
cd ../sbin && tar cfp /${STORE}/usr_sbin.tar *
cd ../lib && tar cfp /${STORE}/usr_lib.tar *
# rm -fr /usr/bin /usr/sbin /usr/lib
# mkdir /usr/bin /usr/sbin /usr/lib
cd $CURRDIR
```

Step 3

Now we have to make an initrd that populates our RAM drive before the real init runs:

```
mount /boot # if necessary
touch /boot/initrd
dd if=/dev/zero of=/boot/initrd bs=1024k count=8
losetup /dev/loop0 /boot/initrd
mke2fs /dev/loop0
```

Now loop0 is set up as the initrd image. Time to populate it:

```
mkdir /mnt/initrd
mount /dev/loop0 /mnt/initrd
cd /mnt/initrd
mkdir etc dev lib bin proc new store
touch linuxrc etc/mtab etc/fstab
chmod +x linuxrc
for I in sh cat mount umount mkdir chroot tar; do cp /bin/$I bin/; done
cp /sbin/pivot_root bin/
```

We need a /newroot directory to hold the initrd after the system's booted.

```
mkdir /newroot
```

Now we have to copy the libraries that each of these binaries needs. You can determine them like so:

```
ldd /bin/sh
        linux-gate.so.1 =>  (0xffffe000)
        libdl.so.2 => /lib/libdl.so.2 (0xb7fe2000)
        libc.so.6 => /lib/tls/libc.so.6 (0xb7eca000)
        /lib/ld-linux.so.2 (0xb7feb000)
```

This means we need /lib/libdl.so.2, /lib/tls/libc.so.6, and /lib/ld-linux.so.2.

Here's what I needed in total:

```
ls -R lib
lib:
ld-linux.so.2  libblkid.so.1  libdl.so.2  libuuid.so.1  tls
lib/tls:
libc.so.6  libpthread.so.0  librt.so.1
```
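Chasing these down by hand is tedious; the lookup can be scripted. A sketch (the grep pattern and the /mnt/initrd path are assumptions, and cp --parents is a GNU coreutils option; eyeball the result afterwards):

```
#!/bin/sh
# For every binary copied into the initrd, pull the library paths out of
# ldd output and copy them in, preserving the layout (so lib/tls/ survives).
INITRD=/mnt/initrd
for BIN in "$INITRD"/bin/*; do
    # ldd prints "libfoo => /path (addr)" lines; extract the /path tokens
    for LIB in $(ldd "$BIN" 2>/dev/null | grep -o '/[^ )]*'); do
        cp --parents "$LIB" "$INITRD"   # recreates /lib/... under $INITRD
    done
done
```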

Please check each of your binaries in case you need something I don't.  Then we need to write the linuxrc script that does the dirty work:

```
cat /mnt/initrd/linuxrc
################
#!/bin/sh
export PATH=/bin
STOREDEV=/dev/hda10
STORE=/store
ROOTSIZE=128m
# Get kernel CMDLINE
mount -t proc none /proc
CMDLINE=`cat /proc/cmdline`
umount /proc
mount $STOREDEV $STORE
# Mount root and create read-write directories
mount -t tmpfs -o size=$ROOTSIZE none /new/ > /dev/null 2>&1
cd /new/ && tar xpf $STORE/fs.tar > /dev/null 2>&1
umount $STOREDEV
# Pivot root and start real init
cd /new
pivot_root . newroot
# dev/console is deliberately relative: after the chroot, /new is /
exec chroot . /bin/sh <<- EOF >dev/console 2>&1
exec /sbin/init ${CMDLINE}
EOF
```

Once that's done, we need to make the device nodes that this will use:

```
mknod /mnt/initrd/dev/console c 5 1
mknod /mnt/initrd/dev/null c 1 3
mknod /mnt/initrd/dev/hda b 3 0
mknod /mnt/initrd/dev/hda4 b 3 4
mknod /mnt/initrd/dev/hda10 b 3 10
```

You only need the nodes for the mounts that the linuxrc script uses (see /usr/src/linux/Documentation/devices.txt)

And that's it for the initrd 

```
umount /mnt/initrd
```

Step 4 

Modify /etc/init.d/localmount 

```
start() {
        USRBINSIZE=32m
        USRSBINSIZE=2m
        USRLIBSIZE=256m
        # Mount local filesystems in /etc/fstab.
        ebegin "Mounting local filesystems"
        mount -at nocoda,nonfs,noproc,noncpfs,nosmbfs,noshm >/dev/null
        eend $? "Some local filesystem failed to mount"
        ebegin "Mounting RAM filesystems"
        mount -t tmpfs -o size=$USRBINSIZE none /usr/bin > /dev/null 2>&1
        mount -t tmpfs -o size=$USRSBINSIZE none /usr/sbin > /dev/null 2>&1
        mount -t tmpfs -o size=$USRLIBSIZE none /usr/lib > /dev/null 2>&1
        cd /usr/bin && tar xpf /root/usr_bin.tar > /dev/null 2>&1
        cd /usr/sbin && tar xpf /root/usr_sbin.tar > /dev/null 2>&1
        cd /usr/lib && tar xpf /root/usr_lib.tar > /dev/null 2>&1
        eend $? "Some RAM filesystems did not mount"
}
```

Step 5

Modify the bootloader

```
cat /boot/grub/grub.conf
################
timeout 3
default 0
# For booting GNU/Linux from an existing install (rescue)
title  Gentoo
root (hd0,0)
kernel /bzImage root=/dev/ram0 rw init=/linuxrc video=vesafb:ywrap,pmipal,1024x768-16@70
initrd /initrd
```

Step 6

If you find that /usr/lib is too big to make a reasonable RAM drive, perhaps move some things to /usr/local/lib/ and link them, eg:

```
cd /usr/lib
for I in perl5 python2.3 portage modules gcc gcc-lib; do
        mv $I ../local/lib/
        ln -s ../local/lib/$I $I
done
```

Putting portage in the RAM drive sure is a nice speedup, tho.

```
time /usr/bin/emerge -s mozilla
real    0m3.680s
user    0m2.978s
sys     0m0.131s
```

Step 7

Finalizing

```
mv /usr/sbin /usr/.sbin
mv /usr/bin /usr/.bin
mv /usr/lib /usr/.lib
reboot
```

###########Aside##########

If you just want to load certain applications from a RAM disk, you can do something like the following

```

## do this in advance
tar cpf /root/preload.tar /usr/bin/firefox /lib/and /lib/all /usr/lib/of /usr/lib/the /lib/raries/ it's/dependent /lib/on
## replace all the original bins and libraries with links to /preload/whatever
## then put this in /etc/conf.d/local.start
mount -t tmpfs -o size=128m none /preload > /dev/null 2>&1
cd /preload && tar xfp /root/preload.tar
```

Last edited by thebigslide on Mon Mar 14, 2005 4:32 pm; edited 25 times in total

----------

## BlindSpy

This is probably one of the coolest things I've ever seen! I should just buy like 5 gigs of RAM and have as much RAM as my laptop has HDD space.

----------

## bet1m

Or put only /lib and /usr/lib in RAM; Firefox will load in 0.01 sec.  :Very Happy: 

----------

## Need4Speed

 *bet1m wrote:*   

> Or put only /lib and /usr/lib in RAM; Firefox will load in 0.01 sec. 

 

 :Shocked:   :drool: 

I've got to get more RAM!!  :Twisted Evil:   I know a guy who bought a dual Opteron system with 16 gigs of RAM just so he could do this.

----------

## thebigslide

Rewritten following a reinstall.

This should suit anyone with between 256MB and 1GB of RAM now.

----------

## Jinidog

Oh, how sad that my /usr/lib is as big as my RAM (1 GB).

This all achieves nothing once something gets swapped.

I would rely on the kernel's caching: if there is free RAM, disk data is cached, and if the applications need the RAM, they use it. Then there is no swapping (swapping costs time).

I don't think this will speed up anything in real life.

----------

## thebigslide

Maybe you should clean out your /usr/lib/?  Mine is only 200MB and I have TONS of apps installed.  

Amended the howto and provided an example of precaching one app.

----------

## stahlsau

nice howto, thank you!

I'll test it anyway, but do you notice a performance boost? I've tried to put some files and libs on a ramdisk before, but it wasn't faster than without...

----------

## thebigslide

I am going to try this on a 29-node LTSP server this weekend. Right now, if everyone opens Firefox, it sits and chugs for minutes before everyone has it open. We'll see if this makes a difference.

For me, my HD is damn fast anyway. I just replaced it with a 200GB Seagate. It is definitely faster with the libs running off RAM (I only have 512MB), but it was pretty fast to begin with.

I like the idea of doing this with a CD-booting OS so the damn disk doesn't have to keep spinning up and down, tho.

----------

## Zuti

thebigslide, i see you often msg-ing the tweaks posts  :Wink: 

Dude, you are so cool. 

Tnx.

----------

## COiN3D

Hey there,

Nice guide, but I think some parts are hard to understand.

 *Quote:*   

> /usr must be on it's own partition
> 
> /home must be on it's own partition if it's large or you use it for storage
> 
> /root must be on it's own partition if you're putting anything big in it
> ...

 

Do you mean I should create more partitions and update my /etc/fstab, or is it okay to leave these on my root partition? Do I generally have to edit my fstab file?

Sorry if some of those questions sound stupid to you, but I'm really looking forward to trying it out  :Wink: 

----------

## paperp

Just some info for a non-geek: if I move /lib and /usr/lib to RAM, will usual apps like Firefox, Evolution, and maybe X itself gain a big speed advantage?

----------

## float-

really sweet howto

----------

## thebigslide

 *paperp wrote:*   

> Just some info for a non-geek: if I move /lib and /usr/lib to RAM, will usual apps like Firefox, Evolution, and maybe X itself gain a big speed advantage?

 

Apps open instantly, but they don't run any faster.  What this does is eliminate the bottleneck of loading the applications and their dependent libraries into RAM off the HD.  Also, RAM seeks WAY faster than a HD, so if multiple instances of an app are called, the system doesn't sit and chug for 10 minutes while the hard-drive tries to do 100 things at once.

I set this up on an LTSP server with 1GB of RAM and it was quite fast with 4 clients connected.  I then powered up the other 30 workstations and it ran out of RAM (because they have KDE on the silly thing) and ran horribly because it was swapping.  30 KDE desktops sure use a lot of RAM (about 40MB a piece).  The RAMdisk was 512MB and had /bin, /sbin, /lib, openoffice firefox, AND most of /usr/lib on it.

----------

## odegard

How much work would it be to make a script that lets you pick programs already installed, creates the necessary dirs, and puts everything into RAM upon reboot?

It should be a simple option to move programs to and from memory...

Imagine what this would do for Linux! Next time you show a friend of yours Linux, just fire up firefox/whatever and *zip* it's there. Startup times alone make a huge impression.

Bravo, great work!  :Cool: 

----------

## rcxAsh

 *thebigslide wrote:*   

> Maybe you should clean out your /usr/lib/?  Mine is only 200MB and I have TONS of apps installed.  
> 
> Amended the howto and provided an example of precaching one app.

 

Wow, my /usr/lib is nearly 1GB as well... how would you suggest cleaning it out without breaking anything?

This RAM stuff sounds really cool.. I just wish I had more RAM to burn.  Only 256MB here.  Though, perhaps I may try it out with single applications later.

----------

## madbiker

Mine is also nearly 1 GB. It's looking like thebigslide might be the minority instead of the norm on this one.

I'll admit I have a fair number of programs installed, but I clean out old and unused packages regularly. I need everything I have installed right now... too bad.

----------

## thebigslide

Hmm, you guys are missing a big plus to this.  By putting critical things on a RAM drive and umounting the partition that holds the tarballs and the update script, you are preventing changes to those files except when you run the script to do so.  If someone were to h@><0r your b0><0r, they'd be changing things in a RAM drive that is flushed as soon as you reboot!  This is also awesome fun for a honey pot running SSH with a typical password or a service with a well known vulnerability.  You can see how people go about exploiting these boxes by simply diffing the filesystems and learn how to better secure your _real_ servers (which are also packing a root RAM drive for security).

To those of you with large /usr/libs: did you look to see what was using the space?  I'm just curious.  If you use rox-filer, there's a nice count feature that tells you the total folder size for multiple selected folders.  You can get a nice idea of where your disk-space is going. 

Also, you can easily make a folder in /usr called preload (or whatever) and mount that in a RAM drive, link things back to /usr/lib, and the silly symlinks that rely on .. to point to /usr will still work  :Smile:   This lets you pick and choose what is and isn't loaded into RAM very easily and could be automated with a simple bash script.  

odegard:

My buddy Cory is working on one of these right now that takes the binary name as an argument, copies it and all its dependent libraries to a tmpfs mount, and links back after renaming the originals .whatever.  The script will soon have start and stop functions and take a config file.  Might make an interesting addition to some people's init scripts.
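I don't have Cory's script, but a minimal sketch of the idea might look like this (the function name, the flat /preload layout, and the .orig suffix are my assumptions; mounting the tmpfs itself needs root and is left to you):

```
#!/bin/sh
# preload_app: copy a binary plus its ldd-reported libraries into a
# RAM-backed directory, then symlink the original paths to the RAM copies.
# $PRELOAD should already be a mounted tmpfs, e.g.:
#   mount -t tmpfs -o size=128m none /preload
PRELOAD=${PRELOAD:-/preload}

preload_app() {
    BIN=$1
    mkdir -p "$PRELOAD"
    # ldd prints "libfoo => /path (addr)"; pull out the /path tokens.
    # Filter out $BIN itself in case ldd echoes it back for non-ELF files.
    LIBS=$(ldd "$BIN" 2>/dev/null | grep -o '/[^ :)]*' | grep -v "^$BIN\$" | sort -u || true)
    for F in "$BIN" $LIBS; do
        cp "$F" "$PRELOAD/"                       # RAM copy
        mv "$F" "$F.orig"                         # keep the original as .orig
        ln -s "$PRELOAD/$(basename "$F")" "$F"    # old path -> RAM copy
    done
}

# Run only when a binary path is given on the command line.
if [ $# -ge 1 ]; then
    preload_app "$1"
fi
```

Run as, e.g., preload_app /usr/bin/firefox; the start/stop functions and config file described above are left out of this sketch.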

Also, btw, I have shown this to a hard-core winblowz user who swore he'd never convert to Linux on his desktop because it appears slower (even though the HTPC I made him runs KDE on Gentoo on a dual AMD 1800+ box  :Surprised: ).  He was jealous.

People like me are working towards making Linux easier for the Windows-transition crowd.  Having apps start up fast is a big thing for those people (especially if they've already spent a ton of $$ on a computer so it will have acceptable performance under Windows).  Other projects this particular Gentooer is working on include a GUI-based LiveCD and various lightweight stage4 tarballs that can be rebuilt with -e world transparently after install, to let people install a working Gentoo OS with under 5 minutes of interactivity (and that in a GUI), along with post-stages that add functionality in different areas as binaries and can easily be recompiled/optimized later.  All use ebuild and portage.  Look out for more howtos  :Smile: 

I love Gentoo.

----------

## thebigslide

 *failcase wrote:*   

> Hey there,
> 
> nice guide, but some parts are hard to understand I think.
> 
>  *Quote:*   /usr must be on it's own partition
> ...

 

Sorry I missed your post earlier.  

You must have any large folder on its own physical partition on the disk. 

You will have to edit fstab if you'd like the partitions mounted by Gentoo automatically.

/var especially must be on its own partition, or someone can DoS your logger (and maybe other things) by making it fill up the RAM drive.

----------

## stahlsau

Nah, I've tried it, but I failed with the common "kernel panic: unable to find init. Try passing init= at the kernel line".

The initrd is found and the kernel boots fine, then this error. But:

- I HAVE linuxrc

- I HAVE passed the option

Dunno. I remember having this problem when I built my own stage4 LiveDVD, and I fixed it, but I can't remember how.

And I've read almost every post on this forum related to this.

OK, some description:

- linuxrc is on / in my initrd

- line from grub: 

```
kernel /bzImage root=/dev/ram0 rw init=/linuxrc ramdisk=32768
```

- I also changed the init= to every path I could think of and copied the file everywhere; no go.

Any ideas anyone?

----------

## thebigslide

is linuxrc executable? 

What are its contents?

You can try setting init=/bin/sh in grub and then enter your initrd line by line, also.

----------

## asph

great idea and howto, thanks

----------

## stahlsau

hi,

@thebigslide:

linuxrc is executable and works if I start it by hand, and if I set init=/bin/bash I get a shell and can do my job. 

Only when I call linuxrc it doesn't work, even if I copy it to /bin and call init=/bin/linuxrc from grub. Dunno.

Thanks for the help; maybe someone has another answer for me  :Wink: 

Another one:

If I call "pivot_root . newroot" it tells me "no such device...". It only works for me if I set newroot to some available directory.

Now a tip from me:

I don't use your tar solution for /bin, /sbin, etc. Instead I use my old root partition, which I mount at /store. Then I copy /bin, /etc, /sbin, ... over to my root ramdrive (~50MB), unmount /store and proceed. So I don't need to modify localmount and don't need to untar (takes too much time for me)  :Wink: 

for backing up my files before reboot/halt, i use this lil script:

```
#!/bin/sh
cd /
mount /dev/hda3 /hdd
rsync -auv --delete /bin /etc /sbin /lib /root /hdd/
echo "Backup complete."
umount /hdd
```

just put it into /etc/conf.d/local.stop.

----------

## thebigslide

 *stahlsau wrote:*   

> 
> 
> Another one:
> 
> if i call "pivot_root . newroot" it tells me "no such device...". It only works for me if i set newroot to some available directory.
> ...

 

You need to mkdir /newroot on your root filesystem before tarring it up  :Wink:   /newroot is the path where the initrd will reside following the pivot_root, so it must exist for the command to succeed.

----------

## stahlsau

 *Quote:*   

> You need to mkdir /newroot on your root filesystem before tarring it up. /newroot is the path where the initrd will reside following the pivot_root, so it must exist for the command to succeed.

 

Yeah, I noticed this  :Wink: 

Anyway, 'til now there's no great speed gain. Not that I would notice it, at least. I'll try moving some of the "most-wanted" libs to the ramdisk; maybe things'll change then. All of 'em won't work, since my /lib is 1.5G...

Any ideas what else could help speed things up? I've got ~550MB of RAM free  :Wink: 

----------

## thebigslide

https://forums.gentoo.org/viewtopic-p-2125152.html

Just preload what's slow?

----------

## darkfolk

2 things;

first, on one of the "mknod" commands, you spelt one as "mknot". Typo?

second, you should all be reminded that you need RAM Disk Support and initrd support in the kernel; I didn't have it and had to reset this. other than that, I'm going to try this out with love2...

----------

## thebigslide

Thanks for the 'bugs'.  I've updated the howto.

----------

## Enlight

Well, in fact I hate this post  :Wink:  ; as soon as I saw it, I felt obliged to buy two gigs of RAM (I had only 256MB) and it hurts  :Twisted Evil:  Hope they're coming soon   :Very Happy: 

Thanks for this ass kicking post!

----------

## joKer-O-zen

Hi

great howto ... thks a lot  :Smile: 

 *stahlsau wrote:*   

> hi,
> 
> @thebigslide:
> 
> linuxrc is executable, works if i start it by hand, and if i set init=/bin/bash i get a shell and can do my job. 
> ...

 

Same problem here... did you find out how to fix it?

----------

## thebigslide

You must create the /newroot directory in the root filesystem before making the root tarball.  I've modified the howto, although this is a hack; really, I should modify the linuxrc to not bother pivot_rooting and just chroot instead, eliminating the need for this non-standard directory.  I'll poke at that at work tomorrow.  I just hate rebooting my box remotely in case I spell something wrong in a critical file and it doesn't come back  :Surprised: 

----------

## joKer-O-zen

I created the /newroot directory, and it's in the tarball ... :/

When I put init=/bin/sh instead of init=/linuxrc I get a prompt. I can run ./linuxrc, but I get an error with init... I'm trying to fix that first...

[EDIT]

I couldn't fix this problem... so I left the root filesystem on its own partition.

Perhaps I'll create a tarball (or just copy the files?) for /etc/.

What do you think about that? Is mounting the / fs in RAM really efficient? 

[/EDIT]

----------

## kimchi_sg

/newroot is not supposed to be in the tarball! It stays right on the / partition.

thebigslide, thanks for giving us tweakers a new way to speed up our systems.  :Very Happy: 

A few suggestions:

chmod +x /sbin/update-balls seems to be mandatory. But it is not included in your post.

In this snippet from the linuxrc script:

```
# Mount root and create read-write directories
mount -t tmpfs -o size=$ROOTSIZE none /new/ > /dev/null 2>&1
cd /new/ && tar xpf $STORE/fs.tar > /dev/null 2>&1
umount $STOREDEV
# Pivot root and start real init
cd /new
pivot_root . newroot
```

Is the /new/ directory a typo, to be replaced by /newroot/, or have I misunderstood you?

Finally, my snag when trying it all out - I get this fatal error when kernel boots  :Crying or Very sad:  :

```
RAMDISK: ext2 filesystem found at block 0
RAMDISK: image too big! (8192KiB/4096KiB)
UDF-fs: No partition found (1)
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
```

----------

## thebigslide

 *kimchi_sg wrote:*   

> /newroot is not supposed to be in the tarball! It stays right on the / partition.

  / IS the tarball if you're mounting it on a ramdrive  :Wink: 

 *Quote:*   

> chmod +x /sbin/update-balls seems to be mandatory. But it is not included in your post. 

 added

 *Quote:*   

> 
> 
> ```
> # Mount root and create read-write directories 
> 
> ...

 

It's supposed to be there.  /new becomes / after the pivot_root; /new is where the / filesystem's tarball gets extracted to, from the initrd's perspective.

 *Quote:*   

> Finally, my snag when trying it all out - I get this fatal error when kernel boots  :
> 
> ```
> RAMDISK: ext2 filesystem found at block 0
> 
> ...

 In your kernel config, increase the size of your initial ramdisk to 8192k.  It looks like it's set to 4096k right now.  The UDF error appears because the kernel probes filesystem types in turn and tried UDF after ext2 failed.

 :Smile: 

----------

## thebigslide

 *joKer-O-zen wrote:*   

> I created /newroot directory, and it's in the tarball ... :/
> 
> when i put init=/bin/sh instead of init=/linuxrc i got a prompt. I can run ./linuxrc but i got an error with init ... i'm trying to fix that first ...
> 
> [EDIT]
> ...

 

The performance boost is going to come from having /bin, /sbin (if you use the console lots), and /lib in RAM.  You can make a tarball for every directory you want in RAM if mounting / in RAM is not working for you.  Try stepping through the linuxrc line by line and see where the problem is occurring.

----------

## kimchi_sg

Hmm... now it got past that error, but now linuxrc execution dies with

```
VFS: Mounted root (ext2 filesystem).
Freeing unused kernel memory: 192k freed
mount: mount point /root does not exist
umount /dev/hda5: not mounted
pivot_root: No such file or directory
linuxrc: line 21: dev/console: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
```

Darn... I must have messed up my linuxrc somewhere.

Come to think of it, I moved the /usr/.bin, /usr/.lib and /usr/.sbin directories into their original positions so that I could recompile the kernel, and I forgot to move them back. Will try moving them back to see if it helps.

EDIT: That didn't do the trick.

Why on earth is it complaining about the lack of /root ?!  :Confused:  Also, I think that dev/console in the linuxrc script should be /dev/console instead.

----------

## thebigslide

I think it's dying on 

mount $STOREDEV $STORE 

Are you mounting /dev/hda5 on /root?

Did you mkdir a /root in the initrd?

----------

## kimchi_sg

OK, got past that error after making a /root in the initrd, but now system services fail to start. Right after entering runlevel 3:

```
INIT: Entering runlevel: 3
/sbin/rc: line 25: syslog-ng: command not found
* Configuration error. Please fix your configfile (/etc/syslog-ng/syslog-ng.conf) [!!]
[coldplugging, setting of domain name and bringing up eth0 were successful, omitted]
* Starting distccd...
* /usr/bin/gcc-config: Profile does not exist or invalid setting for /etc/env.d/gcc/i686-pc-linux-gnu-3.4.3-20050110 [ok]
* Starting gpm...
start-stop-daemon: stat /usr/sbin/gpm: No such file or directory [!!]
[mounted network filesystems and set clock using NTP - completed OK]
* Starting sshd...
start-stop-daemon: Unable to start /usr/sbin/sshd: No such file or directory [!!]
* Starting vixie-cron...
start-stop-daemon: stat /usr/sbin/cron: No such file or directory [!!]
* Starting local... [ok]
```

EDIT: running update-balls fixes the problem, but syslog-ng still errors out, with a different error this time:

```
* Initializing random number generator... [ok]
INIT: Entering runlevel: 3
/sbin/rc: line 532: 6021 Bus error                    syslog-ng -s /etc/syslog-ng/syslog-ng.conf
* Configuration error. Please fix your configfile (/etc/syslog-ng/syslog-ng.conf) [!!]
```

A look in /sbin/rc shows that line 532 is a comment line  :Exclamation:  , specifically, the line "# the current "normal" runlevel" in this context:

```
# The -f eliminates a warning...
# ...
# ... and
# the current "normal" runlevel.
ln -snf "/etc/init.d/${x}" "${svcdir}/softscripts.new/${x}"
```

This error has me really stumped.  :Confused: 

However, all the subsequent problems with gpm and sshd et al. have disappeared. Seems my /usr/sbin tarball needed refreshing.  :Embarassed:  Apart from syslog-ng not starting (which causes all log messages to get streamed to stdout  :Mad:  ), the system is working normally.  :Very Happy: 

Also, here's my mount command after booting into the system, I'm wondering if it's the intended output  :Rolling Eyes:  :

```
# mount
/dev/hda2 on / type reiserfs (rw,noatime)
/dev/root on /newroot type ext2 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev type ramfs (rw)
none on /dev/pts type devpts (rw)
/dev/hda3 on /home type reiserfs (rw,noatime)
/dev/hda5 on /root type reiserfs (rw,noatime)
/dev/hda6 on /var type reiserfs (rw,noatime)
/dev/hda7 on /usr type reiserfs (rw,noatime)
none on /dev/shm type tmpfs (rw)
none on /usr/bin type tmpfs (rw,size=32m)
none on /usr/sbin type tmpfs (rw,size=2m)
none on /usr/lib type tmpfs (rw,size=256m)
```

And my "time emerge -s firefox" (I performed it once before that but forgot to time it):

```
real    0m5.181s
user    0m2.178s
sys     0m0.407s
```

"time emerge -pv gnome":

```
real    0m4.584s
user    0m4.020s
sys     0m0.408s
```

Seems like a decent speedup.  :Very Happy: 

----------

## thebigslide

Nice.  I wonder what b0rked syslog-ng; I didn't have that problem.  I wonder if syslog-ng reports the line number after stripping comments.  grep -v '#' /etc/syslog-ng/syslog-ng.conf > syslognohash.conf might show you the 'real' culprit.

I wish there was a better way of benchmarking application load time  :Question: 

----------

## kimchi_sg

Aha! Seems like we've got our man. But still don't know what to do with him.  :Sad: 

```
# grep -v [#] /sbin/rc > rcnohash.conf
```

Line 532 is the "if [ -n "${LOGGER_SERVICE}" ]" line in this fragment:

```
for i in $(dolisting "${svcdir}/started/")
do
        if [ -n "${LOGGER_SERVICE}" ]
        then
                then
                        continue
                fi
        fi
        is_critical_service "${i}" || dep_stop "${i}"
done
```

----------

## thebigslide

hmm

I don't think that's it either.

That's just checking for the existence of an environment variable.

It sees the error in syslog-ng.conf.  syslog-ng was called by rc; that's why it appears that way.

What happens if you run syslog-ng -s /etc/syslog-ng/syslog-ng.conf  from the shell?

----------

## kimchi_sg

 *thebigslide wrote:*   

> What happens if you run syslog-ng -s /etc/syslog-ng/syslog-ng.conf  from the shell?

 

```
# syslog-ng --help
Bus error
# syslog-ng -s /etc/syslog-ng/syslog-ng.conf
Bus error
```

What the... ?!?  :Rolling Eyes: 

Google doesn't seem to help here.  :Confused: 

Also, I enlarged my USRSBINSIZE to 5m in /etc/init.d/localmount, otherwise alsa-utils wouldn't emerge, as the directory was almost full.

----------

## thebigslide

maybe reemerge it.  Possibly some library has been lost/corrupted.

----------

## kimchi_sg

 *thebigslide wrote:*   

> maybe reemerge it.  Possibly some library has been lost/corrupted.

 

Yay! That did the trick.  :Very Happy: 

Aside: Any recommended size for the /usr/bin, /usr/sbin and /usr/lib for a system with 1024MB of physical RAM? I don't want to run into the "No space left on device" problem again. As it stands now, my /usr/bin size is 64MB, /usr/sbin is 5MB, and /usr/lib is 384MB.  :Twisted Evil: 

----------

## thebigslide

You can resize on the fly; just make a copy of bzip2 and bunzip2 in /bin if you're resizing /usr/bin.

It drastically differs from system to system.

You might want to go through /usr/bin and make sure everything else still runs....  If  /usr/lib/ had b0rkage, it's hard to be sure nothing else did.

```
#!/bin/bash
## /sbin/resizeramdisk
mkdir /tmp/specialpurpose
mv /usr/$1/* /tmp/specialpurpose/
umount /usr/$1
mount -t tmpfs -o size=$2 none /usr/$1
mv /tmp/specialpurpose/* /usr/$1/
rmdir /tmp/specialpurpose
```

call with (eg) resizeramdisk lib 512m

----------

## Cinder6

 *thebigslide wrote:*   

> Maybe you should clean out your /usr/lib/?  Mine is only 200MB and I have TONS of apps installed.  
> 
> Amended the howto and provided an example of precaching one app.

 

how do you clean it out?

----------

## thebigslide

Some things in /usr/lib don't actually belong there.  You can make a /usr/waslib, move those directories there, and link them back to /usr/lib.  You'll have to check for these manually, as every system's different.  I have some suggestions at the bottom of the howto.

----------

## Cinder6

k thx

----------

## mecolik

Nice idea indeed. Want to try.

Here is another use of extra memory.

To speed up grip, a very powerful and quiet CD ripper.

First create a directory like /mnt/tmpfs or /tmpfs.

```
mount -t tmpfs -o size=800m none /tmpfs
```

Start grip.

In the config tab, change the rip file format to point to the tmpfs directory.

 *Quote:*   

> /tmpfs/data/audio/%A/%y_%d/%t_%n.wav

 

Go to encode tab, and point to your data directory :

 *Quote:*   

> /data/audio/%A/%y_%d/%t_%n.%x

 

In the options tab, check delete .wav

On this computer, encoding with FLAC is very fast if you have only one drive but some memory to use (300m should be enough for most CDs).

This avoids writing at rip time and reading at encode time (reads that would probably collide with writes from the following rip), so the drive will only be accessed when writing the final FLAC file. It avoids a lot of IO.

tmpfs uses the kernel buffer cache, and the size option is limited to half the memory size by default.

----------

## kimchi_sg

Note to self: Exercise great control over what components of KDE to emerge in conjunction with this HOWTO. emerge kde dies with /usr/lib out of space about 120 packages in.  :Sad: 

P.S. It will probably be some time before I try this again. I've reinstalled the system as normal since then.  :Wink: 

----------

## rutski89

Would mounting /usr/lib, /lib, etc. in RAM make a screen video capture program such as xvidcap fly as well? Or is it really ONLY the opening of the program that speeds up?

----------

## unseen-enigma

I did a similar thing using a slightly different method (sorry, I don't have step-by-step instructions, but it's not a big procedure).  The problem with this procedure for many systems is simple: too many applications and libraries.  So I ported over a basic chroot build script (it takes the core libs, plus the ldd output of a given app, and stores them in a directory).  I loaded openoffice (word-processor part only), firefox, xorg, xine, and about 20 core KDE applications into the /ramdisk I made, under /ramdisk/bin and /ramdisk/lib (added to PATH).  A simple init script loads this into RAM on boot.  If I need to add an application, I simply add it to my (new) /etc/ramapps file.

My system is speedy, and it only cost about 250 MB of RAM instead of the 1-2 GB of many others on this forum.  This approach (and the parent) suits a terminal server far more than a single-user system, since you reap far greater benefit from precaching common applications in RAM when you can be reasonably sure 90% of them will actually improve performance rather than clog RAM.

Anyone implementing this method, please remember to put /ramdisk/bin first in PATH, and similarly for lib.  If you don't want to (or can't) rely on the PATH variable in the way your applications launch, you can try bind-mounting each application and library over its /-based counterpart, but I have no idea what that many bind mounts might do to performance.  You could also copy and bind-mount /etc, but I didn't see the need, and I think it would be a pain to keep unmounting it whenever I need to install or update my system.

----------

## rutski89

Awesome, I'm definitely going to get on this soon and set things up. I've got 512 MB of RAM, so I've got to do some "watch -n .1 free -m" monitoring of the buffers first to see how much mounting into RAM my machine can take. What is the "correct" way to permanently change your $PATH?

----------

## unseen-enigma

/etc/profile is what I normally use.
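For example, a couple of lines like these in /etc/profile would put the RAM copies ahead of the on-disk ones (the /ramdisk/bin and /ramdisk/lib paths are the example locations from the earlier post — adjust to your own setup):

```shell
# Hypothetical /etc/profile additions: search the RAM copies first.
# /ramdisk/bin and /ramdisk/lib are example paths from the thread above.
export PATH="/ramdisk/bin:${PATH}"
export LD_LIBRARY_PATH="/ramdisk/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
```

Changes take effect at the next login (or after `source /etc/profile` in an open shell).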

----------

## ruben

thebigslide,

Actually, I've thought before about doing something similar to what you present in this howto. I use a laptop with a 4200 rpm hard disk, and starting applications can take a long time. Since I have 640 MB of RAM, it seemed like a nice idea to load some frequently accessed apps into RAM at boot time. One thing that bothered me, however, is how I could easily move the files from a package to a ramdisk and have the system transparently load files from the ramdisk when they're available there, falling back to the hard disk when they're not.

The first thing could easily be done with "equery files package", so you could make a startup script to copy all the files from a selected set of packages onto a ramdisk. (Or you could just use the script you wrote to find all the dependencies of an executable.) The second thing, having the system transparently take files from the ramdisk when available and otherwise from the hard disk, seemed more difficult, until I read this. Back then, however, this filesystem seemed only available for 2.4 kernels. Today there are new versions and it should work on 2.6 kernels, and you can get it here. (It's also available in portage, btw.) From what I remember, with unionfs it would be possible to make a filesystem that takes files from the ramdisk when they're available there and otherwise accesses the hard disk. I intend to try something like this "some time in the future", but since you already made something similar, I thought you might be interested in unionfs...
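Unionfs makes that fall-through behavior almost a one-liner. A rough, untested sketch (requires root, the unionfs module, and an existing /mnt/ramlib mount point; the library name is invented):

```shell
# Untested sketch: overlay a ramdisk on top of the on-disk /usr/lib so
# lookups hit the RAM layer first and fall through to disk.
mount -t tmpfs tmpfs /mnt/ramlib                      # the RAM layer
cp -a /usr/lib/libexample.so* /mnt/ramlib/            # preload chosen libs (made-up name)
mount -t unionfs -o dirs=/mnt/ramlib=ro:/usr/lib=ro none /usr/lib
```

With both branches read-only, portage can't silently write to the RAM copy — the trade-off discussed further down the thread.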

ruben

----------

## helmers

I haven't tried this (yet), but I think the idea is really nice, and if you could make it possible to do via a script, so you could preload, say, your 3-4 most used applications, it would be great. The perceived speedup would be enormous.

Hope you (or someone else willing) will continue work on this!

----------

## s4kk3

Whoa! This really speeds things up.

I did this a slightly different way: I didn't do the initrd and linuxrc stuff, and I commented out tarring / in the update-balls script, but so far it looks like it works. This takes over 80% of my RAM, though, so playing games can be a real pain.

----------

## Monstros

It seems cool, but there's something I cannot understand, noob that I am.

If you put your libs on the ramdisk, OK, it's faster to load them, given that they're already loaded. But if you emerge something that updates a lib, that lib will be updated and installed on the ramdisk, and when you power off, the update is lost. I guess this isn't actually a problem, since a lot of people are happy with this ramdisk, but I just cannot understand how it works...

edit: another question: when a program needs the lib, will it load it a second time into RAM (the real RAM, if I can say that), or will it use the in-ramdisk lib directly?

----------

## 5a\/ag3

Just found this. Very cool howto! I am now configuring this on my baby devel box  :Twisted Evil: . It should go well with 2 GB of RAM and dual Xeons.

----------

## chrissou

Hello, I'm trying to use this howto to speed up the start of my applications, but I get an error when I try to load linuxrc:

error line 22 /dev/console not found

I think I have a problem with my root partition; I don't know how to use the mknod utility, and I need to mknod my sda drive...

I tried using mknod /mnt/initrd/dev/sda -b

Thanks a lot

Excuse me for my very bad English

----------

## Sheepdogj15

 *unseen-enigma wrote:*   

>   So I ported over a basic chroot build script (it takes the core libs, plus the ldd output of a given app, and stores them in a directory).  I loaded openoffice (word-processor part only), firefox, xorg, xine, and about 20 core KDE applications into the /ramdisk I made, under /ramdisk/bin and /ramdisk/lib (added to PATH). 

 

hiya, 

i wasn't sure if you were still keeping tabs on this thread, but if so, i was wondering where you got this build script, or if you still have it handy. the reason i ask is that i like the sound of your method, but copying libs by hand doesn't sound like a fun thing to do on a Saturday night.  :Wink: 

also, how well does it handle symlinks? (i noticed that some "libraries" are actually symlinks to other versions.)

----------

## guitarman

Is there any possible way to do this on a system with only one partition for /? The only reason I ask is that I have no way to back everything up: I've used over 100 GB of my HD, and I don't have a second hard drive big enough to back it up so I could repartition the disk.

----------

## fE_rdy

Okay, your howto seems to be very cool. If it weren't so late now, I'd make it work right now.  :Smile:  I just had a look in /usr/lib and there's more than 700 MB in there, so a switching mechanism to add certain apps (and their corresponding libs) to the preload cache would be preferable. How can you find out which libs a given executable is linked against? I thought ldd would do the job (for dynamically linked stuff), but it doesn't work for firefox. Well, I'm off to bed now, but this idea won't get lost while I sleep.

Well done!

yours,

f3rdy.

----------

## Sheepdogj15

ldd will work with firefox... you have to find the actual executable (i think what you find in /usr/bin is a script that points to a script that points to a script which points to firefox).

you can also ldd the libraries themselves, as they depend on other libraries. it's not required unless you are doing a full chroot, but if you have the space for more libs and the time to copy them over, why not. 

you could also unmerge a lot of packages you don't need (and --depclean). i did that and shaved off about 100MB from my /usr/lib

i'm ready to actually try this out; i just haven't had the time to implement it.

----------

## 0000000000

Yea Im with you Sheepdogj15; does anyone have a cool build script to do this? 

I also don't understand what would happen if I had firefox and all its libraries loaded into RAM and then ran emerge -u mozilla-firefox. Does this mean that it would just update the ram and get forgotten upon reboot, or would portage rewrite the directories to their original HD locations? If the RAM copy gets changed, could one just re-tar the RAM dir and save it to the HD before shutdown/reboot to keep changes? 

Anyway selective loading of apps to RAM seems to be a very cool idea, Im just unclear as to how system maintenance works...

----------

## Sheepdogj15

i'm not sure how that is addressed in the OP, but i have an idea. 

the idea is that stuff will be loaded into RAM from a basic tarball. so, what do we do? we have two options: (A) re-create the tarball every time you do an emerge job, or (B) have it set to re-create the tarball before a shutdown/reboot (i.e., add a tar command to local.stop). i'd actually do both, to be honest, using different names so you always have a backup if one gets corrupted.
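Option (B) might look something like the following sketch, shown here against a temporary directory (on a real system, STORE would be the tarball partition from the howto, and the file list would be the real directory set):

```shell
# Rotate the previous tarball aside before re-creating it, so a crash
# mid-tar still leaves a usable (if stale) backup copy.
STORE=$(mktemp -d)                         # stand-in for e.g. /root
echo data > "$STORE/example.file"          # stand-in for the system files
touch "$STORE/fs.tar"                      # pretend an old tarball exists
mv -f "$STORE/fs.tar" "$STORE/fs.tar.bak"  # keep the previous copy
tar cpf "$STORE/fs.tar" -C "$STORE" example.file   # re-create the tarball
```

Dropping those last two commands (with real paths) into /etc/conf.d/local.stop would run them at every shutdown.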

----------

## BitJam

Monstros raises an interesting question.  He asked whether libraries can be run directly from the ramdisk or whether they need to be loaded into "real" RAM to run.  Of course, they need to be loaded into "real" RAM. 

I really like the idea of getting programs to load fast, but Monstros' question makes me wonder if maybe a better way to get the speed-up (especially for people like me who have close to 1 gig in /usr/lib) is to just pre-load certain libraries into "real" ram at boot time.

----------

## 0000000000

From the bigslide:

 *Quote:*   

> ###########Aside##########
> 
> If you just want to load certain applications from a RAM disk, you can do something like the following
> 
> Code:
> ...

 

I just wanted to try this so i did 

```
tar cpf /root/preload.tar /usr/bin/firefox /lib/ /usr/lib/MozillaFirefox/
```

and i have a nice 53M tar file, great.  When i run 

```
mount -t tmpfs -o size=128m none /preload > /dev/null 2>&1
```

 i get no error, but then i cannot 

```
cd /preload
```

 ...the dir isn't there.  Am i retarded? Do i need something else in my fstab?  ...currently i have the line 

```
none         /dev/shm   tmpfs      defaults      0 0
```

----------

## Sheepdogj15

the thing is, this is my understanding based on my limited knowledge and a bit of time googling: if a library isn't already loaded into RAM (i.e., being used by another application), the only time it gets loaded is when it is needed by an app that is about to run. at least, i think that is so. 

but, i have an idea, which at least in theory could work. basically, create a dummy application, and convince ld (at compile time) or ld.so (every time the dummy app is run) that it requires all of the libraries we want loaded into RAM. have the dummy run at boot time, and voilà, you have tens or hundreds of MBs of libs in RAM. the principle behind shared libraries is that if an app needs a shared library (basically anything with a .so on it) and that library is already in RAM, it will use that copy instead of loading it into RAM again. thus, we trick the system into preloading libs into memory without using a ramdisk or tmpfs.

dummy would also have to stay in memory, or else the libs will be paged out (maybe set it up as a daemon that does nothing?  :Wink:  )

there are a few problems i foresee, though. first off, i don't know how symbolic links are treated. for instance, if there's a foo.so.1.0.2, which is a symlink to foo.so.1.0.1, which is a symlink to foo.so.1, and all three are called for by separate applications, will it just load .1 into memory and point the apps that need the other libs at that, or will i have three libraries sitting in RAM, basically the same thing but with different names? i honestly don't know how the system handles that. my concern is that if they are treated as distinct libraries, and thus you get separate copies in RAM, it could theoretically be more "expensive" in terms of RAM than the other methods listed here. (it also doesn't preload the applications that use those libs, but [1.] smaller binaries load into RAM pretty fast by themselves, and [2.] we can address that by mounting /usr/bin or /bin into RAM with tmpfs. my /usr/bin is only 80MB... my understanding is that it's /usr/lib that has been the big problem for many.)

second and most important, this seems like it would be a bit unwieldy to keep up to date. if you update to Firefox 1.1 in the near future, and (for example) 1.0.4 needed bar.so.1 while 1.1 needs bar.so.2, where the two are totally different libs (meaning one isn't just a symlink to the other), we would want a way to easily update our dummy app so we can get the new lib loaded (and maybe get the old lib out of there if we don't need it anymore). we would really want an automated process for this, because if you have to spend a bunch of time monkeying with this stuff every time you emerge -uDN world, you'd end up spending more time than you saved. 

of course, don't ask me how to make that dummy app in the first place. honestly? i don't know, but maybe someone else here does.

----------

## Sheepdogj15

 *0000000000 wrote:*   

>  i get no error, but then i cannot 
> 
> ```
> cd /preload
> ```
> ...

 

did you create the /preload directory in the first place? you can't mount a drive (real or pseudo) if there is nothing to mount it to.

otherwise, i dunno. i mount /tmp and /var/tmp into RAM and it works fine. try running that command in a console (mount -t tmpfs -o size=128m none /preload > /dev/null 2>&1) and see what errors you get.

BTW, i'd recommend a larger value for "size=". tmpfs will use as little RAM as it can in the first place, but IMO it isn't too hard to fill 128MB, especially if we are talking about libraries and such.

----------

## BitJam

 *Sheepdogj15 wrote:*   

> 
> 
> second and most important, this seems like it would be a bit unwieldy to keep up to date. if you update to Firefox 1.1 in the near future, and (for example) 1.0.4 needed bar.so.1 while 1.1 needs bar.so.2, where the two are totally different libs (meaning one isn't just a symlink to the other), we would want a way to easily update our dummy app so we can get the new lib loaded (and maybe get the old lib out of there if we don't need it anymore). we would really want an automated process for this, because if you have to spend a bunch of time monkeying with this stuff every time you emerge -uDN world, you'd end up spending more time than you saved. 

 

Perhaps the "ldd" command could be used to automate the selection of which libraries to load when an app gets updated.
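Something along these lines would print the resolved library paths a dynamically linked binary pulls in (using /bin/ls here as a stand-in for whatever app was just updated):

```shell
# Resolve and de-duplicate the shared-library paths reported by ldd.
# Lines with "=>" map a soname to its full path in the third field.
ldd /bin/ls | awk '/=>/ { print $3 }' | grep '^/' | sort -u
```

Diffing that output before and after an emerge would show exactly which libraries to add to (or drop from) the preload set.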

----------

## Sheepdogj15

 *BitJam wrote:*   

> 
> 
> Perhaps the "ldd" command could be used to automate the selection of which libraries to load when an app gets updated.

 

yeah, i'm thinking ldd would be used a lot for this kind of project.

maybe we should start a new thread (Unsupported Software or somewhere?) where we can get more people talking about how we could create this. i'd love to do this myself just for the learning experience, but i'm already swamped as it is, and anyway i'm assuming there isn't an easier way than what i proposed above (maybe there is and someone else will know it  :Confused: )

----------

## 0000000000

o thanks sheepdog, i am retarded, didn't have a /preload dir.  Now it seems that firefox takes the same amount of time to load as when i started it a second time before, ~3 seconds; it would usually take ~10 sec to start cold.  Is this the best my machine will do (1 GHz P3, 576 MB RAM, 7200 rpm HD), or does it theoretically help to replace the original files with symlinks to /preload?

----------

## Sheepdogj15

 *0000000000 wrote:*   

> o thanks sheepdog, i am retarded, didn't have a /preload dir.  Now it seems that firefox takes the same amount of time to load as when i started it a second time before, ~3 seconds; it would usually take ~10 sec to start cold.  Is this the best my machine will do (1 GHz P3, 576 MB RAM, 7200 rpm HD), or does it theoretically help to replace the original files with symlinks to /preload?

 

yeah... though conceivably you could just add /preload to your LD_LIBRARY_PATH (don't ask, because i don't know how). you will want to go into /usr/lib, etc., find the libraries, and rename them (in case something goes wrong, you can undo the process by naming them back); maybe add ".old" to the end of each name or something. then create symlinks that point to the libs in your /preload directory.
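The rename-and-symlink step might look like this; the demo below runs in a scratch directory with an invented library name, but on a real system the same moves would happen inside /usr/lib itself:

```shell
# Scratch-directory demo of pointing a library name at its /preload copy.
mkdir -p demo/usr/lib demo/preload/usr/lib
touch demo/preload/usr/lib/libfoo.so.1          # the copy unpacked into RAM
touch demo/usr/lib/libfoo.so.1                  # the on-disk original
cd demo/usr/lib
mv libfoo.so.1 libfoo.so.1.old                  # keep an undo path
ln -s ../../preload/usr/lib/libfoo.so.1 libfoo.so.1   # point at the RAM copy
```

To undo, remove the symlink and `mv` the .old file back into place.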

----------

## Sheepdogj15

has anyone heard about the Gigabyte i-RAM? i'm wondering if it will be supported by existing linux drivers (i don't see why not... just uses a SATA hdd interface), but if so, it could very well make these issues irrelevant. that is, assuming you have the money for it and 4 sticks of RAM, of course.  :Wink: 

----------

## Stephonovich

 *Sheepdogj15 wrote:*   

> has anyone heard about the Gigabyte i-RAM? i'm wondering if it will be supported by existing linux drivers (i don't see why not... just uses a SATA hdd interface), but if so, it could very well make these issues irrelevant. that is, assuming you have the money for it and 4 sticks of RAM, of course. 

 

Yeah, I was thinking about that awhile ago.  Supposedly they'll get support for 4GB by the time they launch.  I'd mount /usr (as much as possible, anyway) and /var in there.  Ah... visions of nearly instantaneous emerge --sync...

----------

## Sheepdogj15

yeah, definitely. though i'm not so concerned about portage, as that's something i can run in the background while i'm doing other things. otherwise yeah... it will be very nice to get away from head latency and finally make full use of that 150MB/s pipe.  :Smile: 

----------

## Beige Tangerine

Based on some comments here, I've been working on some init scripts to mount only certain libraries in memory.  I'm using a bash script (formerly a Perl script, but I wanted it to be able to run at boot time, even if /usr wasn't mounted) that runs ldd on any given files, follows symlinks, and automatically copies the appropriate libraries to ramdisks (one ramdisk for /usr/lib and one for /usr/bin).  Then I use unionfs to mount each ramdisk together with its original directory.

I'm actually not terribly impressed with the load-time decrease on the one computer I've tried, but I think it's just because the hard drive is fast.  (I can cat about 50MB worth of libraries to /dev/null in about a second and a half, if memory serves.)  However, those of you who see 7-second improvements from your first boot of Firefox to the next are probably still very interested.

Then again, you might not even need to mount the libraries on a RAM disk.  It might just be enough to read the libraries from disk (cat /usr/lib/whatever > /dev/null) so that they're stored in the cache.  This may have much the same effect as starting Firefox (or whatever) once and then closing it.  It would also avoid the dilemma of how to handle writes to /usr/lib (like when you update your system); my plan was to offer two runlevels: one that used ramdisks and had /usr/lib mounted read-only, and one that did not use ramdisks and had /usr/lib mounted read-write.  If the performance gains are roughly equal, it would be much simpler not to have to worry about this.
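That cache-warming pass can be sketched as a tiny shell function (nothing is pinned; the kernel may still evict the pages under memory pressure, so this only helps while RAM is plentiful):

```shell
warm_cache() {
  # Read each regular file in the given directory once, so later
  # opens are served from the kernel page cache instead of the disk.
  find "$1" -maxdepth 1 -type f -exec cat {} + > /dev/null 2>&1
  echo "warmed: $1"
}

warm_cache /usr/lib    # example target
```

Run from local.start (or an init script after localmount), this gives much of the ramdisk benefit with none of the read-only/read-write dilemma.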

Anyway, I'll post the script once I get home.  It's not quite finished, especially since I'm not sure it's worth it for the computer I tried it on, but the part that lists all libraries required by the given programs seems to be working fairly well.

----------

## Beige Tangerine

Here's the script.  It has some comments, and I'll put some more below.

/etc/init.d/ramlibs

```
#!/sbin/runscript

# TODO checkconfig section
# TODO die if a dir doesn't exist
# TODO note file that configuration comes from

function listLibs()
{
  declare -a ALL_LIBS

  # In theory, readlink might not be available because /usr/bin might not be
  # mounted.  My understanding is that we can assume /bin is mounted (likewise
  # /lib, since it has libraries that bash requires).  Actually, this happens
  # after localmount, so I don't think that's a concern.
  # Still, I found that busybox readlink is faster than standard readlink, so
  # I'm not going to mess with it for now.  We need busybox for this, but it
  # appears to be pulled in by 'emerge system', so we should be able to count
  # on it.  Interestingly, this even beats the normal readlink -f for speed.
  function readlink()
  {
    busybox readlink -f $1
  }

  # Normal readlink version.
  #function readlink()
  #{
  #  `which readlink` -f $1
  #}

  function appendToAllLibs()
  {
    ALL_LIBS[${#ALL_LIBS[*]}]=$1
    echo $1
  }

  function isInAllLibs()
  {
    for (( J = 0 ; J < ${#ALL_LIBS[*]} ; J++ )); do
      if [[ ${ALL_LIBS[$J]} == $1 ]]; then
        true; return
      fi
    done
    false; return
  }

  while [[ $# -ne 0 ]]; do
    appendToAllLibs `readlink $1`
    shift
  done

  # Looking at the output, I get the impression that recursion is not actually
  # necessary.  I'm not sure, but the script is slower if we leave it in.  Use
  # the commented-out for() instead of ORIG_COUNT and the other for() to
  # "reactivate" it.
  #for (( I = 0 ; I < ${#ALL_LIBS[*]} ; I++ )); do
  ORIG_COUNT=${#ALL_LIBS[*]}
  for (( I = 0 ; I < $ORIG_COUNT ; I++ )); do
    CURRENT_LIB=${ALL_LIBS[$I]}
    # A "| while read LIB" pipeline would spawn a subshell, which means that
    # ALL_LIBS would be local to it.  Not good.  Hence the for loop:
    for LIB in `ldd 2> /dev/null $CURRENT_LIB | sed -e 's/^\t//' | sed -e 's/^.* => //' | sed -e 's/ (0x.*)$//' | grep -v '^$' | grep -v '^statically linked$'`; do
      LIB=`readlink $LIB`
      if ! isInAllLibs $LIB; then
        appendToAllLibs $LIB
      fi
    done
  done
}

function start()
{
  LIBS=(/usr/lib/MozillaFirefox/firefox-bin /usr/lib/MozillaThunderbird/thunderbird-bin /usr/bin/AbiWord-2.2 /usr/bin/gnumeric /usr/bin/gimp-2.2 /usr/bin/leafpad /usr/bin/xpdf /usr/bin/rhythmbox /usr/bin/gmplayer /usr/bin/oggenc /usr/bin/cdparanoia /usr/bin/eject /usr/bin/easytag)

  ebegin "Copying libraries from ${#LIBS[@]} programs to RAM"

  # TODO move to config file
  MERGEDIRS=(/usr/bin /usr/lib64)
  RAMDIRS=(/usr/bin_ram /usr/lib64_ram)
  HDDIRS=(/usr/bin_hd /usr/lib64_hd)

  # Sanity-check the arrays before mounting anything.
  if [[ ${#MERGEDIRS[@]} -ne ${#RAMDIRS[@]} || ${#MERGEDIRS[@]} -ne ${#HDDIRS[@]} ]]; then
    # TODO say what file to edit
    eerror "Arrays MERGEDIRS, RAMDIRS, and HDDIRS must be of equal length."
    return 1
  fi

  for (( I = 0 ; I < ${#MERGEDIRS[@]} ; I++ )); do
    MERGEDIR=${MERGEDIRS[$I]}
    RAMDIR=${RAMDIRS[$I]}
    HDDIR=${HDDIRS[$I]}
    mount tmpfs $RAMDIR -t tmpfs
    mount -t unionfs -o dirs="$RAMDIR=ro:$HDDIR=ro" none $MERGEDIR
  done

  # TODO Note that spaces in paths will probably break this.
  listLibs ${LIBS[@]} | while read LIB; do
    for (( I = 0 ; I < ${#MERGEDIRS[@]} ; I++ )); do
      MERGEDIR=${MERGEDIRS[$I]}
      RAMDIR=${RAMDIRS[$I]}
      HDDIR=${HDDIRS[$I]}
      if echo $LIB | grep "^$MERGEDIR" > /dev/null; then
        LIBDIR=`dirname $LIB`
        SHORTLIBDIR=`echo $LIBDIR | sed -e "s#^$MERGEDIR##"`
        mkdir -p $RAMDIR$SHORTLIBDIR
        SHORTLIB=`echo $LIB | sed -e "s#^$MERGEDIR##"`
        cp $HDDIR$SHORTLIB $RAMDIR$SHORTLIBDIR
      fi
    done
  done
}

function depend()
{
  before *
  need localmount
}

function stop()
{
  # TODO remove (duplicated from start)
  MERGEDIRS=(/usr/bin /usr/lib64)
  RAMDIRS=(/usr/bin_ram /usr/lib64_ram)
  HDDIRS=(/usr/bin_hd /usr/lib64_hd)

  ebegin "Unmounting library RAM drives"
  for (( I = 0 ; I < ${#MERGEDIRS[@]} ; I++ )); do
    MERGEDIR=${MERGEDIRS[$I]}
    RAMDIR=${RAMDIRS[$I]}
    HDDIR=${HDDIRS[$I]}
    umount $MERGEDIR
    umount $RAMDIR
  done
}
```

In theory, LIBS, MERGEDIRS, RAMDIRS, and HDDIRS would eventually be moved to /etc/conf.d/ramlibs.

You need busybox installed.  I think it's part of system now, so you should probably have it.  If not, emerge it.

You also need unionfs, which you can get from portage.

There's a circular dependencies complaint at shutdown.

Here's the most important function:

listLibs file1 [file2 [file3 ...]]

Print a list of libraries (with duplicates removed and symlinks fully resolved) required by the given file(s).  (This includes the file itself.  Note that this won't work for scripts, e.g., rip.  That's why my LIBS contains '/usr/bin/oggenc /usr/bin/cdparanoia /usr/bin/eject')

Note that the current version of the script is for AMD64 (hence lib64 instead of lib).

As currently written, the script expects the "real" /usr/lib64 to be mounted at /usr/lib64_hd.  (You would probably have to move the directories at the console and edit /etc/fstab.)  It also expects /usr/lib64 and /usr/lib64_ram directories to exist (and, presumably, to be empty).  (/usr/bin is similar, with /usr/bin_hd and /usr/bin_ram.)  The script scans the files in LIBS, and any files it lists from /usr/lib64 and /usr/bin are copied from the appropriate _hd directory to the appropriate _ram directory.  Then each _ram/_hd pair of directories is mounted together with unionfs, e.g., /usr/lib64 becomes the union of /usr/lib64_ram and /usr/lib64_hd.  This way, the system looks on the ramdrive first for any libraries.

Currently /usr/bin and /usr/lib64 are mounted read-only.  Mounting the RAM drive read/write is a bad move because any changes you make to it (probably with portage) will be wiped on reboot.  (It might also fill up quickly if you emerged something large, like Eclipse.)  You could write a script to copy /usr/lib64_ram to /usr/lib64_hd at shutdown time, but if the power goes out or the computer locks up, you might have problems.  Mounting the hard drive read/write is also bad, because any changes made by portage, etc. to a library that is also present on the RAM drive will not show up until you reboot, which is odd behavior.

My plan had been to eventually create another init script that just mounted a read/write copy of /usr/lib64_hd at /usr/lib64 (and similarly, /usr/bin_hd at /usr/bin).  I would put it in a new runlevel, and then I could choose at bootup whether I wanted the fast, unmodifiable libs or the slower, writeable libs.  In theory, you could even switch runlevels once you had started, though it would involve closing down X and other programs, since they would be using the libraries.

Oh, and a warning: Don't try this with /bin or /lib.  The system needs to be able to run bash at startup.  Bash is located in /bin, and it requires libraries in /lib.  I'd go so far as to say that there is almost certainly a way to get around this, but I didn't think it would be worth it, as most of the files in those directories are small, anyway.  Your situation may vary.

Let me know if you have any questions.  I'm sure I missed some important things.  Finally, let me reiterate that this is NOT a finished script.  If you want to use it, you'll have to play around with it, and it's easy to make your system unbootable along the way.

----------

## apberzerk

I've seen several posts concerning this issue: what happens when you emerge something and write to the ramdisk?  I am uncomfortable with the idea of just updating the HD with the contents of the ramdisk on shutdown.  If the system fails and for some reason does not shut down properly, you lose the results of your emerges.

It seems to me that the following would be really nice to have (and I don't know how much of it already exists):

When something writes to /usr/lib or /lib or whatever you have mounted in ramdisk, it writes to the hard drive as well as the ramdisk.  When something reads from any of these locations, it reads from ramdisk.  How would such a thing be implemented?

----------

## Beige Tangerine

 *apberzerk wrote:*   

> I've seen several posts concerning this issue: what happens when you emerge something and write to the ramdisk?  I am uncomfortable with the idea of just updating the HD with the contents of the ramdisk on shutdown.  If the system fails and for some reason does not shut down properly, you lose the results of your emerges.
> 
> It seems to me that the following would be really nice to have (and I don't know how much of it already exists):
> 
> When something writes to /usr/lib or /lib or whatever you have mounted in ramdisk, it writes to the hard drive as well as the ramdisk.  When something reads from any of these locations, it reads from ramdisk.  How would such a thing be implemented?

 As I understand it, this isn't possible with unionfs, though I may be overlooking something.  See "Writing to Union" at the unionfs website: *Quote:*   

> ...all changes are stored in leftmost branch.

 In other words, just the ramdisk, or just the hard drive, but not both.

This is why I had planned to mount the ramdisk/hard drive union read-only and offer a separate, hard-drive-only mode for running portage, etc.  (Well, I was also worried that the ramdisk would fill up if I emerged something like Eclipse:)

```
cpovirk@eleventy ~ $ du -sk /usr/lib/eclipse-3/

102115  /usr/lib/eclipse-3/
```

Anyone have any other ideas?

----------

## Sheepdogj15

i haven't messed with Ramdisk, but i have a few things set up in a tmpfs "partition." what i do is have it untar my stuff into the tmpfs on startup and set my mount points. on shutdown, it retars all the contents (in case something has changed, like i did an emerge). i've also made a backup tarball after a major emerge in case my computer dies before it can have a successful shutdown. i figure as long as i remember to do the backups or at least never have nasty system crashes, i'll be fine.  :Confused: 

----------

## apberzerk

 *Sheepdogj15 wrote:*   

> i figure as long as i remember to do the backups or at least never have nasty system crashes, i'll be fine. 

 

I have one of those every week I think...

- Phil

----------

## SaBer

 *apberzerk wrote:*   

> I've seen several posts concerning this issue: what happens when you emerge something and write to the ramdisk?  I am uncomfortable with the idea of just updating the HD with the contents of the ramdisk on shutdown.  If the system fails and for some reason does not shut down properly, you lose the results of your emerges.
> 
> It seems to me that the following would be really nice to have (and I don't know how much of it already exists):
> 
> When something writes to /usr/lib or /lib or whatever you have mounted in ramdisk, it writes to the hard drive as well as the ramdisk.  When something reads from any of these locations, it reads from ramdisk.  How would such a thing be implemented?

 

Could one perhaps create a raid1-array with a partition and a ramdisk?
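In principle, yes. An untested sketch of that idea (requires root, the brd and md kernel modules, and a /dev/ram0 at least as large as the partition; all device names and sizes are examples). Marking the disk half --write-mostly steers reads to the RAM half while writes still hit both, so an unclean shutdown leaves an intact on-disk copy:

```shell
# Untested: mirror a RAM block device with a real partition.
modprobe brd rd_size=1048576                # ~1 GB /dev/ram0 (size in KB)
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/ram0 --write-mostly /dev/sda5   # /dev/sda5 is an example
mkfs.ext2 /dev/md0
mount /dev/md0 /usr/lib                     # reads now favor the RAM half
```

One catch: after a reboot the RAM half starts empty, so the array would have to resync from disk at every boot before the speedup kicks in.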

----------

## R4miu5

as far as i understand it, the section 

```
echo /sbin/update-balls >> /etc/conf.d/local.stop

chmod +x /sbin/update-balls

cat /sbin/update-balls 
```

will run the creation of the tarballs at every shutdown?

----------

## blue666man

I think some readers on this forum might find  this  interesting:

It's a 66MHz 64-bit PCI-X (NOT pci express) card.  Features:

Capacity: 2GB, 4GB, 8GB, 16GB

Battery backup: Multiple Redundant, Rechargeable On-board Batteries	

Semaphore DMA Initiator for maximizing read and write transfers	

Supports Interrupts	

Error Detection and Correction	

Burst Mode Transfer Rate: 533 MB/s peak

Some people were looking into the Gigabyte ramdisk, but I don't see the real benefit, since I/O on SATA-I is capped at 150MB/s.  The other, bigger problem with solid-state disks that use IDE or SATA is that you're not getting the random-access benefit of RAM.  All OSes handle IDE, SCSI, and SATA with sectors on a disk in mind.  Not to mention, all your favorite filesystems (reiser, xfs, jfs, ext*, etc.) are built around the fact that fragmentation slows access because a DISK has to spin and a r/w head has to move.  There's no such thing with memory chips.  But Gigabyte and company will never make a ramdisk that ISN'T IDE or SATA b/c M$ Winblows can't handle what linux can: ramfs.  :Evil or Very Mad: 

Anyway, if someone has a Xeon MB with a PCI-X slot, I'd take a look at this card. Imagine a 16GB filesystem where every I/O's seek time is in the hundreds of nanoseconds, not single-digit milliseconds! And a throughput of 533 MB/s (about 20 to 25 times faster)! Not to mention the joy of seeing your PC boot as quickly as a Palm Pilot, but I digress.

----------

## Jonasx

If you're having trouble getting the linuxrc to execute, try taking the first row of hashes out; it seems I needed the interpreter as the first line to make it go through...

Also, you may need to specify the fs type with the -t option in the linuxrc file when you mount.  This is the only way I could get a reiser4 partition to mount here.

I've almost gotten a native 64-bit system w/ 2 gigs of RAM like I want it. I'll post some of my experiences and hang-ups shortly so it may help some others out  :Smile: 

Edit:

And thanks for the great howto  :Smile: 

----------

## gingekerr

Is it not possible just to take advantage of the Linux disk caching by doing a cat /lib/* > /dev/null? (or something slightly smarter with find or for ... test -f &&)

This has no fixed memory usage, no problems with improper shutdowns, and is all-in-all simpler. Of course, it doesn't have the security advantages of RAM root partitions.
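The cat-to-/dev/null idea can be made slightly smarter with find. A minimal sketch (the directory list in the example is just an assumption about what you'd want warm):

```shell
#!/bin/sh
# Warm the kernel page cache by reading every regular file under the
# given directories once.  Needs no root and no ramdisk; the kernel may
# still evict the pages later under memory pressure.
warm_cache() {
    for d in "$@"; do
        # -xdev stays on one filesystem; unreadable files are ignored
        find "$d" -xdev -type f -exec cat {} + > /dev/null 2>&1
    done
    return 0
}

# example: warm the core library and binary directories at boot,
# e.g. from /etc/conf.d/local.start:
#   warm_cache /lib /bin /sbin /usr/lib
```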

----------

## Enlight

 *gingekerr wrote:*   

> Is it not possible just to take advantage of the Linux disk caching by doing a cat /lib/* > /dev/null? (or something slightly smarter with find or for ... test -f &&)
> 
> This has no fixed memory usage, no problems with improper shutdowns and is all-in-all simpler. Of course, it doesn't have the security advantages of RAM root partitions.

 

readahead <file> if I remember correctly.

----------

## frawd

Anyone think that using this and prelinking binaries could give any kind of problems?

http://www.gentoo.org/doc/en/prelink-howto.xml

Would it speed up loading times more or is it just not worth it?

Thanx

----------

## Enlight

 *frawd wrote:*   

> Anyone think that using this and prelinking binaries could give any kind of problems?
> 
> http://www.gentoo.org/doc/en/prelink-howto.xml
> 
> Would it speed up loading times more or is it just not worth it?
> ...

 

If your tarballs are correctly updated, I can't see how this could cause trouble. And I would say that yes, it can speed things up a bit, but you won't feel it as much as when your libs and programs are on the disk. (And I personally saw very little difference when it is on HD.)

----------

## adsmith

 *gingekerr wrote:*   

> Is it not possible just to take advantage of the Linux disk caching by doing a cat /lib/* > /dev/null? (or something slightly smarter with find or for ... test -f &&)
> 
> This has no fixed memory usage, no problems with improper shutdowns and is all-in-all simpler. Of course, it doesn't have the security advantages of RAM root partitions.

 

This sounds like a much better idea.

In particular, how about this:

```
abe@tock ~ $ eix readahead
* sys-apps/readahead-list 
     Available versions:  ~0.20050320.2320 ~0.20050323.0658 ~0.20050328.0142 ~0.20050425.1452 0.20050517.0220
     Installed:           none
     Homepage:            http://tirpitz.iat.sfu.ca/
     Description:         Perform readahead(2) to pre-cache files.
```

Haven't tried it yet, but it seems like this sort of option is much better.  Let the kernel itself deal with the page cache — it's very good at it, after all.  All you need to do is let it know what you want pre-loaded.  This seems safer and less hackish than an involved tar/mount script.  It's also probably faster, though I have no proof yet.

----------

## adsmith

more info on preloading:

https://bugs.gentoo.org/show_bug.cgi?id=64724

This is ueber-simple to set up, and appears to work very well.

----------

## Frunktz

Well, this way I lose some minutes extracting the tarball and copying the contents into RAM. Am I wrong?

Thanks a lot.

PS: Copying /usr into RAM took me about 15-20 sec...

----------

## Adwin

The most important thing is committing changes to the ramdisk files AND syncing them in real time to the HDD files.

Any idea how to do this, except for rsync ?

----------

## SaBer

 *Adwin wrote:*   

> The most important thing is committing changes to the ramdisk files AND syncing them in real time to the HDD files.
> 
> Any idea how to do this, except for rsync ?

 

I'm still thinking raid1...

Check this out:  http://lkml.org/lkml/2005/9/4/223

----------

## TenPin

So this essentially moves the hard disk load times from when you run an application to boot time, and ensures that stuff is permanently cached. It also adds time-consuming re-tarballing when you emerge something into the tmpfs or shut down.

Another way of doing this could be to just cat /usr/bin/blah > /dev/null for all the libraries and binaries that you want to be cached, at boot. Then they will be cached and, if you have lots of RAM, will stay cached. 

I find that with 512MB the kernel caching works just fine, and loading OpenOffice or Firefox for the second time is instantaneous. Sure, I lose the cache if I need to use that memory, but that's better than having the memory locked into a tmpfs and having to use swap.

It seems to be more trouble than it's worth to do this, especially on a laptop which is being shut down frequently.

----------

## brazzmonkey

 *adsmith wrote:*   

>  how about this:
> 
> ```
> 
> abe@tock ~ $ eix readahead
> ...

 

how do you get it configured ? i couldn't find relevant documentation on this one.

----------

## adsmith

You edit /etc/conf.d/preload, something like this:

```
## verbosity.  0-9, Default is 4.
PRELOAD_VERBOSITY="3"

## this is where preload will store its pid file
#PRELOAD_PIDFILE="/var/run/preload.pid"
## can't get this to work.. :(

## set this for niceness. Default is 15
PRELOAD_NICE="15"

## log file
PRELOAD_LOGFILE="/tmp/log/preload.log"
```

Then you need an init file, something like this:

```
#!/sbin/runscript
# Copyright 1999-2005 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: $

opts="${opts} reload dump"

#depend() {
#       after xdm
#}

dump() {
    ebegin "Dumping config and state for preload"
    killall -SIGUSR1 /usr/sbin/preload
    killall -SIGUSR2 /usr/sbin/preload
    eend $?
}

reload() {
    ebegin "Reloading config for preload"
    killall -SIGHUP /usr/sbin/preload
    eend $?
}

start() {
    ebegin "Starting preload"
    start-stop-daemon --start --quiet --exec /usr/sbin/preload -- \
        --logfile ${PRELOAD_LOGFILE} -V ${PRELOAD_VERBOSITY} -n ${PRELOAD_NICE}
    eend $?
}

stop() {
    ebegin "Stopping preload"
    start-stop-daemon --stop --quiet --exec /usr/sbin/preload
    eend $?
}
```

----------

## brazzmonkey

ok, thanks for that.

i suppose i also have to edit a list so that it knows what has to be preloaded. am i right ? if so, where's this list ?

----------

## adsmith

no, you don't have to edit any such list.  It's automatic and statistical.  All you have to do is turn it on.

----------

## brazzmonkey

alright, this should be ok then. i also added it to the default runlevel, because i saw an entry in rc-update show.

if it uses statistics, i suppose it will take some time to get a measurable effect...

thanks a lot for your lightning-fast replies !!

----------

## Centinul

I run a Linux-based firewall. It is a Celeron (Mendocino) 550MHz with 256MB of RAM and a 4GB HD. I was thinking about using this as a security measure on the firewall: that way, if someone got in and modified files, they would be reset on reboot. Is this a viable method? Advice needed. I don't really understand how this protects your system, and I would also like someone to explain that to me, please.

Here is some output for size of certain folders.

```
139M    /usr/lib
5.9M    /lib
19M     /usr/bin
3.6M    /usr/sbin
5.2M    /sbin
```

When the system is at idle, I have the following output from "free -m":

```
                   total     used     free   shared  buffers   cached
Mem:                 248      244        3        0      132        5
-/+ buffers/cache:            106      142
Swap:                494        0      494
```

Any thoughts would be appreciated. Thanks.

----------

## adsmith

I think you are confusing two very distinct ideas:

1) Pre-caching files which are accessed frequently.  This will increase system responsiveness.  This is mostly what is discussed in this thread.

2) Having a read-only root with RAM-mounted (tmpfs) space for read/write access, which is lost on reboot.

The second can be found elsewhere.  I bet the Gentoo security howto or the gentoo-wiki has info on this.
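A rough sketch of idea (2), assuming an otherwise ordinary layout — the directory list and sizes are illustrative, a real setup also has to deal with /etc, and this is normally run from an early init script rather than a live shell:

```shell
#!/bin/sh
# Keep / read-only and give the paths that must stay writable a tmpfs,
# whose contents (including any attacker's modifications) vanish on reboot.
mount -o remount,ro /
for d in /tmp /var/run /var/lock; do
    mount -t tmpfs -o size=64m,nodev,nosuid tmpfs "$d"
done
```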

----------

## Centinul

adsmith ---

Thanks for the info. I was wondering if you could direct me to a howto for a read-only /? I can't find anything in the Security Handbook, the Gentoo Wiki, or the forums. Thanks!

----------

## adsmith

Here, at gentoo-wiki:

http://gentoo-wiki.com/HOWTO_Read-only_root_filesystem

This was the first hit on Google for "read only root filesystem linux".

----------

## Sheepdogj15

that got me thinking... you could also set up a script that remounts / read-only after boot (say, trigger it from local.start). you'd still want some files in /etc to be writable, so the script would have to take that into account. could be simpler though

----------

## Sheepdogj15

well, i found a sort of cheap workaround so you can "preload" apps into RAM. really, i'd only need it for firefox and thunderbird, as everything else i use loads fast enough for my tastes.

behold: Kdocker

http://kdocker.sourceforge.net/

"KDocker will help you dock any application into the system tray. This means you can dock openoffice, xmms, firefox, thunderbird, anything!"

the idea is to have your WM load the app as a system tray process (it's not just for KDE, though it requires QT). the app really is running; the idea is you click on the icon and it comes up immediately. if you have enough RAM for the app, it should work fine without chunking up your system. best for a few specific apps (not everything in /usr/bin  :Shocked:  )

----------

## Kragen

I've only read the first page, so I'm sorry if this has been mentioned before but...

Shouldn't there be a way of loading commonly used libraries into RAM anyway? I mean, doesn't Windows do this? Surely what's really needed is a program / system that monitors the usage of different libraries, decides how much and what should be loaded into RAM based on usage and available RAM, and loads it all for you transparently.

----------

## Kragen

ok - this readahead is pretty much exactly what I was thinking of. Does it make any difference?

----------

## curtis119

OK, I haven't read this entire thread so this may have already been brought up. Won't a simple script in local.start that does "cat $NAME_OF_FILE_TO_PRELOAD >> /dev/null" have the exact same effect of preloading the file into RAM, without having to do all that magic with a ramdisk?

I use this on some of my machines to make Mozilla load faster and it works like a charm. PLUS, you don't lose any of your RAM to the ramdisk (which uses a set amount of RAM), because the least-used apps just get swapped out, freeing your RAM for the stuff in memory that you are actually using.

----------

## killercow

Agreed,

Any file loaded off disk will be stored in RAM for as long as possible.

On my 4GB box, I can actually see whole movies trickle into RAM while I download/decompress them.

After something like 6 movies, RAM is full, and the first one is purged again (but only partially). Thus it automatically keeps everything as speedy as possible. 

This is also the elusive memory usage that Linux newbies complain about.

----------

## Zentoo

Hey, I'm pretty interested in caching my applications in RAM at boot time, so I've read the thread.

But as for a lot of us, my /usr/lib* is really too huge for my RAM, so the question is:

How to choose the files that need to be cached?

I think I've found an easy way to do it:

you need to:

```
emerge sys-process/lsof  
```

 (Lists open files for running Unix processes)

then manually launch all the applications that you want to speed up, and just type the following command in a shell:

```
  lsof -F | sed "s/^n//g" | grep -v "^c" | egrep "^/bin|^/lib|^/sbin|^/usr" | sort | uniq 
```

and there you have the list of files opened by your processes; you just need to redirect it to a file and it's done!  :Wink: 

NOTE: I don't think we catch all the files involved at startup time, so I'm looking for a way to list all the files used over a period of time. Anyone have an idea?
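On that NOTE: one hedged alternative to sampling lsof is to use file access times, which catch everything read during a window. This assumes the filesystems involved are not mounted with "noatime"; the paths in the example mirror the ones grepped for above:

```shell
#!/bin/sh
# Mark the start of a measurement window, then later list every regular
# file under the given directories whose access time is newer than the
# marker file's modification time (that is what find's -anewer tests).
mark_window() { touch "$1"; }

accessed_since() {
    mark=$1; shift
    find "$@" -type f -anewer "$mark" 2>/dev/null | sort
}

# example:
#   mark_window /tmp/atime-mark
#   ... launch and use the applications you want sped up ...
#   accessed_since /tmp/atime-mark /bin /lib /sbin /usr > /tmp/cache.list
```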

----------

## depontius

I just came across this thread, while searching for something else. As I was skimming it, another "opportunity" occurred to me. For the ext2 filesystem, there is a relatively new option, "execute in place". (CONFIG_EXT2_FS_XIP) Think about it for a minute... You've just loaded your executable files into RAM, so you can get at them quickly. When it's time to run something like firefox, you grab a copy from filesystem in RAM, and load it into... RAM. You've now got 2 copies of firefox in RAM, one as a file and one as executing code. Seems to me that this is the situation XIP was made for.

Beyond that, take a look at what is really happening here. As someone has mentioned, by default Linux caches files. The problem is that it has no idea what file you're about to ask for, but there is the underlying assumption that if you've asked for it once, you will likely ask for it, again. That's what a cache does. What you're really doing is saying, "I'm smarter than a cache, because I *know* what I'm going to need, and will preload it into RAM." But you've just rather indiscriminately put practically *everything* into RAM.

You can be smarter. How about a directory structure with /ram, containing /ram/lib and /ram/bin, maybe /ram/sbin? Take the stuff you *really* want to be fast, and copy it there at initialization. Change your path to put /ram/bin ahead of /usr/bin, and change /etc/ld.so.conf to put /ram/lib ahead of /usr/lib. With a little thought, this can be combined with the XIP mentioned above.

By the way, it's probably counterproductive to put something *most-used* like bash into /ram/bin. Most likely it will remain RAM-resident for the duration of most boots anyway, and copying the file into RAM is a waste of space.
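That /ram layout could look something like the sketch below; the tmpfs size and the choice of firefox paths are purely assumptions for illustration:

```shell
#!/bin/sh
# Build a small tmpfs at /ram and copy in only the hand-picked hot files.
mkdir -p /ram
mount -t tmpfs -o size=256m tmpfs /ram
mkdir -p /ram/bin /ram/lib
cp -a /usr/bin/firefox /ram/bin/
cp -a /usr/lib/mozilla-firefox /ram/lib/
# Then put the RAM copies first in the search paths:
#   PATH=/ram/bin:$PATH              (e.g. in /etc/profile)
#   put /ram/lib at the top of /etc/ld.so.conf and run ldconfig
```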

----------

## daveisgreat

Nice HOWTO! Just what I came looking for! Cheers!

----------

## Bono

Hello,

Concerning this howto, how do you keep emerge working well after that? I mean, it will install libraries into the ramdisk, for instance, and they won't be on the disk anymore?

Thanks in advance,

Marc

----------

## abfluss_bombe

perhaps you could use unionfs for that? i haven't looked deeply into it, but it should somehow be possible to read from the ramdisk but write to the hard disk. ok, updating things when files are changed could be a problem.

----------

## thebigslide

 *Bono wrote:*   

> Hello,
> 
> Concerning this How to, how do you manage emerge working well after that ? I mean, it will install libraries in the ramdisk for instance and they won't be anymore on the disk ?
> 
> Thanks in advance,
> ...

 

Hi.  The easiest way is to use a script to update the tarball that is unpacked to populate the ramdrive at bootup.  With a little work you can easily make this an init script.

Perhaps this howto could use some revision.

----------

## ssam

have you looked at preload? http://sourceforge.net/projects/preload

 *Quote:*   

>  preload is an adaptive readahead daemon. It monitors applications that users run, and by analyzing this data, predicts what applications users might run, and fetches those binaries and their dependencies into memory for faster startup times.
> 
> 

 

there is also a related Summer of Code project: http://code.google.com/soc/2007/ubuntu/appinfo.html?csaid=8EDA2B217C83972

http://code.google.com/p/prefetch/

----------

## NaiL

Maybe all this stuff can be combined with this tip:

https://forums.gentoo.org/viewtopic-t-465367.html

----------

## kusi

I mounted my /var/tmp/paludis as tmpfs and did a performance analysis: I emerged digikam

my ram drive:

```
mount -t tmpfs tmpfs -o size=2000M /var/tmp/paludis
```

I emerged digikam twice

```
time paludis -i digikam
```

w/o ramdrive

```
real    15m12.167s
user    10m27.595s
sys     8m59.962s
```

with ramdrive

```
real    15m9.048s
user    10m30.951s
sys     8m53.873s
```

As you can see, the speed benefit of using a ramdrive is marginal. Has somebody experienced the same? Is this what you have to expect with modern hardware? I use a 2.2GHz Core 2 Duo with 4GB RAM.

Kusi

----------

## micr0c0sm

using -pipe pretty much makes sure everything is in ram anyway, so there should be no compilation speedup. Throughput shouldn't increase (unless you have a truly fast usb drive and really slow hard drive), but responsiveness is much, much better since there is no spinup or seeking.

----------

## spupy

This is what I did:

I renamed /usr/bin to abin, created a dir named bin, and mounted it in RAM with:

```
mount -t tmpfs -o size=300m none /usr/bin > /dev/null 2>&1
```

Moved the stuff from abin to bin.

Did the same for /usr/lib.

Unfortunately, this did not decrease the cold-boot load times of programs. Does this command really mount the folder in RAM? Because it looked like it didn't...

----------

## robak

Hi guys!

i know that this thread is old but i have a little problem.

i put the whole filesystem into one tarball, excluding the etc and var dirs. the problem is that i get tons of "No such file or directory" errors while unpacking the fs.tar, and no errors while creating it.

```

.....

tar: usr/sbin/paperconfig: Cannot open: No such file or directory

tar: usr/sbin/ck-log-system-start: Cannot open: No such file or directory

tar: usr/sbin/rpcinfo: Cannot open: No such file or directory

tar: usr/sbin/accept: Cannot open: No such file or directory

tar: usr/sbin/in.rshd: Cannot create symlink to `rshd': No such file or directory

tar: usr/sbin/useradd: Cannot open: No such file or directory

tar: usr/sbin/partx: Cannot open: No such file or directory

.....

```

can someone help me?

greetings robak

----------

## robak

i figured out what the problem is:

my ramdisk has 4G of space and should be big enough to store the whole fs (which is 2.9G), but somehow i run out of free space. does anybody know how to fix that?

----------

## Redmumba

I've read through all the posts and tried some tweaking of my own, but it doesn't seem like I'm able to mount my initrd image.  I'm receiving the same "linuxrc failed" reported earlier, but I'm actually not able to access ANY of the executables.  But it's saying that the ramdisk *is* being mounted on 1:0... Booting back into my normal install, I mount the initrd image and all the files are there and have correct permissions.

Is there any reason why my initrd image wouldn't be loading, but say it is?  Ramdisk and Initial Ramdisk support are _all_ built into the kernel, so I'm not sure what would be causing this...

Andrew

grub.conf

 *Quote:*   

> 
> 
> title Gentoo Linux 2.6.29-r1 (w/ RAM disk!)
> 
> root (hd0,0)
> ...

 

----------

## slick

Why not try the simplest way: 

- create a ramdisk (greater than du -sh /usr/lib), e.g.: mount none -t tmpfs /mnt/ramdisk

- copy all files from /usr/lib to the ramdisk, e.g.: cp -a /usr/lib/* /mnt/ramdisk

- bind-mount the ramdisk over /usr/lib, e.g.: mount -o bind /mnt/ramdisk/ /usr/lib/

- if you want to update your system, just umount /usr/lib and /mnt/ramdisk, update the system, and do the stuff above again

(this can simply be done in the background in /etc/conf.d/local.start)

Now OpenOffice and other big apps start in <1 sec, and no extra modifications to system files are necessary; you probably need a little more RAM (in my case ~1.2 GB just for the ramdisk).

 :Wink: 

(sorry for my bad english)
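The steps above, collected into one sketch (as root; the size is an assumption based on the ~1.2 GB figure in the post):

```shell
#!/bin/sh
# Copy /usr/lib into a tmpfs and bind-mount it over the original.
mkdir -p /mnt/ramdisk
mount none -t tmpfs -o size=1536m /mnt/ramdisk
cp -a /usr/lib/* /mnt/ramdisk
mount -o bind /mnt/ramdisk /usr/lib
# To update the system later: umount /usr/lib, emerge as usual,
# refresh the copy in /mnt/ramdisk, then bind-mount again.
```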

----------

## aych

what would the effect of this be? I presume the bootup times will suffer significantly...

I was thinking: what would happen if it were an rc script on startup, allowing normal bootup from the hard drive and standard usage. After normal loading it would begin populating a tmpfs with predefined folders etc., then once the tmpfs is set up, mount it over the existing /lib. Would it cause system instability, swapping over halfway through normal usage?

----------

## PhoeniXII

thanks for the great tip,

even though it adds approx 40 sec to my boot-up time,

since I put the whole /usr dir in mem,

but I never had such a responsive system before ^_^

----------

## ChrisCummins

I understand that this is an old thread, but it still seems relevant, so I'll just ask:

 *slick wrote:*   

> Why not try the simplest way: 
> 
> ...
> 
> - if you want to update your system, just umount /usr/lib and /mnt/ramdisk, update the system, and do the stuff above again

 

Following those steps, sure enough I get the blindingly fast application load times, but I am unable to umount /usr/lib64 once I've set it up, even with --force. umount /mnt/ramdisk works, but upon restart all changes to lib64 are lost. Any tips on how to unmount a stubborn tmpfs?

Regards

Chris

----------

## arhenius

Hello Chris

I suppose you are copying /usr/lib* to the ramdisk at boot time using a script in /etc/init.d/local.

If that is your setup, commenting those lines out, rebooting the system, doing the upgrades, then uncommenting those lines and rebooting again will probably work.

I'd like to do this on my laptop; has anyone tried it? How does it affect the battery life?

Cheers

Filipe

----------

## PM17E5

I'm also interested. I already use fstab to mount /var/tmp, /tmp, and /home/user/.mozilla as tmpfs, but I wouldn't mind doing my whole system, since I've got 16 gigs of RAM. Curious what's the best way of achieving this.

----------

## arhenius

 *Quote:*   

> I already use fstab to mount /var/tmp /tmp /home/user/.mozilla 

 

I was thinking of doing that also; would you share your experience on how it affects battery life and system responsiveness?

Cheers

Filipe

----------

## PM17E5

Hmm, to be honest I'm not sure how it affects my battery life; I have an i7 ultrabook so it's already pretty short-lived. Usually when I'm mobile and not doing anything intensive I have /etc/init.d/cpufrequtils turned on. Here's what I put in my /etc/fstab:

```
tmpfs           /tmp                            tmpfs   nodev,nosuid,noexec                     0 0

tmpfs           /var/tmp/                       tmpfs   nodev,nosuid                            0 0

tmpfs           /home/user/.mozilla           tmpfs   nodev,nosuid,noexec                     0 0
```

Then I have two files in /etc/local.d:

mozilla.start:

```
cp -pr /home/user/.mozilla1/* /home/marker/.mozilla/
```

mozilla.stop:

```
rm -rf /home/marker/.mozilla/*
```

You have to make sure they're executable (chmod +x mozilla.start mozilla.stop). I've seen people make really complex scripts for all of this and have 10-page-long discussions on how to do it, but I really don't get why it needs to be so complex. I don't like the idea of using tar to archive and unarchive it every time you start up or shut down, because that just adds more delay. I have an SSD, so copying my Mozilla folder into RAM is pretty much instantaneous.

I kind of like not saving those 3 folders, because it keeps the system a little cleaner. You can modify yours to move it back, and make occasional backups for the accidental shutdown, but I chose to just set up my browser once how I like it and have it cleared every time I reboot, so any browsing data/settings/accidentally-added crap/etc. is gone and I have a freshly set-up browser every time. The only time this becomes annoying is if you get plugin updates and you have lots of plugins.

The reason I clear it at all when I shut down is in case I restart the script while it's running, or if I experience anything funny with my Firefox profile.

But the reason I posted in this thread, is I have 16 gigs and I would actually like to eventually just throw my whole system into ram. But I'm not really sure how safe/good this solution is and what's the best way to do it.

Responsiveness? Firefox flies — it loads in the blink of an eye with zero delay. Emerging packages seems to have gotten a nice speed boost as well. I'd gladly put in ten times as much time as I spent setting these things up for the increase in responsiveness I've obtained  :Smile: .

----------

## Hell-Razor

Are people still using this? Some of us poor fellows who just have normal hard drives and not SSDs may benefit. Anybody that has done this (and has for a long time) have any updates?

----------

## Petros

 *slick wrote:*   

> Why not try the simplest way: 
> 
> - create a ramdisk (greater than du -sh /usr/lib), e.g.: mount none -t tmpfs /mnt/ramdisk
> 
> - copy all files from /usr/lib to the ramdisk, e.g.: cp -a /usr/lib/* /mnt/ramdisk
> ...

 

Why unmount those directories? Couldn't someone just write the changes back to the actual fs with sync or something? You know, sync the contents of the disk with those on the ramdisk.
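One way to do that syncing without unmounting — a sketch using rsync, where the paths are examples and running it from cron or a loop gives "near real time" rather than real time:

```shell
#!/bin/sh
# Mirror the ramdisk copy back onto the on-disk directory.
# --delete removes files from the disk copy that were deleted in RAM,
# so the disk ends up an exact mirror of the ramdisk contents.
sync_ram_to_disk() {
    rsync -a --delete "$1/" "$2/"
}

# example: sync_ram_to_disk /mnt/ramdisk /usr/.lib
```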

----------

## Petros

```
#!/bin/sh

mkfs -t ext2 -q /dev/ram50 180000

[ ! -d /ramlib64 ] && mkdir -p /ramlib64

mount /dev/ram50 /ramlib64

/bin/cp -r /lib64/* /ramlib64

mount -o bind /ramlib64 /lib64
```

This content is under /etc/local.d/initramlib64.start

It doesn't work: Linux doesn't let me log in. As soon as I give the root name, it prints "Login incorrect". Before that, during boot, it rants about "No space left on device" or something like that while copying (I suppose). My /lib64 is about 150MB and my ramdisk about 180MB. What am I missing?

----------

## Petros

 *Petros wrote:*   

> 
> 
> ```
> #!/bin/sh
> 
> ...

 

I discovered that I had a directory /lib64/ramdisk/{the whole lib64 dir from a previous mv}. This explains the "No space left" message. 

I tried to do this:

```
mkfs -t ext2 -q /dev/ram50 180000

[ ! -d /mnt/lib64 ] && mkdir -p /mnt/lib64

mount /dev/ram50 /mnt/lib64

/bin/cp -r /lib64/* /mnt/lib64

mount -o bind /mnt/lib64 /
```

But it wiped my /lib64. Fortunately I had a squashed backup and restored it from my Arch.
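For the record, a corrected sketch of the script above: the final bind mount has to target /lib64 itself, not / (binding over / shadows the whole tree), and cp -a rather than cp -r preserves ownership, permissions, and timestamps, which matters for system libraries. tmpfs here is an assumption, replacing /dev/ram50 so no mkfs or fixed block count is needed:

```shell
#!/bin/sh
# sketch of /etc/local.d/initramlib64.start
mkdir -p /mnt/lib64
mount -t tmpfs -o size=200m tmpfs /mnt/lib64
cp -a /lib64/* /mnt/lib64
mount -o bind /mnt/lib64 /lib64
```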

----------

