# TIP: emerge speed-up, compile in RAM not on disc.

## funkyade

Just had a 'why didn't I think of that before!' moment that I would like to share.

After testing this on a couple of emerges I am quite confident it works AS LONG AS YOU HAVE ENOUGH RAM - enough is 768M or more, although it may be okay on 512M with a minimal desktop or X-less server. Tested on three machines (768M, 1G, and 3G of RAM). The emerge of xorg-server went from 1.5 hours to about 20 minutes on an athlon-xp, for example.

Portage uses /var/tmp/portage (by default) as its working directory; everything is built in there before it gets merged to /. So, why not stick /var/tmp/portage in RAM? It's a tmp directory after all, and it saves all that I/O bottleneck...

Interested? Okay, here's what you have to do...

(I assume you are su'd to root; if not, prepend sudo to all the following commands)

```
# nano /etc/fstab
```

add the following line to the end -

```
none    /var/tmp/portage    tmpfs    nr_inodes=1M    0 0
```

before we mount it, you may want to clear out your on-disc /var/tmp/portage -

```
# rm -fdr /var/tmp/portage/*
```

mount your new tmp directory -

```
# mount /var/tmp/portage
```

try an emerge

```
# emerge freeciv
```

quick, wasn't it?  :Wink: 

notes:

1. I haven't had more than 130M used on an emerge yet - that was on an update (emerge -u world). However, I am not sure of the upper limits of the size of this directory during big/multiple emerges. It depends on how often portage/emerge clears it out. Anybody know, and can this behaviour be changed?
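If anyone wants to measure this for themselves, here is a crude high-water-mark logger you can run in a second terminal while an emerge is going. Just a sketch - it assumes df prints each filesystem on a single line:

```shell
# poll the tmpfs every 5 seconds and remember the biggest usage seen
# (run in another terminal during the emerge; stop it with ctrl-c)
peak=0
while sleep 5; do
    used=$(df -m /var/tmp/portage | awk 'NR==2 { print $3 }')
    [ "$used" -gt "$peak" ] && peak=$used
    echo "now: ${used}M   peak: ${peak}M"
done
```

The awk just grabs the "Used" column (in MB) from the second line of df's output.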

2. If you want to set an exact size for this you can change the entry in your fstab to -

```
none    /var/tmp/portage    tmpfs    nr_inodes=1M,size=256M    0 0
```

I would not go below 256M for safety's sake!!! Saw this variant on the Jackass! forums.

--------

ADDENDUM:

1. creating a tmpfs seems to default to 50% of your total RAM size. Not to worry, as it doesn't use much if it isn't populated with files. I'm not sure precisely what will happen if it fills up...  :Shocked:  [EDIT: the emerge fails with "No space left on device"! Set size=100M and emerged mysql; mysql peaks at around 140M.]
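If you want to see what that 50% default works out to on your machine: MemTotal in /proc/meminfo is in kB, so halving it looks like this -

```shell
# tmpfs with no size= option defaults to half of physical RAM;
# print what that default would be on this machine
awk '/MemTotal/ { printf "%dM\n", int($2 / 1024 / 2) }' /proc/meminfo
```

On a 1G box this prints 512M, matching what you would see in df for a tmpfs mounted without a size= option.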

[EDIT] 1a. It would be useful to know what the biggest package in portage is, and how much space it uses at its peak when emerged. If we added, say, 10% to that figure then that would give us our maximum acceptable size for the tmpfs.

2. Most of the space is taken up when emerge unpacks a dist tarball into /var/tmp/portage. It only seems to unpack one package at a time, and then removes it when it has finished. This behaviour may differ depending on your setting for MAKEOPTS in make.conf; I can't confirm this as I have only tested it with "-j2" (the default).

3. There's an entry on the WIKI that I missed that has a very cool script that turns this on and off when you need it: http://gentoo-wiki.com/TIP_Speeding_up_portage_with_tmpfs

Last edited by funkyade on Tue Feb 20, 2007 1:26 pm; edited 3 times in total

----------

## _droop_

Hi,

This is a good idea.

 *funkyade wrote:*   

> I would not go below 256M for safety's sake!!! Saw this variant on the Jackass! forums.

 

In the documentation of tmpfs, there is a warning:

 *Quote:*   

> size:      The limit of allocated bytes for this tmpfs instance. The
> 
>            default is half of your physical RAM without swap. If you
> 
>            oversize your tmpfs instances the machine will deadlock
> ...

 

So you should not increase this parameter too much.

Of course, for some packages (openoffice), tmpfs should be disabled:

```
umount /var/tmp/portage
```

----------

## funkyade

I agree. Thanks.

Had added to the original post before I read yours...

----------

## lost+found

Create some (more) swap. When the tmpfs fills up, all the other memory-using apps might need it, and swap prevents the machine from crashing. The last mounted swap partition/loop file gets a lower priority, so it would only be used in case of emergency...
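For the record, a low-priority emergency swap file can be set up roughly like this. A sketch only, run as root - the file name /swapfile.emergency is just an example:

```shell
# create a 1G file, format it as swap, and add it at the lowest
# priority so it is only touched once the existing swap is exhausted
dd if=/dev/zero of=/swapfile.emergency bs=1M count=1024
chmod 600 /swapfile.emergency
mkswap /swapfile.emergency
swapon -p 0 /swapfile.emergency

# and in /etc/fstab to make it permanent:
# /swapfile.emergency   none   swap   sw,pri=0   0 0
```

`swapon -s` afterwards will show the priorities, with the emergency file at the bottom.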

----------

## funkyade

You are suggesting having more than one swap partition, or enlarging what you have?

On my desktop machine (this one) my swap is on another physical drive (3xIDE and 3xSCSI drives, swap is on an old 4G SCSI); on my server (a redundant laptop with a broken screen) it's a partition on the single hard disk (500M, with 1G RAM). I've never had swap be more than 25% full... I have swappiness=25 in /etc/sysctl.conf.

Sounds like a good idea all the same!

Thanks.

----------

## lost+found

 *funkyade wrote:*   

> You are suggesting having more than one swap partition, or enlarging what you have? ...

 

Both are OK. The Gentoo installation default is 512 MB swap. If you don't want to repartition, and knowing it will hardly be used, it can be created on an old slow drive, or as a loopback file somewhere... Maybe another 512-1024 MB would be safe (perhaps the same amount portage will eat out of your RAM).

----------

## funkyade

ok.

Got 4G on my desktop machine on one drive, purely because, as you said, it's an old and slow drive and there was little point in partitioning it further. This is massive overkill as I've got 3G of RAM on the machine too.

Going to try compiling openoffice as a benchmark later.

Testing a server setup in a VM on another machine at the moment, to which I have only allocated 320M of RAM (to replicate another old laptop); used 'size=256M' in fstab, and so far it has been stable and compiles are nice and quick too. Host is an athlon-xp 2600 running Gentoo.

results are (forgot to add 'time' before I ran them... doh! so have just been looking at the clock) -

ran this first -

```
# emerge --fetchonly apache php mysql
```

to avoid waiting on package downloads later. Made a snapshot in VMware.

with no tmpfs for /var/tmp/portage

```
# emerge apache php mysql
```

about 4 hours on a fresh install.

Then went back to snapshot I made earlier.

with tmpfs for /var/tmp/portage

```
# emerge apache php mysql
```

about 1.5 hours on a fresh install.

I'm as happy as Larry... (and he's as pleased as punch)...  :Smile: 

----------

## JeliJami

emerging openoffice requires 5G of disk space!

also see:

https://forums.gentoo.org/viewtopic-t-469758.html

https://forums.gentoo.org/viewtopic-t-491962.html

http://gentoo-wiki.com/TIP_Speeding_up_portage_with_tmpfs

https://forums.gentoo.org/viewtopic-t-478658.html

----------

## i92guboj

This is nice for compiling small things (yeah, xorg is not that big). But it will fail to compile most big things - to name a few: wine, kmail or openoffice. If you have enough swap, the compilation might succeed, but it will take much longer than doing it the traditional way, because the swapping will kill the cpu and disk i/o.

----------

## funkyade

 *Quote:*   

> emerging openoffice requires 5G of disk space!

 

ouch! Haven't done it as I use Abiword. Maybe not one to try...!

 *Quote:*   

> http://gentoo-wiki.com/TIP_Speeding_up_portage_with_tmpfs

 

weird, didn't find that when I searched the WIKI to see if this was common knowledge...

 *Quote:*   

> This is nice for compiling small things (yeah, xorg is not that big). But it will fail to compile most big things - to name a few: wine, kmail or openoffice. If you have enough swap, the compilation might succeed, but it will take much longer than doing it the traditional way, because the swapping will kill the cpu and disk i/o.

 

I updated kmail last night without problem (my tmpfs for /var/tmp/portage is about 1.5G though), but I haven't tried it with a fixed smaller size, nor did I check how much space it was using. I just think it means that for some very big apps, like openoffice, you will have to turn it off...  :Sad: 

I think now that we have a more modular kde rather than the old monolithic packages, it's easier to split off the apps, which gives more success.

----------

## lost+found

 *funkyade wrote:*   

> ... kde ...

 Take the kdeenablefinal USE flag into account (it takes lots more RAM)... I emerged the latest kmail recently, with kdeenablefinal on, and saw a big increase in RAM+swap usage (total about 900 MB incl. cache files, running Gentoo non-graphically). Since I've got only 320 MB RAM, this was S-L-O-W. kdeenablefinal produces better optimized code, though...   :Wink:  BTW I'm not using tmpfs for compiling.

----------

## alex.blackbit

sounds like it could become a portage feature in the future.

one could create some mechanism that checks how much RAM is in the machine and lets portage do most things in RAM.

maybe there would have to be some flag or other information in the ebuilds for that.

----------

## sirdilznik

 *alex.blackbit wrote:*   

> sounds like it could become a portage feature in the future.
> 
> one could create some mechanism that checks how much RAM is in the machine and lets portage do most things in RAM.
> 
> maybe there would have to be some flag or other information in the ebuilds for that.

 

I was thinking along those lines myself.  Maybe a script that calculates the size that would be used by an emerge before it starts and compares it to the space allocated to tmpfs, or if that's not feasible, maybe make a list in a text file of packages that are too large.  I'm thinking something along the lines of /etc/portage/package.blacklist or /etc/portage/package.notmpfs.  Portage would check that file and if the package about to be emerged was on the list it would unmount tmpfs, emerge the package, then mount tmpfs afterward.  Does this sound feasible?
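The lookup part of that would be trivial to script. Here is a sketch of the check - note that /etc/portage/package.notmpfs and the whole wrapper mechanism are hypothetical (the file would hold one package atom per line):

```shell
# needs_disk PKG LISTFILE -> succeeds if PKG is listed as too big for tmpfs
# (exact whole-line match, so "wine" doesn't accidentally match "winetricks")
needs_disk() {
    [ -f "$2" ] && grep -qxF "$1" "$2"
}

# a hypothetical wrapper around emerge could then do:
# if needs_disk "$1" /etc/portage/package.notmpfs; then
#     umount /var/tmp/portage && emerge "$1" && mount /var/tmp/portage
# else
#     emerge "$1"
# fi
```

Calculating the needed size up front is the hard part; a static blacklist like this is at least easy.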

----------

## alex.blackbit

the idea itself is great.

but are you sure that /etc/portage is the right place for this(these) file(s)?

normally in this directory the user places some configuration files.

what we want to do in this case is provide portage some information in the tree.

i believe this should go somewhere in /usr/portage.   :Rolling Eyes: 

----------

## devsk

did anybody measure how much faster it makes the emerge? here is the catch: emerge time changes very little (my calculations: 5-10%) between tmpfs and an ext2 filesystem with 2k block size, mostly because compile times are cpu bound not IO bound. the linux kernel doesn't really reach out to disk when you create object files during a compile, unless there is a long gap between compile and link; they will stay in cache if you have enough memory.

the real issue is the contradiction: smaller emerges (which are supposedly helped by this mechanism) do not generate enough IO, and larger emerges can't fit completely into the tmpfs memory. So, there is nothing great about this. test it yourself.

----------

## vinboy

this is me. I use 2GB tmpfs

```
none                    /var/tmp/portage   tmpfs  size=2000M,nr_inodes=1M         0 0
```

things work fine though.

I had a peek at memory & swap in the system-monitor. Swap only gets used when memory is full.

when no emerging takes place, everything memory-related runs in RAM.

i don't see anything wrong with big tmpfs. any ideas?

----------

## yamakawa

 *devsk wrote:*   

> compile times are cpu bound not IO bound. linux kernel doesn't really reach out to disk when you create object files during compile, unless there is a long gap between compile and link. they will stay in cache if you have enough memory.
> 
> the real issue is the contradiction: smaller emerges (which are supposedly helped by this mechanism) do not generate enough IO, and larger emerges can't fit completely into the tmpfs memory. So, there is nothing great about this. test it yourself.

 

Even if this method doesn't contribute a speedup, isn't it better for longer drive life at least? Especially for laptops?

I had my HDD die while emerging a series of KDE packages. Now that I am going to have a new laptop, I think this method could help the HDD somehow.

Any thought?

----------

## devsk

 *yamakawa wrote:*   

>  *devsk wrote:*   compile times are cpu bound not IO bound. linux kernel doesn't really reach out to disk when you create object files during compile, unless there is a long gap between compile and link. they will stay in cache if you have enough memory.
> 
> the real issue is the contradiction: smaller emerges (which are supposedly helped by this mechanism) do not generate enough IO, and larger emerges can't fit completely into the tmpfs memory. So, there is nothing great about this. test it yourself. 
> 
> Even though this method does not contribute speedup, isn't it better for longer drive life at least? Especially for those of laptops?
> ...

 modern drives are made to withstand a lot of abuse. I have a WD ATA100 120GB drive which is 4.5 yrs old (mfgd in jun2002), and which I have hammered pretty badly over the years. It just refuses to die! So the fact that your drive died doesn't mean that drives are not supposed to last a long time. We sometimes feel pity: "oh god, imagine those tiny little heads moving all over the disk to do file seeks and data fetches and writes, doing it every few milliseconds; over time they will surely get tired and die. god forbid we do a 13 hour OO build on it". These "machines" are designed to do exactly that, over extended periods of time!! So don't worry, buy a new drive (I reco. WD for reliability, the only brand that hasn't failed on me, yet!) and let your swap and TMPDIR sit on it. And keep it as cool as you can! Most drives die prematurely because of heat, not because of excessive reads/writes!

anyway, the counter-logic to your argument is that once the RAM fills up (with the portage tmpfs, other programs and the linux buffer cache), and it does fill up fairly quickly on linux, linux will end up using swap to throw out in-memory programs and buffers to make room for compiler temp files written to tmpfs - i.e. you end up doing disk IO anyway. Actually, depending upon how you set up swappiness, linux may start to swap even before all your RAM fills up. So you are no better off than just using disk for the portage TMPDIR.

----------

## alex.blackbit

the harddisk in my internet gateway is a damn old fujitsu 10G drive.

according to S.M.A.R.T. the power-on time is 20811h+38m+54s. well, it still works.   :Very Happy:  and since i use gentoo it has quite a lot to do too, what with an "emerge -uDN world" every couple of days... but i do backups, because it could fail any day.

since you gave "thumbs up" for WD drives... the only brand that has never failed for me is seagate. in my experience they are reliable, silent and fast. as a bonus the model descriptions follow an understandable scheme, the size of the disk is always printed on the sticker (which is not the case with all manufacturers!) and the jumpers are always well described. so, they are my favourites.

----------

## devsk

 *alex.blackbit wrote:*   

> the harddisk in my internet gateway is a damn old fujitsu 10G drive.
> 
> according to S.M.A.R.T. the power-on time is 20811h+38m+54s. well, it still works.   and since i use gentoo it has quite a lot to do too, what with an "emerge -uDN world" every couple of days... but i do backups, because it could fail any day.
> 
> since you gave "thumbs up" for WD drives... the only brand that has never failed for me is seagate. in my experience they are reliable, silent and fast. as a bonus the model descriptions follow an understandable scheme, the size of the disk is always printed on the sticker (which is not the case with all manufacturers!) and the jumpers are always well described. so, they are my favourites.

 actually, seagate itself never failed on me, but now they own Maxtor as well. And maxtor failed on me twice.

----------

## alex.blackbit

yeah, for me too. maxtor is crap.

----------

## Akkara

I had used a 1500MB ramfs on /var/tmp/portage for a long time without problems, including gcc, glibc, wine, kde (but no office). This is with 2GB of physical ram installed.

A few months ago there was some package that needed a lot more space to compile (I don't recall which offhand; it wasn't office, it might have been wxGTK).

So I upped it to my current 3GB, along with 3GB of swap.

The big compile went through fine, at its peak using all of memory and 1.5GB of swap. I didn't check whether it was actually faster than simply unmounting the ramfs and using disk directly, but it did work.

It is still working well.

Just a datapoint to add to the mix.

----------

## yamakawa

Thanks guys for comments!!

Actually the poor dead HDD was in a small space in a small laptop.

The temperature sometimes went up to 60 C for MB and over 50 C for HDD, the sensor said.

I did emerges almost everyday and for every update!!   :Cool: 

When emerging OOo, it took more than two straight days, if it succeeded at all.   :Laughing: 

Maybe I had tortured the poor HDD too much and he simply wanted to kill himself to escape from the weight.

The manufacturers of the HDDs I have used are FUJITSU and HITACHI for laptops. Both died anyway.

The one I reported on was the HITACHI, which I had relied on since the FUJITSU HDD died some years ago.

I chose Maxtor and Seagate for a desktop. They are just fine.

So, I still think Akkara's report is really interesting. My new laptop will have 2GB of physical RAM, so I will try the same and report my benchmark here... in just a few weeks.   :Cool: 

----------

## Torangan

I've used this approach and it did speed up compiles. The data for object files is usually written to disk rather fast, as it's dangerous to keep hundreds of MB only in RAM. It could stay in the read cache, but it's not read again, just written. Therefore the RAM disk approach gains you less disk access and especially helps in the unpack / install stages. Normally that's copying data disk -> RAM -> disk, and now it's disk -> RAM for unpack and RAM -> disk for install. Quite a bit faster, and less fragmenting of the file system.

----------

## FastTurtle

Funkyade:

Interesting Tip that I'll have to try (got 4GB) by doing a world rebuild. Will let you know how it works with kdeenablefinal in make.conf using monolithic builds instead of split packages.

----------

## gary987

This is a hog... I keep a 1G tmpfs ... not big enough for it.. here's hoping 1.5G is..

----------

## enderandrew

 *FastTurtle wrote:*   

> Funkyade:
> 
> Interesting Tip that I'll have to try (got 4GB) by doing a world rebuild. Will let you know how it works with kdeenablefinal in make.conf using monolithic builds instead of split packages.

 

Openoffice might break it.  I thought I read that Openoffice needs 5 gigs to build.

----------

## LD

 *enderandrew wrote:*   

>  *FastTurtle wrote:*   Funkyade:
> 
> Interesting Tip that I'll have to try (got 4GB) by doing a world rebuild. Will let you know how it works with kdeenablefinal in make.conf using monolithic builds instead of split packages. 
> 
> Openoffice might break it.  I thought I read that Openoffice needs 5 gigs to build.

 

The ebuild actually checks for that, and if it doesn't see 5GB free it will error out.

----------

## Cyker

Do we still need to do this?

I notice that the more recent versions of portage support an envar called PORTAGE_TMPFS which by default points to /dev/shm, the well-known standard tmpfs location.

The wording in make.conf.example (the only documentation I can find for this var!) makes it sound like emerge will use the tmpfs location it points to automatically?

----------

## likewhoa

 *Cyker wrote:*   

> Do we still need to do this?
> 
> I notice that the more recent versions of portage support an envar called PORTAGE_TMPFS which by default points to /dev/shm, the well-known standard tmpfs location.
> 
> The wording in the make.conf.example (The only documentation I can find for this var!) makes it sound like emerge will use the tmpfs location pointed to automatically?

 

not if you set that variable, just edit your /etc/fstab and you're good to go.

----------

## ChojinDSL

Having TMP in RAM sounds like a great idea. Wouldn't it be way better, though, if portage supported this directly?

e.g, by having something like 

RAMBUILD=Yes

RAMBUILDSIZE=1G

in "FEATURES" in the make.conf.

So basically portage should try to compile anything in RAM as long as it fits into the size you determine, and if it's too big, it automatically uses HD space.

----------

## Cyker

Is that not what the PORTAGE_TMPFS thing I talked about does???

----------

## likewhoa

 *Cyker wrote:*   

> Is that not what the PORTAGE_TMPFS thing I talked about does???

 

yes, but you still need to specify the RAM size in your /etc/fstab.

----------

## Cyker

 *likewhoa wrote:*   

>  *Cyker wrote:*   Is that not what the PORTAGE_TMPFS thing I talked about does??? 
> 
> yes, but you still need to specify the RAM size in your /etc/fstab.

 

Sorry, yeah, I get that; I was replying to ChojinDSL's idea (as it's already implemented \o/)

EDIT: Oh, wait, you actually HAVE to set a RAM size in fstab or it won't work...??

Because I just tried the PORTAGE_TMPFS= and it didn't do a thing...

Last edited by Cyker on Fri Oct 19, 2007 5:34 pm; edited 1 time in total

----------

## StringCheesian

Except it still doesn't intelligently avoid cases where the package would overflow the allocated space. OpenOffice may check that 5G is available, but other ebuilds don't. It can't ever be a set-it-and-forget-it type of thing until this is solved.

I think the ideal would be a special tmpfs that overflows into a directory or a growing file instead of swap.

----------

## likewhoa

nothing wrong with swap... and if a package fails, most people should just know to comment out the make.conf variable to resume without tmpfs support.

----------

## likewhoa

 *Cyker wrote:*   

>  *likewhoa wrote:*    *Cyker wrote:*   Is that not what the PORTAGE_TMPFS thing I talked about does??? 
> 
> yes, but you still need to specify the RAM size in your /etc/fstab. 
> 
> Sorry, yeah I get that; I was replying to ChojinDSL's idea (As it already implemented \o/)
> ...

 

you will have to umount /dev/shm, edit fstab with the size option, then remount it for it to take effect.
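For what it's worth, on reasonably recent kernels a tmpfs can also be resized in place with a remount, without throwing everything out of /dev/shm first (needs root; the 2G figure is just an example):

```shell
# grow (or shrink) a mounted tmpfs on the fly
mount -o remount,size=2G /dev/shm

# check the new size took effect
df -h /dev/shm
```

Note this only changes the live mount; you'd still edit fstab if you want the size to survive a reboot.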

----------

## StringCheesian

 *likewhoa wrote:*   

> nothing wrong with swap.. and if a package fails most people should just know to comment the make.conf variable to resume without tmpfs support.

 

Swap is usually a lot smaller, and a lot of time can be wasted before you find out you'll need to restart.

----------

## waterloo2005

http://www.funtoo.org/en/articles/linux/ffg/3/

this is about the upper limit of it.

----------

## waterloo2005

when I emerge gentoo-sources, I find it uses above 300M. I set it to 512M.

----------

## waterloo2005

when I emerge texlive 2008, it uses more than 512M of memory, so it errors out.

----------

## wklam

I have been using this tip.  I have 3GB of RAM.

My /etc/fstab:

```
# tmpfs for portage to compile in RAM.
none                    /var/tmp/portage tmpfs nr_inodes=1M 0 0
```

When I do "df", I usually see 1.5GB for /var/tmp/portage.

Today I finally hit the limit with the following package in portage; emerge ran out of space in /var/tmp/portage:

x11-misc/openclipart

I had to umount /var/tmp/portage temporarily and let it uncompress on the actual hard disk.

-William

----------

## gonaked

Thanks for the tip! I'm using it. My hard drives should have a longer life now. Thanks again!

----------

