# tmpfs larger than ram

## Nicias

I have a transcoding machine with 2GB of ram.

When I transcode, I use a two-step procedure, so I need to store the intermediate file somewhere. It can vary from about 300MB to 4GB (or, rarely, larger). Typically the files are either about 300MB or about 1.5-2.5GB.

Right now I have tmpfs set up with 1.9GB for the temp file. 

My transcoding script checks the duration to guess whether the resulting file will be smaller or larger than that 1.9GB, and if it will be larger, puts the temp file on disk.

Would there be any improvement in making the tmpfs larger (maybe 6GB) and adding swap to back it up? I'm mostly thinking about files in the 1.9-2.5GB range.

If I have, say, 6GB of tmpfs, with 2GB of RAM and 4GB of swap, what happens when I write 2.5GB of data to the tmpfs and then read it back sequentially? (My machine actually only uses about 400MB, not counting caches.)

Does it keep the first ~2GB in RAM, so that only ~0.5GB hits disk?

Or does it keep the most recently written ~2GB in RAM, so that as the 2.5GB is sequentially written and then read back, all of it ends up hitting the disk?
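For concreteness, the setup and access pattern I'm asking about would look something like this (mount point hypothetical):

```shell
# tmpfs three times physical RAM; anything beyond what fits in RAM
# has to go to swap (run as root, mount point is just an example)
mount -t tmpfs -o size=6G tmpfs /mnt/transcode

# Write 2.5G sequentially, then read it back sequentially
dd if=/dev/zero of=/mnt/transcode/temp.bin bs=1M count=2560
dd if=/mnt/transcode/temp.bin of=/dev/null bs=1M
```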

----------

## toralf

 *Nicias wrote:*   

> Would there be any improvement to making the tmpfs larger (maybe 6GB) and adding swap to back that up.

 No

----------

## 1clue

RAM in that quantity is pretty cheap.  Is there a hardware limit at 2G?  Personally I'd bump it to 8G and keep tmpfs at 7G.  If you're doing this professionally, it will be cheaper than spending your time trying to hack around the limit.

Swap is a safety net to keep the system from falling over when more memory is required than is physically available.  Every time swap is used it imposes a stiff penalty in speed, and possibly in other ways, depending on system load.

Tmpfs uses that same virtual memory mechanism to serve files from RAM instead of disk, which provides a serious speed improvement, unless the tmpfs spills into swap, in which case you're back to those stiff penalties.

The virtual memory system decides which pages can be swapped out, then picks what it considers the least likely page to be used next; that page gets swapped out.  This system is actually pretty good, and it has been tuned over several generations of processors.  But it's still no replacement for actual RAM.
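You can watch what those decisions cost you with the standard Linux counters; `Shmem` in `/proc/meminfo` is where resident tmpfs pages show up:

```shell
# Snapshot of memory/swap state (standard Linux /proc interface):
# Shmem counts tmpfs pages currently resident in RAM;
# SwapTotal minus SwapFree is how much has spilled to swap.
grep -E '^(MemTotal|Shmem|SwapTotal|SwapFree):' /proc/meminfo
```

Run it before and after your transcode and compare the swap numbers.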

In my professional opinion, designing a system such that it will regularly use swap is a huge mistake.  It leads to early system failure, and leaves no overhead for expansion.

http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=8gb+ram

----------

## Tony0945

1clue is correct. Using tmpfs with a large swap can be advantageous for occasional use, but for regular use there is no substitute for real memory.

I use tmpfs for /var/tmp/portage. I have one package that requires 12G. My largest system has 8G. I can either unmount the tmpfs or keep it with a huge swap file. I have built that package both ways, and using available memory plus swap is almost twice as fast as pure disk. Even so, I plan on 16GB for my next system.

All four slots are full on the 8G box, so I am reluctant to switch. I did switch on another box, going from 1G to 4G (2 slots). It's a lot more sprightly.

In your case, knowing that you will regularly use swap, I would consider buying the memory. The 4G I bought direct from Crucial this May was guaranteed for life and guaranteed compatible, for $59.49 including Illinois' high sales tax and shipping. Cheaper memory is available. If you can spare $100 and have the slots, load it to the gills.

If you are going to go the tmpfs route without additional memory, I strongly suggest creating a big swap file rather than re-inventing the wheel with your own swap algorithm. Let the kernel do its thing.
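Something like this is all it takes (path and size are examples; run as root):

```shell
# Create a 4G swap file; plain dd works on any filesystem
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile   # mkswap warns about insecure permissions otherwise
mkswap /swapfile
swapon /swapfile

# To make it permanent, add a line like this to /etc/fstab:
# /swapfile  none  swap  sw  0 0
```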

----------

## s4e8

An over-committed tmpfs only swaps out the over-committed part, while a regular filesystem will write back all of the file data, so tmpfs should save some disk IO.

But if you need to read the files multiple times, tmpfs may swap the same pages in and out repeatedly, whereas a regular filesystem needn't write anything out when the data is clean.

----------

## Nicias

Thanks for all of the feedback.

The machine is an old laptop being used as a fileserver/transcoder, and the RAM can't be expanded.  I set up a swapfile and made the tmpfs larger, then dd'ed from /dev/zero to a file in the tmpfs and dd'ed that file back to /dev/null to test the sequential read/write speed. For a 3G file it was faster to the tmpfs than to the harddrive.
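For reference, the test was essentially this, parameterized (the real run used a ~3G file on the tmpfs; the defaults here are small so it runs anywhere):

```shell
#!/bin/sh
# Sequential write/read benchmark: point FILE at the tmpfs (or the
# disk) and compare the throughput dd reports on stderr.
FILE=${FILE:-/tmp/tmpfs-bench.bin}
SIZE_MB=${SIZE_MB:-64}

# Sequential write
dd if=/dev/zero of="$FILE" bs=1M count="$SIZE_MB"

# Sequential read back
dd if="$FILE" of=/dev/null bs=1M

rm -f "$FILE"
```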

So I think I'll keep the swap/tmpfs hack; 90% of my files turn out to be under 2G anyway, and this keeps the script simpler.

Real RAM would be better, and when I replace this machine it will be with one with at least 8G of RAM.

----------

## TigerJr

Swap is only needed once you've used all your RAM. But if you place swap-backed storage in RAM, you use that RAM up more quickly, and once it's exhausted the swap can't save you.

----------

## khayyam

 *Nicias wrote:*   

> Would there be any improvement to making the tmpfs larger (maybe 6GB) and adding swap to back that up. I'm mostly thinking about files in the 1.9-2.5 GB range.

 

Nicias ... I'm not sure the tmpfs will really improve your encoding speed, for the following reason:

By allotting system RAM to tmpfs, that RAM is removed from memory management. Yes, reads/writes to tmpfs will be faster, but the page cache is such that "a read() from a file on disk [...] is read into memory, and goes into the page cache [... a] second read of the same area in a file, the data will be read directly out of memory". As encoding is mostly processor-intensive, the read/write of the file(s) is most likely relatively slow anyway (the processing is the bottleneck), so you may actually be better off not keeping the file in tmpfs, and instead leaving that RAM available to MM.

I would be surprised if the read/write speed of a HD caused any noticeable difference in the speed of the encode (again, the bottleneck being the processing). That processing may benefit more from the extra RAM than it would from the faster I/O of having the file on tmpfs.
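A crude way to see the page cache at work (file and sizes are just examples; dropping caches needs root):

```shell
# Write a file, evict cached pages, then read it twice: the first read
# hits the disk, the second is served straight from the page cache.
dd if=/dev/zero of=/tmp/pagecache-demo bs=1M count=256 conv=fdatasync
sync
echo 3 > /proc/sys/vm/drop_caches   # root only: drop clean caches

time dd if=/tmp/pagecache-demo of=/dev/null bs=1M   # cold read, from disk
time dd if=/tmp/pagecache-demo of=/dev/null bs=1M   # warm read, from RAM
rm -f /tmp/pagecache-demo
```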

That would be my hypothesis, I've not tested it in practice.

best ... khay

----------

## Nicias

Hmm, that's a good point.

I was mostly trying to limit the IO to the disk; I suppose I was using "speed" as a proxy for "amount of IO to disk."  This transcoding happens in batches, so there is no actual hurry.

The machine is on the old side (~8 years), so I want to limit the amount of writing to the drives. The media is stored on an external drive. I didn't want to put the temp files on the internal drive because I don't want to wear it out, and I don't want to put them on the external drive, to limit thrashing. That leaves the tmpfs.

----------

## mirekm

You can try zram instead of tmpfs.

Please try:

http://gpo.zugaina.org/sys-block/zram-init
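If you want to see what zram-init automates, a minimal manual setup looks roughly like this (device name and sizes assumed; the kernel needs zram support):

```shell
# Load the module and create one compressed-RAM block device (root required)
modprobe zram num_devices=1

cat  /sys/block/zram0/comp_algorithm        # list available compressors
echo lzo > /sys/block/zram0/comp_algorithm  # pick one before setting the size
echo 2G  > /sys/block/zram0/disksize        # uncompressed capacity

# Use it as a filesystem for the temp files
mkfs.ext4 /dev/zram0
mount /dev/zram0 /mnt/zram
```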

----------

## Nicias

 *mirekm wrote:*   

> You can try zram instead of tmpfs.
> 
> Please try:
> 
> http://gpo.zugaina.org/sys-block/zram-init

 

I don't think that would help. Most of what I am storing is compressed video, so it's not really compressible.

----------

