# SSD: which filesystem?

## xavier10

Hello,

I am considering an SSD-based laptop, on which I would install Gentoo. I am wondering about support issues with SSDs under Linux, and what decisions should be made.

In particular, what about the filesystem? Are all filesystems OK for SSDs, or are some preferable to others?

How do they impact wear on the SSD?

Another thing I am uneasy about is swap: is it OK to swap on an SSD? It seems to me this is a very bad decision due to the wear it may incur on the drive.

For that reason, I am considering having no swap at all (and taking a decent amount of RAM to begin with, of course!).

----------

## massimo

[1] is a list of filesystems which you might want to consider. Since the Eee PC community (the small ASUS notebooks, many of which ship with SSDs) deals with similar questions, you might want to browse their wikis and forums for opinions on running an OS (e.g. Linux) on an SSD.

[1] http://en.wikipedia.org/wiki/List_of_file_systems#Flash_memory_.2F_solid_state_media_file_systems

----------

## jesnow

The idea of an SSD is that it mimics a normal drive, and it can. I used ext3 and it worked fine: I get 123 MB/s throughput on my OCZ SSD, as opposed to 80 MB/s on my Samsung Spinpoint 500 GB SATA drive. Some things are *much* faster, though, and watching a silent `emerge --sync` is cool.

BUT I'd also be curious what the *optimum* filesystem would be.

 *xavier10 wrote:*   

> Hello,
> 
> I am considering an SSD based laptop, which I would install Gentoo on. I am wondering about support issues with SSD drives and Linux, and what decisions should be made.
> 
> In particular, what about the filesystems ? Are all filesystems ok for SSDs or are some preferable to others ?
> ...

 

----------

## drescherjm

 *Quote:*   

> Another thing I feel bad about is the swap: is it ok to swap on an SSD ? It seems to me this is a very bad decision due to the wear it may incur on the drive ? 

 

I would not worry about this. All decent SSDs have wear leveling, which generally increases the life of the device to three years or more of continuous (24/7) writing.

----------

## Dagger

I'm using 4 SSDs (OCZ 128 GB) in a RAID 1+0 array for a PostgreSQL database.

SSDs aren't super fast for sustained reads (only around ~250 MB/s) compared to a SAS array (~320 MB/s), but the random access time is awesome (~2 ms compared to ~12 ms).

I've got over 500 million records in my database, and very complicated queries take around ~10 minutes on the SAS array and ~4 minutes on the SSDs.

I tested these drives before I put them into the array, and they behave very similarly to traditional drives. So, generally speaking, the FS choice should depend on your needs.

I needed very fast random access, so I chose ReiserFS. For general use, the ext family or XFS should be good.

As drescherjm mentioned, I wouldn't really worry about lifetime, because the current generation of drives has a longer lifetime than the average laptop.

----------

## pdw_hu

I'll be getting an OCZ SSD for my laptop soon.

1. What do you think about ext4 without a journal? As it's a laptop, the only data loss it might endure is near lock-ups, but SysRq can usually at least do a sync before rebooting.

2. Move /usr/portage to the now-freed HDD, connected through USB (or FireWire).

3. Mount /var/tmp/portage on tmpfs.

4. Use noatime.

Any other tips?
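For items 1, 3 and 4, here is a rough sketch of what I have in mind (device names and sizes are just placeholders, not tested settings):

```
# Item 1: create ext4 without a journal at mkfs time:
#   mkfs.ext4 -O ^has_journal /dev/sda2
#
# /etc/fstab fragment for items 3 and 4 ("/dev/sda2" and "size=2G" are examples):
/dev/sda2   /                 ext4    noatime           0 1
tmpfs       /var/tmp/portage  tmpfs   size=2G,noatime   0 0
```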

----------

## OneOfOne

I'd recommend btrfs and mount with compress,ssd.
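As an fstab sketch (the device and mount point are placeholders; compress and ssd are the btrfs mount options meant above):

```
/dev/sda2   /   btrfs   compress,ssd,noatime   0 0
```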

----------

## s4e8

 *OneOfOne wrote:*   

> I'd recommend btrfs and mount with compress,ssd.

 

btrfs will eat your SSD very quickly. It writes data constantly, even on an idle system.

----------

## HeissFuss

btrfs will also eat your children.  It's not very stable atm.

----------

## poly_poly-man

ext2 or minix.

----------

## d2_racing

Ext2; in fact, you don't need to journal the changes.

----------

## ial

And what about Reiser4?

Especially with its compression enabled: will the transparent compression shift the burden away from stressing the actual physical medium and into operations within the buffer/RAM? I mean, files are compressed significantly in the FS buffer before a write, so in the end much less data is physically engraved on the SSD medium and far fewer NAND cells are worn. Is that correct?

 *s4e8 wrote:*   

> btrfs will eat your SSD very quickly. It write data constantly even in an idle system.

 

I hope Reiser4 does not behave that badly...?

BTW, what does the "SSD mode" option mean in btrfs, and how does it improve wear friendliness? Will btrfs with this option wear the physical media more or less than ext2?

----------

## Need4Speed

 *s4e8 wrote:*   

>  *OneOfOne wrote:*   I'd recommend btrfs and mount with compress,ssd. 
> 
> btrfs will eat your SSD very quickly. It write data constantly even in an idle system.

 

This is just not true. Modern SSDs have WEAR LEVELING! This means you can write to them constantly, 24/7, for many YEARS before they fail. Most SSDs now have longer MTBFs than traditional hard disks.

----------

## ial

 *Need4Speed wrote:*   

> Modern SSDs ... you can write to them constantly 24/7 for many YEARS before they will fail.  Most SSDs now have longer MTBFs than traditional hard disks.

 But does that apply to MLC as well?

Does "most" maybe mean "the most expensive ones"?

However, please address the issue of the suitability of particular filesystems for SSDs. Is it true that all advanced filesystems have advantages that are only visible on HDDs, i.e. are purely aimed at easing the limitations of traditional spinning platters (access/seek time), so that it is now pointless to use them on an SSD?

 *d2_racing wrote:*   

> Ext2, in fact, you don't need to log the change.

 So maybe you should read this: "Journaled filesystems will definitely exercise the wear leveling firmware, but so will ext2. The metadata and file data blocks are in a fixed location and use small block sizes. So, metadata-heavy workloads will hammer on the SSD either way."

So would minix be the best for an SSD, indeed?

And also, what about NILFS? "NILFS: A File System to Make SSDs Scream"  ;)

----------

## Need4Speed

I just switched over to NILFS2 for my root filesystem and fixed the partition alignment to 512 KiB. It has made a HUGE difference in write speeds!

Random writes used to be glacial and would sometimes cause my system to hang for 10-30 seconds if an fsync was forced.

I am using a "Gen 1" Ridata SSD, so NILFS2 is probably helping me more than it would if you owned something like Intel's X25-M, which has "better" firmware that tries to hide some of the SSD's characteristics from the filesystem. But I would still give NILFS2 a try if you want faster writes.

SSD Alignment Info:

http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/

http://www.ocztechnologyforum.com/forum/showpost.php?p=373226&postcount=98
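In case it helps anyone, this is how I'd compute the alignment in sectors (a sketch; it assumes 512-byte logical sectors, which you can check with `blockdev --getss`):

```shell
# 512 KiB erase-block alignment expressed in 512-byte sectors
ERASE_KIB=512
SECTOR_BYTES=512
ALIGN_SECTORS=$(( ERASE_KIB * 1024 / SECTOR_BYTES ))
echo "$ALIGN_SECTORS"    # 1024: start the first partition at sector 1024 in fdisk
```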

----------

## energyman76b

Reiser4 is always the right answer: moving journal, compression, and it's fast...

----------

## Grubshka

 *Need4Speed wrote:*   

> I just switched over to NILFS2 for my rootfs and fixed the partition alignment to 512k.  It has a made HUGE difference in write speeds!  
> 
> Random writes used to be glacial and would sometimes cause my system to hang for 10-30 seconds if a fsync were forced.
> 
> I am using a "Gen 1" Ridata SSD, so NILFS2 is probably helping me more than if you owned something like Intel's X25-m, which has "better" firmware that tries to hide the some of the SSD's characteristics from the filesystem.  But I would still give NILFS2 a try if you want faster writes.
> ...

 

I read that NILFS performs a lot of disk operations, which would be bad for SSD longevity. Is this true?

(This may depend on what people mean by "longevity": I don't expect to use my laptop for more than 10 years.)

I visit the webstore every day; I think I'll click "buy" one day, when I've decided which one to buy...

----------

## energyman76b

http://blogs.gentoo.org/nightmorph/2009/08/09/ssds-and-filesystems-part-2

ext4 is crap.

----------

## wildhorse

Patriot Memory is backing its new Torqx M28 Series SSDs with a 10-year warranty. Is there any IDE (PATA/SATA) HDD available with anything close to a 10-year warranty?

About the pagefile, I say no pagefile is the best pagefile. Go for RAM.

----------

## energyman76b

You need a 'pagefile' to be able to overcommit, which you need if you really want to make full use of your RAM. It doesn't have to be big; it just has to be there.

----------

## bassai

I'm running ext4 on an SSD. It has worked fine for 3 months now.

I mounted /var/tmp/portage on tmpfs to compile in RAM.

Furthermore, I increased my sync interval.
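For reference, the sync interval can be raised via sysctl; a sketch (the values below are examples, not necessarily what I use):

```
# /etc/sysctl.conf fragment: flush dirty pages less often to batch SSD writes
vm.dirty_writeback_centisecs = 1500
vm.dirty_expire_centisecs = 3000
```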

----------

## pdw_hu

 *bassai wrote:*   

> I'm running ext4 on a SSD. This works fine for 3 months now.
> 
> I mounted /var/tmp/portage on tmpfs to compile in RAM.
> 
> Furthermore I increased my sync interval.

 

Same here, and I am using data=writeback. For some reason s2ram doesn't work (journal problems), but as I don't really need it, I haven't dug into that issue.

----------

## runem

 *pdw_hu wrote:*   

>  *bassai wrote:*   I'm running ext4 on a SSD. This works fine for 3 months now.
> 
> I mounted /var/tmp/portage on tmpfs to compile in RAM.
> 
> Furthermore I increased my sync interval. 
> ...

 

According to http://patchwork.ozlabs.org/patch/49687/, data=writeback and TRIM do not mix well, so I have changed to data=ordered. Using the deadline I/O scheduler with fifo_batch=1 seems to work well.
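In case anyone wants to replicate this, a boot-script sketch ("sda" is a placeholder; adjust for your device):

```
# e.g. /etc/local.d/ssd.start on Gentoo: select the deadline elevator
# and shrink its batch size
echo deadline > /sys/block/sda/queue/scheduler
echo 1 > /sys/block/sda/queue/iosched/fifo_batch
```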

----------

## tnt

 *runem wrote:*   

> Using deadline io-sched wtth fifo_batch=1 seems to work well.

 

Have you noticed any performance difference over the default fifo_batch=16?

I've read somewhere that recent kernels have some SSD-related optimizations for CFQ, but unfortunately I've forgotten where.

Can anyone confirm that?

----------

## runem

 *tnt wrote:*   

> 
> 
> have you noticed any performance difference over default fifo_batch=16 ?
> 
> I've read somewhere that recent kernels have some ssd-related optimizations for cfs, but unfortunately I've forgot where. 
> ...

 

I have just made a test with tiobench, and the default of 16 was slightly better.

If /sys/block/<device>/queue/rotational is set to 0, then the scheduler works better with SSDs (and USB sticks, for that matter).

----------

## tnt

 *runem wrote:*   

> I have just made at test with tiobench and the default of 16 was slightly better 

 

aha. thx.

 *runem wrote:*   

> If /sys/block/<device>/queue/rotational is set to 0 then the scheduler works better with SSDs and USB-sticks for that matter.

 

But does it still lag behind deadline?

----------

## darkphader

 *energyman76b wrote:*   

> http://blogs.gentoo.org/nightmorph/2009/08/09/ssds-and-filesystems-part-2
> 
> ext4 is crap.

 

Guess you stay away from everything Google then:

http://goo.gl/m7KA

----------

## energyman76b

 *darkphader wrote:*   

>  *energyman76b wrote:*   http://blogs.gentoo.org/nightmorph/2009/08/09/ssds-and-filesystems-part-2
> 
> ext4 is crap. 
> 
> Guess you stay away from everything Google then:
> ...

 

Read again: the only reason was the easy migration from ext2... they even skipped ext3, which should tell you something.

----------

## runem

 *tnt wrote:*   

>  *runem wrote:*   I have just made at test with tiobench and the default of 16 was slightly better  
> 
> aha. thx.
> 
>  *runem wrote:*   If /sys/block/<device>/queue/rotational is set to 0 then the scheduler works better with SSDs and USB-sticks for that matter. 
> ...

 

If the test is made with a single thread, then it is almost a tie. With the default 4 threads, deadline wins: the write performance is better, but read performance is about the same.

----------

## darkphader

 *energyman76b wrote:*   

> read again, the only reason was the easy migration from ext2...

 

Not the only reason, just the main reason for picking EXT4 over XFS.

----------

## energyman76b

 *darkphader wrote:*   

>  *energyman76b wrote:*   read again, the only reason was the easy migration from ext2... 
> 
> Not the only reason, just the main reason for picking EXT4 over XFS.

 

And if you read your own link: XFS was faster. Hmmm... faster, better tested. Sure, when you have the hardware Google has, ext4's fucking brokenness might not be a problem...

----------

## darkphader

 *energyman76b wrote:*   

> ...ext4's fucking brokenness...

 

Sorry, but some anecdotal evidence is not convincing proof that ext4 is broken.

----------

## tnt

 *runem wrote:*   

> If the test is made with a single thread then it is almost a tie. With the default 4 threads deadline wins. The write performance is better but read performance is the about the same

 

thx a lot!

I guess the majority of desktop users depend more on read performance for interactivity, but I'll stay with deadline for the time being.

----------

## Uzytkownik

No one seems to have mentioned it: logfs.

----------

## drescherjm

I have 10 TB+ on ext4 at work. Not a single problem at all. However, at work I do not upgrade the kernel every month...

----------

