# SSD tweaking tips?

## Havin_it

Hi,

I just upgraded my netbook with a Kingston V+100 SSD, and am now about the business of seeing how I can make the most of it, i.e. getting the best performance out of it and eliminating unnecessary disk writes. The whole system is on the SSD, including the swap partition (I only have 1G RAM so I need it sometimes), a Truecrypt partition (homedir) and the original WinXP install (but let's not worry about that now  :Wink:  )

I've done a few things already:

1) Added "discard" to the swap entry in fstab

2) Added "-W1" to /etc/conf.d/hdparm

3) Put /tmp and ~/.kde4/tmp-$HOSTNAME/ on tmpfs

4) Turned off disk cache in firefox and konqueror

5) I already use tmpfs as $PORTAGE_TMPDIR when emerging so no problem there.
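For my own reference, the fstab side of 1) and 3) looks roughly like this (device name, paths and tmpfs sizes here are illustrative, not my exact entries):

```
# /etc/fstab (illustrative excerpt)
/dev/sda4   none                        swap    defaults,discard        0 0
tmpfs       /tmp                        tmpfs   size=256M,mode=1777     0 0
tmpfs       /home/me/.kde4/tmp-myhost   tmpfs   size=64M                0 0
```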

Now I'm looking for more tweaks. There are some things I still have issues or uncertainties with:

1) Kernel compilation is a worry. I have a gentoo server on my LAN which can help, but I'm wondering what the best approach would be here. An NFS mount to install the sources onto, and build them there? The server is only pentium3, so I dunno if it's safe to let it do the build with its own toolchain... (then again, I use distcc already so perhaps this amounts to the same thing?)

2) ~/.kde4/cache-$HOSTNAME/ - this has a lot of content, but is it safe to put it on tmpfs? What would be the downside?

3) Any other disk-caches that one could live without, as I'm sure I haven't found them all...

4) Setting IO scheduler = "noop". My system currently uses Deadline, and in my kernel config I only have options for Deadline and CFQ. Am I missing some config dependency there?

Also if anyone can point to any good guides on this sort of thing, that would be great. I've seen a couple already (where I got some of the ideas above) but I bet there are more.

Thanks in advance  :Very Happy: 

----------

## dmitryilyin

Are you sure that the swap "fs" supports discard? AFAIK only ext4 does.

I also heard that the commit=300 (or higher) mount option can be useful.

I would also lower vm.swappiness and raise vm.dirty_background_ratio and vm.dirty_ratio

maybe lower vm.vfs_cache_pressure

and enable vm.laptop_mode

Are you sure that hdparm -W1 works for an SSD?

Also check whether you aligned your partitions correctly and used the optimal block size and stride for the filesystem.
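Something like this in /etc/sysctl.conf (the numbers are just illustrative starting points, not tested recommendations - tune them for your own workload):

```
# /etc/sysctl.conf (illustrative values)
vm.swappiness = 1               # avoid swapping until memory pressure is real
vm.dirty_background_ratio = 20  # start background writeback later
vm.dirty_ratio = 40             # allow more dirty pages before forced writeback
vm.vfs_cache_pressure = 50      # keep dentry/inode caches around longer
vm.laptop_mode = 5              # batch disk writes together
```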

----------

## Havin_it

Hi dmitryilyin, thanks for the reply   :Very Happy: 

I'll do your points one-by-one.

 *dmitryilyin wrote:*   

> Are you sure that the swap "fs" supports discard? AFAIK only ext4 does.

 

I don't have the reference (most of what I read was Ubuntu-based) but it said that swap was at the time the only FS that made use of it. It said the test of whether it works would be that the line found through "dmesg | grep -i swap" would end in "SSD" instead of "SS", which it did when I made this change. Of course I have no idea what any of this means...   :Embarassed: 

 *dmitryilyin wrote:*   

> I also heard that the commit=300 (or higher) mount option can be useful.

 

I will have to look this one up. I also heard about noatime, but I already had that anyway.

 *dmitryilyin wrote:*   

> I would also lower vm.swappiness and raise vm.dirty_background_ratio and vm.dirty_ratio
> 
> maybe lower vm.vfs_cache_pressure
> 
> and enable vm.laptop_mode

 

These should all map to nodes in /proc/sys/, right? I'll explore these.

On this subject, one of the items I read (again, no reference, sorry) mentioned using sysfs-utils and putting directives in /etc/sysfs.conf (Ubuntu again), but when I emerged the named package it didn't create this file so I'm not sure if I'm on the right track there at all.

 *dmitryilyin wrote:*   

> Are you sure that hdparm -W1 works for an SSD?

 

Well, when I issued it in bash it changed the value without error, so I guess so.

 *dmitryilyin wrote:*   

> Also check whether you aligned your partitions correctly and used the optimal block size and stride for the filesystem.

 

Honestly not sure. I did my partitioning in gparted from a liveusb, and the option was to align to MiB or cylinders (I chose MiB based on some search results). Block size, no idea. How would I tell?

----------

## dmitryilyin

Well, you should usually align partitions to a multiple of 2048 sectors; that also holds for 4 KiB-sector HDDs like the WD Green, and it is the default for recent versions of fdisk. (I'd also advise using a GUID partition table.)

Stride and stripe size should be a factor of the SSD erase block size, but this is semi-arcane stuff; try the OCZ forums, there were some topics on it.

I guess you need a 4 KiB block size (isn't that the default?).

You are correct about the noop elevator, discard, noatime (you should always use it on any media) and tmpfs.

If it weren't a laptop, I would put the system on the SSD and /home on an HDD.

I've heard about Kolivas's Brain Fuck Scheduler; they say it helps a lot on the desktop.

Also, 1 GB is not enough for KDE4 - well, not for any full Linux DE actually. Maybe Fluxbox or Enlightenment?

----------

## Havin_it

Woof, this partition spec lark is really baking my noodle. I guess I'd better tackle this first, 'cause if it means wiping and reformatting the whole drive, everything else will take a back seat.

Here's the current information from fdisk:

```
[root@myhost ~]# fdisk -l /dev/sda

Disk /dev/sda: 96.0 GB, 96029466624 bytes

255 heads, 63 sectors/track, 11674 cylinders, total 187557552 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x0003f050

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1            2048    25884671    12941312   83  Linux

/dev/sda2   *    25884672    60944383    17529856    7  HPFS/NTFS/exFAT

/dev/sda3        60944384   183814143    61434880   83  Linux

/dev/sda4       183814144   187555839     1870848   82  Linux swap / Solaris
```
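A quick arithmetic check of those start sectors against the 2048-sector (1 MiB at 512 bytes/sector) boundary:

```shell
#!/bin/sh
# Verify each partition start sector from the fdisk output above
# is a multiple of 2048 (i.e. 1 MiB-aligned).
for start in 2048 25884672 60944384 183814144; do
    if [ $(( start % 2048 )) -eq 0 ]; then
        echo "$start: aligned"
    else
        echo "$start: NOT aligned"
    fi
done
```

All four start sectors are exact multiples of 2048, so if I've understood the advice correctly, the partitions themselves are already 1 MiB-aligned.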

In this AnandTech review of my drive, there is this remark:

 *Quote:*   

> Remember that NAND is written to at the page level (4KB), but erased at the block level (512 pages).

 

I'm not sure if this is a drive-specific remark though, so I'm going to email Kingston tech support and ask them for the erase-block size.

----------

## Hypnos

IMHO, only TRIM and sector alignment matter -- the first for preventing write amplification, and the latter for performance.

With new SSDs you no longer have to worry about lifetime.  If a sector can be rewritten 1 million times, in heavy usage the drive will still last over 20 years!  

What is your SSD rated for?

----------

## Havin_it

One more upcoming issue while I wait for Kingston's reply.

My / partition is currently ext3, but after some reading I want to make it ext4. Supposedly the big gain here (apart from TRIM) will be extents, which some people are claiming as a significant read-speed boost. However, I gather that direct migration will mean only new files are handled as extents, and this sounds messy generally. Therefore the strategy I want seems to be to copy the files from the partition and back, so they're created anew on a newly-formatted ext4 partition.

Now I can't use normal copying, rsync etc because the backup drive is NTFS, so I guess I would need to tarball the whole partition contents, then extract back onto the new partition. However, since I'm not a tar maven, I could use some validation that this is sound practice in theory. I know tar keeps info like permissions, so I assume all files from the ext3 will be re-created correctly when extracted onto ext4. Is this correct? And are there any particular flags during archiving/extracting that are needed to preserve everything accurately?

EDIT: adding this for my own reference, and peer-review  :Wink: 

```
# to archive (permissions and ownership are stored in the archive)
cd /mnt/gentoo
tar -cvf /path/to/backup/file.tar .

# to extract (run as root; -p restores permissions exactly,
# --numeric-owner avoids UID/GID remapping via /etc/passwd)
tar -xvpf /path/to/backup/file.tar --numeric-owner -C /mnt/gentoo/
```

Please do speak up if this misses out anything that'd be important when dealing with the root filesystem.

Last edited by Havin_it on Wed Jul 27, 2011 1:39 pm; edited 2 times in total

----------

## Havin_it

 *Hypnos wrote:*   

> IMHO, only TRIM and sector alignment matter -- the first for preventing write amplification, and the latter for performance.
> 
> With new SSDs you no longer have to worry about lifetime.  If a sector can be rewritten 1 million times, in heavy usage the drive will still last over 20 years!  
> 
> What is your SSD rated for?

 

MTBF = 1,000,000 hrs. Here's the whole datasheet (I have the 96GB version)

http://www.kingston.com/ukroot/ssd/vplus100.asp

----------

## Hypnos

 *Havin_it wrote:*   

> MTBF = 1,000,000 hrs. Here's the whole datasheet (I have the 96GB version)
> 
> http://www.kingston.com/ukroot/ssd/vplus100.asp

 

That's 114 years, but I'm not sure MTBF is what you want.  The issue is not component failure, but wear.

Let's say that the write endurance is 1 million cycles, and you will be writing to it continuously at the maximum spec'd rate of 180MB/s.  For a 96GB disk, that's 17 years before you wear it out.

Unless you want to pass the drive on to your children, I don't think it's worth your time and inconvenience to concern yourself with extending its life.

----------

## dmitryilyin

https://wiki.archlinux.org/index.php/Solid_State_Drives

http://lwn.net/Articles/428584/

----------

## Havin_it

We have a reply:

 *Quote:*   

> Page size is 8 kilobytes, block is 128 pages (size 1MB). 

 

So this should mean I'm already good, since I used Alignment = MiB for each operation when creating the partition table and each individual partition. Right?

----------

## aderesch

 *Hypnos wrote:*   

> Let's say that the write endurance is 1 million cycles, and you will be writing to it continuously at the maximum spec'd rate of 180MB/s.  For a 96GB disk, that's 17 years before you wear it out.

 

That's way off. MLC is specified for 10000 cycles (at least I haven't seen any other values yet), not 1 million.

That said, I agree with you about not worrying too much. I don't know about this particular drive, but both my SSDs (one SuperTalent, one Samsung) report a remaining lifetime estimate via SMART. The SuperTalent one even gives min/avg/max erase counts. Extrapolating from these values, with my current usage pattern expected lifetime is >20 years, which I consider good enough. This is with aligned partitions, trim enabled, most large files on other partitions, world updates usually once a week, kernel updates every release.

ad

----------

## Hypnos

 *Havin_it wrote:*   

>  *Quote:*   Page size is 8 kilobytes, block is 128 pages (size 1MB).  So this should mean I'm already good, since I used Alignment = MiB for each operation when creating the partition table and each individual partition. Right?

 

I believe so.  When you create the filesystems you will want to set the stride and stripe-width so that this number times the block size is 1MiB.  For example:

```
mke2fs -t ext4 -E stride=256,stripe-width=256 -b 4096 -O dir_index,extent,has_journal /dev/sdaX
```

If you know the exact erase block size you can make this width smaller.

 *aderesch wrote:*   

>  *Hypnos wrote:*   Let's say that the write endurance is 1 million cycles, and you will be writing to it continuously at the maximum spec'd rate of 180MB/s.  For a 96GB disk, that's 17 years before you wear it out. 
> 
> That's way off. MLC is specified for 10000 cycles (at least I haven't seen any other values yet), not 1 million.

 

You're right -- everything I see on the web indicates that 1M is for SLCs, and consumer MLCs are ~10K. I stand corrected! (wikipedia)

Then, assuming optimal wear leveling and no write amplification, for a 96GB drive to last 5 years it can tolerate 525GB/day of writes.  Relaxing those assumptions, probably 100GB/day would be fine ...
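For the record, here's the back-of-the-envelope arithmetic (integer approximation, ignoring write amplification and wear-leveling overhead):

```shell
#!/bin/sh
# 96 GB drive, 10000 erase cycles, 5-year target (~1825 days):
# total writable volume = capacity * cycles
echo "total writable: $(( 96 * 10000 )) GB"
# daily write budget over 5 years
echo "per day over 5 years: $(( 96 * 10000 / 1825 )) GB"
```

That comes out at roughly 526 GB/day, matching the ~525 figure above.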

----------

## Havin_it

 *Hypnos wrote:*   

>  *Havin_it wrote:*    *Quote:*   Page size is 8 kilobytes, block is 128 pages (size 1MB).  So this should mean I'm already good, since I used Alignment = MiB for each operation when creating the partition table and each individual partition. Right? 
> 
> I believe so.  When you create the filesystems you will want to set the stride and stripe-width so that this number times the block size is 1MiB.  For example:
> 
> ```
> ...

 

Couple of queries about this before I go ahead:

1) I was looking at the mke2fs manpage, and the stride and stripe-width options are described in relation to creating a RAID array. Do they serve any purpose if RAID is not involved?

2) If yes, what is the optimum combination of values given the page and  erase-block sizes above?

3) If no, is it still  a good idea to specify -b? What is the best value - bigger or smaller? (I see from the manpage only 1024, 2048 and 4096 are acceptable values.)

----------

## Hypnos

 *Havin_it wrote:*   

> 1) I was looking at the mke2fs manpage, and the stride and stripe-width options are described in relation to creating a RAID array. Do they serve any purpose if RAID is not involved?

 

While these features were originally for LVM/RAID setups, ext4 doesn't know what kind of hardware it's sitting on, so you can use them wherever or for whatever you like.  You can read about it from the man himself.

 *Quote:*   

> 2) If yes, what is the optimum combination of values given the page and  erase-block sizes above?
> 
> 3) If no, is it still  a good idea to specify -b? What is the best value - bigger or smaller? (I see from the manpage only 1024, 2048 and 4096 are acceptable values.)

 

If your erase block is 1MiB, then you want stripe-width*(block size) and stride*(block size) to be 1MiB.  Given that the maximum block size is 4096, this means that stripe-width and stride must be 256.  This should achieve the desired alignment of the filesystem structure on the disk, assuming your partitions were properly aligned.
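Sanity-checking that arithmetic, assuming the 1MiB erase block and the 4096-byte filesystem block from the mke2fs example earlier:

```shell
#!/bin/sh
# stride (and stripe-width, with no RAID striping) = erase block / fs block
erase_block=$(( 1024 * 1024 ))   # 1 MiB in bytes
fs_block=4096                    # maximum ext4 block size on x86
echo "stride: $(( erase_block / fs_block ))"
```

Which gives 256, the value used in the mke2fs command above.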

----------

## Havin_it

Thanks for that - I just wasn't sure from your phrasing whether the e.g. was a concrete recommendation.

Now... Eek! I tarred the partition contents, formatted to ext4 per exactly the command you gave, replaced the contents (per the tar commands I listed above, but also with -z option), changed the fstab to ext4, rebooted...

...but the partition wouldn't mount read/write! What did I do?

Nothing helpful in dmesg -- it only mentions the initial readonly mount during boot -- and no other logs were written to (obviously). And the boot process is so chuffing fast now that there wasn't time to read any of the output in the console   :Embarassed: 

Wat do?

----------

## Hypnos

First thing to do is boot up with a CD/DVD and see if you can mount and use the partition.  If so, then it's a problem with your bootloader setup.

Does your Windows partition behave as expected?

----------

## Havin_it

Mounting the partition under my Arch pen-drive install (which I've used for all the other operations so far) works fine, and Windows (on /dev/sda2) boots fine too. Swap (sda4) also mounted successfully during the Gentoo boot.

Could it be a factor that I didn't destroy the ext3 partition before reformatting it?

Or maybe something with my kernel config for ext4? I think it had everything except extended attributes enabled - could that be it?

----------

## Havin_it

OK, I tried booting again and caught the error line. Apparently I need huge file support in the kernel in order to mount R/W. Recompiling now...

Note I don't have any 2TB+ files (and not that likely to, on a ~12GB partition   :Laughing:  ) - is supporting them perhaps something that would save some resources if turned off? Can it be turned off in ext4?

----------

## Hypnos

 *Havin_it wrote:*   

> OK, I tried booting again and caught the error line. Apparently I need huge file support in the kernel in order to mount R/W. Recompiling now...

 

Yeah, I was about to post that.

 *Quote:*   

> Note I don't have any 2TB+ files (and not that likely to, on a ~12GB partition   ) - is supporting them perhaps something that would save some resources if turned off? Can it be turned off in ext4?

 

It can be turned off with tune2fs.  I'd be surprised if there were any appreciable performance difference, since it is turned on by default.

----------

## Havin_it

Whew - and we're back. Seems odd for this to be default (especially if it isn't so in the kernel) - surely not many folk are working with 2TB+ files? Or am I sheltered?

Anyway, panic over. I take the point you made, Hypnos, that beyond the obvious things there are questionable benefits to going after ever-more-obscure tweaks. I do have a life to get on with ... then again, you never know - another day's work now might save a couple of days over the life of the machine. OK, perhaps a long-shot, but I like to feel I've made a good fist of things before I get bored and move on   :Wink: 

Now, my last exercise for anyone still interested is a run-through of the remaining flags I haven't explored yet: I'd welcome any thoughts to add to my understanding.

discard

As you say, it's a no-brainer to use TRIM, especially for this drive according to the Anandtech review, if it mitigates the "aggressive" builtin garbage-collection. However, is it better to do this with discard mount-option, or to run scheduled fstrim commands? If discard effectively means more work whenever something's deleted, perhaps it's better to just run fstrim when idle.

nodiratime

This is seldom mentioned in tweak-guides that mention noatime. I'm suspicious: why is this? Can it have problematic consequences?

nobh

OK I'll be honest and say I don't understand at all what this does, but it seems to come up a lot.

data=writeback

Sounds risky. Worth it?

commit=<bignum>

Ditto. The suggestion of 300s above seems extreme: 5 minutes between commits?

nobarrier

I have a good battery and KDE is set to not take any chances with powering down when low. Does that leave any reason I shouldn't use this?

max_batch_time=n

This sounds promising and I actually understand it, but wouldn't have a clue how to arrive at a good value.

A good set of answers to this lot, all in one place, could be a boon to the whole community   :Wink: 
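EDIT: for the fstrim route, I guess a weekly cron script would look something like this (untested sketch; the path follows the usual cron.weekly convention and assumes fstrim from util-linux with a kernel/filesystem supporting FITRIM):

```
#!/bin/sh
# /etc/cron.weekly/fstrim - trim free space on the root filesystem while idle
fstrim -v /
```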

----------

## Hypnos

 *Havin_it wrote:*   

> Whew - and we're back. Seems odd for this to be default (especially if it isn't so in the kernel) - surely not many folk are working with 2TB+ files? Or am I sheltered?

 

If you read the option description, it's not just for files over 2TB (not uncommon for corporate or scientific databases), but block devices over 2TB, which are quite common these days.

My personal comments on mount options are below.  Note that you can explore what these options mean in greater depth by reading the kernel source documentation in:

```
/usr/src/linux/Documentation/filesystems/ext4.txt
```

 *Quote:*   

> [*]discard
> 
> As you say, it's a no-brainer to use TRIM, especially for this drive according to the Anandtech review, if it mitigates the "aggressive" builtin garbage-collection. However, is it better to do this with discard mount-option, or to run scheduled fstrim commands? If discard effectively means more work whenever something's deleted, perhaps it's better to just run fstrim when idle.

 

If I am not mistaken, the built-in garbage collection is filesystem dependent, usually requiring Windows NTFS.  So for any runtime zeroing of unused sectors under Linux, you must use TRIM.  If the controller on your SSD is at all sane, it should be nearly as efficient to use the mount option as it is to run fstrim on schedule.

BTW, you can set this option in the filesystem using tune2fs, so you don't have to specify it as a mount option.
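If I read the tune2fs manpage right, that would be something like the following (device name illustrative, as before):

```
# Set 'discard' as a default mount option in the ext4 superblock
tune2fs -o discard /dev/sdaX

# Confirm: look for 'discard' in the 'Default mount options:' line
dumpe2fs -h /dev/sdaX | grep 'Default mount options'
```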

 *Quote:*   

> [*]nodiratime
> 
> This is seldom mentioned in tweak-guides that mention noatime. I'm suspicious: why is this? Can it have problematic consequences?

 

This is implied by "noatime".  If you do want access time logged for all files except directories, you can specify "nodiratime" .

 *Quote:*   

> [*]nobh
> 
> OK I'll be honest and say I don't understand at all what this does, but it seems to come up a lot.

 

See below re. writeback.

 *Quote:*   

> [*]data=writeback
> 
> Sounds risky. Worth it?

 

Since it's not the default, I defer to the devs unless I see benchmarks showing that it blows away ordered mode.  Otherwise it's not worth the hassle to me.

Of course I keep full system backups when I'm at home (as any rational human should), but I'm often on the road with my laptop, so data integrity is extra important.

 *Quote:*   

> [*]commit=<bignum>
> 
> Ditto. The suggestion of 300s above seems extreme: 5 minutes between commits?

 

Why would this extend SSD life -- all writes would get synced eventually, no?  Also, there are other knobs to control this behavior, such as /proc/sys/vm/dirty_writeback_centisecs, which might be useful for small power savings on a laptop.

 *Quote:*   

> [*]nobarrier
> 
> I have a good battery and KDE is set to not take any chances with powering down when low. Does that leave any reason I shouldn't use this?

 

See above regarding writeback.

 *Quote:*   

> [*]max_batch_time=n
> 
> This sounds promising and I actually understand it, but wouldn't have a clue how to arrive at a good value.

 

Me neither, so I stick with the default.

----------

## asturm

I still haven't come to a definite conclusion whether TRIM is necessary for SLC SSDs too. Does anyone have the knowledge?

----------

## tnt

@Havin_it

Sorry if I'm rude, but you shouldn't get Kingston at all, at least not these days.

Kingston used to have a couple of good SSDs that were rebranded Intels, but everything else was not so good.

You should get a SandForce 2xxx-based SSD, or at least an Intel 320 or 510; even a SandForce 1xxx would be a better option.

Anyway, a couple of observations:

- The garbage collector should be OS-independent.

- TRIM should not necessarily slow down operations, as the actual trimming of cells by the SSD controller is usually postponed. TRIM is very good compared to manual cleanup scripts because the SSD controller gets more free cells with which to combine writes, as quickly as it can.

- data=writeback doesn't go well with the discard option, at least that was the case last time I checked:

http://comments.gmane.org/gmane.comp.file-systems.ext4/18431

- commit=<bignum>: I've monitored my SSD's host writes and noticed a big difference after raising commit for ext4 from the default 5 to 100 seconds. Setting 900 instead of 100 seconds gave no big improvement, at least not for my desktop workload.

- Nowadays, CFQ should be aware of SSD devices and be no worse than deadline or noop.

- 10000 write cycles is outdated info from the 50nm NAND era; newer NAND has 5000 or even 3000 write cycles. Then again, SSD controllers are getting better and better at wear leveling.

- You could use the --preserve-permissions flag with tar.

- It's better to use Windows 7 instead of XP, as Win 7 will TRIM unused sectors/blocks and align its writes better.

----------

## Hypnos

 *genstorm wrote:*   

> I still haven't come to a definite conclusion whether TRIM is necessary for SLC SSDs too. Does anyone have the knowledge?

 

TRIM reduces write amplification.  If by "necessary" you mean extend the life of the SSD, then it depends on your usage.  If by "necessary" you mean improving write performance (esp. when all the blocks on the SSD have been dirtied), then it is.

----------

## Hypnos

 *tnt wrote:*   

> - garbage collector should be OS independent.

 

How does the collector know which blocks are stale, if it doesn't know how the filesystem is structured?  Isn't that the point of TRIM, that the OS can tell the disk which blocks are stale?

Maybe you are referring to defragmentation, which the drive can do in a way transparent to the OS.  (a link)

----------

## aderesch

 *Hypnos wrote:*   

>  *tnt wrote:*   - garbage collector should be OS independent. 
> 
> How does the collector know which blocks are stale, if it doesn't know how the filesystem is structured?  Isn't that the point of TRIM, that the OS can tell the disk which blocks are stale?
> 
> Maybe you are referring to defragmentation, which the drive can do in a way transparent to the OS.  (a link)

 

And as witnessed by your link the manufacturers call this garbage collection. Yes, this does not have the same information as with TRIM, and so cannot have the same effect. All it can do is keep write performance up at the cost of increased write amplification, while TRIM improves both.

(I bought a Samsung SSD a few days ago, and according to the information available to me this is exactly what it does.)

ad

----------

## Hypnos

 *aderesch wrote:*   

> (I bought a Samsung SSD a few days ago, and according to the information available to me this is exactly what it does.)

 

No TRIM, nor even the NTFS garbage collection?  My Lenovo OEM Samsung has the latter.

----------

## aderesch

 *Hypnos wrote:*   

> No TRIM, nor even the NTFS garbage collection?  My Lenovo OEM Samsung has the latter.

 

It has TRIM support and automatic garbage collection. No mention of NTFS anywhere.

Where does the information on NTFS specificity come from? I was unable to find anything even semi-official on this -- only vague guesses by users.

ad

----------

## Hypnos

 *aderesch wrote:*   

> Where does the information on NTFS specificity come from? I was unable to find anything even semi-official on this -- only vague guesses by users.

 

Old Samsung SSD drives (and drives with Samsung controllers) were rumored to have NTFS-only TRIM-like garbage collection.  But I've only ever seen it mentioned in reviews (an example), not in official docs like the datasheets.  Of course, the datasheets don't mention TRIM, either.

Here is a review of an OCZ RAID with a SandForce controller showing its NTFS-specific garbage collection in action.  So it does exist somewhere ...

----------

## Havin_it

 *tnt wrote:*   

> @Havin_it
> 
> Sorry if I'm rude, but you shouldn't get Kingston at all, at least not these days.
> 
> Kingston used to have a couple of good SSDs that were rebranded Intels, but everything else was not so good.
> ...

 

Haha, no rudeness inferred, but it's a little late now  :Wink:  my choice was purely financial really, because I got about 60% off the price. I didn't do any comparative research, and didn't expect Kingston to be King of the hill anyway. Maybe next time I'll treat myself (if I have a better income)...

 *Quote:*   

> 
> 
> anyways, couple of observations:
> 
> - The garbage collector should be OS-independent.
> ...

 

That was the impression that I got from their literature, but don't ask me how it works - good question.

 *Quote:*   

> 
> 
> - TRIM should not necessarily slow down operations, as the actual trimming of cells by the SSD controller is usually postponed. TRIM is very good compared to manual cleanup scripts because the SSD controller gets more free cells with which to combine writes, as quickly as it can.
> 
> - data=writeback doesn't go well with the discard option, at least that was the case last time I checked:
> ...

 

I had talked myself out of using writeback as it sounds a bit too risky, and I'm using discard now. Interestingly though, even after a fairly short period (and with discard enabled), using "fstrim -v /" reports around 4GB trimmed. Can it really be so high?

 *Quote:*   

> 
> 
> - commit=<bignum>: I've monitored my SSD's host writes and noticed a big difference after raising commit for ext4 from the default 5 to 100 seconds. Setting 900 instead of 100 seconds gave no big improvement, at least not for my desktop workload.
> 
> 

 

For some reason, without me specifying anything, it's been automatically mounting with commit=600 (or so dmesg claims). What's caused that?

 *Quote:*   

> 
> 
> - Nowadays, CFQ should be aware of SSD devices and be no worse than deadline or noop.
> 
> 

 

Well, even after specifying noop on the kernel commandline, what I get in the sysfs node (forget the path right now) is "[noop] deadline" - what does this mean?

 *Quote:*   

> 
> 
> - 10000 write cycles is outdated info from the 50nm NAND era; newer NAND has 5000 or even 3000 write cycles. Then again, SSD controllers are getting better and better at wear leveling.
> 
> - You could use the --preserve-permissions flag with tar.
> ...

 

Sounds like that would have been a good idea, but the perms seem to have been kept anyway. Not that I did any exhaustive test of this, but nothing's broken yet   :Rolling Eyes: 

 *Quote:*   

> 
> 
> - It's better to use Windows 7 instead of XP, as Win 7 will TRIM unused sectors/blocks and align its writes better.

 

Not my choice to make unfortunately - certainly no funds left now to buy Win7. Anyway if I installed Win7 I'd have no space left for Gentoo - I only keep the darn XP install around in case of some theoretical "need Windows" situation, which has only been BIOS upgrades so far and I doubt the NC10 will see any more of those!

----------

## aderesch

 *Havin_it wrote:*   

> "[noop] deadline" - what does this mean?

 

That IO schedulers "noop" and "deadline" are available with "noop" currently active. If you want to see "cfq" added to that list you need to compile it either into the kernel or as module. If you have already done the latter you need to "modprobe cfq-iosched".
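On Gentoo you can also select the scheduler per disk at boot, without kernel parameters, via a local.d script (a sketch; the device name is illustrative):

```
#!/bin/sh
# /etc/local.d/iosched.start - select the noop elevator for the SSD at boot
echo noop > /sys/block/sda/queue/scheduler

# Verify: the active scheduler is the one shown in brackets
cat /sys/block/sda/queue/scheduler
```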

ad

----------

## whiteghost

anyone know if a certain amount of free space is required for optimal performance?

----------

## 1clue

Sorry, I've just scanned most of the thread.

I notice you said you have only 1G RAM, and then I didn't see anyone else mention it.

If you can afford an SSD then you should be able to afford more RAM.

If you want to avoid writes on your SSD, then jack your RAM up as much as you can, within reason.  I occasionally get 6G use on my box.  I make good use of tmpfs in the places you mentioned, and a couple more which relate to the software I use.  Any directory where temp files are written to frequently is a good candidate, but it seems you know that.

The other half of tmpfs is that it uses the swap mechanism to work.  Which means that, if you have room, the data is written to RAM rather than the disk.  It doesn't even go through the interface, which not only dramatically speeds up tasks like compiling but also prevents the data from ever being written.

The smaller your RAM the more likely the swap mechanism will write to the disk.  I love the idea of swap, but I hate having it actually use the drive.  It's why I use 12g RAM.  The one time I noticed swap using the entire disk is the day I went out and bought the second 6g.

Right now, if I leave the system up for a month or so I see approximately 11g usage.  Half or more of that is usually disk cache, which is not something you really control.  It's just Linux keeping what you used already in hopes you might find it useful later too.  Clarification:  Sometimes I approach 6g of swap+real usage, the rest is disk cache.  If I run several VMs I can exceed 6g real use easily.

Realistically, if you don't use any virtual machines (kvm, vmware, etc) then you could probably get by with 4g without hitting swap too much.  I recommend more even so.  I don't know what hardware you're on but right now RAM is about as cheap as I have ever seen it.

----------

## 1clue

 *whiteghost wrote:*   

> anyone know if a certain amount of free space is required for optimal performance?

 

Yes and no.

On a spinning disk, it matters because of fragmentation issues.  Conventional wisdom dictates that 10% free is necessary to prevent serious fragmentation.  In my experience, if you have a partition which sees continuous reading and writing and changes to file sizes, you can detect performance problems if you have less than 25% free.

On an ssd you don't have a performance hit exactly for fragmentation, because there is essentially zero seek time.  You CAN suffer from fragmentation though, it's just that it takes less time to shuffle things around.

I don't know the specifics of the disk logic, but if you have an app like a database there can be a file with internal fragmentation.  If your disk fills up and either the app or the OS tries to defragment, then you still need to read the file, defragment it and then write it back somewhere.  The less space there is available the smaller the read/write cycles need to be, and the less your buffers can help you.

----------

## whiteghost

 *whiteghost wrote:*   

> anyone know if a certain amount of free space is required for optimal performance?

 

http://www.ocztechnologyforum.com/forum/showthread.php?97693-is-certain-amount-of-free-space-required-for-optimal-performance

For optimum performance I would not exceed 70 percent capacity.

You can go higher, but just to ensure drive longevity: the more free space the better.

During our beta testing I gradually filled a V3, and performance did not seriously degrade till I reached over 90 percent full.

I would never recommend that, though.

Now, over-provisioning is a different story; the new SandForce controller does not seem to need any extra to help keep performance up like the V2s did.

I usually have lots of free space in my RAID volumes, so I add a little extra just because I'm used to doing it.

----------

## oegat

I'll reactivate this thread since it seems to be the most recent on the matter. Since both kernel features and ssd hardware change rapidly I'd like to check some issues I do not have any recent info on.

1. What to move away from ssd?

I have a 120Gb Intel 320 ssd as the boot/system disk in my (xen-enabled) desktop system. So far I have a small boot partition =sda1 and an 18Gb root =sda2 for dom0, with /var, /usr/src and /usr/portage relocated to a mechanical disk in order to reduce writes. But I do not know how much it matters to relocate those filesystems; I'm thinking of moving /var back to the ssd root fs for simpler management. 

Given what I read in this thread, /var writes on a desktop system are probably not a big deal, but how about portage and /usr/src? Assuming PORTAGE_TMPDIR is set to a ramdisk, how detrimental is an 'emerge --sync' once or twice a week, or a 'make' or 'make clean' in /usr/src?

(/home is on a mechanical raid1 and will stay there).

2. Alignment and LVM2

I believe that I have gotten partition alignment reasonably right so far (please tell me if I'm wrong about this!):

```
 # fdisk -l /dev/sda

Disk /dev/sda: 120.0 GB, 120034123776 bytes

32 heads, 32 sectors/track, 228946 cylinders, total 234441648 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x********

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *        2048      411647      204800   83  Linux

/dev/sda2          411648    38160383    18874368   83  Linux

```

Let's say I make another partition out of the rest of the ssd to be an LVM2 volume group; can I just start at the next sector and have alignment retained? 

(provided that I create megabyte-aligned logical volumes, of course).
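From what I've read, recent LVM2 aligns the PV data area to 1 MiB by default, but it can also be forced explicitly; a sketch for question 2, with hypothetical device and volume names:

```
# Create the PV with its data area aligned to 1 MiB
pvcreate --dataalignment 1m /dev/sda3
vgcreate vg0 /dev/sda3

# Logical volumes are allocated in whole extents (4 MiB by default),
# so they stay MiB-aligned automatically
lvcreate -L 20G -n xen-win7 vg0
```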

3. TRIM, LVM2 and XEN

My third question regards TRIM and LVM2. I plan to run both HVM Windows 7 and PV gentoo's under xen. These VMs' filesystems will preferably live on dedicated LVM2 volumes on the ssd. 

Would ext4 TRIM work as expected from a PV linux domU, even if the domU does not talk to the ssd directly but sees only the logical volume? How about a HVM Win7 vm on an lvm2 volume, will the underlying ssd benefit from Win7's trim-like capabilities?

These are things that puzzle me, and I haven't found much recent info (most of what I find is a couple of years old). I'm not particularly read up on ssd technology; I assume educated guesses can be made, but I prefer to ask those who may know. Thanks!

----------

