# Excessive trim [SOLVED]

## Tony0945

I have a Crucial Mx200 250GB SSD. Following recommendations I have a cron job that runs trim twice a day:

```
gentoo ~ # crontab -l

# DO NOT EDIT THIS FILE - edit the master and reinstall.

# (/tmp/crontab.XXXXgxqYvk installed on Wed Dec 14 12:12:45 2016)

# (Cron version V5.0 -- $Id: crontab.c,v 1.12 2004/01/23 18:56:42 vixie Exp $)

15 1,13 * * * /sbin/fstrim -v / | logger

```

It seems like excessive bytes are trimmed.

```
Dec 14 13:15:01 gentoo root: /: 14 GiB (14957654016 bytes) trimmed

Dec 15 13:15:01 gentoo root: /: 3.4 GiB (3610587136 bytes) trimmed

Dec 16 01:15:01 gentoo root: /: 183.9 GiB (197481418752 bytes) trimmed

Dec 16 13:15:01 gentoo root: /: 2.5 GiB (2718674944 bytes) trimmed

Dec 17 13:15:01 gentoo root: /: 2.5 GiB (2652045312 bytes) trimmed

Dec 18 01:15:01 gentoo root: /: 182.7 GiB (196180692992 bytes) trimmed

Dec 18 13:15:01 gentoo root: /: 4.8 GiB (5117321216 bytes) trimmed

Dec 19 13:15:01 gentoo root: /: 3 GiB (3204509696 bytes) trimmed

Dec 20 13:15:01 gentoo root: /: 3.1 GiB (3334537216 bytes) trimmed

Dec 21 13:15:01 gentoo root: /: 3.5 GiB (3789377536 bytes) trimmed

Dec 22 13:15:01 gentoo root: /: 2.8 GiB (2949726208 bytes) trimmed

Dec 23 01:15:01 gentoo root: /: 2.8 GiB (3028803584 bytes) trimmed

Dec 23 13:15:01 gentoo root: /: 3.5 GiB (3782463488 bytes) trimmed

Dec 24 13:15:01 gentoo root: /: 3.1 GiB (3303751680 bytes) trimmed

Dec 25 13:15:01 gentoo root: /: 4.9 GiB (5187260416 bytes) trimmed

Dec 26 01:15:01 gentoo root: /: 1.4 GiB (1467097088 bytes) trimmed

Dec 26 13:15:01 gentoo root: /: 2.9 GiB (3043270656 bytes) trimmed

Dec 27 13:15:01 gentoo root: /: 182.5 GiB (195903832064 bytes) trimmed

Dec 28 13:15:01 gentoo root: /: 182.4 GiB (195819532288 bytes) trimmed

Dec 29 01:15:01 gentoo root: /: 1.1 GiB (1155567616 bytes) trimmed

Dec 29 13:15:01 gentoo root: /: 2.8 GiB (2942869504 bytes) trimmed

Dec 30 13:15:01 gentoo root: /: 3.8 GiB (4109144064 bytes) trimmed

Dec 31 13:15:01 gentoo root: /: 2.9 GiB (3105202176 bytes) trimmed

Jan  1 13:15:01 gentoo root: /: 4.4 GiB (4692619264 bytes) trimmed

Jan  2 13:15:01 gentoo root: /: 3.4 GiB (3653017600 bytes) trimmed

Jan  3 13:15:01 gentoo root: /: 8.7 GiB (9372717056 bytes) trimmed

Jan  4 13:15:01 gentoo root: /: 3.5 GiB (3713122304 bytes) trimmed

Jan  5 13:15:01 gentoo root: /: 3.4 GiB (3659714560 bytes) trimmed

Jan  6 13:15:01 gentoo root: /: 2.4 GiB (2515193856 bytes) trimmed

Jan  7 13:15:01 gentoo root: /: 2.9 GiB (3092750336 bytes) trimmed

Jan  8 01:15:01 gentoo root: /: 180.9 GiB (194243280896 bytes) trimmed

Jan  8 13:15:01 gentoo root: /: 4 GiB (4254441472 bytes) trimmed

Jan  9 13:15:01 gentoo root: /: 2.5 GiB (2669527040 bytes) trimmed

Jan 10 13:15:01 gentoo root: /: 180 GiB (193207033856 bytes) trimmed

Jan 11 13:15:01 gentoo root: /: 2.6 GiB (2791075840 bytes) trimmed

Jan 12 13:15:01 gentoo root: /: 2.9 GiB (3109646336 bytes) trimmed

Jan 13 13:15:01 gentoo root: /: 2.8 GiB (3037503488 bytes) trimmed

Jan 14 01:15:01 gentoo root: /: 178.8 GiB (191984988160 bytes) trimmed

Jan 14 13:15:01 gentoo root: /: 2.6 GiB (2823249920 bytes) trimmed

Jan 15 13:15:01 gentoo root: /: 2.7 GiB (2889191424 bytes) trimmed

Jan 16 13:15:01 gentoo root: /: 4.1 GiB (4387782656 bytes) trimmed

Jan 17 13:15:01 gentoo root: /: 2.6 GiB (2744942592 bytes) trimmed

Jan 18 01:15:01 gentoo root: /: 2.2 GiB (2320166912 bytes) trimmed

Jan 18 13:15:01 gentoo root: /: 3.7 GiB (3970105344 bytes) trimmed

Jan 19 13:15:01 gentoo root: /: 3 GiB (3265916928 bytes) trimmed

Jan 20 13:15:01 gentoo root: /: 2.6 GiB (2753716224 bytes) trimmed

Jan 21 13:15:01 gentoo root: /: 4 GiB (4289691648 bytes) trimmed

Jan 22 01:15:01 gentoo root: /: 176.2 GiB (189169852416 bytes) trimmed

Jan 22 13:15:01 gentoo root: /: 3.4 GiB (3588984832 bytes) trimmed

Jan 23 13:15:01 gentoo root: /: 5.2 GiB (5533437952 bytes) trimmed

```

Of particular concern are those frequent 170+ GiB trims occurring roughly once a week (but see Dec 27 & Dec 28). At this rate my SSD may not last a year.

This machine rarely has user logons. Its primary purpose is as an Apache web server, http-replicator server and minidlna server. The minidlna data files are on a separate 5TB hard drive. Is the problem portage and http-replicator? I thought portage data was synced, not copied wholesale. Perhaps I should move /usr/portage to the hard drive and symlink it.
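The move-and-symlink idea can be sketched as follows. This demo uses throwaway temporary paths so it is safe to run anywhere; in real use the source would be /usr/portage and the destination a directory on the hard drive (both path names below are placeholders, not real system paths):

```shell
# Demonstrate the move-and-symlink pattern with placeholder paths.
# In real use: src=/usr/portage, dest=/path/on/hdd/portage (hypothetical).
base=$(mktemp -d)
src="$base/portage"; dest="$base/hdd/portage"
mkdir -p "$src" "$base/hdd"
echo "ebuild data" > "$src/example"

cp -a "$src" "$dest"     # 1. copy the tree, preserving attributes
rm -rf "$src"            # 2. remove the original (only after verifying the copy)
ln -s "$dest" "$src"     # 3. symlink the old path to the new location
cat "$src/example"       # the old path still works via the symlink
```

The same effect can also be had with a bind mount or a dedicated fstab entry for /usr/portage, if any tool dislikes symlinks.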

I want to know why the SSD's sectors are recycled so much or if something is wrong with fstrim.

```
gentoo ~ # df -h |grep /dev/sda

/dev/sda2       228G   51G  165G  24% /

/dev/sda1      1022M  2.3M 1020M   1% /boot/efi
```

Last edited by Tony0945 on Wed Jan 25, 2017 1:34 am; edited 1 time in total

----------

## NeddySeagoon

Tony0945,

Have a look at smartctl -a ...

```

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

171 Program_Fail_Count      0x0032   100   100   000    Old_age   Always       -       0

172 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always       -       0

173 Ave_Block-Erase_Count   0x0032   100   100   000    Old_age   Always       -       1

202 Percent_Lifetime_Used   0x0031   100   100   000    Pre-fail  Offline      -       0

206 Write_Error_Rate        0x000e   100   100   000    Old_age   Always       -       0

```

----------

## frostschutz

 *Tony0945 wrote:*   

> Following recommendations I have a cron job that runs trim twice a day:

 

If you have decent amounts of free space and little write activity, it's completely fine to do it weekly or even monthly.

I've elaborated my thoughts here: SSD: how often should I do fstrim?

 *Tony0945 wrote:*   

> 
> 
> It seems like excessive bytes are trimed.
> 
> ```
> ...

 

The amount of data trimmed is decided by the filesystem itself and it's difficult to make any deductions from this value. It doesn't mean anything. 

Some filesystems try to remember what they already trimmed and won't trim it again. Other filesystems always trim all the free space. Some trim all the free space once after it was mounted (in practice that means usually once per boot). So you get all sorts of different, confusing results. This is normal.

 *Tony0945 wrote:*   

> Of particular concern are those frequent 170+ GiB trims occurring roughly once a week (but see Dec 27 & Dec 28). At this rate my SSD may not last a year.

 

As long as it's not larger than free space you should be fine. Do you have 170G free space in that filesystem? Then it's not a problem.

In the end only the SSD itself knows what's trimmed and what not. Trimming already trimmed does not cause any harm whatsoever.

----------

## Tony0945

smartctl -a /dev/sda

```
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  1 Raw_Read_Error_Rate     0x002f   100   100   000    Pre-fail  Always       -       0

  5 Reallocate_NAND_Blk_Cnt 0x0032   100   100   010    Old_age   Always       -       0

  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       1330

 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       9

171 Program_Fail_Count      0x0032   100   100   000    Old_age   Always       -       0

172 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always       -       0

173 Ave_Block-Erase_Count   0x0032   100   100   000    Old_age   Always       -       1

174 Unexpect_Power_Loss_Ct  0x0032   100   100   000    Old_age   Always       -       5

180 Unused_Reserve_NAND_Blk 0x0033   000   000   000    Pre-fail  Always       -       2592

183 SATA_Interfac_Downshift 0x0032   100   100   000    Old_age   Always       -       77

184 Error_Correction_Count  0x0032   100   100   000    Old_age   Always       -       0

187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0

194 Temperature_Celsius     0x0022   066   063   000    Old_age   Always       -       34 (Min/Max 24/37)

196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0

197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0

198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0

199 UDMA_CRC_Error_Count    0x0032   100   100   000    Old_age   Always       -       0

202 Percent_Lifetime_Used   0x0030   100   100   001    Old_age   Offline      -       0

206 Write_Error_Rate        0x000e   100   100   000    Old_age   Always       -       0

210 Success_RAIN_Recov_Cnt  0x0032   100   100   000    Old_age   Always       -       0

246 Total_Host_Sector_Write 0x0032   100   100   000    Old_age   Always       -       264370752

247 Host_Program_Page_Count 0x0032   100   100   000    Old_age   Always       -       8261797

248 Bckgnd_Program_Page_Cnt 0x0032   100   100   000    Old_age   Always       -       6962275

```

----------

## Tony0945

 *frostschutz wrote:*   

> I've elaborated my thoughts here: SSD: how often should I do fstrim?
> 
> 

   I read that and now have added discard to the fs options. If the size of trims goes down, I'll switch to weekly trimming.
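For reference, continuous discard is just a mount option; in /etc/fstab it would look something like this (device taken from the df output earlier, the other options are assumed):

```
/dev/sda2    /    ext4    defaults,noatime,discard    0 1
```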

 *frostschutz wrote:*   

> 
> 
>  *Tony0945 wrote:*   Of particular concern are those frequent 170+ GiB trims occurring roughly once a week (but see Dec 27 & Dec 28). At this rate my SSD may not last a year. 
> 
> As long as it's not larger than free space you should be fine. Do you have 170G free space in that filesystem? Then it's not a problem.

 I only have 165G free; however, 12G is in a swapfile.

 *frostschutz wrote:*   

> 
> 
> In the end only the SSD itself knows what's trimmed and what not. Trimming already trimmed does not cause any harm whatsoever.

 Doesn't it shorten life? I only have 80T written as a life spec.
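A back-of-envelope check against that 80T spec, using attribute 246 (Total_Host_Sector_Write) from the smartctl output above, and assuming it counts 512-byte sectors (the usual convention, but vendor-specific):

```shell
# Total host writes so far, assuming 512-byte sectors for attribute 246.
sectors=264370752
bytes=$((sectors * 512))
echo "$bytes bytes written"     # 135357825024, i.e. about 135 GB
# fraction of the 80 TB endurance spec used so far:
awk -v b="$bytes" 'BEGIN { printf "%.4f%% of 80 TB spec\n", 100 * b / 80e12 }'
```

Roughly 135 GB over ~1330 power-on hours is only a couple of GB of host writes per day; at that rate the 80 TB endurance spec would take decades to exhaust. Trimmed bytes are not written bytes.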

I still have the original hard drive in the system but unmounted. I shrank the root on that and created a swap partition as /dev/sdb3. I'm now using that instead of /swapfile. If it helps, I'll just delete /swapfile and reclaim the space.

----------

## frostschutz

 *Tony0945 wrote:*   

> I read that and now have added discard to the fs options. If the size of trims goes down, I'll switch to weekly trimming.

 

 :Confused: 

There's literally no relation whatsoever between the two. Well, there could be - but it's totally up to the filesystem.

Not sure why so many people worry about the sizes reported by fstrim - it really doesn't mean anything. Just ignore it.

If you believe you have a problem just because of whatever size fstrim reported, then you really don't have a problem.

 *Tony0945 wrote:*   

> Doesn't it shorten life? I only have 80T written as a life spec.

 

Uhm, no. If you write 1GB, then delete that 1GB, trim the same 1GB, trim the same 1GB again, and trim the same 1GB again, then you still only wrote 1GB. If you think that's writing 5GB, then no.

----------

## NeddySeagoon

Tony0945,

Think of trim being an instruction to the drive that it may perform an erase.

Erase is not write.  Further, erase is an expensive operation time-wise. The drive has to keep track of erased blocks, for low-level block allocation, so it won't erase already erased blocks.

Regardless of fstrim, the drive will do what it wants.  Think of fstrim as giving the drive permission to erase the trimmed blocks.

It's not forced to; it will make a note and get back to it.

discard works much the same way.  You can't tell exactly when the erase will be performed.  It's up to the drive.

Erase blocks are bigger than write blocks.  There may be advantages (to the drive) in delaying erase cycles, so that write amplification is minimised.

e.g. If an entire erase block is to be trimmed, it contains no data to be kept.

trim is a complex process in SSD firmware and they don't all get it right.

There is at least one drive in the wild that trims still used data but that's what backups are for.

----------

## Tony0945

Gentlemen:   Thank you for your input. I respect you both based on your work in these forums.

Before I mark this [SOLVED] (and I'm not so sure where I should have swap - SSD or HDD), please confirm that the wiki, https://wiki.gentoo.org/wiki/SSD, is all wet: *Quote:*   

> Rootfs
> 
> The -odiscard option on a rootfs mount should not be used. discard is the "TRIM" command that tells the SSD to do its magic. Having discard running constantly could potentially cause performance degradation on older SSDs. Modern SSDs use discard by default. Rather the following command can be used manually or be setup as a cron job (see below) to run twice a day, which should suffice for the rootfs:
> 
> root #fstrim -v /

 

----------

## Roman_Gruber

A bit offtopic:

I never used trim on my SSDs. I have 3 of those. And I swap them in regularly as system or backup drives. The file system is wiped and recreated during the backup stage.

And before that I used the SSD for ages without backup... without trim. Is it really that necessary for /?

----------

## frostschutz

@Tony0945: fstrim is what pretty much all distros use these days, just not twice a day.  :Laughing: 

As for swap, I don't have any. If you're not tight on RAM you'd only need it for hibernate. Hibernate on SSD is faster than HDD but not THAT much (hibernation image is one blob of consecutive data, that's the one thing HDD aren't horrible at). Also depends on how often you do it... something you have to try out for yourself. If you don't need hibernation that's for the best, so many things can go wrong with it.

----------

## Tony0945

No hibernation, just a server that runs 24/7. I think those big fstrims were from reboots.  Apparently on shutdown the kernel forgets what was trimmed before, and after a reboot it trims all free space. I'm surprised; it shouldn't be that hard to save a binary file with the data on shutdown and restore it on reboot. Of course, the drive could be messed with by e.g. sysrescuecd between shutdown and reboot. I only reboot when I build a new kernel, which is fairly often these days. I doubt the swapfile was ever used. The system has 8G but the APU reserves 1G for graphics. Maybe a few ebuilds.  

What about the advice to NOT use discard?  Some sources say ext4 uses it automatically now. I'm running 4.9.x kernels.
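One quick way to see whether discard is actually among the active mount options for / (rather than guessing from kernel or ext4 defaults):

```shell
# Show the active mount options for the root filesystem;
# "discard" will appear here if continuous discard is enabled.
grep ' / ' /proc/mounts
# util-linux alternative, printing only the options:
# findmnt -no OPTIONS /
```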

----------

## frostschutz

 *Tony0945 wrote:*   

> What about the advice to NOT use discard?

 

There's nothing wrong with using discard, if instant discard is what you want. It's just that most people don't need it, and it comes with performance penalties, so fstrim is the better choice.

----------

## A.S. Pushkin

I've been battling my 850 PRO SSDs, and though they are fast I'll be going back to mechanical HDDs.

I've heard various remarks about TRIM, use it don't use it, but I fail to see that it does anything to restore lost capacity to my drives. I began with Gentoo installed on a 128GB 850 PRO and filled it up. I thought that perhaps I just needed a larger disk for Gentoo, so I upgraded to a 256GB 850 PRO. Well, the same thing is happening.

/home is located on a Seagate Barracuda.

I built this box for 3D modeling, video and audio editing, and I can no longer create DVDs with DeVeDeng.

Now I'm not sure if this is just Samsung SSDs or all of them, but I'm done with SSDs. I own three 850 PROs and a fast boot is of little value to me if I can not do my work.

----------

## NeddySeagoon

A.S. Pushkin,

Do you use eclean?

What about --depclean?

/var/tmp/portage fills with any failed ebuilds.

/var/log/portage can contain logs for everything you have ever built. 

Do you rotate your logs in /var/log?

There are a few other locations that grow until you prune them too.

/boot - old kernels and initrds

/lib/modules - loadable modules for all the kernels you have ever made.

/usr/src - bits of kernels that are not --depclean ed. 

Cleaning out your junk is as important as keeping your Gentoo up to date.

You can easily fill 250G in a year or so if you don't get rid of the rubbish.
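The pruning steps above map onto a handful of commands (eclean comes from app-portage/gentoolkit; the pretend flags preview what would be removed without touching anything):

```shell
eclean-dist --pretend      # preview stale distfiles that would be deleted
eclean-pkg --pretend       # preview stale binary packages
emerge --ask --depclean    # remove packages nothing depends on
rm -rf /var/tmp/portage/*  # clear leftovers from failed builds
```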

discard/fstrim has nothing to do with cleaning out the filesystem.  It informs the SSD that hosts the filesystem that it may erase once used space that is no longer in use.

This maintains SSD speed as there is no need to perform the slow erase operation before a block can be rewritten. The drive should do that in response to the discard/fstrim command, well before the blocks are reused.

----------

## A.S. Pushkin

I've been trying to use emerge --depclean, but I'm a bit leery of removing too much.

I've got some logs, but they total less than 1GB at this time.

Rotation is another area I'm uncertain about, that is, how to set it up, so I must assume I do not. Suggestions? I do have syslog-ng installed.

/boot - 

/lib/modules 

/usr/src

All seem alright

When I run fstrim -v / it returns no more than about 11GB.

Thanks

----------

## NeddySeagoon

A.S. Pushkin,

Run

```
du -d 1 /
```

 as root.  It will read your entire filesystem and tell you how much space each directory uses.

You can run the same command on that directory and so on, to find out where all your space is used.

I get 

```
$ sudo  du -hd 1 /

16K   /lost+found

459M   /lib64

4.0K   /boot

182G   /usr

0   /sys

8.9M   /bin

359M   /opt

921G   /home

272K   /dev

8.5M   /etc

1.4G   /var

19M   /sbin

72M   /root

4.0K   /tmp

164K   /run

du: cannot access '/proc/2440/task/2440/fd/4': No such file or directory

du: cannot access '/proc/2440/task/2440/fdinfo/4': No such file or directory

du: cannot access '/proc/2440/fd/3': No such file or directory

du: cannot access '/proc/2440/fdinfo/3': No such file or directory

0   /proc
```

/usr looks huge. 

```
$ sudo  du -hd 1 /usr

Password: 

330M   /usr/bin

1.2G   /usr/armv7a-hardfloat-linux-gnueabi

37G   /usr/aarch64-unknown-linux-gnu

1.4G   /usr/armv6j-hardfloat-linux-gnueabi

2.5G   /usr/lib64

16K   /usr/lost+found

126M   /usr/x86_64-pc-linux-gnu

13M   /usr/sbin

134M   /usr/games

1.3G   /usr/share

4.8M   /usr/local

41M   /usr/i686-pc-linux-gnu

32G   /usr/packages

300M   /usr/include

24K   /usr/var

7.3G   /usr/src

1.3G   /usr/libexec

99G   /usr/portage

4.0K   /usr/tmp

182G   /usr

21G   /mnt

1.1T   /

```

32G	/usr/packages includes all the amd64 binaries I have built since 2009, when this system was new. 

99G	/usr/portage includes all he distfiles this system has downloaded.

7.3G	/usr/src is down to kernel source trees.

That accounts for 140G of the 180G.

If you keep filling your HDD with your Gentoo and it's not in /home, you need to understand what's using the space.

----------

## Anon-E-moose

I run a mix of ssd and old hd tanks (lol) on my mostly server system.

On the ssd I have /, /usr/portage and a mostly static partition; I fstrim daily on root and /usr and weekly on /x. 

On one of the hd's, it's my "storage" filesystem, media (sound and movies) along with a few odds and ends.

It gets written hard every once in a while, and read a lot, and I'll swap it over to ssd when the 1 gig come down in price.

On the other hd, I have /var, and /home, both of which get pretty frequent writes along with reads.

/var/tmp/portage and /tmp are ram disks

How you set up your ssd or a mix of ssd/hd should be dependent on how you use your system.

Edit to add: 

I keep all distfiles and ebuilds backed up daily onto a backup drive that only gets mounted for backups,

and then do an eclean-dist every 2-3 months (that will keep the size of /distfiles down).

When in doubt do "eclean-dist -p" to see what would be removed.

----------

## A.S. Pushkin

Thank you NeddySeagoon and Anon-E-moose!

I clearly have got to become more adept at du!

Anon-E-moose, how do you run fstrim on those particular partitions?  Do you use cron or do you do it manually? I've been concerned that running fstrim too much was a bad idea for SSD longevity. I've heard it was particularly bad for Linux.

Thanks

----------

## NeddySeagoon

A.S. Pushkin,

fstrim is harmless to the drive.  It informs the drive of which blocks the filesystem is not using.

The drive will not erase already erased blocks.

Its also up to the drive if and when it takes any action at all based on this information.

While that's correct, it's not complete.

The erase block size is bigger than the write block size.  This leads to 'write amplification' as the drive needs to move used written blocks around to empty an erase block before it erases it.

How and when the drive decides to do this is up to the drive firmware.

----------

## A.S. Pushkin

NeddySeagoon thanks for the confirmation on what takes place.

I'm puzzled still. I did run fstrim on locations you suggest.

In this search I realized I should use sys-apps/smartmontools and installed it along with sys-apps/gsmartcontrol, which is very helpful.

I'm wondering if my NEW drive is worn out?

```
SMART Attributes Data Structure revision number: 1

Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0

  9 Power_On_Hours          0x0032   099   099   000    Old_age   Always       -       389

 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       124

177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       2

179 Used_Rsvd_Blk_Cnt_Tot   0x0013   100   100   010    Pre-fail  Always       -       0

181 Program_Fail_Cnt_Total  0x0032   100   100   010    Old_age   Always       -       0

182 Erase_Fail_Count_Total  0x0032   100   100   010    Old_age   Always       -       0

183 Runtime_Bad_Block       0x0013   100   100   010    Pre-fail  Always       -       0

187 Uncorrectable_Error_Cnt 0x0032   100   100   000    Old_age   Always       -       0

190 Airflow_Temperature_Cel 0x0032   078   065   000    Old_age   Always       -       22

195 ECC_Error_Rate          0x001a   200   200   000    Old_age   Always       -       0

199 CRC_Error_Count         0x003e   100   100   000    Old_age   Always       -       0

235 POR_Recovery_Count      0x0012   099   099   000    Old_age   Always       -       12

241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       1132095707

```

I'm not sure how to interpret this output, but it does not look good.

Running fstrim has produced a 10G return, but running df -h on my system does not show any increase in available disk space.

Does suggest need for a firmware update?

Thanks!

----------

## NeddySeagoon

A.S. Pushkin,

fstrim does not free any space on your filesystem.

When you delete a file, the data remains in place on your hard drive. Only the pointers to it are removed. 

The space occupied by the file is returned to the filesystem free space.

fstrim tells the drive about the free space in the filesystem.  The drive can then erase it well before it will be reallocated.

Compared to reads and writes, erase is a very slow operation.

A magnetic hard drive does not require a separate erase operation, as it can be overwritten by a write.

SSDs must be erased before they can be written.

The total number of blocks written to your drive is 

```
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       1132095707 
```

Each block will be good for over 100,000 erase cycles and the drive will perform 'wear levelling' to ensure the wear is spread over all the available blocks.

If your drive was only 50MB (Mega) then with the wear levelling, it would be worn out with your 1132095707 writes.

Your drive is hundreds or 1000s of times bigger than 50MB, so the lifetime used is correspondingly less.
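As a sanity check, the figures above can be combined (assuming 512-byte LBAs and the 256GB drive mentioned earlier in the thread):

```shell
# Cross-check the SMART numbers, assuming 512-byte LBAs.
lbas=1132095707
bytes=$((lbas * 512))
echo "$bytes bytes"     # 579633001984, i.e. about 580 GB
# full-drive writes so far, for a 256 GB drive:
awk -v b="$bytes" 'BEGIN { printf "%.1f full-drive writes\n", b / 256e9 }'
```

About 2.3 full-drive writes agrees with the Wear_Leveling_Count raw value of 2 in the smartctl output above, i.e. the drive is nowhere near worn out.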

----------

## Anon-E-moose

 *A.S. Pushkin wrote:*   

> 
> 
> Anon-E-moose how do you run fstrim on those particular partitions?  Do you use cron or
> 
> do you do it manually? I've been concerned that running fstrim too much was  bad idea
> ...

 

I run fstrim from cron, thus the once a day (I do it early morning, since the computer is on all the time)

I wouldn't run fstrim hourly but I would think once a day or even once a week wouldn't be a problem for most modern ssd's. 

The typical lifetime is in the 5+ year range, with moderately heavy usage.
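For example, dropping from twice daily to once a week only changes the schedule fields of the cron entry shown at the top of this thread:

```
# run fstrim once a week, early Sunday morning, logging the result
15 1 * * 0 /sbin/fstrim -v / | logger
```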

----------

## A.S. Pushkin

NeddySeagoon, you got me wondering what was happening. I assumed it was just a SSD problem.

On my part there is some confusion about the automounter, but I finally discovered what happened. Perhaps there is something not configured properly? But this is what happened with all my disk space.

I assumed my 1TB drive in the hotswap rack was being mounted at /media. It was not. It was being mounted at /run/media/

When I created two directories, they were actually created in / rather than on the 1TB drive. After actually moving the files I thought were on the 1TB drive from /media on my root drive to the 1TB drive, USE% went from 100% to 10%.

I clearly have more to do, but will take into account the plans used by you and Anon-E-moose.

Thanks.

----------

## NeddySeagoon

A.S. Pushkin,

Your SSD will be none the worse for being filled up once.

You should run fstrim now that it's 90% free, so the drive will erase those blocks just moved to your 1TB drive.

----------

