# [SOLVED] SSD says it is full

## davidbrooke

I was notified that space was running out on my 240GB SSD. I looked and there was less than a GB of free space. I quickly removed a few files, which freed up approximately 15GB. This drive holds mainly just the OS, and its used space usually ranges from 20-30GB. I ran:

```
/ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       220G  196G   14G  94% /
devtmpfs         10M     0   10M   0% /dev
tmpfs           785M  612K  785M   1% /run
cgroup_root      10M     0   10M   0% /sys/fs/cgroup
shm             3.9G   30M  3.9G   1% /dev/shm
tmpfs           3.9G   20K  3.9G   1% /tmp
```

I decided to review what was using the rest of the space. I ran the following from the root directory:

```
du -Sh | sort -rh | head -n 100
```

and 

```
find . -type f -exec du -Sh {} + | sort -rh | head -n 100
```

Using either method, I added up the used space and came up with approximately 23GB versus the 196GB shown as used. I also checked the SMART data, which showed no failures or errors, and ran fsck -c. So at this point I'm not sure whether I have a file or files totaling 173GB that I can't find, or whether there is a hard drive issue.
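For reference, a minimal sketch of the two views of the same filesystem (illustrative only; `-x` keeps du on one filesystem so submounts such as /dev, /run and /tmp don't distort the total):

```shell
# df reports the kernel's block accounting for the filesystem;
# du walks the directory tree and sums what it can see.
# -x: stay on the root filesystem (skip submounts); -s: summarize.
# A large gap between the two numbers points at deleted-but-open
# files or at data hidden underneath mountpoints.
df -h /
du -xsh /
```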

Any help would be appreciated.

Thanks

*Last edited by davidbrooke on Mon Jun 22, 2015 3:49 pm; edited 1 time in total*

----------

## mv

You did not write which filesystem you are using; possible answers depend on this.

For a "classical" filesystem (e.g. ext*) there can be 2 causes for the difference:

1. You have a lot of very small files. For instance, for the gentoo tree a factor of 10 between the sum of the file sizes and the disk space actually used can be normal; similarly if you have huge html directories. For other data such a factor is strange...

2. Your system has reserved one (or several) huge files for some reason (e.g. on /tmp), but the file handle is not yet closed. To make sure that the space is freed, you need to umount and mount again. If it is your main disk, you might need to reboot.
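One way to check for cause 2 without rebooting is lsof, if you have it installed; `+L1` selects open files whose link count is below 1, i.e. files that have been deleted but are still held open:

```shell
# List deleted-but-still-open files; their space is not freed until
# the owning process closes the handle or exits.
lsof +L1
# If a huge entry shows up here, restarting the listed process frees
# the space without a full reboot.
```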

----------

## frostschutz

```
mkdir /mnt/root
mount --bind / /mnt/root
xdiskusage /mnt/root
```

----------

## davidbrooke

 *mv wrote:*   

> You did not write which filesystem you are using; possible answers depend on this.
> 
> For a "classical" filesystem (e.g. ext*) there can be 2 causes for the difference:
> 
> 1. You have a lot of very small files. For instance, for the gentoo tree a factor of 10 between the sum of the file sizes and the disk space actually used can be normal; similarly if you have huge html directories. For other data such a factor is strange...

 

I'm using an ext4 fs, and I have similar installations that are not having this problem of consuming the hard drive. I have also used Dolphin to review the space consumed, and it is relatively close to the du results.

 *Quote:*   

> 2. Your system has reserved one (or several) huge files for some reason (e.g. on /tmp), but the file handle is not yet closed. To make sure that the space is freed, you need to umount and mount again. If it is your main disk, you might need to reboot.

 

I have rebooted numerous times and used System Rescue to run the du commands.

----------

## toralf

Because you use an SSD, do you let the OS trim/discard it? (e.g. I have:

```
/dev/sda3               /               btrfs           noatime,discard,compress=lzo,ssd_spread        0 0
```

)

----------

## davidbrooke

 *toralf wrote:*   

> Because you use an SSD, do you let the OS trim/discard it? (e.g. I have:
> 
> ```
> /dev/sda3               /               btrfs           noatime,discard,compress=lzo,ssd_spread        0 0
> 
> ...

 

My current fstab entry:

```
/dev/sda1 / ext4 defaults,relatime,discard 0 1
```

----------

## davidbrooke

 *frostschutz wrote:*   

> ```
> mkdir /mnt/root
> ...

 

Please give some insight as to what you are proposing.

Thanks

----------

## frostschutz

A bind mount lets you find files currently hidden under other mountpoints.

xdiskusage (if you have X) is easier to interpret than du output.

----------

## davidbrooke

 *frostschutz wrote:*   

> bind mount lets you find files currently hidden under other mountpoints
> 
> xdiskusage (if you have X) is easier to interpret than du output

 

Using your approach I was able to better see what was happening but I still don't understand how it happened.

Maybe someone can give me some insight so that it doesn't occur again. Here are more details of my situation:

1. I was using a nightly cron job to rsync sdc1 to sdd1. Here is the output of crontab -l:

```
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.XXXXhZ3RjR installed on Mon Mar  2 11:51:04 2015)
# (Cron version V5.0 -- $Id: crontab.c,v 1.12 2004/01/23 18:56:42 vixie Exp $)
0 1 * * * /usr/bin/rsync -av --stats --delete /media/sdc1/ /media/sdd1/
```

2. I do periodic checks on sdd1 to make sure all is well. When I did this, I saw that sdd1 hadn't been updated for a couple of weeks. I also saw that the SMART data said the drive was failing, so I commented out the fstab entry and ordered a new drive.

3. A few hours later I was notified via KDE that sda1 was low on space (< 1GB left). I researched and worked on a solution but failed, and came to the Gentoo forums.

4. Using xdiskusage I was able to see that part of sdd1 had been copied onto sda1, causing the low disk space issue. I had seen this earlier using the du command but didn't quite understand it, due to my lack of Linux knowledge.

5. I used System Rescue, mounted sda1 and removed /media/sdd1. I rebooted and saw that sda1 was using 35GB vs the 195GB previously.

If anyone could comment on how my issue happened and how not to let it occur again, it would be appreciated.

Thanks

----------

## frostschutz

You have to check that a mountpoint is actually mounted before dumping tons of data into it.
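That is the whole mechanism here: an unmounted mountpoint is just an ordinary directory on the parent filesystem, so the copy lands on sda1. A throwaway demonstration (the paths are made up):

```shell
# /tmp/demo/sdd1 stands in for the unmounted /media/sdd1: with no
# disk mounted there, it is just a directory on the parent
# filesystem, so anything rsync writes into it consumes space there.
mkdir -p /tmp/demo/sdd1
echo "backup data" > /tmp/demo/sdd1/file
# df shows the file lives on the parent filesystem, not a backup disk:
df /tmp/demo/sdd1
```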

Either that, or make sure the copy will fail by removing write permissions on the unmounted mountpoint directory.

----------

## davidbrooke

I found this example:

```
DIR=/Volumes/External
if [ -e "$DIR" ]; then
    rsync -av ~/dir_to_backup "$DIR"
else
    echo "$DIR does not exist"
fi
```

I would change it to:

```
DIR=/media/sdd1
if [ -e "$DIR" ]; then
    rsync -av --stats --delete /media/sdc1/ "$DIR"/
else
    echo "$DIR does not exist"
fi
```

or use:

```
DIR=/media/sdd1
test -e "$DIR" && rsync -av --stats --delete /media/sdc1/ "$DIR"/
```

Comments please.

Thanks

----------

## davidbrooke

I think it would be better to start a new topic. I will be moving over to "Other Things Gentoo" to get feedback for using rsync.

Thanks

----------

