# squid does not respect cache_dir size but uses all diskspace

## DawgG

i run squid-3.5.27 on an up-to-date amd64 server; its cache dirs (and logs) live on their own 100GB ext2 partition. i want the cache to use 75GB of this, so the directive in squid.conf is

```
cache_dir aufs /mnt/cache/squid 75000 16 256
```
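
for anyone skimming: the fields of that directive are store type, path, maximum cache size in MB, and the first-/second-level subdirectory counts (per the squid.conf documentation); annotated:

```
# cache_dir <type> <path> <size-MB> <L1> <L2>
#   aufs             - async ufs store (threaded disk I/O)
#   /mnt/cache/squid - top of the cache directory tree
#   75000            - max cache swap size in MB (~75 GB)
#   16 / 256         - first-/second-level subdirectory counts
cache_dir aufs /mnt/cache/squid 75000 16 256
```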

recently the webcache became inaccessible because the whole partition was 100% full (space, not inodes). when i checked, i noticed that squid frequently occupies more than the allocated 75GB; e.g. just now

```
squid2 ~ # df -h

/dev/sdb2        99G     82G   13G   87% /mnt/cache
```

this webcache is used mainly for windoze-updates and has some unconventional settings because of this (see https://wiki.squid-cache.org/ConfigExamples/Caching/WindowsUpdates), especially

```
range_offset_limit 6000 MB

maximum_object_size 6000 MB

quick_abort_min -1

quick_abort_max -1

quick_abort_pct -1
```

but i think this should not let the whole cache become inaccessible or override the cache_dir-setting above.

what do you think?

----------

## Irom

It's very unlikely because the size difference is immense, but here's at least something to try: Maybe the space is wasted by the minimum block size of your fs?

How many GB does

```
find /mnt/cache/squid -printf '%k\n' | awk '{ sum += $1 } END { printf("%.2f GB\n", sum / 1024 / 1024 ) }'
```

show?

----------

## DawgG

THX for your reply!

the difference is minimal:

```
find /mnt/cache/squid -printf '%k\n' | awk '{ sum += $1 } END { printf("%.2f GB\n", sum / 1024 / 1024 ) }'

93.04 GB
```

 vs.

```
df /mnt/cache/

/dev/sdb2      103212320 97898508     70932  100% /mnt/cache

df -h /mnt/cache/

/dev/sdb2        99G     94G     0  100% /mnt/cache
```

so the service went down again (from cache.log):

```
2018/06/29 11:34:03 kid6| diskHandleWrite: FD 18: disk write error: (28) No space left on device

FATAL: Write failure -- check your disk space and cache.log

Squid Cache (Version 3.5.27): Terminated abnormally.

CPU Usage: 0.018 seconds = 0.014 user + 0.005 sys

Maximum Resident Size: 70608 KB

Page faults with physical i/o: 9
```

i guess the cache_swap-settings somehow conflict with the large object_size and the non-terminating transfers of the f***ing windoze-updates. i'll look deeper into that.

----------

## Irom

I just saw the following as a comment in my own squid.conf and think it could explain your problem.

*http://www.squid-cache.org/Doc/config/cache_dir/ wrote:*

> Do NOT put the size of your disk drive here. Instead, if you want Squid to use the entire disk drive, subtract 20% and use that value.

75 GB of cache plus ~20 GB of headroom (20% of your ~99 GB disk) roughly accounts for the 95 GB you're seeing.
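
a quick sanity check of that arithmetic (a sketch; the 20% figure is the docs' rule of thumb, the 99 GB partition size comes from the `df -h` output above):

```shell
# the docs say to leave ~20% of the partition as headroom; 20% of
# the ~99G partition plus the 75G cache_dir lands near the observed
# ~95G of usage
awk 'BEGIN {
  partition = 99          # GB, /dev/sdb2 size per df -h
  cache_dir = 75          # GB, configured cache_dir limit
  headroom  = 0.20 * partition
  printf("headroom: %.0f GB, cache_dir + headroom: %.0f GB\n",
         headroom, cache_dir + headroom)
}'
```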

----------

## DawgG

*Quote:*

> Do NOT put the size of your disk drive here. Instead, if you want Squid to use the entire disk drive, subtract 20% and use that value.

yeah, seen&done that, too  :wink: 

i set

```
cache_swap_low 70

cache_swap_high 80
```

(default was 90/95) in order to evict objects earlier, and managed to restart squid without having to erase the whole cache. fs usage went down to 60-70GB in between, but now the service went down again because of "disk full".

i reduced the above settings to 60/75, but if this does not help, it's time for the squid mailing list, i guess.
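
for reference, cache_swap_low/high are percentages of the configured cache_dir size (75000 MB here): squid starts replacing objects above the low water mark and evicts aggressively above the high one. the absolute thresholds for the defaults and the two tries above work out to:

```shell
# thresholds for a 75000 MB cache_dir at the default 90/95 marks
# and at the 70/80 and 60/75 settings tried in this thread
for marks in "90 95" "70 80" "60 75"; do
  set -- $marks
  awk -v low=$1 -v high=$2 'BEGIN {
    printf("low %d%% = %d MB, high %d%% = %d MB\n",
           low, 75000*low/100, high, 75000*high/100)
  }'
done
```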

----------

## bunder

have you tried with diskd instead of aufs?  mine seems to be working fine...

```
cache_dir diskd /var/cache/squid 8000 16 256

```

----------

## DawgG

@bunder

thx for your reply, but i am quite certain the store type is not the problem: i run an (almost) identical squid server (cache_dir gets 200 of 250GB) without all those windoze-update modifications, and it has been running flawlessly for years

```
cache_dir aufs /mnt/cache/squid 200000 32 51

squid ~ # df -h 

/dev/sdb1       247G    179G   56G   77% /mnt/cache/squid

```

likely a case for the squid mailing list (or squid's bugzilla).

----------

## DawgG

i asked on the squid mailing list, and (most likely; it's not completely resolved yet) the reason is that

```
workers 8
```

(an SMP setting for squid) cannot be used with aufs. it is supposed to work with ufs, but in a quick test my impression was that it does not work with that, either.

i turned off the SMP setting, but will look into it further (as i still think this is related to those windoze-update settings).
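
for anyone hitting the same thing: a sketch of the usual workarounds, based on squid's SMP documentation (the paths and per-worker sizes below are made up for illustration) - either switch to the SMP-aware rock store, or give each worker its own aufs/ufs cache_dir via per-process conditionals:

```
# alternative 1: SMP-aware rock store shared by all workers
# cache_dir rock /mnt/cache/squid-rock 75000

# alternative 2: one aufs/ufs cache_dir per worker, selected with
# ${process_number} (1..workers); paths and sizes are placeholders
workers 8
if ${process_number} = 1
cache_dir aufs /mnt/cache/squid-1 9000 16 256
endif
if ${process_number} = 2
cache_dir aufs /mnt/cache/squid-2 9000 16 256
endif
# ...repeat for workers 3 through 8
```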

----------

