# limit size of kernel inode cache possible?

## fangorn

Hi,

Is there a way to limit the size of the kernel's inode caches (xfs_inode, nfs_inode)?

I have two long-running production servers, one of which mirrors the other through rsync daemons. They ran for about a year without problems. A few weeks ago the main server began to hang from time to time. I monitored it with atop and slabtop and saw that the kernel memory usage (slab) went from less than 300 MB to over 4 GB from time to time. After a while the memory usage goes back to normal again.

This leads me to the conclusion that it is not a memory leak but just "normal" behaviour.
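For reference, here is roughly how I watch the slab growth; a small sketch around /proc/slabinfo (the slab names are what they are called on my kernel, and the sample numbers at the end are made up purely for illustration):

```shell
#!/bin/sh
# Sum the memory held by the big inode caches. /proc/slabinfo (v2.1)
# fields are: name <active_objs> <num_objs> <objsize> ...
# Pass a file as $1 for testing; defaults to the live /proc/slabinfo.
slab_mb() {
    awk '/^(xfs_inode|nfs_inode_cache|dentry) / {
        mb = $3 * $4 / (1024 * 1024)
        printf "%-16s %8.1f MB\n", $1, mb
        total += mb
    } END { printf "%-16s %8.1f MB\n", "TOTAL", total }' "${1:-/proc/slabinfo}"
}

# Demo with made-up numbers in the shape of the real file:
printf '%s\n' \
    'xfs_inode 4200000 4300000 1024 8 2 : tunables 0 0 0' \
    'nfs_inode_cache 900000 950000 1048 7 2 : tunables 0 0 0' \
    > /tmp/slabinfo.sample
slab_mb /tmp/slabinfo.sample
```

Run without an argument (and as root on newer kernels) it reads the live /proc/slabinfo instead.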

The problem is that from time to time, under high load, new processes are killed because no free memory is available. And occasionally the machine hangs even when nothing (visible) is happening on it (CPU, hard disk and network load are next to zero). Today the server halted completely, with some very low-load daemons (license daemons, an idle rsyncd and the NFS server, ...) supposedly eating all four of its XEON cores. Nevertheless the machine was totally unresponsive over the network, on the local console and even to the hardware power switch.

Now I am the one who has to make sure this sudden death does not happen again.   :Rolling Eyes: 

My problem is that the logs look clean to me. As far as the system was concerned, the machine was still running when I cut the power. The only hints I have are the tremendous slab usage of the inode cache, the occasional events in which new processes are killed for lack of memory, and the sometimes unresponsive server.

A little background: the server is a Supermicro barebone with a four-core XEON processor, 8 GB RAM and hardware RAID6 over twelve 2 TB SATA disks. It serves multiple filesystems over NFS (versions 3 and 4 mixed). All data filesystems use XFS with default parameters. The kernel is 2.6.32 amd64, and the network driver is the in-kernel e1000.

To come back to my question: is it possible to tell the kernel not to use more than X % of the installed RAM for inode caches? This would cost me some performance, but it would possibly work around the occasional hangs.
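To partly answer my own question: the only related knob I have found so far is vm.vfs_cache_pressure, which only biases reclaim toward the dentry and inode caches rather than setting a hard limit:

```shell
# vfs_cache_pressure > 100 makes the kernel reclaim dentries and
# inodes more aggressively relative to the page cache. It is a bias,
# not a hard cap; the default is 100. Needs root.
sysctl -w vm.vfs_cache_pressure=200

# Persist across reboots:
echo 'vm.vfs_cache_pressure = 200' >> /etc/sysctl.conf
```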

Or if you have other ideas, don't hesitate to share them. I am stuck at the moment. 

Thanks for reading this rather lengthy post, 

fangorn

----------

## fangorn

Seems the problem is not that common after all.  :Twisted Evil: 

What I forgot to write is why I blame the kernel inode caches for this strange behaviour: in slabtop I saw the inode caches of XFS and NFS grow within seconds during a sync process, effectively reading the file metadata of multiple multi-TB filesystems from disk and storing it in memory.

As every filesystem is read only once, the cache is worthless by the time the sync switches to the next filesystem. I would like to at least flush the caches before starting the next sync process.

An increase in system memory will most likely not help either, as the caches will just grow further if they are not limited in some way.

So I'll ask on the kernel mailing list and report back here if I get an answer.

----------

## fangorn

```
echo 2 > /proc/sys/vm/drop_caches
```

drops the reclaimable slab objects, which includes the dentry and inode caches. Just for reference. 
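A sketch of how I intend to use this between sync runs. The paths and the rsync call are placeholders, not my real setup, and the write to drop_caches needs root:

```shell
#!/bin/sh
# Flush reclaimable slab objects (dentries and inodes) between sync
# runs, so the metadata cached for one filesystem does not pile up on
# top of the next.
flush_inode_caches() {
    sync    # push dirty metadata to disk; drop_caches only frees clean objects
    if ! echo 2 2>/dev/null > /proc/sys/vm/drop_caches; then
        echo 'warning: could not drop caches (not root?)' >&2
    fi
}

# Hypothetical sync loop -- filesystem paths and the rsync module
# name are placeholders:
for fs in /data/fs1 /data/fs2; do
    echo "would sync $fs here, e.g. rsync -a $fs/ mirror::backup/"
    flush_inode_caches
done
```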

I am getting more and more the impression that this is a hardware problem rather than a software one.

----------

