# Apache Memory Consumption

## dageyra

I have a server that has been restarting itself recently.  I traced the problem to an out-of-memory error, and the culprit appears to be Apache.  The machine is a 64-bit quad-core Xeon with 4 GB of RAM and 8 GB of swap.  It handles BIND, Apache, and MySQL, while a separate machine handles mail.  top shows that the biggest memory users are Apache and MySQL, with Apache in the lead.  I am wondering how I can get more information about which site is causing such high memory usage.  We are using the worker MPM, but maybe we could change that.  Our 00_mpm.conf for the worker module is as follows:

```
<IfModule mpm_worker_module>
        StartServers            2
        MinSpareThreads         25
        MaxSpareThreads         75
        ThreadsPerChild         25
        MaxClients             150
        MaxRequestsPerChild  10000
</IfModule>
```
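For reference, with the worker MPM the process count tops out at roughly MaxClients / ThreadsPerChild.  A rough sketch of the memory budget using the numbers above (the per-child RSS figure is an assumed round number taken from top, not a measurement):

```shell
# Rough worker-MPM memory budget for the config above.
# RSS_PER_CHILD_MB is an assumption (~45 MB RES per apache2 child in top).
MAXCLIENTS=150
THREADS_PER_CHILD=25
RSS_PER_CHILD_MB=45

# Worst case: every thread slot in use -> MaxClients / ThreadsPerChild children
CHILDREN=$((MAXCLIENTS / THREADS_PER_CHILD))
echo "up to $CHILDREN children x ${RSS_PER_CHILD_MB} MB ~= $((CHILDREN * RSS_PER_CHILD_MB)) MB for Apache"
```

By that estimate Apache's resident memory should top out in the hundreds of megabytes, nowhere near 4 GB, which is worth keeping in mind when reading the top output below.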

When I run top, I see the following info:

```
Mem:   4055960k total,  3662084k used,   393876k free,   263560k buffers
Swap:  8008392k total,      164k used,  8008228k free,  3071564k cached
....
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
11762 apache    20   0  513m  45m 6036 S    0  1.1   0:01.18 apache2
11761 apache    20   0  572m  43m 5960 S    0  1.1   0:01.20 apache2
 3733 mysql     20   0  417m  37m 3796 S    0  1.0   9:29.78 mysqld
11757 root      20   0  184m 9772 3872 S    0  0.2   0:00.03 apache2
 3613 named     20   0 65696 9288 1992 S    0  0.2   0:01.01 named
11760 apache    20   0  181m 4272  656 S    0  0.1   0:00.00 apache2
```

Does anyone have ideas on what we can try to reduce Apache's memory consumption, or perhaps tracing tools we can use to identify which websites are consuming the most memory?  Thanks in advance for your help.

----------

## St3v3

Try lowering MaxRequestsPerChild to about 250 and see how that goes.

----------

## dageyra

 *St3v3 wrote:*   

> try lowering MaxRequestsPerChild to about 250 and see how that goes

 

Hello St3v3:

Thanks for your help.  I will try this and see how it goes.  Can you explain the logic here?  I do not recall why the number got so high, but I remember we had problems with Apache being slow a long time ago.

----------

## St3v3

MaxRequestsPerChild is high by default, but it doesn't have to be.

You can also try adding "Timeout 45" in there too; by default it's 300 seconds, and memory will stay tied up for that thread even if the user is long gone.

----------

## dageyra

 *St3v3 wrote:*   

> MaxRequestsPerChild is high by default, but it doesn't have to be.
> 
> You can also try adding "Timeout 45" in there too; by default it's 300 seconds, and memory will stay tied up for that thread even if the user is long gone.

 

Unfortunately, that did not resolve the problem.  I made the change and rebooted the server.  For a while, memory usage hovered around 200-300 MB, but today it's back to very high (almost 4 GB):

```
Mem:   4055960k total,  3957152k used,    98808k free,   248376k buffers
Swap:  8008392k total,      164k used,  8008228k free,  3428772k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30012 apache    20   0  573m  45m 5832 S    0  1.2   0:03.02 apache2
 3711 mysql     20   0  406m  29m 3736 S    0  0.8   7:04.24 mysqld
 3839 root      20   0  184m 9796 3888 S    0  0.2   0:00.46 apache2
 3595 named     20   0 65696 9288 1992 S    0  0.2   0:00.48 named
30937 apache    20   0  400m 7400 1268 S    0  0.2   0:00.00 apache2
 3842 apache    20   0  181m 4280  656 S    0  0.1   0:00.00 apache2
```

I will try the timeout.  Is there some way to identify the site that is consuming the most memory?

----------

## LILY

Thanks for providing such useful information; I have been struggling with this problem for the last couple of days.

----------

## St3v3

 *dageyra wrote:*   

>  *St3v3 wrote:*   MaxRequestsPerChild is high by default, but it doesn't have to be.
> 
> You can also try adding "Timeout 45" in there too; by default it's 300 seconds, and memory will stay tied up for that thread even if the user is long gone. 
> 
> Unfortunately, that did not resolve the problem.  I made the change and rebooted the server.  For awhile, memory usage hovered around 200-300 MB, but today it's back to very high (almost 4 GB):
> ...

 

Hmm, try this command and have a look at what Apache is doing:

/usr/local/apache/bin/apachectl status

The obvious thing to do to cut down on memory usage is to lower MaxClients, but then obviously not as many people will be able to connect.
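Note that for `apachectl status` to show anything, mod_status has to be loaded and a server-status handler configured.  A typical 2.2-era stanza (the location and access rules here are just an example, adjust for your setup):

```
<IfModule mod_status.c>
    ExtendedStatus On
    <Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>
</IfModule>
```

(`apachectl status` shells out to a text browser such as lynx; browsing /server-status directly works too.)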

----------

## Mad Merlin

 *dageyra wrote:*   

> 
> 
> ```
> Mem:   4055960k total,  3957152k used,    98808k free,   248376k buffers
> ...
> ```

 

Free memory is wasted memory. You have 3.4G used for cache, which can be evicted at any time at no cost -- you can consider that as available memory. Therefore, you have 3.5G of 4G memory free.*

The VIRT number you see in top is the amount of address space the program is using; because of shared libraries, mmap(), and other factors, it does not represent the program's actual memory usage.  The RES number is the amount of memory used by that process (and only that process) that's currently resident in RAM; that's the number to count.
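To put numbers on that, here's a quick sketch that sums the RES column for the apache2 rows from the top output earlier in the thread (sample data pasted in as a here-doc; on a live box you could get the same thing from `ps -C apache2 -o rss=` instead):

```shell
# Sum apache2 RES (column 6 of top's output) for the processes shown above.
# top prints RES in KiB unless the value carries an "m" (MiB) suffix.
total_mb=$(awk '/apache2$/ {
    v = $6
    if (v ~ /m$/) { sub(/m$/, "", v); kb = v * 1024 }  # "45m" -> 46080 KiB
    else          { kb = v }                           # plain KiB value
    total += kb
} END { printf "%.1f MB", total / 1024 }' <<'EOF'
11762 apache    20   0  513m  45m 6036 S    0  1.1   0:01.18 apache2
11761 apache    20   0  572m  43m 5960 S    0  1.1   0:01.20 apache2
11757 root      20   0  184m 9772 3872 S    0  0.2   0:00.03 apache2
11760 apache    20   0  181m 4272  656 S    0  0.1   0:00.00 apache2
EOF
)
echo "apache2 total RES: $total_mb"
```

Around 100 MB resident across all the apache2 processes, i.e. nothing like the "almost 4 GB" figure, which is the cache.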

* When you read a file, it first has to be copied into RAM.  Because this is a relatively slow operation (waiting for the disk(s)), Linux leaves the copy of the file in RAM until the memory is needed for something else or the file is deleted.  Because the file is still present on disk, the cached copy in RAM can be thrown away at any time without any extra work; it'll simply be fetched from disk again next time.  In the meantime, your machine can hum along without needing to touch the disks when serving files from the cache.

Observe:

```
$ dd if=KNOPPIX_V6.0-ADRIANE_V1.1CD-2009-01-27-EN.iso of=/dev/null
1329036+0 records in
1329036+0 records out
680466432 bytes (680 MB) copied, 7.96729 s, 85.4 MB/s
$ dd if=KNOPPIX_V6.0-ADRIANE_V1.1CD-2009-01-27-EN.iso of=/dev/null
1329036+0 records in
1329036+0 records out
680466432 bytes (680 MB) copied, 0.722715 s, 942 MB/s
```

The first time was uncached, while the second was cached.

In summary: You're trying to fix problems that aren't problems, though it's a pretty common reaction until it's been explained.

----------

## dageyra

 *Mad Merlin wrote:*   

>  *dageyra wrote:*   
> 
> ```
> Mem:   4055960k total,  3957152k used,    98808k free,   248376k buffers
> ...
> ```

 

Hello Mad Merlin:

Thanks for your insightful information; that actually does help out a lot.  This actually is a problem for me: the server's RAM gets eaten up by cache (I only just learned about this elsewhere, since I was not notified about your reply to this post; what I read said basically what you said, minus the useful examples), then the swap starts to fill, and the server reboots itself.  This has happened a few times while users were accessing it.

I found this page: http://www.scottklarr.com/topic/134/linux-how-to-clear-the-cache-from-memory/

Do you know anything about this method to control the cache?  I'd actually prefer just to limit the size of the cache.

----------

## cach0rr0

Flushing the cache will only be a temporary band-aid for the problem; that memory will quickly be gobbled back up as new requests come in.

More than anything I'd be keen to look at real usable figures on how much data this system is serving. 

There are also things like this to be made aware of - http://www.funtoo.org/en/security/slowloris/

Do you have any metrics on just how much work Apache is actually doing?  Again, as the fella above mentioned, the top output you show is of a system with heaps of memory to spare (as you've acknowledged -- though I do understand that eventually swap gets eaten). 

You may quite simply be throwing more requests at this box than it alone can handle.  That's a completely unqualified, uneducated statement on my behalf, but I don't think any of us can say anything more educated on the matter until we know whether the resource utilization we're seeing is real, genuine request traffic overwhelming the box.

----------

## dageyra

 *cach0rr0 wrote:*   

> Flushing the cache will only be a temporary band-aid for the problem; that memory will quickly be gobbled back up as new requests come in.
> 
> More than anything I'd be keen to look at real usable figures on how much data this system is serving. 
> 
> There are also things like this to be made aware of - http://www.funtoo.org/en/security/slowloris/
> ...

 

Thanks for your reply.  My gut reaction is that it's Apache too, hence the focus of this post, but I admit that Linux memory management is an ongoing learning process for me.  How did I miss something as basic as caching?  My concern is primarily the random reboots -- I have another server that can stay up for a year (which I know is not a great boast for Linux, but great for me compared to the Windows machines I administer).  Heck, I probably wouldn't even notice this as a problem if it were Windows; just another day, another reboot.

Part of this post is actually a request for suggestions on what metrics I can collect from Apache, so if there are any in particular you'd like to see, I'm like a sponge.  I will look into that link you sent, but if you have any immediate suggestions, please pass them on.  I'm also open to looking at services other than Apache as the culprit, but since the other services' memory usage seems stable overall, Apache seems most likely to me (and it has the added issue of running user-driven code that could be a memory eater, though the top output doesn't support this).

I have posted my Apache MPM configuration above; what else would you recommend?

----------

## cach0rr0

log-based reporting apps like awstats would be a start - BUT

What concerns me most is the less-than-graceful fashion in which this box is handling an out-of-memory scenario. 

I don't really have any specific pointers.  I can only say that if it were my box, I'd be fishing around /var/log/dmesg, /var/log/messages, etc. to see if I could correlate any specific events with the reboots.  No specific suspects; it would be a shot in the dark for me. 

Using a monitoring tool like Nagios (e.g. deploying an agent to this box) will give you an idea of the system load right before a reboot, and it'll also let you do proper trending.  Of course, most people would be loath to go through with setting this up on a box they already know to be problematic. 

Sorry, that may be completely useless to you. That's just how I personally would go about it, and there's nothing to say my methods aren't completely and totally wrong - just familiar.

----------

## Mad Merlin

 *dageyra wrote:*   

> Hello Mad Merlin:
> 
> Thanks for your insightful information; that actually does help out a lot.  This actually is a problem for me: the server's RAM gets eaten up by cache, then the swap starts to fill, and the server reboots itself.  This has happened a few times while users were accessing it.
> 
> I found this page: http://www.scottklarr.com/topic/134/linux-how-to-clear-the-cache-from-memory/
> ...

 

You shouldn't ever need to flush the cache; the only time it makes sense to do so is when you're benchmarking and want to measure uncached numbers.  As mentioned in the comments of that article, you can tune your swappiness (or disable swap) to control how much swap is used.
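For example, in /etc/sysctl.conf (the value here is just an illustration; apply it with `sysctl -p`, or test it first via /proc/sys/vm/swappiness):

```
# Make the kernel less eager to swap anonymous pages out in favor of cache.
# Range is 0-100; the default is 60. Lower means swap is used later.
vm.swappiness = 10
```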

----------

