# Preventing one user from using all the available CPU (solved)

## 22decembre

Hello

I'm wondering how to stop one user (or two, but even one would be great!) from eating more than a fixed percentage of the CPU power.

I have a few programs on my web server that run under the apache user id (not the web server itself: these processes are only launched by apache), and these processes can block access to the server in general (ssh or web access, for example) because they eat all the available CPU. They are already niced to 19.

So if I could limit apache to, let's say, 50% of the CPU (or some other value), I think that might solve my problem.

----------

## rh1

Maybe this is what you're looking for?

http://cpulimit.sourceforge.net/
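For what it's worth, cpulimit's basic invocation looks roughly like this (the flag names are from the classic sourceforge version, so double-check `cpulimit --help` on your box; `apache2` and the percentage are just placeholders):

```shell
# Limit an already-running process by PID to 50% of one core:
#   cpulimit -p 1234 -l 50
# Or target by executable name, so restarted processes are caught too:
#   cpulimit -e apache2 -l 50

# Tiny helper that just builds the command line, so you can eyeball
# it before unleashing it on a live server:
limit_cmd() {
    printf 'cpulimit -e %s -l %d\n' "$1" "$2"
}
limit_cmd apache2 50
```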

----------

## 22decembre

It could be...

I was thinking of doing it in the kernel itself (via the scheduler, for the same reason the developer gives...).

I will try it!

Has anyone already used this software? Especially with apache, since apache is the process that launches the problem processes (if you follow me!). Apache seems a bit odd to me, as it runs several instances (one on top, and several answering requests, if I understand correctly), and those child processes are never stable! The only things I can do are limit apache itself or modify the code of the webapp...

So, if anyone has used cpulimit with apache (or without), (s)he is welcome to tell me the result (good or bad). Thanks!

Of course, everybody is still welcome to answer!

----------

## gentoo_ram

If the high-CPU processes are already "nice"-d to 19, then why are you worried?  The scheduler will yield CPU time to other higher-priority processes if they need it.  Also, you might want to explore installing the new 2.6.38 kernel.  It has automatic support for cgroup scheduling which is supposed to help smooth CPU time across different users.

Why would you want the CPU to be forced "idle" if there are processes waiting for more CPU time?
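For reference, the per-group CPU weighting that cgroups provide can also be set up by hand. This is only a sketch, assuming the cgroup-v1 `cpu` controller is mounted at /sys/fs/cgroup/cpu (the paths and the `apache2` name are placeholders; run as root):

```shell
# Create a group with half the default CPU weight (default is 1024)
# and move apache's parent process into it; children stay in the
# same cgroup when they are forked.
mkdir -p /sys/fs/cgroup/cpu/web
echo 512 > /sys/fs/cgroup/cpu/web/cpu.shares
echo "$(pidof -s apache2)" > /sys/fs/cgroup/cpu/web/tasks
```

Note that cpu.shares is a proportional weight, not a hard cap: the group only gets squeezed when some other process actually wants the CPU, which matches the point above about not forcing the CPU idle.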

----------

## madchaz

I find that 99% of the time, the best solution is to actually fix the issue. In your case, the webapp. Find out why it consumes so much cpu and fix it. Then your problem goes away much more cleanly.

Specifically, if you have reniced it to 19 and it STILL causes problems, then you have a major problem in there. I have processes eating the idle time on my server (as in, they will consume ALL the available CPU no one else is using), they are nice 19, and they never cause problems. If something that heavily reniced still manages to floor the box, you have a major bug.

----------

## aCOSwt

What about fiddling with things in /etc/security/limits.conf?
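One caveat worth knowing before going down that road: the `cpu` item in limits.conf is a ceiling on total CPU time in minutes per process (enforced via RLIMIT_CPU; the process is killed once it's exceeded), not a percentage share, which may be why it didn't do what was wanted. A sketch of the syntax (the `apache` domain and the values are placeholders):

```
# /etc/security/limits.conf -- format: <domain> <type> <item> <value>
# "cpu" is total CPU time in minutes; "nproc" caps the process count.
apache  hard  cpu    10
apache  hard  nproc  50
```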

----------

## 22decembre

 *aCOSwt wrote:*   

> What about fiddling with things in /etc/security/limits.conf?

 

I remember having explored that route, but I didn't succeed and can't presently remember why. I should take another look.   :Confused: 

 *gentoo_ram wrote:*   

> If the high-CPU processes are already "nice"-d to 19, then why are you worried? The scheduler will yield CPU time to other higher-priority processes if they need it. Also, you might want to explore installing the new 2.6.38 kernel. It has automatic support for cgroup scheduling which is supposed to help smooth CPU time across different users.
> 
> Why would you want the CPU to be forced "idle" if there are processes waiting for more CPU time?

 

I am worried because it does act up! The process is set in the webapp to the lowest rank possible (nice 19), but it still blocks the server sometimes! I am clearly not qualified enough to know where the problem is (the kernel, python, nice, or some other software running on the server...). I think it's because even if the process has the lowest priority, if it eats all the CPU, then other processes with higher priority must still wait at least a little while; and since the faulty process handles BIG files (it is a torrent downloader: torrentflux), the task can take long to finish (of course, I'm talking about micro-tasks of 10 ms... you see, I'm not qualified, and I suspect you'll tell me I'm wrong! Please be gentle and explain rather than just making fun of me!)

I have read around on the gentoo forum, in Pappy_mcfae's thread and elsewhere, that this new kernel is somewhat exciting, but I recently made an important decision: from now on I will only use stable releases of the kernel! Stable according to what people say and to packages.gentoo.org. Pappy claims the new kernel should be stable (explanations here ), but I won't use it for now!

 *madchaz wrote:*   

> If you have something that is that reniced and still manage to floor the box, you have a major bug.

So I think I have a bug!

If I didn't have such problems, I wouldn't be trying to solve them! And I do have these problems!   :Sad: 

----------

## madchaz

I think we found your problem. 

No matter how nice you make a torrent downloader, it WILL floor your machine because of the sheer number of network connections it opens. 

You need to limit how many concurrent connections the app can make per download, and limit the number of concurrent downloads it allows. Otherwise, even if you nice the process, you will still get floored by the TCP connections. 

As I said earlier, the issue has to be with the app if it floors the machine while niced to the max.

----------

## 22decembre

@madchaz: you know, that sounds right! Really! You're a master!

I never thought of such a thing! Oh! So I might try different values, or maybe limit the bandwidth? But I don't know how to work out such things (I know how to set up a network connection, but choosing the right settings for a network card... calculating a network flow...)

Thanks !

----------

## madchaz

 *22decembre wrote:*   

> @madchaz: you know, that sounds right! Really! You're a master!
> 
> I never thought of such a thing! Oh! So I might try different values, or maybe limit the bandwidth? But I don't know how to work out such things (I know how to set up a network connection, but choosing the right settings for a network card... calculating a network flow...)
> 
> Thanks !

 

Have a look at the script and see about limiting the number of concurrent downloads and the maximum number of allowed connections per download, so you get no more than 150-200 connections total (say, a max of 10 concurrent downloads with 15-20 connections each).

See what the load looks like, then adjust accordingly.
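One quick way to watch that number is to count established TCP connections straight from /proc/net/tcp (state 01 is ESTABLISHED); this is a sketch that works without netstat or ss installed, and you'd want to add /proc/net/tcp6 as well if the box does IPv6:

```shell
# Count established TCP sockets; field 4 of /proc/net/tcp is the
# connection state in hex, and 01 means ESTABLISHED.
count_established() {
    awk 'NR > 1 && $4 == "01"' /proc/net/tcp | wc -l
}
count_established
```

Run it while torrents are active; if the count sits far above the 150-200 suggested here, tighten the limits further.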

----------

## 22decembre

Solved by limiting the number of connections inside the torrentflux downloader.

----------

