# [SOLVED] Need simple traffic control/shaping

## ArmorSuit

Hi all.

I've been reading up on traffic control and shaping; lots of docs are available on the net, but the topic is a bit too complex for me, though I'm sure I'll get the hang of it eventually.

What I need is very simple traffic control, but I don't know where to begin (qdisc configuration). I have a 10MBit pipe on my server, and I want basically two classes of traffic:

1. High priority ssh

2. Everything else (services: web, ftp, smtp, pop3)

But, with everything else, I want fair queuing. So if I have 10Mbit, and say 10 concurrent connections, I want each to have a max of 1Mbit allowed. If I had 100 concurrent connections, I want each to have a max of 100kbit allowed.

Is this possible and where do I begin? Is my reasoning correct, or am I approaching this from the wrong angle? Thanks.

*Last edited by ArmorSuit on Wed Dec 16, 2009 9:50 am; edited 1 time in total*

----------

## Hu

Expediting some traffic over everything else is possible.  You can ensure fair queuing within a type, which should guarantee that no one connection gets starved.  However, I am not sure you would even want to ensure equal splitting within a connection type.  What if an attacker opens 99 do-nothing connections in that protocol, and sends just enough to keep the connection alive?  Do you really want the server to use only 1/100 of its total capacity to serve a legitimate connection, and leave the rest of the link idle?  A scheme which provides fair queuing to outbound traffic would allow that one legitimate connection to rise to 100% of capacity if there was nothing else to be sent.

----------

## ArmorSuit

Thanks for your reply.

See, that's the part I can't fully grasp in tc/QoS. I don't really want to split traffic according to the number of connections, just their potentials. So if there are 99 connections open, I want their max throughput to be 1/99th of the total available bandwidth. But if one connection uses far less -- and as far as I understand it then uses fewer tokens than are available to it -- could/would the unused tokens be given to other connections? Is that possible?

While I can technically limit the bandwidth per connection in the webserver, that would result in unnecessary limiting on an idle server. I do want to allow clients with big pipes to download files as fast as possible, meaning lots of bandwidth on an idle server and a "fair" share on a loaded one.

----------

## boerKrelis

 *ArmorSuit wrote:*   

> the unused tokens could/would be given to other connections? Is that possible?

 

Yes, that's possible; the HTB scheduler does exactly that. Each class has a 'rate' (the guaranteed amount) and a 'ceil' (the ceiling it can borrow up to).
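
A minimal sketch of that borrowing behavior, assuming `eth0` and the 10Mbit pipe from the first post (class ids and the 2mbit figure are just illustrative):

```shell
# Root HTB qdisc; unclassified traffic goes to class 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20

# Parent class capped at the physical 10mbit pipe.
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit ceil 10mbit

# Child class: guaranteed 2mbit ('rate'), but allowed to borrow up to
# the full 10mbit ('ceil') whenever the link is otherwise idle.
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 2mbit ceil 10mbit
```

These commands need root and a real interface; `tc -s class show dev eth0` shows the borrowing in action.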

 *ArmorSuit wrote:*   

> But, with everything else, I want fair queuing. So if I have 10Mbit, and say 10 concurrent connections, I want each to have a max of 1Mbit allowed. If I had 100 concurrent connections, I want each to have a max of 100kbit allowed. 

 

TCP's congestion control algorithms already do that, sort of. There are a couple of algorithms you can compile into your kernel.
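
You can check which congestion control algorithms your kernel offers (the `/proc` paths are standard on Linux; the available list varies per build):

```shell
# Algorithms compiled into (or loaded as modules by) this kernel:
cat /proc/sys/net/ipv4/tcp_available_congestion_control

# The one currently in use:
cat /proc/sys/net/ipv4/tcp_congestion_control

# Switch at runtime, e.g. to cubic (needs root):
sysctl -w net.ipv4.tcp_congestion_control=cubic
```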

However, you may also want to have a look at flow classifiers. With those you can get fairness at the host level. Say there are two hosts downloading stuff from your server: they should get a 50/50 split even though one of them has 90 connections and the other just 10. Some (not the best) info here.
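
As a sketch of how that might look, assuming an HTB class 1:20 already exists on `eth0` (the handle and divisor are illustrative):

```shell
# SFQ leaf; its built-in hash mixes addresses and ports, i.e. per-connection.
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10

# Override the built-in hash with the 'flow' classifier so queueing is
# keyed on destination address only: for a server's outbound traffic,
# that means fairness per client host, not per connection.
tc filter add dev eth0 parent 20: protocol ip handle 1 \
    flow hash keys dst divisor 1024
```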

----------

## ArmorSuit

Flow classifiers, thanks!

It looks to me like SFQ is what I need, optionally combined with HTB to separate bandwidth per service. I found a very good article, http://www.opalsoft.net/qos/DS-25.htm, which explains the differences and uses of the various qdiscs quite well; bonus points for neat graphics.

SFQ seems best because it deals with "sessions", with hashes based on source and destination IP, so basically it should facilitate fair queuing between visitors, not between pure connections. As you say, one may open 4 connections and another 10, but the bandwidth will be spread evenly between the two, not between the 14, as long as their connections are hashed into their respective buckets.
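
For reference, attaching a plain SFQ is a one-liner, assuming `eth0` (note that stock SFQ hashes on ports as well as addresses, so its fairness is per flow; keying it per host takes the flow classifier mentioned above):

```shell
# Replace the root qdisc with SFQ: flows are hashed into buckets and
# dequeued round-robin; the hash is re-seeded every 10 seconds so
# colliding flows don't stay stuck sharing one bucket.
tc qdisc add dev eth0 root sfq perturb 10
```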

----------

## Hu

 *ArmorSuit wrote:*   

> So if there are 99 connections open, I want their max throughput to be 1/99th of total available bandwidth. So if one connection uses far less -- and as far as I understand it uses less tokens than available for it -- the unused tokens could/would be given to other connections? Is that possible?

 

I think we may have a terminology difference here.  To me, saying you want the max throughput to be 1/99th means that you want to set an absolute ceiling such that, irrespective of what the other 98 connections are doing, that one connection will never get more than 1/99th of the available bandwidth.  That is what I argued against in my earlier post, because it is wasteful.  Your later sentences convey that you want what I suggested: each connection gets a guaranteed 1/Nth of the total, with the potential to exceed that if sending only the guaranteed amount would leave the link idle.

It looks like you may have already figured out this latter part, but for the benefit of other readers: rate sets the guaranteed amount, and ceil sets an absolute ceiling.  The absolute ceiling can be useful if you want to emulate a slower connection than you have, or if you need to ensure an upstream device does not begin queuing packets on your behalf.
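
As a sketch of that last use of ceil, assuming a fast card behind the 10Mbit uplink (the 9500kbit figure is an illustrative margin):

```shell
# Shape everything to just under the uplink speed so any queue builds
# here, where we control it, instead of in the upstream device's buffer.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 9500kbit ceil 9500kbit
```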

----------

## ArmorSuit

 *Hu wrote:*   

> 
> 
> It looks like you may have already figured out this latter part, but for the benefit of other readers: rate sets the guaranteed amount, and ceil sets an absolute ceiling.  The absolute ceiling can be useful if you want to emulate a slower connection than you have, or if you need to ensure an upstream device does not begin queuing packets on your behalf.

 

It could be, yes, that I'm using the wrong terminology; after all, I'm totally new to QoS. The thing is, as far as I understand this, I can't set a guaranteed rate because I am limited by the absolute pipe width of 10Mbit. If I set the guaranteed rate to, say, 100k (HTB?), then that would guarantee 100 connections at 100k each. But what happens if I had 200 connections: do they split equally to 50k each, or do 100 of them get dropped in order to serve the guaranteed 100k?

As far as I understand SFQ, it round-robins among "sessions", each based on a hash including source and destination IPs, so in most cases representing individual users, and this is what I want. If one user uses a download manager with 10 parallel connections, and another with a big enough pipe uses one, I want the bandwidth split fairly between them; unless I'm understanding SFQ wrong, it would dequeue a packet for the first visitor, then one for the second, then one for the first again, and so on in round-robin fashion.

----------

## Hu

Traffic shaping will never drop a connection on its own, though I suppose a sufficiently limited connection might time out and die.  I have not looked at flow classifiers, but for basic per-protocol shaping, your first paragraph starts from a false assumption.  When doing per-protocol shaping, the traffic shaper assigns bandwidth to the qdisc, not to specific connections.  So if you gave a qdisc 100k, it would have 100k total, to be spent on however many or few connections had traffic assigned to that qdisc.  Absent some mechanism such as SFQ to ensure fairness, it might be possible for one connection to starve out the others.

Using SFQ is useful for your purpose, since it encourages the shaper to spread the bandwidth among the connections in a roughly fair fashion.

----------

## ArmorSuit

Yes, I understand now. An HTB qdisc guarantees a rate, but only to the packets filtered into that class in whatever fashion (u32 or iptables with marks). This means I can use it to separate priorities and the total bandwidth assigned to port-based services, and under each class have SFQ deal with individual sessions. As suggested here:

http://lartc.org/howto/lartc.qdisc.classful.html#AEN1072
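
Putting the thread together, a sketch of that design for `eth0` and the 10Mbit pipe (the class ids, rate split, and priorities are illustrative):

```shell
# Root HTB; anything unmatched lands in the bulk class 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit ceil 10mbit

# 1:10 -- ssh: small guaranteed slice, highest priority, may borrow.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbit ceil 10mbit prio 0
# 1:20 -- everything else: the rest of the pipe, lower priority.
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 9mbit ceil 10mbit prio 1

# SFQ on each leaf spreads that class's bandwidth among sessions.
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10

# u32 filter: outbound traffic from sshd (source port 22) -> 1:10.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip sport 22 0xffff flowid 1:10
```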

Thanks for your help, I understand it all better now. Now off to experimenting and finding optimum settings.

----------

