# QoS Expert needed!!!! HTB and tc command stuff......

## johnny_martins00

Hi! I want to try out some QoS rules on my VPN server, but I don't know where to start...   :Question:   :Question:   :Question:   I want to use HTB with classes where I can reserve a set amount of bandwidth for each client, but at the same time reserve some for a control channel. I also want to ensure a minimal response time....

Sorry for my bad English; believe me, my Portuguese is better   :Very Happy: 

Does anyone have an idea where to start??

Thanks, regards

----------

## feld

The OpenWrt project has some nice pre-configured QoS rules that you could probably borrow from, or learn from.

----------

## EPrime

The first step is identifying and classifying packets, that is, assigning them a specific priority. If you're on a moderately recent kernel, the best way to do this is with iptables, like so:

```
# SSH
iptables -t mangle -A OUTPUT -p tcp --dport 22 -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -p tcp --dport 22 -j RETURN

# ICMP (ping)
iptables -t mangle -A OUTPUT -p icmp -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -p icmp -j RETURN
```

Note the "--set-mark 1": this assigns the arbitrary number 1 to those packets. We can use this later on to filter packets into our queues. Also, if you don't have a firewall already, you should grab and use FireHOL. It's plain amazing how little configuration you need to write to set up a proper firewall, and it has support for packet marking built in. I'm not sure whether newer versions also do QoS, as I'm still using my good old tc script, which looks something like this:

```
#!/bin/bash

CEIL=250
INET=eth1

# Setting up classes and qdiscs
tc qdisc add dev ${INET} root handle 1: htb default 789
tc class add dev ${INET} parent 1: classid 1:1 htb rate ${CEIL}kbit ceil ${CEIL}kbit burst 24k cburst 24k
tc class add dev ${INET} parent 1:1 classid 1:123 htb rate 30kbit ceil 200kbit prio 0 burst 8k cburst 8k
tc class add dev ${INET} parent 1:1 classid 1:456 htb rate 100kbit ceil 200kbit prio 1 burst 8k cburst 8k
tc class add dev ${INET} parent 1:1 classid 1:789 htb rate 120kbit ceil 200kbit prio 2 burst 8k cburst 8k
tc qdisc add dev ${INET} parent 1:123 handle 123: pfifo limit 1000
tc qdisc add dev ${INET} parent 1:456 handle 456: sfq perturb 10
tc qdisc add dev ${INET} parent 1:789 handle 789: sfq perturb 10

# Marked packets (handle value is the iptables/netfilter mark value)
tc filter add dev ${INET} parent 1: protocol ip prio 1 handle 1 fw classid 1:123
tc filter add dev ${INET} parent 1: protocol ip prio 2 handle 2 fw classid 1:123
tc filter add dev ${INET} parent 1: protocol ip prio 3 handle 3 fw classid 1:123
tc filter add dev ${INET} parent 1: protocol ip prio 4 handle 4 fw classid 1:456
tc filter add dev ${INET} parent 1: protocol ip prio 5 handle 5 fw classid 1:456
tc filter add dev ${INET} parent 1: protocol ip prio 6 handle 6 fw classid 1:456
tc filter add dev ${INET} parent 1: protocol ip prio 7 handle 7 fw classid 1:789
tc filter add dev ${INET} parent 1: protocol ip prio 8 handle 8 fw classid 1:789
tc filter add dev ${INET} parent 1: protocol ip prio 9 handle 9 fw classid 1:789

# Create the ingress qdisc for policing incoming traffic
tc qdisc add dev ${INET} handle ffff: ingress

# filter *everything* into it (0.0.0.0/0), drop whatever comes in too fast
# (careful: in tc, "kbps" means kilobytes/s; use "kbit" for kilobits/s)
tc filter add dev ${INET} parent ffff: protocol ip prio 50 u32 \
   match ip src 0.0.0.0/0 police rate 500kbps burst 32k drop flowid :1
```

You'll need to edit CEIL and INET to match your actual bandwidth and internet/external interface. Also note that the three rate values should add up to CEIL (30 + 100 + 120 = 250 here), or you'd be granting a total that doesn't match the available bandwidth.
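If you tweak the numbers, a throwaway check like this (with the rates from the script above hard-coded) keeps you honest:

```
#!/bin/sh
# The per-class HTB rates from the script above; they should sum to CEIL.
CEIL=250
total=0
for r in 30 100 120; do
  total=$((total + r))
done
if [ "$total" -eq "$CEIL" ]; then
  echo "rates OK: ${total}kbit == ${CEIL}kbit"
else
  echo "rates mismatch: ${total}kbit != ${CEIL}kbit"
fi
```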

What it does is create a single HTB queue with 3 sub-channels, called 123, 456 and 789 respectively. The "handle X" part is used to filter packets into the correct queue based on the mark we set with iptables, which means I can use mark values 1 through 9 even though I only have 3 queues (I figured this would let me create more queues if needed, but I never got around to that). The high-priority queue uses a FIFO scheduler, whereas the other two use SFQ.
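As an aside, since the nine filter lines follow a fixed pattern (marks 1-3 go to 1:123, 4-6 to 1:456, 7-9 to 1:789), you could also generate them with a small loop instead of typing them out; a sketch that just prints the commands so you can inspect them first:

```
#!/bin/sh
INET=eth1
for mark in 1 2 3 4 5 6 7 8 9; do
  case $mark in
    1|2|3) class=1:123 ;;   # high priority
    4|5|6) class=1:456 ;;   # normal
    *)     class=1:789 ;;   # low / default
  esac
  # echo the command instead of running it, so nothing is applied blindly
  echo "tc filter add dev ${INET} parent 1: protocol ip prio ${mark} handle ${mark} fw classid ${class}"
done
```

Pipe the output through `sh` once you're happy with it.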

There's also ingress shaping, though there you really can't do much except drop packets if the interface starts to get overloaded. Again, you'll need to adjust this to your actual bandwidth.

I hope this will be enough to get you started - have fun  :Very Happy: 

----------

## johnny_martins00

Whoa, thanks for your post, but it's just too much information at once for me   :Surprised:  ... I'll go step by step.

 *Quote:*   

> 
> 
> ....
> 
> identifying and classifying packages,....
> ...

 

My rules are for IPsec traffic. From what I understand, assuming a client wants his traffic coming from port X with the best service I can offer, I have to classify the packets going out to port X with set-mark 1??? (Thanks for the firewall idea, but I don't think I need it   :Very Happy:  )

About your script: I'm going to test it on a LAN (100 Mbit) and have some questions... (a lot of questions    :Confused:  )

```
......

# Setting up classes and qdiscs
tc qdisc add dev ${INET} root handle 1: htb default 789
tc class add dev ${INET} parent 1: classid 1:1 htb rate ${CEIL}kbit ceil ${CEIL}kbit burst 24k cburst 24k
tc class add dev ${INET} parent 1:1 classid 1:123 htb rate 30kbit ceil 200kbit prio 0 burst 8k cburst 8k
tc class add dev ${INET} parent 1:1 classid 1:456 htb rate 100kbit ceil 200kbit prio 1 burst 8k cburst 8k
tc class add dev ${INET} parent 1:1 classid 1:789 htb rate 120kbit ceil 200kbit prio 2 burst 8k cburst 8k
tc qdisc add dev ${INET} parent 1:123 handle 123: pfifo limit 1000
tc qdisc add dev ${INET} parent 1:456 handle 456: sfq perturb 10
tc qdisc add dev ${INET} parent 1:789 handle 789: sfq perturb 10
```

From what I understood, these rules create 3 classes for 3 clients??? Each with a set bandwidth??? What are the pfifo and sfq options???

The next question is:

```
.....

# Marked packets (handle value is the iptables/netfilter mark value)

........
```

What do those commands do??

Thanks for your reply, and sorry for all the questions; I may seem a bit stupid, but QoS stuff is new to me    :Confused:   :Confused: 

----------

## EPrime

I'll answer some of your questions, and then try a more detailed walk-through of the QoS script  :Smile: 

 *Quote:*   

> 
> 
> My rules are for IPsec traffic. From what I understand, assuming a client wants his traffic coming from port X with the best service I can offer, I have to classify the packets going out to port X with set-mark 1??? (Thanks for the firewall idea, but I don't think I need it   )
> 
> 

 

Basically it works like this: packets either arrive from the network or are generated by local applications. All of these packets are handled by the Linux network/firewall subsystem, which you configure using the iptables command. Network traffic passes through what iptables calls chains, and all outgoing packets that are about to leave the current PC pass through the OUTPUT chain. Since we're doing QoS, we need to look at outgoing packets, and therefore apply "marks" to packets in the OUTPUT chain. This is what the iptables rules above do (for SSH and ping traffic). Obviously, you'll need to come up with additional rules to match the traffic you want to shape. The SSH rules demonstrate how to mark packets by port number.
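If the traffic in question is IPsec (which is what I assume you mean), then matching a single TCP port won't catch the encrypted packets; as a sketch, rules along these lines should mark the key exchange and the ESP traffic itself (the mark value 1 is just the highest-priority value from my scheme):

```
# IKE key exchange and NAT traversal (UDP 500 / 4500)
iptables -t mangle -A OUTPUT -p udp --dport 500 -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -p udp --dport 4500 -j MARK --set-mark 1

# the encrypted ESP packets themselves (IP protocol 50)
iptables -t mangle -A OUTPUT -p esp -j MARK --set-mark 1
```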

The mark values we use are just arbitrary numbers. If you're a developer, think of them as an enumeration of values, like so:

```
enum PriorityMark { High = 1, Normal = 2, Low = 3 }
```

I use 9 values to mark packets, which means I can differentiate between 9 different priorities later on: 1 is the highest priority and 9 the lowest.

 *Quote:*   

> From what I understood, these rules create 3 classes for 3 clients??? Each with a set bandwidth??? What are the pfifo and sfq options???

 

Think of it like a set of rivers: you have multiple streams of water/traffic flowing along separate smaller rivers, yet eventually they all flow into the ocean (the LAN) at the same spot (your outgoing network interface). You can construct your own river network, and it can be as nested and complex as you need. For every river you must decide on a scheduling algorithm that determines how the packets flow along that river. This is the htb, pfifo and sfq stuff. pfifo is a plain packet-limited first-in-first-out queue (beyond its packet limit it does no scheduling at all). SFQ stands for Stochastic Fairness Queueing, and it tries to give every flow a fair slice of the bandwidth. Now, at every intersection of two rivers you get a new river that carries the combined traffic, and you must assign a scheduler to this one as well.

In my case I have 3 individual rivers, which all merge at a single intersection and flow together from there into the ocean (i.e. onto the LAN). The merged river uses HTB scheduling, whereas the 3 smaller ones use the other two scheduling algorithms. Using HTB as the scheduler for the merged river allows water from all 3 smaller rivers to flow, and hence ensures that no channel is entirely starved.

Lets have a look at the QoS script again:

 *Quote:*   

> 
> 
> tc qdisc add dev ${INET} root handle 1: htb default 789
> 
> tc class add dev ${INET} root classid 1:1 htb rate ${CEIL}kbit ceil ${CEIL}kbit burst 24k cburst 24k
> ...

 

This creates the outer queue (the merged river) and defines some properties of the underlying network link. HTB is able to scale the rate/ceil values, but I've always preferred not to rely on that and to put in my actual bandwidth (the rate and ceil parameters). Ceil is your absolute maximum. For a LAN it might be wise to set the rate somewhat lower, so that the link isn't saturated.

 *Quote:*   

> 
> 
> tc class add dev ${INET} parent 1:1 classid 1:123 htb rate 30kbit ceil 200kbit prio 0 burst 8k cburst 8k
> 
> tc class add dev ${INET} parent 1:1 classid 1:456 htb rate 100kbit ceil 200kbit prio 1 burst 8k cburst 8k
> ...

 

Think of this as the intersection where the 3 inner rivers meet up. We need to control the flow from each river into the merged river, which is what this does. We define the rates nominally allocated to each river, along with a maximum flow (ceil) for the river. HTB allows one channel to borrow available bandwidth from other channels, and it uses the prio property to decide who's first in line to borrow. Above, the high-priority channel only has 30kbit reserved, but can borrow up to 170kbit extra when it needs it, at the cost of the other channels.
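The borrowing headroom is simply ceil minus rate; for the high-priority class above:

```
#!/bin/sh
RATE_KBIT=30    # guaranteed rate of class 1:123
CEIL_KBIT=200   # its ceiling
BORROW_KBIT=$((CEIL_KBIT - RATE_KBIT))
echo "class 1:123 can borrow up to ${BORROW_KBIT}kbit on top of its ${RATE_KBIT}kbit"
```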

 *Quote:*   

> 
> 
> tc qdisc add dev ${INET} parent 1:123 handle 123: pfifo limit 1000
> 
> tc qdisc add dev ${INET} parent 1:456 handle 456: sfq perturb 10
> ...

 

This creates the inner queues (the 3 smaller rivers). This is where we will add our packets (as in, where the water in the rivers comes from). We can set some additional properties here, including the inner scheduler used for the traffic before it reaches the river intersection mentioned above. Note that these three rivers are named "123", "456" and "789". These names were picked to match how I use the mark values later, which makes it easy to remember which marks go into which rivers/queues.

At this point all we've done is to create the queues we'll be using for traffic shaping. Now we need to identify the packets and assign them to the queues above:

 *Quote:*   

> 
> 
> # Marked packets (handle value is the iptables/netfilter mark value)
> 
> tc filter add dev ${INET} parent 1: protocol ip prio 1 handle 1 fw classid 1:123
> ...

 

Since we've already used iptables to mark packets, we can use those marks to identify the packets we want. There is one rule for every mark value in use, which in my case means 9 rules to catch all packets. Note that unclassified packets also end up in a queue (the 789 one, as set by the "default 789" on the first tc line of the script).
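To check that packets really land where you expect, tc can show per-class counters; something like this (eth1 being whatever your INET is) while you generate some test traffic:

```
# per-class byte/packet counters; the "Sent" numbers should grow
# for the class that matches the traffic you're testing
tc -s class show dev eth1

# and the attached fw filters
tc filter show dev eth1 parent 1:
```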

You can also use tc directly for packet filtering, but tc's filter syntax is much more obscure than iptables', so it's much nicer to do the pre-filtering with iptables and just match on the mark values.

 *Quote:*   

> 
> 
> # Create the ingress qdisc for policing/shaping incoming traffic
> 
> tc qdisc add dev ${INET} handle ffff: ingress
> ...

 

This bit creates a separate queue for incoming traffic. Since we have no control over incoming traffic, all we can do is drop packets if too many are arriving. This helps avoid congestion and keeps traffic flowing smoothly. Set the rate to 95% of your capacity (or perhaps lower; you may need to experiment to see what works best) and that's it.
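For the arithmetic, a throwaway helper like this (the 512kbit link speed is just a made-up example) gives the police rate:

```
#!/bin/sh
LINK_KBIT=512                          # your real downstream capacity here
POLICE_KBIT=$((LINK_KBIT * 95 / 100))  # police at 95% of it
echo "police rate ${POLICE_KBIT}kbit"
```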

Hope this helped   :Laughing: 

----------

## johnny_martins00

Thanks for your excellent reply   :Cool:   :Laughing:  . Just a few more questions: Can I set the priority rules by (IP, port) instead of just the port?? How do I set a minimal response time??? Using the priority of the packets?? Is there any way to have a control channel? I read one HTB howto that talked about a control channel, but I didn't understand it too well :S:S:S 

Thk, Regards

----------

## EPrime

 *Quote:*   

> Can i set the rules of priority by (ip,port) instead of just the Port??

 

You can do anything that iptables can, which covers pretty much anything you can imagine. For the specifics of the syntax I'd recommend the iptables man page, as there are far too many options for me to list them all.

I do have a few more "marking rules" from my own iptables script though - perhaps they can be useful as inspiration:

```
# mark packets by their TCP flags
iptables -t mangle -A PREROUTING -p tcp --tcp-flags SYN,RST,ACK SYN -j MARK --set-mark 1

# mark traffic by destination IP
iptables -t mangle -A OUTPUT -d 10.10.10.10 -j MARK --set-mark 3

# mark packets smaller than 100 bytes
iptables -t mangle -A OUTPUT -m length --length 0:100 -j MARK --set-mark 3

# mark packets by outgoing interface, source address and destination port range
iptables -t mangle -A OUTPUT -p tcp -o eth2 -s 10.10.3.0/24 --dport 4660:4664 -j MARK --set-mark 9
```

As you can see, anything is possible - but you'll need to learn what iptables can do to achieve it  :Wink: 

 *Quote:*   

> How do i set a minimal response time??? using the priority of the packages??

 

You can never set the time, since this isn't "QoS" in the sense of guaranteed response times but rather in the sense of prioritized delivery. You control delivery by constructing the nested queues, assigning schedulers to each of them, and putting packets into the right queue. Constructing the queues and figuring out where to place packets depends on your needs, and is something you'll have to work out yourself  :Smile: 

 *Quote:*   

> is there any way of have a control channel? i red one HTB howto and they talked about a control channel but didnt understand it to well :S:S:S

 

I've never heard of it, but then again I'm no expert on this either. I figured it out a couple of years back and haven't really touched it since  :Cool: 

You'll probably want to check out some of these sites for more information:

http://lartc.org/lartc.html

http://www.iptables.org/documentation/index.html#documentation-howto

And a small reminder: using FireHOL (http://firehol.sourceforge.net/) will likely save you some work, since writing marking rules in the language/syntax it provides is much easier than using iptables directly (although the latter is probably more flexible).

Good luck  :Wink: 

----------

## johnny_martins00

Thanks for your help, but I'm thinking of a scheme different from yours: a root class + 2 classes, and under one of those subclasses another 2 subclasses. Can I do this??? It's like this model:

[IMG]http://img294.imageshack.us/img294/2050/modeloqoswb9.th.jpg[/IMG]

Regards

----------

## EPrime

Sure you can: just create the classes you want. You get the tree structure automatically by specifying the correct parent for each child class.
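A sketch of such a tree, with made-up interface, rates and class ids (the child rates should again sum to their parent's rate):

```
tc qdisc add dev eth1 root handle 1: htb default 20
tc class add dev eth1 parent 1:   classid 1:1  htb rate 250kbit ceil 250kbit  # root class
tc class add dev eth1 parent 1:1  classid 1:10 htb rate 150kbit ceil 250kbit  # subclass A (has children)
tc class add dev eth1 parent 1:1  classid 1:20 htb rate 100kbit ceil 250kbit  # subclass B (default leaf)
tc class add dev eth1 parent 1:10 classid 1:11 htb rate 100kbit ceil 250kbit  # A's first child
tc class add dev eth1 parent 1:10 classid 1:12 htb rate 50kbit  ceil 250kbit  # A's second child
```

Packets only ever queue in leaf classes (1:11, 1:12, 1:20 here), so point your filters at those.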

----------

## johnny_martins00

Hi! Which modules did you put in your kernel config?? I'm trying to build HTB and SFQ as modules, but every time my kernel boots it gives me a kernel panic when it tries to get an IP address... 

Another thing :

```
localhost linux # tc qdisc add dev eth0 root handle 1: htb default 60
RTNETLINK answers: Invalid argument
```

Is it due to missing kernel settings??
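One quick way to check which of the relevant options your running kernel has, assuming it exposes /proc/config.gz (otherwise grep the .config in your kernel source tree):

```
# these options provide htb, sfq, the ingress qdisc and the fw/u32 filters
zcat /proc/config.gz | grep -E 'CONFIG_NET_SCH_(HTB|SFQ|INGRESS)|CONFIG_NET_CLS_(FW|U32)'
```

If HTB is built as a module, `modprobe sch_htb` before running the tc commands (or let the kernel autoload it).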

Thk

----------

## EPrime

You'll need to enable traffic shaping and the schedulers used, either as modules or built-in.

I've put up my kernel config so you can check against what I'm using.

----------

