Basic RouterOS Traffic Queueing

In this post, we will look at a basic traffic queuing configuration. Our example network consists of two subnets: one for wired computers (staff and patron) and one for wireless computers (accessed via a wireless AP plugged into port 5).

Our wired computers will all be within the 192.168.88.0/24 network on port 2, and our wireless network will use 192.168.188.0/24 on port 5.

Here is a rundown on the port 5 setup:

  1. change it from a slave of master port 2 to an independent port (Master Port: none)
  2. assign it an IP address of 192.168.188.1.
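
A minimal sketch of those two steps on the CLI might look like this (I am assuming the port 5 interface is named ether5-wireless to match the mangle rules later in this post, and that your RouterOS version handles switching through the master-port parameter; adjust the names to your own setup):

[sourcecode language=”plain”]
/interface ethernet set ether5-wireless master-port=none
/ip address add address=192.168.188.1/24 interface=ether5-wireless comment="Wireless subnet"
[/sourcecode]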

We are going to assume that the wired network is divided by IP addresses such that:

  • Patron machines should all be within the range of 192.168.88.100 – 192.168.88.150
  • Staff machines should all be within the range of 192.168.88.151 – 192.168.88.200
This could be accomplished by running a DHCP service in one range and manually assigning static IPs in the other, by VLAN configuration, etc.
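
As a rough sketch of the DHCP half of that idea (the pool and server names are my own placeholders, and I am assuming port 2 carries the DHCP service with the router itself at 192.168.88.1):

[sourcecode language=”plain”]
/ip pool add name=patron-pool ranges=192.168.88.100-192.168.88.150
/ip dhcp-server add name=patron-dhcp interface=ether2-master-local address-pool=patron-pool disabled=no
/ip dhcp-server network add address=192.168.88.0/24 gateway=192.168.88.1 dns-server=192.168.88.1
[/sourcecode]

Staff machines would then be given static addresses in the 192.168.88.151 – 192.168.88.200 range, outside of the pool.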

Before we begin, let’s talk about the CLI in RouterOS. We can access a command line interface either by opening a terminal within Winbox or by Telnet / SSH directly (assuming those services are running and allowed). Here is a screenshot of a terminal in Winbox:

I prefer operating through SSH using PuTTY as my client. We can of course use both the Winbox GUI and PuTTY at the same time, which might be preferred when just starting out. If you are logging in via SSH and the default out-of-the-box values are still loaded, then the username is admin and the password field is blank.

I will give example code that can be copied / pasted into a terminal screen from here on. If you move your mouse over to the top right corner in these examples, you should hopefully get a little menu to view/copy/print to make life easier 🙂

First, let's look at the IP > Firewall > Address Lists area. We want to create two lists, one each for wired staff and patrons:

[sourcecode language=”plain”]
/ip firewall address-list
add address=192.168.88.151-192.168.88.200 comment="Staff IP Addresses" disabled=no list=staff
add address=192.168.88.100-192.168.88.150 comment="Patron IP Addresses" disabled=no list=patron
[/sourcecode]
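
You can confirm that both lists were created with a quick print:

[sourcecode language=”plain”]
/ip firewall address-list print
[/sourcecode]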

Pretty straightforward so far. Now, let's look into the IP > Firewall > Mangle area. This is where things get interesting. We need to accomplish three things:

  1. discern the difference between patron and staff traffic within the same shared subnet (192.168.88.0/24) on port 2
  2. discern the difference between port 2 traffic (wired machines) and port 5 traffic (wireless machines)
  3. mark packets as being either patron, staff, or wireless

First, we need to differentiate between wired patron and staff connections based on their IP ranges given in the lists we created earlier. We need to create entries for both up and down traffic as well. Let’s mark the staff connections first:

[sourcecode language=”plain”]
/ip firewall mangle
add action=mark-connection chain=forward comment="Staff Connections Up" disabled=no new-connection-mark=\
staff_conn_up passthrough=yes src-address-list=staff
add action=mark-connection chain=forward comment="Staff Connections Down" disabled=no dst-address-list=\
staff new-connection-mark=staff_conn_down passthrough=yes
[/sourcecode]

What we have done is tell the router to mark any connection involving the staff IP addresses (192.168.88.151 to 192.168.88.200) as either ‘staff_conn_up’ or ‘staff_conn_down’, depending on whether the traffic is coming from those addresses (src-address-list) or going to them (dst-address-list).

Now, let’s mark the patron traffic in the same manner, using the patron list for src and dst and naming the connection marks accordingly:

[sourcecode language=”plain”]
/ip firewall mangle
add action=mark-connection chain=forward comment="Patron Connections Up" disabled=no new-connection-mark=\
patron_conn_up passthrough=yes src-address-list=patron
add action=mark-connection chain=forward comment="Patron Connections Down" disabled=no dst-address-list=\
patron new-connection-mark=patron_conn_down passthrough=yes
[/sourcecode]

Next, we need to mark the actual packets. We create mangle entries that are based off of the connection marks created above for each of the four flows (patron up, patron down, staff up, staff down):

[sourcecode language=”plain”]
/ip firewall mangle
add action=mark-packet chain=forward comment="Staff Traffic Up" connection-mark=staff_conn_up disabled=no \
new-packet-mark=staff_traffic_up passthrough=yes
add action=mark-packet chain=forward comment="Staff Traffic Down" connection-mark=staff_conn_down disabled=\
no new-packet-mark=staff_traffic_down passthrough=yes
add action=mark-packet chain=forward comment="Patron Traffic Up" connection-mark=patron_conn_up disabled=no \
new-packet-mark=patron_traffic_up passthrough=yes
add action=mark-packet chain=forward comment="Patron Traffic Down" connection-mark=patron_conn_down \
disabled=no new-packet-mark=patron_traffic_down passthrough=yes
[/sourcecode]

But wait a minute! We forgot about our wireless network on port 5! This one is easier: we already know that all of its traffic flows in and out of port 5, and we don’t need to differentiate between any IP ranges. So we don’t need any connection mark entries, just packet mark entries:

[sourcecode language=”plain”]
/ip firewall mangle
add action=mark-packet chain=forward comment="Wireless Up" disabled=no in-interface=ether5-wireless \
new-packet-mark=wireless_up out-interface=ether1-gateway passthrough=no
add action=mark-packet chain=forward comment="Wireless Down" disabled=no in-interface=ether1-gateway \
new-packet-mark=wireless_down out-interface=ether5-wireless passthrough=no
[/sourcecode]

Notice that for the wireless rules we used the in-interface and out-interface options, rather than address lists, to denote the direction of packet flow!
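
Once some traffic is flowing, you can sanity-check these mangle rules by watching their byte and packet counters climb:

[sourcecode language=”plain”]
/ip firewall mangle print stats
[/sourcecode]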

OK, we got our connections and packets bagged & tagged. Let’s head over to the Queue area and put that to use. We have two basic forms of queuing at our disposal: simple queues and the queue tree. We are going with the queue tree to have more granular control. Within the queue tree we will use the pcq queue type (Per Connection Queuing).

Edit 7/17/12: I glossed over the fact that we selected pcq as our algorithm above because I had yet to find a good, simple explanation for someone new to wrap their head around. Today, I ran across a thread in the Mikrotik forums wherein one of the forum gurus, fewi, gave a simple and concise explanation of pcq in comparison to other available algorithms. It’s about as good an explanation as I have seen so far:

“PCQ is often used because it has one huge advantage over the other available queuing systems: it can dynamically create sub-streams. If you have 10 end users and want to grant a certain bandwidth to each for upstream and downstream traffic, you don’t need to specify 10 leaves, one for each user – you just create one PCQ leaf that automatically creates the 10 substreams based on the IP address of the flow. Better yet, if you add an 11th user he automatically gets treated the same. Since in many situations PCQ greatly simplifies QoS configuration, it’s the queueing discipline most widely chosen. However, if you don’t have a situation in which you need dynamic substreams one of the other disciplines may be a better fit.”

The forum thread can be found here: http://forum.mikrotik.com/viewtopic.php?f=9&t=36896
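
To make that point concrete: if you wanted to give every individual user a hard 256k download cap without creating a queue per user, a single PCQ type with pcq-rate set would do it (this is just an illustration, not part of the configuration we are building here):

[sourcecode language=”plain”]
/queue type add name=per_user_256k kind=pcq pcq-rate=256k pcq-classifier=dst-address
[/sourcecode]

In our setup we leave pcq-rate at 0, so each group’s sub-streams simply share whatever their parent queue allows rather than being individually capped.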

First, we create our queue type entries. We’ll do the staff download queue first:

[sourcecode language=”plain”]
/queue type
add kind=pcq name=staff_down pcq-burst-rate=0 pcq-burst-threshold=0 pcq-burst-time=10s pcq-classifier=\
dst-address pcq-dst-address-mask=32 pcq-dst-address6-mask=64 pcq-limit=50 pcq-rate=0 \
pcq-src-address-mask=32 pcq-src-address6-mask=64 pcq-total-limit=2000
[/sourcecode]

We have left all of the parameters at their defaults. The only things we have really specified are that we want a pcq type queue named ‘staff_down’ and that it classifies sub-streams by destination address (pcq-classifier=dst-address), since this is a download queue. Following this template, we create entries for the other five queues that we need, using src-address as the classifier for the upload queues:

[sourcecode language=”plain”]
/queue type
add kind=pcq name=staff_up pcq-burst-rate=0 pcq-burst-threshold=0 pcq-burst-time=10s pcq-classifier=\
src-address pcq-dst-address-mask=32 pcq-dst-address6-mask=64 pcq-limit=50 pcq-rate=0 \
pcq-src-address-mask=32 pcq-src-address6-mask=64 pcq-total-limit=2000
add kind=pcq name=patron_down pcq-burst-rate=0 pcq-burst-threshold=0 pcq-burst-time=10s pcq-classifier=\
dst-address pcq-dst-address-mask=32 pcq-dst-address6-mask=64 pcq-limit=50 pcq-rate=0 \
pcq-src-address-mask=32 pcq-src-address6-mask=64 pcq-total-limit=2000
add kind=pcq name=patron_up pcq-burst-rate=0 pcq-burst-threshold=0 pcq-burst-time=10s pcq-classifier=\
src-address pcq-dst-address-mask=32 pcq-dst-address6-mask=64 pcq-limit=50 pcq-rate=0 \
pcq-src-address-mask=32 pcq-src-address6-mask=64 pcq-total-limit=2000
add kind=pcq name=wireless_down pcq-burst-rate=0 pcq-burst-threshold=0 pcq-burst-time=10s pcq-classifier=\
dst-address pcq-dst-address-mask=32 pcq-dst-address6-mask=64 pcq-limit=50 pcq-rate=0 \
pcq-src-address-mask=32 pcq-src-address6-mask=64 pcq-total-limit=2000
add kind=pcq name=wireless_up pcq-burst-rate=0 pcq-burst-threshold=0 pcq-burst-time=10s pcq-classifier=\
src-address pcq-dst-address-mask=32 pcq-dst-address6-mask=64 pcq-limit=50 pcq-rate=0 \
pcq-src-address-mask=32 pcq-src-address6-mask=64 pcq-total-limit=2000
[/sourcecode]

So now we have six queue type entries: an up and a down queue for each of staff, patron, and wireless. Now we construct our queue tree. In this example, we want to control a 1.5Mb T1 line, giving more bandwidth to staff, less to patrons, and the least to wireless in situations where heavy usage occurs. First, we create two parent queues:

[sourcecode language=”plain”]
/queue tree
add burst-limit=0 burst-threshold=0 burst-time=0s disabled=no limit-at=0 max-limit=1500k name=parent_down \
packet-mark="" parent=global-out priority=8
add burst-limit=0 burst-threshold=0 burst-time=0s disabled=no limit-at=0 max-limit=1500k name=parent_up \
packet-mark="" parent=ether1-gateway priority=8
[/sourcecode]

Notice the parent parameter in these entries. For the upload side of things, we specify the actual interface, ether1-gateway. However, for the download side we specify global-out. This has to do with how the router handles packet flows, as can be seen in this diagram:

Now that we have our parent queues set up, we create the leaf or child nodes:

[sourcecode language=”plain”]
/queue tree
add burst-limit=0 burst-threshold=0 burst-time=0s disabled=no limit-at=400k max-limit=1200k name=\
patron_down_q packet-mark=patron_traffic_down parent=parent_down priority=8 queue=patron_down
add burst-limit=0 burst-threshold=0 burst-time=0s disabled=no limit-at=400k max-limit=1200k name=\
patron_up_q packet-mark=patron_traffic_up parent=parent_up priority=8 queue=patron_up
add burst-limit=0 burst-threshold=0 burst-time=0s disabled=no limit-at=600k max-limit=1200k name=\
staff_down_q packet-mark=staff_traffic_down parent=parent_down priority=6 queue=staff_down
add burst-limit=0 burst-threshold=0 burst-time=0s disabled=no limit-at=600k max-limit=1200k name=staff_up_q \
packet-mark=staff_traffic_up parent=parent_up priority=6 queue=staff_up
add burst-limit=0 burst-threshold=0 burst-time=0s disabled=no limit-at=200k max-limit=1200k name=\
wireless_down_q packet-mark=wireless_down parent=parent_down priority=8 queue=wireless_down
add burst-limit=0 burst-threshold=0 burst-time=0s disabled=no limit-at=200k max-limit=1200k name=\
wireless_up_q packet-mark=wireless_up parent=parent_up priority=8 queue=wireless_up
[/sourcecode]

In each entry, we see the culmination of our earlier setup coming into play: packet marks and queue types. We also set minimum and maximum rates for each leaf and attach it to the corresponding parent queue. Each queue has two rate limits:

  • CIR (Committed Information Rate) – (limit-at in RouterOS) worst case scenario: a flow will get this amount of traffic no matter what (assuming we can actually send that much data)
  • MIR (Maximal Information Rate) – (max-limit in RouterOS) best case scenario: the rate a flow can reach if the queue’s parent has spare bandwidth

Note that the limit-at values sum to 1200k (600k + 400k + 200k), which fits under the parent’s 1500k max-limit, so every group’s guarantee can actually be honored. We also left some overhead by setting each leaf’s max-limit to 1200k rather than the full parent max-limit.

Now, let’s explore what should happen as a result of our efforts. In the case of low bandwidth usage (i.e. we are not maxing out our pipe), any computer can use up to the maximum limit of 1200k if it is available. However, when heavy traffic occurs, the available bandwidth is split up such that all of the staff machines get a minimum of 600k to share among them, the patrons get 400k, and the wireless gets 200k. This is the minimum bandwidth available to each of our three groups under load. Each group’s share is in turn divided equally within the group, e.g. the 400k of patron bandwidth gets split evenly among however many patron computers are active.
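
To watch this happen under load, the queue tree statistics show the live rate of each leaf (the same numbers show up in Winbox under Queues):

[sourcecode language=”plain”]
/queue tree print stats
[/sourcecode]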

………

What we have not covered will probably fill another post or three. We have left out burst settings, priority settings, and have glossed over exactly how the bandwidth was divided up. The real mechanism working behind the scenes is the HTB (hierarchical token bucket) packet scheduler. Information on that can be found here: http://luxik.cdi.cz/~devik/qos/htb/manual/theory.htm

………

These are other sources I used:

Mikrotik Wiki