Wednesday, June 9, 2010

QoS for VoIP

QoS gives special treatment to certain traffic at the expense of others. Using QoS in the network has several advantages:
  • Prioritizes access to resources, so that critical traffic can be served.
  • Allows good management of network resources.
  • Allows service to be tailored to network needs.
  • Allows mission-critical applications to share the network with other data.

People sometimes think that there is no need for QoS strategies in a LAN. However, switch ports can experience congestion because of port speed mismatches, many people trying to access the switch backbone, and many people trying to send traffic to the same switch port (such as a server port).


QoS Actions

Three QoS strategies are commonly implemented on interfaces where traffic enters the switch:
  • Classification—Distinguishing one type of traffic from another. After traffic is classified, other actions can be performed on it. Some classification methods include access lists, ingress interface, and NBAR.
  • Marking—At Layer 2, placing an 802.1p class of service (CoS) value within the 802.1Q tag. At Layer 3, setting IP Precedence or Differentiated Services Code Point (DSCP) values on the classified traffic.
  • Policing—Determining whether a specific type of traffic is within preset bandwidth levels. If so, it is usually allowed and might be marked; if not, it is typically marked down or dropped. CAR and class-based policing are examples of policing techniques (a simplified token-bucket sketch follows this list).
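
A rough Python sketch may help make these ingress actions concrete. The classifier and single-rate token-bucket policer below only model the behavior described in the list, they are not switch code; the RTP port range and the numbers in the usage line are illustrative assumptions, not switch defaults.

import time

class TokenBucketPolicer:
    """Toy single-rate policer: conforming packets pass, excess packets do not.

    CAR and class-based policing work on the same token-bucket principle,
    although real policers can also re-mark exceeding traffic instead of
    dropping it.
    """

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # token refill rate, in bytes per second
        self.burst = burst_bytes        # bucket depth (allowed burst)
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def conforms(self, packet_len):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len   # conform: spend tokens and forward
            return True
        return False                    # exceed: drop or mark down

def classify(udp_dst_port):
    """Toy classifier: treat the usual RTP port range as voice, the rest as data."""
    return "voice" if 16384 <= udp_dst_port <= 32767 else "data"

# Example use (numbers are made up): police data traffic to 1 Mbps with a 16-KB burst.
data_policer = TokenBucketPolicer(rate_bps=1_000_000, burst_bytes=16_000)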

Other QoS techniques are typically used on outbound interfaces:
  • Traffic shaping and conditioning—Attempts to send traffic out in a steady stream at a specified rate. Buffers traffic that goes above that rate and sends it when there is less traffic on the line.
  • Queuing—After traffic is classified and marked, one way it can be given special treatment is to be put into different queues on the interface to be sent out at different rates and times. Some examples include priority queuing, weighted fair queuing, and custom queuing. The default queuing method for a switch port is FIFO.
  • Dropping—Normally, interface queues accept packets until they are full and then drop everything that arrives after that. You can instead implement prioritized dropping, so that less important packets are dropped before more important ones, such as with Weighted Random Early Detection (WRED); a simplified sketch follows this list.
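
The WRED idea from the last bullet fits in a few lines of Python. This is a simplified model (real WRED works on an averaged queue depth and uses per-precedence or per-DSCP profiles), and the thresholds shown are made up.

import random

def wred_drop(avg_queue_depth, min_th, max_th, max_drop_prob=0.1):
    """Simplified WRED decision: never drop below min_th, always (tail) drop
    at or above max_th, and drop with a linearly rising probability in between."""
    if avg_queue_depth < min_th:
        return False                    # queue is short: enqueue the packet
    if avg_queue_depth >= max_th:
        return True                     # queue is long: drop the packet
    slope = (avg_queue_depth - min_th) / (max_th - min_th)
    return random.random() < max_drop_prob * slope

# The "weighted" part: less important traffic gets lower thresholds, so it
# starts being dropped earlier. These per-precedence thresholds are made up.
thresholds = {0: (10, 30), 3: (20, 40), 5: (30, 40)}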

DSCP Values

Differentiated services provide levels of service based on the value of certain bits in the IP header, or in the Layer 2 ISL header or 802.1Q tag. Each hop along the way must be configured to treat the marked traffic the way you want; this is called per-hop behavior (PHB).

In the Layer 3 IP header, you use the 8-bit ToS field. You can set either IP Precedence using the top 3 bits or Differentiated Services Code Points (DSCP) using the top 6 bits of the field. The bottom 2 bits are set aside for congestion notification. The default DSCP value is zero, which corresponds to best-effort delivery.
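
As a quick illustration, assuming nothing beyond the bit layout just described, the snippet below splits a ToS byte into its IP Precedence, DSCP, and congestion-notification fields:

def decode_tos(tos_byte):
    """Split the 8-bit ToS field: top 3 bits are IP Precedence, top 6 bits are
    the DSCP, and the bottom 2 bits are for congestion notification (ECN)."""
    return {
        "precedence": tos_byte >> 5,
        "dscp": tos_byte >> 2,
        "ecn": tos_byte & 0b11,
    }

# The default DSCP of 0 means the whole ToS byte is 0: best-effort delivery.
assert decode_tos(0) == {"precedence": 0, "dscp": 0, "ecn": 0}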

The six DSCP bits can be broken down into two sections: the first 3 bits define the DiffServ Assured Forwarding (AF) class, and the next 2 bits define the drop probability within that class. The sixth bit is 0 and unused. AF classes 1–4 are defined, and within each class a drop value of 1 is low probability, 2 is medium, and 3 is high (meaning that traffic is more likely to be dropped if there is congestion). These values are shown in Table 7-1. Each hop still needs to be configured for how to treat each AF class.

Table 7-1  DSCP Assured Forwarding (AF) Values

              Low Drop         Medium Drop      High Drop
AF Class 1    AF11 (DSCP 10)   AF12 (DSCP 12)   AF13 (DSCP 14)
AF Class 2    AF21 (DSCP 18)   AF22 (DSCP 20)   AF23 (DSCP 22)
AF Class 3    AF31 (DSCP 26)   AF32 (DSCP 28)   AF33 (DSCP 30)
AF Class 4    AF41 (DSCP 34)   AF42 (DSCP 36)   AF43 (DSCP 38)

Voice bearer traffic uses the Expedited Forwarding (EF) value of DSCP 46 to give it higher priority within the network.
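
Because an AF codepoint is just 3 class bits, 2 drop bits, and a trailing 0, the DSCP value of AFxy works out to 8x + 2y, and EF is binary 101110 (decimal 46). A few lines of Python confirm the values in Table 7-1:

def af_dscp(af_class, drop_precedence):
    """DSCP value of AFxy: class in the top 3 bits, drop precedence in the
    next 2 bits, final bit 0, i.e. decimal 8 * class + 2 * drop."""
    return (af_class << 3) | (drop_precedence << 1)

assert af_dscp(1, 1) == 10          # AF11
assert af_dscp(3, 2) == 28          # AF32
assert af_dscp(4, 3) == 38          # AF43

EF = 0b101110                       # Expedited Forwarding, DSCP 46 (voice bearer)
assert EF == 46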


Trust Boundaries

When IP traffic comes in already marked, the switch has some options about how to handle it. It can:
  • Trust the DSCP value in the incoming packet, if present.
  • Trust the IP Precedence value in the incoming packet, if present.
  • Trust the CoS value in the incoming frame, if present.
  • Classify the traffic based on an IP access control list or a MAC address access control list (a simplified sketch of this decision process follows the list).
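
The Python sketch below models that decision purely as an illustration. The "multiply by 8" expansion of CoS and IP Precedence into a DSCP follows the class-selector idea, but real switches use configurable mapping tables, so treat every detail here as an assumption.

def internal_dscp(trust_state, pkt):
    """Very simplified model of the ingress trust decision. `pkt` is a dict of
    whatever markings the frame arrived with."""
    if trust_state == "dscp" and "dscp" in pkt:
        return pkt["dscp"]              # trust the Layer 3 DSCP as-is
    if trust_state == "precedence" and "precedence" in pkt:
        return pkt["precedence"] << 3   # expand 3 precedence bits into a DSCP
    if trust_state == "cos" and "cos" in pkt:
        return pkt["cos"] << 3          # derive a DSCP from the 802.1p CoS
    return 0                            # untrusted: re-classify (e.g. by ACL)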

Mark traffic for QoS as close to the source as possible. If the source is an IP telephone, it can mark its own traffic. If not, the building access module switch can do the marking. If those are not under your control, you might need to mark at the distribution layer. Classifying and marking slows traffic flow, so do not do it at the core. All devices along the path should then be configured to trust the marking and provide a level of service based on it. The place where trusted marking is done is called the trust boundary.
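
As an aside, marking at the source can be as simple as an application setting the ToS byte on its own socket. The Python lines below sketch that idea using EF for voice; IP_TOS support and behavior vary by operating system, and the address and port are placeholders.

import socket

# DSCP sits in the top 6 bits of the ToS byte, so EF (46) becomes 46 << 2 = 184.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)   # mark outgoing packets
sock.sendto(b"rtp-payload", ("192.0.2.10", 16384))          # placeholder address/port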
