So we have a Cisco phone system here where I work. It’s an all-Cisco shop: 4900s at the core, 2960s at the access layer. We are running Call Manager 9 and using a mix of Cisco 7945s, 7965s, and 8841s (3 so far). The Call Manager system is fully virtualized, with 1 publisher and 1 subscriber node for CUCM and the same for Unity Messaging. External phone calls for our two locations go out through a 2951 and a 2911 via T1 PRI.

When I started about a year ago, we were on version 9.1.1.xxxxx. We were running out of phones to issue, and had 2 boxes of brand new 8841s that had been purchased but were not supported by our version of Call Manager. I downloaded the latest version, 9.1.2.15900, and worked with TAC to install it. I did the CUCM install one night after work, which took about 4.5 hours, then rebooted both servers and made sure telephony services worked. The following weekend I did the same with Unity. That update took overnight to install, and the reboot of each server took over 40 minutes. The upgrade process seemed really slow for what Cisco calls a “minor” upgrade.

Everything appeared to be working. About a month after the updates, we started noticing random “glitching” when listening to voicemail (both greetings on other people’s mailboxes and messages in our own). We didn’t think much of it at first. Then end users started to complain. It has gotten to the point where the problem is widespread, and now there are issues with external calls and (very rarely) internal calls as well.

I’ve got a network monitor running and none of our network links are getting saturated. I’ve run RTMT, and resources on the Call Manager and Unity Messaging servers are not pegging out. Where should I look from here? Has anybody else seen this?


The “concept” is difficult for some people to grasp, but Call Manager isn’t actually involved during the call at all. It sets the call up and tears it down, but while the call is in progress it plays no role. Instead, the phone talks directly to the other phone, or to a gateway to reach a PRI channel, via an RTP stream of UDP packets that carries the voice as data. So voice break-up and drop-outs are a network problem.
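You can see this from the gateway side as well. On the 2951, for example, something along these lines (exact output varies by IOS version) lists the active calls and shows the RTP peer as the phone’s own IP address rather than Call Manager’s:

show voice call summary
show call active voice brief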

Is QoS configured on all your switches and routers?
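On the 2960s you can check quickly from the CLI; a rough sketch, assuming Gi0/1 is one of your phone-facing ports:

show mls qos
show mls qos interface GigabitEthernet0/1
show mls qos maps cos-dscp

The first command tells you whether QoS is globally enabled, and the interface output shows what (if anything) the port is trusting.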


That explains a lot of the confusion I had with this system! Also, even though RTMT said all was OK, I’m seeing that the voicemail server (Unity) is pegged at 95-100% CPU according to vCenter. I’m going to try to restart it this evening after hours.
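Before the restart I want to see which process is actually eating the CPU from the platform CLI; assuming these commands exist on our 9.x build, something like:

show process load
show process using-most cpu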

QoS is configured on most of them, but I’ll do some more digging into the switches and routers.

RTMT is fancy, but sometimes the simpler things can be more revealing. Most phones, like the 7945, collect a lot of useful statistics while a call is in progress, under Status → Call Statistics. Pay particular attention to jitter and packet loss during calls you consider GOOD quality, and compare them to calls you consider POOR.

Having links that aren’t saturated is fine, but if the network doesn’t treat voice traffic as a priority, then the voice stream gets handled like any other kind of traffic and packets will be delayed or even dropped.
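You can usually see this on the access switches even when overall utilization looks low; a quick check, again assuming Gi0/1 as a sample phone port:

show mls qos interface GigabitEthernet0/1 statistics
show interfaces GigabitEthernet0/1 | include output drops

Non-zero drop counters on the queue that carries voice are a strong hint the queueing/buffer configuration needs attention.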

Many people throw gigabit switching ($$$) at the problem and often find it does them no good. More likely the problem is that QoS isn’t configured, or isn’t configured properly, somewhere along the way.

We have gigabit switching everywhere, and much of our core is 10GbE. I’ll look into the call statistics you were talking about and see what I can find out.

Our QoS settings are somewhat disjointed and not applied uniformly across the network. I’m wondering: should I just remove all of the current QoS config and use Cisco’s AutoQoS function? It seems like it should work, since we have a Cisco phone system.

That’s kind of up to you… I prefer to “hand craft” my QoS configurations, but then we are doing security video and video conferencing as well, so we have a lot of different types of traffic to consider. As a result, I can’t really speak to how good the decisions Cisco’s AutoQoS makes are.
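If you do try it, AutoQoS on the 2960s is applied per interface and generates the global maps, queue settings, and trust configuration for you. A minimal sketch, assuming Gi0/1 is a phone-facing access port (and definitely review what it produces before rolling it out everywhere):

interface GigabitEthernet0/1
 auto qos voip cisco-phone

Afterwards, show auto qos interface GigabitEthernet0/1 will display everything it generated, which also makes a handy comparison against a hand-built config.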

We do video conferencing and surveillance as well, but on a separate VLAN.

Here’s a dump of our global QoS settings on our switches:

mls qos map policed-dscp 24 26 46 to 0
mls qos map cos-dscp 0 8 16 26 32 46 48 56
mls qos srr-queue output cos-map queue 1 threshold 3 5
mls qos srr-queue output cos-map queue 2 threshold 3 3 6 7
mls qos srr-queue output cos-map queue 3 threshold 3 2 4
mls qos srr-queue output cos-map queue 4 threshold 2 1
mls qos srr-queue output cos-map queue 4 threshold 3 0
mls qos srr-queue output dscp-map queue 1 threshold 3 40 41 42 43 44 45 46 47
mls qos srr-queue output dscp-map queue 2 threshold 3 24 25 26 27 28 29 30 31
mls qos srr-queue output dscp-map queue 2 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue output dscp-map queue 2 threshold 3 56 57 58 59 60 61 62 63
mls qos srr-queue output dscp-map queue 3 threshold 3 16 17 18 19 20 21 22 23
mls qos srr-queue output dscp-map queue 3 threshold 3 32 33 34 35 36 37 38 39
mls qos srr-queue output dscp-map queue 4 threshold 1 8
mls qos srr-queue output dscp-map queue 4 threshold 2 9 10 11 12 13 14 15
mls qos srr-queue output dscp-map queue 4 threshold 3 0 1 2 3 4 5 6 7
mls qos queue-set output 1 threshold 1 150 150 100 150
mls qos queue-set output 1 threshold 2 120 120 100 235
mls qos queue-set output 1 threshold 3 40 70 100 275
mls qos queue-set output 1 threshold 4 40 70 100 240
mls qos queue-set output 1 buffers 20 20 20 40
mls qos

And per port:

switchport voice vlan 19
switchport priority extend cos 1
srr-queue bandwidth share 10 20 20 50
srr-queue bandwidth shape 10 20 0 0
priority-queue out
mls qos cos 1
mls qos trust device cisco-phone
mls qos trust cos

Thanks for the help!

-Matt

Some tweaking is definitely in order. Consider:

[attached image: QoSTable.PNG]

and mls qos map policed-dscp 24 26 46 to 0, which is not what we want to do: that line remarks out-of-profile call signaling (DSCP 24/CS3 and 26/AF31) and voice (DSCP 46/EF) traffic down to best effort (DSCP 0).
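As a starting point, assuming you’d rather have out-of-profile traffic keep its markings than get knocked down to best effort, the default map comes back with:

no mls qos map policed-dscp

From there I’d also make sure the inter-switch and inter-site links trust DSCP (mls qos trust dscp on the uplinks) so the EF marking on the voice stream survives end to end.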