Probing Diagnostics – Use Case 1

Reclaiming the “P” Word

In our last post, Making the “P” Word G-Rated, we questioned the authority of monitoring tool vendors that celebrate their lack of probe technology. We also set the stage for four upcoming use cases that highlight where probes bring value to UC diagnostics. In this post, we’ll discuss the first use case: Real-Time Voice Quality.

Information Is Only As Valuable As the Source

Most UC monitoring tools rely on data from the UC platform they are monitoring. They capture the platform’s own data and present it in a way that attempts to help enterprises and service providers better monitor their calls. Although re-framing data from the UC platform can have value, relying on the platform as the sole data source does not tell the entire story. It’s a bit like asking an organization to police itself; there are inevitably gaps and blind spots.

Diagnosing Voice Quality – In Real Time

By observing every packet in a UC conversation and capturing network call metrics, probes track key measures of call quality such as MOS, jitter, latency, and packet loss. Probes track all of these measurements in real time throughout the duration of the call. Most UC platforms neither capture nor pass on this level of real-time detail, so a probe is the only way to gather it.
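To make the idea concrete, here is a minimal sketch of how a per-interval voice-quality score can be estimated from measured packet loss and delay. This is not Nectar’s implementation or any specific probe’s algorithm; it assumes a simplified ITU-T G.107 E-model with illustrative constants for a G.711-style codec.

```python
# Minimal sketch: estimating an interval MOS from packet loss and one-way delay.
# NOTE: simplified E-model (ITU-T G.107) with illustrative constants; not the
# scoring method of any particular probe or vendor.

def r_factor(packet_loss_pct: float, one_way_delay_ms: float) -> float:
    """Simplified R-factor: a default base minus impairments for delay (Id)
    and effective packet loss (Ie-eff)."""
    r0 = 93.2  # default base R-factor from G.107

    # Delay impairment: grows slowly, then faster past roughly 177 ms.
    d = one_way_delay_ms
    i_d = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)

    # Loss impairment; Bpl is codec-dependent (assumed value for illustration).
    bpl = 25.1
    ie_eff = 95.0 * packet_loss_pct / (packet_loss_pct + bpl)

    return r0 - i_d - ie_eff

def mos_from_r(r: float) -> float:
    """Map an R-factor to an estimated MOS (standard G.107 mapping)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

# Example: score one 10-second interval with 1% loss and 80 ms one-way delay.
print(round(mos_from_r(r_factor(packet_loss_pct=1.0, one_way_delay_ms=80.0)), 2))
```

A probe that samples loss, jitter, and delay on every interval can apply a calculation like this continuously, producing a quality timeline rather than a single post-call number.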

The Rest of the Story of a Poor Call

Real-time QoS tracking allows an enterprise or its service provider to fully understand the users’ experience. It provides a more exact MOS measurement than post-call average reporting, shows how much of the call was impacted, and shows whether the impact occurred at the beginning, in the middle, at the end, or intermittently throughout the call. Without probes, the IT pro is left trying to extrapolate details from an incomplete, vague picture of the call. An example of why this matters is shown in the figure below:

Although Session A and Session B have the same average MOS of 3.77, the user experiences are very different:

  • Session A:
    • Near the beginning of the call, there was a period of poor audio quality.
    • After the initial period of poor audio quality, the conversation continued for 8 minutes without an issue.
    • When the users ended the call, it is unlikely the momentary quality issue at the beginning impacted their perception of its quality.
  • Session B:
    • The period of poor audio quality occurred at the end of the call and may have caused the users to terminate the call.
    • The users are likely to perceive the call as poor quality and be dissatisfied with their experience.

Looking only at the average MOS reported at the end of each call, support personnel could not differentiate between these two scenarios and the very different user experiences behind them. Ironically, a MOS of 3.77 is generally considered “fair,” so neither call would have triggered an alert in a system that only gathers post-call averages.
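A small, hypothetical illustration of the point: the per-interval values below are invented, but they show how two calls can share an identical average MOS while degrading at very different points in the conversation.

```python
# Hypothetical per-interval MOS samples (e.g., one value per 10-second slice).
# The numbers are invented for illustration; both series average to 3.77.

session_a = [2.05, 2.05, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2]  # dip at the start
session_b = [4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 2.05, 2.05]  # dip at the end

for name, samples in (("Session A", session_a), ("Session B", session_b)):
    avg = sum(samples) / len(samples)
    worst_interval = samples.index(min(samples)) + 1
    print(f"{name}: average MOS {avg:.2f}, worst interval #{worst_interval}")

# Both print an average of 3.77; only the interval-level data reveals whether
# the impairment hit the start of the call or the end of it.
```

An alerting system fed only the post-call average treats both sessions identically; one fed the interval timeline can flag Session B, where the degradation likely ended the call.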

When users call with a bad experience, do you want to rely on the rough overview of the call the UC platform presents, or on the detailed, real-time view painted by an effectively placed UC diagnostics probe?

Nectar can help your company acquire modern UC solutions. Contact us today to learn more about our platform.