Ventana Research’s benchmark research into agent performance management shows that most companies recognize the vital role contact center agents play in creating good customer experiences and, in turn, good business outcomes. It also shows that only the most mature companies have put in place processes and metrics that encourage the behaviors that deliver those outcomes. Furthermore, the research shows that many companies are held back from adopting more customer-related metrics because they lack performance management tools that can help them create such metrics; instead, most rely heavily on spreadsheets. I was therefore encouraged to hear, during a recent roundtable discussion sponsored by Merced Systems, from two customers that have used the Merced Performance Suite to institute a more rigorous, metrics-driven approach to improving agent performance.
The speakers indicated that a company cannot significantly improve contact center performance solely by deploying new technology. Rather, it can reach its ultimate goals only by changing processes and people – mainly by training and coaching agents. Doing this effectively requires a deep analysis of agent-by-agent performance and a system that points managers and supervisors to areas that need improving. Without this individualized analysis, training and coaching tend toward a “one size fits all” approach that doesn’t address individual agents’ needs. Companies therefore need to adopt a system that suggests which calls evaluators should listen to so they can quickly identify areas of weakness – for example, some agents may perform poorly during the greeting, or fail to give callers the required compliance information.
Another important message from the discussion is that companies must review their key performance metrics regularly and modify them to better reflect the organization’s business goals and desired outcomes. This often is not done: our benchmark research into contact center analytics shows that the number-one performance metric in the contact center is average handling time, which doesn’t connect directly to the metric that matters most to executives – customer satisfaction scores.
Both speakers were adamant that managing to averages doesn’t work and said that companies would do better to focus on the best and worst performers: the first to set goals that others should aspire to, and the second to assess where the most training and coaching is needed. It is also important to manage to trends; that is, a single snapshot of a metric is of limited use, but implementing training and coaching to reverse a negative trend or sustain an improvement is likely to be effective.
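To make the point above concrete, here is a minimal, hypothetical sketch of the difference between managing to an average and managing to performers and trends. The agent names and weekly handling-time figures are invented for illustration and are not from the research or the roundtable; the slope calculation is a plain least-squares fit used as a simple stand-in for trend analysis.

```python
# Hypothetical sketch: rank agents and flag worsening trends instead of
# managing to a single center-wide average. All data below is illustrative.
from statistics import mean

# Weekly average handling time (seconds) per agent, oldest week first.
aht = {
    "agent_a": [310, 300, 290, 280],   # improving (AHT falling)
    "agent_b": [250, 255, 265, 280],   # fast on average, but worsening
    "agent_c": [400, 405, 398, 402],   # flat, consistently slow
}

def trend(series):
    """Least-squares slope of the series: positive means AHT is rising."""
    n = len(series)
    x_bar, y_bar = mean(range(n)), mean(series)
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(series))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

# Best performer sets the aspirational benchmark; worst gets the most coaching.
ranked = sorted(aht, key=lambda a: mean(aht[a]))
best, worst = ranked[0], ranked[-1]

# Trend view: agent_b looks good on averages but is the one drifting upward,
# so coaching targeted by trend catches what the average alone would miss.
worsening = [a for a in aht if trend(aht[a]) > 0]
```

The design point is the last line: the center-wide average would hide agent_b’s deterioration entirely, while a per-agent trend surfaces it as the coaching priority.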
This led to a discussion of key experiences with the performance management application. First and foremost, it needs to be widely adopted, which one of the speakers admitted did require a little “encouragement” for some reluctant supervisors and agents. The key to adoption is that everyone trusts the outcomes and that they are consistent. That way users don’t feel Big Brother is watching for ways to take away performance-related pay, but instead see that supervisors are honestly looking for genuine ways to improve performance. Sharing performance information with everyone, subject to some confidentiality restrictions, can produce an environment where everyone is trying to improve their own performance.
Finally, the speakers insisted that any program must be a continuous improvement process. Despite expressing pride in their processes and agents, they acknowledged room for improvement, which can be brought about only by more targeted coaching. One company thus implemented a closed-loop, metrics-driven quality monitoring process that uses analytics to identify areas where agents need to improve, targeted coaching to address those issues and trend analysis to ensure that the coaching is effective.
Do you use any form of analytics to drive your quality monitoring or performance management processes? If so, please tell us about them, and come collaborate with me and discuss your efforts.
Richard Snow – VP & Research Director