Article : Ensuring That Technology Enhances Customer Service - Risks & Solutions
Advances in self-service customer care technology such as touch-tone, voice response, and now natural language speech recognition provide the opportunity to automate many customer care functions for both cost savings and improved caller satisfaction. Yet with the loss of the human factor provided by the traditional customer service representative, these new technology options require more thorough testing than ever to ensure the application provides a positive caller experience.
Earlier generation automated systems often provided less than ideal caller impressions. Unfortunately applications of this type continue to be built either because managers are rewarded more for lowering expenses than for improving the caller experience, or because the new application was activated without sufficient testing.
The evidence is dramatic that good client care outweighs call center cost control in importance. For example, from a group of surveys Vocal Laboratories conducted on five different companies, we discovered that after a single outstanding call into customer service, callers were:
After a single poor contact experience, callers were:
What more compelling reasons can there be that good service is good business, and that improving user satisfaction should be a critical focus in the design of any self-service application?
DESIGN AND TESTING STEPS
Employees, the original design team, or a focus group are sometimes used to check over the new application.
However, these groups are poor choices for an accurate assessment of real world performance. Insiders familiar with company jargon or what the application is supposed to accomplish simply do not mimic the experience of the typical caller. As an alternative, some do a controlled release of the new application, perhaps sampling customers for their reactions to the service.
PROBLEMS WITH FOLLOW UP SURVEYS
Past callers resist being surveyed for a variety of reasons, including the imposition on their time, the belief that the company isn't sincere in asking, and the feeling that they have already demonstrated their loyalty simply by doing business with the company in the first place. A consequence is that only very happy or very angry callers have sufficient motivation to provide feedback. The combined result is sample bias, inaccurate survey results, and a false picture of overall caller satisfaction.
People also start to forget details almost immediately, and some will tend to make positive comments instead of truthful ones because they want to avoid having to defend their answers.
Then there is the issue of accuracy related to group size. For example, if there is an application problem that will affect just 1% of calls (which can still be a significant number of real customers when the application is fully deployed), 20 test calls will find it in only 2 of 10 cases. Even if such a small test call sample does catch an application glitch, the probability is high that the significance of the issue can't be assessed.
A larger test of 100 calls will still miss a 1% problem more than 1/3 of the time, and answers to a survey of 100 callers have a margin of error of plus or minus 10%.
Launching an application based on caller satisfaction results with a 20-point accuracy spread isn't very confidence inspiring. It takes 500 test calls from objective participants to find a 1% problem with 99.4% reliability, and 500 participants will drop the response margin of error to 4.5%. Thus the project manager responsible for introduction of the new application is confronted with a classic "Catch-22": he or she must risk activating an insufficiently tested system, then attempt to gather meaningful feedback from the very callers the application is meant to assist — and who resist providing that feedback.
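The sample-size figures above follow from two standard formulas: the chance that at least one of n independent test calls hits a problem affecting a fraction p of calls is 1 − (1 − p)^n, and a rule-of-thumb survey margin of error is roughly ±1/√n. A minimal sketch (function names are our own, not from any survey toolkit):

```python
import math

def detection_probability(defect_rate, n_calls):
    # Chance that at least one of n_calls independent test calls
    # hits a defect that occurs in defect_rate of all calls.
    return 1 - (1 - defect_rate) ** n_calls

def margin_of_error(n_responses):
    # Conservative rule-of-thumb margin of error (worst case p = 0.5,
    # ~95% confidence) for a survey of n_responses callers.
    return 1 / math.sqrt(n_responses)

for n in (20, 100, 500):
    print(n, round(detection_probability(0.01, n), 3),
          round(margin_of_error(n), 3))
```

Running this reproduces the article's numbers: 20 calls catch a 1% problem about 18% of the time, 100 calls about 63% with a ±10% survey error, and 500 calls about 99.3% with roughly ±4.5%.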
Our experience has found that regardless of the caliber of the design, and increasing with the complexity of the application, there will be user interface problems uncovered only when large numbers of typical callers try the application.
Replacing follow-up surveys with a large controlled study using test callers is the better answer. Real people without insider knowledge most accurately reflect typical caller experiences, and sample bias and survey participation resistance are minimized. Surveying these test callers immediately eliminates the memory loss problem and allows more penetrating questions to be asked. New applications can be tested as prototypes without risking actual customer attitudes. And automating the survey process makes the large surveys needed for low error margins cost effective to conduct.
TESTING IN-SERVICE APPLICATIONS
Such studies can be either one-shot examinations or an ongoing benchmark performance tool far more reliable than generic industry averages. The method is also of great value in testing traditional live agent call centers, both as a measure of caller satisfaction and as a tool to uncover which services might lend themselves to automation.
Commonly, clients with self-serve applications in place for some time report that call lengths and transfers to agents are increasing and no one can put their finger on why the application is no longer meeting caller needs as well. This "application drift" can be due to any of a number of internal or external changes, and a usability study designed to uncover tune up items will pinpoint the problem areas. These can usually be fixed with only minor adjustments.
CALLER SATISFACTION TESTING TO QUANTIFY GOOD SERVICE
A correlation between company revenue and quality client care can be established with a caller satisfaction study. By asking questions about purchase plans, caller attitudes, and brand loyalty before a contact and repeating the questions afterward, the opinion shifts due to good (or poor) client service applications can be measured and the revenue impact on the host company demonstrated.
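The before-and-after method above reduces to comparing the share of favorable answers across the two waves of questions. A minimal sketch, with entirely hypothetical response data (1 = favorable answer, 0 = unfavorable):

```python
def satisfaction_shift(before, after):
    # Percentage-point change in the share of favorable answers
    # (e.g. "do you plan to purchase from us again?") measured
    # before vs. after the customer service contact.
    pct = lambda answers: 100.0 * sum(answers) / len(answers)
    return pct(after) - pct(before)

# Hypothetical responses from ten test callers.
before = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]  # 60% intend to repurchase
after  = [1, 1, 1, 1, 0, 1, 1, 1, 1, 0]  # 80% after a good contact
print(satisfaction_shift(before, after))  # prints 20.0
```

As the sample-size discussion earlier makes clear, a shift measured on ten callers is illustrative only; hundreds of participants are needed before such a shift can be attributed to the application rather than to sampling noise.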
A quality study and assessment of true user satisfaction requires:
Using technology to reduce operating costs and to improve service are not mutually exclusive goals. A design that prioritizes cost over caller satisfaction will likely alienate callers and cause lost business. A well designed self-serve customer care application, on the other hand, will help solidify brand loyalty, retain customers, lower costs, and enhance company revenues.
Published: Monday, October 13, 2003