The IVR mindset is shifting. Originally, the essential goals of a company’s IVR were containment and automation. But now there’s a lot more an IVR needs to do to keep customers satisfied and allow companies to reap the business benefits of using an automated system.
Many times customers call into an IVR with the express intent of speaking to a live agent, and 39 percent of customers say that not being able to reach a real person through the IVR system is a top (and regular) frustration. Consumers now view the IVR as a gateway to the call center.
The IVR is increasingly being used as an escalation channel: consumers call in only after they have already tried, and failed, to find a solution on their own. So by the time they reach the IVR, they’re already frustrated and impatient. In this environment, the IVR’s purpose is shifting: to give the call center immediate information so that callers are routed to the right agent and call handle times are reduced.
With this new purpose in mind, call center metrics are more important than ever. We need to know whether agents use the information passed to them by the IVR, whether calls are being routed to the correct agent, and how long wait times and agent handle times run.
There are great tools to evaluate how callers get through the IVR, how many callers are self-served, and how many callers are routed correctly.
But here’s the issue: there’s a dead space between the IVR and the call center, and we currently can’t connect the dots between the two. Contact center managers need to be able to measure results – the IVR’s direct impact on the call center – and thus the cost of providing customer service.
With the IVR now playing the role of gatekeeper for the call center, the two should be more tightly integrated. This is something the industry will need to embrace as we enter the future of a phone-based customer experience.
The best strategy at this point in time is to treat the entire caller experience as a single call, because for the callers, it is.
Don’t stop tracking the call after it exits the IVR. Following the call into the call center allows companies to monitor effectiveness more closely – wait time, whether the right agent was reached, whether a resolution was reached, and so on. This knowledge allows you to focus on the areas of the IVR that have the most impact on your specific call center, and tailor the experience to meet your goals.
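As an illustrative sketch of what treating the IVR leg and the agent leg as a single call could look like, the snippet below stitches event logs from both systems into one record per call ID. The field names and event shapes are hypothetical, not any particular vendor’s schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallRecord:
    """One end-to-end record per caller, spanning the IVR and agent legs."""
    call_id: str
    ivr_intent: Optional[str] = None   # intent captured in the IVR
    ivr_contained: bool = False        # resolved without an agent?
    queue_wait_sec: int = 0
    agent_id: Optional[str] = None
    agent_handle_sec: int = 0
    resolved: bool = False

def merge_legs(ivr_events, agent_events):
    """Stitch IVR and agent events into one record, keyed by call ID."""
    calls = {}
    for e in ivr_events:
        rec = calls.setdefault(e["call_id"], CallRecord(e["call_id"]))
        rec.ivr_intent = e.get("intent", rec.ivr_intent)
        rec.ivr_contained = e.get("contained", rec.ivr_contained)
    for e in agent_events:
        rec = calls.setdefault(e["call_id"], CallRecord(e["call_id"]))
        rec.agent_id = e["agent_id"]
        rec.queue_wait_sec = e.get("wait_sec", 0)
        rec.agent_handle_sec = e.get("handle_sec", 0)
        rec.resolved = e.get("resolved", False)
    return calls
```

With records stitched this way, wait time, routing accuracy, and resolution can all be reported against the same call the customer actually experienced.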
I envision a future with a logging standard designed to log agent actions and how they affect the overall call result. That would mean a single repository for data: companies could receive full-bodied information with more meaningful insights into their customers’ actions and adjust the experience accordingly to save money and increase customer satisfaction.
To properly judge a company’s IVR, it’s essential that businesses understand critical metrics such as agent-to-agent transfers and agent handle time. There must be an ongoing industry discussion to address how we can best connect the IVR and the call center. Although this is in the future, there are things companies can be doing now to gain valuable insight into what’s happening in the gap between the call center and IVR.
Publish Date: May 24, 2016 5:00 AM
I’ve been a writer since I was very young. In fact, long before I wrote my first book I won an employee Valentine’s Day poetry contest while working for Harrah’s Casino in San Diego. I even remember winning a poetry contest in elementary school. The poem was about the peace that I felt as a child when walking through the forest, listening to nature. I have come a long way since then and my transition into becoming a non-fiction writer is something that, believe it or not, happened completely by accident.
It was 2011 and I had just moved to Hawaii to renew my sense of self and purpose when I wrote my first book, “Aloha Joe in Hawaii: ‘A guided journey of self-discovery and Hawaiian adventure’.” At the time, I was writing by hand with pen and paper, which I felt was easier than typing on a computer. It took me a full year to write my first book by hand and another full year to type it on a computer. With that much time invested, I believed that there had to be a better and more efficient way to write.
Soon after, I had just purchased a new computer and heard about this new technology called “voice recognition software.” I was really excited to hear about this and after doing some research, I settled on purchasing Dragon Naturally Speaking. It took me a little time to get the commands down and for the software program to adapt to my voice. To speed this process up, I would read a few books aloud with the software turned on. I still remember when I told my kids about my experience trying the software for the first time. Of course, they just laughed at me because apparently they had already been using this technology for several years at school.
Up until that point, I had been taking notes for a number of years on other topics that I wanted to write about, but since it had taken me so long to type the first book, I couldn’t imagine how long it would take for me to type up all of this new information. But after playing with Nuance’s Dragon Software, I decided to find out just how much faster I could produce my second book. The short answer is: three days.
When I gave the finished book to my editor she was pleasantly surprised. After going through everything that I had “composed” in those three days, she figured out that I had actually created enough material for not one, but TWO books! My second book, published in 2014, was titled: “Stories I Can’t Tell My Kids – Yet.” And the third, “Your Brain is the Key to the Universe: A Comprehensive Guide to Manifesting Your Ideal Reality and World Harmony,” was recently up for a Pulitzer Prize in the General Non-Fiction category this year. It is in this book that I reflected on many current issues and also wrote about my experience using Dragon software.
I really do owe Nuance a debt of gratitude for pushing the research needed to advance this field. I am a disabled Marine veteran and I have had many injuries in my life, including disabilities in my hands, fingers, wrists, and more. Dragon software has allowed me to realize my dreams of becoming a published author.
Joe Holt (aka Aloha Joe) is a Pulitzer Prize-nominated author, artist, photographer and life coach. He is the author of Aloha Joe in Hawaii: “A guided journey of self-discovery and Hawaiian adventure” (August 5, 2013), Stories I Can’t Tell My Kids – Yet (August 11, 2014), and Your Brain is the Key to the Universe (March 4, 2015). Holt’s forthcoming book, Godfather of Fisherman’s Wharf, is currently in production, and he has just finished his fifth book, a fictional children’s story entitled “Gordy the Ferret.” Both were developed using Dragon Naturally Speaking. Click here to learn more about Joe Holt.
Publish Date: May 23, 2016 5:00 AM
In my last post, I discussed how human agents and human assisted virtual agents (HAVAs) can work together when machine learning and artificial intelligence are applied to customer care systems. Now let’s take it a step further.
In machine learning you often need to compare, or “match,” things. For example, when you are looking for the right answer in a database, you compare the question to the possible answers stored there. If you want to sort intents into buckets (so-called clustering), you need to compare them with each other and see how similar they are. Many modern approaches do this by checking only whether words are present or absent, at face value and in any order – an approach quite intuitively called “bag of words.” If two sentences or texts are composed of roughly the same words, the intuition goes, they are probably similar and capture a similar meaning. This approach works surprisingly well for many tasks (classical Internet search relies on it), although it ignores that language is actually more than a bag of words: sentences have a structure and words have a meaning. Let’s look at these example sentences.
From the superficial bag-of-words perspective these look very different, although intuitively they capture a similar request or meaning, and customers would expect an HAVA to understand that. Purely statistical approaches solve this by observing (after looking at thousands and thousands of texts) that the words “oil” and “lubricant” often appear in similar contexts, and in that way they implicitly learn the meaning of a word by identifying it with the contexts it typically appears in.
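To make the bag-of-words comparison concrete, here is a minimal sketch in Python. The two sample sentences are hypothetical stand-ins in the spirit of the oil/lubricant examples discussed above, not the exact sentences from the text:

```python
from collections import Counter
import math

def bag_of_words(text):
    """Lowercased word counts, ignoring order -- the 'bag of words'."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity of two word-count bags: 0 = no shared words, 1 = identical."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two requests with the same meaning but almost no shared words score low,
# illustrating the limit of the purely superficial view.
s1 = bag_of_words("How do I change the oil in my motor")
s2 = bag_of_words("How to replace the engine lubricant")
```

Here `cosine_similarity(s1, s2)` comes out far below 1.0 even though both sentences ask for the same thing.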
However, there is a very old tradition within Computational Linguistics and Symbolic AI to capture aspects like structure and meaning of language more explicitly. For one, you try to capture the structure by assigning a syntax tree to a sentence, or an utterance. One class of such structures, so-called dependency trees, starts from the observation that the core of a sentence is the verb and the other words “depend” on the verb; similarly adjectives and other modifiers depend on the noun they are next to. Simplified dependency trees for (1) and (2) above could look like this:
And if you look at the parts circled in red, you can see that they have become similar in structure. So if only we knew that change/replace, oil/lubricant, and motor/engine mean the same – or at least similar – things, we would be there. In fact, many efforts have been made to capture such similarities: to sort words into buckets of similar meaning and organize those buckets into hierarchies of concepts. Not the first, but a well-known one, was Roget’s Thesaurus. Its modern, machine-readable equivalent is WordNet, a collection of 155,287 words mapped to 117,659 concepts (as of today!). And if we look at what it has to say about “engine,” we will see that it lists “motor” as a “sister” term to “engine.”
S: (n) engine (motor that converts thermal energy to mechanical work)
In WordNet lingo that means “engine” and “motor” are in the same “synset”; we could also say they represent the same concept. So if we now replace words by synsets in our two trees, they will become very similar, or even identical, in the relevant area. That way, measuring the similarity of text passages becomes a lot more precise (as we will see later).
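As a toy illustration of that synset-replacement step (the concept IDs below are made up for readability, not real WordNet synset names):

```python
# Toy lexicon mapping words to concept IDs, the way WordNet groups
# "engine" and "motor" into one synset. The IDs here are illustrative.
SYNSETS = {
    "change": "REPLACE.v.01", "replace": "REPLACE.v.01",
    "oil": "LUBRICANT.n.01",  "lubricant": "LUBRICANT.n.01",
    "motor": "ENGINE.n.01",   "engine": "ENGINE.n.01",
}

def normalize(tokens):
    """Replace each word by its synset ID where one is known."""
    return [SYNSETS.get(t.lower(), t.lower()) for t in tokens]
```

After normalization, "change the motor oil" and "replace the engine lubricant" collapse to the same concept sequence, so any similarity measure over them becomes far more precise.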
Now, the use of lexicons and syntactic structures will strike some people as a little old-school, pitting Symbolic Processing against Machine Learning.
But we at Nuance think differently: why not combine Machine Learning and symbolic processing? Enriching the raw data with syntactic and semantic information helps to turn mere “big data” (think of it as lots and lots of “bags of words”) into “Big Knowledge.” This can then be applied to HAVAs for a better customer interaction. We will explore what else this means for customer service in our third and last post of this series.
Publish Date: May 18, 2016 5:00 AM
Care team communication technology is key to physician efficiency and patient care; the amount of time physicians and nurses currently waste trying to coordinate care without these tools is staggering.
This is part of our series highlighting apps that power physicians with voice using the new Dragon Medical One cloud platform.
Care Thread is on a mission to eliminate miscommunication and medical errors in healthcare. They do this with a secure mobile communications platform for hospitals and health systems that allows clinicians to communicate securely and accurately about patient care in real time from any mobile device or web browser. The Care Thread platform, now integrated with Nuance Dragon® Medical speech recognition, is used by all types of clinically-trained professionals to better coordinate care across the continuum while improving the clinician’s experience and patient care.
Jonathon Dreyer: What challenges in the healthcare industry drove you to build Care Thread?
Nick Adams: We were compelled to reduce the sheer number of serious medical errors that directly result from miscommunication. Growing up in the healthcare industry, my co-founder and I witnessed the staggering amount of time physicians and nurses waste keeping track of information, playing phone tag and generally trying to coordinate care. We knew this had to be a contributing factor to the miscommunication.
JD: What inspires you when creating an app?
NA: We build our platform and communication application for all types of professionals who make up patient care teams. Part of our mission of eliminating miscommunication in healthcare is to build digital tools that actually improve the experience of being a clinician today. That is what drives us in everything we do.
JD: Why is it so hard for clinicians to communicate in healthcare?
NA: It’s not that clinicians are bad communicators, but rather that there is such a large amount of information to keep track of that there needed to be a better way to stay on top of it beyond secure text messaging apps and other disparate modes of communication. We realized communication technologies in healthcare are totally separated from EMR systems, so clinical care teams are stuck using old modalities that compound the challenge by wasting time and creating delays rather than fixing the issue.
JD: How does your app help enhance physician-to-patient communication?
NA: Care Thread saves physicians the time it takes to get in sync with care teams about each patient. By spending less time gathering information and coordinating care, physicians can spend more time providing care to patients, including directly communicating with them. Additionally, the platform can enable care team-to-patient secure communication, accurately show the patient who their care team is, and make the patient feel that their physicians and broader care teams are working together and are in sync.
JD: How will Nuance technology and the power of voice enhance Care Thread?
NA: By integrating Nuance Dragon Medical with Care Thread, physicians will have the anywhere, anytime ability to dictate communication messages, notes, forms, and even macro templates back into the EMR. All of the dictation is medically accurate, secure, and patient-specific.
JD: What is your vision for Care Thread in the next 5 years?
NA: We see the need for full EMR integration which will enable predictive communications that engage the right people at the right time so every patient is digitally managed. This includes analysis of unstructured text messaging and conversations of care per patient per disease state, to identify the presence (or lack thereof) of pertinent clinical discussion topics.
JD: What do you think the future of mobile health will look like?
NA: The future of mobile health will become predictive, enabling anywhere, anytime patient care that is both proactive and preventative because of the ability of mobile to reach everyone who has a smart device and needs healthcare engagement.
To learn more about Care Thread, please visit http://www.carethread.com/.
To learn more about Nuance Dragon Medical One, please visit www.nuance.com/dragonmedicalone.
Publish Date: May 18, 2016 5:00 AM
Last month, we talked about the reasons millennial employees are more environmentally aware and tech-savvy than other generations in the workplace. We also discussed ways companies can put those strengths to good use in an effort to advance green workplace initiatives. Of course, millennials can help inspire and lead the way, but truly achieving a green workplace requires the participation and commitment of every employee.
In the second half of this two-part blog series, we reveal five easy, practical tips that millennials can implement in order to help create a greener workplace.
By implementing a reliable document management solution and mobile connectivity, you can provide employees with the foundation for a greener workplace and better productivity. Think of the savings that can result from reducing paper, toner cartridges, and other waste materials – not to mention the time saved by eliminating manual processes. Don’t let your company’s inability to move processes into a digital format be what holds it back.
Publish Date: May 18, 2016 5:00 AM
In my last blog post, I explained how we use different types of Neural Networks for both ASR and NLU. We already touched upon DNNs, RNNs, and NeuroCRFs, and I did not even mention that we also use CNNs (Convolutional Neural Networks) for the “intent” discovery aspect of NLU. Does this sound confusing? Fortunately for end-users everywhere, you don’t have to worry about keeping all of the terminology and machine learning concepts straight – you just see the added benefits of increasingly accurate ASR and NLU.
Now, here is even better news: if you are a developer who wants to create a great app for the Internet of Things using speech technology (such as ASR and NLU), you no longer have to worry about the mechanics behind advanced concepts like machine learning. The reason is that we have done the heavy lifting for you. Through Nuance Mix, we are able to utilize our knowledge and expertise around neural networks of various types and how to apply them to specific tasks in order to create intuitive spoken interactions.
This new developer platform provides you with everything you need to quickly create, assess, and refine your own speech application ideas. Perhaps most importantly, it gives you an easy-to-use interface for setting up and maintaining your speech application’s ontology. What this really means is that you can define what the app is to be used for and provide your own sample utterances as the nucleus of training data. Once you’re past this stage, you can apply the machine learning training machinery with just the press of a button. Now that you have trained models unique to your app (which are essentially the neural networks we discussed earlier), you can deploy to a cloud-based runtime environment and have your app up and running. Because you don’t have to be an expert in machine learning to use Mix, my colleague Kenn Harper recently called it “the democratization of voice technology.”
By taking a lot of the hard work out of integrating speech into your app, we let you focus your creativity on the app you want to create – an area in which you are the expert. And a creative approach is especially important now, as more and more devices that can make sense of speech and natural language enter the IoT sphere. To help spark that creativity, we are holding a series of “hackathons” and similar events, addressing the needs of industrial users as well as enabling students to experiment and innovate with speech technology.
We recently partnered with DFKI (the German Research Center for Artificial Intelligence), which is located on campus at the University of Saarland, to host a hackathon of our own. Having been a proud stakeholder in DFKI since 2014, and understanding how DFKI can bring AI into German industry, we knew we would see some exciting projects. On the first day, we saw great participation by industrial partners, who learned first-hand how to use Mix from Mix Masters Nirvana Tikku and Samuel Dion-Girardeau. After a thorough workshop, the group gave it a try on their own, getting the chance to test out our web-based developer platform.
The second portion of this event was a student hackathon, from which my colleagues Christian Gollan and Hendrik Zender, both DFKI alumni, have just returned. Running from 5:00 PM Friday until 5:00 PM Saturday, the students engaged in a 24-hour coding spree to speech-enable devices using Nuance Mix and SIAM-dp (DFKI’s own dialog platform). Having seen university students create amazing championship-winning inventions such as Lisa the robot, we had high expectations. We weren’t disappointed, as every team came up with impressive solutions addressing existing problems or areas of need using speech, natural language, and DFKI’s multimodal dialog platform.
Overall, the event resulted in a number of captivating applications that simplify the interactions between people and technology. Especially notable were our prize-winning teams: in third place, a chatbot that could act as a personal assistant; in second place, a speech-enabled robot that could help children learn math; and, in first place, an intelligent home solution that let would-be houseguests leave a voicemail message when no one is home. For the announcement of the winning teams and the award ceremony, we were joined by Professor Wolfgang Wahlster, CEO and Scientific Director of DFKI. He congratulated the students on their excellent results and emphasized the importance of speech interfaces and artificial intelligence for the ongoing transformation of how people will interact with the technology that surrounds them. He also stressed the pivotal role that the collaboration between DFKI and Nuance plays in this transformation.
We agree and think this event gave students with an interest in speech technology the opportunity to learn and work with cutting-edge tools in a fun, yet challenging environment. Besides winning prizes, eating pizza and drinking a lot of coffee, everybody involved exemplified the ways in which tools such as Nuance Mix and SIAM-dp could very well help build the intelligent, interactive solutions of our future.
Publish Date: May 17, 2016 5:00 AM
Customer experience is a prime differentiator for many organizations. With many products and services becoming commoditized, the experience a company provides can set it apart. This was recently showcased in the Temkin Experience Ratings report, which highlighted companies and industries at the top and bottom of the customer experience spectrum and assessed their performance on three components: Success, Effort, and Emotion.
But this got me thinking: why should a company have to wait for a report to be released to see its customer experience ranking? So I identified six call center metrics that really matter in judging the effectiveness of your own customers’ experience, so you can track how you’re performing on an ongoing basis.
When we call a company to resolve an issue, we just want it fixed. That’s all we, as customers, care about: a successful resolution. The questions any organization needs to ask itself, then, are:
The ‘Success’ metrics that address these questions are ultimately the most critical areas of focus.
First Call Resolution (FCR): This is one of the most important metrics for any company. First call resolution (FCR) measures how well your company takes care of the customer on their first attempt to resolve an issue. It’s calculated as the number of calls resolved on first contact divided by all incoming calls.
Why it matters – FCR is important both as an indicator of external customer satisfaction and as an internal metric for the effectiveness of your company’s processes and technology. Get this wrong and customers must call in multiple times – putting a strain on their patience and your systems.
Containment: This is a surprisingly straightforward measurement. All call center executives want to improve the ability of their IVR to accurately and effectively answer customer questions without the caller having to reach a live agent – that is, to keep callers within the IVR, i.e., containment. Containment is measured as the number of incoming calls resolved within the IVR as a percentage of total inbound calls. If the IVR is poorly designed and confusing, customers will not progress and will instead “zero out” to a live agent. We’ve all been through that scenario.
Why it matters – Getting containment right keeps other metrics on track. Increasing the number of people who effectively self-serve increases their satisfaction and helps the company’s bottom line. Customers are happier, agents are happier due to decreased call volumes, and CFOs are happier due to decreased need for investments.
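Both of these “Success” metrics reduce to simple ratios over a call log. A minimal sketch, with illustrative field names:

```python
def first_call_resolution(calls):
    """FCR: calls resolved on the customer's first attempt / all incoming calls."""
    resolved_first = sum(1 for c in calls if c["resolved"] and c["attempt"] == 1)
    return resolved_first / len(calls)

def containment(calls):
    """Containment: calls fully handled within the IVR / all incoming calls."""
    return sum(1 for c in calls if c["contained"]) / len(calls)
```

For example, a log of four calls where two were resolved on the first attempt and one never left the IVR yields an FCR of 0.5 and containment of 0.25.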
Nobody wants to spend a ton of time dealing with issues with their bank, insurance company, or TV provider. If this becomes necessary, we want to minimize how much time we put into it. Our effort must be low. And in fact, research shows the lower the effort, the greater the loyalty and satisfaction a customer will show to a company. Consumers like to be delighted with minimal effort and reduced friction on the way to problem resolution.
Misroutes: Put simply, misroutes occur when a company’s IVR sends a caller to the incorrect destination. When someone calls a customer service line and ends up someplace they didn’t intend, it’s usually the work of a misroute. Misroutes occur for a variety of reasons, including outdated technology that incorrectly recognizes speech or confusing phone menus that force annoyed customers to ask for a live person.
Why it matters – Misroutes directly increase the effort required to close a query. Each stop along the way creates more work and extends the call. Key metrics eroded by misroutes include average handle time, containment, first contact resolution, and more. Plus, misroutes dramatically increase costs and irritate customers, decreasing satisfaction and driving churn.
Average handle time: Some calls seem to take forever, dragging on through button presses and repeated information. Looking at an aggregate view of all calls together allows a company to track the average handle time (AHT) – the length of time a customer is on the phone. This is a very popular call center metric and is traditionally measured from the moment the customer calls to the time they hang up, including hold times.
Why it matters – In addition to lowering handle times to improve customer satisfaction, AHT is a prime factor when deciding call center staffing levels. Knowing the typical duration of a call allows companies to accurately model the number of agents they’ll need and how best to balance workloads during peak hours.
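Computed over a call log, AHT is just an average whose total must include hold time. A sketch with illustrative field names:

```python
def average_handle_time(calls):
    """AHT in seconds, from answer to hang-up; hold time counts toward the total."""
    return sum(c["talk_sec"] + c["hold_sec"] for c in calls) / len(calls)
```

For instance, one five-minute call with a minute on hold plus one three-minute call with no hold average out to 270 seconds.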
We live in a world driven by feelings. Consumers want “Likes” on their Facebook posts. They enjoy videos showing the good in people. They are quick to rave – or rant – on social media about how a company made them feel. Organizations that tap into these emotional needs positively will generate great interest in their brand.
Customer satisfaction: “Cust Sat”. NSAT. CSAT. The shorthand and acronyms vary and every company uses one or another. No matter which one is chosen, the two most important aspects are to 1) know that it’s the measure of the overall satisfaction of the interaction or service and 2) to get it right.
Why it matters – Customer satisfaction is the number one indicator of how well you are doing to satisfy your customers. It’s also a great way to gain insight into customers’ thoughts on the products you offer today as well as identify future direction for product development and feature updates. By keeping tabs on overall customer satisfaction, companies can make adjustments quickly to improve service levels, reduce wait time, or address frequent queries. Call centers are often the front line of issues and companies can get instant feedback as to how they are doing.
Net promoter score: If customer satisfaction is the number one indicator of IF your customers like you, then Net Promoter Score (NPS) helps you understand just HOW much they like you. Customers may like your product or service after they get off the phone with you. But if they really like it, they’ll pass it along to friends or post about it on social media. The Net Promoter Score essentially allows you to measure customer loyalty. Based on a 0–10 survey score, it classifies customers into one of three categories: Promoters (9–10), Passives (7–8), and Detractors (0–6).
The Net Promoter Score is derived by subtracting the percentage of detractors from the percentage of promoters to get an overall NPS result.
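In code, the standard NPS calculation over raw 0–10 survey scores looks like this:

```python
def net_promoter_score(scores):
    """NPS: percentage of promoters (9-10) minus percentage of detractors (0-6).

    Result ranges from -100 (all detractors) to +100 (all promoters).
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)
```

For example, survey scores of [10, 10, 9, 5] give three promoters and one detractor out of four responses, for an NPS of 50.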
Why it matters – As you’d guess, the more detractors you have, the lower your NPS and the greater the likelihood that your service isn’t very good. Detractors are more likely to spread negative word of mouth, and they do so much faster than customers who receive average or great service spread praise. A continually low NPS will spell trouble and ultimately impact the brand. Companies that successfully track NPS and spark action from a high number of Promoters can improve customer loyalty and drive long-term growth.
Understanding and effectively balancing the metrics based on Success, Effort, and Emotion will help you achieve your IVR goals.
Publish Date: May 17, 2016 5:00 AM
A recent profile in the Wall Street Journal shows how the NBA champion Golden State Warriors – for many years a league doormat – used statistical analysis to determine that their traditional strategy of working the full 24-second shot clock for a chance close to the basket (“down in the paint,” in basketball parlance) was costing them points and victories. Instead, the numbers said, players should be attempting more long-range three-point shots. Many more.
Based on this, the Warriors redesigned their offense. Instead of making multiple passes and cuts in an attempt to get a lay-up (Figure 1), the Warriors now get the ball to their best three-point shooters as quickly as possible, even if it means taking the longer shot (Figure 2).
The result? After winning the NBA title in 2015, the Warriors went on to win a record 73 games during this year’s regular season, and, having won their first two playoff series, they are heavy favorites to repeat as champions.
When you examine the design of a typical IVR application, it looks remarkably like the complex, old-style basketball play:
The customer starts at the top, with a prompt that says something like “please listen carefully, our menu options have changed” and then proceeds to hear at least a half-dozen options that might, or might not, match what they’re calling about. If they make a wrong choice, they either have to go back to the top and start over, or they call “TIME OUT” and press 0 for an operator.
When management looks at the performance of such a design, they have to wonder, as the Warriors did, if there is a better way.
There is. It’s called Conversational IVR. Instead of reciting a long list of options, hoping the customer will find what they’re looking for, Conversational IVRs simply ask, “How can I help you?” Using speech recognition and natural language understanding, the IVR is then able to determine the reason for the call and provide either the right answer or a way to get something done.
Statistical analysis of Conversational IVRs shows that they increase containment and task completion. We’ve seen companies leveraging Conversational IVR achieve a 5-15 percent increase in containment. Just like a Warrior being handed the ball at the three-point line and launching his shot immediately. SCORE!
Of course, neither the Warriors nor the Conversational IVR would win if they didn’t put the right players on the floor. The Warriors have built their team around two of the best long distance shooters in the game, Steph Curry and Klay Thompson, with a supporting cast of top-notch pros who understand the strategy and execute it flawlessly.
In the same way, your Conversational IVR needs the best speech recognition and natural language understanding capability, with applications designed by pros who know how to get the customer from “hello” to “happy” as quickly as possible, on a platform that performs without a hitch.
Who are you going to pick for your team?
Publish Date: May 12, 2016 5:00 AM
This post is part of a series that explores the use of human assisted virtual agents, and how machine learning and artificial intelligence are being applied to ultimately improve the customer experience.
Customer support automation is an important playing field for today’s Artificial Intelligence and Machine Learning systems. This no longer primarily means call center automation; rather, users expect and use a mix of channels, including web and chat – and all of these can be automated. Some may wonder whether a human or an automated customer service agent is better, but from where we sit, it’s not an either/or. Instead, human and automated service agents can cooperate to get things done for the customer and ultimately offer a better experience. In other words: a human-assisted virtual agent (HAVA).
Before we look at two different ways of doing that, we first need to understand the actual tasks we are trying to solve. And that starts with the fact that customers will have different problems to solve; these are called "intents." There may be hundreds of them, and they range in complexity. The simple ones are requests for information ("Do you have details on product X?", "How do I switch feature Y on?"), which can be solved by finding the right answer in a database of documents available to both the human agent and the automated agent. The more challenging ones involve access to multiple backend databases and transactions on customer data ("Please change my payment scheme from monthly to quarterly"). In the scenarios we are looking at, we may have automated some of the intents, but not all (yet).
In our first scenario, a chat is going on between customer and agent. The virtual customer service agent sits behind the human agent and follows the conversation; for intents where she can generate the answers, she does so and suggests them to the human agent (for example, by quickly populating the agent's screen). That way the agent can be much more efficient, focusing only on the more challenging aspects. The agent can also check whether the suggested answer is correct, which provides good feedback to our HAVA for getting better at her task (and we'll come back to that below).
In the second scenario, it is actually the virtual agent who conducts the chat conversation with the customer. Where she is confident she can answer the request, she does so right away. But for intents not covered by her knowledge base, or when she is in doubt about the right answer, she can involve a human agent in the background. Note that it is still our virtual agent who gives the answer back to the customer, and this highlights an advantage of this model: the customer experiences an apparently perfect system from day one, even while the virtual agent is still in her learning phase. As she gets better, she simply asks for help less and less, while the customer experience stays the same. And of course the "getting better" is the other interesting point here.
The virtual agent uses machine learning to get better at things, but most machine learning techniques work in so-called "supervised" mode. That is, not only do you need data to learn from (lots and lots of data, actually), but it also has to be hand-labeled with the right answer. If you want to train a neural net to recognize faces, you need pictures of faces labeled with the correct name; for speech recognition, we use thousands of hours of labeled (or "annotated," as we call it) speech. The nice thing in our two scenarios is that we get data suitable for supervised learning for free: the virtual agent has access not only to the customer requests, but also to the correct answers as an agent provides them.
So, by creating a useful virtual assistant tool that can be refined by the customer service agent, we've solved several typical problems associated with virtual assistants: 1) We've reduced negative user experiences, since a human steps in when the virtual assistant inevitably makes errors. 2) Customer service agents correct the virtual assistant while doing their typical work: they do not have to be reassigned to label lots of data to create the virtual assistant; they are just answering user questions. 3) As the customer service agent answers questions, they are also creating labeled training data in exactly the format that sophisticated deep learning techniques require, which will lead to a virtual assistant that performs closer and closer to state-of-the-art.
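The second scenario can be sketched in a few lines of code. This is a minimal illustration, not a description of any actual product: the class name, the confidence threshold, and the callback signatures are all hypothetical, standing in for a real intent classifier, answer store, and agent desktop.

```python
# Sketch of a human-assisted virtual agent (HAVA): answer when confident,
# escalate to a human otherwise, and keep every human answer as a labeled
# training example for supervised learning. All names are illustrative.

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff for answering autonomously

class HumanAssistedVirtualAgent:
    def __init__(self, classify_intent, answers, ask_human):
        self.classify_intent = classify_intent  # request -> (intent, confidence)
        self.answers = answers                  # intent -> known answer
        self.ask_human = ask_human              # request -> human-written answer
        self.training_data = []                 # (request, answer) pairs, labeled for free

    def handle(self, request):
        intent, confidence = self.classify_intent(request)
        if confidence >= CONFIDENCE_THRESHOLD and intent in self.answers:
            return self.answers[intent]
        # Not confident enough: involve a human agent in the background.
        answer = self.ask_human(request)
        # The customer still hears the virtual agent, and the human's answer
        # doubles as supervised training data for the classifier.
        self.training_data.append((request, answer))
        return answer

# Illustrative wiring with stand-in callbacks:
agent = HumanAssistedVirtualAgent(
    classify_intent=lambda req: ("billing", 0.95) if "payment" in req else ("unknown", 0.2),
    answers={"billing": "You can change your payment scheme in account settings."},
    ask_human=lambda req: input_from_human_agent(req),
)
```

The key design point is the last two lines of `handle`: escalation and data collection are the same code path, which is why the agents in this scenario label training data simply by doing their job.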
In Part 2 of this blog series, we will have a closer look at two specific tasks in this context and how we solve them with machine learning techniques using this data.
Publish Date: May 11, 2016 5:00 AM
Temkin Group recently released their 2016 Temkin Experience Ratings, which grades companies across different industries on the basis of customer experience. It may not come as a surprise to my colleagues in the healthcare industry that health plans were rated lowest of all 20 industries evaluated. (No health plans were even ranked among the top 50 companies, though Tricare and Kaiser can claim the top two spots within the payer vertical.)
This doom and gloom comes as consumers have raised their expectations around consumer experience, based on the online self-service standards now set by retailers, delivery services, banks, and credit card companies. In addition, more than 16 million new consumers have enrolled in health insurance through Affordable Care Act-related programs (like the exchanges), and these members are often novices when it comes to health benefits, needing more guidance than a commercial member and more interaction with the plan, by default. Combine this with the growing emphasis on member engagement and chronic disease management, and health plans have their work cut out for them.
But all is not lost! I thought Kelly Rakowski’s recent article in Managed Healthcare Executive did a nice job of laying out three areas in which health plans can improve the member experience and potentially move up in the Temkin Ratings (perhaps because they’re topics I’ve steadily beat the drum about over the past couple of years):
There’s a lot at stake here. Exchange members are increasingly fickle and bring a different set of expectations for service than what health plans are used to, resulting in lower retention rates and higher shopping rates than plans see in their commercial business.
As a JD Power report recently put it, “health plans need to take a more customer-centric approach and keep their members engaged through regular communications about programs and services available through their plan. When members perceive their plan as a trusted health partner, there is a positive impact on loyalty and advocacy.”
Here’s hoping the next year brings an elevated member experience, and a spirited climb up the basement stairs into the daylight!
Publish Date: May 10, 2016 5:00 AM
It’s sad but true: Data breaches have become a way of life, but unless you’ve experienced one, it’s difficult to realize just how painful they can be. Research from the Ponemon Institute shows that the average cost of each stolen record can be as much as $200, and a breach can forever tarnish a company’s reputation, so it’s clear that organizations of all sizes need to do all they can to protect their data.
To that end, companies are looking for consistent and effective ways to safeguard client or proprietary data. Implementing traditional security tools and attempting to anticipate hacking problems can dramatically reduce vulnerabilities. A recent article from Workflow highlights four additional steps your organization can take to improve security efforts even more.
1. Focus on what you should keep. One essential way to lower your overall risk is to create clear policies about what types of documents or records you keep in the first place. Not every record needs to be kept, and when you think about it, not every part of a record needs to be kept either.
Adopting a partial document archiving strategy can keep more sensitive data – such as social security numbers or credit card information – out of internal systems when documents are stored. This step can successfully limit what hackers can get their hands on if they’re able to access your network.
2. Take advantage of redaction. One of the hot trends in document archiving today is the storage of captured documents, scanned documents or other documents that are stored as images. These digital images have a wealth of information in them, but the risk is that hackers can get all of this information if they get their hands on these records.
One effective solution is redaction. Automatic redaction features can help users block access to their most sensitive information. For example, specific identifiers like social security numbers may have little or no business value inside a document management or archival system – and may present added risk. If users can use PDF solutions to automatically redact this information, hackers can’t access this data. Business-class PDF solutions enable users to search for words or phrases or even patterns as the basis for redaction – critical steps in saving time and improving security.
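The pattern-based redaction described above can be sketched simply. This is an illustrative example only, not how any particular PDF product implements the feature, and the regular expressions are hypothetical: production redaction needs locale-aware, validated patterns (and for scanned images, OCR before any text search is possible).

```python
import re

# Hypothetical patterns for two common sensitive identifiers.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # 13-16 digits, optionally separated by spaces or hyphens.
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
}

def redact(text, patterns=PATTERNS):
    """Replace each sensitive match with a marker before the document is archived."""
    for label, pattern in patterns.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("My SSN is 123-45-6789."))  # My SSN is [REDACTED SSN].
```

Running this before documents enter the archive keeps identifiers with little business value out of internal systems, so a breach of the archive exposes markers rather than data.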
3. Protect endpoints, including multifunction printers (MFPs). We’ve written previous articles on the security risk MFPs pose, and unfortunately, the risk still exists. You also need to guard system endpoints. Look at USB ports and other access ports, and monitor them better to prevent certain kinds of insider threats and data loss. Network management systems also help. By keeping an eye on the system, and tracking sensitive data as it goes through the system, businesses can decrease the chances of being involved in an embarrassing and costly data breach.
4. Keep up on security best practices. Having an ear to the ground also helps when trying to safeguard business systems. Keeping up with items like the SANS 20, a list of recommended security controls, or news from places like Kaspersky Lab, can help business leaders to circle the wagons in an age of prevalent cybercrime.
Admittedly, it’s challenging to stay ahead of hackers, but following these best practices will help level the playing field and help you improve your overall security.
Publish Date: May 5, 2016 5:00 AM
If you manage – or ever even touch – your company’s IVR, you know the ‘classic three’ KPIs: misroutes, call containment, and first call resolution (FCR). Nearly every contact center executive depends on them, as they have a direct impact on customer satisfaction and controlling costs. But what’s the relationship between the three and how do organizations make sure they aren’t putting too much emphasis on one, which could adversely affect another?
Many companies have learned this lesson the hard way over the years. Although these metrics are fundamental, businesses need to be reminded from time to time to review how they are using them and to increase their focus on getting to know their customers: why, when and how they make contact. This continuous effort needs to be part of the metrics discussion to create a clear path for IVR design and improvements.
The best way to learn is by looking at others already leading the way. Consider the success of Amtrak, which receives more than 20 million calls per year. You can be sure they keep an eye on ways to efficiently automate as many calls as possible, while making sure they strike the right balance between customer satisfaction and cost savings. In fact, nearly 25 percent of all calls are handled within its IVR – and of the people that choose to self-serve, 54 percent reach full resolution within the automated system. By analyzing the right mix of metrics and applying tweaks to their self-service, Amtrak has seen a 53 percent lift in customer satisfaction and massive cost savings.
Enterprises and agencies globally (like American Airlines, NYC311, The State of Michigan Office of Child Services and Delta) are also reaping the benefits of analyzing KPIs and caller population to make important design changes to their IVR. The analysis can reveal important ways in which newer technologies should be used to reduce misroutes, contain calls and costs, and complete a query in the first call. By applying predictive logic, voice biometrics, proactive notifications and conversational use of natural language and dialog, interactions become intelligent, streamlined, easy, and agile – all imperatives for a great caller experience.
Join me for a Nuance-hosted webinar, “A New Twist on Self-Service Metrics: How to Get More CSAT and Cost Control from Classic KPIs”, where we’ll share more tips in greater detail on:
Publish Date: May 5, 2016 5:00 AM
During the Nuance Automotive Innovation Day held March 7th in Palo Alto, CA, I had the opportunity to connect with Margarete Wies, Vice President Advanced UX Design at Mercedes-Benz Research & Development North America Inc. Here’s what she had to say about the future of the connected car.
Fatima Vital: What is top of mind for automakers when it comes to next generation infotainment?
Margarete Wies: People are used to taking their digital lifestyles wherever they are. They don’t want their digital life to stop when entering a car. Our customers expect in-car infotainment that supports a seamless integration – an expectation accelerated by the Internet of Things. The vehicle infotainment system becomes the digital core of the car offering technologies to enhance safety, privacy and convenience. Future Mobility is not just about cars, it is about ecosystems.
FV: How have you viewed this rapid evolution of the connected car over the last few years and more importantly, the consumer appetite?
MW: Streamlining our lives by connecting the car to the Internet of Things is a rapid evolution we experienced in the last few years. As a reaction to our customers’ appetite we launched the Mercedes-Benz Companion App. This is a great example of how we integrated contextual intelligence into customers’ habits and are evolving with their digital lifestyle. In addition to door-to-door navigation, the new Companion App uses machine learning to provide a personalized user experience by learning from customers’ actions and their environment. It extends customers’ existing behavior on their mobile phone seamlessly into Mercedes-Benz vehicles.
With our Apple Watch app, we are extending the user experience even further by assisting our customers when they are outside of the vehicle as well. The user receives important notifications about his car, walking directions to the final destination and to the parking location of the car.
FV: There is a lot of content and information being brought in, built in and beamed into the car. Where do you see the biggest challenge from a UX point of view? And how is Mercedes addressing it?
MW: Managing complexity is a big challenge. Digitalization enables us to reduce complexity, to create more convenience, better accessibility and ease of use. Our user experience design plays a significant role and follows the Mercedes-Benz design philosophy of sensual purity, which conveys a sense of simple, purist modernity.
So for our role as creators and product makers, how can we create a better user experience while living with technology? We are addressing it through a natural interaction between the human and the machine, utilizing artificial intelligence for contextualization and personalization – providing technology only when needed. The car anticipates my actions and knows me in ways that delight. I’m still in control of the experience, supported by the intelligence.
Take the sensitive Touch Controls in the steering wheel of the new E-Class. Like a smartphone interface, they allow the driver to control the entire infotainment system using finger swipes without having to take their hands off the steering wheel, and with minimum driver distraction.
FV: As you look 5-10 years out, what is Mercedes’ vision for the connected and autonomous car?
MW: We see the car growing beyond its role as a mere means of transport. The interior of the vehicle becomes a contextual and highly personalized digital living space. The intelligence of the car allows for continuous exchange of information between vehicle, passengers and the outside world. The passengers in self-driving cars can use their newly gained free time while traveling for relaxing or working as they please.
Mercedes-Benz is setting the pace in Autonomous Driving: In 2013 with the “Bertha Benz Drive,” when the S 500 INTELLIGENT DRIVE research vehicle drove 100 kilometers along a historic route to demonstrate the feasibility of autonomous driving on both interurban and urban routes; In 2016 with the new E-Class, the world’s first series vehicle with test license for autonomous driving in Nevada; With the Mercedes-Benz F 015 Luxury in Motion research car and its groundbreaking aesthetics and technology, which we demonstrated at CES 2015 in Las Vegas to show what the self-driving car might look like once its shape has been emancipated from the need to have a driver.
New technologies such as artificial intelligence give us the opportunity to focus on human needs even more. We can use the time in our moving space in a more valuable way. The future car will offer you more flexibility and more quality time to do what you want.
FV: What would you personally enjoy doing in a fully autonomously driving car?
MW: Well, what makes our time in the car special? Maybe it’s the time we spend interacting with others. I would use my car as my “third place,” spending quality time with my family, using it for relaxing and entertainment, as well as for my office space.
Publish Date: May 5, 2016 5:00 AM
Although it had been a while since I saw the original Star Wars trilogy, within minutes of watching The Force Awakens I immediately was reminded of the essential and integral role robots play in the films. Not only do they have distinct personalities, they have emotional intelligence, guiding and complementing their human counterparts, and acting (both literally and figuratively) as wingmen. From the co-piloting abilities of the droids to the etiquette protocol and translation assistance that C-3PO provides, the assistive nature of these machines beckons a future with even tighter, more natural integration between technology and society.
We have entered what MIT professors Erik Brynjolfsson and Andrew McAfee term “the second machine age,” an era in which we are learning to harness the power of digital technologies to apply massive data sets, algorithms, and machine-learning capabilities that improve how we do things. Although still nascent, the robots and virtual assistants currently being developed leverage large amounts of data and knowledge to determine a person’s intent, process the request, and then respond and react appropriately.
But today’s technology doesn’t stop with call-and-response activities. One of the goals of machine-learning and AI is to automate certain tasks by replicating how a human would process, handle, and perform them. This means that as denizens of “the second machine age,” we are beginning to design new systems that can successfully handle complex problems and find alternative options when an unforeseen complication arises.
This human-machine experience becomes even richer when systems are able to recognize and leverage interpersonal factors such as body language and tone to understand and emulate human intention, instead of requiring specific directives. Machines that can learn the subtleties of human behavior and simply know what action best matches the environment and situation have endless potential. These systems are not only assistive and able to solve complex problems, but are also able to classify information as positive or disappointing and respond empathetically in tone and gesture. Starting to sound like a familiar gold-plated droid?
We are quickly approaching this level of contextual interaction and problem-solving capabilities, which is, at its core, what C-3PO and the protocol droids do—they empathetically assist. And it is the way they respond that help foster deeper relationships between man and machine, not only helping us accomplish tasks, but providing a sounding board, a second opinion, or basic advice. As we continue to make advancements in robotics, sensors, cognitive computing, and artificial intelligence, we are charting toward a future where intelligent machines not only exist, but become an extension of ourselves, working toward a common goal. We are entering a new era—one full of partnership and promise…
May the Force be with you.
Publish Date: May 4, 2016 5:00 AM
In 2014, Pharrell Williams sang and danced his way into our collective consciousness with his infectious hit “Happy”, inviting us to clap along if we felt “like a room without a roof,” urging us to emotional heights that could not be contained by four walls and a ceiling. If only consumers felt that way about the companies they do business with.
While most companies that sell to consumers claim their customers can accomplish what they set out to do without too much effort, these companies are failing customers emotionally.
When the Temkin Group asked consumers how they felt about their most recent interactions with any of the 294 companies included in the 2016 Temkin Experience Ratings survey, no company got an excellent rating, synonymous with customer delight. Even more disappointing, 40 percent of companies were rated as poor on the emotion metric, indicating their customers were left feeling at least somewhat upset by their interaction with the business.
You might think that if a customer can do what they need to without breaking a sweat, that would be enough. Yet providing an emotionally positive experience is the most important factor in earning and keeping loyal customers. How consumers feel about an experience heavily influences their future purchasing behavior – and Forrester has found that negative emotions have a much bigger impact than positive ones. Make a customer angry and they’re not only likely to stop doing business with you, they’ll tell everyone they can about their negative experience. In a recent survey, Wakefield Research found 78 percent of customers will take action after a single bad interaction!
On the flip side, positive experiences can go a long way when it comes to creating the coveted “promoter” customer, the one who readily tells friends and family that they should also patronize your brand. Eighty-eight percent of consumers will take positive action if they have a good experience with your business.
Want more promoters and fewer detractors? Of course you do. Here are three things that can improve the emotional impact of your customer interactions:
One of the most effective ways to improve your customers’ experience is to predict their needs and proactively offer solutions before they experience a negative consequence. A good example of a company that does this is Southwest Airlines, which got the highest rating of any airline in the Temkin survey. When Southwest has to cancel a flight, they immediately reach out to their passengers using text and voice messages that offer rebooking options. The result for Southwest is happier customers and a reduced influx of calls into their contact center.
Some interactions are more likely to spawn negative emotions than others. Attempting to collect a past due account is one of these. Customers are often defensive when it comes to their finances, especially when dealing with a cash flow problem. Having a highly motivated but inexperienced employee calling them to make a demand for payment can start an argument faster than you can say “you can’t get blood out of a turnip.” A better approach is to use a digital communication such as a voice or text message that objectively presents the fact that an account is late and provides interactive response options for resolving the issue. If the customer then selects the option of speaking with a representative, it’s their choice and more likely to result in a positive outcome, from both a transactional and emotional perspective.
It’s been said that complaints are like medicine, sometimes hard to swallow but good for you. That’s true, but only if you resolve the issue raised by the customer and then work to change the behaviors that generated the complaint in the first place. To do this, you need an effective complaints management program. Your program should not only deal with formal complaints submitted directly to you or to third parties like the Consumer Financial Protection Bureau, but also be capable of identifying negative emotions expressed by a customer during an interaction that don’t result in a formal complaint.
These are ticking time bombs that you want to root out and defuse like a champion minesweeper. To do this, you need tools like speech analytics that examine your customer interactions for keywords, phrases, and in the case of telephone contacts, acoustic characteristics that indicate customer stress or dissatisfaction. The most sophisticated of these tools allow supervisors to intervene while the customer is still engaged, providing real-time coaching to customer service agents or taking over if necessary.
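The keyword-and-phrase side of that analysis can be illustrated with a toy scorer. This is only a sketch of the idea, not any vendor's product: real speech analytics also weighs acoustic stress cues, and the phrase list, weights, and threshold below are entirely hypothetical.

```python
# Toy keyword-based complaint detector for interaction transcripts.
# Phrase weights and the alert threshold are illustrative, not calibrated.

STRESS_PHRASES = {
    "cancel my account": 3,
    "speak to a supervisor": 3,
    "ridiculous": 2,
    "frustrated": 2,
    "still waiting": 1,
}

ALERT_THRESHOLD = 3  # hypothetical score at which a supervisor is alerted

def stress_score(transcript):
    """Sum the weights of all stress phrases found in the transcript."""
    text = transcript.lower()
    return sum(weight for phrase, weight in STRESS_PHRASES.items() if phrase in text)

def needs_intervention(transcript):
    """Flag the interaction so a supervisor can coach or take over in real time."""
    return stress_score(transcript) >= ALERT_THRESHOLD
```

Run against live transcripts, a flag like this is what lets a supervisor step in while the customer is still engaged, rather than discovering the dissatisfaction in a formal complaint weeks later.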
If you do these three things, will your customers start dancing and singing like Pharrell? Your “voice of the customer” surveys are a good place to look for the answer. But don’t stop with just these three initiatives. Use the feedback to create a model of what matters to your customers, and keep a short list of pain points to eliminate or fix to keep them happy.
Publish Date: May 3, 2016 5:00 AM