Nuance - ContactCenterWorld.com Blog Page 7
If you’ve been reading my series, you know that AI and machine learning (ML) can have a powerful impact on delivering the best possible customer care experience. Specifically, we’re applying “big knowledge” for customer service tasks. What does this mean?
The first task we want to look at is “passage retrieval,” or finding the relevant text passages that contain the answer to a question. It helps to solve the “simple” intents in the customer service application we mentioned above, where customers ask for something that is (hopefully) contained in the document database. And instead of searching for words only, and hoping that the customer’s question and the target document use the same language, we will apply what we learned in the previous part of this series.
As the diagram shows, the trick is that we run both the database of documents and the question through the Natural Language (NL) pipeline and generate enhanced dependency trees. The former is done offline to compile an index of such trees, while the question is processed at run time. The best matching trees are selected as answer candidates, the corresponding text passages are ranked, and the best candidate is read back to the customer. When we tested this with a customer, we found that it worked much better than their legacy search tool, which was based on traditional word-level search. That tool worked reasonably well when customers used an appropriate keyword (it found the right answer in 84% of cases) but degraded sharply when people used full natural language queries (54% success). Our new solution scored 96% and 81%, respectively.
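To make the offline-index / runtime-query split concrete, here is a minimal Python sketch of the ranking step using plain word overlap on hypothetical passages. The production system described above matches enhanced dependency trees, not raw words; this is only the word-level baseline it improves on.

```python
# Toy passage retrieval: rank indexed passages against a question.
# Hypothetical passages; a real system would index enriched dependency
# trees offline and match the question's tree at run time.

def tokenize(text):
    return set(text.lower().replace("?", "").replace(".", "").split())

# Offline step: compile an index (here, just tokenized word sets).
PASSAGES = [
    "Open the hood and locate the oil filler cap to change the motor oil.",
    "Press the reset button to restore the factory settings.",
    "Contact support to update your billing address.",
]
INDEX = [(p, tokenize(p)) for p in PASSAGES]

def best_passage(question):
    """Runtime step: rank passages by Jaccard overlap with the question."""
    q = tokenize(question)
    def score(tokens):
        return len(q & tokens) / len(q | tokens)
    return max(INDEX, key=lambda item: score(item[1]))[0]

print(best_passage("How do I change the motor oil?"))
```

Replacing these raw token sets with indexed dependency trees (and, later, concept IDs) is the idea behind moving from the word-level baseline toward the enriched matching described above.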
Similarly, we are now using this for another typical ML task, “clustering.” As I mentioned above, when customers contact agents they may have one of several different “intents,” or buckets of tasks, in mind. How do we know which “buckets” exist? Of course you can do a manual analysis, which may be very time consuming. Instead, you can also use ML methods that try to find “clusters” of things that look similar compared to the rest of the data. In our case, imagine you look at 100,000 incoming requests: can you find 100, 200, or 500 “buckets” that can be mapped to request types or intents? If you do this automatically, the additional benefit is that you can monitor how requests change over time, as what your customers want from you may also change over time. The naive approach is to apply standard machine learning clustering methods to the customers’ initial requests, at the word level. But given what we learned in this blog series, we can improve on this in two ways. First, we will not use only the initial user utterance: since we can observe the entire interaction, including what the human (and automatic) agents actually do with the request, we should take that entire interaction into account when clustering. Second, we will again use our NLU pipeline to transform mere words into semantically enriched trees and run the clustering algorithms on those.
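As a rough illustration of the clustering idea, the sketch below greedily groups hypothetical requests by word overlap. The requests, threshold, and similarity measure are all illustrative; the approach proposed above would instead cluster semantically enriched trees built from entire interactions.

```python
# Minimal greedy clustering of customer requests by word overlap.
# Hypothetical data; a real pipeline would cluster enriched trees
# derived from whole interactions, not raw word sets.

def tokens(text):
    return set(text.lower().strip("?!.").split())

def similarity(a, b):
    return len(a & b) / len(a | b)  # Jaccard similarity of word sets

def cluster(requests, threshold=0.3):
    clusters = []  # each cluster: list of (text, token set)
    for text in requests:
        t = tokens(text)
        # Attach to the first cluster whose seed request is similar enough.
        for c in clusters:
            if similarity(t, c[0][1]) >= threshold:
                c.append((text, t))
                break
        else:
            clusters.append([(text, t)])  # otherwise start a new bucket
    return [[text for text, _ in c] for c in clusters]

requests = [
    "i want to reset my password",
    "how do i reset my password",
    "cancel my subscription please",
    "please cancel my subscription",
]
print(cluster(requests))  # two buckets: password resets, cancellations
```

Run over 100,000 requests, the resulting buckets can then be reviewed and mapped to intents, and re-run periodically to spot how request types drift over time.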
Both of these approaches again allow us to reduce annotation time before a technology is put in use, and allow us to take advantage of the unstructured data that so many enterprises have readily available. The virtual assistant is in essence not only doing something useful for the end user, but also helping to translate a company’s unstructured data into exactly the kind of labeled big data that will allow the virtual agent to move towards state-of-the-art AI learning.
So, big knowledge means big changes coming to customer care through human assisted virtual agents (HAVAs). With the right methods in place, they can drive a more collaborative engagement between humans and machines to create an effective and efficient customer experience for people around the world.
Publish Date: June 7, 2016 5:00 AM
Virtually any article on customer service or customer experience these days contains one or several of the following key words: multi-channel, cross-channel, and omni-channel. Though these words are often used interchangeably, each has a different definition and application. To demystify the jargon, here’s a quick guide to what these terms actually mean and how they can be applied to your business’ customer interactions.
A multi-channel approach simply means that you are using more than one channel to reach customers. This concept isn’t necessarily new, right? Whether it was telephone, mail, or in-person, there’s almost always been more than one way to reach a company. However, with the rapid proliferation of channels in recent years – from text messaging to social media – the notion of multi-channel is more important than ever. It’s expected there will be an average of nine customer experience channels for most organizations by 2017. As customers now have more channels at their fingertips, businesses must follow suit to keep up with customer preferences.
An example of a multi-channel approach is when a company offers customers multiple mediums to choose from to reach the business – whether email, phone, text, or website – and customers are given the ability to select whichever option they’d prefer to communicate through.
Cross-channel refers to the idea that a customer uses a combination of channels to perform a task. A cross-channel approach builds off the multi-channel strategy. Since customers now have access to more channels, it’s natural for them to use more than one to accomplish their goals. This approach allows customers to use multiple channels in tandem.
For example, a customer who has a question regarding a product may start by looking at the company website’s FAQ, then call a customer service representative to gather further specifics.
Just as cross-channel builds on multi-channel, the omni-channel approach builds upon both of the previously stated strategies. Omni-channel recognizes that customers are actively using multiple channels (often simultaneously) and aims to streamline those interactions. By saving customer preferences and information, and referencing that data in all subsequent interactions, an omni-channel approach creates a seamless and consistent customer experience across channels. Essentially, it means that all the channels a business employs work together.
For example, let’s say a customer makes a payment online. But the next time around, the customer calls a representative to make the payment. In this scenario, under an omni-channel approach, the customer service rep will be able to immediately reference the information from the past payment interaction. The customer doesn’t have to repeat all of their account information and preferences, because the representative already has it.
But why does this all matter? Simply put, customers don’t live in a single-channel world. From mobile phones to smart watches, the way customers communicate is different than it used to be. People expect and demand to be able to use which channel they want, when they want it, where they want it. And if businesses don’t comply, they’ll fall behind to competitors who are more than willing to meet customers where they want to be. Furthermore, businesses who integrate these strategies will not only boost customer satisfaction, but they can save time and money while doing so by streamlining their customer interactions.
So with these definitions in mind, revitalize your customer service and put these terms to work!
Publish Date: June 6, 2016 5:00 AM
The future is upon us. Companies are now starting to adopt identity-verification technologies that were once thought overly futuristic (HAL had no problem identifying Dave in 2001: A Space Odyssey). Fingerprints are no longer the only unique body identifier: irises and even ears are now on the list of biometric authentication tools. But the identification opportunity that’s the least understood is voice biometrics, mostly due to misconceptions about what voice biometrics actually is and whether it is secure.
Let’s take a couple minutes to dispel some of these misunderstandings, so you can determine if voice biometrics is the best way for you to provide a seamless authentication experience for customers.
Myth #1: People can overhear me and will be able to steal or use my password.
For more than 20 years the internet has been telling people to “Never give out your username and password or very bad things will happen.” Even my own son was leery of typing his password with me looking over his shoulder (granted, that may have been due to good reasoning). Keeping one’s password safe is just good sense. Why would I speak my password somewhere where fraudsters could hear me?
The important distinction with this misconception is your voice is your password. Voice biometrics leverages more than 100 unique speech characteristics to create a unique voiceprint (just as individual as a fingerprint) for each customer. The words you speak do not grant you access to your account – the unique characteristics of your speech pattern are your password. There are both physical and behavioral characteristics of a person’s voice. Physical characteristics such as the shape of your vocal tract, how your mouth moves when you speak, and the size and shape of your nasal passages are unique to a given individual. In addition, voice biometrics recognizes unique behavioral traits such as pronunciation, speed of speech, pitch, and accents. No one can steal your account information simply by hearing you speak a passphrase.
Myth #2: Everyone says I sound just like my Dad; wouldn’t he be able to log into my account?
You may sound just like your Dad to everyone around you, but to the voice biometrics system you are two distinct individuals. As a matter of fact, WIRED Magazine recently put Nuance’s voice biometrics to the test by comparing famous people to master mimics – including Kevin Spacey. While the mimics sound just like the originals, voice biometrics was not fooled.
Myth #3: My voice changes all the time. I’m worried I won’t be able to get into my account if I have a cold.
This misconception is one of the most common ones, and many CIOs, contact center managers and other people consider it a show-stopper. But numbers prove otherwise.
For example, Nuance’s VocalPassword solution has delivered successful authentication rates within customer-facing IVRs 97 percent of the time. On average, a person with a cold tends to experience an error rate that is about double the average. As such, a person with a cold has a 94 percent chance of getting successfully authenticated, which is still significantly higher than the 40 to 60 percent success rate customers typically experience with a PIN or password. The high success rate for people that have a cold is made possible by Nuance’s approach of analyzing more than 100 aspects of each caller’s voice, and a cold affects only a handful of those.
Myth #4: If someone hacks the company’s database they will have access to my voiceprint.
After the Target security breach of 2014, everyone has been talking about what happens when a hacker gets your credentials. Unlike with a username and password, the hacker cannot use what they have stolen because it requires the back-end to process the voiceprint. Even if they were able to steal the recorded voice, Nuance provides playback detection to protect from spoofing. This feature tests incoming audio to see if it represents live speech or if it fraudulently uses a recording of an authorized speaker, mitigating the risk of fraudsters using voice recordings of legitimate speakers.
Infiniti Research estimates that voice biometrics can address 90 percent of fraud in a voice channel, as well as address 80 percent of fraud in a mobile channel. So even with hackers on the prowl, your data is safer with voice biometrics.
Myth #5: I don’t like biometrics because it is based on something that cannot change (Fingerprint, iris, voice) and if I need to change my password it cannot be done!
Unlike fingerprints or irises, which are static biometric credentials, voice biometrics is a dynamic biometric credential. A static biometric, like a fingerprint, is unchangeable, while a dynamic biometric is constantly evolving.
Most of us have ten fingers, so there is a small amount of variability possible: if you enroll your right index finger to authenticate into a system, and a hacker compromises your fingerprint, you could enroll another finger. But at the end of the day, you have a maximum of ten possible credentials with fingerprint biometrics. With irises, that number drops to two. With voice biometrics, you have an infinite number of possible voiceprints.
Let’s say that you have the following voiceprint to authenticate into your account: “My voice is my password at VB Bank.” Should a malicious individual record you saying this passphrase, you could revoke this credential and create a new one where you say “At VB Bank, my voice is my password.” You can easily see that there are an infinite number of possibilities with voice, so it’s important not to lump all biometric technologies together. Irrevocability is only an issue with static biometrics.
Publish Date: June 2, 2016 5:00 AM
The world of customer service looks vastly different than it did 20 years ago. With the pace of change, new channels and higher expectations have forced a change from what defined industry leading service even five years ago. Consumers are communicating in fundamentally different ways and the number of touch points continues to explode.
Gaining massive popularity are messaging apps such as WhatsApp, Facebook Messenger, WeChat and Viber, which tout a combined user base of roughly 2.9 billion. These apps are particularly popular among Millennials, as 50 percent of the users for the ten leading chat apps are under the age of 35. Similarly, chatbots are the hot topic in customer service right now. Companies from Facebook to Google to Microsoft are deploying these all-in-one virtual assistants as new ways to communicate with consumers. And finally, we can’t forget about text messaging. In a recent survey, more than 42 percent of Americans reported wanting to communicate with businesses via text message, but only 7 percent received reminders and notifications on this channel. Another 25 percent wished they could conduct customer service via two-way text.
Taken together, these trends paint a clear picture: The market is changing. Message-based communication is what consumers want. To stay relevant, businesses must adapt again.
To address this shift in consumer preference, Nuance announced Nina for Messaging, leveraging Nuance’s Natural Language Understanding and conversational technologies to provide an intelligent, automated experience on popular messaging channels, through a common platform. With Nina for Messaging, customers more easily find answers, solve complex problems and execute purchases via in-app messaging, conversational text messaging, and within apps such as Facebook Messenger. The solution leverages a common multichannel platform, allowing businesses to cost-effectively extend a consistent self-service experience across messaging channels, while maintaining control of both data and security. In doing so, Nina for Messaging increases customer satisfaction by creating personalized, effortless experiences that allow consumers to conduct business quickly and easily.
The market momentum speaks for itself – message-based customer service is rapidly coming to define a quality experience. But this shift doesn’t need to represent added cost to the enterprise. Fifty-nine percent of consumers agree that automated self-service options have improved customer service, according to a recent Wakefield survey. Nina for Messaging is designed to provide the self-service experiences your customers want, on the new channels they prefer.
Publish Date: May 27, 2016 5:00 AM
We all know how good customer service looks. That thoughtful touch. The extra mile. Added efficiency. And effortless interactions. But what specifically distinguishes the customer service leaders from the laggards? What do the leaders do differently? And how can other companies replicate the success model to achieve better customer service results?
The industries which consistently receive high customer service scores may surprise you. Supermarkets, fast food, banking and retail are all at the top of the list.
What do those industries have in common? And why do they hold the key to the treasure chest of customer service secrets? Across these leading industries for customer service, clear patterns emerge in how these industries help customers accomplish their goals and build an experience to admire.
Here are their secrets to success.
- Self-service options. Not only do customers (especially Millennials) increasingly prefer self-service options, but automated self-service also saves businesses resources. Customer service leaders provide valuable tools which allow customers to solve their own problems and find their own answers, such as self-checkout lines, ATMs, drive-thru lanes, mobile apps, information kiosks, and downloadable coupons.
- User control. Effective customer service allows the customer to be in the driver’s seat. Let your customers do what they want to do, when they want to do it, how they want to do it and then store those decisions as future preferences.
- Personalization. Everyone likes to feel special. Whether it’s a store employee who knows you by name or a barista who remembers your last order, customers enjoy that personal, individualized treatment. Companies with successful customer service often use recommendation engines to help tailor the experience by predicting your next action based on previous interactions and uncovered behavioral patterns.
- Relevant choices. You know the feeling – you go into the department store looking for shoes, then the salesperson spends 30 minutes trying to sell you a watch. Being presented with irrelevant, unwanted options is frustrating. When you have a goal in mind, what can help you make the right choice is critical information and a list of available options. Companies that offer great customer service cut out what’s unnecessary, and only present you with the choices relevant to your current situation.
- Multichannel support. Customers don’t rely on one communication channel alone. They use phone, email, text, web, and mobile apps (sometimes simultaneously). Companies need to leverage an encompassing view of customers, across all channels and contact points, and must consider face-to-face, inbound, and outbound interactions.
These patterns aren’t unique to the supermarket, fast food, retail or banking industries. These principles can be applied and implemented in any industry in order to achieve superior customer service success.
Publish Date: May 26, 2016 5:00 AM
The IVR mindset is shifting. Originally, the essential goals of a company’s IVR were containment and automation. But now there’s a lot more an IVR needs to do to keep customers satisfied and allow companies to reap the business benefits of using an automated system.
Many times customers call into an IVR with the express intent of speaking to a live agent, and 39 percent of customers say that not being able to reach a real person through the IVR system is a top (and regular) frustration. Consumers now view the IVR as a gateway to the call center.
The IVR is increasingly being used as an escalation channel, where consumers only call in when they have already tried to find a solution on their own, and were unsuccessful. So by the time they call the IVR, they’re already frustrated and impatient. In this environment, the purpose of the IVR is increasingly shifting. Its purpose is to provide the call center with immediate information in an effort to route callers to the right agent and reduce call handle times.
With this new purpose in mind, call center metrics are more important than ever. We need to know if agents use the information passed to them by the IVR, whether calls are being routed to the correct agent, and evaluate the length of wait and agent handle times.
There are great tools to evaluate how callers get through the IVR, how many callers are self-served, and how many callers are routed correctly.
But here’s the issue: There’s a dead space between the IVR and the call center, and we currently can’t connect the dots between the two. Contact center managers need to be able to measure results – the IVR’s direct impact on the call center – and thus the cost of providing customer service.
With the IVR now playing the role as gatekeeper for the call center, the two should be more integrated. This is something the industry will need to embrace as we enter the future of a phone-based customer experience.
The best strategy at this point in time is to treat the entire caller experience as a single call, because for the callers, it is.
Don’t stop tracking the call after it exits the IVR. Following the call into the call center allows companies to monitor the effectiveness more acutely, showing wait time, whether the right agent was reached, whether a resolution was reached, etc. This knowledge allows you to focus on those areas in the IVR that have the most impact on your specific call center, and tailor the experience to meet your goals.
I envision a future with a logging standard, designed to log the agent actions and how they affect the overall call result. This would mean there is one repository for data, and companies can receive full-bodied information with more meaningful insights into their customer actions and adjust the experience accordingly to save money and increase customer satisfaction.
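No such logging standard exists yet, but the idea can be sketched as a unified record keyed by a shared call ID, so the IVR leg and the agent leg of one caller experience land in a single repository. All field and event names below are hypothetical.

```python
# Hypothetical sketch of a unified call log: IVR and agent-leg events
# share a call_id, so the whole caller experience can be analyzed as
# one call. This illustrates the concept, not an existing standard.
from dataclasses import dataclass, field

@dataclass
class CallRecord:
    call_id: str
    ivr_events: list = field(default_factory=list)    # e.g. menus visited, intent captured
    agent_events: list = field(default_factory=list)  # e.g. queue wait, transfers, resolution

    def total_transfers(self):
        return sum(1 for e in self.agent_events if e["type"] == "transfer")

    def resolved(self):
        return any(e["type"] == "resolution" for e in self.agent_events)

# One caller's journey, logged end to end under a single ID.
call = CallRecord("c-1001")
call.ivr_events.append({"type": "intent", "value": "billing_question"})
call.agent_events.append({"type": "queue_wait", "seconds": 45})
call.agent_events.append({"type": "transfer", "to": "billing_team"})
call.agent_events.append({"type": "resolution", "outcome": "solved"})

print(call.total_transfers(), call.resolved())
```

With records like this in one place, metrics such as agent-to-agent transfers, wait time, and resolution rate can be traced back to the IVR paths that produced them.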
To properly judge a company’s IVR, it’s essential that businesses understand critical metrics such as agent-to-agent transfers and agent handle time. There must be an ongoing industry discussion to address how we can best connect the IVR and the call center. Although this is in the future, there are things companies can be doing now to gain valuable insight into what’s happening in the gap between the call center and IVR.
Publish Date: May 24, 2016 5:00 AM
I’ve been a writer since I was very young. In fact, long before I wrote my first book I won an employee Valentine’s Day poetry contest while working for Harrah’s Casino in San Diego. I even remember winning a poetry contest in elementary school. The poem was about the peace that I felt as a child when walking through the forest, listening to nature. I have come a long way since then and my transition into becoming a non-fiction writer is something that, believe it or not, happened completely by accident.
It was 2011 and I had just moved to Hawaii to renew my sense of self and purpose when I wrote my first book, “Aloha Joe in Hawaii: ‘A guided journey of self-discovery and Hawaiian adventure’.” At the time, I was writing by hand with pen and paper, which I felt was easier than typing on a computer. It took me a full year to write my first book by hand and another full year to type it on a computer. With that much time invested, I believed that there had to be a better and more efficient way to write.
Soon after, I had just purchased a new computer and heard about this new technology called “voice recognition software.” I was really excited to hear about this and after doing some research, I settled on purchasing Dragon NaturallySpeaking. It took me a little time to get the commands down and for the software program to adapt to my voice. To speed this process up, I would read a few books aloud with the software turned on. I still remember when I told my kids about my experience trying the software for the first time. Of course, they just laughed at me because apparently they had already been using this technology for several years at school.
Up until that point, I had been taking notes for a number of years on other topics that I wanted to write about, but since it had taken me so long to type the first book, I couldn’t imagine how long it would take for me to type up all of this new information. But after playing with Nuance’s Dragon Software, I decided to find out just how much faster I could produce my second book. The short answer is: three days.
When I gave the finished book to my editor she was pleasantly surprised. After going through everything that I had “composed” in those three days, she figured out that I had actually created enough material for not one, but TWO books! My second book, published in 2014, was titled: “Stories I Can’t Tell My Kids – Yet.” And the third, “Your Brain is the Key to the Universe: A Comprehensive Guide to Manifesting Your Ideal Reality and World Harmony,” was recently up for a Pulitzer Prize in the General Non-Fiction category this year. It is in this book that I reflected on many current issues and also wrote about my experience using Dragon software.
I really do owe Nuance a debt of gratitude for pushing the research needed to advance this field. I am a disabled Marine veteran and I have had many injuries in my life, including disabilities in my hands, fingers, wrists, and more. Dragon software has allowed me to realize my dreams of becoming a published author.
About Joe Holt:
Joe Holt (aka Aloha Joe) is a Pulitzer Prize-nominated author, artist, photographer and life coach. He is the author of Aloha Joe in Hawaii: “A guided journey of self-discovery and Hawaiian adventure” (August 5, 2013), Stories I Can’t Tell My Kids – Yet (August 11, 2014), and Your Brain is the Key to the Universe (March 4, 2015). Holt’s forthcoming book, Godfather of Fisherman’s Wharf, is currently in production, and he just finished his fifth book, a fictional children’s story entitled “Gordy the Ferret.” Both were developed using Dragon NaturallySpeaking. Click here to learn more about Joe Holt.
Publish Date: May 23, 2016 5:00 AM
In my last post, I discussed how human agents and human assisted virtual agents (HAVAs) can work together when machine learning and artificial intelligence are applied to customer care systems. Now let’s take it a step further.
In machine learning you often need to compare or “match” things. For example, when you are looking for the right answer in a database, you compare the question to the possible answers stored there. If you want to sort intents into buckets (so-called clustering), you need to compare them with each other and see how similar they are. Many modern approaches do this by only checking whether words are present or not, at face value and in any order – an approach that is quite intuitively called “bag of words.” If two sentences or texts are roughly composed of the same words, the intuition goes, they are probably similar and capture a similar meaning. This approach works surprisingly well for many tasks (classical Internet search relies on it), although it seems to ignore that language is actually more than a bag of words: sentences have a structure and words have a meaning. Let’s look at these example sentences.
- How do I change the motor oil?
- Tell me how the engine lubricant gets replaced.
From the superficial bag-of-words perspective these look very different, although intuitively they capture a similar request or meaning, and customers would expect an HAVA to understand that. Purely statistical approaches solve this by making the observation (after looking at thousands and thousands of texts) that the words “oil” and “lubricant” often appear in similar contexts, and in that way they implicitly learn the meaning of a word by identifying it with the contexts the word typically appears in.
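The gap is easy to quantify. In the toy bag-of-words comparison below (hypothetical tokenization, no stemming or stop-word removal), the two example sentences overlap only in function words like “how” and “the,” so their word-level similarity stays low despite the shared meaning:

```python
# Bag-of-words cosine similarity for the two example sentences.
# They mean nearly the same thing, yet share only function words,
# so the word-level score is low.
import math
from collections import Counter

def bow(text):
    return Counter(text.lower().strip("?!.").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

s1 = bow("How do I change the motor oil?")
s2 = bow("Tell me how the engine lubricant gets replaced.")
print(round(cosine(s1, s2), 2))  # → 0.27, driven entirely by "how" and "the"
```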
However, there is a very old tradition within Computational Linguistics and Symbolic AI of capturing aspects like structure and meaning of language more explicitly. For one, you try to capture the structure by assigning a syntax tree to a sentence or an utterance. One class of such structures, so-called dependency trees, starts from the observation that the core of a sentence is the verb and the other words “depend” on the verb; similarly, adjectives and other modifiers depend on the noun they are next to. Simplified dependency trees for the two example sentences above could look like this:
And if you look at the parts circled in red you can see that they have become similar in structure. So if only we knew that change/replace, oil/lubricant and motor/engine mean the same or at least similar things, we would be there. In fact, many efforts have been made to capture such similarities, to sort words into buckets of similar meaning and organize these buckets in hierarchies of concepts. Not the first but a well-known one was Roget’s Thesaurus. Its modern, machine-readable equivalent is WordNet, a collection of 155,287 words mapped to 117,659 concepts (as of today!). And if we look at what it has to say on “engine,” we will see that it lists “motor” as the direct hypernym of – that is, the concept directly above – “engine.”
S: (n) engine (motor that converts thermal energy to mechanical work)
- direct hyponym / full hyponym
- part meronym
- direct hypernym / inherited hypernym / sister term
  - S: (n) motor (machine that converts other forms of energy into mechanical energy and so imparts motion)
In WordNet lingo that means “motor” is the concept directly above “engine” in the hierarchy – the two are immediate neighbors. So, if we now replace the words in our two trees by their WordNet concepts, the trees will become very similar or even identical in the relevant area. That way, measuring the similarity of text passages will be a lot more precise (as we will see later).
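Here is a minimal sketch of that substitution step, using a tiny hand-made concept table in place of WordNet’s full word-to-synset mapping, and a bag-of-concepts comparison in place of the tree matching described above. The concept IDs are invented for illustration:

```python
# Replace words with concept IDs from a toy hand-made lexicon, then
# compare the two example sentences again at the concept level.
# With WordNet one would map words to synsets instead of this table.
import math
from collections import Counter

CONCEPTS = {
    "change": "REPLACE", "replace": "REPLACE", "replaced": "REPLACE",
    "oil": "LUBRICANT", "lubricant": "LUBRICANT",
    "motor": "ENGINE", "engine": "ENGINE",
}

def concepts(text):
    words = text.lower().strip("?!.").split()
    return Counter(CONCEPTS.get(w, w) for w in words)

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

s1 = concepts("How do I change the motor oil?")
s2 = concepts("Tell me how the engine lubricant gets replaced.")
print(round(cosine(s1, s2), 2))  # → 0.67, vs. ~0.27 on raw words
```

Once change/replace, oil/lubricant and motor/engine collapse into shared concepts, the overlap between the two sentences jumps, which is exactly the effect the synset substitution is after.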
Now, the use of lexicons and syntactic structures will strike some people as a little old-school, pitting Symbolic Processing against Machine Learning.
But we at Nuance think differently: why not combine Machine Learning and symbolic processing? Enriching the raw data with syntactic and semantic information helps to turn mere “big data” (think of it as lots and lots of “bags of words”) into “Big Knowledge.” This can then be applied to HAVAs for a better customer interaction. We will explore what else this means for customer service in our third and last post of this series.
Publish Date: May 18, 2016 5:00 AM
Care team communication technology is key for physician efficiency and patient care, as the current amount of time physicians and nurses waste trying to coordinate care without these tools is staggering
This is part of our series highlighting apps that power physicians with voice using the new Dragon Medical One cloud platform.
Care Thread is on a mission to eliminate miscommunication and medical errors in healthcare. They do this with a secure mobile communications platform for hospitals and health systems that allows clinicians to communicate securely and accurately about patient care in real time from any mobile device or web browser. The Care Thread platform, now integrated with Nuance Dragon® Medical speech recognition, is used by all types of clinically-trained professionals to better coordinate care across the continuum while improving the clinician’s experience and patient care.
Jonathon Dreyer: What challenges in the healthcare industry drove you to build Care Thread?
Nick Adams: We were compelled to reduce the sheer number of serious medical errors that directly result from miscommunication. Growing up in the healthcare industry, my co-founder and I witnessed the staggering amount of time physicians and nurses waste keeping track of information, playing phone tag and generally trying to coordinate care. We knew this had to be a contributing factor to the miscommunication.
JD: What inspires you when creating an app?
NA: We build our platform and communication application for all types of professionals who make up patient care teams. Part of our mission of eliminating miscommunication in healthcare is to build digital tools that actually improve the experience of being a clinician today. That is what drives us in everything we do.
JD: Why is it so hard for clinicians to communicate in healthcare?
NA: It’s not that clinicians are bad communicators; rather, there is so much information to keep track of that clinicians needed a better way to stay on top of it than secure text messaging apps and other disparate modes of communication. We realized communication technologies in healthcare are completely separated from EMR systems, so clinical care teams are stuck using old modalities that compound the challenge by wasting time and creating delays rather than fixing the issue.
JD: How does your app help enhance physician-to-patient communication?
NA: Care Thread saves physicians the time it takes to get in sync with care teams about each patient. By spending less time gathering information and coordinating care, physicians can spend more time providing care to patients, including directly communicating with them. Additionally, the platform can enable care team-to-patient secure communication, accurately show the patient who their care team is, and make the patient feel that their physicians and broader care teams are working together and are in sync.
JD: How will Nuance technology and the power of voice enhance Care Thread?
NA: By integrating Nuance Dragon Medical with Care Thread, physicians will have the anywhere, anytime ability to dictate communication messages, notes, forms and even macro templates back into the EMR. All of the dictation is medically accurate, secure and patient-specific.
JD: What is your vision for Care Thread in the next 5 years?
NA: We see the need for full EMR integration which will enable predictive communications that engage the right people at the right time so every patient is digitally managed. This includes analysis of unstructured text messaging and conversations of care per patient per disease state, to identify the presence (or lack thereof) of pertinent clinical discussion topics.
JD: What do you think the future of mobile health will look like?
NA: The future of mobile health will become predictive, enabling anywhere, anytime patient care that is both proactive and preventative because of the ability of mobile to reach everyone who has a smart device and needs healthcare engagement.
To learn more about Care Thread, please visit http://www.carethread.com/.
To learn more about Nuance Dragon Medical One, please visit www.nuance.com/dragonmedicalone.
Publish Date: May 18, 2016 5:00 AM
Last month, we talked about the reasons millennial employees are more environmentally aware and tech-savvy than other generations in the workplace. We also discussed ways companies can put those strengths to good use to advance green workplace initiatives. Of course, millennials can help inspire and lead the way, but truly achieving a green workplace requires the participation and commitment of every employee.
In the second half of this two-part blog series, we reveal five easy, practical tips that millennials can implement in order to help create a greener workplace.
- Use mobile devices for display. Millennial employees (or anyone for that matter) should challenge themselves to minimize printing and instead preview documents on a smartphone or tablet. Additionally, they can use digital editing tools to suggest changes, insert comments and generally collaborate. Changing these types of behaviors can help the organization reduce the number of printed documents and its use of paper.
- Make the electronic version the first option. In another recent blog post, we talked about circulating invoices through approval as electronic documents rather than paper bills. These may seem like small steps, but if employees can transform a paper process into an electronic one, they can have a positive impact on the workplace. Office memos, company newsletters, even employee reviews are all ripe for an electronic-first approach.
- Challenge the routine of “when in doubt, print it out.” How often have you printed an email because you wanted to keep a copy on file? Or are you guilty of printing documents as a reminder to complete tasks, review information, or take work home? By encouraging other employees to convert paper to an electronic document – just as easily as they could print one – millennials can challenge the routine of printing information for safekeeping.
- Route printed documents to the right recipient. Converting printed documents to electronic ones makes it far more likely that the intended recipient will receive the intended information. Millennials can start by taking advantage of electronic workflows in their organization, helping to prevent excess printing as information makes its way to the right person.
- Encourage other employees to propose green ideas. No one is more familiar with the processes that drive your company than your employees. Employees should encourage their colleagues to come up with good ideas, and the organization should recognize them for these efforts, perhaps by rewarding them for converting previously paper processes to electronic ones. For example, an employee may propose that all agendas for team meetings be shared as electronic PDFs, or a manager may encourage his team to carry smartphones or tablets to meetings instead of notebooks. Ideas like these that come from employees are much more likely to be successfully adopted.
Support green initiatives
By implementing a reliable document management solution and mobile connectivity, you can provide employees with the foundation for a greener workplace and better productivity. Think of the savings that can result from reducing paper, toner cartridges, and other waste materials, not to mention the time saved by eliminating manual processes. Don’t let your company’s inability to move processes into a digital format be what holds it back.
Publish Date: May 18, 2016 5:00 AM
In my last blog post, I explained how we use different types of Neural Networks for both ASR and NLU. We already touched upon DNNs, RNNs, and NeuroCRF, and I did not even mention that we use CNNs (Convolutional Neural Networks) for the “intent” discovery aspect of NLU. Does this sound confusing? Fortunately for end-users everywhere, you don’t have to worry about keeping all of the terminology and machine learning concepts straight – you just see the added benefits of increasingly accurate ASR and NLU.
Now, here is even better news: if you are a developer who wants to create a great app for the Internet of Things using speech technology (such as ASR and NLU), you no longer have to worry about the mechanics behind advanced concepts like machine learning. The reason is that we have done the heavy lifting for you. Through Nuance Mix, we are able to utilize our knowledge and expertise around neural networks of various types and how to apply them to specific tasks in order to create intuitive spoken interactions.
This new developer platform provides you with everything you need to quickly create, assess and refine your own speech application ideas. Perhaps most importantly, it gives you an easy-to-use interface for setting up and maintaining your speech application’s ontology. What this really means is that you can determine what the app is to be used for and provide your own sample utterances as the nucleus of training data. Once you’re past this stage, you can apply the machine learning training machinery with just the press of a button. Now that you have trained models unique to your app (which are basically the NNs we discussed earlier), you can deploy to a cloud-based runtime environment and have your app up and running. Because you don’t have to be an expert in machine learning to use Mix, my colleague Kenn Harper recently called it “the democratization of voice technology.”
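Mix’s internals are not public, so no real API is shown here; but the core idea of the workflow above, training an intent model from a handful of sample utterances, can be illustrated with a toy pure-Python sketch (the intents and utterances below are hypothetical examples, and the bag-of-words matcher is a deliberately simple stand-in for the neural models Mix actually uses):

```python
from collections import Counter

# Hypothetical sample utterances, grouped by intent, as a developer
# might enter them into a platform like Mix.
TRAINING = {
    "turn_on_lights": ["turn on the lights", "lights on please", "switch the lamp on"],
    "set_thermostat": ["set the heat to 70", "make it warmer", "set thermostat to 68"],
}

def bag(text):
    """Lowercased word counts: a minimal bag-of-words representation."""
    return Counter(text.lower().split())

# "Training": build one aggregate word-count profile per intent.
PROFILES = {
    intent: sum((bag(u) for u in utts), Counter())
    for intent, utts in TRAINING.items()
}

def classify(utterance):
    """Pick the intent whose profile shares the most words with the input."""
    words = bag(utterance)
    def overlap(intent):
        return sum(min(words[w], PROFILES[intent][w]) for w in words)
    return max(PROFILES, key=overlap)

print(classify("please turn the lights on"))  # turn_on_lights
```

A real platform replaces the word-overlap scoring with trained neural models, but the developer-facing contract is the same: sample utterances in, an intent classifier out.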
By taking a lot of the hard work out of integrating speech into your app, we allow you to focus your creativity on the app you want to create, an area in which you are the expert. And a creative approach is especially important now, as more and more devices enter the IoT sphere that can make sense of speech and natural language. To help spark that creativity, we are holding a series of “hackathons” and similar events, addressing both the needs of industrial users as well as enabling students to experiment and innovate with speech technology.
We recently partnered with DFKI (the German Research Center for Artificial Intelligence), which is located on campus at the University of Saarland, to host a hackathon of our own. Having been a proud stakeholder in DFKI since 2014, and understanding the way in which DFKI can bring AI into the German industry, we knew we would see some exciting projects. On the first day, we saw great participation by industrial partners who learned first-hand how to use Mix from Mix Masters Nirvana Tikku and Samuel Dion-Girardeau. After a thorough workshop, the group gave it a try on their own, having the chance to test out our web-based developer platform.
The second portion of this event was a student hackathon, which my colleagues Christian Gollan and Hendrik Zender, DFKI alumni, have just returned from. Running from 5:00 PM Friday until 5:00 PM Saturday, the students engaged in a 24-hour coding spree to speech-enable devices using Nuance Mix and SIAM-dp (DFKI’s own dialog platform). Having seen university students create some amazing championship-winning inventions such as Lisa the robot, we had high expectations. We weren’t disappointed, as every team involved came up with impressive solutions that would help address existing problems or areas of need by using speech, natural language and DFKI’s multimodal dialog platform.
Overall, the event resulted in a number of captivating applications that worked to simplify the interactions between people and technology. However, especially of note were our prize-winning teams. Our top winners were as follows: in third place a chatbot that could act as a personal assistant; in second place a speech-enabled robot that could help children learn how to do math; and, in first place, an intelligent home solution that enabled would-be houseguests to use a voicemail box for when no one is home. For the announcement of the winning teams and the award ceremony, we were joined by Professor Wolfgang Wahlster, CEO and Scientific Director of DFKI. He congratulated the students for their excellent results and emphasized the importance of speech interfaces and artificial intelligence for the ongoing transformation of how people will interact with the technology that surrounds them. He also stressed the pivotal role that the collaboration between DFKI and Nuance plays in this transformation.
We agree and think this event gave students with an interest in speech technology the opportunity to learn and work with cutting-edge tools in a fun, yet challenging environment. Besides winning prizes, eating pizza and drinking a lot of coffee, everybody involved exemplified the ways in which tools such as Nuance Mix and SIAM-dp could very well help build the intelligent, interactive solutions of our future.
Publish Date: May 17, 2016 5:00 AM
Customer experience is a prime differentiator for many organizations. Many products and services are becoming commoditized, and today the experience a company provides can set it apart. This was recently showcased in the Temkin Experience Ratings report, which highlighted companies and industries at the top and bottom of the customer experience spectrum and considered their performance based on three components: Success, Effort, and Emotion.
But this got me thinking: why should a company have to wait to see their customer experience ranking until a report is released? So I determined six call center metrics that really matter in judging the effectiveness of your own customer’s experience, so you can track how you’re performing on an ongoing basis.
When we call a company to resolve an issue we just want it fixed. That’s all we, as customers, care about: a successful resolution. The questions any organization needs to ask themselves then are:
- How well does a customer successfully solve their issues in our call center?
- How well do they navigate our IVR?
The ‘Success’ metrics that address these questions are ultimately the most critical areas of focus.
First Call Resolution (FCR): This is one of the most important metrics for any company. First call resolution (FCR) is how well your company takes care of the customer on their first attempt to resolve an issue. It’s calculated as the number of calls resolved on the first contact divided by the total number of incoming calls.
Why it matters – FCR is important both as an indicator of external customer satisfaction but also an internal metric for effectiveness of your company’s processes and technology. Get this wrong and customers must call in multiple times – putting a strain on their patience and your systems.
Containment: This is a surprisingly straightforward measurement. All call center executives want to improve the ability of their IVR to accurately and effectively answer customer questions without the caller having to reach a live agent. That is, keeping callers within the IVR; this is containment. Containment is measured by the number of incoming calls resolved within the IVR as a percentage of total inbound calls. If the IVR is poorly designed and confusing, customers will not progress and instead “zero out” to a live agent. We’ve all been through that scenario.
Why it matters – Getting containment right keeps other metrics on track. Increasing the number of people who effectively self-serve increases their satisfaction and helps the company’s bottom line. Customers are happier, agents are happier due to decreased call volumes, and CFOs are happier due to decreased need for investments.
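The two ‘Success’ formulas above are simple ratios over the same call log, which a short sketch makes concrete (the log structure and field names here are illustrative, not from any particular call center system):

```python
# A hypothetical call log: one record per inbound call, flagging whether
# the issue was resolved on the first contact and whether the call was
# handled entirely inside the IVR.
calls = [
    {"resolved_first_contact": True,  "contained_in_ivr": True},
    {"resolved_first_contact": True,  "contained_in_ivr": False},
    {"resolved_first_contact": False, "contained_in_ivr": False},
    {"resolved_first_contact": True,  "contained_in_ivr": True},
]

total = len(calls)

# FCR: calls resolved on the first attempt / all incoming calls.
fcr = sum(c["resolved_first_contact"] for c in calls) / total

# Containment: calls resolved within the IVR / all incoming calls.
containment = sum(c["contained_in_ivr"] for c in calls) / total

print(f"FCR: {fcr:.0%}, Containment: {containment:.0%}")  # FCR: 75%, Containment: 50%
```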
Nobody wants to spend a ton of time dealing with issues with their bank, insurance company, or TV provider. If this becomes necessary, we want to minimize how much time we put into it. Our effort must be low. And in fact, research shows the lower the effort, the greater the loyalty and satisfaction a customer will show to a company. Consumers like to be delighted with minimal effort and reduced friction on the way to problem resolution.
Misroutes: Put simply, misroutes occur when a company’s IVR sends a caller to the incorrect destination. When someone calls a customer service line and ends up someplace they didn’t intend, it’s usually the work of a misroute. Misroutes occur for a variety of reasons, including outdated technology that incorrectly recognizes speech or confusing phone menus that force annoyed customers to ask for a live person.
Why it matters – Misroutes directly increase the effort required to close a query. Each stop along the way creates more work and extends the call. Key metrics eroded by misroutes include average handle time, containment, first contact resolution, and more. Plus, misroutes dramatically increase costs and irritate customers, decreasing satisfaction and driving churn.
Average handle time: Some calls seem to take forever, going on and on with pushing buttons and repeating information. Looking at an aggregate view of all calls together allows a company to track the average handle time (AHT), or length of time a customer is on the phone. This is a very popular call center metric and is traditionally measured from the moment the customer calls to the time they hang up – including hold times.
Why it matters – In addition to wanting to lower handle times to improve the customer satisfaction, AHT is a prime factor when deciding call center staffing levels. Knowing the typical duration of a call allows companies to successfully model the number of agents they’ll need and how best to balance workloads during peak hours.
We live in a world driven by feelings. Consumers want “Likes” on their Facebook posts. They enjoy videos showing the good in people. They are quick to rave – or rant – on social media about how a company made them feel. Organizations that tap into these emotional needs positively will generate great interest in their brand.
Customer satisfaction: “Cust Sat”. NSAT. CSAT. The shorthand and acronyms vary and every company uses one or another. No matter which one is chosen, the two most important aspects are to 1) know that it’s the measure of the overall satisfaction of the interaction or service and 2) to get it right.
Why it matters – Customer satisfaction is the number one indicator of how well you are doing to satisfy your customers. It’s also a great way to gain insight into customers’ thoughts on the products you offer today as well as identify future direction for product development and feature updates. By keeping tabs on overall customer satisfaction, companies can make adjustments quickly to improve service levels, reduce wait time, or address frequent queries. Call centers are often the front line of issues and companies can get instant feedback as to how they are doing.
Net promoter score: If customer satisfaction is the number one indicator of IF your customer likes you, then Net Promoter Score (NPS) helps you understand just HOW much they like you. Customers may like your product or service after they get off the phone with you. But if they really liked it, they’ll pass it along to friends or post on social media. The Net Promoter Score essentially allows you to measure customer loyalty. It classifies customers into one of three categories:
- “Promoter” – customers are enthusiastic and loyal, continually buy from the company and ‘promote’ the company to their friends and family.
- “Passive” – customers are happy but can easily be tempted to leave by an attractive competitor deal. Passive customers may become promoters if you improve your product, service or customer experience.
- “Detractor” – customers are unhappy, feel mistreated, and their experience will reduce how much they purchase from you.
The Net Promoter Score is derived by subtracting the percentage of detractors from the percentage of promoters to get an overall NPS result.
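As a worked example of that subtraction: NPS is conventionally computed from 0-10 “how likely are you to recommend us?” survey responses, with 9-10 counted as promoters, 7-8 as passives, and 0-6 as detractors (the score bands are the standard NPS convention, not something stated in this post; the sample responses are made up):

```python
def nps(scores):
    """Net Promoter Score from 0-10 'would you recommend us?' responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count in the
    denominator but cancel out of the subtraction.
    """
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# 5 promoters, 2 passives, 3 detractors out of 10 responses:
print(nps([10, 9, 8, 7, 6, 3, 10, 9, 5, 10]))  # 20.0
```

The result ranges from -100 (all detractors) to +100 (all promoters).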
Why it matters – As you’d guess, the more detractors you have, the lower your NPS and the greater the likelihood that your service isn’t very good. Detractors are more likely to spread negative word of mouth, and do so much faster than customers who receive average or great service spread positive word. A continually low NPS will spell trouble and ultimately impact the brand. Companies that successfully track NPS and spark action from a high number of Promoters can improve customer loyalty and drive long-term growth.
Understanding and effectively balancing the metrics based on Success, Effort, and Emotion will help you achieve your IVR goals.
Publish Date: May 17, 2016 5:00 AM
A recent profile in the Wall Street Journal shows how the NBA champion Golden State Warriors, for many years a league doormat, used statistical analysis to determine that their traditional strategy of working all of the 24-second shot clock for a chance close to the basket (“down in the paint” in basketball parlance) was costing them points and victories. Instead, the numbers said players should be attempting more long-range three-point shots. Many more.
Based on this, the Warriors redesigned their offense. Instead of making multiple passes and cuts in an attempt to get a lay-up (Figure 1), the Warriors get the ball to their best three-point shooters as quickly as possible, even if it meant they’d be taking the longer shot (Figure 2).
The result? After winning the NBA title in 2015, the Warriors went on to win a record 73 games during this year’s regular season, and having won their first two series are heavy favorites to repeat as champions.
Very interesting, but what does it have to do with my IVR?
When you examine the design of a typical IVR application, it looks remarkably like the complex, old-style basketball play:
The customer starts at the top, with a prompt that says something like “please listen carefully, our menu options have changed” and then proceeds to hear at least a half-dozen options that might, or might not, match what they’re calling about. If they make a wrong choice, they either have to go back to the top and start over, or they call “TIME OUT” and press 0 for an operator.
When management looks at the performance of such a design, they have to wonder, as the Warriors did, if there is a better way.
There is. It’s called Conversational IVR. Instead of reciting a long list of options, hoping the customer will find what they’re looking for, Conversational IVRs simply ask, “How can I help you?” Using speech recognition and natural language understanding, the IVR is then able to determine the reason for the call and provide either the right answer or a way to get something done.
Statistical analysis of Conversational IVRs shows they increase containment and task completion. We’ve seen companies leveraging Conversational IVR achieve a 5-15 percent increase in containment. Just like a Warrior being given the ball at the three-point line and launching his shot immediately. SCORE!
Of course, neither the Warriors nor the Conversational IVR would win if they didn’t put the right players on the floor. The Warriors have built their team around two of the best long distance shooters in the game, Steph Curry and Klay Thompson, with a supporting cast of top-notch pros who understand the strategy and execute it flawlessly.
In the same way, your Conversational IVR needs the best speech recognition and natural language understanding capability, with applications designed by pros who know how to get the customer from “hello” to “happy” as quickly as possible on a platform that performs without a hitch.
Who are you going to pick for your team?
Publish Date: May 12, 2016 5:00 AM
This post is part of a series that explores the use of human assisted virtual agents, and how machine learning and artificial intelligence are being applied to ultimately improve the customer experience.
Customer support automation is an important playing field for today’s Artificial Intelligence and Machine Learning systems. This no longer means primarily call center automation; rather, users expect and use a mix of channels, including web and chat. And all of these can be automated. Some may ultimately wonder whether a human or an automated customer service agent is better, but from where we sit, it’s not an “either/or.” Instead, human and automated service agents might actually cooperate to get things done for the customer and ultimately offer a better experience. In other words, a human-assisted virtual agent (HAVA).
Before we look at two different ways of doing that, we first need to understand what the actual tasks are that we are trying to solve. And that starts with the fact that customers will have different problems to solve; these are called “intents.” There may be hundreds of them and they will range in complexity. The simple ones are requests for information (“Do you have details on product X,” “how do I switch feature Y on”?), which can be solved by finding the right answer in a data base of documents available to both the human agent and the automated agent. The more challenging ones will involve access to multiple backend-databases and involve doing transactions on customer data (“please change my payment scheme from monthly to quarterly”). In the scenarios we are looking at we may have automated some of the intents, but not all (yet).
In our first scenario, there is a chat going on between the customer and a human agent. The virtual customer service agent sits behind the human agent and follows the conversation; for intents where she can generate the answers, she does so and suggests them to the human agent (for example, by quickly populating the agent’s screen). That way the agent can be much more efficient, and only has to focus on the more challenging aspects. The agent can also check whether the suggested answer is correct, which provides good feedback to our HAVA for getting better at her task (and we’ll come back to that below).
In the second scenario, it is actually the virtual agent who conducts the chat conversation with the customer. Where she is confident she can answer the request, she will do so right away. But for intents not covered by her knowledge base, or when she is in doubt whether she has the right answer, she can involve a human agent in the background. Note that it will still be our virtual agent who gives the answer back to the customer, and this highlights an advantage of this model: the customer will experience an apparently perfect system from day one, even while she is still in her learning phase. And as she gets better, she’ll just have to ask for help less and less, but the customer experience will stay the same. And of course the “getting better” is the other interesting point here.
The virtual agent uses machine learning to get better at things, but most machine learning techniques work in so-called “supervised” mode. That is, not only do you need data to learn from (lots and lots of data, actually), but it also has to be hand-labeled with the right answer. If you want to train a neural net to recognize faces, you need pictures of faces labeled with the correct names; for speech recognition, we use thousands of hours of labeled (or “annotated,” as we call it) speech. The nice thing in our two scenarios is that we get data suitable for supervised learning for free: the virtual agent has access not only to the customer requests, but also to the correct answers as an agent provides them.
So, by creating a useful virtual assistant tool that can be refined by the customer service agent, we’ve solved several typical problems associated with virtual assistants: 1) We’ve reduced negative user experiences, since a human can step in when the virtual assistant invariably makes errors. 2) Customer service agents correct the virtual assistant while doing their typical work: they do not have to be re-assigned to label lots of data to create the virtual assistant; they are just answering user questions. 3) As the customer service agents answer questions, they are also creating labeled training data in exactly the format that sophisticated deep learning techniques require, which will lead to a virtual assistant that performs closer and closer to the state of the art.
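The feedback loop described above can be sketched in a few lines: every time the human agent confirms or overrides the virtual agent’s suggestion, the pair of (customer request, agent’s final choice) becomes a free labeled training example. All names, intents, and record structures below are illustrative only, not any actual Nuance system:

```python
training_data = []  # grows as agents handle chats; becomes supervised data

def record_interaction(customer_request, suggested_intent, agent_final_intent):
    """Log the agent's final choice as the ground-truth label.

    Returns True if the virtual agent's suggestion was accepted, so we
    can also track its error rate over time.
    """
    training_data.append({"text": customer_request, "label": agent_final_intent})
    return suggested_intent == agent_final_intent

# Agent accepts the suggestion: a correct, labeled example for free.
record_interaction("change me to quarterly billing", "billing_change", "billing_change")
# Agent overrides it: still a labeled example, plus an error signal.
record_interaction("details on product X", "billing_change", "product_info")

labels = [example["label"] for example in training_data]
print(labels)  # ['billing_change', 'product_info']
```

Periodically retraining the intent model on `training_data` is what lets the virtual agent ask for help less and less over time.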
In Part 2 of this blog series, we will have a closer look at two specific tasks in this context and how we solve them with machine learning techniques using this data.
Publish Date: May 11, 2016 5:00 AM
Temkin Group recently released their 2016 Temkin Experience Ratings, which grades companies across different industries on the basis of customer experience. It may not come as a surprise to my colleagues in the healthcare industry that health plans were rated lowest of all 20 industries evaluated. (No health plans were even ranked among the top 50 companies, though Tricare and Kaiser can claim the top two spots within the payer vertical.)
This doom and gloom comes as consumers have raised their expectations around consumer experience, based on the online self-service standards now set by retailers, delivery services, banks, and credit card companies. In addition, more than 16 million new consumers have enrolled in health insurance through Affordable Care Act-related programs (like the exchanges), and these members are often novices when it comes to health benefits, needing more guidance than a commercial member and more interaction with the plan, by default. Combine this with the growing emphasis on member engagement and chronic disease management, and health plans have their work cut out for them.
But all is not lost! I thought Kelly Rakowski’s recent article in Managed Healthcare Executive did a nice job of laying out three areas in which health plans can improve the member experience and potentially move up in the Temkin Ratings (perhaps because they’re topics I’ve steadily beat the drum about over the past couple of years):
- Application and enrollment – As Rakowski states, “(payers need to) provide personalized, two-way communication to educate consumers about healthcare plan options that meet their individualized cost and care needs, and guide them through the application and enrollment process.” Plans need to ensure their websites are informative and easy to navigate throughout the “shopping” experience. Some plans are leveraging tools like web virtual assistants and chat for this, and many are employing proactive engagement channels like text, email and automated voice to inform applicants of status changes and actions required.
- Operations and claims management – As it stands, members have a difficult time getting consistent claims information across various “channels”: whether on the phone, in the IVR, or through the member portal. And with the rise in individual members and high-deductible plans, this component is more important than ever. Rakowski is right that “providing clear information and appropriate levels of service and support to answer customers’ questions…is paramount to an excellent consumer experience.” Plans need to employ the same omni-channel approach that banks do, ensuring that data is shared across systems (whether mobile, web, IVR, etc.) and that members have easy access to help through virtual assistants and other intelligent self-service capabilities.
- Care management and member support – Rakowski accurately points out that “(payers) should create vehicles to engage early with members, utilizing technology as well as clinically trained customer care agents…” More members are new to their respective plans these days and those with chronic or other complex care needs must be identified and interacted with early. I’m surprised at how many top plans are just now discussing the use of text messages and automated voice calls to improve enrollment in programs, boost adherence around scheduled nurse and health coach calls, or drive compliance with quality measure–related services. Some progressive plans are using self-service methods like virtual health coaches to provide guidance and education around chronic diseases. And, this is one area where member interaction is frequent enough to warrant voice biometrics (using your voice as your password) as a tool for increased security and enhanced member experience.
There’s a lot at stake here. Exchange members are increasingly fickle and bring a different set of expectations for service than health plans are used to, which results in lower retention rates and higher shopping rates than plans see in their commercial business.
As a JD Power report recently put it, “health plans need to take a more customer-centric approach and keep their members engaged through regular communications about programs and services available through their plan. When members perceive their plan as a trusted health partner, there is a positive impact on loyalty and advocacy.”
Here’s hoping the next year brings an elevated member experience, and a spirited climb up the basement stairs into the daylight!
Publish Date: May 10, 2016 5:00 AM