Punxsutawney, a small town in Pennsylvania, draws thousands of visitors every year on February 2nd, when Phil the Groundhog predicts the weather. February 2nd, 2018 will mark the 132nd such prediction. According to legend, if Punxsutawney Phil sees his shadow, there will be six more weeks of winter weather; if he does not, there will be an early spring. The legend originates in German-speaking areas, where a badger used to forecast the weather.
But no matter which animal is used, those predictions are rarely accurate. In fact, Phil is only right 39% of the time. To predict the weather, or anything else for that matter, you need data. And the only data animals have is how much longer they want to sleep. That’s why actual weather forecasts use data from the past to predict upcoming changes in wind direction, likelihood of rain, and so on.
It is much the same for brands that want to use predictive targeting for their customer engagement. Without historical data there is nothing on which to base a prediction. Before thinking about adding a prediction engine to customer service, brands have to take a close look at the data they have available: transcribed call recordings, chat transcripts, customer journey data, and so on. The more the better, as more data allows the prediction to become more accurate over time.
If there is not much historical data available, brands can use current information from their customer engagement tools. For example, implementing a virtual assistant and a live chat in several digital channels allows the brand to gather new data and insights. These can then be leveraged to improve the prediction over time.
The best way to create a great customer engagement experience is to continually gather customer data. Every bit of information gleaned during conversations can be turned into valuable, meaningful insights, which feed an optimization process that lets the brand predict customer behavior better and better.
This information loop can be augmented by humans who help analyze the data, put it into the right context and teach the prediction engine and its underlying machine learning algorithm what to look for. The combination of automation and humans drives higher accuracy and a better experience for the user.
This said, I’ll still be hoping that Phil doesn’t see his shadow.
Publish Date: February 2, 2018 5:00 AM
Working in Marketing requires us to read about what’s going on in the market. This can be challenging at times, sifting through the massive volumes of content, especially when you deal with technology that is a hot trend right now, like AI. But every now and then you stumble over great pieces like an article by VentureBeat’s Blair Hanley Frank, who states, “People don’t go and buy two quarts of AI. They buy a product to solve a problem […]”
We couldn’t agree more. The problem with delivering a great customer experience has always been the same, AI or no AI: understanding how technology addresses customer needs while solving a business problem. Unfortunately, technology can be blinding. It promises so much but, if used incorrectly, can bring a lot of pain. One risk is that your customers may not like it and thus won’t use it, which will most likely drive them to seek alternatives from you or, worse, from your competitors.
Instead of running down the rabbit hole, let’s take a step back and think about the actual business problem. What do your customers want? Most likely they want a fast answer if they have a problem. They also want efficient customer service. No matter if they want to buy something new, add a new feature to their plan or ask a question about the latest bill, they want it done in the easiest way possible.
The first step to addressing either of these is to ensure that connecting with you is easy; therefore, letting your customer search for a phone number or an email address for too long won’t help.
Step two is making sure that existing data about your customers is fully utilized. For example, if you know that your customer called you about the same question three weeks ago, don’t make them repeat the question. Instead delight them by using that knowledge to streamline their engagement. And if you have to transfer them, for example from the IVR to an agent, also transfer the context. Strong integration with your CRM system (or any other system that you use to store customer data) is a must across all your automated or human-assisted interaction channels.
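To make the “transfer the context” idea concrete, here is a minimal Python sketch of a context record handed from the IVR to the agent desktop. All field and function names are hypothetical; no specific CRM or contact-center API is assumed:

```python
from dataclasses import dataclass

@dataclass
class EngagementContext:
    """Context collected in the IVR, passed along on transfer."""
    customer_id: str
    stated_intent: str
    recent_cases: list  # e.g. case IDs from prior calls

def transfer_to_agent(context: EngagementContext) -> dict:
    """Package IVR context for the agent desktop so the customer
    never has to repeat what they already told the IVR."""
    return {
        "customer": context.customer_id,
        "intent": context.stated_intent,
        "history": context.recent_cases,
        "greeting_hint": f"Customer is calling about: {context.stated_intent}",
    }

ctx = EngagementContext("CUST-1042", "billing question", ["CASE-881"])
payload = transfer_to_agent(ctx)
print(payload["greeting_hint"])
```

The point of the sketch is simply that the hand-off carries the intent and history with it, so the agent can open with a relevant greeting instead of asking the customer to start over.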
Now comes the fun part, the one that you’ve probably heard before: using artificial intelligence to improve the customer’s experience. One of the most common scenarios is predicting a customer’s intent. It’s like having a personal assistant that tells you exactly what you need in the moment it is needed. Let’s say you receive a notification telling you about a roaming upgrade to your phone plan (because the system realized that you are going to Europe next month). You call the number that is displayed in the outbound notification, and the IVR greets you with:
“Hi Chris, are you calling about upgrading your plan for your Europe trip next month?”
“Yes, I am.”
“Great! How long are you staying in Europe?”
“About three weeks.”
“We can add the roaming option for you and automatically remove it once you’re back. Do you want to add the option with a start date of February 4th and end date February 25th?”
“Yeah, that would be great.”
“You’re all set, Chris. Enjoy the trip.”
Several things will change as soon as this technology is implemented. First, your customer will view your customer service as both fast and efficient. No need for them to remember to call you - you will proactively reach out ahead of time. Kudos, for sure. In addition, it will help streamline your contact center operations as callers won’t need to take time working through IVR menus or being transferred to other departments. Or better still, they may not even need to call at all. Both of which mean less cost for you. Finally, your own CRM system will become smarter by learning what does and doesn’t work with customers, driving even further speed and efficiency improvements in the future.
Does this all sound like something from the far future? It’s not as far away as you think. The technology exists today to put these solutions into action. Let us show you how we can use AI to improve customer service, streamline your contact center, and create more efficient digital channels. And, of course, become the psychic your customers will love.
Publish Date: January 19, 2018 5:00 AM
A recent Finnish university study on voice biometrics has been making headlines – and most of those news stories have been inaccurately summarizing the results with concerns as in our title above, leading many to believe that cyber crooks can compromise even the best speech recognition systems.
Before commenting on the article and the study, I feel it is important to highlight that Nuance’s voice biometrics solutions have secured over five billion transactions to date, and not once has an impersonation attack been reported. We have conducted several voice impersonation attacks with famous voice impersonators in the US and the UK, and none proved successful.
So why are the news stories missing the mark? What’s the real story? Let’s start with the study’s conclusion.
“The results indicate a slight increase in the equal error rate (EER). Speaker-by-speaker analysis suggests that the impersonations scores increase towards some of the target speakers, but that effect is not consistent.”
So how could the researchers write that “Voice impersonators can fool speaker recognition systems”? To understand that, you need to dig deeper into the study. Here are the actual data points:
So what does this data mean? Let’s start with some definitions.
Text Independent - This is passive voice biometrics, where a voiceprint is created by listening in on a normal conversation and compared to the voiceprint on file.
Same Text - This is active voice biometrics, where the user is given a specific phrase to repeat (often “My voice is my password”). Once enrolled, the user is asked to speak the phrase, and this new sample is compared to the voiceprint on file.
False Accept Rate - The percentage of times a system incorrectly matches an individual to another individual’s existing biometric. Example: a fraudster calls claiming to be a customer and is authenticated.
False Reject Rate - The percentage of times the system fails to match an individual to his or her own existing biometric template. Example: a customer claiming to be themselves is rejected.
Equal Error Rate (EER) - The point on the curve where the false accept rate and false reject rate are equal. In general, the lower the equal error rate, the higher the accuracy of the biometric system. Note, however, that most operational systems are not set to operate at the equal error rate, so the measure’s true usefulness is limited to comparing biometric system performance.
GMM-UBM, i-vector Cosine and i-vector PLDA - Three different algorithmic approaches to voice biometrics. Notice that the latest technology, deep neural networks, is not tested.
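To make the EER definition concrete, here is a minimal Python sketch that estimates it by sweeping a decision threshold over genuine and impostor scores. The score values are invented for illustration and have nothing to do with the study’s data:

```python
def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor scores accepted (>= threshold).
    FRR: fraction of genuine scores rejected (< threshold)."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return the error rate at the
    point where FAR and FRR are closest, approximating the EER."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far, frr = far_frr(genuine, impostor, t)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

# Illustrative scores only (higher = more similar to the voiceprint)
genuine = [0.91, 0.84, 0.88, 0.70, 0.95, 0.73]
impostor = [0.41, 0.55, 0.75, 0.38, 0.72, 0.45]
print(f"EER ~ {equal_error_rate(genuine, impostor):.2%}")
```

Lowering the threshold trades false rejects for false accepts; the EER is simply the crossover point, which is why it is useful for comparing systems but rarely the operating point of a deployed one.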
Now that we have that, the data showcases the following:
Finally, and maybe most importantly, the researchers did not perform the tests with Nuance voice biometric technology. This is evident from the very high EER rates reported by the study as a “baseline” result, ranging from 4.26% to 10.83%. No tests were conducted on deep-neural-network-based voice biometric algorithms, the technology used by Nuance and deployed by scores of enterprises worldwide.
In conclusion: although this topic merits additional research, Nuance will continue to focus on addressing actual fraud attack vectors, including brute-force attacks, voice impersonators and recording attacks, while continuously improving the voiceprint and developing mitigation strategies for future attack vectors, such as synthetic speech, that we believe fraudsters will eventually use.
Contact us if you would like to learn more about the great strides Nuance has made in Voice Biometrics.
Publish Date: January 9, 2018 5:00 AM
2017 was a record year for hacks of personal customer details. These breaches give fraudsters access to our identities including the answers to those annoying security questions. One thing the fraudsters can’t do much with? Voice data. And that is why banks and telcos are increasingly replacing security questions with biometrics.
With a few words of speech, voice biometrics can confirm you are who you say you are at accuracy and security levels better than PINs, passwords and security questions. And it can distinguish recordings from real, live speech – rendering the data useless to fraudsters in the case of a breach.
Conversational AI breakthroughs have led to a new generation of VAs specific to your bank, your telco and your pizza ordering, all providing personalized, concierge-like service. In 2018, this generation of VAs will be made even more effective, through technology called HAVA (Human Assisted Virtual Assistant). HAVA adds a human-in-the-loop capability, first to help answer new questions the VA may not know, but more importantly to provide a learning loop that updates the VA’s “brain” in real time.
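The human-in-the-loop mechanism described above can be sketched in a few lines of Python. This is an illustrative toy, not Nuance’s implementation: questions the VA can’t answer escalate to a human, and the human’s answer is written back into the VA’s knowledge base so the next occurrence is handled automatically:

```python
class HumanAssistedVA:
    """Toy human-in-the-loop virtual assistant: unknown questions go
    to a human, and the human's answer updates the knowledge base so
    the same question is automated next time."""

    def __init__(self, ask_human):
        self.knowledge = {}          # question -> answer
        self.ask_human = ask_human   # escalation callback

    def answer(self, question):
        key = question.strip().lower()
        if key in self.knowledge:
            return self.knowledge[key]       # answered by the VA
        reply = self.ask_human(question)     # human-in-the-loop
        self.knowledge[key] = reply          # real-time learning loop
        return reply

va = HumanAssistedVA(ask_human=lambda q: "Roaming costs $10/day in Europe.")
va.answer("How much is roaming?")          # first time: escalated to a human
print(va.answer("How much is roaming?"))   # now served from the VA's "brain"
```

A production system would of course add confidence scoring, review of human answers before they are published, and de-duplication of similar questions, but the loop itself is this simple.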
In 2017, Facebook Messenger, Line, Kik and more added capabilities for their users to “friend” organizations and companies, and late in the year, Apple announced Apple Business Chat, which will do the same for Apple Messages. In 2018 you will start engaging brands in the same way you talk to friends – in your messaging app, through SMS and even inside your banking and telco apps. And AI will allow each brand’s VA engine to respond to you in a personalized way, referencing past engagements you have had across other channels.
Customer service creates a ton of data. In 2018 this data will be harnessed more than ever to fuel new AI engines. Predictive customer service will let brands anticipate what you need or may do, before you even know, by analyzing and detecting the patterns of billions of customer engagements over time.
Digital customer engagement combined with mobile devices, tablets and data lines will lead to fewer calls. A lot fewer. In 2018 you will engage with a virtual assistant, and if it can’t resolve an issue, you will be seamlessly texting with a live contact center agent. If the issue is really complicated and can’t be resolved through messaging, you still won’t call the 800 number. In 2018, that step will be integrated through advanced technologies like WebRTC and IVR-to-digital, allowing the contact center agent to connect with you by voice or video within the app, on your laptop, even through your TV screen or smart speaker.
Publish Date: January 5, 2018 5:00 AM
“Over” was short for “over to you,” indicating that it’s your turn to talk on a shortwave radio or walkie-talkie (or any half-duplex comm tech, for you nerds out there). Smart speakers are super cool and a step forward in voice – but they’re still half-duplex, clunky, unnatural voice interfaces. We’ll all look back one day and remember how quaint today’s smart speakers were - like we remember Morse code, tape players and VCRs. Try turn-taking in a face-to-face conversation or conference call sometime and you’ll get a feel for what smart speakers, and all voice interfaces for that matter, are missing out on. There’s a whole field of study around the protocols and rules of human conversation, called pragmatics, that studies how humans interact one-to-one, one-to-many and many-to-many.
For example – I’ll say, “Alexa, play ‘Fool in the Rain’ by Led Zeppelin on Spotify,” and wait the requisite 3 seconds of silence so Alexa knows I’m done talking (it might be easier to just say “over”). Then Alexa says, “I’m sorry, I can’t find ‘Fool in the Rain’ by Led Zeppelin on Spotify.” I’ll remember I cancelled Spotify and try to correct myself by speaking over Alexa: “No, play it on Amazon Music.” It’s natural to do this – a person wouldn’t miss a beat having the same conversation.
In addition to the half duplex limitation – Alexa also can’t understand multiple speakers. Even the best user interfaces today employ turn taking to manage the conversations and don’t work at all with more than 2 speakers in a conversation. For example, if my children interrupt Alexa while she’s playing ‘Fool in the Rain’ and ask her to play “Space Unicorn“, a song that can make you insane after hearing it for the 400th time, I typically respond by shouting “Laa Laa Laa Laa!!” to confuse Alexa and keep her playing good music.
Managing the turn taking in a conversation with multiple speakers is no simple task. It requires that you listen while you talk and also respond to visual cues (in a face-to-face conversation). For example, Japanese speakers often produce backchannel expressions such as un or sō while their partner is speaking. They also tend to mark the end of their own utterances with sentence-final particles, and produce vertical head movements near the end of their partner’s utterances. See Turn-taking - Wikipedia for a long description of the complexity. The listen-and-talk problem gets exponentially worse when you add more speakers. A bot will need to know whether it’s having a friendly conversation and should wait until the person is done talking, or whether it’s arguing and should cut into the rant. For more detail on that complexity, read this article.
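Today’s half-duplex assistants sidestep all of this with a crude rule: the user’s turn ends after a fixed stretch of silence. A toy end-pointing sketch over per-frame voice-activity flags; the frame length and timeout values are illustrative, not any vendor’s actual settings:

```python
def end_of_turn(frames, frame_ms=100, silence_timeout_ms=3000):
    """frames: sequence of booleans, True = speech detected in that frame.
    Returns the index of the frame where the device decides the user's
    turn is over, or None if the silence timeout is never reached."""
    needed = silence_timeout_ms // frame_ms   # consecutive silent frames
    silent = 0
    for i, has_speech in enumerate(frames):
        silent = 0 if has_speech else silent + 1
        if silent >= needed:
            return i
    return None

# 1 second of speech, then 4 seconds of silence: the turn is declared
# over at the 30th consecutive silent frame (3 seconds of silence).
frames = [True] * 10 + [False] * 40
print(end_of_turn(frames))
```

Note what the rule cannot do: any mid-utterance pause longer than the timeout cuts the speaker off, and any barge-in resets the counter rather than being understood, which is exactly the half-duplex awkwardness described above.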
Recognizing these shortcomings is the first step in overcoming them. Nuance R&D is working on these problems and others to transform the way people interface with technology.
Stay tuned for parts 3 and 4 as we catalog the technical problems when telling your customer to “talk to the IVR like you would a human”.
Publish Date: January 3, 2018 5:00 AM
Today, many (if not most) companies face a range of challenges in managing IT infrastructure, including systems, applications and hardware. This is especially true when it comes to their fleets of printers, scanners, and multifunction printers (MFPs) as well as any related software or workflow solutions.
To be more specific, these challenges can include:
Or if companies are still relying on outdated technology – especially older printers and MFPs – they may have a much harder time managing and securing their entire IT environment. As a result, they may be subjecting themselves to even greater security risks and potential compliance issues.
The good news is that there are extremely effective ways to overcome all of these challenges, and in doing so, provide better user experiences, workflows, and security.
For example, external terminals, such as the new Nuance® Edge™ for Copitrak terminal, already provide a much better UI and enhance the speed, functionality and quality of related processes, such as scanning.
It’s an important advantage, especially when you consider terminals like this unify the overall experience, which could be different – and confusing – from device to device. For example, according to recent research from Salesforce.com, 83 percent of users report that “a seamless experience across all devices” is extremely important to them.
By giving employees a unified – and much better – UI, external terminals no longer “force” users to adapt to the different screens and steps they’re sure to find in a mixed-MFP environment. This alone helps employees work much faster, smarter and more effectively.
A better, more intuitive UI can also help employees with critical work tasks such as scanning. For example, today’s external terminals can provide powerful tools to fine-tune resolution, DPI, contrast, brightness, auto-color correction and more. These features help minimize time lost in steps like post-scan image processing, freeing employees to become much more productive.
And when it comes to security, external terminals such as the Nuance Edge come installed with the latest Windows 10 operating system and other security tools. This helps any organization administer the latest network security policies to improve overall security and compliance efforts.
Publish Date: November 29, 2017 5:00 AM
Me: “Alexa - what’s the temperature going to be today?”
Alexa: “Right now the temperature is 56 degrees with cloudy skies. Today you can expect clouds and showers with a high of 60 degrees and a low of 44 degrees.”
Me: “What about tomorrow?”
Alexa: [blank stare]
Me: “Ugh - Alexa - what will the temperature be tomorrow?”
Voice as a computer interface has come a long way, but it’s still clunky and nothing like talking to another person. Our amazement with how far the technology has come since voice recognition in IVRs came on the scene in the 1980s can make us forget the remaining problems we have to tackle to get to human-level interactions. In this blog series, I’m going to take each remaining hurdle and talk about where we are today, where we’re going and how Nuance is leading the way.
“Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo.” Believe it or not, this is a grammatically correct sentence and illustrates why automating natural language processing and conversation is hard. If you’re wondering what the Buffalo sentence means you can click the link and read about it (helpful tip - take an Advil). The tl;dr (too long; didn’t read) version is that the word “buffalo” can be a proper noun, noun, or a verb, so the sentence translates to something about how buffalo from Buffalo bully (aka buffalo) buffalo, etc…
This is obviously an extreme example, but it just goes to show that there is plenty of meaning and “nuance” hidden in the words people choose that computers haven’t been “taught” to understand yet.
Here’s an example that may resonate more with English speakers:
SHE never told him that she loved him. (but someone else did)
She NEVER told him that she loved him. (zero times in their entire relationship)
She never TOLD him that she loved him. (she showed it but never said it out loud)
She never told HIM that she loved him. (but told everybody else)
She never told him that SHE loved him. (but that someone else did)
She never told him that she LOVED him. (only that she liked him and thought he was funny)
She never told him that she loved HIM. (she said she loved someone else)
As a live, English-speaking human, you would catch the subtle changes in meaning just by placing inflection on different words. However, artificial intelligence would have to be taught that kind of nuance.
Another great illustration of the complexity of language can be seen in a video of physicist Richard Feynman, apparently being condescending to his interviewer: Richard Feynman Magnets - YouTube. The interviewer is simply asking Dr. Feynman to explain magnetism to him, and Dr. Feynman refuses and dismisses the question, saying that the interviewer won’t understand. The net of the video is that Dr. Feynman can’t explain magnetism in a meaningful way without a shared frame of reference – and he and the interviewer don’t share one. The interviewer doesn’t have the degrees that Dr. Feynman has, so Dr. Feynman equates the problem to explaining to an alien why his wife is in the hospital with a broken leg. Well, she slipped and fell. Why did she slip and fall? Well, she was walking on ice. Why is ice slippery? …etc., on down into deeper and deeper levels of complexity – for seven minutes – and he never answers the magnetism question. (One viewer posted, “This is why no one talks to you at parties.”)
This complexity is at the core of the problem we need to solve for computers to “learn” how to converse with humans. Nuance is making great advances in automating conversation. Currently the state of the art in this area is still Simple Question Answering (essentially Enterprise Search front-ended with Natural Language Understanding). See Paul Tepper’s Post on advances in automating conversation. Nuance is working internally and with research partners on encoding the general knowledge that computers need in order to decipher the buffalo sentence and to have a frame of reference to converse with humans.
So, just in case you didn’t have a frame of reference when reading this blog post, go back and read the Wikipedia entry on the buffalo sentence and watch the Dr. Feynman video. Then you’ll understand the monstrous task we have in bringing voice technology up to human-level interactions.
Next time: Part 2: Sentiment and Emotion in Voice – “Your customer seems angry – umm – now what?”
Publish Date: November 21, 2017 5:00 AM
It seems intuitive that an IVR that features a user-friendly, speech-enabled menu would deliver improved performance and customer experience over antiquated touch-tone systems. Well now we have the research to prove it.
In the past year, Nuance hired a third-party research firm to evaluate the IVR customer experience offered by 50 leading companies in the Fortune 250 to see how well their IVRs perform. The results are both surprising and unsurprising. Spoiler alert: the unsurprising part is how well speech-enabled IVRs performed compared to touch-tone.
Using a rating scale of 1–5, third-party researchers evaluated each of the 50 companies across six key criteria to assess the state of their IVR and their ability to help customers resolve issues quickly and painlessly. The six criteria were:
Researchers compiled all the results across these six criteria and generated an average score for each IVR.
Across all 50 IVRs evaluated, the average score was 2.3, indicating that there is much room for improvement in how IVRs support callers.
The unsurprising results
No surprise to us is that the IVRs in the top five performing industries below scored a whopping 35% higher than the bottom five. Is your company in one of these industries?
What makes the leaders stand out? No surprise that the reason for their improved IVR performance was that they invested in speech-enabling their IVRs at a much higher rate. 67% of the top performing companies adopted speech-enabled IVRs.
Speech-enabled IVRs — whether standalone or combined with Dual-tone Multi-frequency signaling (DTMF) — provide higher quality experiences than DTMF systems alone. As shown in the graphic below, companies with speech IVRs had significantly higher average scores:
The surprising results
The data above was no surprise at all. But what was surprising? Two things stand out:
First, 53% of the companies still employ an old-fashioned touch-tone IVR. Yes, DTMF and “Press 1 for Service” still live on in the majority of companies we called. That’s great news for anyone who loves the ’80s, but not so great for everyone looking to move into the future.
Second, industries that are heavily reliant on their IVR and contact center fell into the lower performing category. Companies that sell Insurance (Financial Services), Healthcare, and Health Insurance all scored significantly below the average. Given how important the phone is for engaging customers in these industries, it is curious to see them perform below average.
Is your company in the bottom tier? Still rely on DTMF? Then please read on!
With the rise of voice-activated smart assistants in our phones, cars, and homes, the power of voice shows no sign of slowing down. So why are your customers greeted with technology from 1988? Your IVR is one of your most important channels, and it makes sense to start the move to speech today. Today’s modern, conversational IVRs use powerful speech recognition and natural language so callers can engage the IVR, simply say whatever they’d like – in their own words – and be directed to the right resource. Imagine your customers’ delight when they can stop pushing buttons and start using their own words.
Check out the full research infographic to review the results in more depth, and then contact Nuance to see how we can help you be a top performer.
Publish Date: November 14, 2017 5:00 AM
It was a month like we’d never seen before. As we watched Hurricanes Harvey, Irma, Jose and Maria impact the US and Caribbean, and a massive earthquake hit Mexico City, a series of questions may have run through our heads. How can we help those people? How do they rebuild? How can we better prepare? As a society, we’ve talked a lot in recent years about upgrading our infrastructure, and that goes beyond roads, bridges and power grids. It’s likely owing to my profession, but I believe that modernizing the communications that can be leveraged during disasters like these can literally save lives in threatened communities.
The timing seems right – what used to be unsophisticated outbound technologies like “robo calls” are now going through a renaissance as more advanced vendors orchestrate multiple proactive engagement channels like text messaging, push notifications, email and automated voice, coordinating with IVR and digital through an omni-channel fabric and improving ease-of-use through cloud platforms. Using outbound notifications before, during and after an emergency like a tornado or flooding should be seen as the first line of defense for local governments. Often, the first thing that happens as regions gird themselves for a disaster is a massive increase in inbound calls to customer service lines. Citizens demand timely answers about what they should do, and call centers can quickly become overwhelmed as wait times grow.
By combining voice, text and other channels in an integrated fashion, residents get the information they need through the channels they prefer, extending the reach of critical messages like incident preparedness, evacuation routes and shelter locations.
We all know this type of communication is important – and may become increasingly vital – so, what should we look for in an outbound communications platform?
We don’t know when or where “the next one” is coming, but we do have concrete steps we can take to limit loss of life and property. Now is the time to take that first step.
Publish Date: November 7, 2017 5:00 AM
Halloween is a time of frights and scares. Zombies, goblins, witches, and monsters are let loose on the public to scare and haunt them. And a good scare is tons of fun this time of year and makes us scream in delight. Good scares get our adrenaline going. But on the flipside, bad experiences that cause us to scream with anger get our blood boiling. That’s never good, but unfortunately it happens every day when consumers receive frustrating and ‘scarily bad’ customer service experiences.
Read on, if you dare, for three of the scariest customer service experiences we believe are guaranteed to make any customer scream.
There it is. Popping out of your phone like a monster bursting from behind a wall. The outdated IVR menu. (Cue the Jamie Lee Curtis “Halloween” scream!) In a world of cool, new voice-enabled applications and assistants, the old-fashioned IVR terrifies your customers.
Nobody wants to wade through endless mazes of touch-tone options and push buttons like it’s 1978. Customers will scream in frustration. Surprisingly, many enterprises are still using this old-fashioned technology today. Our research into the IVRs of 50 of the Fortune 250 shows that a scarily high 53% still employ an old-fashioned touch-tone IVR. Hard to believe and yet so easy to fix.
Today’s modern conversational IVRs delight callers with powerful speech recognition and natural language so callers can simply say whatever they’d like – in their own words – to get directed to the right resource. Satisfaction goes up, and frustrating screams go down.
Nothing sets me off quite like the random challenge questions to prove I am who I say I am. Most of the time they are asking me something I answered five years ago and promptly forgot, or worse, something that is not hard to find out, like my mother’s maiden name. Of course, now that I am on the phone with an agent, I am the one who looks stupid. “I don’t know… Spot?”
Fortunately for everyone, PINs, passwords and challenge questions are a thing of the past. Call centers, IVRs and virtual assistants all over the world are adding secure biometrics to ensure the person is who they claim to be. With secure voice biometrics, customers can simply state a pass phrase they don’t have to remember, or even be recognized from a normal conversation. In addition, new biometric modalities enable people to use their face, fingerprint, iris and even unique behaviors to prove their identity – all without having to memorize anything! “I just remembered. Rover!”
Too many times a customer must call a company to enquire about an issue they are having. And nothing causes greater frustration and a maddening scream like a customer service agent acknowledging, “Oh yes. I see your flight is delayed.” Huh? So they knew about it? Then why didn’t they let the customer know in advance and prevent the phone call?
It doesn’t need to be this way. We live in a world of powerful push notifications through multiple channels where sending a text or email costs a fraction of a penny. Why don’t more companies get onboard with proactive outbound communications? Many do but only for limited scenarios like overdue bills or appointment reminders. They fail to connect the whole customer experience due to siloed service channels.
A proactive outbound platform connected to the inbound IVR platform ensures customers are notified in advance of issues like flight delays or suspicious charges on their credit card. A well-timed text or email ensures the right outcome and also increases customer satisfaction by preventing them from calling your contact center, which reduces operational expenses. Today’s consumers want to be notified proactively; they opt in for communications that help reduce their effort. New technology allows organizations to both notify consumers and engage in a two-way conversational text dialogue using smart, natural language understanding.
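Connecting the outbound and inbound platforms as described can be as simple as recording each proactive notification in a store the IVR consults on the next call. A minimal Python sketch with invented names; no specific notification platform’s API is implied:

```python
notifications = {}   # customer_id -> list of events already sent

def notify(customer_id, channel, message):
    """Send a proactive message and remember that it was sent,
    so the inbound side can reference it later."""
    notifications.setdefault(customer_id, []).append(
        {"channel": channel, "message": message})
    # ... hand the message off to the SMS/email/push gateway here ...

def ivr_greeting(customer_id):
    """On an inbound call, reference the most recent outbound event
    instead of making the caller explain from scratch."""
    sent = notifications.get(customer_id)
    if sent:
        return f"Are you calling about this? {sent[-1]['message']}"
    return "How can I help you today?"

notify("CUST-7", "sms", "Your flight BA212 is delayed 40 minutes.")
print(ivr_greeting("CUST-7"))
```

The shared store is the piece most siloed deployments lack: outbound fires its messages and inbound never hears about them, which is how “Oh yes, I see your flight is delayed” happens.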
Being scared and having a good scream is fun – in the right situation. Calling service channels should not elicit a response best reserved for a Friday night horror flick. With the right investments and planning, organizations can offer their customers a service experience that leaves them beaming - not screaming.
Publish Date: October 31, 2017 5:00 AM
As you wish. That’s the catch phrase that resounds with Princess Bride fans as the movie’s 30th anniversary has recently passed and TCM plans a special showing in theatres on October 15. One of the reasons this silly romantic comedy has become such a cult classic is that it is littered with phrases like “Inconceivable!” and “Wuv, tru wuv,” that find a resting place in the back of our minds.
One of my favorites is “As you wish” – that statement of devotion and tru wuv that Westley proclaimed to Princess Buttercup. Wouldn’t it be refreshing to get customer service that said, “As you wish”? There is a small handful of establishments that follow that mantra in how they treat their customers, but this philosophy is pretty hard to find in voice or online customer service. Customers are underwhelmed by the digital experiences most brands deliver. Only 7% of brands are exceeding customer expectations, and part of the reason is that customers’ queries are not being answered or resolved. To many customers, “As you wish” customer service is “inconceivable!”
Is an “As you wish” customer service “inconceivable” to your customers? If you want to discover more about transforming your brand’s customer experience to meet consumer expectations, contact us today!
Publish Date: October 13, 2017 5:00 AM
Some companies are a natural when it comes to communicating with their customers. They’re attractive, pleasant, interesting… but are they memorable? They may be the life of the party, but are they gaining customers that trust and value them? Are they making more than just acquaintances - but, rather, loyal customers?
A meaningful customer experience can be achieved by applying the same people skills individuals use in real life to create lasting relationships. Below is a simple list of engagement rules that apply not only to our personal relationships, but also to enterprises that want to build a solid customer foundation. Brands can utilize these rules not only in their live chat programs, but also in virtual assistants and outbound communications.
What kind of people skills does your company have? How effective is your customer engagement strategy at making your customers feel valued? Applying these engagement rules can help in creating meaningful interactions, thereby building loyal customers.
Publish Date: October 12, 2017 5:00 AM
Remember when you were a kid in school and the teacher would put a gold star by your name on the good work chart? There’s something about seeing that shiny little sticker that fills you with pride in the work you did and determination to be even better. It also shows your peers that you’ve got brains! It’s too bad that, as adults, we can’t receive gold stars every time we succeed at something. Or can we…
As a provider of customer engagement solutions and services, Nuance and our customers around the world receive “gold stars” when leading research/analyst firms recognize the innovative, customer-focused work we do.
Case in point: A leading delivery service – and a Nuance customer – was just named winner of the 2017 Opus Research Intelligent Assistant Award. The delivery brand uses our AI-powered Virtual Assistant Nina to provide a high level of personalization across more than 79 countries in 15 languages. Nina lets customers get answers to their questions quickly and easily through the digital channel. The award-winning enterprise and Nuance were honored for delivering an engaging customer experience using natural language understanding, machine learning, and artificial intelligence through the virtual assistant on the brand’s website.
Using intelligent automation and conversational interaction, the Nuance Nina-powered virtual assistant can field frequent shipping questions from customers. In just a year and a half of deployment, in North America alone, the virtual assistant is yielding impressive results, including:
Why is it so rewarding for Nuance to have a customer receive such an honor? The Opus Research Intelligent Assistant Awards recognize leading brands who are utilizing virtual assistants to redefine digital commerce and customer care. That’s right. We’re redefining digital customer care!
At the same awards ceremony, another Nuance customer was recognized with an Intelligent Assistants Award: Australian Government agency IP Australia was honored for their integrated digital strategy, using their virtual assistant “Alex,” deployed in partnership with Datacom.
Launched in May 2016, ‘Alex’ leverages Nuance Nina to engage customers directly on IP Australia’s website and Facebook page, providing answers to questions and continuously learning from customer queries. As the Australian Government’s first integrated Intelligent Assistant and web-chat digital experience, Alex has had a significant impact on IP Australia’s digital engagement strategy. In 2013, only 12% of the agency’s 800,000 customer interactions a year utilized digital channels; that figure has since grown to its current level of 99.6% digital adoption. To date, Alex has supported over 50,000 customer interactions and helped maintain IP Australia’s customer service satisfaction ratings at over 84%.
Further optimizations to Alex include the introduction of Nuance Nina Coach in July 2017, a first for Asia Pacific. Nina Coach moves Alex into the next generation of Human-Assisted Virtual Assistants powered by Artificial Intelligence, enabling Alex to seamlessly bring in a live agent to assist with tricky questions. This action is recorded, analyzed, and folded back into Nina’s semantic brain, making the NLU technology smarter and more accurate over time, so the virtual assistant knows the answer on its own moving forward.
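The escalate-and-learn loop described above can be illustrated with a short sketch. This is not the Nina Coach API; the `KNOWN_ANSWERS` store, `ask_agent` callback, and log format are hypothetical stand-ins for the general human-assisted pattern.

```python
# Sketch of a human-assisted virtual assistant: answer from a known set,
# or hand off to a live agent and fold the agent's reply back into the
# knowledge base so the assistant can answer on its own next time.

KNOWN_ANSWERS = {
    "how do i file a trademark": "Use the online trademark application form.",
}


def answer(question, ask_agent, log):
    """Return an answer, escalating to a human when the assistant is unsure."""
    key = question.strip().lower()
    if key in KNOWN_ANSWERS:
        return KNOWN_ANSWERS[key]
    reply = ask_agent(question)   # seamless hand-off to a live agent
    log.append((key, reply))      # recorded and analyzed offline
    KNOWN_ANSWERS[key] = reply    # folded back into the assistant's "brain"
    return reply
```

The key property is in the last two lines: each escalation shrinks the set of questions that require a human, which is what lets the assistant grow smarter and more accurate over time.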
But wait! There’s more!
Nuance won an award ourselves! At the AI Summit, San Francisco, we received the 2017 AIconics Award for Best Intelligent Assistant Innovation. The AIconics are the world’s only independently judged awards for practical applications of AI in business. The awards recognize the achievements and advances of the firms pushing the development of these burgeoning technologies forward, offering a level playing field on which Silicon Valley giants and cutting-edge start-ups alike can showcase their work from the last year.
So… with three awards that recognize our work in redefining digital customer care, what can this tech company do? Well, we give ourselves three gold stars!
Publish Date: October 5, 2017 5:00 AM
When the question for your enterprise business is no longer “To bot, or not to bot” but instead is “Which bot?”, where do you start to find the answer?
First, you must understand what large enterprises require in a chatbot. Consider these 7 guidelines for choosing a chatbot for your enterprise brand.
With customer loyalty and revenue at stake, selecting the right chatbot for your organization can make or break your customer service success. Using the above guidelines can get you off on the right foot towards your selection. If you want to learn more and dive deeper into this process, join us for the upcoming webinar, Key Considerations for Selecting the Right Chatbot for Enterprise Customer Service on October 3, at 11am ET/8am PT. Register here!
Publish Date: September 28, 2017 5:00 AM
Recently, Paul Tepper, Head of Nuance Communications’ Cognitive Innovation Group (CIG), was interviewed by AI Business, a content portal for the latest news deciphering the impact of Artificial Intelligence (AI) in business. Paul sheds light on how AI is transforming the way businesses interact with and understand customers, while providing insight into the opportunities and challenges the industry faces moving forward.
Here are the highlights of this very informative interview:
“AI is the greatest tool for unlocking the vast and unprecedented pools of unstructured data. … It has the potential to remove the friction we see today across a wide array of customer experiences.”
“AI can bridge the gap between increasing consumer demands and a strained customer service model,” and waiting on hold for an agent will soon become a “thing of the past.”
“Predictive AI will enable us to know what a customer is calling about before they even say anything. … Conversational AI will maintain context across multiple interactions and channels.”
Paul sums up the power of conversational AI. “Speech enables people to talk to devices hands-free, without needing a screen. This is especially helpful when your hands are busy, but in general, it enables people to talk to devices the way they talk to each other in the most natural, human way. Today, Automated Speech Recognition (ASR) systems are as accurate as humans or beyond human accuracy.”
Paul shared some thoughts on an important area of discussion – the need to safeguard, regulate, and control AI. Paul believes that much of the public fear today is overblown: “Again, we are still far away from ‘general AI’ achieving human-level intelligence as AI today and for the foreseeable future will be great at focused tasks.”
He stresses that we must take measures to keep secure the large volumes of data on which AI is trained.
Paul also reveals the power of the Nuance Omni-Channel Platform and highlights Nuance Dragon Drive and Nina as AI examples.
The article was written in anticipation of the AI Summit San Francisco, September 27-28. Yann Motte, Vice President, Strategy and Business Development, Cognitive Innovation Group, will be presenting at the AI Summit on the topic of “Making AI for Consumer Engagement Real.”
Stay tuned to What’s Next to get Yann’s insights and takeaways from the conference!
Publish Date: September 18, 2017 5:00 AM