In recent years, organizations large and small have made the customer experience journey a strategic priority. Why? I think McKinsey said it best: “Many businesses are coming to understand that, increasingly, how an organization delivers for its customers is as important as what product or service it provides.”
I added the emphasis on how because it’s not just the services and products you provide to your customers; it’s how successful your customers can become as a result of their interactions and relationship with you. Healthcare organizations aren’t, for example, purchasing Nuance’s platform and solutions per se; they’re purchasing a path to better patient care and improved financial performance. Their success then becomes our success.
Harvard Business Review notes that touchpoints bring the customer experience to life. Every individual experience and encounter—or touchpoint—your customer has with you must have their success in mind: every phone call, email, in-person meeting, troubleshooting chat with your contact center, your most recent digital ad campaign, exchanges with your billing department, even the signage at the airport. These touchpoints all count, and they all add up. So how can you make your touchpoints more meaningful? McKinsey & Company suggests the following six actions:
A customer’s experience with your organization becomes the sum total of every touchpoint throughout their journey with you. Make your touchpoints count.
The Customer Success blog series with Brad Morrison, Senior Vice President of Nuance Healthcare Customer Success, is an honest take on the ways to build and maintain strong relationships with your customers. The Customer Success blog shares industry insights, lessons learned, and humble advice based on both customer failures and successes.
Publish Date: September 12, 2019 5:00 AM
With iOS 13, Apple introduced a new and interesting capability for Apple Business Chat beta – Chat Suggest. This capability enables enterprises to connect Apple Business Chat messaging to their existing phone numbers, enabling them to seamlessly deflect calls.
If an iOS user searches for a store or the contact information of a business and taps the phone number, the device opens a call sheet suggesting a messaging or call option. If the user selects the messaging option, they are redirected to iMessage, starting a seamless back-and-forth with a virtual assistant or live agent, supported by the media-rich experience for which Apple Business Chat beta is known: access to Apple Wallet for easy payments, inclusion of videos and images, carousels for selecting an option from a menu, access to the calendar and much more – all intended to simplify customer engagement and make the experience with the brand more enjoyable and meaningful.
And the best part? There’s no implementation work needed (if you have Apple Business Chat already in use). The enterprise only needs to register a phone number for the Chat Suggest service. Going forward, every time an iOS user taps that phone number from anywhere on their iPhone or iPad, the device automatically opens up a call sheet with the option to message or call in.
That’s probably the most asked question we hear from enterprises. Messaging is becoming more and more important because consumers don’t want to wait on hold forever anymore. Yes, the phone is still essential for many consumers, but increasingly more just want to get their questions answered – fast and without listening to questionable music choices.
Over 55% of consumers prefer to use a form of messaging to communicate with companies.
2018 Customer Service Messaging Trends Report, commissioned by Nuance Communications
But not every customer is aware that your business offers messaging, so how do you let your customers know about this capability? That’s the whole point of Apple Business Chat Suggest. It lets the user know that there is a messaging option, so they can reach out easily, without waiting on hold, and without the requirement to download an app or accept any further terms of service.
And because Apple Business Chat is built into iMessage and included with every iOS-enabled device, there’s a huge market to address – to be exact, there are 1.4 billion Apple devices, 900 million of which are iPhones.
Adding messaging to your customer engagement strategy couldn’t be easier. And with Nuance, enterprises can leverage our knowledge of best practices and conversational design, in combination with a customer engagement platform that addresses the customer’s inquiries in their moment of need without increasing contact center costs.
Publish Date: September 10, 2019 5:00 AM
Let’s imagine a beautiful day where the sun is shining, your favorite song is cranked up, and you are singing along as you drive home from work on a Friday afternoon. Just as you prepare to belt out the final verse, an emergency vehicle flies by with its sirens blaring. This exact scenario has happened to me: with the music turned up so loud, I couldn’t hear the emergency vehicle’s siren as it approached behind me. In that moment, I felt a surge of adrenaline pulse through me. My mind scattered and my eyes searched around the car… thinking, ‘I need to get over, but where?’ The problem with experiences like this is that emergency vehicles are in a rush to get to their destination, and every second of delay could negatively affect the outcome for the person in need of help. This scenario is a perfect example of when siren detection technology can meaningfully help in an emergency.
Siren detection is Nuance’s solution that allows the vehicle to recognize emergency siren signals and the direction from which the siren is approaching. Once a siren is identified, the media volume inside the vehicle can be lowered, and the location of the siren can be identified and displayed on the infotainment system. This helps any driver – whether listening to music or not – respond to the siren quickly and move so they do not impede the emergency vehicle.
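To make the idea concrete, here is a deliberately simplified, hypothetical sketch – not Nuance’s actual algorithm – of how a single audio frame could be screened for tonal energy at typical siren pitches, using the classic Goertzel tone-detection technique. The sample rate, probe frequencies, and threshold are illustrative assumptions; a production detector would track the characteristic pitch sweep over time and use a microphone array to estimate direction.

```python
import math
import random

SAMPLE_RATE = 8000  # Hz (illustrative)

def goertzel_power(samples, target_hz):
    """Single-bin DFT power at target_hz (Goertzel algorithm) –
    a cheap way to probe for tonal energy without a full FFT."""
    n = len(samples)
    k = int(0.5 + n * target_hz / SAMPLE_RATE)  # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def looks_like_siren(samples, probe_hz=(700, 1000, 1500)):
    """Flags a frame whose tonal energy at typical siren pitches
    dominates the overall signal energy. This is a one-frame toy;
    the probe frequencies and 0.25 threshold are made up."""
    total = sum(x * x for x in samples) * len(samples)  # Parseval scaling
    tonal = max(goertzel_power(samples, f) for f in probe_hz)
    return total > 0 and tonal / total > 0.25

# Simulated 250 ms frame: a 1 kHz tone ("siren") over light road noise.
random.seed(0)
t = [i / SAMPLE_RATE for i in range(SAMPLE_RATE // 4)]
frame = [math.sin(2 * math.pi * 1000 * ti) + 0.1 * random.gauss(0, 1) for ti in t]

if looks_like_siren(frame):
    print("possible siren: lower media volume and alert the driver")
```

A noise-only frame would fall well below the threshold, so the media volume stays untouched during ordinary driving.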
Pulling Over to Make a Difference
A recent siren detection study conducted by the Nuance Design, Research, Innovation, and In-Vehicle Experience (DRIVE) lab investigated drivers’ expected uses of and perspectives on siren detection. The study provided many insights into users’ views on this technology in vehicles, finding that drivers welcome it. One such driver exclaimed, “Great idea! Looking forward to seeing it implemented.”
Participants mentioned siren detection would be valuable when the siren is approaching from behind, when the siren is approaching from an unknown direction, or when the driver is approaching an intersection. “I love this idea. The more informative the alert could be the better, in my opinion. No one ever wants to be the person in the way of an emergency,” said another driver. When vehicles on the road are equipped with this technology, they help not only the drivers but also the first responders and the people calling for help. Imagine you are having an emergency, you require immediate help, and you find out that the emergency responders were late because people wouldn’t move out of their way on the road. Or imagine you are an emergency responder trying to get to an emergency but can’t because people won’t move out of your way. We’ve all seen that person who, for whatever reason, does not pull over when an emergency vehicle is approaching. As my experience shows, delaying emergency services from reaching someone in need is a very unsettling prospect.
Un-Silencing the Siren
When a siren is detected, 67% of participants wanted to be notified immediately and wanted all other audio in the vehicle turned down. Just like my experience blasting the radio, participants said siren detection would be most valuable when there is loud audio inside or outside of the car (music, passengers talking, highway driving, or city noise), in heavy traffic situations, and in suburban or urban settings. They want to know where the siren is, the direction it’s going, and how far away it is. Another study participant said, “I like the idea of this technology, especially if it can detect which direction the siren is moving so as to better inform the driver if they need to move out of the way.”
Similar to other DRIVE Lab studies, we see that drivers want multi-modal interaction: both auditory and visual notifications. Some drivers went a step further and wanted their vehicle to slow down and pull over for them, much as similar technologies like curb detection or park assist help the driver today.
Beyond the convenience of having the car pull over automatically, siren detection gives the gift of time.
Thinking back to that Friday afternoon, being alerted to the siren earlier would have allowed me to keep calm and respond without uncertainty. It is our job as drivers not only to look out for our passengers, but to look out for others around us. Siren detection allows me to confidently say, “your call for help is heard.”
Publish Date: September 9, 2019 5:00 AM
Summer 2019: Contact center transformation
Last quarter, we saw the importance of humans and AI working together to thrive in the digital world. Nuance experts taught us what the human-AI interaction could achieve, and we saw the leading organizations already making it a reality.
This quarter, we’re taking a deep dive into the latest innovations in contact center transformation. You’ll find out how your contact center could be holding you back, and the key strategies for making your transformation a success.
Head over to Nuance IQ to find:
Coming soon: Nuance Innovation Quarterly issue two
But that’s not all; there’s even more to come. Nuance Innovation Quarterly, the sister publication to Nuance IQ, is just around the corner—and it’s filled with more exciting innovations.
This quarter, you’ll get valuable insights from the people and brands forging a path to customer engagement excellence, plus expert advice on the keys to transformation—from APIs to conversational design. And you’ll even get a closer look at the latest Nuance products that could help your contact center evolve and thrive.
Take a look
Check out Nuance IQ today to get the latest insights in contact center transformation—and don’t forget to sign up for the summer issue of Nuance Innovation Quarterly magazine and watch the latest IQ webinar on-demand.
Publish Date: August 26, 2019 5:00 AM
As patients take increasingly active, hands-on roles in their healthcare—truly becoming part of the care team—they need immediate, reliable access to their medical records, test results, and care plans so they can continue to advocate for themselves.
With the Epic MyChart Patient Portal, healthcare organizations enable exactly that: secure, online, on-demand access to patients’ health information. But as we have written over the last several weeks, it isn’t enough to simply turn on the portal and assume patients will have positive, engaging experiences that keep them coming back. From choosing the features that drive adoption to attaining and maintaining organizational buy-in, there can be a lot to manage—including, perhaps especially, the need and ability to resolve your patients’ questions in real time.
Consider the patient who has been waiting for important lab results, tries to log into MyChart on a Saturday evening, and has forgotten their username or password. Likewise, patients may have trouble navigating the self-service tools for refilling prescriptions or requesting appointments. They need to reach out to someone for support in that moment, or chances are good that they’ll leave the site and not return, meaning your organization is less likely to achieve its meaningful use goals.
Thus, the focus of the final chapter in our series, “Making the most of your Epic MyChart Patient Portal,” is to answer this key question: how do you help your patients utilize MyChart whenever, wherever, they need help? The allure of a patient portal is its around-the-clock accessibility, but most IT departments are simply not structured to operate and respond to patients in this way.
The answer lies in a real-time, patient-centric support model. Supporting your patients in this way helps encourage adoption of the key features and builds confidence and trust in a well-defined patient portal and, by extension, your office. When you’re evaluating your options, look for a partner who becomes an extension of your own team and shares your view of patient care. In addition, look for a partner who:
Healthcare is, and always will be, personal. When we have questions about our health, we need answers urgently, and not just during “normal” business hours. Make the most of your Epic MyChart patient portal investment by meeting your patients where they are with timely, effective support as they navigate the valuable tools you’ve given them. At the end of the day, you’ll create happier, healthier patients who are actively engaged in their care.
Learn more about the Nuance Epic MyChart Service Desk, and hear from our customer, Dr. Stephanie Lahr, CMIO at Regional Health, about how the service has improved their patients’ experience using, and adoption of, the patient portal.
Publish Date: August 21, 2019 5:00 AM
Future-proof customer service
Customer expectations continue to rise rapidly, and there’s no sign of that slowing down. Customers are going to continue challenging companies to give them better experiences and make every interaction effortless.
To meet these expectations, organizations need a future-proof contact center designed to deliver intelligent engagements—but it takes hard work to get it right.
Register for our webinar on 14th August at 8am PT/11am ET to hear guest speaker Ian Jacobs, Principal Analyst at Forrester, explain how to transform your contact center effectively. In conversation with Nuance experts, Ian will guide us through the common pitfalls organizations encounter when transforming their contact centers, and show how you can avoid them.
Join us to discover:
Plus, you’ll learn about some organizations that have already undergone transformations, and the results they’ve achieved so far.
Space is limited, so reserve your seat now to get all of these valuable insights and more.
And don’t worry if you can’t make it on the day—register now and we’ll send you an on-demand link to watch whenever you want.
Publish Date: August 8, 2019 5:00 AM
Nuance’s Document Imaging Division is now part of Kofax.
Excuse me for one moment while I move away from my laptop to let the eufy RoboVac work its magic around my desk. In 2019, it’s hardly novel that little robot vacuums wander around hundreds of thousands, if not millions, of homes sucking up dirt and dust, saving time and making indoor air and surfaces cleaner. Over the past forty years, robots have migrated from science fiction books, movies, and television shows to the real world of automated manufacturing, toys, autonomous vehicles and now consumer “tools” like vacuums. We are surrounded by robots and have been for a while now.
Each year, robots become more versatile, useful, and less expensive, enabling people to work smarter, safer, and get more done in less time. Robots are good at vacuuming, completing precision manufacturing tasks, and driving trucks and cars, but can robots help with something like office work?
You may not see them at the coffee station or water cooler, but a growing number of enterprises are rapidly deploying robots in the form of software that automates an array of digital tasks. This category of software tools is referred to as Robotic Process Automation, or more commonly RPA. RPA is one of the fastest growing segments of the software industry, with some analyst estimates showing greater than 100% annual growth over the past two years. If you are not familiar with RPA software, this post will break it down, including why it matters for office work.
While it seems to have come out of nowhere, RPA software has roots that extend back to the 1990s. Software developers used “automation” tools that could be programmed to manipulate another application. A programmer could feed the tool a long list of instructions, and the automation software would follow each step, testing the new application the way an end user would operate it. The automation software would detect bugs and log them for the programmer to fix. Using automation, programmers could let the “bot” test their software while they moved on to another task in parallel. It was not as thorough or reliable as a human quality assurance tester, but it helped speed up repetitive testing tasks.
As automation tools evolved, they gained different capabilities to support a wider array of tasks. One of those capabilities is referred to as “screen scraping”. As the name implies, the software could detect fields and values on a computer screen by tracking and measuring pixels. Once a required field was identified, the software could copy (“scrape”) the field data and move it to another location, like a spreadsheet or database. Most uses of this technology were limited, but for critical tasks it removed the tedious, error-prone job of moving small amounts of data from one application to another.
In the late 1990s, with the arrival of the internet, the browser and HTML (hypertext markup language), there was now a universal platform available for efficiently sharing a lot of new data. As web applications exploded in number and usefulness, a massive number of data sources came online, literally. While those web apps, including sites like Amazon.com, eBay and many more, published a lot of data to the browser, there was no direct way to access it without manually copying and pasting a lot of data, or manually keying it into a spreadsheet or another database. This is where the concept of RPA really took hold.
As HTML standards matured, RPA tools could more accurately and dynamically map the user interfaces for an almost unlimited number of web applications. Once an automation routine was mapped to the coded tags that defined the page, the software could replicate human tasks, entering search queries and then exporting the resulting text, links, images and whatever else the app returned into a spreadsheet or database. This approach came to be defined as “virtual” or “synthetic” APIs (application programming interfaces). Regardless of the name, it served a critical purpose by programmatically, automatically and accurately capturing data from one system and accurately replicating it in another.
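To make the mechanics less abstract, here is a minimal, hypothetical sketch of the label-to-value mapping at the heart of screen scraping and “virtual” APIs, built on Python’s standard html.parser. The page markup, CSS class names, and field names are invented for illustration; real RPA products map interfaces far more robustly and cope with dynamic layouts.

```python
from html.parser import HTMLParser

# Hypothetical order-status page markup; classes and fields are illustrative.
PAGE = """
<table>
  <tr><td class="field">Order</td><td class="value">A-1001</td></tr>
  <tr><td class="field">Status</td><td class="value">Shipped</td></tr>
  <tr><td class="field">Total</td><td class="value">$42.50</td></tr>
</table>
"""

class FieldScraper(HTMLParser):
    """Pairs each 'field' cell with the 'value' cell that follows it,
    mimicking how an RPA bot maps on-screen labels to their data."""
    def __init__(self):
        super().__init__()
        self.records = {}
        self._mode = None     # 'field' or 'value' while inside a tagged cell
        self._pending = None  # last field label seen, awaiting its value

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "td" and cls in ("field", "value"):
            self._mode = cls

    def handle_data(self, data):
        text = data.strip()
        if not text or self._mode is None:
            return
        if self._mode == "field":
            self._pending = text
        elif self._pending is not None:
            self.records[self._pending] = text
            self._pending = None

    def handle_endtag(self, tag):
        if tag == "td":
            self._mode = None

scraper = FieldScraper()
scraper.feed(PAGE)
print(scraper.records)
# {'Order': 'A-1001', 'Status': 'Shipped', 'Total': '$42.50'}
```

The resulting dictionary is exactly the kind of structured record a bot would then write into a spreadsheet or another system, with no change to the source application.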
The inability to synchronize data between different systems has been a major information technology problem for decades across most major industries. As the volume of applications has exploded, so has the interoperability problem, driving up the cost of maintaining systems and making it harder to achieve greater operational efficiency as information becomes locked in system silos. By combining automation, screen scraping, virtual APIs and other capabilities, RPA provides a dynamic middleware platform that effectively unlocks information from virtually any human-accessible source.
Suddenly a whole world of previously hard to collect web and system data was accessible at a dramatically lower cost and in near real time. RPA made it easier to:
With a relatively low cost profile (several products are available as free community editions), enterprises have quickly adopted RPA technology, deploying bots for a wide range of critical applications. Because RPA is adaptable to virtually any interface, enterprises can creatively access a wide range of data sources without making any structural changes to those sources. RPA bots will not transfer data as quickly as two systems with a bidirectional integration, but they can be run in parallel to increase overall throughput. In some situations, banks run very high volumes of automated transaction tasks – in the billions – using RPA bots.
The combination of flexibility, relatively low cost, a rapid rise in structured web-accessible data sources and a never-ending need for accurate, dynamic data has fueled the recent boom in Robotic Process Automation software. You won’t see them sitting in the cubicle near you, but the robots are in the office, helping workers perform many data-centered tasks that were inaccessible before.
In the next installment of this two-part series, learn how RPA capabilities differ from and complement Document Capture software to deliver the most complete information capture solution for structured, semi-structured and unstructured information.
Publish Date: February 14, 2019 5:00 AM
Incident reporting is important in police work and helps keep investigations and cases moving along, but it can also be time-consuming. In fact, close to fifty percent of an officer’s day can be spent typing up reports or entering data into computer-aided dispatch (CAD) and records management systems (RMS), according to a recent survey. Heavy documentation can mean officers stay heads-down in their patrol cars, making them less situationally aware, or back at the station mired in paperwork.
Challenges in police paperwork are nothing new, but there is a new generation of police reporting tools, like speech recognition, that can help. Here are three ways.
1. Improve situational awareness
When conducting tasks like license plate lookups or entering data into the CAD/RMS, an officer shifts focus from his or her surroundings; being heads-down in the patrol car can make officers more prone to accidents – or ambush. Combine this with the poor ergonomics of having to shift and turn in the car seat to enter data into their laptops, and in-car documentation becomes less than ideal. With tools like speech recognition, officers can use their voice instead of manually typing or hunting and pecking on the computer keyboard. They stay heads-up, more focused, and safer on patrol.
2. Improve specificity and accuracy within reports
According to the forgetting curve, within an hour most people remember only 50 percent of the information presented to them, and within 24 hours forgetting rises to 75 percent. Relying on manual documentation alone is risky. Officers, who respond to multiple incidents each day, must rely on memory or decipher hand-written notes from hours before. Both can lead to inaccuracy and lack of specificity, which is a major concern when the outcomes of criminal proceedings are tied to the incident reports officers file. By dictating notes in real time, officers can capture more detail and create a “narrative” of each incident, leading to better reporting.
3. Speed documentation and meet reporting deadlines
Meeting reporting deadlines is the lifeblood of police departments, and if these deadlines are not met, criminal proceedings can be stalled, or worse, abandoned. Speech recognition can help speed report turnaround times. Reports that traditionally took hours to create can be completed in minutes - simply by speaking.
Police paperwork is a challenge that many law enforcement professionals face. New police reporting tools like speech recognition technology can help.
Publish Date: January 22, 2019 5:00 AM
Credibility and trust are instrumental in ensuring consumer loyalty. A customer who believes you and trusts you works for you. They champion your brand and products and bring other folks with them. For a voice-enabled application, this puts a lot of pressure on the virtual assistant’s persona – how it sounds and what it says. As we automate more and more, the virtual assistant is becoming the first “person” a customer talks to at your company, and it’s usually because they have a problem.
Before we dive further in, let me tell a story about a pastrami sandwich, debit card fraud, and how a company won me over with their customer service.
My pastrami sandwich story
On a day like any other, after I made my way through an entire hot pastrami sandwich (it was a ‘cheat’ day… I swear), I waddled back to the office to continue working. On the way, my phone buzzed in my pocket and a text notification told me that someone had withdrawn $500 using an ATM on the Upper East Side of NYC, quite far from the Financial District where I was. As I looked at the screen in mild shock, another text showed up for another $500 from a different location. Panic set in; I immediately stopped waddling to make a phone call, and the person behind me walked right into me, letting out a few choice words as he charged past.
By the time I called the number listed in the texts, thousands of dollars were gone from my checking account, withdrawn from various ATMs around Manhattan. And it was the first of the month, meaning rent was due. My stomach churned at seeing my checking account drained, a feeling only amplified by that greasy pastrami sandwich. Yet, by the grace of excellent customer service, my problem was no longer a problem after only five minutes on the phone. Card cancelled, replacement on its way, and money refunded like nothing happened.
How can I help you today?
I tell this story to drive home how much trust can be built with an easy and successful customer service interaction. All the advertising in the world could not even attempt to build the same level of trust that was instilled in me from that single phone call. Not only did they solve my problem quickly and with little effort on my part, but they took a customer problem and used it as an opportunity to increase my perception of and trust in their brand. And, it all began with an automated voice saying, “How can I help you today?”
Is there a secret recipe for selecting a voice for your virtual assistant? What’s the magic mix of character, pacing and tone that absolutely guarantees a voice that is credible and trustworthy? People form initial judgments on a voice within half a second.1 In addition, we all have different life experiences and cultural backgrounds that shape how a voice sounds to us. Some studies suggest that a higher tone and greater expression through high and low pitch contours are more trustworthy.2 But does that hold true if the person engaged with a virtual assistant just had their checking account fraudulently drained and their adrenaline is pumping?
More than just the right voice
I argue that importance should not be put on the singular act of picking the ‘right’ voice but the many tasks and decisions that go into defining a virtual assistant persona. These decisions will guide a voice talent in how they should speak and steer how a text-to-speech voice could be sculpted. Without that foundation you’re betting on luck when picking the best voice and have little direction during voice production. While a book could be written about this, here are a handful of foundational principles.
Foundational principles to creating your brand’s voice
Picking the ‘right’ voice for your virtual assistant is a daunting task with opinions galore. Rather than get hung up on that specific point, focus on the elements that influence that choice. Above all else, listen to users to understand their problems, and let that lead to a persona that can be relatable to many users. Then, you’re on a path to having a virtual assistant that is credible and trustworthy – a voice that can put a consumer at ease and increase their loyalty, despite their choice of lunch and panic-inducing problem.
Publish Date: January 22, 2019 5:00 AM
According to Nielsen, the average television viewer watches upwards of five hours of television per day, reaching eight hours in some areas. Although, in an omni-channel world, some might consider television simply another channel or avenue for accessing content, it is so much more. Television is a companion, a friend, a babysitter for a mom who needs to make breakfast and, of course, a bird’s eye view of cutting-edge news and world events; to downplay the importance and power of television is unwise.
We can all agree that television is powerful, but have we considered what’s next? The notion of interacting with the television is rather nascent; several programs have scratched the surface with real-time voting or even text voting, but as technology advances our interaction with our favorite screen could be limitless.
With the integration of speech in the consumer remote control, we’ve seen engagement with the television increase upwards of 45 minutes per day. The personalization of the viewing experience with voice makes access and navigation of some 500+ channels manageable and meaningful.
Many people are anxiously anticipating the final season of Game of Thrones. Perhaps you are like me and have yet to watch one episode. Let’s say I decide to check it out and binge-watch all seven seasons on Netflix; then season 8 finally arrives and I do not have HBO. With the simple use of my voice or a click of the remote, I can engage seamlessly with my screen, accessing customer service to determine the additional cost of adding HBO to my plan. I receive a confirmation notification on my screen and I’m ready to watch, never having had to speak to an agent.
Sound too practical and not terribly exciting? Well, how about this scenario…
I’m watching the ABC show Scandal, and I spy an amazing olive-colored crocodile leather handbag on Olivia Pope’s arm. I want it. I want to know all about it: can I afford it, does Amazon have it available with Prime shipping… GIMME NOW. With my voice or a click of the remote I can interact with the television screen to access the metadata and pull up the product details, allowing me to figure out how many coffees I will have to forego to put that bag in my hot little hands.
The ability to access information, engage and purchase is the future of engagement, and I am most excited about the possibilities.
Of course, not to sound shallow and totally focused on material goods: imagine what fundraising and real-time help could be afforded when tragedy strikes, or when Sarah McLachlan’s voice is overlaid with the faces of hurting or injured animals. Again, the possibilities are endless.
The ability to engage around billing and services is very real today. The ability to purchase and engage at a deeper consumer level is making advances every day. Today Nuance is transforming the engagement experience with the power of voice on the remote control with Dragon TV, showing powerful retention and net promoter scores. Additionally, Nuance leads in the customer contact center space, providing both virtual and live experiences to improve engagement and customer satisfaction.
Seamless self-service, rich, targeted engagement and the ability to improve the entertainment and consumer experience is the holy grail for both consumers and service providers. When these pieces all work together, it creates happy customers. Happy customers tend not to churn and tend to spread the word.
Publish Date: November 15, 2018 5:00 AM
There has been a lot of noise around customer engagement lately. New channels for customer service, an increase in self-service, a decline in traditional channel usage, a robot takeover – but what does all of that actually mean?
With new channels comes more choice. Customers have more options to reach out to a brand than ever before. Gone are the days when you could only get an answer if you walked into a store or picked up the phone. Nowadays, we can yell at our fridge to get customer support or use our mobile phone on the way to work (car parked, of course).
In the end, the presence of more channels means more consumers trying to connect with businesses. At first thought, that’s a good thing, as it allows companies to sell more. The problem is that contact centers cannot scale indefinitely. There comes a point where hiring new agents is no longer an option. Yes, technology can be improved to route incoming messages more efficiently, agents can be trained to handle concurrent conversations, and they can be enabled with all sorts of information. But there will be a point when customer satisfaction drops because consumers have to wait too long for an answer.
Automation seems to be the holy grail: let a bot handle all the incoming conversations. Thanks to AI, they're smart enough, right? Not quite. Current technology is capable of handling certain things. Bots, and the more sophisticated virtual assistants, can help enterprises with their workload. But even when they are fed hundreds of chat logs and trained by professionals, they can only handle so much. Businesses will always have some topics that must be handled by an actual human. Hint: that's the elephant!
A McKinsey survey found that 94% of customer care executives believed they would need to hire new agents or train current ones in new skills. That prediction is not far from the truth. Nuance's best practice for customer engagement is the following balance between technology, automation, and humans:
This paradigm shift requires enterprises to rethink their approach to training and managing the agent workforce. New skills must be developed, new processes put in place, and interfaces redesigned to ensure a consistently successful contact center. No need to panic, though. This is not an impossible task. There are experts out there who know exactly what contact center managers are going through and who have tested many options for transforming your contact center. Let's tackle that elephant together and create an experience your customers will love.
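As a rough illustration of that balance between automation and humans, the routing logic might look something like the sketch below. The intents, confidence threshold, and human-only topic list are all hypothetical, invented for this example:

```python
# Minimal sketch of a bot-first contact flow with human handoff.
# Intents, threshold, and topics are illustrative only.

HUMAN_ONLY_TOPICS = {"billing_dispute", "cancellation"}
CONFIDENCE_THRESHOLD = 0.75

def classify(message):
    """Toy intent classifier: returns (intent, confidence)."""
    known = {
        "reset my password": ("password_reset", 0.95),
        "i want to cancel": ("cancellation", 0.90),
    }
    return known.get(message.lower(), ("unknown", 0.20))

def route(message):
    """Bot answers what it safely can; everything else goes to a human."""
    intent, confidence = classify(message)
    if intent in HUMAN_ONLY_TOPICS or confidence < CONFIDENCE_THRESHOLD:
        return f"agent:{intent}"   # escalate to a live agent
    return f"bot:{intent}"         # automated response

print(route("Reset my password"))  # routine request: bot handles it
print(route("I want to cancel"))   # sensitive topic: human handles it
```

The point of the sketch is that the bot and the agent workforce are one system: automation absorbs routine volume, and low-confidence or sensitive conversations are escalated rather than mishandled.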
Publish Date: March 15, 2018 5:00 AM
Last month in Florida, I was absorbing some great discussions with our Executive Client Council (ECC), a group of C-level executives from the nation’s leading health systems. I was closely following a dialog rich in insight when I was struck by two simultaneous thoughts: (1) we should do this more often, and (2) why don’t more marketers do this more often?
“Listen to your customers” is unquestionable for any marketer. Most make at least some attempt to do it. Some do it well, and others not so well.
So that leaves my second thought: Why don’t more marketers do this more often? If you ask the question, the common reply will be “we can’t find the time”. If you push beyond the surface answer, you would also find an unease about what you will hear and an uncertainty if you want to hear it.
If it’s about time, think of the time and resources spent developing products and services that weren’t quite right or were just plain wrong. Then think about the time and money spent digging out of the resulting hole. Can you afford not to take the time? If it’s about being afraid of what you’ll hear because it might disrupt some major product development, then be prepared for the results you get in return — knowing that what you don’t know can hurt you.
We meet with our ECC regularly and it’s fantastic every time. We’ve learned a lot from this group of executives, and adjusted direction on product and technology development based on their coaching. We are committed to making this a more frequent dialog. Our intensive focus on clinical virtual assistant development is one direct outcome from listening to our customers.
Successful businesses are designed around what customers need. So, at the end of the day go beyond just thinking that you’re listening to your customers no matter how well you think you’re doing it. Instead, honestly ask why you aren’t spending more time with them.
Publish Date: March 9, 2018 5:00 AM
New voice and language solutions continue to impact productivity at every level – from improving workflows for document-intensive industries, to simplifying daily tasks.
The automation of these tasks, whether at home or at work, relies on a set of intelligent systems, all of which use complex algorithms driven by machine learning to take a human ability, like language, touch, or a simple gesture, and transform it into an action.
In fact, this was the premise that Dragon Speech Recognition was built upon over twenty years ago: to take the everyday task of typing and transform it into a simpler process by voice.
The first iteration of Dragon Speech Recognition was the "smartest of smart" for its time, taking incoming streams of sound and interpreting them as dictation. What used to take hours, namely typing a document, turned into minutes, all simply by speaking into a computer.
Advances in deep learning technology have turned the complex algorithms our engineers scratched out on their whiteboards back then into reality. Today's intelligent speech recognition engines not only interpret dictation but also understand its context, distinguishing between homophones (for example: to, two, and too). They recognize the difference between dictation and a command, like "open Microsoft Word." The technology learns and adapts the more it's used, picking up the subtle nuances of spoken words. It can even distinguish, and filter out, background noise.
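The context-based homophone choice described above can be sketched with a toy bigram model. The counts below are made up purely for illustration; real recognition engines use far larger statistical and neural language models:

```python
# Toy bigram scoring for homophone disambiguation.
# Counts are invented; real recognizers learn these from huge corpora.

BIGRAM_COUNTS = {
    ("want", "to"): 120, ("want", "two"): 1,  ("want", "too"): 2,
    ("buy", "two"): 80,  ("buy", "to"): 3,   ("buy", "too"): 5,
    ("me", "too"): 60,   ("me", "to"): 40,   ("me", "two"): 1,
}

def pick_homophone(previous_word, candidates=("to", "two", "too")):
    """Choose the candidate word most likely to follow previous_word."""
    return max(candidates,
               key=lambda w: BIGRAM_COUNTS.get((previous_word, w), 0))

print(pick_homophone("want"))  # context favors "to"
print(pick_homophone("buy"))   # context favors "two"
```

Even this tiny example shows why context matters: the three words sound identical, so only the surrounding words can tell the engine which one the speaker meant.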
The power of all of this built-in functionality has propelled documentation productivity further than we could have ever imagined twenty years back.
Hundreds of police departments, whose officers spend 3 to 4 hours each day typing incident reports, are improving their documentation processes tenfold. Law firms, whose clients are becoming ever more tech-savvy, are embracing voice-powered documentation solutions to turn labor-intensive tasks, such as searching documents for information, automating e-discovery, and writing case files and briefs, into seamless workflows. And financial services firms are using the accuracy of speech recognition to mitigate risk and improve compliance in the face of expanding rules and regulations.
For the document-intensive industries that we work with daily, seeing the transformative impact our voice and language solutions are having is just as exciting today as it was twenty years ago when we first started automating talking into text.
Publish Date: March 5, 2018 5:00 AM
Everyone knows that kid in class that knew everything. Constantly raising their hand when the teacher asked a question, silently insisting, “I know! I know the answer! Call on me!” If you don’t know this kid - maybe you were this kid. Well, here he is again, in your adult life. The smart speaker.
The next thing we’re going to see in smart speakers is proactive engagement in conversations. Alexa, Google Home, and all the variants will have an option for them to be proactive and helpful in conversations. They’ll start listening to what people are talking about and get involved in the conversation when it makes sense. Imagine a typical Saturday morning conversation in the kitchen:
Mom: “What time is your lacrosse practice today, Ted?”
Ted: “Uhhh, I don’t know… It’s going to rain all day so it’ll probably be cancelled.”
Alexa: “Ted’s practice is on Google calendar at 1pm today at East Field. The forecast does show heavy rain and thunderstorm around 12:30pm.”
Mom: “Thanks, Alexa.”
Alexa: “No problem.”
This is an example of a smart speaker being helpful when it jumps into a conversation. Smart speakers will use affirmations like “Thanks, Alexa” to decide whether it’s interrupting too much or if it should offer more info.
Clippy was the user interface agent that came bundled with Microsoft Office starting in 1997. It remains, arguably, the best-known and most-hated user interface agent in computer history. It would pop up, try to predict what you wanted to do, and, most of the time, miss the mark. When our smart speakers become proactive and start conversations or interject themselves into ours, they're going to have to strike a balance. Let's call it "the Clippy Balance." Smart speakers will need to listen for affirmations that they're being helpful, and also for negative feedback from users like "Shut up, Alexa!"
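One way to picture the Clippy Balance is a running feedback score: the speaker interjects only while user feedback stays net-positive. This is a minimal sketch, and the phrases, score values, and threshold are all invented for illustration:

```python
# Sketch of "the Clippy Balance": interject only while user feedback
# stays net-positive. Phrases, scores, and threshold are illustrative.

class ProactiveAssistant:
    def __init__(self, threshold=0.0):
        self.score = 1.0           # start mildly confident
        self.threshold = threshold

    def hear_feedback(self, utterance):
        text = utterance.lower()
        if text in ("thanks, alexa", "good point"):
            self.score += 1.0      # affirmation: interject more freely
        elif text in ("shut up alexa!", "stop it"):
            self.score -= 2.0      # rebuke: back off sharply

    def should_interject(self):
        return self.score > self.threshold

assistant = ProactiveAssistant()
assistant.hear_feedback("Thanks, Alexa")
print(assistant.should_interject())   # still welcome to chime in
assistant.hear_feedback("Shut up Alexa!")
assistant.hear_feedback("Stop it")
print(assistant.should_interject())   # now it stays quiet
```

Notice the asymmetry: a rebuke costs more than an affirmation earns, mirroring how an unwanted interruption annoys users more than a helpful one delights them.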
Publish Date: March 2, 2018 5:00 AM
The tech world seems agog with the idea of building everyone’s new virtual best friend. After generations of science fiction work depicting a future when machines with pleasant, reassuring voices easily answer any question and blithely fulfill any request, technology has finally reached the point where this fantasy could soon become reality. Some might argue that soon is right now.
Planners often talk about how our lives move between three primary environments: home, work (or school) and on-the-go. This makes sense for most of us and is useful to consider during product ideation and marketing development. It helps creators imagine how their solutions will solve problems unique to each environment.
Automobiles fall under "on-the-go" for hundreds of millions of people around the world. Technological evolution in them over just the last five to ten years has been truly remarkable. Just one decade ago, even the most advanced vehicles lacked the "intelligence" we commonly see in the market today. They were already mechanical and electronic marvels that could perform many impressive functions, but the application of artificial intelligence (AI), machine learning, dynamic driver personalization, and external data exchange capabilities was still conceptual. Since then, however, advanced driver assistance systems, the Internet, and new human-machine interfaces (HMI) have proliferated in vehicles at all segment levels. The "connected car" period of the last years is quickly morphing into the "smart car" era.
The key element to making cars “smart” is an AI platform that thoughtfully integrates the car’s HMI with vehicle sensors, a panoply of virtual assistants and cloud content, and adapts to the environment and users’ individual preferences and habits. Smart cars must possess an automotive assistant that can seamlessly link and make sense of a variety of inputs and data, from both on-board and off-board. Its value will be judged on how elegantly it communicates with people using speech and how well it understands natural language, while using data from disparate “expert” sources to deliver the right information or action at the right time. And, because it’s optimized for the automotive environment, it knows things such as the fuel or battery charge level and how far it is to the nearest places to top off; or what the most popular local restaurants will be along your route when it’s lunchtime in a couple hours.
To be truly effective, the automotive assistant must access the most appropriate systems for any given situation. It needs to be interoperable with a wide range of systems both within and beyond the car. Specialized bots, virtual assistants, and connected devices are multiplying rapidly as part of the Internet of Things (IoT) revolution. An automotive assistant that can interface with them will drive incredible value for auto manufacturers, dealers, and consumers alike.
Interoperability is a logical end-state that the full IoT ecosystem will eventually need to embrace to meet users’ needs and be successful. Assistants and bots will benefit from communicating together because consumers will ultimately decide they don’t want to be forced to choose — they want to have all options and to enjoy a frictionless experience. One way to think about the relationship between the automotive assistant and others in the virtual realm could be as that of a general contractor and sub-contractors on a construction project. The general contractor may possess the skills to, for instance, design an electrical system or install plumbing, but their primary role is to manage the overall project to ensure it is completed as efficiently and effectively as possible. To accomplish this, the general contractor will leverage their relationships with many specialized contractors who can be brought in at the right time to perform specific tasks expertly and quickly. Similarly, the automotive assistant, while highly capable itself, delivers the best experience for users by intelligently orchestrating all pieces of the smart car ecosystem.
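The general-contractor analogy maps naturally onto a simple orchestration pattern, sketched below. The intent names, sub-assistants, and request fields are hypothetical, chosen only to illustrate the delegation idea:

```python
# Sketch of a "general contractor" automotive assistant that delegates
# requests to specialized sub-assistants. All names here are invented.

def navigation_bot(request):
    return f"routing to {request['destination']}"

def vehicle_bot(request):
    return f"fuel level is {request['fuel_pct']}%"

def smart_home_bot(request):
    return f"turning {request['device']} {request['state']}"

# The "sub-contractors": specialists the orchestrator can call on.
SUBCONTRACTORS = {
    "navigate": navigation_bot,
    "vehicle_status": vehicle_bot,
    "home": smart_home_bot,
}

def automotive_assistant(intent, request):
    """Orchestrate: hand each task to the right specialist."""
    handler = SUBCONTRACTORS.get(intent)
    if handler is None:
        return "sorry, I can't help with that yet"
    return handler(request)

print(automotive_assistant("home", {"device": "porch light", "state": "on"}))
```

As in construction, the orchestrator's value is not doing every job itself but knowing which specialist to bring in, and when, so the user sees one seamless assistant.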
The automotive assistant greatly improves user experiences using two other very important capabilities of advanced AI reasoning: “personalization” and “contextualization.” Personalization concerns learning users’ particular habits and preferences and using this knowledge to make informed recommendations that better support them as individuals. Contextualization concerns the conditions and circumstances that surround the user at a given moment — inside and outside the car — because aspects of both might affect the decision for or against a certain option. For instance, your automotive assistant might learn that you tend to stop for fast food on evenings when you leave your office, have a client meeting on your calendar and don’t have enough time to visit home first. Eventually, when these conditions occur, it may proactively present a few quick-service restaurant options along the route to your meeting, using previous stops at this type of place over time to narrow the recommendations to ones you are most likely to prefer. It effectively solves your problem: “What and where can I grab something I’d enjoy eating, on my way, and in the time I have?”
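The fast-food scenario above boils down to matching the current context against a learned habit. Here is a minimal sketch of that logic; the trigger conditions, restaurant names, and visit counts are all invented for illustration:

```python
# Sketch of contextual recommendation: act on a learned habit only
# when all of its trigger conditions hold. All data is illustrative.

# Personalization: how often past stops were made at each place.
VISIT_COUNTS = {"Taco Stop": 9, "Burger Hut": 4, "Salad Bar": 1}

def recommend_dinner_stop(context):
    """Suggest the user's likeliest stops when the learned pattern matches."""
    # Contextualization: the circumstances that trigger the habit.
    triggered = (context["leaving_office"]
                 and context["meeting_on_calendar"]
                 and context["minutes_to_spare"] < 30)
    if not triggered:
        return []
    ranked = sorted(VISIT_COUNTS, key=VISIT_COUNTS.get, reverse=True)
    return ranked[:2]   # narrow to the most likely preferences

ctx = {"leaving_office": True, "meeting_on_calendar": True,
       "minutes_to_spare": 20}
print(recommend_dinner_stop(ctx))   # ['Taco Stop', 'Burger Hut']
```

The two halves of the function correspond directly to the two capabilities in the text: the visit counts are personalization, and the trigger check is contextualization; a suggestion fires only when both line up.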
Taken together, interoperability and advances in AI reasoning enable automotive assistants to support the needs of people on-the-go in their vehicles better than any other virtual assistant possibly can. They will provide access to the most relevant and timely information — from the car, the environment and the cloud — to help users make better-informed and faster decisions, thereby enhancing their productivity, comfort and safety.
Publish Date: February 16, 2018 5:00 AM