Nuance - ContactCenterWorld.com Blog Page 5


How our voices age

One of the anxieties I’ve often heard expressed about voice biometrics is how the technology accounts for the natural aging of our voices. Through our personal experience, we all know that the voice we had as a child is quite different from the voice we now have as an adult. Fortunately, as I’ll demonstrate with a series of examples, voice biometrics is quite indifferent to the age of our voice!

To prove this point, I performed several tests using the voices of well-known actors who have a wealth of voice recordings available in the public domain. In full disclosure, to perform these tests I had to disable Nuance’s standard playback detection algorithms in our voice biometric system. Performing voice biometric verifications with recorded audio would clearly not be feasible in a real-world deployment, as I’ve written about in a previous blog, which you can read here.

The first test that I conducted involves my childhood action movie idol, Arnold Schwarzenegger. The Austrian-born star of the Terminator film series, who would later become the governor of California, has an instantly recognizable voice. Our very own brain-powered voice biometric engines can easily identify his voice, whether we are listening to a rerun of the 1984 movie The Terminator or a recent interview featuring Mr. Schwarzenegger. So, given this, how does a voice biometric engine perform? To find out, I enrolled 40 seconds of his voice from an interview Mr. Schwarzenegger gave in 2015 that was available on YouTube. I then ran a voice biometric check on three seconds of Mr. Schwarzenegger’s voice from the movie Pumping Iron from 1977, also available on YouTube. Despite a 38-year difference between these two recordings, the voice biometric engine had no trouble recognizing that this was the same person, at banking-grade security settings.

Now, this first test was very favorable because, even though there was a 38-year difference between the two clips, in both cases Mr. Schwarzenegger was in his adult years, when the voice changes very little. When Pumping Iron was filmed, Mr. Schwarzenegger was 30 years old, and in 2015, when the interview was recorded, he was 68.

The real challenge is how voice biometrics performs during the two periods of our lives when our voices change more rapidly: our teenage years and the latter years of our adult lives.

To explore this question, I performed a voice biometrics test with another famous actor whose voice is instantly recognizable as well, Morgan Freeman. Born in 1937, Mr. Freeman has blessed us with a wealth of quality acting over a period that exceeds five decades. In 2017, Mr. Freeman will be celebrating his 80th birthday. In this test, I enrolled Mr. Freeman’s voice in one of our biometrics programs with 40 seconds from the movie The Execution of Raymond Graham, produced in 1985 when Mr. Freeman was 48 years old. I then passed three seconds of Mr. Freeman’s voice through the system, taken from a recently produced National Geographic series titled The Story of God, filmed in 2016 when Mr. Freeman was 79 years old. Excerpts from this series can be viewed on National Geographic’s YouTube channel. Once again, age did not impact the performance of the voice biometric engine; it validated Mr. Freeman’s voice at 79 as belonging to the same person as Mr. Freeman’s voice at age 48, despite the 31 years separating these two recordings of his voice. Once again, the system was set to banking-grade security performance levels.

However, there is a period during our lives when our voices do change in a material way, and that is during the transition from childhood to our adult years. You may nevertheless be surprised how robust voice biometrics can be, even during a period of what we perceive as rapid change in our voice. To illustrate the point, I performed a test with the voice of Candace Cameron Bure, the actress who rose to fame playing the role of D.J. Tanner in the American TV series Full House. I chose Ms. Bure because she started acting in Full House as a child, at the age of 11, and ended as a young adult at age 18. This provided me with yearly voice samples as Ms. Bure matured to adulthood.

To perform the test, I enrolled 40 seconds of Ms. Bure’s voice from an episode of Full House in season one, which was recorded in 1987. I then performed verification tests with three seconds of audio from each subsequent season, through season eight, when Ms. Bure was 18 years old in 1994. Even in this test, despite a seven-year difference between the enrollment audio and the last verification audio, the voice biometric engine had no issue identifying Ms. Bure’s voice. As with previous tests, the system was configured to banking-grade security levels. In fact, it wasn’t until Ms. Bure reached the age of 21 in 1997 that the voice biometric engine was no longer able to match a voice sample, taken from her performance in the movie NightScream, to her voice sample from the age of 11. The voice biometric engine concluded that there was approximately a 90% probability that these two voice samples belonged to the same person. To achieve a banking-grade level of performance, the probability needs to exceed 99%.
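For readers curious how that final accept/reject decision works mechanically, here is a minimal sketch of the threshold logic described above, written in Python. The probability scale, the 99% cutoff and the function names are illustrative assumptions drawn from this post, not Nuance’s actual API.

```python
# Illustrative sketch only: compare the engine's match probability against a
# configurable security threshold. Names and the probability scale are assumed.

BANKING_GRADE_THRESHOLD = 0.99  # accept only above a 99% match probability


def accept_verification(match_probability: float,
                        threshold: float = BANKING_GRADE_THRESHOLD) -> bool:
    """Accept the claimed identity only when the match probability clears the threshold."""
    return match_probability > threshold


# Ms. Bure's age-21 sample scored roughly 90% against her age-11 voiceprint:
print(accept_verification(0.90))   # False -> rejected at banking-grade settings
print(accept_verification(0.995))  # True  -> accepted
```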

There is, however, a solution to even this voice-aging challenge: a capability in the solution called smart adaptation. It automatically adapts the voiceprint on file for an individual with each successful authentication to the system, without compromising security. As such, in the example with Ms. Bure’s voice, if her voice was enrolled at age 11 and then heard again at age 18, the voiceprint would have been automatically adapted, so that when her voice was verified again at age 21 it would have been successfully matched. The cases where a person’s voice is enrolled as a child and is then only heard again as an adult will, in most use cases, be extremely rare. In such cases, the individual’s voice will need to be re-enrolled.
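To make the adaptation idea concrete, here is a toy sketch: after every successful authentication, the fresh sample is blended into the stored voiceprint so it tracks gradual changes in the speaker’s voice. The vector-averaging approach and all names are assumptions for illustration, not how Nuance’s engine is actually implemented.

```python
# Toy sketch of voiceprint adaptation; random vectors stand in for real voiceprints.
import numpy as np


def adapt_voiceprint(stored: np.ndarray, new_sample: np.ndarray,
                     verified: bool, weight: float = 0.1) -> np.ndarray:
    """Blend a freshly captured sample into the stored voiceprint,
    but only when the caller was successfully verified."""
    if not verified:
        return stored  # never learn from a failed or suspicious attempt
    return (1.0 - weight) * stored + weight * new_sample


voiceprint = np.random.rand(256)   # enrolled years ago (illustrative)
new_sample = np.random.rand(256)   # captured during a successful verification
voiceprint = adapt_voiceprint(voiceprint, new_sample, verified=True)
```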

These examples show that age is, for virtually all practical use cases, a non-material factor in the performance of voice biometrics. One could enroll in a voice biometric system at age 30, and then verify for the first time 40 years later at the age of 70, with the same level of security across all ages. Indeed, our voices change very little during our adult years. In cases where children’s voices need to be enrolled, smart adaptation can automatically address the changing voice characteristics that occur naturally during our teenage years. Age may be a very sensitive topic, requiring tact when the subject arises in conversation, but to an adaptable voice biometric engine, your voice is wonderful no matter what your age.

Source: http://whatsnext.nuance.com/customer-experience/voice-biometrics-aging-customer-authentication/

Publish Date: January 26, 2017 5:00 AM


Dragon, do you speak my dialect?

In a couple of recent blog posts (here and here) we looked into variation between individual speakers and several factors contributing to it, but one important aspect of human language worth diving deeper into is dialect.

So what is a dialect? A tricky question to answer, and one that can get you into political trouble in some areas of the world! In the past, central authorities were often skeptical of communities that claimed to have their own (regional) language, preferring to speak of a mere “dialect.” Conversely, smaller countries with a big neighbor often insisted they spoke their own language, not just a dialect of the neighbor’s language. Luckily, linguistic variation today is often seen as a precious treasure of cultural heritage, but in many places Max Weinreich’s summary that “a language is a dialect with an army and navy” is still valid. Avoiding those issues, I will use “dialect” in a pragmatic way, also encompassing regional languages and accents.

Looking back over the more than 20 years I have spoken to customers and others about Automatic Speech Recognition (ASR), the most frequently asked question has definitely been, “Do your systems speak dialect X?” – where “X” may have been Bavarian, Scottish English, Swiss German, Canadian French, or many other examples.

After many centuries of authorities trying to discourage the use of dialects, today many people are actually proud of their ability to speak a dialect. Recall how, during a trip to some faraway place, you recognized that somebody came from the place you were born just by listening to how they spoke – it’s a welcome feeling. Even governments exploit this today: the German state of Baden-Württemberg (which prides itself on being the birthplace of many inventors and scientists, like Carl Benz, Johannes Kepler and Albert Einstein, AND is also the home of the Nuance Ulm office) coined the slogan: “We can do everything. Except [speak] High German.”  Obviously the slogan is not quite true, in that most speakers of dialects also speak the “standard” form of their language and apply what linguists call “code switching.”  Depending on the social setting, speakers switch between standard language (in a formal setting) and dialect (at home or with friends) and back. Dialect, similarly, can be a tool with which you can signal to somebody that they are welcome in your home, or that they will remain a stranger because they don’t speak your dialect. The same mechanism may be at work in those numerous radio spots or YouTube videos where people make fun of ASR, which supposedly does not speak a dialect – see for example here and here.

The second reason why people may have doubts about ASR working well with dialect may also be related to the long history of dialects not being an acceptable language to use in school (at least in some countries). Clearly dialects deviate from the rules of the standard language as codified in the grammar book, and that somehow encouraged the myth that dialects do not have any rules, are “irregular” and are necessarily difficult to capture in a machine. But from a linguistic viewpoint, that is really just a myth: granted, dialects sometimes don’t have a written form, but for linguists spoken language is more important anyway, written language being only a secondary derivation. And in the spoken form, dialects are as regular as any other language; they are neither worse nor more difficult, neither better nor easier, than “standard” languages.

Machine Learning, especially Deep Learning based on Neural Nets, can deal with the variety of having several dialects and a standard form in one population. As long as you make sure all dialects are reflected in your training data (and we make sure they are – for the UK, for example, we use more than 20 defined dialect regions), the resulting models will reflect all those ways of pronouncing the phonemes (or sounds) of a language. We make sure to include words that are special to a dialect (again using the UK as an example, different areas refer to a bread roll as a cob, a barm cake or a bun), and where pronunciation differences go beyond isolated phonemes, we reflect that in the pronunciation dictionary.
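As a rough illustration of what dialect coverage in a pronunciation dictionary can look like, here is a toy lexicon in Python in which one written form carries several pronunciation variants. The phoneme strings and entries are simplified examples, not Nuance’s actual data.

```python
# Toy pronunciation lexicon: each entry lists simplified phoneme strings so the
# recognizer can score every dialect variant of a word.
lexicon = {
    "heathrow": ["hh iy th r ow", "iy th r ow", "hh iy f r ow"],
    "cob": ["k oh b"],               # East Midlands word for a bread roll
    "barm cake": ["b aa m k ey k"],  # North-West England
    "bun": ["b uh n"],
}


def pronunciations(word: str) -> list:
    """Return every pronunciation variant known for a written word."""
    return lexicon.get(word.lower(), [])


print(pronunciations("Heathrow"))
```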

For instance, our UK English language pack recognizes 52 different pronunciations of the word “Heathrow” so our airline customers can cater to those whose first language isn’t English. When differences become too big, we create separate models in some cases. Users of Dragon speech recognition software can choose between variations of English and between Flemish (for Belgium) and Dutch (for the Netherlands).

Occasionally this is done “under the hood,” so to speak. Even in the Dragon US English version, there are several dialect models. We use a classifier (another application of machine learning) to detect which “package” fits best to the user’s dialect and use that for recognition (if you are interested in a more academic text on how to do this, this PhD thesis is an in-depth study of how to deal with Arabic dialects). We also verify that it works by measuring accuracy gains per variant; for example, Dragon Professional Individual English has an accuracy improvement (over the previous version) of 22.5% error reduction for speakers of English with a Hispanic accent, 16.5% for southern (US) dialects, 13.5% for Australian English, 18.8% for UK English, 17.4% for Indian English and 17.4% for Southeast Asian speakers of English.
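A simplified sketch of that “under the hood” selection might look like the following: a lightweight classifier scores a short audio sample against each dialect package, and recognition proceeds with the best-scoring one. All names and interfaces here are invented for illustration; they are not Nuance APIs.

```python
# Illustrative sketch of choosing a dialect-specific model package.
from typing import Dict, Protocol


class DialectModel(Protocol):
    def score(self, audio: bytes) -> float: ...     # how well the package fits the audio
    def recognize(self, audio: bytes) -> str: ...   # transcription with that package


def pick_dialect_package(audio: bytes, packages: Dict[str, DialectModel]) -> str:
    """Return the name of the dialect package whose classifier score is highest."""
    return max(packages, key=lambda name: packages[name].score(audio))


# Usage (illustrative): packages might map names like "us_general",
# "us_southern" or "hispanic_accent" to per-dialect models; then:
#   best = pick_dialect_package(first_utterance, packages)
#   transcript = packages[best].recognize(full_audio)
```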

Finally, as I mentioned in this blog post, we have adaptation to help us with the challenge: dictation software like Dragon will adapt over time to a user’s specific dialect. When usage deviates from how we thought the system would be used during training, ASR may not work for every dialect all of the time. However, speech recognition accuracy across a number of languages has risen considerably, to upwards of 99%, as evidenced by the broad and global integration of our cloud-based ASR and NLU, used by thousands of apps in cars, IoT devices, smartphones and more.


Have a look at the heat map in the original post (speech recognition usage by country), showing where there are people who use our technology successfully. This also holds when we drill deeper into Scotland vs. England.

Linguistic variety is as important to us as it is to you, which is why we support more than 80 languages (including regional languages like Catalan and Basque, which we developed in cooperation with regional governments), and as we have seen in this blog post, we do a lot more to cover variation and dialects beyond that number. So, we welcome the challenge of dialect – even if it comes in the form of a YouTube spoof.

Sources:

speech-recognition-usage-by-country © Nuance Communications, Inc.

Source: http://whatsnext.nuance.com/in-the-labs/speech-recognition-understands-language-dialects/

Publish Date: January 26, 2017 5:00 AM


Achieving IVR success in 2017: Obstacles and opportunities

The new year brings the promise of fresh ideas, opportunities for change, and the commitment to work towards elusive annual goals. Many people know the feeling: while January starts with enthusiasm, reality kicks in by March and progress is sidetracked. The same holds true for business. Each year businesses hope for huge new technology trends that will sweep through their industry, driving improvements in time to market, product innovation, or customer service. But often these trends are beaten back by real-life roadblocks that impede progress.

As we start 2017, let’s look at six factors shaping progress on the IVR – three potential obstacles and three positive opportunities that will impact companies when driving change.


Obstacles preventing IVR improvements

There will be hurdles on the path to any goal. Many times, the roadblocks a business faces are similar to those we face with personal resolutions. The top three obstacles to IVR success that Nuance sees include:

  1. “Good enough” mindset – It’s easy to avoid making a change if you think everything is fine. Many enterprises invested in their IVR in the past and now believe one of two things: either their systems are okay as-is, or new technology will not drive further improvements. They feel the company has achieved maximum benefit and containment goals with its current IVR and don’t see how to create incremental value. This attitude leads to an IVR stagnating and failing to meet consumer desires. In reality, significant advancements have been made in automation, speech recognition, prediction and analytic capabilities that allow companies to create intelligent, conversational IVRs that exceed caller expectations.
  2. Continued investment in other channels (mobile, web, SMS) – Enterprises are encouraged to service multiple new touchpoints, yet their budgets remain flat or see limited growth. Companies put investment into the cool new technologies and, as a result, the IVR suffers and falls further behind. Rather than seeing it as a zero-sum game, forward-thinking companies are using IVR investments to create links and common experiences between various channels for a superior experience that callers will notice.
  3. Slow market adoption of new IVR technologies – No matter how impressive new advancements appear, it’s human nature to take a wait-and-see approach before making a move. Many new IVR technology trends such as human-assisted artificial intelligence, analytics, and prediction are shown to add value and improve experiences. But if the industry or peer companies aren’t seen adopting new features, it’s natural for companies to hold back. Consider forging a different path and piloting new technologies in the IVR – your results and customer satisfaction just might surprise you.

Opportunities driving improvements in IVRs

To counter any obstacle, it helps to have tailwinds pushing you along. It’s much easier to make progress on resolutions if there is support and encouragement along the way. Organizations have roadblocks but there are also positive industry trends propelling them to improve their IVR and customer service. Nuance is finding three key trends that will bring about IVR improvement:

  1. On-going need to continuously improve customer satisfaction – With so much technology at their fingertips, the customer has never been more powerful. Texts, tweets, Facebook posts, and even Snapchat rants can hurt a company much faster today than even a year ago. An old, outdated IVR with cumbersome menus that can’t effectively recognize callers will make any customer roll their eyes. The customer’s push for faster, quicker, and easier options is moving companies to finally make the upgrades they’ve long pondered.
  2. Desire to create one cohesive cross-channel experience – Businesses know that customers don’t exclusively call the IVR for support. They first visit the website, try the app, or check the Twitter feed. But does your IVR know about these customer interactions? Customers expect a similar experience across these channels. A slick app experience followed by a clunky, outdated IVR that sounds like it’s from 1987 won’t cut it. Today, more and more companies are investing in the work and technology to build an omni-channel experience that knows where their customers prefer to engage and, most importantly, can offer a common, intelligent experience within each channel.
  3. Increase customer use of self-service and reduce calls to live agents – These are two trends tightly tied together in a yin and yang partnership. On the one hand, we are a self-service nation that seeks to remove any friction that slows progress towards resolving our issue. In parallel, the businesses we love encourage more self-service to reduce their operating costs and remain competitive. The faster they get our issues resolved, ideally without tapping a live agent, the better it is for the company’s bottom line. But this only works if the self-service they implement actually helps the caller. An intelligent, modern conversational IVR knows who is calling, has context, and can take action just like talking to a live agent. Creating this balance will prove mutually beneficial for both customers and enterprises.

For any roadblock – be it personal or business – there are ways to overcome it. For the contact center and the IVR, broad market trends are giving companies a reason to revisit their existing systems. Forward-thinking companies see the opportunity available and adopt new technologies to offer a modern, conversational IVR that breaks down old stereotypes and delivers an improved customer experience.

Source: http://whatsnext.nuance.com/customer-experience/obstacles-and-opportunities-to-achieving-ivr-success/

Publish Date: January 12, 2017 5:00 AM


Is your business feeling a little groggy? Wake it up with a conversational IVR

Whether you’re a night owl, pulled an all-nighter or spent your evening tossing and turning, waking up in the morning can be difficult. Millions of people around the world turn to a big cup of coffee to give them that extra boost of energy necessary to keep them alert, productive and focusing on what’s important.

What’s coffee got to do with customer service?

Well, much like drinking a cup of coffee in the morning to keep you moving, one of the biggest cities in the country is getting a boost to its customer service experience with the help of a conversational IVR (interactive voice response) and natural language understanding. Like coffee, a conversational IVR can help keep city call centers alert and on their toes, even on the groggiest of days.

The need for more intelligent self-service options

Many contact centers use outdated IVR systems or extensive phone trees that lead to unnecessarily long wait times for callers, lengthier calls and high call volume. Call centers, which are responsible for a wealth of information, face a growing challenge when it comes to providing a quality experience: expanding customer expectations. Misroutes and multiple call transfers – and the frustrations that they bring – are growing less acceptable by the day. Call centers can no longer get by with phone trees or antiquated systems to service customers. If there was a place where a cup of caffeine-like efficiency could help, look no further.

Customers expect companies to be available all day, every day, and require answers to their questions more quickly than ever before, on whatever channel they prefer. More than 87% of Americans revealed that customer service has a significant impact on their decision to do business with a company, and 66% of consumers reported cancelling a service or ending a relationship with a business because of one bad service experience. The customer experience matters.

So, companies are looking for ways to meet customer expectations and provide easy, fast and convenient ways for people to access information.

Intelligent self-service is the answer.

Almost 60% of consumers feel automated self-service options have improved customer service and nine out of 10 consumers say they use automated self-service systems to complete transactions, so they don’t need to speak with a live agent.

See improvement with natural language understanding and conversational IVR

By implementing Nuance self-service technology in your contact center, you can engage callers in easy-to-follow conversation that directs them to the right information quickly – and with fewer misrouted calls or extensive department transfers. By engaging callers in conversational dialogue that allows them to speak naturally, natural language understanding helps direct callers to the right information faster and more easily. In fact, 73% of consumers feel that interacting with an automated phone system they could converse with as if it were a live agent would significantly improve the experience.

Additionally, a conversational IVR with natural language understanding improves more than just the customer experience. It frees up call center agents to focus on more complex and rewarding interactions as their time spent managing simple tasks and transferring calls is significantly reduced.

Seamless and intelligent customer experiences are key to strengthening current relationships and establishing new ones. Using a conversational IVR can help improve your customer experience and bring you more business. Even though IVR, like coffee, cannot solve your problems alone, it acts as a fundamental boost that can make your business more efficient, nimble and proactive to the needs of your customers and employees alike. So drink up, because conversational IVR from Nuance is here to help.

Source: http://whatsnext.nuance.com/customer-experience/conversational-ivr-helps-customers-find-city-information/

Publish Date: January 10, 2017 5:00 AM


Relief for 7 IVR headaches

The IVR may not grab attention like digital channels (mobile, web), but it’s still the preferred method for consumers to resolve issues and a critical part of a company’s customer service strategy. However, far too many companies today continue to manage their IVR as a siloed channel that’s working okay and doesn’t warrant incremental investment. But customers notice when the IVR is the weak link in their customer service journey – and they don’t like it.

Organizations need to take a hard look at the experience they are offering their customers in the IVR in comparison with their other service channels. The infographic in the original post illustrates seven ways an IVR may be guilty of causing headaches for customers. See if any of these happen in your company’s IVR, and how to administer the right relief to drive improvement.

And if you’re interested in learning more about how to refresh your IVR, check out this report from Frost and Sullivan.

Source: http://whatsnext.nuance.com/customer-experience/relief-for-seven-ivr-call-center-headaches/

Publish Date: December 22, 2016 5:00 AM


You had me at “Hello”: Jerry Maguire shows us the power of voice

This month marks the 20th anniversary of the 1996 film Jerry Maguire. The movie, featuring Tom Cruise, Renee Zellweger and Cuba Gooding Jr., tells the story of a smooth sports agent who experiences a moral epiphany which spins him into a professional and personal crisis. Jerry Maguire has been solidified in the annals of popular culture for its wealth of memorable quotes from, “Show me the money” to, “Help me, help you.” But perhaps the most enduring scene of the movie comes at its finale, when Dorothy (Zellweger) responds to an emotional monologue from Jerry (Cruise) and utters that famous line, “You had me at Hello.”

Powerful words, right? And it got me thinking. I work at a technology company that works extensively with businesses trying to improve their call center experience. We understand that making a good first impression is critical and that businesses must capture a customer’s attention immediately. As Malcolm Gladwell noted in his book Blink: The Power of Thinking Without Thinking, people make instantaneous judgments. Gladwell writes, “decisions made very quickly can be every bit as good as decisions made cautiously and deliberately.” Essentially, people have a habit of deciding if they’re with you or against you in a matter of seconds. So, the question presents itself: How do we get customers at “Hello?”

To create a compelling dialogue that engages customers instantly, businesses need to create a human-like experience. But in an automated IVR context, how exactly do companies do that…without a human being? The answer is simple: a high-quality text-to-speech (TTS) solution. Through natural-sounding speech output, TTS gives a voice to the enormous amount of data in today’s automated world.

Beyond its technical capabilities though, TTS is particularly unique because it elicits an emotional response. It’s a “heard” product. People identify with the voices they hear. It’s the person on the phone that the caller is screaming at. It’s the person on the phone that the caller is happy with. And that voice has the potential to drive and improve the customer’s experience – and the potential for long-term customer loyalty if you can get it right. TTS allows businesses to deliver expressive voices that match the needs of customers and establish an authentic connection.

Additionally, TTS is inherently customizable. Just like an actor on a film set, we can “direct” TTS to do and say what we want, how we want it. We can tailor the intonation of voices based on the company, country or customer needs. This could mean sculpting a phrase to be more vibrant, changing the way a unique product name is spoken, or deciding exactly how to say “Hello.” And when companies can deliver what their customers specifically want, it helps solidify a lasting first impression.
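As one concrete (and hypothetical) example of such “direction,” standard SSML markup is a common way to shape pacing, emphasis and pauses in synthesized speech. The greeting below and the commented synthesize() call are placeholders for illustration, not Nuance’s actual API.

```python
# Illustrative only: an SSML-marked-up greeting that slows the voice slightly,
# emphasizes the thank-you and inserts a short pause before the question.
ssml_greeting = """
<speak>
  <prosody rate="95%">
    Hello, <emphasis level="moderate">thanks for calling</emphasis>.
    <break time="300ms"/> How can I help you today?
  </prosody>
</speak>
"""

# audio = tts_engine.synthesize(ssml_greeting, voice="en-US-female-1")  # placeholder call
```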

What’s more, TTS technology has improved substantially over the last several years. Gone are the days when it sounded distractingly stiff and robotic. Our TTS at Nuance sounds human-like because we’ve spent a significant amount of time recording with voice talent and refining the product. People often label TTS as simply a “computer voice.” But there’s a person behind each one, talking into a microphone. And the advances are truly audible. TTS is sounding more and more human (without entering the uncanny valley) as it becomes increasingly intelligible, accurate, and conversational.

People regularly make judgments in as little as five seconds. And that’s no different in the IVR, as customers will decide – almost instantly – whether they like the voice they hear. By leveraging TTS, businesses can start off strong in the customer relationship through natural, human-like conversations that resonate with customers. It’s not always easy winning people over. But with the right tech, you can have them at “Hello.”

Source: http://whatsnext.nuance.com/customer-experience/text-to-speech-you-had-me-at-hello/

Publish Date: December 21, 2016 5:00 AM


How to interact with online customers safely and efficiently

What are two things that bother you the most when contacting customer service? Chances are you would say having to remember your username and password for each account and being put on hold to wait for the next available representative. But with many brands, you will not have to experience those concerns anymore. You may have even experienced one or the other of the following innovations that have made customer engagement worry-free.

Voice biometrics

Voice biometrics is transforming the contact center experience. And as we race towards one billion enrollments and a market size of $44.2 billion in 2021, we’re seeing security and ease of use as major drivers for adoption.

  • Better security for the consumer: Think of knowledge-based security as sprawl. When one level becomes ineffective, another level is added. PINs become passwords, and passwords then require security questions to back them up. This is hard and stressful work for the customer, and it puts them further and further away from completing their intended task. Voice biometrics does away with all this. It uses the customer’s unique voiceprint for authentication. It can be passive, where the user can say anything and their voice gets matched to a voiceprint. Or it can be active, where the caller is asked to recite a passphrase (a minimal sketch of both modes follows this list). Either way, it’s a natural, effortless and much more accurate way to authenticate.
  • For the corporation: Knowledge-based security is easily compromised. The four-digit PIN is the weakest credential as it’s often shared and a brute force attack can quickly compromise it without any knowledge of the legitimate account holder. Passwords and security questions can be successfully answered with simple web searches of the account holder. Voice biometrics cannot be compromised in this way. Because a voiceprint is a hashed string of numbers and characters, a compromised voiceprint has no value to a hacker. Not only that, each time a fraudster speaks within an IVR, call center or mobile app, they leave behind their own voiceprint that can be used to proactively keep them out of the system and even alert law enforcement. The power of the voice really is in your hands.
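As promised above, here is a minimal sketch of the two modes, under assumed names. A real voice biometrics product exposes its own enrollment and verification APIs; everything below is a placeholder for illustration.

```python
# Illustrative sketch of active vs. passive voice biometric verification.

def verify_active(engine, caller_audio, claimed_id,
                  passphrase="my voice is my password"):
    """Active mode: the caller repeats a known passphrase for comparison
    against the stored voiceprint."""
    return engine.verify(claimed_id, caller_audio, expected_text=passphrase)


def verify_passive(engine, conversation_audio, claimed_id):
    """Passive mode: free speech from the ongoing call is matched to the
    stored voiceprint, with no passphrase required."""
    return engine.verify(claimed_id, conversation_audio)
```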


Virtual assistants

Virtual assistants are becoming more popular for brands as consumers realize their convenience and brands cash in on the savings they provide. In fact, unique active consumer virtual assistant users will grow from 390 million in 2015 to 1.8 billion worldwide by the end of 2021. Here are two of the major reasons why this growth is occurring:

  1. Instant gratification: In our always-on, next-day delivery, 24/7 news cycle society, we’re used to getting what we want, when we want it — and if we can’t get what we want from one company, a reasonable substitute is often just a click or call away. Enterprises are under increasing pressure to provide instant, personalized, intelligent answers to any – and all – questions consumers have.
  2. Operating costs: Operating a large contact center is expensive – and growing more expensive by the day. More than 50% of customers call customer service when a company doesn’t reach them first, and according to Forrester, a typical transaction completed via a live agent costs upwards of $12 per call! Adopting a virtual assistant can significantly cut costs of each customer interaction, letting you reallocate that spend to other operational costs.


The best of both worlds

As mentioned before, you may have at one time or another experienced either a virtual assistant or authentication through voice biometrics; but have you experienced both together? Have you interacted with a virtual assistant and been asked to verify your account just by speaking or using face recognition?

Now you can!

Adding secure authentication to virtual assistant engagements allows enterprises to address a broader range of customer questions without requiring complex passwords or PINs. Nuance has offered this benefit through Nina ID since 2012. Nina ID uses voice biometrics to confirm the identity of the user by the sound of their voice, while at the same time fighting the increase in fraud that today permeates not only online channels, but phone, mobile, SMS and more. Meanwhile, the customer is still receiving the best in personal self-serve assistance via Nina.

And now, Nina ID 2.0: In addition to using your voice for authentication, you have the option to simply take a selfie. Nina uses AI-powered voice biometrics and face recognition to confirm the identity of the user by the sound of their voice and/or their face. Clearly, Nuance has the customer in mind when creating customer experiences, as is evident in Nina ID 2.0. With immediate engagement, strong security, seamless authentication, and active fraudster detection, customers can feel confident in their relationship with a brand.

To read more about this exciting news, find the Nuance press release on Nina ID here.

Source: http://whatsnext.nuance.com/customer-experience/nina-id-means-virtual-assistants-voice-biometrics-secure/

Publish Date: December 16, 2016 5:00 AM


The key market trends driving adoption of virtual assistants

Virtual assistants are rapidly gaining traction in both consumer and enterprise markets. A recent report from Tractica forecasts that unique active consumer virtual assistant users will grow from 390 million in 2015 to 1.8 billion worldwide by the end of 2021 – increasing the market revenue from $1.6 billion to roughly $15.8 billion!

Why the increase in adoption?

Social and economic forces, and advances in technology, are driving demand for personalized customer service that can be delivered more quickly and in an automated fashion over the customer’s channel of choice. Additionally, improving the level of customer care is now a top priority for many enterprises.

According to Deloitte Research, 85% of customer service organizations view customer experience as a competitive differentiator. A simple search on LinkedIn reveals hundreds of customer success executive titles and companies, indicating a recent trend in the enterprise to elevate customer success programs. Seventy-seven percent of these enterprises expect to maintain or grow the size of their team during the next 12-24 months. And last, but not least, 82% view accuracy and quality of information as the most important attribute of customer experience.

While businesses understand the need to improve the customer experience in order to increase customer satisfaction and retention, they seek to balance the cost of providing quality service with the constant business objective of controlling costs.

5 factors that contribute to growth

  1. Instant gratification: In our always-on, next-day delivery, 24/7 news cycle society, we’re used to getting what we want, when we want it — and if we can’t get what we want from one company, a reasonable substitute is often just a click or call away. Enterprises are under increasing pressure to provide instant, personalized, intelligent answers to any – and all – questions consumers have.
  2. Mobile phone proliferation: In 2016, nearly 60% of the world’s population has access to the internet from their mobile smartphones. We use our phones for everything from ordering food to filing an insurance claim. Our expectations for self-service extend to these devices where screen real estate puts strict restrictions on website design and access to information.
  3. Conversational, interactive speech: Thanks to the proliferation of well-known consumer brands releasing virtual assistants, we can now use our voices for everything from ordering pizza to turning on lights, to purchasing a plane ticket. Natural language understanding allows us to “talk normally” while interacting with intelligent systems to get answers faster and provide a more comfortable experience.
  4. Social media: The popularity of social media channels such as Facebook gives consumers more ways to communicate with the enterprise, seek information, and instantly share their opinions with hundreds of thousands of followers. Increasingly, these channels are also seen as a source of revenue growth. In fact, 66% of Facebook Messenger users also shop online. The bad news is that many companies can’t keep up with the bigger conversation volumes. Nuance’s virtual assistant Nina integrates seamlessly with Facebook Messenger, helping businesses cost-effectively stay on top of social media interactions.
  5. Operating costs: Operating a large contact center is expensive – and growing more expensive by the day. More than 50% of customers call customer service when a company doesn’t reach them first, and according to Forrester, a typical transaction completed via a live agent costs upwards of $12 per call! Adopting a virtual assistant can significantly cut costs of each customer interaction, letting you reallocate that spend to other operational costs.

In the end, increasing the ratio of self-service to live service will have a dramatic impact on cutting costs, and if done using state-of-the-art technology and best practices, will not negatively impact customer satisfaction. Given the evidence at hand, it’s no surprise that the intelligent virtual assistant market has so much potential.

Source: http://whatsnext.nuance.com/customer-experience/virtual-assistants-adoption-rising/

Publish Date: December 9, 2016 5:00 AM


Addressing privacy concerns with speech recognition

In an age of talking machines and artificial intelligence, where virtually everything is connected, data increasingly takes a central role in the efficacy of these systems.  These systems offer tremendous benefits to people and society.  They also raise important privacy considerations for industry participants, including Nuance.

Today a number of news articles were published concerning a complaint filed by certain consumer protection groups with the U.S. Federal Trade Commission (FTC) related to data privacy, and specifically related to what information is being collected from children through voice-enabled toys manufactured by one of our customers.

Nuance takes data privacy seriously.  With that in mind, we would like to share a handful of important points with our customers, investors, media and our employees.

  • We have not received an inquiry from the FTC or any other privacy authority regarding this matter, but will respond appropriately to any official inquiry we may receive;
  • Our policy is that we don’t use or sell voice data for marketing or advertising purposes;
  • Upon learning of the consumer advocacy groups’ concerns through media, we validated that we have adhered to our policy with respect to the voice data collected through the toys referred to in the complaint; and,
  • Nuance does not share voice data collected from or on behalf of any of our customers with any of our other customers.

We have made and will continue to make data privacy a priority.

For Media:

Richard Mack – Richard.Mack@nuance.com

Rebecca Paquette – Rebecca.Paquette@nuance.com


For Investors:   

Richard Mack – Richard.Mack@nuance.com

Christine Marchuska – Christine.Marchuska@nuance.com

Source: http://whatsnext.nuance.com/connected-living/speech-recognition-data-privacy/

Publish Date: December 6, 2016 5:00 AM


3 ways to beat expectations with customer service

Customer expectations continue to evolve as consumers become more comfortable with various self-service technologies. Consumers are learning to adapt their interaction patterns to the channels most convenient for them, and are starting to assume that companies have access to at least the same level of information about their preferences, their accounts and past interactions.

As we can see from McKinsey & Company’s recent report on “Winning the expectations game in customer care,” this desire for personalized experiences, immediate resolution and convenience at all times has turned into a heavy burden for customer service. Many companies cannot currently deliver seamless customer interactions or consistent experiences across different touchpoints, so organizations are struggling to meet customer expectations and increase revenue.

Upon closer examination of the report’s findings, some natural questions might come to mind:

  • Is it possible to exceed customer expectations with exceptional service?
  • What can be done to push the long-term benefits of exceptional service to the corporate bottom-line?
  • What are the first steps to adopting exceptional service?
  • How can companies discern between fads and truly lasting trends in customer service?

No matter where your organization is in terms of experience customization, usage of virtual assistants as gatekeepers for critical channels, agent skills building or technology investments, having the right mindset is key to finding quick wins and aligning your business towards this quickly evolving landscape. Here are three ways to start redirecting your customer service strategy and start paving the way for new opportunities:

  1. Go beyond cross-channel data exchanges: While technologies like CTI allow you to pass data back and forth between channels, think about how to share the context of the last interaction. For example, if I’m in the middle of a transaction and get transferred, call centers should include data on where I failed, how many times I tried and what steps to resolution have been skipped (a sketch of such a context payload follows this list). That way, agents can continue the conversation seamlessly, without having to ask users to explain – once again – what it is they need.
  2. Be proactive, proactive and proactive: Virtual assistants and chatbots provide a fantastic way for customers to engage in a self-service conversation. However, most systems rely on customers to start the conversation and provide relevant information. Consider leveraging outreach strategies from other channels, such as email, outbound calls and even mobile notifications. As soon as you identify a relevant pattern for your users, have your assistant proactively reach out to the customers in their preferred channel. By being proactive, you may be able to solve a need they didn’t even know they had!
  3. Customize multi-channel actions: While customers’ needs and goals might be similar across various channels (web, phone, email, mobile, etc.), the resolution of that goal might need to be different based on the channel. For example, if I interact with a virtual assistant over the web, it might be appropriate for the system to help me navigate through visual content to reach my goal. But the same request over the phone might be better served via a visual IVR interaction. And querying a request via mobile app would be better handled as a conversation with a live agent.
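To make point 1 above more tangible, here is a sketch of what a context payload handed from one channel to a live agent could look like. The field names and values are invented for the example; they are not a defined Nuance schema.

```python
# Illustrative handoff context passed from an IVR or virtual assistant to an agent.
import json

handoff_context = {
    "customer_id": "C-104233",
    "channel": "ivr",
    "task": "wire_transfer",
    "failed_step": "beneficiary_validation",
    "attempts": 3,
    "completed_steps": ["authentication", "account_selection"],
    "timestamp": "2016-12-02T14:05:00Z",
}

# The receiving agent desktop can deserialize this and resume the conversation
# without asking the customer to start over.
print(json.dumps(handoff_context, indent=2))
```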

Wherever you are in your journey to delivering exceptional customer service, implementing a cross-channel data exchange, being proactive and providing customized actions can only increase customer satisfaction and your company’s bottom line.

Source: http://whatsnext.nuance.com/customer-experience/exceed-customer-service-expectations-with-proactive-customization-data/

Publish Date: December 2, 2016 5:00 AM


Mythbusters: Debunking 5 common IVR myths

It’s easy to spoof an IVR experience. When not properly designed, the IVR can be cumbersome and difficult to navigate, and as a result it becomes nothing more than a witty caricature. But what if many of the pre-conceived ideas consumers and businesses have about an IVR are wrong? Today, we’ll debunk 5 of these myths and I encourage you to listen carefully as your opinion may be changed.

Myth #1 – Your customers care that your menu options have changed

Far too many times an IVR greets callers with the command to “Listen carefully as our menu options have recently changed.” Nobody cares if your company updates its IVR menu. Callers just want their issue resolved fast, and adding this type of unnecessary greeting only adds to their frustration.

The simplest fix is to drop that prompt or, if you want to take your IVR game to the next level, upgrade to a Natural Language Understanding (NLU) solution. It allows callers to be greeted with a more user-friendly prompt such as, “Thanks for calling ABC Company, how can I help you today?” Many companies are adopting this approach, which provides callers with a friendly greeting and allows them to say in their own words what they want to achieve with no need to remember menu options.

Myth #2 – Customers prefer to talk to live agents, even for the simplest transactions

Ever call a friend HOPING to leave a message but instead they actually answer? Sometimes we like to use the phone for convenience and don’t actually want to have a conversation. The same is true with the IVR. Just because someone calls your company doesn’t necessarily mean they want to talk to an agent.

In fact, many people are surprised to learn that 60-70% of callers who use the IVR would rather NOT talk to a human. Instead, they want to self-serve using an automated system, which satisfies customers and helps businesses. For example, the city of New York recognized there was a problem with its 311 system as call volumes rose and the call center couldn’t keep up with demand. By putting in an IVR with NLU, the city of New York increased call containment rates while increasing call center capability for those callers requiring an agent. Investing in the right IVR automation yields a win/win all around.

Myth #3 – Speech recognition systems never understand

“Did you say ‘Key Largo’?”

“No, Chicago!”

We’ve all experienced the frustration of the IVR not recognizing our entry. It’s annoying and fuels the myth that speech recognition doesn’t work – but nothing could be further from the truth.

Today’s modern IVRs utilize advanced speech recognition engines with accuracy rates at 95% or higher. Nuance’s flagship automatic speech recognition product, Nuance Recognizer, has been a market leader for more than 15 years and supports more than 86 languages and dialects. Consider how FedEx implemented a natural language based IVR that accommodates both English and French speaking callers on one platform to deliver improved results.

Myth #4 – IVRs can’t handle detailed information or complex queries

There is a wonderful scene in the movie Forrest Gump when young Forrest is being chased by bullies and trying hard to move fast despite the braces on his legs slowing him down. After much tension and pushing he finally breaks free of the braces and runs hard and fast – and alters the course of his destiny. Today’s IVRs are that faster, unshackled Forrest, capable of so much more. They just haven’t broken free of past behavior and unlocked their full potential.

An NLU-powered IVR you call today can handle long, complex sentences containing multiple pieces of information. Yet most people only speak in short bursts of 4-5 words or less. Why? Because we’ve been conditioned that way and haven’t pushed the IVR to the next level. We have been trained all our lives to speak slowly into the IVR so it could recognize our intent. And many menus asked us to say only one word, e.g. “Press 1 or say ‘Sales.’” We simply haven’t even considered saying much more.

That is changing. With the advent of powerful voice assistants, each of us is getting better at engaging voice recognition engines and asking more complex questions. And as more companies adopt NLU-based IVRs, customers will see increased benefits from their engagements and have a deeper, richer interaction that unlocks the IVR’s full potential.

Myth #5 – The fastest way to resolve an issue is to “zero out” to a live agent

It’s okay to admit it – we all get frustrated sometimes when the IVR greeting kicks in and begins to outline all the possible choices and menus. We just want to talk to a real person because we think they can help us much faster. But they can’t always.

Modern, conversational IVRs that allow customers to simply ask for what they need have been shown to deliver results faster and increase customer satisfaction. For example, Delta wanted to minimize customer frustration with their IVR and encourage self-service. They also needed to stay ahead of other airlines in a competitive industry. After introducing an IVR powered with natural language understanding, Delta saw opt-out rates at the main menu drop from 37% of callers down to just 9%. Customers preferred to use the IVR instead of asking for an agent. The right improvements to your IVR can help customers appreciate the convenience of self-service.

Five myths engaged and five myths dispelled (hopefully!). Modern, conversational IVRs are in practice today at leading companies and can do much more than we give them credit for. Help your company unlock its full IVR potential and you may find that your customers’ opinions will be changed.

Source: http://whatsnext.nuance.com/customer-experience/five-myths-about-conversational-ivr/

Publish Date: November 29, 2016 5:00 AM


Looking to increase your holiday sales? Turn to virtual assistants for help

The holiday season is upon us – and for many consumers, that means extra helpings of food, conviviality, and…shopping. Black Friday used to be the biggest retail day of the year, when consumers would line up outside storefronts in the middle of the night to be first in line for the newest TV set or Apple iPhone. But in today’s age of the always-on, always-connected consumer, online is taking over. We kicked off this year’s holiday shopping season with Singles’ Day, which began in China as an e-commerce offshoot of Valentine’s Day and this year drove almost $20 billion in goods purchased online in a single day. And the shopping bonanza that is Cyber Monday is less than a week away.

Retail sales for the holiday season have risen steadily since 2008, and that trend is only expected to continue this year: According to the National Retail Federation (NRF), sales are projected to reach $655.8 billion. The majority of these shoppers will be purchasing online, instead of in stores. And according to Adobe Digital Insights’ 2016 Holiday Shopping Predictions, for the first time ever, mobile devices are expected to eclipse desktop in digital browsing. However, since desktop conversion rates are still nearly three times those of smartphones, this poses a challenge for retailers in converting mobile prospects into purchasing customers. How can they seal the deal when 76 percent of shoppers change their mind about which brand to purchase as a result of a Google mobile search?

Consumers have little patience for hard-to-navigate websites

Across the globe, consumers have very little patience for websites that require them to invest more than a modicum of time and energy. They are quick to move on from websites that frustrate them – 94% report turning to third-party search engines such as Google, Yahoo!, or Bing to help them find answers. And they’ll only give a company website an average of 70 seconds to find information before going elsewhere. As an added challenge for mobile shoppers, mobile websites are often inferior in quality to their corresponding desktop versions, opening up the potential for prospects to move to a competitor’s site and take their business with them.

In the ultimate determination of where to make their holiday purchases, website usability is a key factor for consumers. And the stakes are high: Sixty-three percent of global consumers will stop doing business with a company whose website is difficult to use.

Using a virtual assistant can increase sales

With the rise of digital channels, consumers expect service and access to information 24/7. The need to meet the expectations of today’s digitally savvy consumers is driving retailers to evolve their online experience. An innovative and cost-effective way to provide a continuous presence, responsive customer care, and a personalized interaction is to embrace self-service solutions such as virtual assistants. Deploying a virtual assistant enables enterprises to offer a faster, more effective online experience by answering questions, providing content, and guiding their customers through transactions. In the case of retailers, a virtual assistant can walk prospects through the mobile and desktop purchasing process, thereby minimizing customer frustration and creating an experience that will both increase holiday revenue and foster enduring customer relationships.

If you’re interested in increasing your sales potential this holiday season – and in delivering a customer experience that encourages sustained brand loyalty well beyond it – contact us to learn more about how to deploy a virtual assistant on your desktop or mobile website.

Source: http://whatsnext.nuance.com/customer-experience/virtual-assistants-drive-increased-holiday-retail-sales/

Publish Date: November 23, 2016 5:00 AM


Mythbusters: Solving 5 mysteries around virtual assistants

Expectations of virtual assistants can be unrealistic, as they are often shaped by the make-believe world we see in science fiction movies (think Blade Runner and The Terminator). As a result, we often seek fictional virtual assistants that can reason and sense emotion and understand our questions and be good listeners and solve our problems. But when it comes to judging a virtual assistant solution, what do you really need to get the job done? What is fact and what is myth about this technology? We’re here to help you dispel 5 myths and mysteries surrounding intelligent virtual assistants.

Myth #1: Virtual assistants need an avatar

Intelligent responses? Yes. Personalized interaction? Check. But a virtual assistant doesn’t have to have an avatar. Many of our most effective virtual assistants are embedded into website menus, search boxes, and mobile in-app “buttons” that enable voice or text interaction where and when you need it. Avatars are nice-to-haves, not must-haves. And if you do include an avatar, make sure it isn’t a photograph of an actual human being: research indicates that we still want to know – without a doubt – when we are talking with a machine.

Myth #2: Virtual assistants don’t provide enough privacy or security for my personal information

Using a virtual assistant does not mean forfeiting your privacy or security. According to contact center usage research, 55 percent of consumers prefer automated self-service. In separate findings, 1 in 5 millennials prefers self-service checkout to a cashier. And regardless of age group, one of the most popular reasons for wanting automated self-service is to keep transactions and financial information private. So it’s easy to see why 89% of consumers want to engage in conversation with virtual assistants to quickly find information instead of searching through web pages or a mobile app on their own. It remains important, however, for businesses to ensure that the virtual assistant vendor they select meets the Payment Card Industry Data Security Standard (PCI DSS) to better protect sensitive consumer information.

Myth #3: Virtual assistants only work on a website

Customers engage with businesses on multiple channels, many averaging three or more during their customer journeys, according to research from Ovum. These channels increasingly include text/SMS, email, and messaging apps like Facebook Messenger, to name just a few. Deploying a digital virtual assistant on one or more of these channels can significantly reduce operating costs by containing conversations in lower-cost channels and deflecting calls from contact center agents.

Myth #4: Virtual assistants can damage a company brand with unintended responses

Virtual assistants like Nina from Nuance integrate “human supervision” to curate interactions, minimize overhead and maximize value through efficient monitoring of responses. For instance, Nina uses machine learning to automatically group and categorize questions, map them to possible answers, and then allow a person to quickly verify or update these groupings. Nina also provides the capability to transfer to an agent during the conversation without losing context, and can track all customer interactions so agents can intervene when necessary.
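
Nuance has not published Nina’s internal pipeline, so the short sketch below is only a generic illustration of the “group, map, verify” idea: cluster similar customer questions so that a human curator verifies one suggested answer per group rather than one per question. The library choice (scikit-learn) and the sample questions are assumptions made for illustration.

```python
# Illustrative sketch only; not Nina's actual pipeline.
# Group similar customer questions so a human curator can verify
# one suggested answer per group instead of one per question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

questions = [
    "How do I reset my password?",
    "I forgot my password, what now?",
    "How do I reset my login password?",
    "Where can I see my latest bill?",
    "Can I view my most recent invoice?",
]

vectors = TfidfVectorizer().fit_transform(questions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Each group is then shown to a human curator for verification or correction.
for cluster in sorted(set(labels)):
    members = [q for q, label in zip(questions, labels) if label == cluster]
    print(f"Group {cluster}: {members}")
```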

Myth #5: Virtual assistants can only handle simple interactions

That depends on the virtual assistant. Many can take single questions and effectively provide resolutions – like “get me an Uber.” But the real complexity lies in a virtual assistant’s ability to authenticate your identity, engage in multi-slot conversations (interactions that collect multiple pieces of related information in order to accomplish a specific goal or learn about a topic quickly and easily), and handle personalized tasks such as money transfers and access to healthcare records. Nina integrates with Nuance’s Voice Biometrics capabilities to authenticate users by their voice and supports multi-slot conversational dialogs, so you can say “I want to transfer $500 to my son’s bank account” and, magically, it happens.
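
Nuance doesn’t publish Nina’s dialog internals, but the multi-slot idea itself is easy to sketch: the assistant tracks a set of required slots, fills whichever ones the user’s utterance already supplies, and prompts only for what is still missing. The slot names and prompts below are illustrative assumptions, not Nina’s API.

```python
# Minimal slot-filling sketch (an illustration of the idea, not Nina's engine).
# A multi-slot request is complete only once every required slot is filled.
PROMPTS = {
    "amount": "How much would you like to transfer?",
    "recipient": "Who should receive the money?",
    "source_account": "Which of your accounts should the money come from?",
}

def next_prompt(filled_slots):
    """Return the next question to ask, or None once the request is complete."""
    for slot in ("amount", "recipient", "source_account"):
        if slot not in filled_slots:
            return PROMPTS[slot]
    return None

# "I want to transfer $500 to my son's bank account" already fills two slots,
# so the assistant only has to ask for the one that is missing.
state = {"amount": 500, "recipient": "son's bank account"}
print(next_prompt(state))              # asks for the source account
state["source_account"] = "checking"
print(next_prompt(state))              # None: ready to execute the transfer
```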

These advanced capabilities aren’t science fiction. This kind of intelligent virtual assistant technology is in-market and working for companies worldwide as we speak. To learn more about virtual assistants and how your business can benefit from implementing this technology, check out our website for more details.

Source: http://whatsnext.nuance.com/customer-experience/five-myths-about-virtual-assistants/

Publish Date: November 14, 2016 5:00 AM


Don’t call us, we’ll text you!

All businesses love their customers. Customers are what make a business run, and when they have issues, they will pick up the phone and engage. But sometimes companies can better help their customers – and themselves – by proactively reaching out before the need to call the contact center arises. The business reduces its costs, and the customer doesn’t have to deal with a channel many people find frustrating.

The goal of any IVR is to resolve the customer’s issue effectively the first time they call. But contrary to what many may think, the surest way to improve call center metrics may be to ensure your customers don’t need to call in the first place. There are myriad reasons people call that could be avoided if a company simply engages the customer before the need arises – paying bills, changing or confirming appointments, or reordering prescriptions, to name a few. Routine issues like these need never turn into a customer problem – or a number in your contact center queue – if companies simply help customers remember.

Proactively reaching out to remind customers of upcoming deadlines or medicine to be picked up is not only good for your business costs; it’s what customers want. A study by Wakefield Research found that 90% of consumers are more likely to do business with a company that sends them reminders than with one that does not.

Fix-it Initiative #4 – Reduce call volumes with proactive engagement

Adding proactive engagement to your customer service strategy makes sense. Start by thinking through the types of issues you can help your customers with in advance. The most common include late payment fees, interruption of service, bank overdrafts, and healthcare-related issues. These are negative topics where the customer will feel some pain if they don’t act, but businesses can also offer positive engagements that delight customers. As a real-life example, my bank texted me to review recent purchases it considered suspicious. I handled the whole thing with a short text reply, without any need to call my bank and interrupt my time with family. It earned my bank great joy and, yes, love.
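
What this looks like in practice depends entirely on the outbound platform, but the pattern is simple: find customers with an upcoming trigger event and push a short message on their preferred channel before the issue becomes a call. In the minimal sketch below, the customer records and the send_sms helper are hypothetical placeholders, not a specific Nuance or carrier API.

```python
# Hypothetical sketch of proactive outreach; send_sms stands in for whatever
# outbound messaging gateway a business actually uses.
from datetime import date, timedelta

customers = [
    {"name": "Ana", "phone": "+15550001", "bill_due": date.today() + timedelta(days=2)},
    {"name": "Raj", "phone": "+15550002", "bill_due": date.today() + timedelta(days=30)},
]

def send_sms(phone, message):
    # Placeholder for a real messaging gateway call.
    print(f"SMS to {phone}: {message}")

# Remind anyone whose bill is due within the next three days,
# before a missed payment turns into an inbound call.
for customer in customers:
    if customer["bill_due"] - date.today() <= timedelta(days=3):
        send_sms(
            customer["phone"],
            f"Hi {customer['name']}, your bill is due on {customer['bill_due']:%b %d}. "
            "Reply PAY to pay now or CALL if you'd like to talk with us.",
        )
```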

Reducing the need for inbound calls is always good, as it gives agents more time to assist callers with more complex needs. And if someone does call after receiving a proactive message, imagine how much happier they will be, not to mention the improved odds of resolving their issue the first time.

Picking the right channels

With so many ways to reach customers, companies must create an optimized, multi-channel approach that meets customers’ unique preferences. Below is a quick breakdown of the communications consumers want – versus what they believe they get today.

Incredibly, in our digital age, almost a third of people still want a traditional letter! Clearly there is a big gap to be filled by companies seeking to offer world-class customer service.

In developing any proactive engagement strategy, companies must build solutions that meet four criteria:

Building your IVR strategy in partnership with an integrated, orchestrated outbound communications channel creates a synergy and delivers an improved customer experience. And yes, it’s possible your customers might not call you anymore. But they’ll love you even more because of it.

Source: http://whatsnext.nuance.com/customer-experience/improve-first-call-resolution-with-proactive-engagement/

Publish Date: November 9, 2016 5:00 AM


Surveying the future of tech through AT&T hackathons

Working with AT&T and a large crowd of developers offered a view into the future of intelligent connected devices. Given that 50% of consumers say they plan to buy at least one Internet of Things product this year, we can expect even more to come. Our involvement at this most recent hackathon in LA gave us a preview of what those smart things might look like.

Vying for the “Best Use of Nuance Technology” prize, we saw a collaborative storytelling app, voice-activated multi-rotor drones, interactive event maps, and a plethora of other exciting and innovative ideas. The completed projects offered us a vision of the types of solutions that could be ubiquitous in the days to come.

The winner of our speech-themed challenge, called ‘You Are Here,’ built an augmented virtual map that lets users locate themselves and each other in real time at events. The target use case for this app includes large festivals and events such as SXSW, CES, and the like. Put simply, it’s an interactive point-of-interest (POI) map that can suggest the best routes within an event space and even surface accessibility information for an extended audience.

AT&T Mobile App Hackathon – Best Use of Nuance Technology Winner: ‘You Are Here’

For the development of the app itself, the team used Unity to create the virtual map, along with integrated plug-ins for external elements. Location information came from device location services, while an iBeacon listener was used to trigger events.
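
The team’s submission doesn’t spell out its event-triggering code, so the sketch below only illustrates the logic at a platform-independent level: detections from an iBeacon listener (UUID, major, minor, and proximity) are looked up in a table of known beacons, and a map event fires for nearby matches. The beacon table and the on_beacon handler are assumptions made for illustration, not the ‘You Are Here’ implementation.

```python
# Illustrative logic only: map iBeacon detections to map events.
# The platform's beacon-ranging API (CoreLocation, a Unity plug-in, etc.)
# is assumed to call on_beacon() with each detection it reports.
POI_BY_BEACON = {
    ("E2C56DB5-DFFB-48D2-B060-D0F5A71096E0", 1, 101): "Main stage entrance",
    ("E2C56DB5-DFFB-48D2-B060-D0F5A71096E0", 1, 102): "Accessible restrooms",
}

def on_beacon(uuid, major, minor, proximity):
    """Trigger a map event when a known beacon is detected nearby."""
    poi = POI_BY_BEACON.get((uuid, major, minor))
    if poi and proximity in ("immediate", "near"):
        print(f"Show POI on map: {poi} (proximity: {proximity})")

# Example detection as it might be forwarded from the native listener:
on_beacon("E2C56DB5-DFFB-48D2-B060-D0F5A71096E0", 1, 102, "near")
```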

Even though the hackathon has ended, the team plans to keep improving their creation with additional POI options, improved location sharing, and an enhanced general framework. You can watch the team’s full submission video here.

Unity was used to develop the virtual map

We are used to the concept of maps guiding us from place to place, but an app that connects people more easily and conveys location, security, and accessibility information about an event venue is just a small taste of how much room there is to grow in the world of smart things.

In fact, with an estimated compound annual growth rate of 33% from now until 2021, we can expect to see a ton of development within the Internet of Things space. As more inputs are tracked and more information becomes readily available, it’s easy to see how we are moving toward a world in which people, things, and places are more connected and, perhaps, closer than ever.

We’re excited to keep seeing what innovative solutions hackathon participants can put together, and we will take part in AT&T Hackathons in Atlanta and New York next month. Come hang out with us and take your shot at speech-enabling an app of your own for a chance to win money and prizes! You can register for all of AT&T’s upcoming events here.

Stay informed about our events and hackathon presence by following us on Twitter @NuanceDev and subscribing to the What’s Next blog.

Source: http://whatsnext.nuance.com/developers/nuance-att-hackathon-creates-internet-of-things-apps/

Publish Date: November 7, 2016 5:00 AM

