Nu Echo - ContactCenterWorld.com Blog
The SpeDial partners. From left to right: Dominique Boucher (Nu Echo), Fernando Batista (INESC-ID), Katerina Louka (VoiceWeb), Isabel Trancoso (INESC-ID), Joakim Gustafson (KTH), and coordinator Alexandros Potamianos (National Technical University of Athens).
Two weeks ago in Luxembourg, the international team pictured above presented the final results of the SpeDial project to reviewers of the European Commission. The project, which lasted two years, was funded by the EU 7th Framework Programme (see here for a short description of the project).
The presentation went very well and we received very positive feedback from the reviewers: “Great Project”, “Fantastic Team”, “Wonderful Execution”, “Concrete Work based on Deep Studies”, “Fantastic Collaboration”.
This project was the result of a tight collaboration between the SMEs (VoiceWeb and Nu Echo) and the research partners (KTH, INESC-ID, the Athena Research and Innovation Center, and TSI). We owe a big thank-you to Alexandros Potamianos, who did an exceptional job coordinating a project involving both European and North American partners. It was not easy to get a non-European partner into the consortium, but Nu Echo managed to make it happen.
A real benefit of this project, beyond the international exposure and new research, was that it enabled us to enhance some of the proprietary technologies we use for automated detection of performance issues (opportunities for improvement). During the final review, I showcased a tuning project Nu Echo worked on a couple of years ago that required, at the time, a fair amount of call analysis to pinpoint the cause of the most critical performance problems. Using the technologies we have since developed, detecting those issues took a fraction of the time: we now simply run a report, which completes in less than a minute! Of course, some additional time is still required to validate the hotspots, but that is nothing compared to listening to hundreds of calls, analyzing each issue manually, and trying to find the common denominator behind each problem.
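To give a flavor of what such an automated report does, here is a minimal sketch: it aggregates per-state recognition outcomes from call logs and flags dialogue states whose error rate exceeds a threshold. The log format, field names, and threshold are invented for illustration; the actual Nu Echo tooling is proprietary and far more sophisticated.

```python
# Hypothetical hotspot report: flag dialogue states with a high rate of
# recognition failures (nomatch/noinput) in a set of call-log events.
# All names and data here are illustrative, not Nu Echo's real format.
from collections import defaultdict

def hotspot_report(events, threshold=0.25):
    """events: iterable of (dialogue_state, outcome) pairs, where outcome
    is e.g. 'success', 'nomatch', or 'noinput'."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for state, outcome in events:
        totals[state] += 1
        if outcome in ("nomatch", "noinput"):
            errors[state] += 1
    report = {
        state: errors[state] / totals[state]
        for state in totals
        if errors[state] / totals[state] >= threshold
    }
    # Sort with the worst states first
    return dict(sorted(report.items(), key=lambda kv: -kv[1]))

events = [
    ("GetAccountNumber", "nomatch"),
    ("GetAccountNumber", "success"),
    ("GetAccountNumber", "nomatch"),
    ("ConfirmTransfer", "success"),
    ("ConfirmTransfer", "success"),
]
print(hotspot_report(events))  # only GetAccountNumber is flagged (2 failures out of 3)
```

The point is that once the logs are machine-readable, "which states are hurting us most" becomes a query rather than hours of manual listening.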
In the end, what does all of this mean for our clients? An improved ability to find tuning opportunities in speech applications more efficiently, resulting in less time spent analyzing data and more time spent on actual tuning work. That is especially important for projects on a tight schedule and budget.
Publish Date: January 25, 2016 5:00 AM
On January 14th, I will be in Luxembourg to present the results of a joint research project to reviewers of the European Commission.
We’ve not been vocal about it (and frankly, I’m not looking for excuses, that’s just plain laziness on my part), but Nu Echo has been an active consortium member of a European research project over the last two years: the SpeDial project, funded by the European Commission’s 7th Framework Programme (FP7). The consortium, led by Prof. Alex Potamianos, included commercial entities (VoiceWeb in Athens, Greece, and ourselves) and a number of academic research partners:
- the Athena Research and Innovation Center in Information, Communication and Knowledge Technologies, in Greece,
- the Telecommunication Systems Institute @ Technical University Of Crete,
- KTH Royal Institute of Technology of Stockholm, Sweden, and
- INESC-ID, from Lisbon, Portugal.
The project, whose name stands for Spoken Dialogue Analytics, aimed to apply speech analytics technologies to the IVR world. To quote prof. Potamianos, the project proposes “a process for spoken dialogue service development, enhancement and customization of deployed services, where data logs are analyzed and used to enhance the service in a semi-automated fashion”. Some of the technologies employed in the project are age/gender detection, speech and text affective analysis, and hotspot detection.
As part of this project, Nu Echo has significantly enhanced its internal speech tuning environment, Atelier. For instance, we added full support for the common SPDXml file format that was devised in the project and a sophisticated dialogue path navigator to interactively explore paths in the dialogue that were taken by actual callers and pinpoint dialogue hotspots (think Google Analytics’ behavior flow for speech applications!). We also devised effective techniques and tools to automate the process of finding tuning opportunities. A preliminary version of these tools was presented at the SpeechTEK conference in New York last summer.
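The idea behind a dialogue path navigator can be sketched in a few lines: from each call's sequence of dialogue states, count the state-to-state transitions to see which paths callers actually take and where they drop off, much like a behavior flow. The state names and data below are hypothetical; Atelier's actual navigator is interactive and works on real call logs.

```python
# Illustrative sketch: count state-to-state transitions across calls to
# approximate a "behavior flow" view of a speech application. The
# dialogue-state names here are invented for the example.
from collections import Counter

def transition_counts(calls):
    """calls: list of dialogue-state sequences, one per call."""
    counts = Counter()
    for path in calls:
        for src, dst in zip(path, path[1:]):
            counts[(src, dst)] += 1
    return counts

calls = [
    ["Welcome", "MainMenu", "Billing", "Goodbye"],
    ["Welcome", "MainMenu", "Agent"],
    ["Welcome", "MainMenu", "Billing", "Agent"],
]
for (src, dst), n in transition_counts(calls).most_common():
    print(f"{src} -> {dst}: {n}")
```

A transition such as `Billing -> Agent` occurring often, for instance, suggests callers are bailing out of the Billing dialogue, which is exactly the kind of hotspot worth investigating first.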
In the upcoming weeks, I will present in more detail the results of our work and how the tools we developed help us tame the complexity of speech tuning. Stay tuned!
Publish Date: January 13, 2016 5:00 AM