
Nedal Khatib - ContactCenterWorld.com Blog

Another thought-provoking interview with our CEO Amir Liberman

https://www.linkedin.com/feed/update/urn:li:activity:6819889612878565376/

 

 

Publish Date: July 12, 2021 3:24 PM


Emotion-detection voice analysis tech CEO insists he is NOT using AI: “It’s biased”

https://www.verdict.co.uk/voice-emotion-detection-technology/

 

How did a suicide terror attack in Israel lead to the establishment of a voice analysis emotion recognition company? In this interview, Nemesysco CEO Amir Liberman reveals why he founded his company and how he thinks technology can help entities uncover people’s true emotions through natural conversation.

Unusually for a tech company these days, Nemesysco states outright that it does not use Artificial Intelligence (AI). Using so-called Layered Voice Analysis (LVA), the Nemesysco algorithm interprets a person’s emotion based on uncontrolled psychophysical changes to their voice. “AI is a trained model and the trained model is biased by the person who trained it. But this is not an AI-based system,” Liberman insists, bucking the trend of AI-in-everything, particularly prevalent in the Israeli startup scene.

 

Founded in 2000 in Netanya, Israel, Nemesysco started out as a security company, but eventually branched out to also offer fraud prevention tools, automated risk assessment, personality tests and pre-employment evaluation systems. Clients and partners include Nestlé, Endemol, Allianz and Castel. Here’s our interview with Liberman:

 

Tell me a bit about your company, Nemesysco.

Back in 1997, we had this terror attack in Israel and what I wanted to do was to build a lie detector. This terror attack took the life of three young mothers. It truly was heart-breaking. What I wanted to do was to build a quick lie detector that we could put on the borders and ask people when they come into Israel, “Do you plan to commit a suicide attack?”

So that was the very naïve thought of a 24-year-old guy. And when we started researching, it was initially of course just me with some friends. We started in a very naïve way: we took people and recorded them saying, “What colour is the sky? The sky is green, grass is blue, I’m the Queen of England …” And what we got eventually was that there was very little reaction; nothing was really standing out.

And then we had this situation where one of my friends asked a very blunt question, but the guy actually lied, and the system picked it up like a bomb exploding on the screen. Everything turned red. So that was the moment when we actually realised that lies have to have some meaning, there has to be something of essence. Then what we discovered was that it wasn’t the lie that we were picking up, it was all the different ingredients of a lie.

 

We got a university dataset that was prepared in Israel, and it was made of a Stroop test, where it shows you a card with the word “red” written in green, which creates the same reaction in the body as a lie would. That was the assumption, at least, in our system. Actually this is not a lie – there is a conflict, but it’s not a lie.

So is lie detection still the core part of your business?

No, I would say not. We were originally purely a security company. But back in 2005 we made a switch.

For an entity, yes, lies are interesting, but how about learning more about your employee’s personality? The real personality. You know, they reveal things that they didn’t even know themselves once they talk about it, once they are confronted with the result. [As an employer] why don’t you try to understand who the [employee] is, to put them in the right job?

 
 

So at some point we said, okay, it’s not lie detection. The thing about lie detection is that there is no such thing. Take it from me, I have been dealing with this for 24 years. There is no such thing as lie detection!

All you can do is present the stimuli and show the reaction. What we can pick up is the moment where you feel the jeopardy, when you feel the tension, when you want to fight, but still run away. It’s actually a very interesting moment in time when you are past a certain point that you don’t want to confront. It’s a very strange situation. So, we can pick up these sets of emotions, but it’s not a lie: it’s the reaction around the lie.

What sectors does your company work with?

We still work with governments. We work with investigations which is actually a very positive experience because you cannot use our system and torture someone for example. It doesn’t work. So we kind of promote fair investigation and I really like that.

We also work with insurance companies, recruitment agencies, credit services, banks and marketing agencies.

We work with HR from entry point to exit point. From recruitment and veracity assessment to personality assessment, to interviews and how one is feeling in the organisation. All the surveys that people do, where they may polish the truth a bit. They may not reveal things just because they don’t want to hurt anyone. But this is counterproductive, to them and to the organisation.

We work a lot today with medical applications, as well. And it goes way beyond that, (as) we’ve also been used in entertainment with games and with matchmaking TV shows. We were featured on Big Brother on several occasions.

Why is it important to have emotion recognition rather than just voice analysis?

The question is very simple: Do you want to know the truth? As a manager, I need to make decisions based on good data. And if the data is not accurate, then it’s not going to serve anyone.

Don’t you think humans already have the capability to detect emotion? Do we really need technology or AI to do this?

Well, I’m not sure if AI is the right thing; we’ll talk about AI in a second.

The thing is, first of all, humans are very bad at detecting other people’s emotions. That’s a fact. As humans, we always come with our own bag of emotions from home. We had a bad day, something happened to the cat, something happened to the spouse; you know, you’re judging everything very differently. So we want something that is completely unbiased. That looks at you as you are.

That’s a very interesting choice of words. Specifically because a lot of people opposed to emotion recognition would say that it is inherently biased and that bias cannot be taken out of it. How do you respond to that?

Because that is AI. AI is a trained model and the trained model is biased by the person who trained it – absolutely. But this is not an AI-based system.

It’s like a DNA examination. We know what specific sequences in voice patterns look like and what they should be associated with in terms of emotional reaction. We know what it looks like; we don’t need an AI to do that.

That’s the challenge with emotions really, there are no scales. There is no set of proper definitions of what happiness is. This guy is happy three points, this guy is happy 60 points. There was no scale. We were the first to actually offer a scale. So this is what we do.

It was really working from the ground up, building the entire theory. Not just as a theory but as evidence-based science. We were the first to actually offer quantifiable measurements for emotions.

Our technology was developed from the ground up based on initial bio-markers that were calculated from the voice without any assumed meaning. We were just observing them changing during real-life calls.

Then, during our everlasting research on recorded audio that we received, some unique sessions generated a unique reaction in some of these variables. Our work was manual, but these very unique and obvious sessions were then added to the assumption list and validated against other data that was less significant from the same family of emotions.

But it is still a human that defined the scales. Does this not leave room for bias?

Initially we analysed and identified stress. When we started, it was all about stress. There was no knowledge about any other emotions. It was not defined. Everything was stress. And we took recordings from crashing planes, from pilots with a mayday alert. We took recordings from death row prisoners just before the thing and there were things that were standing out. And so we said, okay, this is an extreme state of stress and this is a normal state of stress. And now, let’s build a scale between them.

Now we could normalise it between zero and 100. We only dealt with real-life data and with real-life materials.
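As a rough illustration of the kind of zero-to-100 scaling Liberman describes (anchoring a raw stress reading between a “normal stress” baseline and an “extreme stress” reference), here is a minimal sketch in Python. The function name and the reference values are hypothetical; this is not Nemesysco’s actual LVA algorithm.

def normalise_stress(raw_value, normal_ref, extreme_ref):
    """Map a raw stress measurement onto a 0-100 scale.

    normal_ref is the value observed in ordinary speech;
    extreme_ref is the value observed in extreme-stress recordings.
    """
    if extreme_ref == normal_ref:
        raise ValueError("reference points must differ")
    scaled = (raw_value - normal_ref) / (extreme_ref - normal_ref) * 100
    return max(0.0, min(100.0, scaled))  # clamp to the 0-100 range

# Example: a reading of 7.2 against references of 5.0 (normal) and 9.0 (extreme)
print(normalise_stress(7.2, normal_ref=5.0, extreme_ref=9.0))  # prints 55.0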

There’s also the argument that different people’s emotions are expressed differently depending on their cultural upbringing and societal background. How do you account for that bias?

That’s a wonderful question. Thank you.

So from a theoretical point of view, what we had in mind at the very beginning was that everything that you can control, whether it’s your choice of words, the way you pronounce things, the volume, all of that should be ignored. They’re not even worth taking into account, because you can control them. They have zero value.

Then we’re talking about all the ingredients that you can no longer control. And then apparently, we are all the same: We are all humans, the same animal. We all have the same brain activities and our brain is built very, very similarly, if not identically.

How do you think you are different from your competitors?

We are not AI-based. Everybody’s approach today is to bring in actors who express various emotions in different ways in different scales and say, okay let’s train these emotions. And let’s use a model to train these emotions. We work in a completely different way. We actually worked from the ground up.

When you take an AI and you teach it to recognise all these expressed emotions, the moment the system makes a mistake and classifies an actor expressing anger as angry, that’s the day I know I have to throw everything out and go back to the drawing board.

If I’m fooled by actors, how can I stay true to the true purpose of the system? That’s why we are dealing with genuine emotions.

Don’t you think that there are certain things, when people intend to hide them, that they should stay hidden? How do you respond to people saying that this is an infringement on privacy and freedom?

It’s always a challenging question and, of course, everything has to be done according to the laws and according to what is applicable and what is allowed and fair.

The thing is, if I, as an employee, get the chance to say how I feel about my boss and I say, ‘Well, I love it, I think he’s a great boss. Okay, I did my role.’ But deep inside, I know that the boss is awful and something picks that out from my voice.

And not just from my voice, but maybe from a few other people who did the same. They also said that the boss was magnificent, but they all felt differently. Don’t you think everybody wins except for this bad boss?

Well, what my argument is, if a person wants to hide something, don’t they have the right to hide something?

You always have the right to refuse a test, you always have the right not to participate. Anyway, if you don’t want to lie, don’t lie.

But if you do want to lie should you not be allowed to lie?

Listen, if I’m the employer and I pay your salary, I want to make the best use of my money to achieve the best productivity (and) the best environment I can. To do that I need to base my decision on knowledge, and the better the knowledge I have, the better my decisions can be.

You said that a person should always have the right to refuse the test, which means that they should always know that it is taking place and that their emotions are being monitored by technology, but consumers don’t always know this. How would you argue for that?

I think it’s better when people know what is going on and they should be aware of the consequences. But again, you know, you’re working with Facebook, you’re sharing your life with everybody, you’re sharing your most intimate moments with family, with everybody.

There are so many things that monitor you. The way you write, everything you type, everything you say, how quickly you type on your keyboard. The thing is, it’s not about the technology that is being used, but how it is being used.

What you’re saying is it’s not the technology that is bad, it’s the people behind the technology?

Right now your cell phone knows more about you than what you’ll ever know. Let alone emotion detection, it knows what you like and don’t like and knows what time you wake, whom you like, whom you speak with, whom you don’t speak with.

It knows everything and knows where you are every second of the day. You think you have privacy, think again. Today, I don’t think anybody in the world has any sense of privacy.

Publish Date: June 30, 2021 2:20 PM


Q&A: Nemesysco CEO Amir Liberman on Leveraging Voice Analysis for More Accurate Assessments

https://www.hcmtechnologyreport.com/qa-nemesysco-ceo-amir-liberman-leveraging-voice-analysis/

 

We spoke with Amir Liberman, founder and CEO of Nemesysco, a leading provider of voice analysis solutions. The Israeli company’s Layered Voice Analysis (LVA) technology correlates vocal data with human emotions for use in assessments and personality tests. 

Usually, assessments rely on self-reporting from the candidate, the same person you’re trying to assess. That doesn’t strike me as ideal.

Well, leaving aside the concept of misleading answers, people don’t really know a lot about themselves. You need to look at yourself in the mirror in order to know how you behave in a group.

I’ll share with you a story about hiring. When we started to look into the field of personality, I requested that all my team take a personality test. I wanted to see what it told me about them. And from the lot I got three identical reports, identical personalities, the same personality types for three different types of people. They were all lions. They were all leaders, they were all thought leaders. They were all pushing people, and pulling people up behind them, leading people to their goals. And these are three completely different people.

 

Publish Date: April 18, 2021 2:30 PM


No one is going to take up a post without passing the control and confidence interview: José Ramón Amieva

Publish Date: December 13, 2020 2:42 PM


Q&A: Nemesysco CEO Amir Liberman on Leveraging Voice Analysis for More Accurate Assessments

https://www.hcmtechnologyreport.com/qa-nemesysco-ceo-amir-liberman-leveraging-voice-analysis/

Publish Date: November 22, 2020 9:49 AM
