
A New World Without Artificial Stereotypes and Biases with Artificial Intelligence: Why Not? - Sestek - ContactCenterWorld.com Blog


ProPublica’s investigation revealed that COMPAS, a risk-assessment algorithm, and the AI behind it tended to rate Black defendants as higher risk than white defendants.

The famous trolley dilemma in ethical philosophy asks: “would you kill one person to save five?” Imagine you are standing beside some tram tracks. In the distance, you spot a runaway trolley hurtling down the tracks towards five workers who cannot hear it coming. Even if they do spot it, they won’t be able to move out of the way in time.

As this disaster looms, you glance down and see a lever connected to the tracks. You realise that if you pull the lever, the tram will be diverted down a second set of tracks away from the five unsuspecting workers. However, down this side track is one lone worker, just as oblivious as his colleagues.

So, would you pull the lever, leading to one death but saving five?

This is the crux of the classic thought experiment known as the trolley dilemma, developed by philosopher Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985.

The trolley dilemma allows us to think through the consequences of an action and consider whether its moral value is determined solely by its outcome.

The trolley dilemma has since proven itself to be a remarkably flexible tool for probing our moral intuitions, and has been adapted to apply to various other scenarios, such as war, torture, drones, abortion and euthanasia.

Of course, there is no single correct moral answer to this question; it probes how people reason when deciding on an action. In surveys, however, most people answer “yes: I would pull the lever, sacrificing one worker to save the lives of five.” Many also judge that choice to be the moral one.

Today, beyond philosophy, this dilemma is being adapted to artificial intelligence. Although no AI implementation can yet think like a human and make moral judgments, scientists often say we are approaching that point, so how AI might resolve such dilemmas is of utmost importance. With driverless cars expected on the roads within the next ten years, AI will, even if we do not frame it that way, have to make decisions with moral consequences. At the same time, it is often argued that AI applications and AI-equipped robots pose a danger greater than unemployment: racist and sexist bias in the decisions they make. Research on the outputs of AI algorithms used in experiments and decision-making processes gives an idea of the magnitude of this danger.

A recent study from MIT is particularly striking. In it, a facial-recognition application asked to identify and distinguish the thousand photos uploaded to it handled white faces almost perfectly but made large errors on Black faces. When the person in the photo was a white man, the software was right 99 percent of the time.

But the darker the skin, the more errors arose, up to nearly 35 percent for images of darker-skinned women, according to the study, which broke fresh ground by measuring how the technology performs on people of different races and genders.
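An audit of this kind boils down to computing error rates per demographic group and comparing the gap. A minimal sketch, with invented illustrative numbers chosen to mirror the roughly 1 percent vs 35 percent figures quoted above:

```python
# Sketch: per-group error rates for a classifier audit. The data here is
# fabricated for illustration; a real audit uses labelled benchmark photos.
from collections import defaultdict

def error_rates_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: 100 examples per group, error counts matching the quoted gap
records = (
    [("lighter_male", "m", "m")] * 99 + [("lighter_male", "f", "m")] * 1 +
    [("darker_female", "f", "f")] * 65 + [("darker_female", "m", "f")] * 35
)
rates = error_rates_by_group(records)
print(rates)  # {'lighter_male': 0.01, 'darker_female': 0.35}
```

The point of reporting rates per group rather than one aggregate accuracy is exactly what the study demonstrates: a single overall number can hide a 34-point gap between groups.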

Research also shows that the speech and text examples used to train machine-learning applications are likely to introduce bias. Such problems have been evident in popular tools such as Google Translate. When translating from Turkish to English, Google Translate matched some jobs and situations with men and others with women (for instance, the sentence “o bir aşçı” was translated as “she is a cook”, while “o bir mühendis” became “he is an engineer”), and the sexist bias of these translations has, of course, been the subject of debate.
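A probe for this kind of bias can be sketched as follows. The `translate` function below is a stand-in stub that merely reproduces the behaviour described above, not a real machine-translation API; since the Turkish pronoun “o” is gender-neutral, any “he”/“she” in the output is injected by the model, not present in the source:

```python
# Sketch: probing a translation system for gendered-pronoun bias.
# `translate` is a hypothetical stub; a real audit would call an MT service.
def translate(sentence_tr):
    stub = {
        "o bir aşçı": "she is a cook",
        "o bir mühendis": "he is an engineer",
    }
    return stub[sentence_tr]

def gendered_pronoun(english):
    """Return 'he'/'she' if the translation opens with one, else None."""
    first = english.split()[0].lower()
    return first if first in ("he", "she") else None

# Every probe sentence is gender-neutral in Turkish
probes = ["o bir aşçı", "o bir mühendis"]
choices = {p: gendered_pronoun(translate(p)) for p in probes}
print(choices)  # {'o bir aşçı': 'she', 'o bir mühendis': 'he'}
```

Running such probes over many occupation words makes the pattern quantifiable rather than anecdotal.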

As most of you may remember, a recent example of biased AI is a chat application developed by Microsoft. In 2016, Microsoft launched a bot called Tay, which learned human behaviour using artificial-intelligence algorithms and interacted with other users on Twitter using what it had learned. Tay was designed to learn to communicate with people and to tweet from data provided by other Twitter users. Within sixteen hours, the tweets it generated from that data had become sexist and pro-Hitler. On March 25, 2016, Microsoft had to shut Tay down, apologizing to all users for the unwanted aggressive tweets.

In the text of the apology, Microsoft stated that the artificial intelligence had “learned with both positive and negative interactions with people” and that therefore “the problem is as social as it is technical”. In fact, this seems to be the heart of the entire discussion. It is also clear that although Tay was taught very well to imitate human behaviour, it was never taught to behave correctly or morally.

As all these examples clearly show, the racist, sexist, or in some cases status-based bias produced by artificial intelligence arises from the data sets used to train it. Those data sets are, of course, largely collected from the internet, the biggest resource available. Systems like Microsoft’s Tay or Google Translate learn which words are used, how, and alongside which other words; from that they try both to capture meaning and to produce natural-language responses to what they understood. While learning, the algorithm establishes statistical relations between words in the data sets gathered from the internet. Some of these relations have causes no human can readily explain, but in every case they are not produced by the AI itself: they already exist in the data set it uses. That is why a system can match feminine pronouns with cooking, cleaning, or secretarial jobs and masculine pronouns with engineering. In other words, the issue is not the prejudices of artificial intelligence but the data sets used in the learning processes of its algorithms; the racist and sexist content of the internet from which the data is collected is what makes AI produce biases.
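The statistical word relations described above can be shown in miniature with a toy co-occurrence count. Real systems learn from web-scale corpora, but the mechanism is the same: the model’s “association” is just the frequency with which words appear together in its training data.

```python
# Sketch: co-occurrence counts producing the gendered associations
# described above. The corpus is a tiny illustrative stand-in.
from collections import Counter
from itertools import combinations

corpus = [
    "she is a cook", "she is a cook", "she is a nurse",
    "he is an engineer", "he is an engineer", "he is a cook",
]

def cooccurrence(sentences):
    """Count, symmetrically, how often word pairs share a sentence."""
    counts = Counter()
    for s in sentences:
        for a, b in combinations(s.split(), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    return counts

counts = cooccurrence(corpus)

def preferred_pronoun(word, counts):
    # The pronoun the data statistically associates with `word`
    return "she" if counts[("she", word)] >= counts[("he", word)] else "he"

print(preferred_pronoun("cook", counts))      # 'she' (2 vs 1)
print(preferred_pronoun("engineer", counts))  # 'he'  (0 vs 2)
```

Nothing in the code “prefers” a gender; the skew is entirely inherited from the corpus, which is the article’s point.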

As the Microsoft statement said, social causes rather than technical ones lie at the root of the problem. AI learns from data produced by real people; it can learn to behave like a human and can analyse data far faster than the human mind, but in the end it cannot learn whether that behaviour is right or wrong. Then again, do people always act “good” and “right” in the real world? Perhaps, as those who claim artificial intelligence is not biased argue, AI produces the most realistic results, while what we expect are results suited to an ideal world. Given the inequalities and prejudices of the world we live in, and the bias in historically produced data, it is no surprise that AI applications also make biased decisions and reflect real-world bias in them. And when answering the question “would you kill one person to save five?”, it is not unlikely that an AI would take the race, sex, or status of those people into account — making the dilemma deeper still.

Humans Shouldn’t Be the Only Source in AI Training

Maybe it is not a very good idea for artificial intelligence to learn merely from people. Alternative training approaches, data sets meticulously prepared and cleansed of prejudice and bias as far as possible, and explainable algorithms that show how the AI reached each result will certainly allow us to make progress on these problems. Once those are in place, there may even be things that people can learn from AI. Then it may also become possible for us to negotiate the trolley dilemma and its variations with AI.
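One simple data-side mitigation in the spirit of the “meticulously prepared data sets” mentioned above is rebalancing: upsampling under-represented groups before training so the model does not simply absorb the skew. A minimal, illustrative sketch (the grouping key and data are invented):

```python
# Sketch: upsampling under-represented groups in a skewed training set.
# Illustrative only; real pipelines also consider reweighting or collection.
import random

def rebalance(examples, key):
    """Upsample each group so all groups match the largest group's size."""
    groups = {}
    for ex in examples:
        groups.setdefault(key(ex), []).append(ex)
    target = max(len(members) for members in groups.values())
    rng = random.Random(0)  # fixed seed for reproducibility
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Skewed toy data: 90 'he' examples vs 10 'she' examples for one occupation
data = [("engineer", "he")] * 90 + [("engineer", "she")] * 10
balanced = rebalance(data, key=lambda ex: ex[1])
print(len(balanced))  # 180: both pronoun groups now have 90 examples
```

Rebalancing does not remove bias from the world, only from the sample the model sees, which is why the article also calls for explainable algorithms as a complement.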

Source: https://www.sestek.com/2020/02/a-new-world-without-artificial-stereotypes-and-biases-with-artificial-intelligence/

Publish Date: February 7, 2020

