

A New World Without Artificial Stereotypes and Biases with Artificial Intelligence: Why Not? - Sestek - ContactCenterWorld.com Blog


ProPublica’s investigation revealed that COMPAS, a risk-assessment algorithm, and the AI behind the system tended to identify black defendants as higher risk than white defendants.

The famous trolley dilemma in ethical philosophy asks: “Would you kill one person to save five?” You are asked to imagine that you are standing beside some tram tracks. In the distance, you spot a runaway trolley hurtling down the tracks towards five workers who cannot hear it coming. Even if they do spot it, they won’t be able to move out of the way in time.

As this disaster looms, you glance down and see a lever connected to the tracks. You realise that if you pull the lever, the trolley will be diverted down a second set of tracks, away from the five unsuspecting workers. However, down this side track is one lone worker, just as oblivious as his colleagues.

So, would you pull the lever, leading to one death but saving five?

This is the crux of the classic thought experiment known as the trolley dilemma, developed by philosopher Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985.

The trolley dilemma allows us to think through the consequences of an action and consider whether its moral value is determined solely by its outcome.

The trolley dilemma has since proven itself to be a remarkably flexible tool for probing our moral intuitions, and has been adapted to apply to various other scenarios, such as war, torture, drones, abortion and euthanasia.

Of course, there is no single correct moral answer to this question of how people decide on an action. However, it is estimated that many people answer “yes, I would pull the lever; I can sacrifice one worker to save the lives of five”, and many people also find that answer moral.

Today, beyond philosophy, this dilemma is back on the agenda through its adaptation to artificial intelligence. Although there is no AI implementation that can think like a human and make moral judgments, scientists often say that we are approaching that point, so how AI might resolve such dilemmas is of utmost importance. Especially considering that driverless cars are expected on the roads within the next ten years, AI is thought to face decisions of this kind and to have to reach morally acceptable outcomes. On the other hand, AI applications and AI-equipped robots are often said to pose a danger even greater than leaving people unemployed: racist and sexist bias in the decisions AI makes. Research on the results of AI algorithms used in a number of experiments and decision-making processes gives an idea of the magnitude of this danger.

A recent study conducted by MIT is particularly remarkable. In this research, an artificial intelligence application that was expected to recognize and distinguish among a thousand uploaded photos identified white faces almost perfectly but made large errors on black faces. When the person in the photo was a white man, the software was right 99 percent of the time.

But the darker the skin, the more errors arise, reaching nearly 35 percent for images of darker-skinned women, according to a new study that breaks fresh ground by measuring how the technology performs on people of different races and genders.
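The gap described above is easy to reproduce in miniature. Below is a minimal sketch, with hypothetical numbers chosen only to mirror the 1 percent / 35 percent figures mentioned above (not the study’s actual data), of how an aggregate accuracy figure can hide large per-group error rates:

```python
# Toy demonstration: overall error rate masks a large subgroup disparity.
# Group names and counts are illustrative, not the MIT study's data.
from collections import defaultdict

# Each record: (group, prediction_was_correct)
results = [("lighter_male", True)] * 99 + [("lighter_male", False)] * 1 \
        + [("darker_female", True)] * 65 + [("darker_female", False)] * 35

def error_rate(records):
    return sum(1 for _, ok in records if not ok) / len(records)

by_group = defaultdict(list)
for group, ok in results:
    by_group[group].append((group, ok))

print(f"overall error: {error_rate(results):.2%}")        # 18.00%
for group, recs in by_group.items():
    print(f"{group}: {error_rate(recs):.2%}")             # 1.00% vs 35.00%
```

Reporting only the 18 percent aggregate would hide the fact that one group experiences a 35-times-higher error rate than the other, which is exactly why per-group evaluation matters.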

Research shows that the language examples used to train a machine learning application are likely to introduce bias. Such problems have been evident in popular tools such as Google Translate. While translating from Turkish to English, Google Translate matched a number of jobs and situations with men and others with women (for instance, “o bir aşçı” was translated as “she is a cook” while “o bir mühendis” was translated as “he is an engineer”, even though the Turkish pronoun “o” is gender-neutral), and the sexist bias of these translations has, of course, been the subject of debate.

As most of you may remember, a recent example of biased AI is an application developed by Microsoft. In 2016, Microsoft launched a chatbot called Tay, which learned human behavior using artificial intelligence algorithms and interacted with other users on Twitter based on what it learned. Tay was designed to learn to communicate with people and to tweet using data provided by other users on Twitter. Within sixteen hours, the tweets it generated from the data it collected from Twitter users had become sexist and pro-Hitler. On March 25, 2016, Microsoft had to shut Tay down, apologizing to all users for these unwanted aggressive tweets.

In the text of the apology, Microsoft stated that “artificial intelligence has learned with both positive and negative interactions with people” and that therefore “the problem is as social as it is technical”. In fact, this seems to be the highlight of the entire discussion. It is also clear that although Tay was taught very well to imitate human behavior, it was not taught to behave correctly or morally.

As all these examples clearly show, the racist, sexist, or in some cases status-based bias produced by artificial intelligence arises from the datasets used to train it. The datasets used by AI algorithms are, of course, largely collected from the internet, the biggest available resource. Microsoft’s Tay, trying to tweet and interact with people, and Google Translate, trying to learn which words are used together and how, both attempt to capture meaning and to produce natural-language answers from what they have understood. While learning, artificial intelligence establishes relationships through its algorithm: which words are used with which other words, and how often, in the datasets gathered from the internet. These can sometimes be relationships whose cause is not understood by humans. But in any case, they are not relationships that artificial intelligence produced by itself; they already exist in the dataset it uses. That is why it can match feminine pronouns with cooking, cleaning, or secretarial jobs and masculine pronouns with engineering. In other words, the issue is not the prejudices of artificial intelligence but the datasets used in the learning process: the racist and sexist content of the internet, from which this data is collected, makes AI produce biases.
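The mechanism described above can be sketched in a few lines. The corpus and counts below are entirely hypothetical; the point is only that a model which picks the statistically most frequent pronoun for a profession reproduces whatever skew exists in its training text:

```python
# Toy sketch of co-occurrence-driven bias: the "model" simply takes an
# argmax over pronoun/profession co-occurrence counts, a caricature of
# what a statistical translator does when Turkish's gender-neutral "o"
# must become "he" or "she" in English. Corpus is invented for illustration.
from collections import Counter

corpus = [
    ("she", "cook"), ("she", "cook"), ("he", "cook"),
    ("he", "engineer"), ("he", "engineer"), ("he", "engineer"),
    ("she", "engineer"),
]

counts = Counter(corpus)

def most_likely_pronoun(profession):
    # Pick whichever pronoun co-occurs with the profession most often.
    return max(("he", "she"), key=lambda p: counts[(p, profession)])

print(most_likely_pronoun("cook"))      # "she" in this skewed toy corpus
print(most_likely_pronoun("engineer"))  # "he" in this skewed toy corpus
```

Nothing in the argmax itself is sexist; swap in a balanced corpus and the outputs change, which is the article’s point that the bias lives in the data, not the algorithm.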

As the Microsoft statement said, social causes rather than technical ones lie at the root of the problem. While AI learns from data produced by real people, it can learn to behave like a human and can analyze data much faster than the human mind, but in the end it cannot learn whether a behavior is right or wrong. Then again, do people always act “good” and “right” in the real world? Perhaps, as those who claim artificial intelligence is not biased argue, AI produces the most realistic results, while the expectation is to see results fit for an ideal world. Considering that there are inequalities and prejudices in the world we live in, and that historically produced data is biased, it is no surprise that AI applications also make biased decisions and carry real-world bias into their decisions. And when answering the question “Would you kill one person to save five?”, it is not unlikely that AI would take the race, sex, or status of those people into account, making the dilemma even deeper.

Humans shouldn’t be the only source in AI training

Maybe it is not such a good idea for artificial intelligence to learn merely from people. Alternative learning methods, datasets that are meticulously prepared and cleaned of prejudice and bias as much as possible, and algorithms that can show how and why the AI reached a given result will certainly allow us to make progress on these problems. Once these are possible, there may even be things people can learn from AI. Then it may also become possible for us to negotiate the trolley dilemma and its variations with AI.
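One concrete form the “meticulously prepared datasets” suggestion can take is an audit before training. Here is a minimal sketch, on an invented dataset with hypothetical column names, of measuring how an outcome label is distributed across a sensitive attribute so that skew is caught before a model ever sees the data:

```python
# Toy pre-training dataset audit: compare the rate of a label across a
# sensitive attribute. All rows, names, and the 0.2 threshold are
# illustrative choices, not a standard.
rows = [
    {"gender": "f", "label": "engineer"}, {"gender": "f", "label": "cook"},
    {"gender": "f", "label": "cook"},     {"gender": "f", "label": "cook"},
    {"gender": "m", "label": "engineer"}, {"gender": "m", "label": "engineer"},
    {"gender": "m", "label": "engineer"}, {"gender": "m", "label": "cook"},
]

def label_rate(rows, gender, label):
    group = [r for r in rows if r["gender"] == gender]
    return sum(1 for r in group if r["label"] == label) / len(group)

gap = abs(label_rate(rows, "f", "engineer") - label_rate(rows, "m", "engineer"))
print(f"engineer-label gap between groups: {gap:.2f}")
if gap > 0.2:  # arbitrary illustrative threshold
    print("dataset is skewed; rebalance or reweight before training")
```

A real audit would cover more attributes and intersections, but even this simple check would have flagged the kind of profession/gender skew behind the Google Translate example.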

Source: https://www.sestek.com/2020/02/a-new-world-without-artificial-stereotypes-and-biases-with-artificial-intelligence/

Publish Date: February 7, 2020


2024 Buyers Guide Workforce Management

 
1.) 
Alvaria

Alvaria Workforce
Alvaria Workforce (formerly Aspect Workforce Management) is a high-performance contact center software solution that provides the forecasting, planning, scheduling, employee self-service and real-time agent tracking to ensure that all agents and supervisors are productive, engaged in their work and delivering an exceptional customer experience.

2.) 
Alvaria

Noble ShiftTrack WFM
Maximize the efficiency of your contact center and meet/exceed customer expectations with workforce engagement tools that help you accurately forecast workloads, match the right resources to your needs and keep agents motivated. More than just scheduling agents and tracking shifts, Noble’s ShiftTrack WEM solutions optimize labor costs, manage capacity more effectively and improve service levels.

3.) 
Calabrio

Calabrio ONE
Calabrio ONE offers contact centres the complete toolset to unlock the tremendous value buried within customer interaction data and use it to transform the entire business. One seamless solution combines a fully integrated workforce optimization suite with powerful voice-of-the-customer analytics tools deployed—in the cloud, on-premises, or in a hybrid environment.

Capture every customer interaction across all channels. Extract predictive and prescriptive insights. Elevate customer experiences, improve employee engagement and increase operational efficiency. Then, extend customer-centric strategies across the business to accelerate sales, drive innovation and move your business forward.

4.) 
eGain Corporation

eGain Solve
Rated #1 by analysts and trusted by some of the biggest brands in the world, eGain Solve helps businesses design and deliver smart, connected customer journeys across social, mobile, web, and contact centers. You can sell smarter, serve better, and know more.

5.) 
MFE International

Agyletime Cloud Workforce Management
Agyletime is an enterprise-grade, true cloud WFM solution and is channel agnostic.
It is easy to use, with simple onboarding, forecasting and better scheduling.
You can integrate it with CRMs such as SFDC, Zendesk, ServiceNow and others; with telephony systems such as AVAYA, CISCO, Genesys and other cloud telephony systems; and with independent chat systems to aggregate data for WFM omnichannel forecasting, scheduling, optimisation and reporting.

6.) 
Pointel

Genesys Workforce Management
WFM Voice Self-Service allows agents and supervisors to access and update workforce management information from any telephone. Leveraging the Genesys Voice Platform and its open, standards-based technologies such as VoiceXML, robust applications can be developed to provide "anytime, anywhere" access to WFM planning, scheduling and real-time functions. These valuable additions to the standard WFM solution can be tailored to meet the unique needs of the contact center.

WFM Voice Self-Service can be deployed either in enterprise premises or hosted in a service provider's network.

If the client is a Genesys Voice Platform customer, they can run WFM Voice Self-Service on the Genesys Voice...

7.) 
Vads

VADS Workforce Management
VADS Workforce Management is a smart tool that gives workers connectivity while working remotely: an intelligent workforce management system that aims to improve operational efficiency, with the end goal to provide first...

8.) 
QPC Ltd.

QPC WFM - Calabrio/Teleopti Specialism
QPC has a long and successful history of delivering tried and tested innovative workforce management systems, and the training and consultancy needed to ensure organisations can leverage the customer service and operational efficiency benefits workforce management principles and automated workforce management systems can deliver.
 



View more from Sestek

Recent Blog Posts:
Conversational Analytics: The Secret to Quality Customer Service - September 17, 2022
Perfecting the Airport Experience with Conversational AI - August 16, 2022
CEO Interview: Knovvu is the Beginning of a New Era for Sestek - July 6, 2022
Employee Experience, Customer Experience, Total Experience: How are They All Connected? - May 16, 2022
Speech Analytics Come to Rescue for Better CX - April 13, 2022
Voice: Still the Most Natural, the Most Comfortable and the Safest - December 20, 2021
From Single-Use Bots to Intelligent One-for-All Bots - November 11, 2021
Chatbot? Virtual Assistant? Digital Assistant? What’s The Difference? - September 21, 2021
The Evolution of Machine Learning: Explainable AI - July 13, 2021
Making Conversational AI Smarter: 4 Hints to Design an Intelligent Conversational AI Solution - May 25, 2021
