ProPublica’s investigation revealed that COMPAS, a risk-assessment algorithm used in the US criminal justice system, and the AI behind it tend to rate Black defendants as higher risk than white defendants.
The famous trolley dilemma in ethical philosophy asks: “would you kill one person to save five?” In this question, you are asked to imagine you are standing beside some tram tracks. In the distance, you spot a runaway trolley hurtling down the tracks towards five workers who cannot hear it coming. Even if they do spot it, they won’t be able to move out of the way in time.
As this disaster looms, you glance down and see a lever connected to the tracks. You realise that if you pull the lever, the tram will be diverted down a second set of tracks away from the five unsuspecting workers. However, down this side track is one lone worker, just as oblivious as his colleagues.
So, would you pull the lever, leading to one death but saving five?
This is the crux of the classic thought experiment known as the trolley dilemma, developed by philosopher Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985.
The trolley dilemma allows us to think through the consequences of an action and consider whether its moral value is determined solely by its outcome.
The trolley dilemma has since proven itself to be a remarkably flexible tool for probing our moral intuitions, and has been adapted to apply to various other scenarios, such as war, torture, drones, abortion and euthanasia.
Of course, there is no single correct moral answer to this question about how people decide on an action. Surveys suggest, however, that most people answer “yes”: they would pull the lever, sacrificing one worker to save the lives of five. Many people also consider this answer morally acceptable.
Today, beyond philosophy, this dilemma has been brought back onto the agenda by adapting it to artificial intelligence. Although no AI implementation can yet think like a human and make moral judgments, scientists often suggest that we are approaching that point. How AI might resolve such dilemmas is therefore of the utmost importance, especially considering that driverless cars are expected on the roads within the next ten years. Even if we do not expect it of them, these systems will have to make some decisions with moral consequences. At the same time, it is often argued that AI applications, and robots equipped with AI, may pose a greater danger than leaving people unemployed: racist and sexist bias and prejudice in the decisions AI makes. Research on the outputs of AI algorithms used in a number of experiments and decision-making processes gives an idea of the magnitude of this danger.
A recent study conducted at MIT is particularly striking. The facial-recognition software under test, which was expected to recognize and distinguish the thousands of photos uploaded to it, performed almost perfectly on white faces but made major mistakes on Black faces. When the person in the photo was a white man, the software was right 99 percent of the time.
But the darker the skin, the more errors arose, reaching nearly 35 percent for images of darker-skinned women, according to the study, which broke fresh ground by measuring how the technology works on people of different races and genders.
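The disparity the study reports can be made concrete with a small sketch. Given per-group ground truth and predictions, a per-group error rate is just a grouped accuracy computation; the group names and numbers below are illustrative, not the study’s data:

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the classification error rate for each demographic group.

    records: iterable of (group, true_label, predicted_label) tuples.
    Returns a dict mapping each group to its error rate.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data for illustration only (not the MIT study's numbers).
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "male"),    # misclassified
    ("darker_female", "female", "female"),
]

rates = error_rate_by_group(records)
print(rates)  # {'lighter_male': 0.0, 'darker_female': 0.5}
```

An evaluation that only looks at the overall error rate (here 25 percent) would hide exactly the kind of per-group gap the study measured.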
Research shows that the text and speech examples used to train machine-learning applications are likely to introduce bias. Such problems have been evident in popular tools such as Google Translate. When translating from Turkish, whose third-person pronoun “o” is gender-neutral, into English, Google Translate matched some jobs and situations with men and others with women (for instance, “o bir aşçı” was translated as “she is a cook”, while “o bir mühendis” became “he is an engineer”), and the sexist bias of these translations has, of course, been the subject of debate.
As many of you may remember, a recent example of biased AI is an application developed by Microsoft. In 2016, Microsoft launched a chatbot called Tay, which learned human behavior using artificial-intelligence algorithms and interacted with other users on Twitter based on what it had learned. Tay was designed to learn to communicate with people and to tweet using data provided by other Twitter users. Within sixteen hours, the tweets it generated from data collected from Twitter users had become sexist and pro-Hitler. On March 25, 2016, Microsoft had to shut Tay down, apologizing to all users for these unwanted aggressive tweets.
In the text of the apology, Microsoft stated that the artificial intelligence had “learned with both positive and negative interactions with people” and that, therefore, “the problem is as social as it is technical”. This seems to be the highlight of the entire discussion. It is also clear that although Tay was taught very well to imitate human behavior, it was never taught to behave correctly or morally.
As all these examples clearly show, the racist, sexist, or in some cases status-based bias produced by artificial intelligence arises from the data sets used to train it. Those data sets are, of course, largely collected from the internet, the biggest resource available. Applications such as Microsoft’s Tay, which tried to tweet and interact with people, or Google Translate, try to learn which words are used, how, and together with which other words; they try both to capture meaning and to produce natural-language answers based on what they have understood. While learning, the algorithm establishes statistical relationships between words as they appear together in the data sets gathered from the internet. Sometimes these are relationships whose cause no human can explain. But in any case, they are not produced by the AI itself; they are relationships that already exist in the data set it uses. That is why it can match feminine pronouns with cooking, cleaning, or secretarial jobs, and masculine pronouns with engineering. In other words, the issue is not the prejudices of artificial intelligence but the data sets used in the learning processes of algorithms. That is, the racist and sexist content of the internet, from which this data is collected, makes AI reproduce those biases.
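The statistical word relationships described above can be illustrated with a minimal co-occurrence count over a toy corpus. The corpus and word choices here are hypothetical, chosen only to show the mechanism: the counts mirror whatever associations the input text contains, nothing more.

```python
from collections import Counter

# Toy corpus standing in for scraped internet text (hypothetical).
corpus = [
    "she is a cook and she cleans the kitchen",
    "he is an engineer and he builds bridges",
    "she works as a secretary",
    "he works as an engineer",
]

def cooccurrence_with(pronoun, sentences):
    """Count words appearing in the same sentence as the given pronoun."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if pronoun in words:
            counts.update(w for w in words if w != pronoun)
    return counts

she = cooccurrence_with("she", corpus)
he = cooccurrence_with("he", corpus)

# The counter has no notion of fairness; it only reflects the corpus.
print(she["cook"], he["cook"])          # 1 0
print(she["engineer"], he["engineer"])  # 0 2
```

Real systems learn far richer representations than raw counts, but the principle is the same: if the corpus pairs “she” with cooking and “he” with engineering, the model will too.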
As the Microsoft statement said, social rather than technical causes lie at the root of the problem. When AI learns from data produced by real people, it can learn to behave like a human and can analyze that data much faster than the human mind, but in the end it cannot learn whether the behavior is right or wrong. Then again, do people always act “well” and “rightly” in the real world? Perhaps, as those who claim that artificial intelligence is not biased argue, AI simply produces the most realistic results, while what we expect are results suited to an ideal world. Considering the inequalities and prejudices of the world we live in, and that historically produced data is biased, it is no surprise that AI applications also make biased decisions that carry real-world bias. On the other hand, when answering the question “Would you kill one person to save five people?”, it is not unlikely that an AI would take into account the race, sex, or status of those people, making the dilemma even deeper.
Humans Shouldn’t Be the Only Source in AI Training
Perhaps it is not such a good idea for artificial intelligence to learn merely from people. Alternative approaches will certainly help us make progress on these problems: data sets that are meticulously prepared and cleaned of prejudice and bias as far as possible, and explainable algorithms that show how the AI arrived at which result. Once these are in place, there may even be things that people can learn from AI. Then it may also become possible for us to negotiate the trolley dilemma and its variations together with AI.
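One piece of the “meticulously prepared data set” idea above can be sketched as a simple reweighting step: give each training example a weight inversely proportional to the frequency of its group, so that an over-represented group does not dominate learning. This is a minimal sketch of one common technique under hypothetical group labels, not a complete debiasing recipe:

```python
from collections import Counter

def balancing_weights(groups):
    """Weight each example so every group contributes equally in total.

    groups: list with one group label per training example.
    Returns a list of per-example weights.
    """
    counts = Counter(groups)
    n_groups = len(counts)
    n_total = len(groups)
    # Each group's weights sum to n_total / n_groups.
    return [n_total / (n_groups * counts[g]) for g in groups]

# Hypothetical imbalanced training set: three "male" examples, one "female".
groups = ["male", "male", "male", "female"]
weights = balancing_weights(groups)
print(weights)  # each group's weights sum to 2.0
```

Most training frameworks accept such per-example weights, so this kind of correction can be applied without collecting new data, though it cannot fix associations that are baked into the text of the examples themselves.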
Publish Date: February 7, 2020