ProPublica’s investigation revealed that COMPAS, a risk-assessment algorithm used in US courts, and the AI behind the system tend to label Black defendants as higher risk than white defendants.
The famous trolley dilemma in ethical philosophy asks: “Would you kill one person to save five?” In this question, you are asked to imagine you are standing beside some tram tracks. In the distance, you spot a runaway trolley hurtling down the tracks towards five workers who cannot hear it coming. Even if they do spot it, they won’t be able to move out of the way in time.
As this disaster looms, you glance down and see a lever connected to the tracks. You realise that if you pull the lever, the tram will be diverted down a second set of tracks away from the five unsuspecting workers. However, down this side track is one lone worker, just as oblivious as his colleagues.
So, would you pull the lever, leading to one death but saving five?
This is the crux of the classic thought experiment known as the trolley dilemma, developed by philosopher Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985.
The trolley dilemma allows us to think through the consequences of an action and consider whether its moral value is determined solely by its outcome.
The trolley dilemma has since proven itself to be a remarkably flexible tool for probing our moral intuitions, and has been adapted to apply to various other scenarios, such as war, torture, drones, abortion and euthanasia.
Of course, there is no single correct, moral answer to this question about how people decide on an action. However, surveys suggest that most people answer “yes, I would pull the lever; I can sacrifice one worker to save the lives of five,” and many consider this answer the moral one.
Today this dilemma has moved beyond philosophy and onto the agenda of artificial intelligence. Although no AI implementation can yet think like a human and make moral judgments, scientists often say that we are approaching that point, so how AI might resolve such dilemmas is of the utmost importance. Especially considering that driverless cars are expected in traffic within the next ten years, AI is thought to face decisions that must produce morally acceptable outcomes, even if moral reasoning is not expected of it. At the same time, it is often noted that AI applications, and robots equipped with AI, may pose a danger even greater than unemployment: racist and sexist bias in the decisions AI makes. Research on the results of AI algorithms used in a number of experiments and decision-making processes gives an idea of the magnitude of this danger.
A recent study conducted at MIT is particularly striking. Facial-analysis software expected to recognize and distinguish the thousands of photos uploaded to it identified white faces almost flawlessly but made serious errors on Black faces. When the person in the photo was a white man, the software was right 99 percent of the time. But the darker the skin, the more errors arose, up to nearly 35 percent for images of darker-skinned women, according to the study, which broke fresh ground by measuring how the technology performs on people of different races and genders.
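The kind of disaggregated evaluation described above can be sketched in a few lines: group each prediction by demographic category and compare error rates. The data below is purely illustrative, not taken from the actual study.

```python
# Minimal sketch of a disaggregated accuracy audit: error rate per group.
# The prediction records here are hypothetical placeholders.
from collections import defaultdict

predictions = [
    # (demographic group, was the classifier correct?)
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False), ("darker-skinned female", True),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in predictions:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%}")
```

A single aggregate accuracy number would hide exactly the disparity this per-group breakdown exposes, which is why the study’s methodology was notable.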
Research shows that the examples used to train a machine-learning application are likely to introduce bias. Such problems have been evident in popular tools such as Google Translate. While translating from Turkish to English, Google Translate matched some jobs and situations with men and others with women (for instance, “o bir aşçı” was translated as “she is a cook” while “o bir mühendis” became “he is an engineer,” even though the Turkish pronoun “o” is gender-neutral), and of course the sexist bias of these translations has been the subject of debate.
As most of you may remember, a recent example of biased AI is an application developed by Microsoft. In 2016, Microsoft launched a chatbot called Tay, which learned human behavior using artificial intelligence algorithms and interacted with other users on Twitter based on what it learned. Tay was designed to learn to communicate with people and tweet using data provided by other Twitter users. Within sixteen hours, the tweets it created from the data it collected became sexist and pro-Hitler. On March 25, 2016, Microsoft had to shut Tay down, apologizing to all users for these unwanted aggressive tweets.
In the text of the apology, Microsoft stated that “artificial intelligence has learned with both positive and negative interactions with people” and therefore “the problem is as social as it is technical.” In fact, this seems to be the highlight of the entire discussion: although Tay was taught very well to imitate human behavior, it was not taught to behave correctly or morally.
As all these examples clearly show, the racist, sexist, or in some cases status-based biases produced by artificial intelligence arise from the data sets used to train it. The data sets used by AI algorithms are, of course, largely collected from the internet, the biggest available resource. Microsoft’s Tay, tweeting and interacting with people, and Google Translate both try to learn which words are used, how, and alongside which other words, so that they can both capture meaning and produce natural-language responses to what they understand. While learning, the algorithm establishes statistical associations between words based on how often they appear together in the data sets collected from the internet. Some of these associations may have causes no human can discern, but in any case they are not produced by the AI itself; they are associations that already exist in the data it uses. That is why it can match feminine pronouns with cooking, cleaning, or secretarial jobs and masculine pronouns with engineering. In other words, the issue is not the prejudices of artificial intelligence but the data sets used in the learning processes of its algorithms. That is, the racist and sexist content of the internet, from which this data is collected, makes AI reproduce biases.
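The mechanism described above can be illustrated with a toy model (this is not Google Translate’s actual method): a “translator” choosing an English pronoun for the gender-neutral Turkish “o” purely from co-occurrence counts in a hypothetical biased corpus.

```python
# Toy illustration: pronoun choice driven only by corpus co-occurrence
# statistics. The corpus below is a made-up, deliberately skewed sample.
from collections import Counter

corpus = [
    ("she", "cook"), ("she", "cook"), ("she", "secretary"),
    ("he", "engineer"), ("he", "engineer"), ("he", "cook"),
]

counts = Counter(corpus)

def pick_pronoun(profession):
    # Pick whichever pronoun co-occurs with the profession more often.
    she = counts[("she", profession)]
    he = counts[("he", profession)]
    return "she" if she >= he else "he"

print(pick_pronoun("cook"))      # the skewed corpus favors "she"
print(pick_pronoun("engineer"))  # the skewed corpus favors "he"
```

The model has no opinion about gender; it merely maximizes over frequencies. Yet because the corpus is skewed, its output is skewed, which is exactly how statistical associations in training data surface as apparent prejudice.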
As the Microsoft statement said, social causes rather than technical ones lie at the root of the problem. While AI learns from data produced by real people, it can learn to behave like a human and can analyze data far faster than the human mind, but it ultimately cannot learn whether a behavior is right or wrong. Then again, do people always act “good” and “right” in the real world? Perhaps, as those who claim artificial intelligence is not biased argue, AI produces the most realistic results, while what we expect are results suited to an ideal world. Considering that the world we live in contains inequalities and prejudices, and that historically produced data is biased, it is no surprise that AI applications also make biased decisions that reflect real-world bias. And while answering the question “Would you kill one person to save five?”, it is not unlikely that an AI would take the race, sex, or status of those people into account, making the dilemma even deeper.
Humans shouldn’t be the only source for AI training
Perhaps it is not a very good idea for artificial intelligence to learn only from people. Alternative ways of training AI, meticulously prepared data sets cleaned of prejudice and bias as far as possible, and algorithms that can show how the AI arrived at a given result will certainly allow us to make progress on these problems. Once those exist, there may even be things people can learn from AI. Then it may also become possible for us to negotiate the trolley dilemma and its variations with AI.
Publish Date: February 7, 2020 5:00 AM