A ProPublica investigation revealed that COMPAS, an AI-based risk-assessment algorithm used in US courts, tended to rate black defendants as higher risk than white defendants.
The famous trolley dilemma in ethical philosophy asks: “Would you kill one person to save five?” You are asked to imagine standing beside some tram tracks. In the distance, you spot a runaway trolley hurtling down the tracks towards five workers who cannot hear it coming. Even if they do spot it, they will not be able to move out of the way in time.
As this disaster looms, you glance down and see a lever connected to the tracks. You realise that if you pull the lever, the tram will be diverted down a second set of tracks away from the five unsuspecting workers. However, down this side track is one lone worker, just as oblivious as his colleagues.
So, would you pull the lever, leading to one death but saving five?
This is the crux of the classic thought experiment known as the trolley dilemma, developed by philosopher Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985.
The trolley dilemma allows us to think through the consequences of an action and consider whether its moral value is determined solely by its outcome.
The trolley dilemma has since proven itself to be a remarkably flexible tool for probing our moral intuitions, and has been adapted to apply to various other scenarios, such as war, torture, drones, abortion and euthanasia.
Of course, there is no single morally correct answer to this question about how people decide on an action. In surveys, however, most people say they would pull the lever, sacrificing one worker to save the lives of five, and many judge that choice to be the moral one.
Today this dilemma has moved beyond philosophy and onto the agenda of artificial intelligence. Although no AI implementation can yet think like a human and make moral judgments, scientists often say we are approaching that point, so how AI might resolve such dilemmas is of utmost importance. With driverless cars expected on the roads within the next ten years, AI will have to make decisions with moral consequences, even if moral reasoning is not expected of it. At the same time, it is often said that AI applications and AI-equipped robots may pose a danger greater than leaving people unemployed: racist and sexist bias in the decisions they make. Research on the outputs of AI algorithms used in a number of experiments and decision-making processes gives an idea of the magnitude of this danger.
A recent MIT study is particularly striking. An AI application expected to recognize and distinguish faces in a thousand uploaded photos identified white faces almost perfectly but made serious mistakes on black faces. When the person in the photo was a white man, the software was right 99 percent of the time.
But the darker the skin, the more errors arose, up to nearly 35 percent for images of darker-skinned women, according to the study, which broke fresh ground by measuring how the technology performs on people of different races and genders.
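The kind of audit described above comes down to measuring a classifier's error rate separately for each demographic group instead of reporting one overall accuracy. The sketch below shows the idea on synthetic records; the group labels and counts are invented for illustration and are not the study's actual data.

```python
from collections import Counter

# Synthetic evaluation records: (demographic_group, was_prediction_correct).
# The proportions loosely echo the disparities reported in the article.
records = (
    [("lighter-skinned men", True)] * 99 + [("lighter-skinned men", False)] * 1
    + [("darker-skinned women", True)] * 65 + [("darker-skinned women", False)] * 35
)

def error_rate_by_group(records):
    """Return each group's share of incorrect predictions."""
    total, errors = Counter(), Counter()
    for group, correct in records:
        total[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / total[group] for group in total}

rates = error_rate_by_group(records)
print(rates)  # {'lighter-skinned men': 0.01, 'darker-skinned women': 0.35}
```

A single aggregate accuracy (here 82 percent) would hide exactly the disparity the per-group breakdown exposes.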
Research shows that the examples used to train a machine-learning application are likely to introduce bias. Such problems have been evident in popular tools such as Google Translate. When translating from Turkish, which has a genderless pronoun, to English, Google Translate matched some jobs and situations with men and others with women: the sentence “o bir aşçı” was translated as “she is a cook”, while “o bir mühendis” became “he is an engineer”. The sexist bias of these translations has, of course, been the subject of debate.
As most of you may remember, a recent example of biased AI is a chat application developed by Microsoft. In 2016, Microsoft launched Tay, a chatbot that learned human behavior using artificial-intelligence algorithms and interacted with other users on Twitter based on what it learned. Tay was designed to learn to communicate with people and to tweet using data provided by other Twitter users. Within sixteen hours, the tweets it generated from the data it collected had become sexist and pro-Hitler. On March 25, 2016, Microsoft had to shut Tay down, apologizing to all users for these unwanted aggressive tweets.
In the text of the apology, Microsoft stated that the “artificial intelligence has learned with both positive and negative interactions with people” and therefore “the problem is as social as it is technical”. In fact, this seems to be the heart of the entire discussion. It is also clear that although Tay was taught very well to imitate human behavior, it was never taught to behave correctly or morally.
As all these examples clearly show, the racist, sexist, or in some cases status-based biases produced by artificial intelligence arise from the datasets used to train it. Those datasets are, naturally, collected largely from the internet, the biggest available resource. Systems like Microsoft’s Tay, which tried to tweet and interact with people, or Google Translate learn which words are used, how, and alongside which other words, so that they can both capture meaning and produce natural-language responses to what they understand. While learning, the algorithm establishes statistical relations between words in the datasets provided from the internet. Sometimes these are relations whose cause humans cannot discern; but in every case they are not produced by the AI itself. They already exist in the dataset it uses. That is why a system can match feminine pronouns with cooking, cleaning, or secretarial jobs, and masculine pronouns with engineering. In other words, the issue lies not in the prejudices of artificial intelligence but in the datasets used in its learning process: the racist and sexist content of the internet, from which this data is collected, makes AI produce biases.
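The co-occurrence statistics described above can be made concrete with a few lines of code. The sketch below counts how often words appear in the same sentence as a gendered pronoun in a tiny, deliberately biased synthetic corpus; any model trained on such counts inherits the association.

```python
from collections import Counter
from itertools import combinations

# A tiny, deliberately biased synthetic "corpus" standing in for web text.
corpus = [
    "she is a cook",
    "she cleans the house",
    "she is a secretary",
    "he is an engineer",
    "he builds an engine",
    "she is a cook",
]

# Count how often each word co-occurs with a pronoun within a sentence.
cooccur = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):
        if a in ("he", "she"):
            cooccur[(a, b)] += 1

# A model trained on these statistics would link "cook" to "she"
# simply because the data does: the bias lives in the corpus.
print(cooccur[("she", "cook")], cooccur[("he", "cook")])  # 2 0
```

The algorithm itself is neutral; the skewed counts come entirely from the text it was given, which is the article's point.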
As the Microsoft statement said, social causes rather than technical ones lie at the root of the problem. When AI learns from data produced by real people, it can learn to behave like a human and can analyze data far faster than the human mind, but in the end it cannot learn whether that behavior is right or wrong. Then again, do people always act “well” and “rightly” in the real world? Perhaps, as those who argue that artificial intelligence is not biased claim, AI produces the most realistic results, while what we expect are results fit for an ideal world. Considering that there are inequalities and prejudices in the world we live in, and that historically produced data is biased, it is no surprise that AI applications also make biased decisions that reflect real-world bias. And when answering the question “Would you kill one person to save five people?”, it is not unlikely that an AI would take the race, sex, or status of those people into account, making the dilemma deeper still.
Humans Shouldn’t Be the Only Source in AI Training
Perhaps it is not a very good idea for artificial intelligence to learn merely from people. Alternative ways of learning, datasets that are meticulously prepared and cleaned of prejudice and bias as far as possible, and algorithms that can show how the AI reached a given result will certainly allow us to make progress on these problems. Once that is possible, there may even be things people can learn from AI. It may then also be possible for us to negotiate the trolley dilemma and its variations with AI.
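One simple form of the dataset curation suggested above is rebalancing: resampling the training data so that each group is equally represented before the model ever sees it. The sketch below is a minimal illustration on synthetic data; the group names and sizes are invented, and real curation pipelines involve far more than sampling.

```python
import random
from collections import Counter

random.seed(0)  # reproducible sketch

# Synthetic, imbalanced training set: examples tagged with a sensitive group.
dataset = (
    [("group_a", f"example_{i}") for i in range(90)]
    + [("group_b", f"example_{i}") for i in range(10)]
)

def balanced_sample(dataset, n_per_group):
    """Resample so each group contributes the same number of examples."""
    by_group = {}
    for group, item in dataset:
        by_group.setdefault(group, []).append(item)
    sample = []
    for group, items in by_group.items():
        # Sample with replacement so small groups can be upsampled.
        sample += [(group, x) for x in random.choices(items, k=n_per_group)]
    return sample

balanced = balanced_sample(dataset, 50)
print(Counter(group for group, _ in balanced))  # each group appears 50 times
```

Rebalancing does not remove bias encoded inside the examples themselves, but it prevents a model from simply ignoring an underrepresented group.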
Publish Date: February 7, 2020 5:00 AM
A New World Without Artificial Stereotypes and Biases with Artificial Intelligence: Why Not? (February 7, 2020)