Learning English with TED Talks: What moral decisions should driverless cars make, by Iyad Rahwan

What moral decisions should driverless cars make?


Link: https://www.ted.com/talks/iyad_rahwan_what_moral_decisions_should_driverless_cars_make

Speaker: Iyad Rahwan

Date: September 2016

Contents

  • What moral decisions should driverless cars make?
    • Introduction
    • Vocabulary
    • Transcript
    • Summary
    • Postscript

Introduction

Should your driverless car kill you if it means saving five pedestrians? In this primer on the social dilemmas of driverless cars, Iyad Rahwan explores how the technology will challenge our morality and explains his work collecting data from real people on the ethical trade-offs we’re willing (and not willing) to make.

Vocabulary

swerve: AmE [swɜːrv] to change direction suddenly

bystander: a person who stands near and watches but does not take part; an onlooker

but the car may swerve, hitting one bystander

crash into: to collide with (something)

a bunch of: a group of

it will crash into a bunch of pedestrians crossing the street

the trolley problem: a classic thought experiment in ethics

The trolley problem is a classic ethical dilemma in philosophy and ethics, often used to explore moral decision-making and the concept of utilitarianism. It presents a scenario where a person is faced with a moral choice that involves sacrificing one life to save others. The traditional setup involves a runaway trolley hurtling down a track towards a group of people who will be killed if it continues on its path. The person facing the dilemma has the option to divert the trolley onto a different track, where it will kill one person instead of the group.

The ethical question at the heart of the trolley problem revolves around whether it is morally justifiable to actively intervene to sacrifice one life to save many others. Philosophers use variations of this scenario to explore different factors that may influence moral decision-making, such as the number of lives at stake, the role of intention, and the consequences of one’s actions.

The trolley problem has practical applications beyond philosophical thought experiments, particularly in fields like autonomous vehicle technology. Engineers and ethicists grapple with similar dilemmas when programming self-driving cars, where decisions must be made about how the vehicle should respond in potentially fatal situations.
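To make the two positions concrete, here is a minimal Python sketch contrasting a utilitarian rule (minimize total harm) with a duty-bound rule (never actively redirect harm); the scenario encoding and harm counts are invented for illustration and are not from the talk.

```python
# Toy model of the trolley problem's two classic decision rules.
# The outcomes below are illustrative assumptions, not real data.

def utilitarian_choice(outcomes):
    """Pick the action that minimizes total harm (Bentham)."""
    return min(outcomes, key=lambda action: outcomes[action])

def duty_bound_choice(outcomes):
    """Never actively intervene to redirect harm; let events take
    their course (a Kant-style, duty-bound reading)."""
    return "stay"

# Staying on course harms five people; actively diverting harms one.
outcomes = {"stay": 5, "divert": 1}

print(utilitarian_choice(outcomes))  # -> divert (total harm: 1)
print(duty_bound_choice(outcomes))   # -> stay  (no active intervention)
```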

trolley: AmE [ˈtrɑːli] a streetcar; a tram

utilitarian: AmE [ˌjuːtɪlɪˈteriən] useful, practical; of or relating to utilitarianism

Bentham says the car should follow utilitarian ethics

take its course: to develop naturally without interference; to run through its natural stages

and you should let the car take its course even if that’s going to harm more people

pamphlet: AmE [ˈpæmflət] a small, thin booklet

English economist William Forster Lloyd published a pamphlet

graze: AmE [ɡreɪz] (of livestock) to feed on growing grass

English farmers who are sharing a common land for their sheep to graze

rejuvenate: AmE [rɪˈdʒuːvəneɪt] to make young or vigorous again; to restore

the land will be rejuvenated

detriment: AmE [ˈdetrɪmənt] harm, damage

to the detriment of all the farmers

the tragedy of the commons

The tragedy of the commons is a concept in economics and environmental science that refers to a situation where multiple individuals, acting independently and rationally in their own self-interest, deplete a shared limited resource, leading to its degradation or depletion. The term was popularized by the ecologist Garrett Hardin in a famous 1968 essay.

The tragedy of the commons arises when individuals prioritize their own short-term gains over the long-term sustainability of the shared resource. Since no single individual bears the full cost of their actions, there is little incentive to conserve or manage the resource responsibly. Instead, each person maximizes their own benefit, leading to overexploitation or degradation of the resource, which ultimately harms everyone.

Classic examples of the tragedy of the commons include overfishing in open-access fisheries, deforestation of public lands, and pollution of the air and water. In each case, individuals or groups exploit the resource without considering the negative consequences for others or the sustainability of the resource itself.

Addressing the tragedy of the commons often requires collective action and the establishment of regulations, property rights, or other mechanisms to manage and protect the shared resource. By aligning individual incentives with the common good, it becomes possible to mitigate overuse and ensure the sustainable management of resources for the benefit of all.
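The dynamic is easy to see in a small simulation. Below is a minimal Python sketch of the grazing scenario; the capacity, regrowth rule, and herd sizes are made-up numbers chosen only to show how one "individually rational" extra sheep per farmer degrades the shared land.

```python
# Toy simulation of the commons: shared grassland, self-interested farmers.
# All numbers are illustrative assumptions.

CAPACITY = 16  # maximum grass the common land can hold
REGROWTH = 16  # grass regrown per season on healthy land

def season(total_sheep, grass):
    """One grazing season: sheep eat, then the land regrows.
    Land stripped bare recovers at only half the normal rate."""
    grass = max(0, grass - total_sheep)
    regrowth = REGROWTH if grass > 0 else REGROWTH // 2
    return min(CAPACITY, grass + regrowth)

FARMERS = 4
for sheep_per_farmer in (3, 4):  # 3 is sustainable; 4 adds one "rational" extra
    grass = CAPACITY
    for _ in range(5):
        grass = season(FARMERS * sheep_per_farmer, grass)
    print(f"{sheep_per_farmer} sheep per farmer -> grass after 5 seasons: {grass}")

# Output: 3 sheep each leaves the land at full capacity (16);
# 4 sheep each strips it bare, and it stabilizes degraded (8).
```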

insidious: AmE [ɪnˈsɪdiəs] developing gradually and harmfully without being noticed

the problem may be a little bit more insidious because there is not necessarily an individual human being making those decisions

jaywalking: crossing a street unlawfully or recklessly, ignoring traffic rules

punish jaywalking

zeroth: the ordinal number corresponding to zero; coming before first

Asimov introduced the zeroth law which takes precedence above all, and it’s that a robot may not harm humanity as a whole

Transcript

Today I’m going to talk
about technology and society.

The Department of Transport
estimated that last year

35,000 people died
from traffic crashes in the US alone.

Worldwide, 1.2 million people
die every year in traffic accidents.

If there was a way we could eliminate
90 percent of those accidents,

would you support it?

Of course you would.

This is what driverless car technology
promises to achieve

by eliminating the main
source of accidents –

human error.

Now picture yourself
in a driverless car in the year 2030,

sitting back and watching
this vintage TEDxCambridge video.

(Laughter)

All of a sudden,

the car experiences mechanical failure
and is unable to stop.

If the car continues,

it will crash into a bunch
of pedestrians crossing the street,

but the car may swerve,

hitting one bystander,

killing them to save the pedestrians.

What should the car do,
and who should decide?

What if instead the car
could swerve into a wall,

crashing and killing you, the passenger,

in order to save those pedestrians?

This scenario is inspired
by the trolley problem,

which was invented
by philosophers a few decades ago

to think about ethics.

Now, the way we think
about this problem matters.

We may for example
not think about it at all.

We may say this scenario is unrealistic,

incredibly unlikely, or just silly.

But I think this criticism
misses the point

because it takes
the scenario too literally.

Of course no accident
is going to look like this;

no accident has two or three options

where everybody dies somehow.

Instead, the car is going
to calculate something

like the probability of hitting
a certain group of people,

if you swerve one direction
versus another direction,

you might slightly increase the risk
to passengers or other drivers

versus pedestrians.

It’s going to be
a more complex calculation,

but it’s still going
to involve trade-offs,

and trade-offs often require ethics.
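(A minimal Python sketch of the kind of expected-harm trade-off described here; the probabilities and people-at-risk counts are invented for illustration, and a real system would estimate them from perception and prediction models.)

```python
# Expected-harm comparison across candidate maneuvers.
# Probabilities and people-at-risk counts are invented for illustration.

ACTIONS = {
    # action: list of (collision probability, people at risk)
    "continue":    [(0.90, 3)],             # likely hits the group ahead
    "swerve_left": [(0.40, 1), (0.20, 1)],  # might hit a bystander or the passenger
}

def expected_harm(consequences):
    """Sum of probability-weighted harm over possible collisions."""
    return sum(p * people for p, people in consequences)

for action, consequences in ACTIONS.items():
    print(f"{action}: expected harm {expected_harm(consequences):.2f}")
# continue: 2.70, swerve_left: 0.60 -- a purely utilitarian controller
# would swerve, but choosing that objective is itself an ethical choice.
```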

We might say then,
"Well, let’s not worry about this.

Let’s wait until technology
is fully ready and 100 percent safe."

Suppose that we can indeed
eliminate 90 percent of those accidents,

or even 99 percent in the next 10 years.

What if eliminating
the last one percent of accidents

requires 50 more years of research?

Should we not adopt the technology?

That’s 60 million people
dead in car accidents

if we maintain the current rate.

So the point is,

waiting for full safety is also a choice,

and it also involves trade-offs.

People online on social media
have been coming up with all sorts of ways

to not think about this problem.

One person suggested
the car should just swerve somehow

in between the pedestrians –

(Laughter)

and the bystander.

Of course if that’s what the car can do,
that’s what the car should do.

We’re interested in scenarios
in which this is not possible.

And my personal favorite
was a suggestion by a blogger

to have an eject button in the car
that you press –

(Laughter)

just before the car self-destructs.

(Laughter)

So if we acknowledge that cars
will have to make trade-offs on the road,

how do we think about those trade-offs,

and how do we decide?

Well, maybe we should run a survey
to find out what society wants,

because ultimately,

regulations and the law
are a reflection of societal values.

So this is what we did.

With my collaborators,

Jean-François Bonnefon and Azim Shariff,

we ran a survey

in which we presented people
with these types of scenarios.

We gave them two options
inspired by two philosophers:

Jeremy Bentham and Immanuel Kant.

Bentham says the car
should follow utilitarian ethics:

it should take the action
that will minimize total harm –

even if that action will kill a bystander

and even if that action
will kill the passenger.

Immanuel Kant says the car
should follow duty-bound principles,

like “Thou shalt not kill.”

So you should not take an action
that explicitly harms a human being,

and you should let the car take its course

even if that’s going to harm more people.

What do you think?

Bentham or Kant?

Here’s what we found.

Most people sided with Bentham.

So it seems that people
want cars to be utilitarian,

minimize total harm,

and that’s what we should all do.

Problem solved.

But there is a little catch.

When we asked people
whether they would purchase such cars,

they said, “Absolutely not.”

(Laughter)

They would like to buy cars
that protect them at all costs,

but they want everybody else
to buy cars that minimize harm.

(Laughter)

We’ve seen this problem before.

It’s called a social dilemma.

And to understand the social dilemma,

we have to go a little bit
back in history.

In the 1800s,

English economist William Forster Lloyd
published a pamphlet

which describes the following scenario.

You have a group of farmers –

English farmers –

who are sharing a common land
for their sheep to graze.

Now, if each farmer
brings a certain number of sheep –

let’s say three sheep –

the land will be rejuvenated,

the farmers are happy,

the sheep are happy,

everything is good.

Now, if one farmer brings one extra sheep,

that farmer will do slightly better,
and no one else will be harmed.

But if every farmer made
that individually rational decision,

the land will be overrun,
and it will be depleted

to the detriment of all the farmers,

and of course,
to the detriment of the sheep.

We see this problem in many places:

in the difficulty of managing overfishing,

or in reducing carbon emissions
to mitigate climate change.

When it comes to the regulation
of driverless cars,

the common land now
is basically public safety –

that’s the common good –

and the farmers are the passengers

or the car owners who are choosing
to ride in those cars.

And by making the individually
rational choice

of prioritizing their own safety,

they may collectively be
diminishing the common good,

which is minimizing total harm.

It’s called the tragedy of the commons,

traditionally,

but I think in the case
of driverless cars,

the problem may be
a little bit more insidious

because there is not necessarily
an individual human being

making those decisions.

So car manufacturers
may simply program cars

that will maximize safety
for their clients,

and those cars may learn
automatically on their own

that doing so requires slightly
increasing risk for pedestrians.

So to use the sheep metaphor,

it’s like we now have electric sheep
that have a mind of their own.

(Laughter)

And they may go and graze
even if the farmer doesn’t know it.

So this is what we may call
the tragedy of the algorithmic commons,

and it offers new types of challenges.

Typically, traditionally,

we solve these types
of social dilemmas using regulation,

so either governments
or communities get together,

and they decide collectively
what kind of outcome they want

and what sort of constraints
on individual behavior

they need to implement.

And then using monitoring and enforcement,

they can make sure
that the public good is preserved.

So why don’t we just,

as regulators,

require that all cars minimize harm?

After all, this is
what people say they want.

And more importantly,

I can be sure that as an individual,

if I buy a car that may
sacrifice me in a very rare case,

I’m not the only sucker doing that

while everybody else
enjoys unconditional protection.

In our survey, we did ask people
whether they would support regulation

and here’s what we found.

First of all, people
said no to regulation;

and second, they said,

"Well if you regulate cars to do this
and to minimize total harm,

I will not buy those cars."

So ironically,

by regulating cars to minimize harm,

we may actually end up with more harm

because people may not
opt into the safer technology

even if it’s much safer
than human drivers.

I don’t have the final
answer to this riddle,

but I think as a starting point,

we need society to come together

to decide what trade-offs
we are comfortable with

and to come up with ways
in which we can enforce those trade-offs.

As a starting point,
my brilliant students,

Edmond Awad and Sohan Dsouza,

built the Moral Machine website,

which generates random scenarios at you –

basically a bunch
of random dilemmas in a sequence

where you have to choose what
the car should do in a given scenario.

And we vary the ages and even
the species of the different victims.

So far we’ve collected
over five million decisions

by over one million people worldwide

from the website.

And this is helping us
form an early picture

of what trade-offs
people are comfortable with

and what matters to them –

even across cultures.

But more importantly,

doing this exercise
is helping people recognize

the difficulty of making those choices

and that the regulators
are tasked with impossible choices.

And maybe this will help us as a society
understand the kinds of trade-offs

that will be implemented
ultimately in regulation.

And indeed, I was very happy to hear

that the first set of regulations

that came from
the Department of Transport –

announced last week –

included a 15-point checklist
for all carmakers to provide,

and number 14 was ethical consideration –

how are you going to deal with that.

We also have people
reflect on their own decisions

by giving them summaries
of what they chose.

I’ll give you one example –

I’m just going to warn you
that this is not your typical example,

your typical user.

This is the most sacrificed and the most
saved character for this person.

(Laughter)

Some of you may agree with him,

or her, we don’t know.

But this person also seems to slightly
prefer passengers over pedestrians

in their choices

and is very happy to punish jaywalking.

(Laughter)

So let’s wrap up.

We started with the question –
let’s call it the ethical dilemma –

of what the car should do
in a specific scenario:

swerve or stay?

But then we realized
that the problem was a different one.

It was the problem of how to get
society to agree on and enforce

the trade-offs they’re comfortable with.

It’s a social dilemma.

In the 1940s, Isaac Asimov
wrote his famous laws of robotics –

the three laws of robotics.

A robot may not harm a human being,

a robot may not disobey a human being,

and a robot may not allow
itself to come to harm –

in this order of importance.

But after 40 years or so

and after so many stories
pushing these laws to the limit,

Asimov introduced the zeroth law

which takes precedence above all,

and it’s that a robot
may not harm humanity as a whole.

I don’t know what this means
in the context of driverless cars

or any specific situation,

and I don’t know how we can implement it,

but I think that by recognizing

that the regulation of driverless cars
is not only a technological problem

but also a societal cooperation problem,

I hope that we can at least begin
to ask the right questions.

Thank you.

(Applause)

Summary

In Iyad Rahwan’s TED Talk, he delves into the ethical dilemmas surrounding the advent of driverless car technology. He begins by highlighting the potential of this technology to significantly reduce traffic accidents caused by human error, thereby saving countless lives. However, he poses a thought-provoking scenario: if faced with a situation where a driverless car must choose between different courses of action, such as swerving to avoid pedestrians at the risk of harming the passenger, who should decide and how? Rahwan draws parallels to the classic philosophical trolley problem to illustrate the complex ethical considerations at play in programming autonomous vehicles.

Rahwan emphasizes the societal implications of adopting driverless car technology and the challenges it poses. He discusses the tension between individual preferences for safety and societal values, pointing out the paradox where individuals may support utilitarian ethics for autonomous vehicles while prioritizing their own safety when it comes to purchasing decisions. This dilemma reflects the classic tragedy of the commons, where individual rational choices may lead to suboptimal outcomes for society as a whole. Rahwan argues that addressing these challenges requires collective decision-making and regulation informed by societal values.

To explore societal values and preferences regarding the ethical dilemmas of driverless cars, Rahwan and his collaborators conducted surveys and developed the Moral Machine website. Through this platform, they collected data on people’s choices in hypothetical scenarios, revealing diverse perspectives and priorities across cultures. Rahwan underscores the importance of understanding and reconciling these differences in shaping regulations for autonomous vehicles. He concludes by advocating for ongoing dialogue and cooperation to navigate the ethical complexities of driverless car technology, ultimately aiming to ensure that societal values are reflected in its implementation.

Postscript

Written in Shanghai on May 6, 2024, at 18:43.
