How to Keep AI (Artificial Intelligence) from Learning Human Bias

Author: FLINGH | Published 2019-03-27 18:42

How many decisions have been made about you today, or this week, or this year, by artificial intelligence?

I build AI for a living so, full disclosure, I’m kind of a nerd. And because I’m kind of a nerd, whenever some new news story comes out about artificial intelligence stealing all our jobs, or robots getting citizenship of an actual country, I’m the person my friends and followers message, freaking out about the future.

We see this everywhere: this media panic that our robot overlords are taking over. We could blame Hollywood for that. But in reality, that’s not the problem we should be focusing on. There is a more pressing danger, a bigger risk with AI, that we need to fix first.

So we are back to this question: how many decisions have been made about you today by AI? And how many of these were based on your gender, your race or your background?

Algorithms are being used all the time to make decisions about who we are and what we want. Some of the women in this room will know what I’m talking about if you’ve been made to sit through those pregnancy test adverts on YouTube like 1,000 times. Or you’ve scrolled past adverts of fertility clinics on your Facebook feed. Or in my case, Indian marriage bureaus.

(Laughter)

But AI isn’t just being used to make decisions about what products we want to buy or which show we want to binge-watch next. I wonder how you’d feel about someone who thought things like this: "A black or Latino person is less likely than a white person to pay off their loan on time." "A person called John makes a better programmer than a person called Mary." "A black man is more likely to be a repeat offender than a white man."

You’re probably thinking, "Wow, that sounds like a pretty sexist, racist person," right?

These are some real decisions that AI has made very recently, based on the biases it has learned from us, from the humans.

AI is being used to help decide whether or not you get that job interview; how much you pay for your car insurance; how good your credit score is; and even what rating you get in your annual performance review. But these decisions are all being filtered through its assumptions about our identity, our race, our gender, our age. How is that happening?

Now, imagine an AI is helping a hiring manager find the next tech leader in the company. So far, the manager has been hiring mostly men. So the AI learns men are more likely to be programmers than women. And it’s a very short leap from there to: men make better programmers than women. We have reinforced our own bias into the AI. And now, it’s screening out female candidates.
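The hiring example above can be sketched in a few lines of Python. Everything here is hypothetical (the data, the threshold, the function names); the point is only that a model which does nothing but tally the manager’s past decisions still ends up scoring candidates by gender.

```python
# A minimal sketch of how an AI picks up hiring bias from skewed history.
# All data below is made up for illustration.
from collections import defaultdict

# Historical decisions: the manager mostly hired men, so the labels are skewed.
past_hires = [
    ("male", True), ("male", True), ("male", True), ("male", True),
    ("female", False), ("female", True), ("male", True), ("female", False),
]

def train(records):
    """Estimate P(hired | gender) from past decisions -- and nothing else."""
    hired = defaultdict(int)
    seen = defaultdict(int)
    for gender, was_hired in records:
        seen[gender] += 1
        hired[gender] += was_hired
    return {g: hired[g] / seen[g] for g in seen}

model = train(past_hires)

def screen(candidate_gender, threshold=0.5):
    """Pass a candidate only if their group's historical hire rate clears the bar."""
    return model[candidate_gender] >= threshold

# The model has memorized the manager's bias, not programming ability:
print(model)            # men score 1.0, women score ~0.33
print(screen("male"))   # True
print(screen("female")) # False -- female candidates are screened out
```

Nothing in the data measures ability; the model has simply replayed who the manager hired before.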

Hang on, if a human hiring manager did that, we’d be outraged, we wouldn’t allow it. This kind of gender discrimination is not OK. And yet somehow, AI has become above the law, because a machine made the decision.

That’s not it. We are also reinforcing our bias in how we interact with AI. How often do you use a voice assistant like Siri, Alexa or even Cortana? They all have two things in common: one, they can never get my name right, and two, they are all female. They are designed to be our obedient servants, turning your lights on and off, ordering your shopping.

You get male AIs too, but they tend to be more high-powered, like IBM Watson, making business decisions, Salesforce Einstein or ROSS, the robot lawyer. So poor robots, even they suffer from sexism in the workplace.

(Laughter)

Think about how these two things combine and affect a kid growing up in today’s world around AI. So they’re doing some research for a school project and they Google images of "CEO." The algorithm shows them results of mostly men. And now they Google "personal assistant." As you can guess, it shows them mostly females. And then they want to put on some music, and maybe order some food, and now they are barking orders at an obedient female voice assistant.

Some of our brightest minds are creating this technology today. Technology that they could have created in any way they wanted. And yet, they have chosen to create it in the style of a 1950s "Mad Men" secretary. Yay!

But OK, don’t worry, this is not going to end with me telling you that we are all heading towards sexist, racist machines running the world. The good news about AI is that it is entirely within our control. We get to teach the right values, the right ethics, to AI. So there are three things we can do.

One, we can be aware of our own biases and the bias in machines around us. Two, we can make sure that diverse teams are building this technology. And three, we have to give it diverse experiences to learn from.
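For point one, awareness can be made concrete. One common heuristic (my addition here, not something from the talk) is the four-fifths rule: if one group’s selection rate falls below 80% of another’s, the system deserves a closer look. A minimal sketch, with made-up decisions:

```python
# A hypothetical bias audit: compare a model's positive outcomes
# (e.g. interview invitations) across groups. The 4/5ths threshold is a
# widely used fairness heuristic, assumed here for illustration.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + ok
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, group_a, group_b):
    """Ratio of the lower selection rate to the higher; < 0.8 flags possible bias."""
    rates = selection_rates(decisions)
    lo, hi = sorted([rates[group_a], rates[group_b]])
    return lo / hi

# Illustrative outcomes: 8 of 10 men selected, 3 of 10 women selected.
decisions = [("men", 1)] * 8 + [("men", 0)] * 2 + [("women", 1)] * 3 + [("women", 0)] * 7
ratio = disparate_impact(decisions, "men", "women")
print(round(ratio, 3))  # 0.375 -- well under 0.8, so this model needs scrutiny
```

A check like this does not prove discrimination on its own, but it turns "be aware of bias" into a number a team can monitor.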

I can talk about the first two from personal experience. When you work in technology and you don’t look like a Mark Zuckerberg or Elon Musk, your life is a little bit difficult, your ability gets questioned.

Here’s just one example. Like most developers, I often join online tech forums and share my knowledge to help others. And I’ve found, when I log on as myself, with my own photo, my own name, I tend to get questions or comments like this: "What makes you think you’re qualified to talk about AI?" "What makes you think you know about machine learning?"

So, as you do, I made a new profile, and this time, instead of my own picture, I chose a cat with a jet pack on it. And I chose a name that did not reveal my gender. You can probably guess where this is going, right? So, this time, I didn’t get any of those patronizing comments about my ability and I was able to actually get some work done. And it sucks, guys.

I’ve been building robots since I was 15, I have a few degrees in computer science, and yet, I had to hide my gender in order for my work to be taken seriously. So what’s going on here? Are men just better at technology than women?

Another study found that when women coders on one platform hid their gender, like myself, their code was accepted four percent more often than men’s. So this is not about the talent. This is about an elitism in AI that says a programmer needs to look like a certain person.

What we really need to do to make AI better is to bring in people from all kinds of backgrounds. We need people who can write and tell stories to help us create personalities of AI. We need people who can solve problems. We need people who face different challenges, and we need people who can tell us what the real issues are that need fixing and help us find ways that technology can actually fix them. Because when people from diverse backgrounds come together, when we build things in the right way, the possibilities are limitless.

And that’s what I want to end by talking to you about: less racist robots, fewer machines that are going to take our jobs, and more about what technology can actually achieve.

So, yes, some of the energy in the world of AI, in the world of technology, is going to be about what ads you see on your stream. But a lot of it is going towards making the world so much better.

Think about a pregnant woman in the Democratic Republic of Congo, who has to walk 17 hours to her nearest rural prenatal clinic to get a checkup. What if she could get a diagnosis on her phone instead?

Or think about what AI could do for those one in three women in South Africa who face domestic violence. If it wasn’t safe to talk out loud, they could get an AI service to raise the alarm, get financial and legal advice.

These are all real examples of projects that people, including myself, are working on right now, using AI.

So, I’m sure in the next couple of days there will be yet another news story about the existential risk, robots taking over and coming for your jobs.

(Laughter)

And when something like that happens, I know I’ll get the same messages worrying about the future. But I feel incredibly positive about this technology.

This is our chance to remake the world into a much more equal place. But to do that, we need to build it the right way from the get-go. We need people of different genders, races, sexualities and backgrounds. We need women to be the makers, and not just the machines who do the makers’ bidding. We need to think very carefully about what we teach machines, what data we give them, so they don’t just repeat our own past mistakes.
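As one hedged illustration of being careful about what data we give machines: a skewed training set can be rebalanced by oversampling the under-represented group before training. The records and helper below are hypothetical, and oversampling is only one of several possible remedies.

```python
# A sketch of dataset rebalancing: make each group appear equally often
# so a model can't simply learn the historical skew. Data is invented.
import random

def rebalance(records, key, seed=0):
    """Oversample minority groups until every group matches the largest one."""
    random.seed(seed)
    groups = {}
    for r in records:
        groups.setdefault(key(r), []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Draw extra samples (with replacement) to fill the gap.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Historical data skewed 90/10 toward one group:
skewed = [("male", "hired")] * 90 + [("female", "hired")] * 10
balanced = rebalance(skewed, key=lambda r: r[0])

counts = {}
for gender, _ in balanced:
    counts[gender] = counts.get(gender, 0) + 1
print(counts)  # {'male': 90, 'female': 90}
```

Rebalancing the inputs does not fix biased labels, which is why the earlier points about awareness and diverse teams still matter.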

So I hope I leave you thinking about two things. First, I hope you leave thinking about bias today. And that the next time you scroll past an advert that assumes you are interested in fertility clinics or online betting websites, you think and remember that the same technology is assuming that a black man will reoffend. Or that a woman is more likely to be a personal assistant than a CEO. And I hope that reminds you that we need to do something about it.

And second, I hope you think about the fact that you don’t need to look a certain way or have a certain background in engineering or technology to create AI, which is going to be a phenomenal force for our future. You don’t need to look like a Mark Zuckerberg, you can look like me.

And it is up to all of us in this room to convince the governments and the corporations to build AI technology for everyone, including the edge cases. And for us all to get education about this phenomenal technology in the future. Because if we do that, then we’ve only just scratched the surface of what we can achieve with AI.


Original link: https://www.haomeiwen.com/subject/ynetbqtx.html