Artificial Intelligence Is Not the Real Threat (TED Talk Video + Transcript)
Talk title:
WE ARE BUILDING A DYSTOPIA JUST TO MAKE PEOPLE CLICK ON ADS
We’re building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren’t even the real threat. What we need to understand is how the powerful might use AI to control us—and what we can do in response.
Full transcript:
So when people voice fears of artificial intelligence, very often, they invoke images of humanoid robots run amok. You know? Terminator? You know, that might be something to consider, but that’s a distant threat. Or, we fret about digital surveillance with metaphors from the past. “1984,” George Orwell’s “1984,” it’s hitting the bestseller lists again. It’s a great book, but it’s not the correct dystopia for the 21st century. What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways. Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent.
Now, artificial intelligence has started bolstering their business as well. And it may seem like artificial intelligence is just the next thing after online ads. It’s not. It’s a jump in category. It’s a whole different world, and it has great potential. It could accelerate our understanding of many areas of study and research. But to paraphrase a famous Hollywood philosopher, “With prodigious potential comes prodigious risk.”
Now let’s look at a basic fact of our digital lives, online ads. Right? We kind of dismiss them. They seem crude, ineffective. We’ve all had the experience of being followed on the web by an ad based on something we searched or read. You know, you look up a pair of boots and for a week, those boots are following you around everywhere you go. Even after you succumb and buy them, they’re still following you around. We’re kind of inured to that kind of basic, cheap manipulation. We roll our eyes and we think, “You know what? These things don’t work.” Except, online, the digital technologies are not just ads. Now, to understand that, let’s think of a physical world example. You know how, at the checkout counters at supermarkets, near the cashier, there’s candy and gum at the eye level of kids? That’s designed to make them whine at their parents just as the parents are about to sort of check out. Now, that’s a persuasion architecture. It’s not nice, but it kind of works. That’s why you see it in every supermarket. Now, in the physical world, such persuasion architectures are kind of limited, because you can only put so many things by the cashier. Right? And the candy and gum, it’s the same for everyone, even though it mostly works only for people who have whiny little humans beside them. In the physical world, we live with those limitations.
In the digital world, though, persuasion architectures can be built at the scale of billions and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone’s phone private screen, so it’s not visible to us. And that’s different. And that’s just one of the basic things that artificial intelligence can do.
Now, let’s take an example. Let’s say you want to sell plane tickets to Vegas. Right? So in the old world, you could think of some demographics to target based on experience and what you can guess. You might try to advertise to, oh, men between the ages of 25 and 35, or people who have a high limit on their credit card, or retired couples. Right? That’s what you would do in the past.
With big data and machine learning, that’s not how it works anymore. So to imagine that, think of all the data that Facebook has on you: every status update you ever typed, every Messenger conversation, every place you logged in from, all your photographs that you uploaded there. If you start typing something and change your mind and delete it, Facebook keeps those and analyzes them, too. Increasingly, it tries to match you with your offline data. It also purchases a lot of data from data brokers. It could be everything from your financial records to a good chunk of your browsing history. Right? In the US, such data is routinely collected, collated and sold. In Europe, they have tougher rules.
So what happens then is, by churning through all that data, these machine-learning algorithms—that’s why they’re called learning algorithms—they learn to understand the characteristics of people who purchased tickets to Vegas before. When they learn this from existing data, they also learn how to apply this to new people. So if they’re presented with a new person, they can classify whether that person is likely to buy a ticket to Vegas or not. Fine. You’re thinking, an offer to buy tickets to Vegas. I can ignore that. But the problem isn’t that. The problem is, we no longer really understand how these complex algorithms work. We don’t understand how they’re doing this categorization. It’s giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers and not anybody who looks at it, even if you have all the data, understands anymore how exactly it’s operating any more than you’d know what I was thinking right now if you were shown a cross section of my brain. It’s like we’re not programming anymore, we’re growing intelligence that we don’t truly understand.
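The talk names no particular model, so the following is only a minimal sketch, assuming a generic supervised classifier: it learns from past ticket buyers, then scores a new person. Every feature name and number below is hypothetical.

```python
# Toy sketch of the kind of classifier described above: learn from past
# Vegas ticket buyers, then score new people. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-user features: [age_scaled, credit_limit_scaled, vegas_searches]
X_train = rng.normal(size=(1000, 3))
# Labels: 1 = bought a ticket before, 0 = did not (synthetic rule plus noise)
y_train = (X_train @ np.array([0.5, 1.0, 2.0]) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score a brand-new person the system has never seen
new_user = np.array([[0.2, 1.5, 0.9]])
print("P(buys a Vegas ticket):", model.predict_proba(new_user)[0, 1])
```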
And these things only work if there’s an enormous amount of data, so they also encourage deep surveillance on all of us so that the machine learning algorithms can work. That’s why Facebook wants to collect all the data it can about you. The algorithms work better.
So let’s push that Vegas example a bit. What if the system that we do not understand was picking up that it’s easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase. Such people tend to become overspenders, compulsive gamblers. They could do this, and you’d have no clue that’s what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, “That’s why I couldn’t publish it.” I was like, “Couldn’t publish what?” He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.
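The researcher's actual method is not described in the talk; as a purely hypothetical sketch of screening text for a signal its builder cannot explain, here is a bag-of-words classifier on invented posts. Even with full access to the fitted weights, nothing tells you what real-world signal they latched onto.

```python
# Purely hypothetical sketch: a bag-of-words text classifier over
# social-media posts. The fitted weights are inspectable numbers,
# but they do not reveal *what* signal the model actually keys on.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training pairs: (post, label); 1 = flagged by the screen
posts = ["feeling unstoppable tonight", "quiet day, made soup",
         "booked three trips at 4am", "watched a movie with the cat"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["booked a trip, feeling unstoppable"]))
```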
Now, the problem isn’t solved if he doesn’t publish it, because there are already companies that are developing this kind of technology, and a lot of the stuff is just off the shelf. This is not very difficult anymore.
Do you ever go on YouTube meaning to watch one video and an hour later you’ve watched 27? You know how YouTube has this column on the right that says, “Up next” and it autoplays something? It’s an algorithm picking what it thinks that you might be interested in and maybe not find on your own. It’s not a human editor. It’s what algorithms do. It picks up on what you have watched and what people like you have watched, and infers that that must be what you’re interested in, what you want more of, and just shows you more. It sounds like a benign and useful feature, except when it isn’t.
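YouTube's recommender is proprietary, so this is only a guess at the shape of the logic the talk describes (what you watched, and what people like you watched), sketched here as item-to-item co-watch counting over invented data:

```python
# Minimal item-to-item co-watch sketch (the real system is proprietary;
# this only illustrates "people like you watched this next"). Data invented.
from collections import Counter
from itertools import combinations

watch_histories = [
    ["rally_speech", "extremism_doc", "extremism_doc_2"],
    ["rally_speech", "extremism_doc"],
    ["cooking_101", "knife_skills"],
]

# Count how often two videos appear in the same user's history
co_watch = Counter()
for history in watch_histories:
    for a, b in combinations(set(history), 2):
        co_watch[(a, b)] += 1
        co_watch[(b, a)] += 1

def up_next(video):
    """Pick the most co-watched follow-up for `video`."""
    candidates = {b: n for (a, b), n in co_watch.items() if a == video}
    return max(candidates, key=candidates.get) if candidates else None

print(up_next("rally_speech"))  # -> "extremism_doc"
```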
So in 2016, I attended rallies of then-candidate Donald Trump to study as a scholar the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me and autoplaying to me white supremacist videos in increasing order of extremism. If I watched one, it served up one even more extreme and autoplayed that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays conspiracy left, and it goes downhill from there.
Well, you might be thinking, this is politics, but it’s not. This isn’t about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube and YouTube recommended and autoplayed a video about being vegan. It’s like you’re never hardcore enough for YouTube.
So what’s going on? Now, YouTube’s algorithm is proprietary, but here’s what I think is going on. The algorithm has figured out that if you can entice people into thinking that you can show them something more hardcore, they’re more likely to stay on the site watching video after video going down that rabbit hole while Google serves them ads. Now, with nobody minding the ethics of the store, these sites can profile people who are Jew haters, who think that Jews are parasites and who have such explicit anti-Semitic content, and let you target them with ads. They can also mobilize algorithms to find for you look-alike audiences, people who do not have such explicit anti-Semitic content on their profile but who the algorithm detects may be susceptible to such messages, and lets you target them with ads, too. Now, this may sound like an implausible example, but this is real. ProPublica investigated this and found that you can indeed do this on Facebook, and Facebook helpfully offered up suggestions on how to broaden that audience. BuzzFeed tried it for Google, and very quickly they found, yep, you can do it on Google, too. And it wasn't even expensive. The ProPublica reporter spent about 30 dollars to target this category.
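How the platforms actually compute look-alike audiences is not public; one plausible reading of "people who look like the seed audience" is a nearest-neighbor search over user feature vectors, as in this synthetic sketch:

```python
# Hypothetical look-alike expansion: rank non-seed users by cosine
# similarity to the centroid of a seed audience. Features are invented.
import numpy as np

rng = np.random.default_rng(1)
users = rng.normal(size=(10000, 16))   # per-user feature vectors
seed = users[:100]                     # the explicit seed audience

centroid = seed.mean(axis=0)
sims = users @ centroid / (
    np.linalg.norm(users, axis=1) * np.linalg.norm(centroid))

# Exclude the seed itself, then take the most similar "look-alikes"
sims[:100] = -np.inf
look_alikes = np.argsort(sims)[-1000:]
print(look_alikes[:10])
```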
So last year, Donald Trump’s social media manager disclosed that they were using Facebook dark posts to demobilize people, not to persuade them, but to convince them not to vote at all. And to do that, they targeted specifically, for example, African-American men in key cities like Philadelphia, and I’m going to read exactly what he said. I’m quoting.
They were using “nonpublic posts whose viewership the campaign controls so that only the people we want to see it see it. We modeled this. It will dramatically affect her ability to turn these people out.”
What's in those dark posts? We have no idea. Facebook won’t tell us.
So Facebook also algorithmically arranges the posts that your friends put on Facebook, or the pages you follow. It doesn’t show you everything chronologically. It puts the order in the way that the algorithm thinks will entice you to stay on the site longer.
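As a minimal sketch of that difference, assuming a made-up per-post engagement score in place of whatever the real model predicts, the chronological and algorithmic orderings of the same feed diverge like this:

```python
# Sketch: chronological feed vs. engagement-ranked feed.
# `predicted_engagement` stands in for a proprietary model's output.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int            # larger = newer
    predicted_engagement: float

feed = [Post("alice", 3, 0.10), Post("bob", 1, 0.90), Post("carol", 2, 0.40)]

chronological = sorted(feed, key=lambda p: p.timestamp, reverse=True)
ranked = sorted(feed, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])  # ['alice', 'carol', 'bob']
print([p.author for p in ranked])         # ['bob', 'carol', 'alice']
```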
Now, this has a lot of consequences. You may be thinking somebody is snubbing you on Facebook. The algorithm may never be showing your post to them. The algorithm is prioritizing some of them and burying the others.
Experiments show that what the algorithm picks to show you can affect your emotions. But that’s not all. It also affects political behavior. So in 2010, in the midterm elections, Facebook did an experiment on 61 million people in the US that was disclosed after the fact. So some people were shown, “Today is election day,” the simpler one, and some people were shown the one with that tiny tweak with those little thumbnails of your friends who clicked on “I voted.” This simple tweak. OK? So the pictures were the only change, and that post shown just once turned out an additional 340,000 voters in that election, according to this research as confirmed by the voter rolls. A fluke? No. Because in 2012, they repeated the same experiment. And that time, that civic message shown just once turned out an additional 270,000 voters. For reference, the 2016 US presidential election was decided by about 100,000 votes. Now, Facebook can also very easily infer what your politics are, even if you’ve never disclosed them on the site. Right? These algorithms can do that quite easily. What if a platform with that kind of power decides to turn out supporters of one candidate over the other? How would we even know about it?
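The scale here is worth checking against the talk's own numbers: a per-impression lift of roughly half a percent, applied across tens of millions of feeds, comfortably exceeds the margin cited for 2016.

```python
# Back-of-the-envelope check on the numbers quoted in the talk.
shown_2010 = 61_000_000      # users in the 2010 experiment
extra_voters_2010 = 340_000  # additional turnout attributed to one post

lift = extra_voters_2010 / shown_2010
print(f"Per-user lift: {lift:.3%}")          # roughly 0.56% per impression

margin_2016 = 100_000        # approximate deciding margin cited for 2016
print(f"Extra voters vs. 2016 margin: {extra_voters_2010 / margin_2016:.1f}x")
```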
Now, we started from someplace seemingly innocuous—online ads following us around—and we’ve landed someplace else. As a public and as citizens, we no longer know if we’re seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we’re just at the beginning stages of this. These algorithms can quite easily infer things like people’s ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender, just from Facebook likes. These algorithms can identify protesters even if their faces are partially concealed. These algorithms may be able to detect people’s sexual orientation just from their dating profile pictures.
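Research in this vein has typically worked from a sparse user-by-like matrix; as a hedged sketch (not the method of any specific study), one can reduce that matrix with SVD and fit a simple model per trait:

```python
# Hedged sketch of trait inference from "likes": reduce a sparse
# user-by-page matrix with SVD, then fit a linear model per trait.
# All data here is synthetic.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
likes = (rng.random((500, 2000)) < 0.01).astype(float)  # user x page matrix
trait = rng.integers(0, 2, size=500)                    # e.g., a binary attribute

components = TruncatedSVD(n_components=50, random_state=0).fit_transform(likes)
model = LogisticRegression(max_iter=1000).fit(components, trait)
print("Train accuracy:", model.score(components, trait))
```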
Now, these are probabilistic guesses, so they’re not going to be 100 percent right, but I don’t see the powerful resisting the temptation to use these technologies just because there are some false positives, which will of course create a whole other layer of problems. Imagine what a state can do with the immense amount of data it has on its citizens. China is already using face detection technology to identify and arrest people. And here’s the tragedy: we’re building this infrastructure of surveillance authoritarianism merely to get people to click on ads. And this won’t be Orwell’s authoritarianism. This isn’t “1984.” Now, if authoritarianism is using overt fear to terrorize us, we’ll all be scared, but we’ll know it, we’ll hate it and we’ll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they’re doing it at scale through our private screens so that we don’t even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider’s web and we may not even know we’re in it.
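The caveat about false positives can be made concrete with base-rate arithmetic: when the targeted trait is rare, even a fairly accurate classifier flags mostly the wrong people.

```python
# Base-rate arithmetic behind "some false positives": with a rare
# trait, most people a fairly accurate classifier flags don't have it.
# All rates below are illustrative assumptions.
prevalence = 0.01            # 1% of users actually have the trait
sensitivity = 0.90           # flagged given they have it
false_positive_rate = 0.05   # flagged given they don't

flagged_true = prevalence * sensitivity
flagged_false = (1 - prevalence) * false_positive_rate

precision = flagged_true / (flagged_true + flagged_false)
print(f"Share of flagged users who really have the trait: {precision:.1%}")
# -> about 15%; the other ~85% are false positives
```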
So Facebook’s market capitalization is approaching half a trillion dollars. It’s because it works great as a persuasion architecture. But the structure of that architecture is the same whether you’re selling shoes or whether you’re selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that’s what’s got to change.
Now, don’t get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I’ve written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it’s not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it’s not the intent or the statements people in technology make that matter, it’s the structures and business models they’re building. And that’s the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don’t work on the site, it doesn’t work as a persuasion architecture, or its power of influence is of great concern. It’s either one or the other. It’s similar for Google, too.
So what can we do? This needs to change. Now, I can’t offer a simple recipe, because we need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to face and try to deal with the lack of transparency created by the proprietary algorithms, the structural challenge of machine learning’s opacity, all this indiscriminate data that’s being collected about us. We have a big task in front of us. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won’t be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don’t see how we can postpone this conversation anymore. These structures are organizing how we function and they’re controlling what we can and we cannot do. And many of these ad-financed platforms, they boast that they’re free. In this context, it means that we are the product that’s being sold. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.
So to go back to that Hollywood paraphrase, we do want the prodigious potential of artificial intelligence and digital technology to blossom, but for that, we must face this prodigious menace, open-eyed and now.
Thank you.