懂你英语 Level 7 Unit 2 Part 3【On Ma

Author: 流非沫 | Published 2019-07-21 18:36

    TED Talk >> Zeynep Tufekci: Machine intelligence makes human morals more important



    Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture?




    A sullen note from an acquaintance? An important but difficult news item? There's no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.

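    The ranking Tufekci describes boils down to scoring each post by its predicted engagement. Here is a minimal sketch of that idea in Python; the Post fields and the weights are hypothetical illustrations, not Facebook's actual model.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares and comments are assumed to signal
    # stronger engagement than likes.
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # Most "engaging" posts first -- importance plays no role in the score.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("friend_a", "Baby photo!", likes=120, shares=5, comments=30),
    Post("friend_b", "Protests in Ferguson tonight", likes=8, shares=2, comments=1),
])
print([p.text for p in feed])  # the baby picture outranks the hard news
```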



    In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem.

    The story of Ferguson wasn't algorithm-friendly. It's not "likable." Who's going to click on "like?" It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this. Instead, that week, Facebook's algorithm highlighted this, which is the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.

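    The suppression she observed is a feedback loop: a story that draws few likes and comments gets shown to fewer people, which yields even fewer likes and comments. A toy simulation of that dynamic, with made-up engagement rates and audience sizes:

```python
def simulate_reach(engagement_rate: float, audience: int = 1000, rounds: int = 5) -> float:
    """Toy feedback loop: next round's reach depends on this round's engagement."""
    reach = float(audience)
    total_engagement = 0.0
    for _ in range(rounds):
        engagement = reach * engagement_rate   # likes/comments this round
        total_engagement += engagement
        # The algorithm widens or narrows distribution based on engagement.
        reach = min(float(audience), engagement * 10)
    return total_engagement

print(simulate_reach(0.20))  # "likable" post: reach stays saturated
print(simulate_reach(0.02))  # hard-news post: reach collapses each round
```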


    Question

    1. What is a possible danger of using algorithms to filter news?
      > Important social issues could be ignored
    2. How is news ranked by Facebook's news feed algorithm?
      > according to the likelihood of user engagement
    3. When you protest something,
      > you strongly object to it.



    Now, finally, these systems can also be wrong in ways that don't resemble human systems. Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: "Its largest airport is named for a World War II hero, its second-largest for a World War II battle."




    Chicago. The two humans got it right. Watson, on the other hand, answered "Toronto" -- for a US city category! The impressive system also made an error that a human would never make, a second-grader wouldn't make.




    Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for. It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine.




    In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's "sell" algorithm wiped a trillion dollars of value in 36 minutes. I don't even want to think what "error" means in the context of lethal autonomous weapons.

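    The "feedback loop" in that crash was algorithmic sellers reacting to a price drop with more selling, which deepened the very drop they were reacting to. A toy model of the runaway dynamic, with invented parameters:

```python
def flash_crash(price: float = 100.0, dip: float = 0.02, gain: float = 1.5, rounds: int = 10) -> None:
    """Each round, selling triggered by the last drop causes a new drop
    `gain` times as large; gain > 1 makes the loop run away."""
    drop = dip
    for minute in range(rounds):
        price *= 1.0 - drop
        print(f"minute {minute}: price {price:.2f}")
        drop = min(gain * drop, 0.5)  # cap any single round's drop at 50%

flash_crash()  # a 2% dip cascades into a collapse within "minutes"
```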



    So yes, humans have always made biases. Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines.




    Artificial intelligence does not give us a "Get out of ethics free card."



    Question

    1. Why is Wall Street's algorithm error a cause for concern?
      > Algorithm errors can have serious consequences.
    2. What does Tufekci mean by "artificial intelligence does not give us a 'get out of ethics free' card"?
      > Decisions made by AI don't free people from moral responsibilities.
    3. To wipe the floor with someone is
      > to defeat them easily



    Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human.




    Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics.



    Question

    1. How does Tufekci end her presentation?
      > by emphasizing the importance of human values and ethics
    2. To abdicate responsibility means to
      > fail or refuse to perform a duty

    Game Grammar

    1. Which of the following words share a similar meaning?
      > ethical, moral, righteous
    2. How does Facebook decide which news to show?
      > likely user engagement
    3. A subjective judgement
      > is based on personal opinions rather than on facts
