流利说-L7-U2-P3 Learning


作者: sindy00 | 发表于 2021-02-27 07:51

    On Machine Intelligence 2(4’58)

    —— Zeynep Tufekci


    Machine intelligence is here.

    机器智能来了。

    We're now using computation to make all sorts of decisions, but also new kinds of decisions.

    我们现在用计算来做各种各样的决策,而且还包括各种新型的决策。

    We're asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden.

    我们向计算提出一些没有单一正确答案的问题,这些问题是主观的、开放式的、带有价值取向的。

    We're asking questions like, who should the company hire?

    我们咨询类似问题:“公司应该雇佣谁?"

    Which update from which friend should you be shown?

    你的哪位朋友的更新应该被你看到?

    Which convict(定罪 有罪) is more likely to reoffend(再犯罪 再犯法)?

    哪个罪犯更有可能再次犯罪?

    Which new item or movie should be recommended to people?

    哪些新的物品或电影应该被推荐给人们?

    Look, yes, we've been using computers for a while, but this is different.

    看,是的,我们使用计算机已经有一段时间了,但是这是不同的。

    This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon.

    这是一个历史性的转折,因为对于这类主观决策,我们无法像在驾驶飞机、建造桥梁、登月等事情上那样为计算设定锚点。

    Are our airplanes safer?

    我们的飞机更安全了吗?

    Did the bridges sway and fall?

    桥会摇晃倒塌吗?

    There we have agreed-upon(公认的), fairly clear benchmarks(基准), and we have laws of nature to guide us.

    在那些领域,我们有公认的、相当清晰的基准,我们还有自然法则来引导我们。

    We have no such anchors and benchmarks for decisions in messy human affairs.

    而在混乱的人类事务决策上,我们没有这样的锚点和基准。


    1. What does Tufekci mean by historical twist?

    ...Computers are being used to solve subjective problems for the first time in history.

    2. With the development of machine intelligence,...

    ....algorithms(算法 计算程序) are now being used to answer subjective questions(主观题).

    3. If something reflects our personal values, it is ...value-laden(受主观价值影响的 主观的).

    4.排序

    1)This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon.

    2)Are our airplanes safer? Did the bridges sway and fall?

    3)There we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us.

    4)We have no such anchors and benchmarks for decisions in messy human affairs.


    To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex.

    让事情变得更复杂的是,我们的软件变得越来越强大,但它也变得越来越不透明、越来越复杂。

    Recently,  in the past decade, complex algorithms(算法 计算程序 ) have made great strides(大步走 大步发展).

    最近,在过去的十年里,复杂的算法取得了长足发展。

    They can recognize human faces.

    它们能够识别人脸。

    They can decipher(破译 辨认) handwriting.

    它们能够辨认笔迹。

    They can detect(发现 查明) credit card fraud(欺骗罪 欺诈罪) and block spam(垃圾邮件) and they can translate between languages, they can detect tumors in medical imaging.

    它们能够发现信用卡欺诈、拦截垃圾邮件,它们可以在语言之间进行翻译,它们可以在医学影像中发现肿瘤。

    They can beat humans in chess and Go.

    它们可以在国际象棋和围棋上击败人类。

    Much of this progress comes from a method called machine learning.

    这其中的很多进展来自一种叫作“机器学习”的方法。

    Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking(需细心的 辛苦的 需专注的) instructions. 

    机器学习不同于传统编程。传统编程需要你给计算机详细、精确、细致的指令。

    It's more like, you take the system and you feed it lots of data, including unstructured(结构凌乱的 无条理的 紊乱的) data, like the kind we generate in our digital(数字式的 数字显示的) lives.

    它更像是:你拿来这个系统,给它投喂大量数据,包括非结构化数据,比如我们在数字生活中产生的那种。

    And the system learns by churning(剧烈翻滚的 湍急的) through this data.

    系统通过翻查这些数据来学习。

    And also, crucially, these systems don't operate under a single-answer logic.

    而且,至关重要的是,这些系统并不按单一答案的逻辑运行。

    They don't produce a simple answer; it's more probabilistic(概率性的): "This one is probably more like what you're looking for."

    它们给出的不是一个简单的答案,而是更概率性的:“这个可能更像是你在找的。”
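    The contrast the talk draws between exact instructions and probabilistic, data-driven answers can be sketched in a few lines of code. This is only a toy illustration; the data, function names, and scoring method are invented here, not taken from the talk:

```python
# Traditional programming: detailed, exact instructions with one answer.
def is_spam_rule_based(message):
    # An explicit, hand-written rule -> a single yes/no answer.
    return "free money" in message.lower()

# Machine learning (toy version): "feed it lots of data" and let the
# system estimate, from examples, how spam-like a new message is.
training_data = [
    ("win free money now", True),   # labeled spam
    ("free money offer", True),     # labeled spam
    ("lunch at noon?", False),      # labeled not spam
    ("meeting moved to 4pm", False),
]

def spam_probability(message):
    # Score a message by how much its words overlap with past spam
    # versus all past messages -- a probabilistic output, not yes/no.
    words = set(message.lower().split())
    spam_hits = total_hits = 0
    for text, label in training_data:
        overlap = len(words & set(text.lower().split()))
        total_hits += overlap
        if label:
            spam_hits += overlap
    # "This one is probably more like what you're looking for."
    return spam_hits / total_hits if total_hits else 0.5

print(is_spam_rule_based("Free money inside"))
print(spam_probability("free money tonight"))
```

    The rule-based function always returns the same single answer for the same input; the learned score shifts whenever the training data changes, which is exactly why its behavior is harder to inspect.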

    Now, the upside(好的一面 正面的) is: This method is really powerful.

    现在,好的一面是:这个方法真的很强大。

    The head of Google's AI systems called it "the unreasonable(不合理的) effectiveness of data."

    谷歌AI系统的负责人称之为“数据不合理的有效性”。

    The downside is, we don't really understand what the system learned. In fact that's its power.

    不好的一面是,我们并不真正了解这些系统都学了什么。事实上这就是它的力量之所在。

    This is less like giving instructions to a computer, it's more like training a puppy-machine-creature we don't really understand or control.

    这不太像是给计算机下指令,更像是在训练一个我们并不真正了解、也无法控制的小狗般的机器生物。

    So this is our problem.

    所以这就是我们的问题。

    It's a problem when this artificial intelligence system gets things wrong.

    当这个人工智能系统出错时,这是一个问题。

    It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem.

    当它得出正确答案时,这也是一个问题,因为当问题是主观问题时,我们甚至分不清哪个是哪个。

    We don't know what this thing is thinking...

    我们不知道这个东西在想什么……


    1. Why is machine intelligence unpredictable?

    ..it is often unclear how it comes to its conclusions.

    2. What is one characteristic of traditional programming?

    .. it requires explicit instructions.

    3. If a method or argument is probabilistic, it is..

    ....based on what is most likely to be true.


    So, consider a hiring algorithm -- a system used to hire people, right, using a machine-learning system.

    所以,考虑一下招聘算法,即一个使用机器学习系统来招聘员工的系统。

    Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company.

    这样的系统会用以前员工的数据来训练,并被指示去寻找和雇佣像公司里现有的优秀员工那样的人。
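    The hiring system described above can be sketched as a toy model. All of the data, features, and names below are invented for illustration; this is not the actual system the speaker saw:

```python
# Toy "hiring algorithm": trained on previous employees' data, then asked
# whether a candidate looks like the existing high performers.
past_employees = [
    # (years_experience, projects_per_year, was_high_performer)
    (5, 4, True),
    (7, 5, True),
    (2, 1, False),
    (3, 2, False),
]

def looks_like_high_performer(years, projects):
    # 1-nearest-neighbour: copy the label of the most similar past employee.
    # Note the hazard the talk is building toward: the model reproduces
    # whatever pattern, fair or not, is baked into the historical data.
    def distance(emp):
        return abs(emp[0] - years) + abs(emp[1] - projects)
    nearest = min(past_employees, key=distance)
    return nearest[2]

print(looks_like_high_performer(6, 4))  # resembles past high performers
print(looks_like_high_performer(2, 2))  # resembles past low performers
```

    Nothing in the model says "prefer people like the current staff"; that preference comes entirely from the training data, which is why such a system can sound objective while quietly inheriting past bias.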

    Sounds good.

    听起来不错。

    I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring.

    我曾参加一个会议,与会的是在招聘中使用这类系统的人力资源经理和高管,都是高层人士。

    They were super excited.

    他们非常兴奋。

    They thought that this would make hiring more objective(客观的 就事论事的 不带个人情感的), less biased, and give women and minorities a better shot against biased human managers.

    他们认为这会使招聘更客观、更少偏见,并让女性和少数族裔在面对有偏见的人类经理时有更好的机会。

    Look, human hiring is biased.

    看,人类招聘是存在偏见的。

    I know, I mean, in one of my early jobs as a programmer,

    我知道,我的意思是,在我早期做程序员时,

    my immediate manager(直接主管) would sometimes come down to where I was, really early in the morning or really late in the afternoon, and she'd say," Zeynep, let's go to lunch."

    我的直接主管有时会来到我工作的地方,要么一大早,要么下午很晚的时候,她会说:“泽伊内普,我们去吃午饭吧。”

    I'd be puzzled by the weird timing. It's 4pm. Lunch?

    我被这奇怪的时间点给搞糊涂了,下午4点,吃午饭?

    I was broke, so free lunch. I always went.

    我那时很穷,所以,免费午餐,我总是会去。

    I later realized what was happening.

    后来我认识到发生了什么。

    My immediate manager had not confessed to her higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work.

    我的直接主管没有向她的上级坦白:他们为一项重要工作雇佣的程序员,是一个穿牛仔裤和运动鞋上班的少女。

    I was doing a good job, I just looked wrong and was the wrong age and gender.

    我工作做得很好,只是我的样子不对,年龄和性别也不对。

    So hiring in a gender- and race-blind way certainly sounds good to me.

    所以,以一种不看性别和种族的方式招聘,对我来说当然听起来不错。

    But with these systems, it is more complicated, and here's why.

    但是,有了这些系统,事情变得更复杂了,原因如下。

    Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things.

    当下,计算机系统能够从你的数字足迹中推断出关于你的各种事情,即使你从未公开过那些事。

    They can infer your sexual orientation, your personality traits, your political leanings.

    它们能推断出你的性取向、你的性格特征、你的政治倾向。

    They have predictive power with high levels of accuracy.

    它们拥有高度精准的预测能力。

    Remember, for things you haven't even disclosed. This is inference.

    记住,是对那些你甚至没有公开的事情。这就是推断。


    1. What does Tufekci's personal experience with her immediate manager suggest?

    ...Human bias is a problem in the workplace.

    2. To make an inference means...

    ...to form an opinion based on the available information.

    3. 选词填空

    Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company.

    4. 听复述

    Recently, in the past decade, complex algorithms have made great strides.

    5. The downside is, we don't really understand what the system learned.

    6. Hiring in a gender-and race-blind way certainly sounds good to me.

          本文链接:https://www.haomeiwen.com/subject/musbfltx.html