
The Economist Intensive Reading 54: Humans may not always grasp why AIs act

Author: VictorLiNZ | Published 2018-02-23 03:54

    Humans may not always grasp why AIs act. Don’t panic

    Humans are inscrutable too. Existing rules and regulations can apply to artificial intelligence

    THERE is an old joke among pilots that says the ideal flight crew is a computer, a pilot and a dog. The computer’s job is to fly the plane. The pilot is there to feed the dog. And the dog’s job is to bite the pilot if he tries to touch the computer.

    Handing complicated tasks to computers is not new. But a recent spurt of progress in machine learning, a subfield of artificial intelligence (AI), has enabled computers to tackle many problems which were previously beyond them. The result has been an AI boom, with computers moving into everything from medical diagnosis and insurance to self-driving cars.

    There is a snag, though. Machine learning works by giving computers the ability to train themselves, which adapts their programming to the task at hand. People struggle to understand exactly how those self-written programs do what they do (see article). When algorithms are handling trivial tasks, such as playing chess or recommending a film to watch, this “black box” problem can be safely ignored. When they are deciding who gets a loan, whether to grant parole or how to steer a car through a crowded city, it is potentially harmful. And when things go wrong—as, even with the best system, they inevitably will—then customers, regulators and the courts will want to know why.

    spurt: a sudden burst or short period of increased activity (here, a spurt of progress)

    snag: an unexpected problem or difficulty

    parole: permission given to a prisoner to leave prison before the end of a sentence usually as a reward for behaving well

    When AI is handling tasks such as playing chess or recommending films, the principles behind how it operates can be ignored; we do not need to know how it reaches its decisions. But when AI is deciding who gets a loan, who is granted parole, or how to steer a car through a crowded city, not understanding the reasoning behind it could lead to disaster!
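    The "black box" point is easy to see in miniature. Below is a hedged sketch, not from the article, assuming Python with numpy and scikit-learn installed and entirely fabricated loan data: a model trains itself on examples and then makes a decision that no single human-written line of code explains.

```python
# Minimal sketch of the "black box" problem: a model learns its own
# decision rules from data, and those rules are never written down in
# human-readable form. Hypothetical loan-approval data for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic applicants: [income, debt, years_employed]
X = rng.normal(size=(1000, 3))
# Hidden "true" rule the model must rediscover from examples alone
y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

applicant = np.array([[0.1, 0.4, -0.2]])
print("approve loan:", bool(model.predict(applicant)[0]))

# The "program" the model wrote for itself is a hundred trees with
# thousands of split thresholds; no single line explains this decision.
print("learned parameters:",
      sum(t.tree_.node_count for t in model.estimators_), "tree nodes")
```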


    For some people this is a reason to hold back AI. France’s digital-economy minister, Mounir Mahjoubi, has said that the government should not use any algorithm whose decisions cannot be explained. But that is an overreaction. Despite their futuristic sheen, the difficulties posed by clever computers are not unprecedented. Society already has plenty of experience dealing with problematic black boxes; the most common are called human beings. Adding new ones will pose a challenge, but not an insuperable one. In response to the flaws in humans, society has evolved a series of workable coping mechanisms, called laws, rules and regulations. With a little tinkering, many of these can be applied to machines as well.

    Be open-minded

    Start with human beings. They are even harder to understand than a computer program. When scientists peer inside their heads, using expensive brain-scanning machines, they cannot make sense of what they see. And although humans can give explanations for their own behaviour, they are not always accurate. It is not just that people lie and dissemble. Even honest humans have only limited access to what is going on in their subconscious mind. The explanations they offer are more like retrospective rationalisations than summaries of all the complex processing their brains are doing. Machine learning itself demonstrates this. If people could explain their own patterns of thought, they could program machines to replicate them directly, instead of having to get them to teach themselves through the trial and error of machine learning.

    sheen: a soft, smooth, shiny quality 

    Some people think government agencies should not use AI or algorithms to make decisions because we do not know how they work behind the scenes. But humanity has handled plenty of situations like this before: the human brain itself is a black box, and we developed laws and regulations to govern our own behaviour. In the same way, we can set rules for AI so that it "behaves like a good robot".

    dissemble: to hide your true feelings, opinions, etc. 

    dissembling (example): such dissembling from a politician is nothing new
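    The contrast the paragraph above draws between explicit programming and trial-and-error learning can be shown in a few lines. This is an illustrative sketch in plain Python, not anything from the article; the data and learning rate are made up.

```python
# Rules we can articulate get programmed directly; rules we cannot
# articulate get learned by trial and error.

def handwritten_rule(income, debt):
    # A rule a human can state, so it can be coded directly.
    return income > 2 * debt

# Trial-and-error learning: start with arbitrary weights and nudge them
# whenever a prediction is wrong (a one-neuron perceptron).
examples = [((3.0, 1.0), 1), ((1.0, 2.0), 0), ((4.0, 1.5), 1), ((0.5, 1.0), 0)]
w_income, w_debt = 0.0, 0.0

for _ in range(20):  # repeated passes over the data = the "trial and error"
    for (income, debt), label in examples:
        guess = 1 if w_income * income + w_debt * debt > 0 else 0
        error = label - guess
        w_income += 0.1 * error * income
        w_debt += 0.1 * error * debt

print(f"learned weights: income={w_income:.2f}, debt={w_debt:.2f}")
# The learned weights encode a decision rule that nobody typed in.
```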


    Away from such lofty philosophy, humans have worked with computers on complex tasks for decades. As well as flying aeroplanes, computers watch bank accounts for fraud and adjudicate insurance claims. One lesson from such applications is that, wherever possible, people should supervise the machines. For all the jokes, pilots are vital in case something happens that is beyond the scope of artificial intelligence. As computers spread, companies and governments should ensure the first line of defence is a real person who can overrule the algorithms if necessary.

    Even when people are not “in the loop”, as with an entirely self-driving car, today’s liability laws can help. Courts may struggle to assign blame when neither an algorithm nor its programmer can properly account for its actions. But it is not necessary to know exactly what went on in a brain—of either the silicon or biological variety—to decide whether an accident could have been avoided. Instead courts can ask the familiar question of whether a different course of action might have reasonably prevented the mistake. If so, liability could fall back onto whoever sold the product or runs the system.

    adjudicate: to make an official decision about who is right in a dispute

    Humans and machines have been working together harmoniously for decades, from flying aeroplanes to monitoring bank accounts for fraud. Wherever possible, a human must stay in control of the AI; the pilot's job is to step in and take over whenever something happens that the AI cannot handle.
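    A minimal sketch of what "a real person who can overrule the algorithms" might look like in code. The function names and the confidence threshold here are hypothetical, chosen only to illustrate the escalation pattern.

```python
# Human first line of defence: the algorithm decides only when it is
# confident; everything else is escalated to a person who can overrule it.

def decide(model_score: float, confidence: float,
           human_review, threshold: float = 0.9) -> str:
    """Return the model's decision, or defer to a human reviewer."""
    if confidence >= threshold:
        return "approve" if model_score > 0.5 else "decline"
    # Below the confidence bar, a real person confirms or overrules.
    return human_review(model_score)

def cautious_reviewer(score: float) -> str:
    # Stand-in for a human case handler.
    return "approve" if score > 0.7 else "decline"

print(decide(0.8, confidence=0.95, human_review=cautious_reviewer))  # machine decides
print(decide(0.8, confidence=0.60, human_review=cautious_reviewer))  # human decides
```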


    There are other worries. A machine trained on old data might struggle with new circumstances, such as changing cultural attitudes. There are examples of algorithms which, after being trained by people, end up discriminating over race and sex. But the choice is not between prejudiced algorithms and fair-minded humans. It is between biased humans and the biased machines they create. A racist human judge may go uncorrected for years. An algorithm that advises judges might be applied to thousands of cases each year. That will throw off so much data that biases can rapidly be spotted and fixed.

    AI is bound to suffer some troubles—how could it not? But it also promises extraordinary benefits and the difficulties it poses are not unprecedented. People should look to the data, as machines do. Regulators should start with a light touch and demand rapid fixes when things go wrong. If the new black boxes prove tricky, there will be time to toughen the rules.

    Another worry is that a machine trained on old data will struggle with new circumstances, such as constantly shifting social attitudes. There are real examples of algorithms, designed and trained by humans, that ended up discriminating by race and sex!
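    The article's claim that an algorithm's decisions "throw off so much data that biases can rapidly be spotted and fixed" can be illustrated with a simple audit over a decision log. The log below is fabricated and the 20-percentage-point alarm threshold is an arbitrary choice for illustration.

```python
# Auditing an algorithm's decision log for group-level disparity:
# compare approval rates across groups and flag large gaps.
from collections import defaultdict

decision_log = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for record in decision_log:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
print("approval rate by group:", rates)

# A large gap is a red flag to investigate and fix; a biased human
# judge leaves no such audit trail.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("warning: disparity exceeds 20 percentage points")
```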

    AI is bound to run into some problems; how could it not? But those problems are not unprecedented. Spot them early and fix them early, and if the new black boxes prove tricky in the end, impose stricter rules!

    Takeaway: AI is the next big wave. Find a way into the industry now, dig deep, and you will be standing right at the next frontier!

    --------------------------------------------------------------------------------------------------------------------

    Results

    Lexile® Measure: 1000L - 1100L

    Mean Sentence Length: 15.26

    Mean Log Word Frequency: 3.33

    Word Count: 824

    This article's Lexile measure is 1000-1100L, suitable for first- or second-year English majors. It is about the easiest The Economist gets; I do not think I have ever seen a piece below 1000!

    I have been reading The Economist on a Kindle, on and off, for three years. Going from stumbling through articles at first to finishing them fairly smoothly now, the progress has been huge. I recommend getting one!
