W3-D1 Limiting the downsides of

Author: 罗禹 | Published 2018-04-16 22:33, read 8 times

    Idiomatic Expressions

    1. The latest report on the potentially malicious uses of artificial intelligence reads like a pitch for the next series of the dystopian TV show Black Mirror.
      [malicious] intending or intended to do harm
      [pitch] a persuasive sales or promotional presentation
      [dystopian] depicting a nightmarish society (the opposite of a utopia)

    2. Drones using facial recognition technology to hunt down and kill victims. Information being manipulated to distort the social media feeds of targeted individuals. Cleaning robots being hacked to bomb VIPs. The potentially harmful uses of AI are as vast as the human imagination.
      [Drones] unmanned aircraft
      [hunt down] to pursue relentlessly until caught
      [victims] people harmed or killed
      [manipulated] controlled or altered, often deceptively
      [bomb] to attack with explosives
      [vast] enormous in extent
      [imagination] the faculty of forming ideas and images in the mind

    3. One of the big questions of our age is: how can we maximise the undoubted benefits of AI while limiting its downsides? It is a tough challenge. All technologies are dualistic, particularly so with AI given it can significantly increase the scale and potency of malicious acts and lower their costs.
      [dualistic] having two opposing aspects (dual: double)
      [potency] power, strength, effectiveness (potent: powerful, influential, effective)

    4. The report, written by 26 researchers from several organisations including OpenAI, Oxford and Cambridge universities, and the Electronic Frontier Foundation, performs a valuable, if scary, service in flagging the threats from the abuse of powerful technology by rogue states, criminals and terrorists. Where it is less compelling is coming up with possible solutions.
      [rogue] lawless, unprincipled
      [rogue states] a label, used chiefly by the US, for states seen as flouting international rules, authoritarian at home and threatening abroad, such as Iraq, Iran, North Korea, Libya and Sudan
      [compelling] convincing; gripping (compel: to force)

    5. Much of the public concern about AI focuses on the threat of an emergent superintelligence and the mass extinction of our species. There is no doubt that the issue of how to “control” artificial general intelligence, as it is known, is a fascinating and worthwhile debate. But in the words of one AI expert, it is probably “a second half of the 21st century problem”.
      [emergent] newly coming into existence
      [extinction] the dying out of a species
      [debate] a formal discussion or argument

    6. The latest report highlights how we should already be worrying today about the abuse of relatively narrow AI. Human evil, incompetence and poor design will remain a bigger threat for the foreseeable future than some omnipotent and omniscient Terminator-style Skynet.
      [incompetence] lack of skill or ability (competence: capability, ability)

    7. AI academics have led a commendable campaign to highlight the dangers of so-called [lethal autonomous weapons systems]. The United Nations is now trying to turn that initiative into workable international protocols.
      [commendable] deserving praise
      [campaign] an organized course of action; a military operation
      [lethal] capable of causing death
      [initiative] a new plan or proposal intended to solve a problem
      [protocols] formal agreements or rules of procedure

    8. Some interested philanthropists, including Elon Musk and Sam Altman, have also sunk money into research institutes focusing on AI safety, including one that co-wrote the report. Normally, researchers who call for more money to be spent on research should be treated with some scepticism. But there are estimated to be just 100 researchers in the western world grappling with the issue. That seems far too few, given the scale of the challenge.
      [philanthropists] wealthy people who donate to good causes
      [sunk money into] invested money in (cf. sunk cost: a cost already incurred that cannot be recovered)
      [scepticism] a doubting attitude
      [estimated] roughly calculated
      [grappling with] struggling to deal with

    9. Governments need to raise their understanding in this area. In the US, the creation of a federal robotics commission to develop relevant governmental expertise would be a good idea. The British government is sensibly expanding the remit of the [Alan Turing Institute] to encompass AI.
      [federal] relating to a central national government
      [expertise] specialist knowledge or skill
      [remit] the area of responsibility of an organization
      [encompass] to include, cover

    10. Some tech companies have already engaged the public on ethical issues concerning AI, and the rest should be encouraged to do so. Arguably, they should also be held liable for the misuse of their AI-enabled products in the same way that pharmaceutical firms are responsible for the harmful side-effects of their drugs.
      [pharmaceutical firms] drug-making companies

    11. Companies should be deterred from rushing AI-enabled products to market before they have been adequately tested. Just as the potential flaws of cyber security systems are sometimes explored by co-operative hackers, so AI services should be stress-tested by other expert users before their release.
      [deter] to discourage or prevent through fear of consequences
      [flaws] defects, weaknesses

    12. Ultimately, we should be realistic that only so much can ever be done to limit the abuse of AI. Rogue regimes will inevitably use it for bad ends. We cannot uninvent scientific discovery. But we should, at least, do everything possible to restrain its most immediate and obvious downsides.
      [regimes] governments, especially authoritarian ones; rogue regimes = rogue states
      [restrain] to hold back, keep under control

    Background

    [Electronic Frontier Foundation] an international non-profit digital-rights advocacy and legal organization headquartered in the United States.

