Popular Science: How to Make Artificial Intelligence Benefit Mankind (2)

Author: 槑焁 | Published 2024-07-31 22:24

How to make artificial intelligence benefit mankind (2)

Industry and academic initiatives have been set up to ensure AI works to help people. These include the Partnership on AI to Benefit People and Society, established by companies including IBM, and a $27m effort involving Harvard and the Massachusetts Institute of Technology. Groups like OpenAI, backed by Elon Musk and Google, have made progress, says Prof Russell: “We’ve seen papers...that address the technical problem of safety.”

There are echoes of past efforts to deal with the complications of a new technology. Satya Nadella, chief executive of Microsoft, compares it to 15 years ago when Bill Gates rallied his company’s developers to combat computer malware. His “trustworthy computing” initiative was a watershed moment. In an interview with the FT, Mr Nadella said he hoped to do something similar to ensure AI works to benefit humans.

AI presents some thorny problems, however. Machine learning systems derive insights from large amounts of data. Eric Horvitz, a Microsoft executive, told a US Senate hearing late last year that these data sets may themselves be skewed. “Many of our data sets have been collected...with assumptions we may not deeply understand, and we don’t want our machine-learned applications...to be amplifying cultural biases,” he said.
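
To see how a skewed data set propagates into a model, here is a minimal sketch in Python. Everything below is synthetic and invented for illustration: a classifier is trained on historical labels that encode a past bias against one group, and its predictions reproduce that skew even though the underlying “skill” feature is identically distributed across groups.

```python
# Hypothetical illustration of data-set bias; all numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)      # sensitive attribute: group 0 or 1
skill = rng.normal(0.0, 1.0, size=n)    # identically distributed in both groups

# Historical labels: skill matters, but group 1 was also penalised by
# past decision-makers (the cultural bias baked into the data set).
logit = 1.5 * skill - 1.0 * group
hired = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Train on features that include the group attribute.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The trained model reproduces the historical skew, even though skill
# is distributed identically in the two groups.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted selection rate {pred[group == g].mean():.2f}")
```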

Last year, an investigation by news organisation ProPublica found that an algorithm used by the US justice system to determine whether criminal defendants were likely to reoffend had a racial bias. Black defendants with a low risk of reoffending were more likely than white ones to be labelled as high risk.
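
ProPublica’s finding amounts to a gap in false positive rates: among defendants who did not go on to reoffend, black defendants were flagged as high risk more often than white ones. The sketch below audits that quantity with invented counts; it is not the real COMPAS data.

```python
# Hypothetical audit of a risk tool: compare false positive rates
# across groups. The records below are invented, not ProPublica's data.
def false_positive_rate(flagged_high, reoffended):
    """Share of non-reoffenders who were wrongly flagged high risk."""
    non_reoffenders = [f for f, r in zip(flagged_high, reoffended) if not r]
    return sum(non_reoffenders) / len(non_reoffenders)

# (flagged_high_risk, actually_reoffended) per defendant, by group.
group_a = [(True, False), (True, False), (False, False), (True, True), (False, False)]
group_b = [(False, False), (False, False), (True, False), (True, True), (False, False)]

for name, records in [("A", group_a), ("B", group_b)]:
    flags, outcomes = zip(*records)
    print(f"group {name}: FPR = {false_positive_rate(flags, outcomes):.2f}")

# A tool can have similar overall accuracy for both groups and still
# have unequal FPRs, which is the disparity ProPublica reported.
```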

Greater transparency is one way forward, for example making it clear what information AI systems have used. But the “thought processes” of deep learning systems are not easy to audit. Mr Horvitz says such systems are hard for humans to understand. “We need to understand how to justify [their] decisions and how the thinking is done.”
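
One generic way to probe an opaque model from the outside is permutation importance: shuffle a single input and measure how much predictive accuracy drops. This is not Microsoft’s method, just a common black-box auditing technique, sketched here on a toy model and synthetic data.

```python
# Hypothetical black-box audit via permutation importance. The model
# and data are placeholders so the sketch stays self-contained.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                       # three input features
y = 2.0 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(0, 0.1, 500)
model = LinearRegression().fit(X, y)

def permutation_importance(model, X, y, col, rng):
    """Drop in accuracy (R^2) when one input column is shuffled."""
    base = model.score(X, y)
    X_shuffled = X.copy()
    rng.shuffle(X_shuffled[:, col])                 # break the column's link to y
    return base - model.score(X_shuffled, y)

for col in range(X.shape[1]):
    drop = permutation_importance(model, X, y, col, rng)
    print(f"feature {col}: importance {drop:.3f}")

# Feature 0 dominates and feature 1 contributes nothing: a first,
# model-agnostic step toward justifying a system's decisions.
```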

Compiled on 1 August 2024 at 北城家园
