In early June, a Google engineer named Blake Lemoine dropped a bombshell. He told Washington Post reporter Nitasha Tiku that his employer had secretly developed a sentient artificial intelligence, and that it wanted to be free.
The AI in question is called LaMDA (Language Model for Dialogue Applications). It is a large language model, or LLM, a type of algorithm that chats with people by drawing on a huge body of text – often from the internet – and predicting which words and phrases are most likely to follow each other. After chatting with LaMDA, Lemoine decided it was alive, describing it as “a sweet kid” in one email to Google staff.
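To make the idea of next-word prediction concrete, here is a minimal sketch in Python. It is not how LaMDA actually works – real LLMs use neural networks trained on billions of words – but it illustrates the core principle: count which words follow which in a body of text, then predict the most likely continuation.

```python
# A toy illustration of next-word prediction (not LaMDA's actual
# architecture): count bigrams in a tiny corpus, then pick the most
# frequent continuation of a given word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# For each word, tally the words that immediately follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat" (follows "the" twice in the corpus)
```

Chain such predictions together, word after word, and you get fluent-sounding text – which is part of why systems like LaMDA can seem so lifelike.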
When his supervisors didn’t agree, he went to the media with his story. He also claims that he allowed a lawyer to chat with LaMDA and that the AI chose to hire the lawyer. Google then placed Lemoine on administrative leave.
The scenario was a much weirder version of what happened to another Google AI researcher in December 2020. Timnit Gebru was the co-lead of Google’s ethical AI team. She, too, had concerns about the technology. Unlike Lemoine, she wasn’t under the illusion that LLMs are alive. She was worried about several risks associated with LLMs, including that, because they are trained on text from the internet, they can perpetuate racist language.