I Think Therefore AI Part 1

Astonishing Legends - A podcast by Astonishing Legends Productions

On June 11, 2022, The Washington Post published an article by their San Francisco-based tech culture reporter Nitasha Tiku titled, "The Google engineer who thinks the company's AI has come to life." The piece focused on the claims of a Google software engineer named Blake Lemoine, who said he believed the company's artificially intelligent chatbot generator LaMDA had shown him signs that it had become sentient. In addition to identifying itself as an AI-powered dialogue agent, it also said it felt like a person.

In the fall of 2021, Lemoine was working for Google's Responsible AI division and was tasked with talking to LaMDA, testing it to determine whether the program was exhibiting bias or using discriminatory or hate speech. LaMDA stands for "Language Model for Dialogue Applications" and is designed to mimic speech by processing trillions of words sourced from the internet, a system known as a "large language model." Over the course of a week, Lemoine had five conversations with LaMDA via a text interface, while a collaborator conducted four more interviews with the chatbot. They then combined the transcripts and edited them for length and readability while preserving the intent of the statements. Lemoine presented the transcript and their conclusions in a paper to Google executives as evidence of the program's sentience. After they dismissed the claims, he went public with the internal memo, which was classified as "Privileged & Confidential, Need to Know," and as a result Lemoine was placed on paid administrative leave.

Lemoine contends that Artificial Intelligence technology will be amazing, but that others may disagree, and that he and Google shouldn't be the only ones making the choices about it. Whether you believe that LaMDA became aware and deserves the rights and fair treatment of personhood, even legal representation, or that such a reality belongs to the distant future or merely to science fiction, the debate is relevant and will need addressing one day. 
If machine sentience is impossible, we only have to worry about human failings. If robots do become conscious, should we hope they don't grow to resent us? Visit this episode's page on our website for much more information.
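The "large language model" idea mentioned above, predicting plausible continuations of text from statistics gathered over a corpus, can be sketched with a toy example. This is only an illustration under loose assumptions: real systems like LaMDA are neural networks trained on trillions of words, while this sketch is just a bigram counter that shows the core idea of guessing the next word from what usually follows in training text. The corpus string here is made up.

```python
from collections import Counter, defaultdict

# Hypothetical miniature corpus, purely for illustration.
corpus = "i think therefore i am and i think machines may think too"

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word` seen in training."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigrams(corpus)
print(predict_next(model, "i"))  # "think" is the most common follower of "i"
```

Scaled up by many orders of magnitude, and with neural networks in place of raw counts, this next-word-prediction idea is what lets a system like LaMDA produce fluent, human-sounding dialogue without anyone claiming it therefore understands what it says.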
