Content is provided by Scott Philbrook and Astonishing Legends Productions. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Scott Philbrook and Astonishing Legends Productions or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ja.player.fm/legal

I Think Therefore AI Part 1

2:05:51
 
 

Manage episode 333960286 series 89785
On June 11, 2022, The Washington Post published an article by their San Francisco-based tech culture reporter Nitasha Tiku titled, "The Google engineer who thinks the company's AI has come to life." The piece focused on the claims of a Google software engineer named Blake Lemoine, who said he believed the company's artificially intelligent chatbot generator LaMDA had shown him signs that it had become sentient. In addition to identifying itself as an AI-powered dialogue agent, it also said it felt like a person.

In the fall of 2021, Lemoine was working for Google's Responsible AI division and was tasked with talking to LaMDA, testing it to determine whether the program was exhibiting bias or using discriminatory or hate speech. LaMDA stands for "Language Model for Dialogue Applications" and is designed to mimic speech by processing trillions of words sourced from the internet, a system known as a "large language model." Over the course of a week, Lemoine had five conversations with LaMDA via a text interface, while a collaborator conducted four interviews with the chatbot. They then combined the transcripts and edited them for length and readability while preserving the original intent of the statements. Lemoine presented the transcript and their conclusions in a paper to Google executives as evidence of the program's sentience. After they dismissed the claims, he went public with the internal memo, which was classified as "Privileged & Confidential, Need to Know," and as a result Lemoine was placed on paid administrative leave.

Blake Lemoine contends that Artificial Intelligence technology will be amazing, but that others may disagree, and that neither he nor Google should make all the choices about it. Whether you believe that LaMDA has become aware and deserves the rights and fair treatment of personhood, and even legal representation, or that this reality belongs to the distant future, or merely to SciFi, the debate is relevant and will need addressing one day.
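For context on the "large language model" term used above: at its simplest, a language model predicts the next word from the words it has already seen. The toy sketch below is illustrative only; LaMDA's actual architecture is a neural network trained on trillions of words, not a word-pair counting table. It shows the core idea with a minimal bigram model:

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, the words that follow it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model, start, length=8):
    """Greedily append the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# A tiny "corpus" stands in for the internet-scale text a real model ingests.
corpus = "the model predicts the next word and the model learns patterns"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

A real large language model replaces the counting table with billions of learned parameters, but the interface is the same: given the text so far, score the possible continuations and emit one.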
If machine sentience is impossible, we only have to worry about human failings. If robots become conscious, should we hope they don't grow to resent us?
Visit our webpage on this episode for a lot more information.

330 episodes

Astonishing Legends

102,763 subscribers

