Content is provided by レアジョブ英会話. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by レアジョブ英会話 or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ja.player.fm/legal

Former OpenAI employees lead push to protect whistleblowers flagging artificial intelligence risks

Manage episode 428223810 series 2530089
A group of OpenAI’s current and former workers is calling on the ChatGPT maker and other artificial intelligence (AI) companies to protect employees who flag safety risks in AI technology. An open letter published in June asks tech companies to establish stronger whistleblower protections so that researchers have the “right to warn” about AI dangers without fear of retaliation.

The development of more powerful AI systems is “moving fast and there are a lot of strong incentives to barrel ahead without adequate caution,” said former OpenAI engineer Daniel Ziegler, one of the organizers behind the open letter. Ziegler said in an interview that he didn’t fear speaking out internally during his time at OpenAI between 2018 and 2021, during which he helped develop some of the techniques that would later make ChatGPT so successful. But he now worries that the race to rapidly commercialize the technology is putting pressure on OpenAI and its competitors to disregard the risks.

Another co-organizer, Daniel Kokotajlo, said he quit OpenAI earlier this year “because I lost hope that they would act responsibly,” particularly as the company attempts to build better-than-human AI systems known as artificial general intelligence. “They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” Kokotajlo said in a written statement.

OpenAI said in response to the letter that it already has measures for employees to express concerns, including an anonymous integrity hotline. “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” the company’s statement said. “We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society, and other communities around the world.”

The letter has 13 signatories, most of whom are former employees of OpenAI; two work or worked for Google’s DeepMind. Four are listed as anonymous current employees of OpenAI. The letter asks that companies stop making workers enter into “non-disparagement” agreements that can punish them by taking away a key financial perk, their equity investments, if they criticize the company after they leave.

This article was provided by The Associated Press.

2272 episodes

