
Deep Dive 179 – Artificial Intelligence and Bias

56:32
 
Content is provided by The Federalist Society. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Federalist Society or its podcast platform partners. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ja.player.fm/legal
It is hard to find a discussion of artificial intelligence (AI) these days that does not include concerns about AI systems' potential bias against racial minorities and other identity groups. Facial recognition, lending, and bail determinations are just a few of the domains in which this issue arises. Laws are being proposed and even enacted to address these concerns. But is the problem properly understood? And if it is real, do we need new laws beyond the anti-discrimination laws that already govern human decision makers, hiring exams, and the like?
Unlike some humans, AI models have no malevolent biases or intention to discriminate. Are they superior to human decision-making in that sense? Nonetheless, it is well established that AI systems can have a disparate impact on various identity groups. Because AI learns by detecting correlations and other patterns in real-world data, are disparate impacts inevitable, short of requiring AI systems to produce proportionate results? Would prohibiting certain kinds of correlations degrade the accuracy of AI models? For example, in a bail determination system, would an AI model that learns men are more likely to be repeat offenders produce less accurate results if it were prohibited from taking gender into account?
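To make that trade-off concrete, here is a minimal, hypothetical sketch in Python (not from the episode): it trains a logistic-regression model on synthetic bail-style data with and without a protected attribute, then compares accuracy and a simple disparate-impact ratio. The dataset, the feature names, and the coefficients are all invented for illustration.

```python
# Hypothetical sketch of the trade-off the panel debates: if a protected
# attribute correlates with the outcome in the training data, dropping it
# can lower accuracy, yet the model may still show disparate impact through
# correlated proxy features. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Synthetic world: 'gender' (1 = male) and a proxy feature both influence
# the outcome 'reoffends'. The proxy (prior-arrest count) is itself
# correlated with gender, so removing gender does not remove its signal.
gender = rng.integers(0, 2, size=n)
prior_arrests = rng.poisson(lam=1.0 + 1.5 * gender)  # proxy correlated with gender
age = rng.normal(35, 10, size=n)
logit = -1.0 + 1.2 * gender + 0.6 * prior_arrests - 0.03 * age
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_full = np.column_stack([gender, prior_arrests, age])
X_blind = np.column_stack([prior_arrests, age])  # gender removed

def evaluate(X, label):
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, gender, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    acc = (pred == y_te).mean()
    # Disparate-impact ratio: rate of adverse (positive) predictions for one
    # group divided by the rate for the other; 1.0 would mean parity.
    ratio = pred[g_te == 1].mean() / pred[g_te == 0].mean()
    print(f"{label}: accuracy={acc:.3f}, adverse-rate ratio (M/F)={ratio:.2f}")

evaluate(X_full, "with gender   ")
evaluate(X_blind, "without gender")
```

On this toy data the gender-blind model tends to lose some accuracy yet still produces unequal adverse-prediction rates, because the prior-arrests feature acts as a proxy for gender. That is the core of the question above: simply removing a protected attribute neither guarantees proportionate results nor preserves accuracy.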
Featuring:
- Stewart A. Baker, Partner, Steptoe & Johnson LLP
- Nicholas Weaver, Researcher, International Computer Science Institute and Lecturer, UC Berkeley
- [Moderator] Curt Levey, President, Committee for Justice
Visit our website – www.RegProject.org – to learn more, view all of our content, and connect with us on social media.

374 episodes

