Content is provided by Upol Ehsan and Shea Brown. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Upol Ehsan and Shea Brown or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ja.player.fm/legal

🔥 Generative AI Use Cases: What's Legit and What's Not? | irResponsible AI EP6S01

26:54
 

Manage episode 431306220 series 3578042

Got questions or comments or topics you want us to cover? Text us!

In this episode of irResponsible AI, we discuss
✅ GenAI is cool, but do you really need it for your use case?
✅ How can companies end up doing irresponsible AI by using GenAI for the wrong use cases?
✅ How can we get out of this problem?
What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.
🎙️Who are your hosts and why should you even bother to listen?
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.
Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.
All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.
Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/
CHAPTERS:
00:00 - Introduction
01:28 - Misuse of Generative AI
02:27 - The glue example from Google's GenAI
03:18 - The Challenge of Public Trust and Misinformation
03:45 - Why is this a serious problem?
04:49 - Why do businesses need to worry about it?
05:32 - Auditing Generative AI Systems and Liability Risks
07:18 - Why is this GenAI hype happening?
09:20 - Competitive Pressure and Funding Influence
14:29 - How to avoid failure: investing in Problem Understanding
14:48 - Good use cases of GenAI
17:05 - LLMs are only useful if you know the answer
17:30 - Text-based video editing as a good example
21:40 - Need for GenAI literacy amongst tech execs
23:30 - Takeaways
#ResponsibleAI #ExplainableAI #podcasts #aiethics

Support the show

What can you do?
🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!
Follow us for more Responsible AI:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/

