Artificial General Intelligence (AGI) Show with Soroush Pour
Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)
We speak with Thomas Larsen, Director of Strategy at the Center for AI Policy in Washington, DC, to do a "speed run" overview of all the major technical research directions in AI alignment. It's a great way to quickly get a broad picture of the field of technical AI alignment.
In 2022, Thomas spent ~75 hours putting together an overview of what everyone in technical alignment was doing. Since then, he has remained deeply engaged in AI safety. In this episode, he shares an updated overview to help listeners quickly understand the technical alignment research landscape.
We talk to Thomas about a huge breadth of technical alignment areas, including:
* Prosaic alignment
* Scalable oversight (e.g. RLHF, debate, IDA; a toy RLHF sketch follows this list)
* Interpretability
* Heuristic arguments (from ARC)
* Model evaluations
* Agent foundations
* Other areas, covered more briefly:
* Model splintering
* Out-of-distribution (OOD) detection
* Low impact measures
* Threat modelling
* Scaling laws
* Brain-like AI safety
* Inverse reinforcement learning (IRL)
* Cooperative AI
* Adversarial training
* Truthful AI
* Brain-machine interfaces (e.g. Neuralink)
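For listeners newer to these areas, here is a minimal sketch of the reward-modelling step at the heart of RLHF, one of the scalable oversight approaches covered in the episode. It trains a toy reward model on pairwise human preference data with a Bradley-Terry style loss. Everything here (the model shape, the random stand-in embeddings, the hyperparameters) is an illustrative assumption, not code from the episode or any particular lab's pipeline:

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a pre-computed response embedding to a scalar reward."""
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: maximise the probability that the
    # human-preferred response receives the higher reward.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# One synthetic training step. The random tensors stand in for encoded
# (prompt, response) pairs where human raters preferred `chosen` over `rejected`.
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)
loss = preference_loss(model(chosen), model(rejected))
opt.zero_grad()
loss.backward()
opt.step()
print(f"preference loss: {loss.item():.3f}")
```

In a full RLHF pipeline, the reward model is typically a language-model head rather than a single linear layer, and the learned reward then drives RL fine-tuning of the policy (e.g. with PPO); this sketch shows only the preference-learning objective.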
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- About Thomas --
Thomas studied Computer Science & Mathematics at U. Michigan, where he first did ML research in the field of computer vision. After graduating, he completed the MATS AI safety research scholar program before doing a stint at MIRI as a Technical AI Safety Researcher. Earlier this year, he moved into AI policy by co-founding the Center for AI Policy, a nonprofit, nonpartisan organisation focused on getting the US government to adopt policies that would mitigate national security risks from AI. The Center for AI Policy is not connected to foreign governments or commercial AI developers and is instead committed to the public interest.
* Center for AI Policy - https://www.aipolicy.us
* LinkedIn - https://www.linkedin.com/in/thomas-larsen/
* LessWrong - https://www.lesswrong.com/users/thomas-larsen
-- Further resources --
* Thomas' post, "What Everyone in Technical Alignment is Doing and Why" https://www.lesswrong.com/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is
* Please note this post is from Aug 2022. The podcast gives a more up-to-date picture, but the post is still a valuable and relevant resource.