LW - QandA on Proposed SB 1047 by Zvi

Duration: 1:09:29
 
Content is provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ja.player.fm/legal
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Q&A on Proposed SB 1047, published by Zvi on May 2, 2024 on LessWrong.

Previously: On the Proposed California SB 1047. Text of the bill is here. It focuses on safety requirements for highly capable AI models. This is written as an FAQ, tackling all questions or points I saw raised. The Safe & Secure AI Innovation Act also has a description page.

Why Are We Here Again?

There have been many highly vocal and forceful objections to SB 1047 this week, in reaction to a (disputed and seemingly incorrect) claim that the bill has been 'fast tracked.' The bill continues to have a substantial chance of becoming law according to Manifold, where the market has not moved on recent events. The bill has been referred to two policy committees, one of which put out this 38-page analysis. The purpose of this post is to gather and analyze all objections that came to my attention in any way, including all responses to my request for them on Twitter, and to suggest concrete changes that address some real concerns that were identified.

1. Some are helpful critiques pointing to potential problems, or good questions where we should ensure that my current understanding is correct. In several cases, I suggest concrete changes to the bill as a result. Two are important to fix weaknesses, one is a clear improvement, and the others are free actions for clarity.
2. Some are based on what I strongly believe is a failure to understand how the law works, both in theory and in practice, or a failure to carefully read the bill, or both.
3. Some point out a fundamental conflict. They want people to have the ability to freely train and release the weights of highly capable future models. Then they notice that it will become impossible to do this while adhering to ordinary safety requirements. They seem to therefore propose not having safety requirements.
4. Some are alarmist rhetoric that has little tether to what is in the bill, or to how any of this works. I am deeply disappointed in some of those using or sharing such rhetoric.

Throughout such objections, there is little or no acknowledgement of the risks the bill attempts to mitigate, suggestion of alternative ways to do that, or reason to believe such risks are insubstantial even absent required mitigation. To be fair to such objectors, many of them have previously stated that they believe future, more capable AI poses little catastrophic risk. I get making mistakes; indeed, it would be surprising if this post contained none of its own. Understanding even a relatively short bill like SB 1047 requires close reading. If you thoughtlessly forward anything that sounds bad (or good) about such a bill, you are going to make mistakes, some of which are going to look dumb.

What is the Story So Far?

If you have not previously done so, I recommend reading my previous coverage of the bill when it was proposed, although note that the text has been slightly updated since then. In the first half of that post, I did an RTFB (Read the Bill). I read it again for this post. The core mechanism of the bill is that if you want to train a 'covered model,' meaning training on 10^26 FLOPs or getting performance similar to or greater than what that much compute would buy you in 2024, then various safety requirements attach. If you fail in your duties, you can be fined; if you purposefully lie about it, that is under penalty of perjury.
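To make the compute prong concrete, here is a minimal sketch in Python of that threshold check. It assumes the common rule-of-thumb approximation that training compute is roughly 6 × parameters × tokens; the constant, the function names, and the example model sizes are illustrative assumptions, not anything specified in the bill, and the bill's alternative performance-based prong is not captured by this arithmetic.

```python
# Illustrative sketch only, not legal guidance: a check against SB 1047's
# covered-model compute threshold of 10^26 FLOPs. Training compute is
# approximated with the common rule of thumb C ~= 6 * N * D, where N is the
# parameter count and D is the number of training tokens. The constant 6 and
# the example numbers are assumptions; the bill's second prong (performance
# similar to what 10^26 FLOPs buys in 2024) is not modeled here.

COVERED_MODEL_FLOP_THRESHOLD = 1e26  # compute threshold from the bill's definition


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: C ~= 6 * N * D."""
    return 6.0 * n_params * n_tokens


def is_covered_by_compute(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training run meets or exceeds the threshold."""
    return estimated_training_flops(n_params, n_tokens) >= COVERED_MODEL_FLOP_THRESHOLD


# Hypothetical example: a 1e12-parameter model trained on 2e13 tokens.
flops = estimated_training_flops(1e12, 2e13)  # 6 * 1e12 * 2e13 = 1.2e26
print(f"~{flops:.1e} FLOPs -> covered: {is_covered_by_compute(1e12, 2e13)}")
```

On these assumed numbers, the run lands just above the cutoff. The performance-based prong presumably exists so that algorithmic efficiency gains cannot be used to duck a fixed FLOP count, though nothing in this sketch models that.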
I concluded this was a good-faith effort to put forth a helpful bill. As the bill deals with complex issues, it contains both potential loopholes on the safety side and potential issues of inadvertent overreach, unexpected consequences, or misinterpretation on the restriction side. In the second half, I responded to Dean Ball's criticisms of the bill, which he called 'California's Effort to Strangle AI.' 1. In the section What Is a Covered Model,...