“Shallow review of technical AI safety, 2024” by technicalities, Stag, Stephen McAleese, jordine, Dr. David Mathers
from aisafety.world
The following is a list of live agendas in technical AI safety, updating our post from last year. It is “shallow” in the sense that 1) we are not specialists in almost any of it and 2) we only spent about an hour on each entry. We also only use public information, so we are bound to be off by some additional factor.
The point is to help anyone look up some of what is happening, or that thing you vaguely remember reading about; to help new researchers orient and know (some of) their options; to help policy people know who to talk to for the actual information; and ideally to help funders see quickly what has already been funded and how much (but this proves to be hard).
“AI safety” means many things. We’re targeting work that intends to prevent very competent [...]
---
Outline:
(01:33) Editorial
(08:15) Agendas with public outputs
(08:19) 1. Understand existing models
(08:24) Evals
(14:49) Interpretability
(27:35) Understand learning
(31:49) 2. Control the thing
(40:31) Prevent deception and scheming
(46:30) Surgical model edits
(49:18) Goal robustness
(50:49) 3. Safety by design
(52:57) 4. Make AI solve it
(53:05) Scalable oversight
(01:00:14) Task decomp
(01:00:28) Adversarial
(01:04:36) 5. Theory
(01:07:27) Understanding agency
(01:15:47) Corrigibility
(01:17:29) Ontology Identification
(01:21:24) Understand cooperation
(01:26:32) 6. Miscellaneous
(01:50:40) Agendas without public outputs this year
(01:51:04) Graveyard (known to be inactive)
(01:52:00) Method
(01:55:09) Other reviews and taxonomies
(01:56:11) Acknowledgments
The original text contained 9 footnotes which were omitted from this narration.
---
First published:
December 29th, 2024
Source:
https://www.lesswrong.com/posts/fAW6RXLKTLHC3WXkS/shallow-review-of-technical-ai-safety-2024
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.