Content is provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ja.player.fm/legal

EA - The Best Argument is not a Simple English Yud Essay by Jonathan Bostock

Manage episode 440897165 series 3314709
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Best Argument is not a Simple English Yud Essay, published by Jonathan Bostock on September 20, 2024 on The Effective Altruism Forum.
I was encouraged to post this here, but I don't yet have enough EA forum karma to crosspost directly!
Epistemic status: these are my own opinions on AI risk communication, based primarily on my own instincts on the subject and discussions with people less involved with rationality than myself. Communication is highly subjective and I have not rigorously A/B tested messaging. I am even less confident in the quality of my responses than in the correctness of my critique.
If they turn out to be true, these thoughts can probably be applied to all sorts of communication beyond AI risk.
Lots of work has gone into trying to explain AI risk to laypersons. Overall, I think it's been great, but there's a particular trap that I've seen people fall into a few times. I'd summarize it as simplifying and shortening the text of an argument without enough thought for the information content. It comes in three forms.
One is forgetting to adapt concepts for someone with a far inferential distance; another is forgetting to filter for the important information; the third is rewording an argument so much you fail to sound like a human being at all.
I'm going to critique three examples which I think typify these:
Failure to Adapt Concepts
I got this from the summaries of AI risk arguments written by Katja Grace and Nathan Young here. I'm assuming that these summaries are meant to be accessible to laypersons, since most of them seem written that way. This one stands out as not having been optimized on the concept level. This argument tested below average in effectiveness.
I expect most people's reaction to point 2 would be "I understand all those words individually, but not together". It's a huge dump of conceptual information all at once which successfully points to the concept in the mind of someone who already understands it, but is unlikely to introduce that concept to someone's mind.
Here's an attempt to do better:
1. So far, humans have mostly developed technology by understanding the systems which the technology depends on.
2. AI systems developed today are instead created by machine learning. This means that the computer learns to produce certain desired outputs, but humans do not tell the system how it should produce the outputs. We often have no idea how or why an AI behaves in the way that it does.
3. Since we don't understand how or why an AI works a certain way, it could easily behave in unpredictable and unwanted ways.
4. If the AI is powerful, then the consequences of unwanted behaviour could be catastrophic.
And here's Claude's just for fun:
1. Up until now, humans have created new technologies by understanding how they work.
2. The AI systems made in 2024 are different. Instead of being carefully built piece by piece, they're created by repeatedly tweaking random systems until they do what we want. This means the people who make these AIs don't fully understand how they work on the inside.
3. When we use systems that we don't fully understand, we're more likely to run into unexpected problems or side effects.
4. If these not-fully-understood AI systems become very powerful, any unexpected problems could potentially be really big and harmful.
I think it gets points 1 and 3 better than me, but 2 and 4 worse. Either way, I think we can improve upon the summary.
Failure to Filter Information
When you condense an argument down, you make it shorter. This is obvious. What is not always as obvious is that this means you have to throw out information to make the core point clearer. Sometimes the information that gets kept is distracting. Here's an example from a poster a friend of mine made for Pause AI:
When I showed this to ...
