Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe & Pablo Stafforini.

Future Matters Reader releases audio versions of most of the writings summarized in the Future Matters newsletter.
#8: Bing Chat, AI labs on safety, and pausing Future Matters (41:48)
Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters …
Holden Karnofsky — Success without dignity: a nearcasting story of avoiding catastrophe by luck (19:26)
Success without dignity: a nearcasting story of avoiding catastrophe by luck, by Holden Karnofsky. https://forum.effectivealtruism.org/posts/75CtdFj79sZrGpGiX/success-without-dignity-a-nearcasting-story-of-avoiding Note: Footnotes in the original article have been omitted.
Larks — A Windfall Clause for CEOs could worsen AI race dynamics (14:35)
In this post, Larks argues that the proposal to make AI firms promise to donate a large fraction of profits if they become extremely profitable would primarily benefit the management of those firms, giving managers an incentive to move fast. This would aggravate race dynamics and in turn increase existential risk. https://forum.effectivealtrui…
Otto Barten — Paper summary: The effectiveness of AI existential risk communication to the American and Dutch public (7:15)
This is Otto Barten's summary of 'The effectiveness of AI existential risk communication to the American and Dutch public' by Alexia Georgiadis. In the paper, Georgiadis measures changes in participants' awareness of AGI risks after they consume various media interventions. Summary: https://forum.effectivealtruism.org/posts/fqXLT7NHZGsLmjH4o/paper-summary…
Shulman & Thornley — How much should governments pay to prevent catastrophes? Longtermism's limited role (57:30)
Carl Shulman & Elliott Thornley argue that the goal of longtermists should be to get governments to adopt global catastrophic risk policies based on standard cost-benefit analysis rather than arguments that stress the overwhelming importance of the future. https://philpapers.org/archive/SHUHMS.pdf Note: Tables, notes and references in the original …
Elika Somani — Advice on communicating in and around the biosecurity policy community (13:06)
"The field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than what impressions you might get based on publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make …
Riley Harris — Summary of 'Are we living at the hinge of history?' by William MacAskill (7:10)
The Global Priorities Institute has published a new paper summary: 'Are we living at the hinge of history?' by William MacAskill. https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/ Note: Footnotes and references in the original article have been omitted.
Riley Harris — Summary of 'Longtermist institutional reform' by Tyler M. John and William MacAskill (5:32)
The Global Priorities Institute has published a new paper summary: 'Longtermist institutional reform' by Tyler John & William MacAskill. https://globalprioritiesinstitute.org/summary-summary-longtermist-institutional-reform/ Note: Footnotes and references in the original article have been omitted.
Hayden Wilkinson — Global priorities research: Why, how, and what have we learned? (44:42)
The Global Priorities Institute has released Hayden Wilkinson's presentation on global priorities research. (The talk was given in mid-September last year but remained unlisted until now.) https://globalprioritiesinstitute.org/hayden-wilkinson-global-priorities-research-why-how-and-what-have-we-learned/…
Piper — What should be kept off-limits in a virology lab? (7:49)
New rules around gain-of-function research make progress in striking a balance between reward and catastrophic risk. https://www.vox.com/future-perfect/2023/2/1/23580528/gain-of-function-virology-covid-monkeypox-catastrophic-risk-pandemic-lab-accident
"One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough." https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html…
#6: FTX collapse, value lock-in, and counterarguments to AI x-risk (37:47)
Future Matters is a newsletter about longtermism by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in …
#5: supervolcanoes, AI takeover, and What We Owe the Future (31:26)
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Tw…
#4: AI timelines, AGI risk, and existential risk from climate change (31:13)
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Tw…
#3: digital sentience, AGI ruin, and forecasting track records (34:05)
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum a…
#2: Clueless skepticism, 'longtermist' as an identity, and nanotechnology strategy research (23:07)
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on T…
#1: AI takeoff, longtermism vs. existential risk, and probability discounting (29:55)
"The remedies for all our diseases will be discovered long after we are dead; and the world will be made a fit place to live in. It is to be hoped that those who live in those days will look back with sympathy to their known and unknown benefactors." — John Stuart Mill

Future Matters is a newsletter about longtermism brought to you by Matthew van der …
> We think our civilization near its meridian, but we are yet only at the cock-crowing and the morning star.
> — Ralph Waldo Emerson

Welcome to Future Matters, a newsletter about longtermism brought to you by Matthew van der Merwe & Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism co…