This is an alternate universe story, where Petunia married a scientist. Harry enters the wizarding world armed with Enlightenment ideals and the experimental spirit.
AI safety, philosophy and other things.
Join Steven and Brian as we dive into the world of Harry Potter and the Methods of Rationality! Steven will play the role of the tour guide, doing his best not to spoil any of the surprises, and Brian will play the seasoned adventurer who is new to this particular work.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Man in the High Castle 02: Politics Kill Dreams (54:03)
Welcome back to PKD! Brian and Steven continue our reading of Man in the High Castle. Lots of fun plot lines get started in these chapters. Come back next week for chapters 5 and 6!
by The Methods of Rationality Podcast
#15 Should we be engaging in civil disobedience to protest AGI development? (1:18:20)
StopAI are a non-profit aiming to achieve a permanent ban on the development of AGI through peaceful protest. In this episode, I chatted with three of the founders of StopAI – Remmelt Ellen, Sam Kirchner and Guido Reichstadter. We talked about what protest tactics StopAI have been using, and why they want a stop (and not just a pause!) in the developme…
Buck Shlegeris is the CEO of Redwood Research, a non-profit working to reduce risks from powerful AI. We discussed Redwood's research into AI control, why we shouldn't feel confident that witnessing an AI escape attempt would persuade labs to undeploy dangerous models, lessons from the vetoing of SB1047, the importance of lab security and more. Pos…
Man in the High Castle 01: Parallel Kafkaesque Dystopia (56:25)
Welcome back to PKD! Brian and Steven kick off our second book, Man in the High Castle! So far it’s lots of fun, even if the world is a complete nightmare.
by The Methods of Rationality Podcast
Ubik 06: Pondering Kafkaesque Dimensions (1:21:44)
Welcome back to PKD, the show where Brian and Steven dive into the weird world and weirder mind of Philip K. Dick! In this episode, we wrap up Ubik and talk about some of the cool, mind-bending stuff it makes you think about. Come back next week for the first two chapters of Man in the High Castle!
by The Methods of Rationality Podcast
Welcome back to PKD, the show where Brian and Steven dive into the weird world and weirder mind of Philip K. Dick! In this episode, the weirdness keeps on coming! Come back next week for the last 4 chapters!
by The Methods of Rationality Podcast
Welcome back to PKD, the show where Brian and Steven dive into the weird world and weirder mind of Philip K. Dick! In this episode, things are getting super weird. Are they dead? Is everyone dead? Come back next week for chapters 11, 12, and 13! Note: I’m giving myself a pass for being less coherent than usual. I didn’t check until after recording…
Welcome back to PKD, the show where Brian and Steven dive into the weird world and weirder mind of Philip K. Dick! In this episode, the crew gets to the moon and shit goes sideways right away. Come back next week for chapters 8, 9, and 10!
by The Methods of Rationality Podcast
Welcome back to PKD, the show where Brian and Steven dive into the weird world and weirder mind of Philip K. Dick! In this episode we assemble the crew with chapters 3, 4, and 5. Next week we’ll see what this rag-tag group of meta humans gets up to in chapters 6 and 7!
by The Methods of Rationality Podcast
#13 Aaron Bergman and Max Alexander debate the Very Repugnant Conclusion (1:53:51)
In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate. They discuss the Very Repugnant Conclusion, which, in the words of Claude, "posits that a world with a vast population living lives barely worth living could be considered ethically inferior to a world with an even lar…
Welcome back to PKD, the show where Brian and Steven dive into the weird world and weirder mind of Philip K. Dick! We started off light with just the first couple of chapters to get our sea legs. Next week we’re reading chapters 3, 4 and 5.
by The Methods of Rationality Podcast
Welcome to the show where Brian and Steven dive into the weird world and weirder mind of Philip K. Dick. This is an introduction where Brian explains a bit about who PKD was and gets us prepped to start our first of three of his books. We’re starting with Ubik, chapters 1 and 2 next week! (We change the name next episode as the one we went for with t…
Deger Turan is the CEO of forecasting platform Metaculus and president of the AI Objectives Institute. In this episode, we discuss how forecasting can be used to help humanity coordinate around reducing existential risks, Deger's advice for aspiring forecasters, the future of using AI for forecasting and more! Enter Metaculus's Q3 AI Forecasting Be…
Buckle up as Eneasz and Steven discuss some of the feedback about this short series, then have an untamed discussion, full of digressions and random thoughts, about how capitalism and capitalists are portrayed in popular culture. We hope you enjoy it! :)
by The Methods of Rationality Podcast
Steven and Eneasz wrap up their discussion on the Fallout TV show and plan out future content.
by The Methods of Rationality Podcast
Enjoy the 7th episode of Uranium Fever! Everyone’s always trying to get some head. In this case it’s more literal.
by The Methods of Rationality Podcast
#11 Katja Grace on the AI Impacts survey, the case for slowing down AI & arguments for and against x-risk (1:16:34)
Katja Grace is the co-founder of AI Impacts, a non-profit focused on answering key questions about the future trajectory of AI development, which is best known for conducting the world's largest survey of machine learning researchers. We talked about the most interesting results from the survey, Katja's views on whether we should slow down AI progr…
Our heroes get trapped by evil mutants! Plus we get some ghoulishly awesome flashbacks from before the end of the world. Don’t forget to check out AskWho’s Substack. Eneasz recommended this reading of a book review of the Old Testament and it’s awesome.
by The Methods of Rationality Podcast
#10 Nathan Labenz on the current AI state-of-the-art, the Red Team in Public project, reasons for hope on AI x-risk & more (1:54:22)
Nathan Labenz is the founder of AI content-generation platform Waymark and host of The Cognitive Revolution Podcast, who now works full-time on tracking and analysing developments in AI. We chatted about where we currently stand with state-of-the-art AI capabilities, whether we should be advocating for a pause on scaling frontier models, Nathan's Red T…
Eneasz and Steven chat about the fifth episode of the Fallout TV show. Remember when we thought we’d keep these around 15 minutes each? Ha!
by The Methods of Rationality Podcast
This episode gets us back into awesome character stuff and we had a blast (bomb pun intended).
by The Methods of Rationality Podcast
This was more of a plot-moving episode and less of a deep-diving conversation starter, but we still found some stuff to talk about. I hope you enjoy the transition between the main episode and the last few minutes – the sophisticated among you will get it. ;)
by The Methods of Rationality Podcast
#9 Sneha Revanur on founding Encode Justice, California's SB-1047, and youth advocacy for safe AI development (49:50)
Sneha Revanur is the founder of Encode Justice, an international, youth-led network campaigning for the responsible development of AI, which was among the sponsors of California's proposed AI bill SB-1047. We chatted about why Sneha founded Encode Justice, the importance of youth advocacy in AI safety, and what the movement can learn from climate…
Eneasz and Steven continue the discussion about the Fallout TV show and digress hard about the role of luck in our life circumstances. The outro song is from The Righteous Gemstones, sung by Walton Goggins. :)
by The Methods of Rationality Podcast
Eneasz and Steven wanted to chat about the new and awesome Fallout TV series. We were under the delusion we could keep this to like 15 minutes, but you know us and that was never going to happen.
by The Methods of Rationality Podcast
#8 Nathan Young on forecasting, AI risk & regulation, and how not to lose your mind on Twitter (1:28:07)
Nathan Young is a forecaster, software developer and tentative AI optimist. In this episode, we discussed how Nathan approaches forecasting, why his p(doom) is 2-9%, whether we should pause AGI research, and more! Follow Nathan on Twitter: Nathan 🔍 (@NathanpmYoung) / X (twitter.com) Nathan's substack: Predictive Text | Nathan Young | Substack My Tw…
#7 Noah Topper helps me understand Eliezer Yudkowsky (1:28:31)
A while back, my self-confessed inability to fully comprehend the writings of Eliezer Yudkowsky elicited the sympathy of the author himself. In an attempt to more completely understand why AI is going to kill us all, I enlisted the help of Noah Topper, recent Computer Science Master's graduate and long-time EY fan, to help me break down A List of Le…
#6 Holly Elmore on pausing AI, protesting, warning shots & more (1:48:01)
Holly Elmore is an AI pause advocate and Executive Director of PauseAI US. We chatted about the case for pausing AI, her experience of organising protests against frontier AGI research, the danger of relying on warning shots, the prospect of techno-utopia, possible risks of pausing and more! Follow Holly on Twitter: Holly ⏸️ Elmore (@ilex_ulmus) / …
Oscar Catch-Up 2023 – Not Everything Bad Is A Satire – The Boy, The Mole, The Fox, and The Horse (33:38)
Is the 2023 winner for animated short a brilliant work of satirical performance art, or just bad, or just good? Eneasz, Matt, and Jen have different opinions, and fortunately we totally decide which one is right. Watch it here.
by The Methods of Rationality Podcast
#5 Joep Meindertsma on founding PauseAI and strategies for communicating AI risk (46:43)
In this episode, I talked with Joep Meindertsma, founder of PauseAI, about how he discovered AI safety, the emotional experience of internalising existential risks, strategies for communicating AI risk, his assessment of recent AI policy developments and more! Find out more about PauseAI at www.pauseai.info…
#4 Émile P. Torres and I discuss where we agree and disagree on AI safety (1:47:21)
Émile P. Torres is a philosopher and historian known for their research on the history and ethical implications of human extinction. They are also an outspoken critic of Effective Altruism, longtermism and the AI safety movement. In this episode, we chatted about why Émile opposes both the 'doomer' and accelerationist factions, and identified some …
#3 Darren McKee on explaining AI risk to the public & navigating the AI safety debate (51:40)
Darren McKee is an author, speaker and policy advisor who has recently penned a beginner-friendly introduction to AI Safety named Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. We chatted about the best arguments for worrying about AI, responses to common objections, how to navigate the online AI safety s…
Steven and Jennifer continue the dream in a discussion with the story’s author, Eneasz Brodski. The book is What Lies Dreaming. Read this book and buy it here. The short story this is an expansion of is Of All Possible Worlds. Check out that story and everything else he’s published…
by The Methods of Rationality Podcast
#2 Akash Wasil on transitioning into AI safety & promising proposals for AI governance (1:10:57)
Akash is an AI policy researcher working on ways to reduce global security risks from advanced AI. He has worked at the Center for AI Safety, Center for AI Policy, and Control AI. Before getting involved in AI safety, he was a PhD student studying technology & mental health at the University of Pennsylvania. We chatted about why he decided to work …
#1 Aaron Bergman and Max Alexander argue about moral realism while I smile and nod (1:08:17)
In this inaugural episode of Consistently Candid, Aaron Bergman and Max Alexander each try to convince me of their position on moral realism, and I settle the issue once and for all. Featuring occasional interjections from the sat-nav in the Uber Aaron was taking at the time. My Twitter: https://twitter.com/littIeramblings Max's Twitter: https://twi…