rational
Issue #223: Full List
31 July, 2022
# Instrumental
Focusing // CFAR 2017, 16 min
Hiring Programmers in Academia // jkaufman, 1 min
# Epistemic
Levels of Pluralism // adamShimi, 17 min
Bucket Errors // CFAR 2017, 12 min
Questions for a Theory of Narratives // Marv K, 5 min
Double Crux // CFAR 2017, 13 min
Forer Statements As Updates And Affirmations // Scott Alexander, 5 min
# AI
Reward is not the optimization target // TurnTrout, 13 min
AGI ruin scenarios are likely (and disjunctive) // So8res, 7 min
Conjecture: Internal Infohazard Policy // NPCollapse, 21 min
Brainstorm of things that could force an AI team to burn their lead // So8res, 14 min
Principles of Privacy for Alignment Research // johnswentworth, 8 min
Abstracting The Hardness of Alignment: Unbounded Atomic Optimization // adamShimi, 20 min
Distillation Contest - Results and Recap // Aris, 8 min
AGI-level reasoner will appear sooner than an agent; what the humanity will do with this reasoner is critical // Roman Leventov, 1 min
AI ethics vs AI alignment // Wei_Dai, 1 min
Finding Skeletons on Rashomon Ridge // David Udell, 8 min
How transparency changed over time // ViktoriaMalyasova, 8 min
Alignment being impossible might be better than it being really difficult // martinsq, 2 min
[ASoT] Humans Reflecting on HRH // leogao, 2 min
Incoherence of unbounded selfishness // emmab, 1 min
Comparing Four Approaches to Inner Alignment // Lucas Teixeira, 11 min
Seeking beta readers who are ignorant of biology but knowledgeable about AI safety // Holly_Elmore, 1 min
How optimistic should we be about AI figuring out how to interpret itself? // oh54321, 1 min
Quantum Advantage in Learning from Experiments // dennis-towne, 1 min
Active Inference as a formalisation of instrumental convergence // Roman Leventov, 3 min
Defining Optimization in a Deeper Way Part 4 // Jemist, 6 min
How much should we worry about mesa-optimization challenges? // sudo, 2 min
Information theoretic model analysis may not lend much insight, but we may have been doing them wrong! // D0TheMath, 12 min
Does agent foundations cover all future ML systems? // Jonas Hallgren, 1 min
ELK And The Problem Of Truthful AI // Scott Alexander, 23 min
# Meta-ethics
Moral strategies at different capability levels // ricraz, 6 min
# Anthropic
Cook’s Critique of Our Earliness Argument // Robin Hanson, 4 min
# Decision theory
Unifying Bargaining Notions (1/2) // Diffractor, 20 min
«Boundaries», Part 1: a key missing concept from utility theory // Andrew_Critch, 8 min
The Reader's Guide to Optimal Monetary Policy // ege-erdil, 17 min
The Most Important Century: The Animation // Writer, 24 min
Unifying Bargaining Notions (2/2) // Diffractor, 29 min
«Boundaries» Sequence (Index Post) // Andrew_Critch, 1 min
My Bitcoin Thesis @2022 - Part 1 // aysajan, 15 min
Utility functions and probabilities are entangled // thomas-kwa, 1 min
Protectionism in One Country: How Industrial Policy Worked in Canada // Davis Kedrosky, 20 min
Who Should Be Our “Adults”? // Robin Hanson, 6 min
# Math and CS
Eavesdropping on Aliens: A Data Decoding Challenge // anonymousaisafety, 3 min
The generalized Sierpinski-Mazurkiewicz theorem. // donald-hobson, 2 min
# Books
Your Book Review: Viral // Scott Alexander, 26 min
Not Sickening Enough // Josh Mitteldorf, 19 min
# Community
Opening Session Tips & Advice // CFAR 2017, 17 min
For Better Commenting, Stop Out Loud // AllAmericanBreakfast, 1 min
# Misc
Technocracy and the Space Age // jasoncrawford, 3 min
Drexler’s Nanotech Forecast // PeterMcCluskey, 4 min
Mistakes as agency // pchvykov, 4 min
Relationship between subjective experience and intelligence? // Q Home, 11 min
“Fanatical” Longtermists: Why is Pascal’s Wager wrong? // yitz, 1 min
How Promising is Theoretical Research on Rationality? Seeking Career Advice // Aspirant223, 3 min
Beware Upward Reference Classes // Robin Hanson, 3 min
Hating On Personal Equity // Robin Hanson, 3 min
Beware Sacred Cows // Robin Hanson, 4 min
# Podcasts
166 – Jugaad Ethics // The Bayesian Conspiracy, 102 min
Currents 067: Zak Stein on Ending Nihilistic Design // The Jim Rutt Show, 66 min
# Videos of the week
Why TeamSeas Doesn't Work: Their Interceptors // Simon Clark, 21 min