rational
Issue #222: Full List
24 July, 2022
# Instrumental
Changing the world through slack & hobbies // steve2152, 12 min
On Akrasia, Habits and Reward Maximization // Aiyen, 7 min
YouTubeTV and Spoilers // Zvi, 9 min
# Epistemic
Curating "The Epistemic Sequences" (list v.0.1) // Andrew_Critch, 7 min
Internal Double Crux // CFAR 2017, 14 min
Criticism Of Criticism Of Criticism // Scott Alexander, 11 min
# AI
Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover // ajeya-cotra, 101 min
Connor Leahy on Dying with Dignity, EleutherAI and Conjecture // mtrazzi, 17 min
Examples of AI Increasing AI Progress // ThomasWoodside, 1 min
Which values are stable under ontology shifts? // ricraz, 3 min
How to Diversify Conceptual Alignment: the Model Behind Refine // adamShimi, 9 min
Help ARC evaluate capabilities of current language models // beth-barnes, 1 min
Conditioning Generative Models for Alignment // Jozdien, 25 min
Robustness to Scaling Down: More Important Than I Thought // adamShimi, 3 min
Making DALL-E Count // AllAmericanBreakfast, 5 min
Machine Learning Model Sizes and the Parameter Gap [abridged] // pvs, 1 min
How much to optimize for the short-timelines scenario? // SoerenMind, 1 min
Four questions I ask AI safety researchers // akash-wasil, 1 min
Which singularity schools plus the no singularity school was right? // sharmake-farah, 11 min
Pitfalls with Proofs // scasper, 10 min
Forecasting ML Benchmarks in 2023 // jsteinhardt, 14 min
Our Existing Solutions to AGI Alignment (semi-safe) // michael-soareverix, 4 min
At what point will we know if Eliezer’s predictions are right or wrong? // anonymous123456, 1 min
Why you might expect homogeneous take-off: evidence from ML research // inwaves, 12 min
A daily routine I do for my AI safety research work // scasper, 1 min
Training goals for large language models // Johannes_Treutlein, 23 min
Enlightenment Values in a Vulnerable World // maxwell-tabarrok, 37 min
A Critique of AI Alignment Pessimism // ExCeph, 11 min
What Environment Properties Select Agents For World-Modeling? // Thane Ruthenis, 16 min
Conditioning Generative Models with Restrictions // adam-jermyn, 10 min
Abram Demski's ELK thoughts and proposal - distillation // Rubi, 19 min
Quantilizers and Generative Models // adam-jermyn, 4 min
Why I Think Abrupt AI Takeoff // lincolnquirk, 1 min
Symbolic distillation, Diffusion, Entropy, Replicators, Agents, oh my (a mid-low quality thinking out loud post) // lahwran, 6 min
How Interpretability can be Impactful // Connall Garrod, 43 min
Reward models can act like they are deceptively aligned // joshua-clymer, 1 min
AI Hiroshima (Does A Vivid Example Of Destruction Forestall Apocalypse?) // Sable, 2 min
Trying out Prompt Engineering on TruthfulQA // megan-kinniment, 8 min
Defining Optimization in a Deeper Way Part 3 // Jemist, 2 min
AI Safety Cheatsheet / Quick Reference // zohar-jackson, 1 min
Countering arguments against working on AI safety // rauno-arike, 8 min
Bounded complexity of solving ELK and its implications // Rubi, 21 min
Why AGI Timeline Research/Discourse Might Be Overrated // sharmake-farah, 1 min
# Longevity
Cognitive Risks of Adolescent Binge Drinking // pktechgirl, 12 min
What is in E5? Harold Katcher’s patent // Josh Mitteldorf, 3 min
# Decision theory
Don't take the organizational chart literally // lc, 5 min
# Books
Wyclif's Dust: the missing chapter // david-hugh-jones, 4 min
Your Book Review: The Society Of The Spectacle // Scott Alexander, 38 min
# EA
A Bias Against Altruism // conor-sullivan, 2 min
Spending Update 2022 // jkaufman, 3 min
# Relationships
For What Do We Want To Be Wanted? // Robin Hanson, 4 min
# Community
Personal forecasting retrospective: 2020-2022 // elifland, 9 min
Easy guide for running a local Rationality meetup // nikita-sokolsky, 7 min
# Culture war
Culture wars in riddle format // Elmer of Malmesbury, 3 min
Is Gas Green? // ChristianKl, 1 min
Why are politicians polarized? // ErnestScribbler, 9 min
Objectification As Emotional Labor // Robin Hanson, 1 min
# Misc
Sexual Abuse attitudes might be infohazardous // Pseudonymous Otter, 1 min
Addendum: A non-magical explanation of Jeffrey Epstein // lc, 12 min
Eating Boogers // George3d6, 7 min
How the ---- did Feynman Get Here !? // George3d6, 4 min
Marburg Virus Pandemic Prediction Checklist // AllAmericanBreakfast, 4 min
What are the simplest questions in applied rationality where you don't know the answer to? // ChristianKl, 1 min
Are Intelligence and Generality Orthogonal? // cubefox, 1 min
Getting Unstuck on Counterfactuals // Chris_Leong, 2 min
# Podcasts
#134 - Ian Morris on what big picture history teaches us // 221 min
Currents 066: Matthew Pirkowski on Emergence in Possibility Space // The Jim Rutt Show, 67 min
c40: Script Draft, beats 10-13 // Constellation, 41 min
EP 161 Greg Thomas on Untangling the Gordian Knot of Race // The Jim Rutt Show, 81 min
# Videos of the week
Is Civilization on the Brink of Collapse? // Kurzgesagt – In a Nutshell, 11 min