Rational
Issue #219: Full List
3 July, 2022

# Instrumental
What Are You Tracking In Your Head? // johnswentworth, 4 min
Air Conditioner Repair // Zvi, 4 min
Seven ways to become unstoppably agentic // evie.cottrell, 10 min
My current take on Internal Family Systems “parts” // Kaj_Sotala, 4 min
A summary of every Replacing Guilt post // akash-wasil, 12 min
Kevin Kelly's "103 Bits of Advice," Expanded // dalton-mabery, 5 min
any good rationalist guides to nutrition / healthy eating? // Ben A, 1 min
My cognitive inertia cycle // MSRayne, 5 min
# Epistemic
Limits to Legibility // Jan_Kulveit, 5 min
The Track Record of Futurists Seems ... Fine // HoldenKarnofsky, 13 min
CFAR Handbook: Introduction // CFAR 2017, 1 min
Units of Exchange // CFAR 2017, 13 min
Forecasts are not enough // ege-erdil, 6 min
Metacognition in the Rat // Jacobian, 7 min
How to Navigate Evaluating Politicized Research? // Davis_Kingsley, 1 min
What is the contrast to counterfactual reasoning? // dominic-roser, 1 min
Examples of practical implications of Judea Pearl's Causality work // ChristianKl, 1 min
On viewquakes // dalton-mabery, 1 min
A Quick Ontology of Agreement // ravedon, 2 min
# AI
Safetywashing // adam_scholl, 1 min
GPT-3 Catching Fish in Morse Code // megan-kinniment, 7 min
Trends in GPU price-performance // marius-hobbhahn, 1 min
Naive Hypotheses on AI Alignment // DarkSym, 5 min
[Linkpost] Solving Quantitative Reasoning Problems with Language Models // yitz, 2 min
Will Capabilities Generalise More? // ramana-kumar, 5 min
Four reasons I find AI safety emotionally compelling // ea247, 4 min
Formal Philosophy and Alignment Possible Projects // Whispermute, 9 min
The Basics of AGI Policy (Flowchart) // TrevorWiesinger, 2 min
[Linkpost] Existential Risk Analysis in Empirical Research Papers // dan-hendrycks, 1 min
[Yann LeCun] A Path Towards Autonomous Machine Intelligence // DragonGod, 1 min
Scott Aaronson and Steven Pinker Debate AI Scaling // Liron, 1 min
Agenty AGI – How Tempting? // PeterMcCluskey, 6 min
Selection processes for subagents // ryankidd44, 11 min
What Is The True Name of Modularity? // TheMcDouglas, 15 min
Paper: Forecasting world events with neural nets // Owain_Evans, 4 min
What success looks like // marius-hobbhahn, 1 min
The Tree of Life: Stanford AI Alignment Theory of Change // gabe-mukobi, 17 min
Can We Align AI by Having It Learn Human Preferences? I’m Scared (summary of last third of Human Compatible) // apollonianblues, 7 min
Latent Adversarial Training // adam-jermyn, 6 min
Most Functions Have Undesirable Global Extrema // En Kepeig, 3 min
What about transhumans and beyond? // AlignmentMirror, 1 min
Yann LeCun, A Path Towards Autonomous Machine Intelligence [link] // bill-benzon, 1 min
Robin Hanson asks "Why Not Wait On AI Risk?" // Gunnar_Zarncke, 1 min
AXRP Episode 16 - Preparing for Debate AI with Geoffrey Irving // DanielFilan, 46 min
Quick survey on AI alignment resources // frances_lorenz, 1 min
Doom doubts - is inner alignment a likely problem? // Crissman, 1 min
Epistemic modesty and how I think about AI risk // alenglander, 5 min
AI safety university groups: a promising opportunity to reduce existential risk // michael-chen, 16 min
AGI alignment with what? // AlignmentMirror, 1 min
Gradient hacking: definitions and examples // ricraz, 5 min
Training Trace Priors and Speed Priors // adam-jermyn, 3 min
Could an AI Alignment Sandbox be useful? // michael-soareverix, 1 min
Is General Intelligence "Compact"? // DragonGod, 13 min
Correcting human error vs doing exactly what you're told - is there literature on this in context of general system design? // przemyslaw-czechowski, 1 min
Is General Intelligence "Simple"? // DragonGod, 13 min
Some alternative AI safety research projects // Michele Campolo, 3 min
Deliberation Everywhere: Simple Examples // Oliver Sourbut, 17 min
A Path Towards Autonomous Machine Intelligence // DragonGod, 1 min
# Meta-ethics
Deontological Evil // lsusr, 2 min
# Longevity
Lifespan of Harold Katcher’s Rats // Josh Mitteldorf, 6 min
# Anthropics
The table of different sampling assumptions in anthropics // avturchin, 10 min
A physicist's approach to Origins of Life // pchvykov, 19 min
# Decision theory
Failing to fix a dangerous intersection // alyssavance, 2 min
Abadarian Trades // David Udell, 2 min
How do poor countries get rich: some theories // NathanBarnard, 11 min
How should I talk about optimal but not subgame-optimal play? // elephantiskon, 3 min
# Math and CS
Five views of Bayes' Theorem // adam-scherlis, 1 min
One is (almost) normal in base π // adam-scherlis, 1 min
Defining Optimization in a Deeper Way Part 1 // Jemist, 3 min
# Books
What Diet Books Don't Teach: A book review and a request for more reading // conor-sullivan, 5 min
Your Book Review: The Internationalists // Scott Alexander, 36 min
# Relationships
Limerence Messes Up Your Rationality Real Bad, Yo // Raemon, 4 min
Are long-form dating profiles productive? // AABoyles, 1 min
# Community
Looking back on my alignment PhD // TurnTrout, 12 min
Who is this MSRayne person anyway? // MSRayne, 13 min
Do You Care Whether There Are "Successful" Rationalists? // Matt Goldwater, 8 min
# Culture war
Limits of Bodily Autonomy // jkaufman, 1 min
Why is so much political commentary misleading? // contrarianbrit, 6 min
# Misc
Contest: An Alien Message // DaemonicSigil, 1 min
Reflections on Living in "Guess Culture" // dalton-mabery, 3 min
Formalizing Deception // AtlasOfCharts, 6 min
Branding Report Professions // Robin Hanson, 4 min
What Caused The 2020 Homicide Spike? // Scott Alexander, 11 min
Selling Safaris // Robin Hanson, 2 min
# Podcasts
#133 - Max Tegmark on how a 'put-up-or-shut-up' resolution led him to work on AI and algorithmic news selection // 80,000 Hours Podcast, 177 min
Currents 065: Alexander Bard on Protopian Narratology // The Jim Rutt Show, 57 min
165 – DREAM: Dunbar Rules Everything Around Me // The Bayesian Conspiracy, 92 min
YANSS 236 – How Minds Change // You Are Not So Smart
# Videos of the week
Kurzgesagt – The Last Human (Youtube) // habryka4, 1 min
Adversarial Examples in Deep Learning // Simons Institute, 93 min