rational
Issue #211: Full List
17 April 2022

# Instrumental
Obscure Pregnancy Interventions: Much More Than You Wanted To Know // Scott Alexander, 41 min (Favorite)
How I use Anki: expanding the scope of SRS // TheMcDouglas, 22 min
Useful Vices for Wicked Problems // HoldenKarnofsky, 20 min
Time-Time Tradeoffs // akash-wasil, 3 min
US Taxes: Adjust Withholding When Donating? // jkaufman, 1 min
What is your advice for elder care, particularly taking care of dementia patients? // JohannWolfgang, 1 min
# Epistemic
Epistemic Slipperiness // Raemon, 8 min
The Cage of the Language // sustrik, 2 min
Features that make a report especially helpful to me // lukeprog, 3 min
Summary: "How to Do Research" by OSP's Red // pablo-repetto-1, 3 min
Is there an equivalent of the CDF for grading predictions? // Optimization Process, 1 min
# AI
Convince me that humanity is as doomed by AGI as Yudkowsky et al. seem to believe // yitz, 2 min
A Quick Guide to Confronting Doom // Ruby, 3 min
Takeoff speeds have a huge effect on what it means to work on AI x-risk // Buck, 2 min
Finally Entering Alignment // ulisse-mini, 1 min
The Regulatory Option: A response to near 0% survival odds // Matthew Lowenstein, 7 min
We should stop being so confident that AI coordination is unlikely // TrevorWiesinger, 2 min
Goodhart's Law Causal Diagrams // JustinShovelain, 7 min
Rationalist Should Win. Not Dying with Dignity and Funding WBE. // CitizenTen, 5 min
Convincing People of Alignment with Street Epistemology // elriggs, 3 min
Another list of theories of impact for interpretability // beth-barnes, 5 min
A Small Negative Result on Debate // sbowman, 1 min
Is technical AI alignment research a net positive? // cranberry_bear, 2 min
Clippy's modest proposal // Daphne_W, 11 min
“Fragility of Value” vs. LLMs // not-relevant, 1 min
Design, Implement and Verify // rwallace, 4 min
What more compute does for brain-like models: response to Rohin // nathan-helm-burger, 14 min
Three questions about mesa-optimizers // UnexpectedValues, 3 min
What can people not smart/technical enough for AI research/AI risk work do to reduce AI-risk/maximize AI safety? (which is most people?) // alex-k-chen, 3 min
Worse than an unaligned AGI // shminux, 1 min
Reward model hacking as a challenge for reward learning // ejenner, 10 min
Is it time to start thinking about what AI Friendliness means? // ZT5, 4 min
An AI-in-a-box success model // azsantosk, 11 min
How can I determine that Elicit is not some weak AGI's attempt at taking over the world? // lucie-philippon, 1 min
Is Fisherian Runaway Gradient Hacking? // ryankidd44, 4 min
Could we set a resolution/stopper for the upper bound of the utility function of an AI? // FinalFormal2, 1 min
Exploring toy neural nets under node removal. Section 1. // donald-hobson, 9 min
What's a good probability distribution family (e.g. "log-normal") to use for AGI timelines? // capybaralet
A predictor wants to be a consequentialist // Lauro Langosco, 5 min
The Peerless // carado-1, 1 min
Does non-access to outputs prevent recursive self-improvement? // Gunnar_Zarncke, 1 min
Unchangeable Code possible? // AntonTimmer, 1 min
Deceptively Aligned Mesa-Optimizers: It's Not Funny If I Have To Explain It // Scott Alexander, 13 min
# Longevity
Is partial iPSC reprogramming to rejuvenate human cells a big deal? // blackstampede, 1 min
# Anthropic
Genetic Enhancement: a Strategy for Long(ish) AGI Timeline Worlds // kman, 3 min
What is the most efficient way to create more worlds in the many worlds interpretation of quantum mechanics? // seank, 1 min
What do you think will most probably happen to our consciousness when our simulation ends? // richard-ford, 1 min
Hello Alien Polls // Robin Hanson, 1 min
# Decision theory
The Efficient LessWrong Hypothesis - Stock Investing Competition // ViktorThink, 2 min
# Books
The Amish // PeterMcCluskey, 8 min
Review: Structure and Interpretation of Computer Programs // LRudL, 10 min
# Community
Editing Advice for LessWrong Users // JustisMills, 7 min
Does the rationalist community have a membership funnel? // Alex_Altair, 1 min
# Culture war
Ukraine Post #10: Next Phase // Zvi, 17 min
# Misc
Emotionally Confronting a Probably-Doomed World: Against Motivation Via Dignity Points // TurnTrout, 10 min
A Brief Excursion Into Molecular Neuroscience // jan-2, 21 min
Rambling thoughts on having multiple selves // cranberry_bear, 3 min
The Accuracy of Authorities // Robin Hanson, 3 min
# Rational fiction
Lies Told To Children // Eliezer_Yudkowsky, 8 min
How dath ilan coordinates around solving alignment // thomas-kwa, 6 min
Make a Movie Showing Alignment Failures // elriggs, 2 min
Post-history is written by the martyrs // Veedrac, 22 min
The Platonist’s Dilemma: A Remix on the Prisoner’s. // james-camacho, 6 min
# Videos of the week
Grimes: Music, AI, and the Future of Humanity | Lex Fridman Podcast #281 // Lex Fridman, 124 min