Issue #216: Full List
12 June, 2022
# Instrumental
Entitlement as a major amplifier of unhappiness // VipulNaik, 10 min
Show LW: YodaTimer.com // adamzerner, 1 min
Website For Yoda Timers // adamzerner, 1 min
Where to Live for Happiness // ethanmorse, 31 min
What board games would you recommend? // yair-halberstadt, 1 min
Health & Lifestyle Interventions With Heavy-Tailed Outcomes? // MondSemmel, 1 min
What journaling prompts do you use? // ChristianKl, 1 min
# AI
AGI Ruin: A List of Lethalities // Eliezer_Yudkowsky, 37 min Favorite
AGI Safety FAQ / all-dumb-questions-allowed thread // alenglander, 4 min
Godzilla Strategies // johnswentworth, 3 min
Who models the models that model models? An exploration of GPT-3's in-context model fitting ability // Lovre, 10 min
AI Could Defeat All Of Us Combined // HoldenKarnofsky, 20 min
A descriptive, not prescriptive, overview of current AI Alignment Research // jan-2, 7 min
"Pivotal Acts" means something specific // Raemon, 2 min
why assume AGIs will optimize for fixed goals? // nostalgebraist, 5 min
Why I don't believe in doom // adrian-arellano-davin, 4 min
Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment? // P., 4 min
Pitching an Alignment Softball // mu_(negative), 12 min
How Do Selection Theorems Relate To Interpretability? // johnswentworth, 4 min
Summary of "AGI Ruin: A List of Lethalities" // stephen-mcaleese, 10 min
Tao, Kontsevich & others on HLAI in Math // interstice, 2 min
Poorly-Aimed Death Rays // Thane Ruthenis, 4 min
[linkpost] The final AI benchmark: BIG-bench // RomanS, 1 min
Epistemological Vigilance for Alignment // adamShimi, 13 min
How fast can we perform a forward pass? // jsteinhardt, 19 min
Eliciting Latent Knowledge (ELK) - Distillation/Summary // marius-hobbhahn, 25 min
Steganography and the CycleGAN - alignment failure case study // przemyslaw-czechowski, 4 min
Open Problems in AI X-Risk [PAIS #5] // dan-hendrycks, 43 min
Why agents are powerful // daniel-kokotajlo, 9 min
I No Longer Believe Intelligence to be "Magical" // DragonGod, 5 min
A plausible story about AI risk. // delesley-hutchins, 5 min
[Linkpost & Discussion] AI Trained on 4Chan Becomes ‘Hate Speech Machine’ [and outperforms GPT-3 on TruthfulQA Benchmark?!] // yitz, 2 min
Grokking “Forecasting TAI with biological anchors” // anson.ho, 15 min
How dangerous is human-level AI? // Alex_Altair, 9 min
If there was a millennium equivalent prize for AI alignment, what would the problems be? // yair-halberstadt, 1 min
Embodiment is Indispensable for AGI // p-g-keerthana-gopalakrishnan, 7 min
There's probably a tradeoff between AI capability and safety, and we should act like it // david-johnston, 1 min
Today in AI Risk History: The Terminator (1984 film) was released. // Impassionata, 1 min
Why do some people try to make AGI? // TekhneMakre, 3 min
Another plausible scenario of AI risk: AI builds military infrastructure while collaborating with humans, defects later. // avturchin, 1 min
Operationalizing two tasks in Gary Marcus’s AGI challenge // bill-benzon, 10 min
Confused Thoughts on AI Afterlife (seriously) // epirito, 1 min
ELK Proposal - Make the Reporter care about the Predictor’s beliefs // adam-jermyn, 6 min
AGI Safety Communications Initiative // ines, 1 min
You Only Get One Shot: an Intuition Pump for Embedded Agency // Oliver Sourbut, 2 min
Kolmogorov's AI Forecast // interstice, 1 min
Is AI Alignment Impossible? // Heighn, 1 min
Thoughts on Formalizing Composition // Frederik, 9 min
Could Patent-Trolling delay AI timelines? // pablo-repetto-1, 1 min
If no near-term alignment strategy, research should aim for the long-term // harsimony, 1 min
Transformer Research Questions from Stained Glass Windows // Stefan42, 2 min
DALL-E 2 - Unofficial Natural Language Image Editing, Art Critique Survey // bakztfuture
Thinking about Broad Classes of Utility-like Functions // Jemist, 5 min
Give the model a model-builder // adam-jermyn, 6 min
Noisy environment regulate utility maximizers // niclas-kupper, 9 min
Progress Report 6: get the tool working // nathan-helm-burger, 2 min
Miriam Yevick on why both symbols and networks are necessary for artificial minds // bill-benzon, 4 min
AGI Ruin: A List of Lethalities // Eliezer Yudkowsky, 37 min
Somewhat Contra Marcus On AI Scaling // Scott Alexander, 16 min
Six Dimensions of Operational Adequacy in AGI Projects // Eliezer Yudkowsky, 16 min
My Bet: AI Size Solves Flubs // Scott Alexander, 14 min
# Meta-ethics
Why it's bad to kill Grandma // dynomight, 9 min
# Longevity
The “mind-body vicious cycle” model of RSI & back pain // steve2152, 15 min
# Anthropics
Untypical SIA // avturchin, 2 min
Beware Cosmic Errors // Robin Hanson, 3 min
# Decision theory
Making stable, free nations as a hobby // blackstampede, 7 min
Why has no person / group ever taken over the world? // alenglander, 1 min
Optimization and Adequacy in Five Bullets // james.lucassen, 5 min
New cooperation mechanism - quadratic funding without a matching pool // Filip Sondej, 6 min
# Math and CS
Turning Some Inconsistent Preferences into Consistent Ones // niplav, 9 min
Expected Value vs. Expected Growth // tom-pollak, 1 min
# Books
Book Review: How the World Became Rich // Davis Kedrosky, 13 min
Your Book Review: The Dawn Of Everything // Scott Alexander, 38 min
# EA
Transcript of a Twitter Discussion on EA from June 2022 // Zvi, 1 min
# Community
Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" // AnnaSalamon, 20 min
Leaving Google, Joining the Nucleic Acid Observatory // jkaufman, 3 min
Why don't you introduce really impressive people you personally know to AI alignment (more often)? // Verden, 1 min
Russian x-risks newsletter May 2022 + short history of "methodologists" // avturchin, 1 min
A gaming group for rationality-aware people // dhatas, 1 min
# Culture war
Staying Split: Sabatini and Social Justice // Duncan_Sabien, 25 min
Steelmanning Marxism/Communism // Suh_Prance_Alot, 1 min
Against "There Are Two X-Wing Parties" // Scott Alexander, 2 min
Which Party Has Gotten More Extreme Faster? // Scott Alexander, 10 min
# Misc
Stephen Wolfram's ideas are under-appreciated // Kenny, 1 min
Silly Online Rules // Gunnar_Zarncke, 1 min
Forestalling Atmospheric Ignition // conor-sullivan, 1 min
How Does Cognitive Performance Translate to Real World Capability? // DragonGod, 1 min
Some questions I've been pondering... // dalton-mabery, 1 min
Why Allow Line Cutting? // Robin Hanson, 2 min
# Podcasts
EP 158 Remzi Bajrami on Flow Currency // The Jim Rutt Show, 90 min
Currents 063: Jessica Flack on nth-Order Effects of the Russia-Ukraine War // The Jim Rutt Show, 65 min
# Rational fiction
The Mountain Troll // lsusr, 2 min
# Videos of the week
The Epic History of Artificial Intelligence // John Coogan, 16 min