Rational Newsletter
Issue #221: Full List
17 July, 2022
An analysis estimates that about one-fifth of adults are childfree and finds that most chose not to have children early in life. // www.nature.com
# Coronavirus
Covid 7/14/22: BA.2.75 Plus Tax // Zvi, 10 min
# Instrumental
How do AI timelines affect how you live your life? // Quadratic Reciprocity, 1 min Favorite
Resolve Cycles // CFAR 2017, 12 min
Comfort Zone Exploration // CFAR 2017, 14 min
Any tips for eliciting one's own latent knowledge? // MSRayne, 1 min
Taste & Shaping // CFAR 2017, 19 min
Systemization // CFAR 2017, 14 min
Upcoming heatwave: advice // stavros, 3 min
Wacky, risky, anti-inductive intelligence-enhancement methods? // NicholasKross, 1 min
How do you concisely communicate & navigate the politics / culture at your job working at a large corporation or institution? // Eh_Yo_Lexa, 1 min
To-do waves // pawel-sysiak, 4 min
How Can I Maximize My Happiness? // Matt Goldwater, 9 min
# Epistemic
Don't use 'infohazard' for collectively destructive info // Eliezer_Yudkowsky, 1 min
Rainmaking // WalterL, 1 min
Mosaic and Palimpsests: Two Shapes of Research // adamShimi, 12 min
Straw-Steelmanning // chrisvm, 1 min
Everyone is an Imposter // tharin, 11 min
Inward and outward steelmanning // Q Home, 21 min
Hiding Motives From Yourself // Robin Hanson, 3 min
# AI
Humans provide an untapped wealth of evidence about alignment // TurnTrout, 11 min
A note about differential technological development // So8res, 7 min
Safety Implications of LeCun's path to machine intelligence // ivan-vendrov, 6 min
Circumventing interpretability: How to defeat mind-readers // Lee_Sharkey, 43 min
The Alignment Problem // lsusr, 3 min
Response to Blake Richards: AGI, generality, alignment, & loss functions // steve2152, 21 min
Slowing down AI progress is an underexplored alignment strategy // Norman Borlaug, 6 min
Deep learning curriculum for large language model alignment // Jacob_Hilton, 1 min
Artificial Sandwiching: When can we test scalable alignment protocols without humans? // sbowman, 5 min
Checksum Sensor Alignment // lsusr, 1 min
Peter Singer's first published piece on AI // yatangle, 1 min
Which AI Safety research agendas are the most promising? // Chris_Leong
MIRI Conversations: Technology Forecasting & Gradualism (Distillation) // TheMcDouglas, 23 min
Goal Alignment Is Robust To the Sharp Left Turn // Thane Ruthenis, 4 min
Acceptability Verification: A Research Agenda // David Udell, 1 min
Alignment as Game Design // DarkSym, 2 min
QNR Prospects // PeterMcCluskey, 11 min
How to impress students with recent advances in ML? // charbel-raphael-segerie, 1 min
Proposed Orthogonality Theses #2-5 // rjbg, 2 min
What is wrong with this approach to corrigibility? // rafael-cosman-1, 1 min
Musings on the Human Objective Function // michael-soareverix, 3 min
John von Neumann on how to safely progress with technology // dalton-mabery, 1 min
The Easiest Solution to AI Alignment // michael-soareverix, 2 min
We are now at the point of deepfake job interviews // TrevorWiesinger, 1 min
# Meta-ethics
Comment on "Propositions Concerning Digital Minds and Society" // Zack_M_Davis, 9 min
Notes on Love // David_Gross, 41 min
# Longevity
Potato diet: A post mortem and an answer to SMTM's article // joy_void_joy, 19 min
# Anthropic
Cognitive Instability, Physicalism, and Free Will // dadadarren, 3 min
Space Econ HowTo // Robin Hanson, 3 min
# Decision theory
Immanuel Kant and the Decision Theory App Store // daniel-kokotajlo, 8 min
Making decisions using multiple worldviews // ricraz, 13 min
Risk Management from a Climber's Perspective // jorge-velez, 7 min
# Math and CS
Avoid the abbreviation "FLOPs" – use "FLOP" or "FLOP/s" instead // Daniel_Eth, 1 min
A time-invariant version of Laplace's rule // Jsevillamol, 23 min
Alien Message Contest: Solution // DaemonicSigil, 4 min
Hessian and Basin volume // Vivek, 5 min
A review of Nate Hilger's The Parent Trap // david-hugh-jones, 4 min
An attempt to break circularity in science // fryloysis, 1 min
# Books
Review of The Engines of Cognition // william-gasarch, 18 min
Book Review: Neal Stephenson’s “Termination Shock” // Tyler Simmons, 38 min
Highlights from the memoirs of Vannevar Bush // jasoncrawford, 15 min
Your Book Review: The Righteous Mind // Scott Alexander, 49 min
Book Review: The Man From The Future // Scott Alexander, 22 min
# EA
Criticism of EA Criticism Contest // Zvi, 37 min
Passing Up Pay // jkaufman, 6 min
Announcing Future Forum - Apply Now // daniel-wang, 4 min
# Community
A summary of every "Highlights from the Sequences" post // akash-wasil, 19 min
Why Portland // adamzerner, 11 min
# Misc
Marriage, the Giving What We Can Pledge, and the damage caused by vague public commitments // jeff-ladish, 7 min
My Opportunity Costs // abstractapplic, 3 min
Moneypumping Bryan Caplan's Belief in Free Will // Morpheus, 1 min
Why We Blame Victims // Robin Hanson, 3 min
Impact Markets: The Annoying Details // Scott Alexander, 34 min
# Podcasts
166 – Getting Lucky // The Bayesian Conspiracy, 111 min
YANSS 237 – How to bridge divides on wedge issues by revealing shared values and avoiding reactance
# Rational fiction
A story about a duplicitous API // john-cheng, 1 min
# Videos of the week
John Carmack: Doom, Quake, VR, AGI, Programming, Video Games, and Rockets | Lex Fridman Podcast #309 // Lex Fridman, 314 min