Rational
Issue #235: Full List
23 October 2022
# Instrumental
Popular Personal Financial Advice versus the Professors (James Choi, NBER) // aidan-fitzgerald, 3 min
My search for a reliable breakfast // [email protected], 3 min
How to Write Readable Posts // david-hartsough
Combatting perfectionism // [email protected], 3 min
# Epistemic
Wisdom Cannot Be Unzipped // Sable, 8 min
When apparently positive evidence can be negative evidence // cata, 1 min
Is the meaning of words chosen/interpreted to maximize correlations with other relevant queries? // tailcalled, 1 min
# AI
The heritability of human values: A behavior genetic critique of Shard Theory // geoffreymiller, 24 min
How To Make Prediction Markets Useful For Alignment Work // johnswentworth, 2 min
Scaling Laws for Reward Model Overoptimization // leogao, 1 min
Response to Katja Grace's AI x-risk counterarguments // ejenner, 18 min
They gave LLMs access to physics simulators // ryan_b, 1 min
An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers // neel-nanda-1, 13 min
Science of Deep Learning - a technical agenda // marius-hobbhahn, 5 min
A conversation about Katja's counterarguments to AI risk // matthew-barnett, 40 min
AI Research Program Prediction Markets // tailcalled, 1 min
Is GitHub Copilot in legal trouble? // tcelferact, 1 min
Should we push for requiring AI training data to be licensed? // ChristianKl, 1 min
Learning societal values from law as part of an AGI alignment strategy // john-nay, 65 min
AI Safety Ideas: An Open AI Safety Research Platform // esben-kran
Cooperators are more powerful than agents // ivan-vendrov, 4 min
The reward function is already how well you manipulate humans // Kerry, 2 min
What Does AI Alignment Success Look Like? // shminux, 1 min
Cruxes in Katja Grace's Counterarguments // azsantosk, 8 min
Hacker-AI and Digital Ghosts – Pre-AGI // Erland, 9 min
Distilled Representations Research Agenda // Hoagy, 9 min
Creating superintelligence without AGI // darustc4, 1 min
A pragmatic metric for Artificial General Intelligence // lorenzo-rex, 1 min
A framework and open questions for game theoretic shard modeling // D0TheMath, 4 min
AGI misalignment x-risk may be lower due to an overlooked goal specification technology // john-nay, 65 min
How easy is it to supervise processes vs outcomes? // sharmake-farah, 1 min
Where can I find solution to the exercises of AGISF? // charbel-raphael-segerie, 1 min
When trying to define general intelligence is ability to achieve goals the best metric? // jmh, 1 min
Trajectories to 2036 // ukc10014, 22 min
Infinite Possibility Space and the Shutdown Problem // magfrump, 3 min
Metaculus is building a team dedicated to AI forecasting // ChristianWilliams
Is GPT-N bounded by human capacities? No. // strawberry calm, 2 min
Simple question about corrigibility and values in AI. // jmh, 1 min
# Longevity
Luck based medicine: my resentful story of becoming a medical miracle // pktechgirl, 15 min
Designing a Methylation Clock that Reliably Evaluates Anti-aging Interventions // Josh Mitteldorf, 11 min
# Anthropic
How to Take Over the Universe (in Three Easy Steps) // Writer, 14 min
What is Consciousness? // belkarx, 4 min
# Decision theory
Decision theory does not imply that we get to have nice things // So8res, 34 min
Plans Are Predictions, Not Optimization Targets // johnswentworth, 5 min
Notes on "Can you control the past" // So8res, 26 min
Legal Brief: Plurality Voting is Unconstitutional // ctrout, 13 min
Rough Sketch for Product to Enhance Citizen Participation in Politics // rodeo_flagellum, 1 min
Maximal lotteries for value learning // ViktoriaMalyasova, 6 min
New Tax Career Agent Test // Robin Hanson, 4 min
Testing Tax Career Agents // Robin Hanson, 2 min
# Math and CS
Open Problem in Voting Theory // Scott Garrabrant, 7 min
Maximal Lottery-Lotteries // Scott Garrabrant, 4 min
# Books
Book Review: Rhythms Of The Brain // Scott Alexander, 15 min
Fossil Future // Robin Hanson, 2 min
# Relationships
A scheme to become sexier // rockthecasbah, 3 min
# Community
aisafety.community - A living document of AI safety communities // zeshen, 1 min
# Culture war
The harms you don't see // ViktoriaMalyasova, 12 min
# Fun
Another Bay Area House Party // Scott Alexander, 15 min
# Misc
Age changes what you care about // Dentin, 2 min
Untapped Potential at 13-18 // belkarx, 1 min
The importance of studying subjective experience // Q Home, 8 min
Crypto loves impact markets: Notes from Schelling Point Bogotá // wearsshoes
Moorean Statements // David Udell, 1 min
Significance of the Language of Thought Hypothesis? // DrFlaggstaff, 1 min
# Podcasts
173 – Oh Lawd, Strong AI is Comin’ // The Bayesian Conspiracy, 124 min
Currents 072: Ben Goertzel on Viable Paths to True AGI // The Jim Rutt Show, 110 min
# Videos of the week
[LEAKED] Google’s new AI is absolutely TERRIFYING. // Jake Tran, 16 min