rational
Issue #250: Full List
5 February, 2023
Anomalous tokens: a mysterious failure mode for GPT // www.lesswrong.com
# Coronavirus
Covid 2/2/23: The Emergency Ends on 5/11 // Zvi, 8 min
Whatever their arguments, Covid vaccine sceptics will probably never convince me // contrarianbrit, 3 min
Response To Alexandros Contra Me On Ivermectin // Scott Alexander, 49 min
# Instrumental
Focus on the places where you feel shocked everyone's dropping the ball // So8res, 4 min Favorite
How Likely is Losing a Google Account? // jkaufman, 3 min
Nice Clothes are Good, Actually // gworley, 4 min
Tools for finding information on the internet // r, 1 min
Small Talk is Good, Actually // gworley, 4 min
The Future of Structured Self Improvement // Raven, 1 min
Why and How to Graduate Early [U.S.] // Tego, 10 min
Don't Judge a Tool by its Average Output // silentbob, 4 min
No boats without a lake // logan-kieller, 3 min
# Epistemic
Simulacra Levels Summary // Zvi, 8 min
Aiming for Convergence Is Like Discouraging Betting // Zack_M_Davis, 14 min
What fact that you know is true but most people aren't ready to accept it? // lorenzo-rex, 1 min
Saying things because they sound good // adamzerner, 2 min
Takeaways from calibration training // jarviniemi, 4 min
# AI
SolidGoldMagikarp (plus, prompt generation) // jessica-cooper, 14 min
What I mean by "alignment is in large part about making cognition aimable at all" // So8res, 2 min
Why I hate the "accident vs. misuse" AI x-risk dichotomy (quick thoughts on "structural risk") // capybaralet, 1 min
You are probably not a good alignment researcher, and other blatant lies // zrkrlc, 6 min
Compendium of problems with RLHF // charbel-raphael-segerie, 12 min
Inner Misalignment in "Simulator" LLMs // adam-scherlis, 5 min
Against Boltzmann mesaoptimizers // porby, 5 min
[Linkpost] Google invested $300M in Anthropic in late 2022 // akash-wasil, 1 min
Research agenda: Formalizing abstractions of computations // ejenner, 39 min
Path-Dependence in ChatGPT's Political Outputs // lsusr, 4 min
More findings on Memorization and double descent // marius-hobbhahn, 22 min
Heritability, Behaviorism, and Within-Lifetime RL // steve2152, 5 min
Mechanistic Interpretability Quickstart Guide // neel-nanda-1, 7 min
Product safety is a poor model for AI governance // Grothor, 12 min
Taboo P(doom) // NathanBarnard, 1 min
No Really, Attention is ALL You Need - Attention can do feedforward networks // Robert_AIZI, 7 min
Language Models can be Utility-Maximising Agents // Raymond D, 2 min
Call for submissions: “(In)human Values and Artificial Agency”, ALIFE 2023 // lahwran, 1 min
formal alignment: what it is, and some proposals // carado-1, 1 min
Normative vs Descriptive Models of Agency // mattmacdermott, 5 min
“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers // Vael Gates
Criticism of the main framework in AI alignment // Michele Campolo, 8 min
Trends in the dollar training cost of machine learning systems // ben-cottier, 4 min
AI Safety Arguments: An Interactive Guide // Lukas T, 3 min
Abstraction As Symmetry and Other Thoughts // Numendil, 2 min
The effect of horizon length on scaling laws // Jacob_Hilton, 1 min
Structure, creativity, and novelty // TsviBT, 8 min
Interviews with 97 AI Researchers: Quantitative Analysis // msherm, 7 min
Medical Image Registration: The obscure field where Deep Mesaoptimizers are already at the top of the benchmarks. (post + colab notebook) // hastings-greer, 4 min
What is the ground reality of countries taking steps to recalibrate AI development towards Alignment first? // Nebuch, 3 min
# Longevity
Exercise is Good, Actually // gworley, 3 min
Andrew Huberman on How to Optimize Sleep // leon-lang, 6 min
Can we “cure” cancer? // jasoncrawford, 2 min
Schizophrenia as a deficiency in long-range cortex-to-cortex communication // steve2152, 12 min
How can I help inflammation-based nerve damage be temporary? // Optimization Process, 1 min
Managing Your Health by Analyzing Your Personal Health and Fitness Data // cwikman, 18 min
# Decision theory
Beware of Fake Alternatives // silentbob, 5 min
The Energy Requirements and Feasibility of Off-World Mining // clans, 9 min
How Planning Helps // Robin Hanson, 2 min
# Math and CS
2+2=π√2+n // logan-zoellner, 1 min
# EA
EA novel published on Amazon // timothy-underwood-1
# Community
I don't think MIRI "gave up" // Raemon, 5 min
2022 Unofficial LessWrong General Census // Screwtape, 2 min
Voting Results for the 2021 Review // Raemon, 5 min
Retrospective on the AI Safety Field Building Hub // Vael Gates
Jordan Peterson: Guru/Villain // Bryan Frances, 11 min
Epoch Impact Report 2022 // Jsevillamol
# Fun
Fucking Goddamn Basics of Rationalist Discourse // BrienneYudkowsky, 1 min
# Misc
Why Is Everyone So Boring? // Robin Hanson, 2 min
Mostly Skeptical Thoughts On The Chatbot Propaganda Apocalypse // Scott Alexander, 11 min
Religion Can Divert Sacred Energy // Robin Hanson, 2 min
# Podcasts
AXRP Episode 19 - Mechanistic Interpretability with Neel Nanda // DanielFilan, 144 min
EP 174 Fred Beuttler and Mark Stahlman on Trivium University // The Jim Rutt Show, 91 min
Currents 081: Layman Pascal Interviews Jim Rutt on Twitter as Collective Intelligence // The Jim Rutt Show, 69 min
c47: Script Draft 1, Beats 32-36 // Constellation, 45 min
c46: Script Draft 1, Beats 27-31 // Constellation, 54 min
Currents 080: Joe Edelman and Ellie Hain on Rebuilding Meaning // The Jim Rutt Show, 80 min