Issue #224: Full List
7 August 2022
# Instrumental
Six weeks doesn’t make a habit // lynettebye, 4 min
Perverse Independence Incentives // jkaufman, 1 min
Meditation course claims 65% enlightenment rate: my review // ea247, 17 min
My advice on finding your own path // alex-ray, 3 min
On akrasia: starting at the bottom // seecrow, 4 min
Flash Classes: Polaris, Seeking PCK, and Five-Second Versions // CFAR 2017, 9 min
Two Kids Crosswise // jkaufman, 1 min
Time-logging programs and/or spreadsheets (2022) // mikbp, 1 min
# Epistemic
Don't be a Maxi // cole-killian, 2 min
How does one recognize information and differentiate it from noise? // M. Y. Zuo, 1 min
Absurdity Bias, Neom Edition // Scott Alexander, 7 min
# AI
Two-year update on my personal AI timelines // ajeya-cotra, 19 min
chinchilla's wild implications // nostalgebraist, 14 min
What do ML researchers think about AI in 2022? // KatjaGrace, 2 min
Rant on Problem Factorization for Alignment // johnswentworth, 8 min
Externalized reasoning oversight: a research direction for language model alignment // tamera, 7 min
A Data limited future // donald-hobson, 2 min
The Pragmascope Idea // johnswentworth, 4 min
Transformer language models are doing something more general // Numendil, 2 min
Why I Am Skeptical of AI Regulation as an X-Risk Mitigation Strategy // alex-ray, 2 min
Convergence Towards World-Models: A Gears-Level Model // Thane Ruthenis, 17 min
Where are the red lines for AI? // Karl von Wendt, 7 min
Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination // alex-lintz, 14 min
What are the Red Flags for Neural Network Suffering? - Seeds of Science call for reviewers // rogersbacon, 1 min
Surprised by ELK report's counterexample to Debate, IDA // Evan R. Murphy, 6 min
Interpretability isn’t Free // joel-burget, 2 min
Would "Manhattan Project" style be beneficial or deleterious for AI Alignment? // Just Learning, 1 min
Some doubts about Non Superintelligent AIs // aditya-malik, 1 min
How likely do you think worse-than-extinction type fates to be? // span1, 1 min
Which intro-to-AI-risk text would you recommend to... // Sherrinford, 1 min
A Deceptively Simple Argument in favor of Problem Factorization // logan-zoellner, 1 min
What drives progress, theory or application? // brglnd, 1 min
Deontology and Tool AI // Nathan1123, 7 min
Precursor checking for deceptive alignment // evhub, 17 min
AI alignment: Would a lazy self-preservation instinct be sufficient? // BrainFrog, 1 min
Law-Following AI 4: Don't Rely on Vicarious Liability // Cullen_OKeefe, 3 min
My takeaways from the EA In-depth 4th week discussion: Animal Welfare // kriz-royce-tahimic, 1 min
# Anthropics
Slightly Against Underpopulation Worries // Scott Alexander, 10 min
# Decision theory
"Just hiring people" is sometimes still actually possible // lc, 7 min
Newcombness of the Dining Philosophers Problem // Nathan1123, 2 min
Bridging Expected Utility Maximization and Optimization // Whispermute, 17 min
# Math and CS
Announcing Squiggle: Early Access // ozziegooen, 7 min
Metaculus and medians // rossry, 5 min
# Books
Your Book Review: Exhaustion // Scott Alexander, 16 min
# Relationships
Anatomy of a Dating Document // squidious, 5 min
# Community
Running a Basic Meetup // Screwtape, 2 min
# Culture war
Conservatism is a rational response to epistemic uncertainty // contrarianbrit, 10 min
# Misc
Fiber arts, mysterious dodecahedrons, and waiting on “Eureka!” // eukaryote, 11 min
Clapping Lower // jkaufman, 1 min
The Falling Drill // Screwtape, 1 min
Wolfram Research v Cook // Kenny, 9 min
PredictIt is closing due to CFTC changing its mind // eigen, 1 min
Calibration Trivia // Screwtape, 3 min
Cambist Booking // Screwtape, 2 min
An attempt to understand the Complexity of Values // dalton-mabery, 5 min
A Word is Worth 1,000 Pictures // Kully, 2 min
# Videos of the week
DreamStudio AI (Stable Diffusion) FIRST LOOK and Guide - Stable Diffusion Full Release // MattVidPro, 24 min