rational
Issue #233: Full List
9 October, 2022
# Coronavirus
Covid 10/6/22: Overreactions Aplenty // Zvi, 38 min
Ivermectin: Much Less Than You Needed To Know // George3d6, 1 min
# Instrumental
A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox // henrik-karlsson, 13 min
Consider your appetite for disagreements // adamzerner, 7 min
Sleep Training // jkaufman, 2 min
Finding Great Tutors // ulisse-mini, 1 min
Will you let your kid play football? // 5hout, 2 min
Deliberate practice for research? // Alex_Altair, 1 min
Baby Monitor with Delay // jkaufman, 1 min
Good Bets and Internal Resources // Raymond D, 4 min
Does Google still hire people via their foobar challenge? // Algon, 1 min
# Epistemic
The "you-can-just" alarm // Emrik North, 1 min
Truth seeking is motivated cognition // gworley, 3 min
What makes a probability question "well-defined"? // noah-topper, 8 min
Self-defeating conspiracy theorists and their theories // M. Y. Zuo, 3 min
# AI
Warning Shots Probably Wouldn't Change The Picture Much // So8res, 3 min
Don't leave your fingerprints on the future // So8res, 6 min
AI Timelines via Cumulative Optimization Power: Less Long, More Short // jacob_cannell, 30 min
The Teacup Test // lsusr, 2 min
Polysemanticity and Capacity in Neural Networks // Buck, 3 min
More Recent Progress in the Theory of Neural Networks // jylin04, 4 min
A shot at the diamond-alignment problem // TurnTrout, 24 min
What does it mean for an AGI to be 'safe'? // So8res, 3 min
Paper+Summary: OMNIGROK: GROKKING BEYOND ALGORITHMIC DATA // marius-hobbhahn, 1 min
More examples of goal misgeneralization // rohinmshah, 2 min
Humans aren't fitness maximizers // So8res, 6 min
Paper: Large Language Models Can Self-improve [Linkpost] // Evan R. Murphy, 1 min
Smoke without fire is scary // adam-jermyn, 4 min
my current outlook on AI risk mitigation // carado-1, 13 min
Alignment Might Never Be Solved, By Humans or AI // interstice, 4 min
A review of the Bio-Anchors report // jylin04, 1 min
How are you dealing with ontology identification? // ejenner, 4 min
Four usages of "loss" in AI // TurnTrout, 6 min
CHAI, Assistance Games, And Fully-Updated Deference [Scott Alexander] // brglnd, 21 min
How many GPUs does NVIDIA make? // leogao, 1 min
No free lunch theorem is irrelevant // Dmitry Savishchev, 1 min
confusion about alignment requirements // carado-1, 3 min
Is there a culture overhang? // aleksi-liimatainen, 1 min
The probability that Artificial General Intelligence will be developed by 2043 is Zero // cveres
Tracking Compute Stocks and Flows: Case Studies? // Cullen_OKeefe, 1 min
[Linkpost] "Blueprint for an AI Bill of Rights" - Office of Science and Technology Policy, USA (2022) // rodeo_flagellum, 2 min
If you want to learn technical AI safety, here's a list of AI safety courses, reading lists, and resources // ea247, 1 min
The Lebowski Theorem — Charitable Reads of Anti-AGI-X-Risk Arguments, Part 2 // sstich, 8 min
Any further work on AI Safety Success Stories? // Krieger, 1 min
Against the weirdness heuristic // ea-1, 2 min
Toy alignment problem: Social Network KPI design // qbolec, 1 min
Analysing a 2036 Takeover Scenario // ukc10014, 43 min
linkpost: neuro-symbolic hybrid ai // nathan-helm-burger, 1 min
Charitable Reads of Anti-AGI-X-Risk Arguments, Part 1 // sstich, 3 min
Generative, Episodic Objectives for Safe AI // Michael Glass, 9 min
Visualizing Learned Representations of Rice Disease // muhia_bee, 4 min
My tentative interpretability research agenda - topology matching. // maxwell-clarke, 4 min
# Meta-ethics
Against Arguments For Exploitation // blackstampede, 8 min
The Biggest Problem with Deontology: The Aggregation Problem // arjun-panickssery, 3 min
Reflection Mechanisms as an Alignment target: A follow-up survey // marius-hobbhahn, 8 min
# Longevity
How Trustworthy Are Supplements? // Scott Alexander, 27 min Favorite
# Anthropic
How does anthropic reasoning and illusionism/eliminativism interact? // Shiroe, 1 min
# Decision theory
Analysis: US restricts GPU sales to China // Aidan O'Gara, 5 min
Adversarial vs Collaborative Contexts // jkaufman, 1 min
Deprecated: Some humans are fitness maximizers // DarkSym, 7 min
Do uncertainty/planning costs make convex hulls unrealistic? // edward-pierzchalski, 1 min
Easy fixing Voting // charbel-raphael-segerie
Required reading for understanding the current nuclear standoff // TrevorWiesinger, 1 min
Some types of privileges need to remain maximally inaccessible - which seems to guarantee everlasting inequality. If so, is increasing inequality desirable? // M. Y. Zuo, 1 min
Divorcing Tax Career Agents // Robin Hanson, 3 min
# Math and CS
Paper: Discovering novel algorithms with AlphaTensor [Deepmind] // LawChan, 1 min
Boolean Primitives for Coupled Optimizers // paulbricman, 9 min
# Books
Notes on Notes on the Synthesis of Form // Vaniver, 7 min
American invention from the “heroic age” to the system-building era // jasoncrawford, 12 min
# Culture war
Why I think there's a one-in-six chance of an imminent global nuclear war // MaxTegmark, 4 min
The Village and the River Monsters... Or: Less Fighting, More Brainstorming // ExCeph, 10 min
# Misc
Calibrate - New Chrome Extension for hiding numbers so you can guess // cmessinger, 1 min
Research Deprioritizing External Communication // jkaufman, 9 min
Quick notes on “mirror neurons” // steve2152, 2 min
Signaling Guilt // Krieger, 1 min
Dependency Tree For The Development Of Plate Tectonics // pktechgirl, 4 min
Accrue Nuclear Dignity Points // DonyChristie, 6 min
Introducing the Basic Post-scarcity Map // 2 min
Statistics for objects with shared identities // Q Home, 4 min
Needed: World Suggestion Box // Robin Hanson, 2 min
A Columbian Exchange // Scott Alexander, 13 min
# Podcasts
Currents 070: Brian Chau on Propaganda & Populism // The Jim Rutt Show, 53 min
172 – Virtue Ethics // The Bayesian Conspiracy, 127 min
Currents 069: Bonnitta Roy and Euvie Ivanova on Collective Intimacy // The Jim Rutt Show, 45 min
# Rational fiction
The Shape of Things to Come // alexbeyman, 9 min
The Patent Clerk // alexbeyman, 4 min
The Beautiful Ones // alexbeyman, 52 min
Not Long Now // alexbeyman, 81 min
The Three Cardinal Sins // alexbeyman, 9 min
The Answer // alexbeyman, 5 min
# Videos of the week
How Gaming Can Be a Force for Good | Noah Raford | TED // TED, 14 min