rational
Issue #229: Full List
11 September, 2022 // View curated list
# Instrumental
Linkpost: GitHub Copilot productivity experiment // daniel-kokotajlo, 1 min
Solar Blackout Resistance // jkaufman, 3 min
Put Dirty Dishes in the Dishwasher // jkaufman, 1 min
How to Do Research. v1 // pablo-repetto-1, 47 min
Beta Readers are Great // HoldenKarnofsky, 1 min
Turn your flashcards into Art // Heye Groß, 1 min
# AI
Most People Start With The Same Few Bad Ideas // johnswentworth, 3 min
The shard theory of human values // quintin-pope, 31 min
Monitoring for deceptive alignment // evhub, 10 min
[An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] // capybaralet, 6 min
Samotsvety's AI risk forecasts // elifland, 4 min
Searching for Modularity in Large Language Models // Nicky, 16 min
Thoughts on AGI consciousness / sentience // steve2152, 7 min
Generators Of Disagreement With AI Alignment // George3d6, 11 min
Private alignment research sharing and coordination // porby, 5 min
AI alignment with humans... but with which humans? // geoffreymiller, 3 min
Framing AI Childhoods // David Udell, 5 min
No, human brains are not (much) more efficient than computers // jhoogland, 4 min
AI Governance Needs Technical Work // Mauricio, 10 min
Progress Report 7: making GPT go hurrdurr instead of brrrrrrr // nathan-helm-burger, 4 min
program searches // carado-1, 2 min
Oversight Leagues: The Training Game as a Feature // paulbricman, 12 min
Swap and Scale // LosPolloFowler, 1 min
What Should AI Owe To Us? Accountable and Aligned AI Systems via Contractualist AI Alignment // xuan, 29 min
All AGI safety questions welcome (especially basic ones) [~monthly thread] // ete, 3 min
Can "Reward Economics" solve AI Alignment? // Q Home, 22 min
AlexaTM - 20 Billion Parameter Model With Impressive Performance // ViktorThink, 1 min
AI-assisted list of ten concrete alignment things to do right now // lcmgcd, 4 min
Is training data going to be diluted by AI-generated content? // hannes-thurnherr, 1 min
It's (not) how you use it // ea-1, 2 min
How Josiah became an AI safety researcher // neil-crawford, 1 min
Gatekeeper Victory: AI Box Reflection // Double, 9 min
How To Know What the AI Knows - An ELK Distillation // Fabien, 6 min
Turning WhatsApp Chat Data into Prompt-Response Form for Fine-Tuning // hatta_afiq, 1 min
# Meta-ethics
The ethics of reclining airplane seats // braces, 1 min
Should you refrain from having children because of the risk posed by artificial intelligence? // Mientras, 1 min
Notes on Resolve // David_Gross, 39 min
# Anthropics
ethics and anthropics of homomorphically encrypted computations // carado-1, 3 min Favorite
# Decision theory
90% of anything should be bad (& the precision-recall tradeoff) // cartografie, 6 min
Understanding and avoiding value drift // TurnTrout, 7 min
Breaking Newcomb's Problem with Non-Halting states // Hivewired, 6 min
Unbounded utility functions and precommitment // MichaelStJules, 1 min
Sacred Distance Hides Motives // Robin Hanson, 4 min
Hail Industrial Organization // Robin Hanson, 3 min
# Math and CS
Prototyping in C // jkaufman, 2 min
Keeping Time in Epoch Seconds // gworley, 2 min
# EA
EA, Veganism and Negative Animal Utilitarianism // yair-halberstadt, 1 min
# Community
My emotional reaction to the current funding situation // sam-4, 6 min
Overton Gymnastics: An Exercise in Discomfort // DarkSym, 4 min
An unofficial "Highlights from the Sequences" tier list // akash-wasil, 5 min
Find out how utilitarian you are - a mega thread of philosophy polls // spencerg, 1 min
Impact Shares For Speculative Projects // pktechgirl, 8 min
Russian Food for Petrov Day // weft, 1 min
Community Building for Graduate Students: A Targeted Approach // neil-crawford, 3 min
# Fun
Rejected Early Drafts of Newcomb's Problem // zahmahkibo, 3 min
Galaxy Trucker Needs a New Second Half // jkaufman, 1 min
[Fun][Link] Alignment SMBC Comic // Gunnar_Zarncke, 1 min
# Misc
Let's Terraform West Texas // blackstampede, 5 min
What are you for? // lsusr, 1 min
First we shape our social graph; then it shapes us // henrik-karlsson, 9 min
Postmortem: Trying out for Manifold Markets // Milli, 3 min
Web4/Heaven - The Simulation // Dunning K., 1 min
(Link) I'm Missing a Chunk of My Brain // adrian-arellano-davin, 1 min
Schrödinger’s lottery or: Why you are going to live forever // Chase Dowdell, 4 min
In a lack of data, how should you weigh credences in theoretical physics's Theories of Everything, or TOEs? // sharmake-farah, 1 min
Pascal: The Greatness and Littleness of Man, A Thinking Reed // , 1 min
Interpreting Affordable Housing // jkaufman, 1 min
[Exploratory] Becoming more Agentic // johannes-c-mayer, 1 min
The Power (and limits?) of Chunking // NicholasKross, 1 min
# Podcasts
#137 – Andreas Mogensen on whether effective altruism is just for consequentialists // The 80,000 Hours Podcast, 141 min
170 – By George, The Rent Is Too Damn High! // The Bayesian Conspiracy, 118 min
# Rational fiction
A Game About AI Alignment (& Meta-Ethics): What Are the Must Haves? // JonathanErhardt, 1 min
# Videos of the week
[ML News] OpenAI's Whisper | Meta Reads Brain Waves | AI Wins Art Fair, Annoys Humans // Yannic Kilcher, 42 min