Issue #210: Full List
10 April, 2022

# Instrumental
Working Out in VR Really Works // yonatan-cale-1, 3 min
Giving calibrated time estimates can have social costs // Alex_Altair, 6 min
Why Take Care Of Your Health? // MondSemmel, 7 min
Becoming a Staff Engineer // gworley, 7 min
You get one story detail // gworley, 5 min
A method of writing content easily with little anxiety // jessica.liu.taylor, 3 min
Prioritise Tasks by Rating not Sorting // neel-nanda-1, 2 min
"Don't Get Mad, Get Curious" // Rubix, 2 min
Baby Sleep: Multiple Rooms // jkaufman, 1 min
Buy-in Before Randomization // jkaufman, 1 min
Duncan Sabien On Writing // lynettebye, 20 min
A simple guide to life // jasoncrawford, 1 min
My Superpower: OODA Loops // gworley, 4 min
Summary: "Internet Search tips" by Gwern Branwen // pablo-repetto-1
Control the Density of Novelty in Your Writing // En Kepeig, 3 min
How to openly maintain a single identity while keeping it private? // identity.key, 3 min
What advice do you have for someone struggling to detach their grim-o-meter? // Zorger74, 1 min
Contra: Avoiding Sore Arms // jkaufman, 2 min
Why learn to code? // rockthecasbah, 1 min
Setting the Brain's Difficulty-Anchor // johannes-c-mayer, 3 min
# Epistemic
Prompt Your Brain // En Kepeig, 2 min
A Word to the Wise is Sufficient because the Wise Know So Many Words // lsusr, 1 min
Convincing Your Brain That Humanity is Evil is Easy // johannes-c-mayer, 2 min
Predicting a global catastrophe: the Ukrainian model // RomanS, 2 min
Edge cases don't invalidate the rule // adam-selker, 2 min
Problem of Induction: What if Instants Were Independent // epirito, 3 min
# AI
Don't die with dignity; instead play to your outs // jeff-ladish, 6 min
Playing with DALL·E 2 // dave-orr, 6 min
Testing PaLM prompts on GPT3 // yitz, 10 min
What Would A Fight Between Humanity And AGI Look Like? // johnswentworth, 3 min
PaLM in "Extrapolating GPT-N performance" // Lanrian, 2 min
Productive Mistakes, Not Perfect Answers // adamShimi, 7 min
The case for Doing Something Else (if Alignment is doomed) // sil-ver, 2 min
Save Humanity! Breed Sapient Octopuses! // yair-halberstadt, 1 min
Supervise Process, not Outcomes // stuhlmueller, 12 min
What I Was Thinking About Before Alignment // johnswentworth, 5 min
Takeaways From 3 Years Working In Machine Learning // George3d6, 13 min
[Link] A minimal viable product for alignment // janleike, 1 min
What are some ways in which we can die with more dignity? // Chris_Leong, 1 min
Language Model Tools for Alignment Research // elriggs, 2 min
[Link] Why I’m excited about AI-assisted human feedback // janleike, 1 min
Why is Toby Ord's likelihood of human extinction due to AI so low? // ChristianKl, 1 min
Believable near-term AI disaster // Dagon, 2 min
Why Instrumental Goals are not a big AI Safety Problem // jpaulson, 3 min
My Transhuman Dream // johannes-c-mayer, 4 min
Strategic Considerations Regarding Autistic/Literal AI // Chris_Leong, 1 min
AI Governance across Slow/Fast Takeoff and Easy/Hard Alignment spectra // Davidmanheim, 3 min
Is GPT3 a Good Rationalist? - InstructGPT3 [2/2] // WayZ, 8 min
AI Alignment and Recognition // Chris_Leong, 1 min
AI safety: the ultimate trolley problem // chaosmage, 1 min
My agenda for research into transformer capabilities - Introduction // p.b., 3 min
Research agenda: Can transformers do system 2 thinking? // p.b., 2 min
How BoMAI Might fail // donald-hobson, 2 min
Truthfulness, standards and credibility // Joe_Collman, 39 min
Should we push for banning making hiring decisions based on AI? // ChristianKl, 1 min
What would the creation of aligned AGI look like for us? // Perhaps, 1 min
What if we stopped making GPUs for a bit? // MrPointy, 1 min
The Explanatory Gap of AI // david-valdman, 5 min
Is there a possibility that the upcoming scaling of data in language models causes A.G.I.? // richard-ford, 1 min
Can AI systems have extremely impressive outputs and also not need to be aligned because they aren't general enough or something? // WilliamKiely, 1 min
Progress report 3: clustering transformer neurons // nathan-helm-burger, 2 min
What's the problem with having an AI align itself? // FinalFormal2, 1 min
What if "friendly/unfriendly" GAI isn't a thing? // homunq, 1 min
List of concrete hypotheticals for AI takeover? // yitz, 1 min
Reverse (intent) alignment may allow safer Oracles // azsantosk, 4 min
On Agent Incentives to Manipulate Human Feedback in Multi-Agent Reward Learning Scenarios // francis-rhys-ward, 9 min
Progress Report 4: logit lens redux // nathan-helm-burger, 2 min
Make friendly AIs by befriending them // eg, 1 min
[ASoT] Some thoughts about imperfect world modeling // leogao, 5 min
AIs should learn human preferences, not biases // Stuart_Armstrong, 1 min
What Should We Optimize - A Conversation // johannes-c-mayer, 18 min
AI Language Progress // Robin Hanson, 2 min
Yudkowsky Contra Christiano On AI Takeoff Speeds // Scott Alexander, 31 min
# Anthropic
A paradox of existence // chrisvm, 6 min
The doomsday argument is normal // avturchin, 2 min
Why are we so early? // Flaglandbase, 1 min
# Decision theory
Ideal governance (for companies, countries and more) // HoldenKarnofsky, 16 min
How Real Moral Mazes (in Bay Area startups)? // gworley, 6 min
The Debtors' Revolt // Benquo, 50 min
Down By 30 // adamzerner, 4 min
Game Theory is Not Selfish // sil-ver, 4 min
Elasticity of Wheat Supply? // johnswentworth, 1 min
Nature's answer to the explore/exploit problem // lizard_brain, 1 min
Solving the Brazilian Children's Game of 007 // epirito, 3 min
# Math and CS
The Case for Frequentism: Why Bayesian Probability is Fundamentally Unsound and What Science Does Instead // lsusr, 7 min
Optimizing crop planting with mixed integer linear programming in Stardew Valley // hapanin, 7 min
Are the fundamental physical constants computable? // yair-halberstadt, 2 min
A Solution to the Unexpected Hanging Problem // dawn-drain, 5 min
Practical use of the Beta distribution for data analysis // maxwell-peterson, 3 min
Non-programmers intro to AI for programmers // Dustin, 2 min
Distilling and approaches to the determinant // AprilSR, 7 min
# Books
Dictator Book Club: Xi Jinping // Scott Alexander, 22 min // Favorite
Convincing All Capability Researchers // elriggs, 3 min
Notes on the Autobiography of Malcolm X // Benquo, 3 min
[Book Review] Why Greatness Cannot Be Planned: The Myth of the Objective // Stuckwork, 1 min
# Community
I discovered LessWrong... during Good Heart Week // identity.key, 4 min
My Recollection of How This All Got Started // gworley, 5 min
5-Minute Advice for EA Global // elriggs, 2 min
What are rationalists worst at? // gworley, 1 min
A conversation about growing through the history of the rationality movement and some of its history // Elo, 1 min
# Culture war
Ukraine Post #9: Again // Zvi, 19 min
What Twitter fixes should we advocate, now that Elon is on the board? // Jackson Wagner, 1 min
The Jordan Peterson vs Sam Harris Debate // lsusr, 7 min
Why Iraq is so violent // rockthecasbah, 4 min
Highlights From The Comments On Self-Determination // Scott Alexander, 13 min
# Misc
The case for using the term 'steelmanning' instead of 'principle of charity' // ChristianKl, 3 min
Case for emergency response teams // Jan_Kulveit, 6 min
The Zombie Argument for Empiricists // JonathanErhardt, 4 min
Science is Mining, not Foraging // Seb Farquhar, 18 min
# Podcasts
DeepMind: The Podcast - Excerpts on AGI // WilliamKiely, 6 min
AXRP Episode 14 - Infra-Bayesian Physicalism with Vanessa Kosoy // DanielFilan, 64 min
#126 - Bryan Caplan on whether lazy parenting is OK, what really helps workers, and betting on beliefs // 80,000 Hours Podcast, 135 min
EP 154 Iain McGilchrist on The Matter With Things // The Jim Rutt Show, 107 min
159 – Cryonics Concerns, and Theses On Sleep // The Bayesian Conspiracy, 111 min
# Rational fiction
[Invisible Networks] Goblin Marketplace // Kaj_Sotala, 3 min
Bayeswatch 9.5: Rest & Relaxation // lsusr, 2 min
If Dumbledore Was Named Eldore // TourmalineCupcakes, 7 min
Bayeswatch 6.5: Therapy // lsusr, 1 min
# Videos of the week
AI vs Humans | Fighting Money Laundering // Sumsub, 20 min