The best-known posts on LessWrong are "The Sequences", a series of essays which aim to describe how to avoid the typical failure modes of human reasoning, with the goal of improving decision-making and the evaluation of evidence.[3][4] One suggestion is the use of Bayes' theorem as a decision-making tool.[2] There is also a focus on psychological barriers that prevent good decision-making, including fear conditioning and cognitive biases that have been studied by the psychologist Daniel Kahneman.[5]
LessWrong is also concerned with artificial intelligence, transhumanism, existential threats and the singularity. The New York Observer in 2019 noted that "Despite describing itself as a forum on 'the art of human rationality,' the New York Less Wrong group... is fixated on a branch of futurism that would seem more at home in a 3D multiplex than a graduate seminar: the dire existential threat—or, with any luck, utopian promise—known as the technological Singularity... Branding themselves as 'rationalists,' as the Less Wrong crew has done, makes it a lot harder to dismiss them as a 'doomsday cult'."[6]
History
LessWrong developed from Overcoming Bias, an earlier group blog focused on human rationality, which began in November 2006, with artificial intelligence researcher Eliezer Yudkowsky and economist Robin Hanson as the principal contributors. In February 2009, Yudkowsky's posts were used as the seed material to create the community blog LessWrong, and Overcoming Bias became Hanson's personal blog.[7] In 2013, a significant portion of the rationalist community shifted focus to Scott Alexander's Slate Star Codex.[3] In 2017, a "LessWrong 2.0" effort reinvigorated the site,[8] and LessWrong's reported user activity has since returned to roughly pre-2013 levels.[9]
Artificial Intelligence
Discussions of AI within LessWrong include AI alignment, AI safety,[10] and machine consciousness.[citation needed] Articles posted on LessWrong about AI have been cited in the news media.[10][11] LessWrong and its surrounding movement's work on AI are the subjects of the 2019 book The AI Does Not Hate You, written by former BuzzFeed science correspondent Tom Chivers.[12][13][14]
Effective altruism
LessWrong played a significant role in the development of the effective altruism (EA) movement,[15] and the two communities are closely intertwined.[16]: 227 In a survey of LessWrong users in 2016, 664 out of 3,060 respondents, or 21.7%, identified as "effective altruists". A separate survey of effective altruists in 2014 revealed that 31% of respondents had first heard of EA through LessWrong,[16] though that number had fallen to 8.2% by 2020.[17]
In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures people who heard of the AI before it came into existence and failed to work tirelessly to bring it into existence, in order to incentivise said work. This idea came to be known as "Roko's basilisk", based on Roko's idea that merely hearing about the idea would give the hypothetical AI system an incentive to try such blackmail.[18][19][6]
Neoreaction
The comment section of Overcoming Bias attracted prominent neoreactionaries such as Curtis Yarvin (pen name Mencius Moldbug), the founder of the neoreactionary movement,[20] and Hanson posted his side of a debate with Moldbug on futarchy.[21] After LessWrong split from Overcoming Bias, it too attracted some individuals affiliated with neoreaction through its discussions of eugenics and evolutionary psychology.[22] However, Yudkowsky has strongly rejected neoreaction.[23][24] In a survey of LessWrong users in 2016, 28 out of 3,060 respondents (0.92%) identified as "neoreactionary".[25]
Notable users
LessWrong has been associated with several influential contributors. Founder Eliezer Yudkowsky established the platform to promote rationality and raise awareness about potential risks associated with artificial intelligence.[26] Scott Alexander, who contributed discussions on AI safety and rationality, became one of the site's most popular writers before starting his own blog, Slate Star Codex.[26]
Further notable users of LessWrong include Paul Christiano, Wei Dai and Zvi Mowshowitz. Posts by these and other contributors, selected through a community review process,[27] were published as part of the essay collections "A Map That Reflects the Territory"[28] and "The Engines of Cognition".[29][27][30]
Chivers, Tom (22 November 2023). "What we've learned about the robot apocalypse from the OpenAI debacle". Semafor. Archived from the original on 3 March 2024. Retrieved 14 July 2024. "Since the late 1990s those worries have become more specific, and coalesced around Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies and Eliezer Yudkowsky's blog LessWrong."
de Lazari-Radek, Katarzyna; Singer, Peter (27 September 2017). Utilitarianism: A Very Short Introduction. Oxford University Press. p. 110. ISBN 9780198728795.
Chivers, Tom (2019). "Chapter 38: The Effective Altruists". The AI Does Not Hate You. Weidenfeld & Nicolson. ISBN 978-1474608770.
Sandifer, Elizabeth (2018). Neoreaction a Basilisk: Essays On and Around the Alt-Right (2nd ed.). Eruditorum Press. "one of the sites where [Moldbug] got his start as a commenter was on Overcoming Bias, i.e. where Yudkowsky was writing before LessWrong."
Keep, Elmo (22 June 2016). "The Strange and Conflicting World Views of Silicon Valley Billionaire Peter Thiel". Fusion. Archived from the original on 13 February 2017. Retrieved 5 October 2016. "Thanks to LessWrong's discussions of eugenics and evolutionary psychology, it has attracted some readers and commenters affiliated with the alt-right and neoreaction, that broad cohort of neofascist, white nationalist and misogynist trolls."
Riggio, Adam (23 September 2016). "The Violence of Pure Reason: Neoreaction: A Basilisk". Social Epistemology Review and Reply Collective. 5 (9): 34–41. ISSN 2471-9560. Archived from the original on 5 October 2016. Retrieved 5 October 2016. "Land and Yarvin are openly allies with the new reactionary movement, while Yudkowsky counts many reactionaries among his fanbase despite finding their racist politics disgusting."
Yudkowsky, Eliezer (8 April 2016). "Untitled". Optimize Literally Everything (blog). Archived from the original on 26 May 2019. Retrieved 7 October 2016.
Miller, J.D. (2017). "Reflections on the Singularity Journey". In Callaghan, V.; Miller, J.; Yampolskiy, R.; Armstrong, S. (eds.). The Technological Singularity. The Frontiers Collection. Berlin, Heidelberg: Springer. pp. 225–226. ISBN 978-3-662-54033-6. "Yudkowsky helped create the Singularity Institute (now called the Machine Intelligence Research Institute) to help mankind achieve a friendly Singularity. (Disclosure: I have contributed to the Singularity Institute.) Yudkowsky then founded the community blog http://LessWrong.com, which seeks to promote the art of rationality, to raise the sanity waterline, and to in part convince people to make considered, rational charitable donations, some of which, Yudkowsky (correctly) hoped, would go to his organization."
Gasarch, William (2022). "Review of "A Map that Reflects the Territory: Essays by the LessWrong Community"". ACM SIGACT News. 53 (1): 13–24. doi:10.1145/3532737.3532741. "Users wrote reviews of the best posts of 2018, and voted on them using the quadratic voting system, popularized by Glen Weyl and Vitalik Buterin. From the 2000+ posts published that year, the Review narrowed down the 44 most interesting and valuable posts."
Gasarch, William (2022). "Review of "The Engines of Cognition: Essays by the Less Wrong Community"". ACM SIGACT News. 53 (3): 6–16. doi:10.1145/3561066.3561064 (inactive as of 28 July 2024).