Center on Long-Term Risk
The Center on Long-Term Risk (CLR) is a research institute dedicated to mitigating s-risks (risks of astronomical suffering) posed by advanced AI. Its primary focus is promoting cooperative behavior and preventing conflicts between transformative AI systems.[1]
History
Founded in July 2013 as the Foundational Research Institute (FRI), CLR adopted its current name in March 2020. It operates under the Effective Altruism Foundation.
The roots of CLR can be traced to Brian Tomasik's Essays on Reducing Suffering, first published in June 2006 on his website utilitarian-essays.com.[3] Tomasik registered the domain on June 12, 2006.[4] In July 2013, he conducted a Facebook poll to name a new organization focused on researching future suffering. This initiative led to the creation of the Foundational Research Institute, whose domain, foundational-research.org, was registered in August 2013.[5][6] Tomasik left his position at Microsoft in October 2013 to focus on his research at FRI.[7]
In 2014, FRI expanded its outreach through additional domains, including reducing-suffering.org and crucialconsiderations.org, the latter hosting the blog Crucial Considerations.[8][9][10]
FRI’s research gained recognition in 2016 when David Althaus and Lukas Gloor published "Reducing Risks of Astronomical Suffering: A Neglected Priority," which introduced s-risks as a focus area.[11]
In 2017, s-risks were formally introduced to the Effective Altruism community during a presentation at EA Global Boston by Max Daniel, a researcher at FRI.[12][13]
In March 2020, FRI was renamed the Center on Long-Term Risk, reflecting its refined mission to address s-risks posed by advanced AI systems.[14]
Research and Impact
CLR publishes research papers and frameworks addressing cooperation problems in AI, with the aim of fostering global coordination and reducing the risk of conflict among AI systems. The institute’s work contributes to the broader goals of Effective Altruism by focusing on reducing suffering on a large scale.
By June 2022, CLR had secured significant funding, including over $1.2 million from the Survival and Flourishing Fund.[15]
References
1. "Center on Long-Term Risk". forum.effectivealtruism.org. Retrieved December 15, 2024.
2. "Center on Long-Term Risk". forum.effectivealtruism.org. Retrieved December 15, 2024.
3. Tomasik, Brian. "History of This Website". Essays on Reducing Suffering. Retrieved February 9, 2018.
4. "Showing results for: utilitarian-essays.com". ICANN WHOIS. Retrieved February 9, 2018.
5. "Transparency". Foundational Research Institute. Retrieved July 27, 2017.
6. "Showing results for: FOUNDATIONAL-RESEARCH.ORG". ICANN WHOIS. Retrieved February 9, 2018.
7. "Interview with Brian Tomasik". Everyday Utilitarian. Retrieved February 10, 2018.
8. "Showing results for: REDUCING-SUFFERING.ORG". ICANN WHOIS. Retrieved February 9, 2018.
9. "Showing results for: CRUCIALCONSIDERATIONS.ORG". ICANN WHOIS. Retrieved February 9, 2018.
10. "About". Crucial Considerations. November 19, 2015. Retrieved February 9, 2018.
11. "Reducing Risks of Astronomical Suffering: A Neglected Priority". Foundational Research Institute. January 15, 2018. Retrieved February 10, 2018.
12. "Max Daniel: S-risks: Why they are the worst existential risks, and how to prevent them". YouTube. June 18, 2017. Retrieved February 9, 2018.
13. "S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017)". Foundational Research Institute. June 20, 2017. Retrieved February 9, 2018.
14. "Center on Long-Term Risk". forum.effectivealtruism.org. Retrieved December 15, 2024.
15. "Center on Long-Term Risk". forum.effectivealtruism.org. Retrieved December 15, 2024.