
PauseAI

From Wikipedia, the free encyclopedia
Formation: May 2023
Founder: Joep Meindertsma
Founded at: Utrecht, Netherlands
Type: Advocacy group, Nonprofit
Purpose: Mitigating the existential risk from artificial general intelligence and other risks of advanced artificial intelligence
Region: International
Website: pauseai.info

PauseAI is a global political movement founded in the Netherlands with the stated aim of achieving global coordination to stop the development of artificial intelligence systems more powerful than GPT-4, at least until it is known how to build them safely and keep them under democratic control.[1] The movement was established in Utrecht in May 2023 by software entrepreneur Joep Meindertsma.[2][3][4]

Proposal

PauseAI's stated goal is to “implement a pause on the training of AI systems more powerful than GPT-4”. Its website lists proposed steps to achieve this:[1]

  • Set up an international AI safety agency, similar to the IAEA.
  • Only allow training of general AI systems more powerful than GPT-4 if their safety can be guaranteed.
  • Only allow deployment of models once it is established that no dangerous capabilities are present.

Background

During the late 2010s and early 2020s, a period of rapid improvement in the capabilities of artificial intelligence models, known as the AI boom, was underway; it included the release of the large language model GPT-3, its more powerful successor GPT-4, and the image generation models Midjourney and DALL-E. This led to increased concern about the risks of advanced AI, prompting the Future of Life Institute to release an open letter calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". The letter was signed by thousands of people, including AI researchers such as Yoshua Bengio and Stuart Russell and industry figures such as Elon Musk.[5][6][7]

History

Founder Joep Meindertsma first became worried about the existential risk from artificial general intelligence after reading philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies. He founded PauseAI in May 2023, putting his job as the CEO of a software firm on hold. Meindertsma claimed that progress in AI alignment research was lagging behind progress in AI capabilities, and said "there is a chance that we are facing extinction in a short frame of time". As such, he felt an urgent need to organise people to act.[3][8][4]

PauseAI's first public action was a protest in front of Microsoft's Brussels lobbying office in May 2023, during an event on artificial intelligence.[4] In November of the same year, members protested outside the inaugural AI Safety Summit at Bletchley Park.[9] Meindertsma regarded the Bletchley Declaration signed at the summit, which acknowledged the potential for catastrophic risks stemming from AI, as a small first step, but argued that "binding international treaties" were needed. He cited the Montreal Protocol and treaties banning blinding laser weapons as examples of previous successful global agreements.[3]

In February 2024, members of PauseAI gathered outside OpenAI's headquarters in San Francisco, in part in response to OpenAI removing language from its usage policy that had prohibited the use of its models for military purposes.[10]

On 13 May 2024, protests were held in thirteen countries ahead of the AI Seoul Summit, including the United States, the United Kingdom, Brazil, Germany, Australia, and Norway. Meindertsma said that those attending the summit "need to realize that they are the only ones who have the power to stop this race". Protesters in San Francisco held signs reading "When in doubt, pause" and "Quit your job at OpenAI. Trust your conscience".[11][12][3][13] Jan Leike, head of the "superalignment" team at OpenAI, resigned two days later, citing his belief that "safety culture and processes [had] taken a backseat to shiny products".[14]


References

  1. ^ a b "PauseAI Proposal". PauseAI. Retrieved 2024-05-02.
  2. ^ Meaker, Morgan. "Meet the AI Protest Group Campaigning Against Human Extinction". Wired. ISSN 1059-1028. Retrieved 2024-04-30.
  3. ^ a b c d Reynolds, Matt. "Protesters Are Fighting to Stop AI, but They're Split on How to Do It". Wired. ISSN 1059-1028. Retrieved 2024-08-20.
  4. ^ a b c "The rag-tag group trying to pause AI in Brussels". Politico. 2023-05-24. Retrieved 2024-04-30.
  5. ^ Hern, Alex (2023-03-29). "Elon Musk joins call for pause in creation of giant AI 'digital minds'". The Guardian. ISSN 0261-3077. Retrieved 2024-08-20.
  6. ^ Metz, Cade; Schmidt, Gregory (2023-03-29). "Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society'". The New York Times. ISSN 0362-4331. Retrieved 2024-08-20.
  7. ^ "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-08-20.
  8. ^ "Could AI lead us to extinction? This Brussels-based group believes so". Euronews. 2023-06-14. https://www.euronews.com/next/2023/06/14/could-ai-lead-us-to-extinction-this-brussels-based-group-believes-so. Retrieved 2024-08-20.
  9. ^ "What happens in Bletchley, stays in…". Islington Tribune. Retrieved 2024-05-08.
  10. ^ Nuñez, Michael (2024-02-13). "Protesters gather outside OpenAI office, opposing military AI and AGI". VentureBeat. Retrieved 2024-08-20.
  11. ^ Gordon, Anna (2024-05-13). "Why Protesters Are Demanding Pause on AI Development". TIME. Retrieved 2024-08-20.
  12. ^ Rodriguez, Joe Fitzgerald (2024-05-13). "As OpenAI Unveils Big Update, Protesters Call for Pause in Risky 'Frontier' Tech | KQED". www.kqed.org. Retrieved 2024-08-20.
  13. ^ "OpenAI launches new AI model GPT-4o, a conversational digital personal assistant". ABC7 San Francisco. 2024-05-14. Retrieved 2024-08-20.
  14. ^ Robison, Kylie (2024-05-17). "OpenAI researcher resigns, claiming safety has taken "a backseat to shiny products"". The Verge. Retrieved 2024-08-20.