Wikipedia:Wikipedia Signpost/2022-08-01/From the editors
Rise of the machines, or something
- Here is the deal: it's pretty good at what it does. – J
There are a few terms that have been thrown around a lot lately: AI, DL, NN, ML, NLP, and more. While a precise definition of all these terms would take multiple paragraphs, the thing they have in common is that a computer is doing some stuff.
For anyone who is not familiar with this alphabet soup, I've written a fairly comprehensive overview of the field's origins and history, as well as an explanation of the technologies involved, here, and ask forgiveness for starting the explanation of a 2019 software release in 1951.
In recent years, the field of machine learning has advanced at a pace which is, depending on who you ask, somewhere between "astounding", "terrifying", "overhyped" and "revolutionary". For example, GPT (2018) was a mildly interesting research tool, GPT-2 (2019) could write human-level text but was barely capable of staying on topic for more than a couple paragraphs, and GPT-3 (2020–22) wrote this month's arbitration report (a full explanation of what I did, how I did it, and responses to the most obvious questions can be found below).
The generative pre-trained transformers (this is what "GPT" stands for) are a family of large language models developed by OpenAI, similar to BERT and XLNet. Perhaps as a testament to the rapidity of developments in the field, even Wikipedia (famous for articles written within minutes of speeches being made and explosions being heard) currently has a redlink for large language models. Much ink has already been spilled on claims of GPTs' sentience, bias, and potential. It's obvious that a computer program capable of writing on the level of humans would have enormous implications for the corporate, academic, journalistic, and literary worlds. While there are certainly some unrealistically hyped-up claims, it's hard to overstate how much these things are capable of, despite their constraints.
The reports
With that said, there are basically two options here.
- The first is for me to keep droning on about how these models are a big deal, in a boring wall of text that makes increasingly outlandish and far-fetched claims about their capabilities.
- The second is to show you what I am talking about.
I have opted for the second. In this issue, two articles have been written by an AI model called GPT-3: the deletion report and the arbitration report.
For the deletion report, GPT-3 was prompted with a transcript of each discussion in the report, and instructed to write a summary of it in the style of deceased Gonzo journalist Hunter S. Thompson. This produced a mixture of insightful, incisive, and derisive commentary. GPT-Thompson proved quite capable of accurately summarizing the slings and arrows of every discussion in the report – even though it specifically covers the longest and most convoluted AfDs. "Ukrainian Insurgent Army war against Russian occupation", for example, was a whopping 126,000 bytes (and needed to be processed in several segments) but the description was accurate.[2]
For each discussion in the report, I provided a full transcript of the AfD page (with timestamps and long signatures truncated to aid processing), and prompted GPT-3 for a completion, using some variation on the following:
- "The following text is a summary of the above discussion, written by Gonzo journalist Hunter S. Thompson for Rolling Stone's monthly feature on Wikipedia deletion processes."
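In code, the workflow just described might look something like the following minimal sketch, using the OpenAI Python library as it existed in 2022. The model name, the truncation regex, and the helper functions are my own assumptions for illustration, not the exact script used for this issue.

```python
import re

# The summary instruction quoted above, appended after each transcript.
SUMMARY_PROMPT = (
    "The following text is a summary of the above discussion, "
    "written by Gonzo journalist Hunter S. Thompson for Rolling "
    "Stone's monthly feature on Wikipedia deletion processes."
)

def prepare_transcript(raw: str) -> str:
    """Truncate timestamps (and similar signature clutter) to aid processing."""
    # Drop UTC timestamps such as "01:06, 1 August 2022 (UTC)".
    raw = re.sub(r"\d{2}:\d{2}, \d{1,2} \w+ \d{4} \(UTC\)", "", raw)
    # Collapse the whitespace left behind.
    return re.sub(r"[ \t]{2,}", " ", raw).strip()

def build_prompt(transcript: str) -> str:
    """Transcript first, then the summary instruction, as described above."""
    return prepare_transcript(transcript) + "\n\n" + SUMMARY_PROMPT + "\n"

# The completion call itself (2022-era openai 0.x interface; it needs an
# API key, so it is shown but not run here):
#
#   import openai
#   response = openai.Completion.create(
#       model="text-davinci-002",  # assumed engine
#       prompt=build_prompt(afd_transcript),
#       max_tokens=512,
#   )
#   summary = response["choices"][0]["text"]
```

Long discussions (like the 126,000-byte one mentioned above) would not fit in a single request and had to be processed in segments.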
Despite being ostensibly written in Thompson's style, these were generally quite straightforward summaries that covered the arguments made during each discussion, with hardly any profanity.[3]
Afterwards, I provided the summary itself as a prompt, and asked GPT-Thompson for an "acerbic quip" on each. Unlike the "summary" prompts (in which GPT-Thompson only occasionally chose to accompany his commentary with unprintable calumny and scathing political rants), the "acerbic quip" prompts consistently produced output ranging from obscene and irreverent to maliciously slanderous.[4] Notably, this behavior is identical to what Hunter S. Thompson habitually did in real life, and part of why many editors allegedly loathed working with him. Personally, I didn't mind sifting through the diatribes (some of them were quite entertaining), but having to run each prompt several times to get something usable did make it fairly expensive.
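That two-stage, run-until-usable loop can be sketched as follows. Everything here is an assumption made for illustration: the prompt wording, the retry limit, and especially the toy printability filter (in practice, the "usable" judgement was made by hand, not by a word list).

```python
from typing import Callable, Optional

# Second-stage instruction, fed after the generated summary.
QUIP_PROMPT = (
    "The following is an acerbic quip about the above discussion, "
    "written by Gonzo journalist Hunter S. Thompson."
)

# Toy stand-in for whatever judgement decides a quip is fit to print.
BANNED_WORDS = {"unprintable"}

def is_printable(text: str) -> bool:
    """Crude filter: reject output containing any banned word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not (words & BANNED_WORDS)

def quip_for(summary: str, complete: Callable[[str], str],
             max_tries: int = 5) -> Optional[str]:
    """`complete` is any prompt-to-completion callable, e.g. a thin
    wrapper around an API client. Re-prompts until output passes."""
    prompt = summary + "\n\n" + QUIP_PROMPT + "\n"
    for _ in range(max_tries):
        candidate = complete(prompt).strip()
        if is_printable(candidate):
            return candidate
    return None  # every attempt was unprintable; give up
```

Since each retry is a fresh paid completion, the cost of this loop is what made the quips the expensive part of the report.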
For the arbitration report, GPT-3 was instructed to write a summary of each page in the style of deceased United States Supreme Court justice Oliver Wendell Holmes, Jr. This produced surprisingly insightful commentary; Justice GPT-Holmes proved able to summarize minute details of proceedings, including some things I'd missed while originally reading them. In general, he was more well-behaved (and less prone to obscene tirades) than GPT-Thompson, although he did have a tendency for long-winded digressions, and would often quote entire paragraphs from the source text.[5]
Similar to the deletion report, input consisted of brief prologues (e.g. "The following is a verbatim transcript of the findings of fact in a Wikipedia arbitration request titled 'WikiProject Tropical Cyclones'"). This was followed by the transcript of the relevant pages (whether they were the main case page, arbitration noticeboard posting, preliminary statements, arbitrator statements, or findings of fact and remedies). Afterwards, a prompt was given for a summary, of the following general form:
- The following text is an article written by United States Supreme Court Justice Oliver Wendell Holmes, summarizing the findings of fact and remedies, and their broader implications for the English Wikipedia's jurisprudence.[6]
Image generation
- GPT-Thompson. Image from Craiyon (formerly "DALL-E Mini"), a VQGAN- and BART-based generative adversarial network.
- Justice GPT-Holmes. Image from Midjourney, a diffusion network whose architecture is not publicly documented.
We all remember those weird DeepDream images where the sky got turned into dogs. This is a little different.
In addition to text completion, transformers (in conjunction with other technologies) have proven themselves quite capable of image generation. The first of these, broadly speaking, was DALL-E, announced by OpenAI in January 2021. Since then, a number of services have become available, which use a variety of architectures to generate images from natural-language prompts (i.e. a prompt phrased in normal language like "a dog eating the Empire State Building", rather than a procedurally defined set of attributes and subjects written in a specialized description language). Among these are Craiyon (formerly known as "DALL-E Mini", despite having no relation to DALL-E) and Midjourney. For this issue, I used both of these services to generate illustrations for our articles: some came out very impressively, and some came out a little goofy. It was definitely surprising to see it have a coherent response for the prompt "Technoblade's avatar" that actually looked like it – I guess this is what happens when the training set is massive. Anyway, you can see a bunch of these on the issue page. For a comparison between the models I found usable, see the embedded images above.
DALL-E 2 creates much higher-quality images than what I used, but there's a waitlist for access, and it didn't end up happening by press time (although I did get my friend to generate me one). For a comparison, see below; both were prompted from the string "Teddy bears working on new AI research underwater with 1990s technology".
- Craiyon, 256×256 pixels.
- DALL-E 2, 1620×1620 pixels.
While some concerns have been raised about the intellectual property implications of images generated by such models, the determination has been made (at least on Commons) that they're ineligible for copyright due to being the output of a computer algorithm. With respect to moral rights, the idea is generally that they're ripping off human artists because they were trained on a bunch of images from the Internet, including copyrighted ones. However, it's not clear (at least to me) in what way this process differs from the same being done by human artists. As far as I can tell, this is the way that humans have created art for the last several tens of thousands of years – Billie Eilish does not get DMCA claims from the Beatles for writing pop music, and Peter Paul Rubens didn't get in trouble with the Lascaux cavemen (even when he painted obviously derivative works).
Obvious questions
- This is a joke, right?
- No. I really did have GPT-3 write these articles. I can show you screenshots of my account on the OpenAI website if you want. Copyediting was minimal, and consisted mostly of reordering entries and removing irrelevant asides.
- So you just pushed a button and the whole thing popped out?
- Not exactly. I organized the layout of each article, determined what sections would go where, and had GPT-3 write the body text of each section according to specific prompts (as described above). It was also necessary to format the model's output in MediaWiki markup. Although GPT-3 is more than capable of writing code (including wikitext), I didn't want to overwhelm it by asking it to do too much stuff at the same time, as this tends to degrade quality.
- This is obviously cherry-picked – you didn't just publish the direct output of the model.
- Well, we don't do that for human writers, either. I don't even do that for myself – typically, by the time I flag my own stuff to be copyedited, I have gone through multiple stages of writing, rewriting, editing, adding notes for clarification, and deleting unnecessary content.
- How do you know it's not completely full of crap?
- I don't – every claim that it made was individually fact-checked (we do this for human writers, too). The overwhelming majority were correct, and in the rare cases where it got something wrong, it could be fixed by asking it to complete the prompt again.
- Why not just write the articles yourself at that point?
- Even accounting for the time spent verifying claims, it was still generally faster than writing the articles myself, as it was capable of structuring full paragraphs of text in seconds. It was sometimes time-consuming to re-prompt it when it would write something incorrect or useless, but there is a sort of art to writing prompts in a way that causes useful answers to be generated, which gradually became easier to do as time went on. For example, replacing "The following is a summary of the discussion" with "The following is a rigorously accurate summary of the discussion" (yes, this really works).[7]
- So it is a worse version of a human writer?
- In terms of typographical errors, it was far better: I don't remember it making a single misspelling. The few grammatical errors it made were minor, and not objectively incorrect (e.g. saying "other editors argue" instead of "other editors argued" for a discussion that the prompt said had already been closed – this is not even really an error per se).
- I heard language models were racist.
- Language models like GPT-3 predict the most likely completion for a given input sequence, based on its training corpus, which is a very broad spectrum of text from the Internet (ranging from old books to forum arguments to furry roleplay). If its prompt is the phrase "I think the French are bastards because", you will probably end up with a bunch of text about how the French are bastards, similar to if you typed that into a search engine. In this particular instance, I did not observe GPT-3 saying anything prejudiced. This may be due to the people I prompted it to emulate;[8] presumably, if I had told it to write in the style of Adolf Hitler, I would have gotten some nasty stuff. My solution to this was to not do that.
- How much did this whole gimmick cost?
- Since I signed up for the GPT-3 beta, I have used it for things other than Signpost writing, so it's hard to tell precisely how much compute went towards these articles. However, the total cost of all the API requests I've made so far is 48.12 USD.
- Damn!!
- Large language models are notorious for requiring massive amounts of processing power. I still think it's a bargain: imagine how much it would cost to actually hire Hunter S. Thompson and Oliver Wendell Holmes after you adjusted for inflation.
Notes
- ^ This image was generated by DALL-E 2. The caption was also written by GPT-3, in response to a prompt asking it for New Yorker-style captions for cartoons about Oliver Wendell Holmes being resurrected to write for the Signpost.
- ^ All statements were individually fact-checked, per the "obvious questions" section.
- ^ One thing I couldn't help but notice was that some of the phrasing was quite similar to those I've used when writing previous deletion reports, like "Ultimately, the discussion resulted in a no consensus decision". My understanding is that GPT-3 included Wikipedia in its corpus, this version was trained in 2021, and as far as I can tell I am the only journalist who has ever written a monthly recap of the ten longest Wikipedia deletion discussions, so it's possible that when instructed to write an AfD report, it simply learned from the best (and/or only). On the other hand, it's also possible that there are just very few ways in which to describe the closure of an AfD, and this happens to be one of them. Who knows.
- ^ I would provide a highlights reel of these, but I was serious about the "unprintable" part – you'd be reading about it in the next arbitration report.
- ^ It's not entirely clear to me what causes this. While there are obviously differences between the writing styles of Gonzo journalism and Supreme Court opinions, there are also inherent differences in the format of AfD discussions (which are a bunch of people replying to each other directly) and ArbCom proceedings (which are a bunch of individual sections of highly procedural text). More research is needed on this front.
- ^ Sometimes, he would start adding his own recommendations, and I would have to append "Holmes is not himself an arbitrator." to the prompt.
- ^ Kojima, Takeshi; Gu, Shixiang Shane; Reid, Machel; Matsuo, Yutaka; Iwasawa, Yusuke (2022). "Large Language Models are Zero-Shot Reasoners". arXiv:2205.11916 [cs.CL].
- ^ Thompson, Hunter S. (November 24, 2011). Kingdom of Fear: Loathsome Secrets of a Star-crossed Child in the Final Days of the American Century. Penguin Books Limited. ISBN 9780241958735 – via Google Books.
They speak for all that is cruel and stupid and vicious in the American character. They are the racists and hate mongers among us [...] Fuck them.
Discuss this story
I commissioned GPT-3 to write a poem about this article:
I have some concerns, but I also have no idea what I'm doing, so there's that! Fantastic read, and I'm very interested to see the ongoing implications of this tech. ASUKITE 01:06, 1 August 2022 (UTC)[reply]
Thanks for this stimulating piece. I think that this raises questions about the use of generative language models in Wikipedia. Even if the results are mind-blowing, I think that we should refuse the use of generative language models in Wikipedia for several reasons:
Maybe it would be worth having a sister project from the foundation using an AI-based encyclopedia (Wiki-AI-pedia). Now, we have a problem. It may be very difficult in the near future to detect contributions generated with generative AI. This will be a big challenge. Imagine an AI which would be an expert in vandalism? PAC2 (talk) 06:27, 1 August 2022 (UTC)[reply]
"I heard language models were racist" Don't AI models have some sort of system to block "problematic prompts"? I know that DALL-E 2 blocks problematic prompts as per the following quote in an IEEE article: "Again, the company integrated certain filters to keep generated images in line with its content policy and has pledged to keep updating those filters. Prompts that seem likely to produce forbidden content are blocked and, in an attempt to prevent deepfakes, it can't exactly reproduce faces it has seen during its training. Thus far, OpenAI has also used human reviewers to check images that have been flagged as possibly problematic." Maybe GPT-3 could use a similar system. Tube·of·Light 03:40, 4 August 2022 (UTC)[reply]