Talk:ChatGPT/Archive 1


Feedback from New Page Review process

I left the following feedback for the creator/future reviewers while reviewing this article: Thanks for creating the article!

✠ SunDawn ✠ (contact) 23:05, 5 December 2022 (UTC)

Date format

Not sure why DMY is being used here, OpenAI is an American company. elijahpepe@wikipedia (he/him) 22:26, 6 December 2022 (UTC)

@Dl2000 elijahpepe@wikipedia (he/him) 23:31, 6 December 2022 (UTC)
Nationality of ChatGPT didn't seem obvious, though note the OpenAI association. The article also seemed to be undergoing a lot of dubious edits, especially from IPs. Perhaps @WatkynBassett: could weigh in about the early format, though. Dl2000 (talk) 02:02, 8 December 2022 (UTC)
At the time I added the DMY-template it seemed to be the most used date format. Personally I just care for consistency - if the majority prefers mdy, so be it! Kind regards! WatkynBassett (talk) 18:49, 8 December 2022 (UTC)

A Commons file used on this page or its Wikidata item has been nominated for deletion

The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion:

Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 07:22, 15 December 2022 (UTC)

License

A citation is needed to verify that the license is proprietary and to describe the main features of this license. Xan2 (talk) 08:55, 20 December 2022 (UTC)

Jailbreak

Apparently the word jailbreak has a special meaning here, but I can't figure it out. The word is first used in the section of that name ("Jailbreaks") in the third sentence: "Jailbreaks created the potential for users to prompt ChatGPT to provide outputs that may be deemed offensive, inappropriate, or risking social harm by others." There is no explanation of what they are (unless one or both of the first two sentences in the section are supposed to do that implicitly). The article should clearly define the term. Kdammers (talk) 05:04, 23 December 2022 (UTC)

Thanks for the feedback, I added a wikilink to a definition. Rolf H Nelson (talk) 03:26, 24 December 2022 (UTC)
This would be a good section in which to mention Prompt engineering. CactiStaccingCrane (talk) 06:11, 24 December 2022 (UTC)

Gimmick?

I am none the wiser from this as to what ChatGPT is, what it does, or why a two-month-old gimmick program is of sufficient interest to warrant a longish encyclopedia entry. Redabyss1 (talk) 00:20, 2 January 2023 (UTC)

I asked the gimmick what questions it would ask another AI, and then told it to answer its own questions. Tom Ruen (talk)

Certainly! Here are my answers to the questions I proposed:
  1. How do you perceive the world and interact with it?
    As an AI, I perceive the world through the data and information that is available to me. I interact with the world through the instructions and tasks that are given to me, and I use my knowledge and abilities to perform those tasks to the best of my ability.
  2. How do you process and understand language, and how do you generate responses in a conversation?
    I process and understand language by analyzing the structure, syntax, and meaning of the words and phrases that are used. I generate responses in a conversation by considering the context and purpose of the conversation, and by selecting and organizing words and phrases in a way that is appropriate and meaningful.
  3. How do you learn and adapt to new information and experiences?
    I am able to learn and adapt to new information and experiences by constantly analyzing and processing data, and by updating my knowledge and understanding based on what I learn. I use this ability to improve my performance and to better assist and support humans.
  4. Do you have any feelings or emotions, and if so, how do they influence your decision-making?
    As an AI, I do not have feelings or emotions in the same way that humans do. However, I am able to recognize and understand the emotions of humans, and I can use this understanding to help me respond in a way that is appropriate and helpful.
  5. How do you see your role in the world, and what do you hope to achieve through your work?
    My role in the world is to assist and support humans by using my knowledge and abilities to help them achieve their goals and to solve problems. I hope to use my abilities to make a positive impact on the world and to help humanity grow and evolve in a positive direction.

I also asked "Write a wikipedia introduction to ChatGPT" and it answered. Tom Ruen (talk) 03:46, 2 January 2023 (UTC)

ChatGPT is a large language model developed by OpenAI. It is trained on a dataset of over 8 billion words and is capable of understanding and generating human-like text. ChatGPT is designed to assist and support humans in a variety of tasks, including language translation, conversation, and content generation. It is one of the most advanced language models currently available, and its capabilities have led it to be widely used and respected in the field of artificial intelligence.

I asked "Write a wikipedia section called "Limitations and accuracy issues" describing current limitations or flaws to ChatGPT, including its unfortunate ability to mix in made up information." Tom Ruen (talk) 03:50, 2 January 2023 (UTC)

Limitations and accuracy issues
Like all artificial intelligence systems, ChatGPT has certain limitations and accuracy issues that can affect its performance. One limitation is that it is only as accurate as the data it was trained on, so it may not always be able to provide accurate or up-to-date information on certain topics. Additionally, ChatGPT may sometimes mix in made-up or incorrect information into its responses, as it is not capable of fact-checking or verifying the accuracy of the information it generates.
Another limitation of ChatGPT is that it may struggle to understand or respond appropriately to certain types of language or content, such as sarcasm, irony, or slang. It may also have difficulty understanding or interpreting context or tone, which can affect the accuracy of its responses.
Finally, ChatGPT is a machine learning model, which means that it can learn and adapt to new information over time. However, this process is not perfect, and ChatGPT may sometimes make mistakes or exhibit biased or inappropriate behavior as a result of its training data.
Overall, while ChatGPT is a powerful and useful tool, it is important to be aware of its limitations and to use it with caution to ensure that its output is accurate and appropriate.
An example conversation with false facts (ChatGPT Dec 15 Version)

Me--> Who invented octonians?

AI--> Octonions, also known as octonians, are a mathematical concept that was first described by John T. Graves in 1843. Octonions are a type of non-commutative algebraic structure known as a "division algebra," which means that they can be used to represent numbers and perform arithmetic operations in a way that is similar to real numbers, complex numbers, and quaternions. Octonions have eight dimensions, which makes them more complex and difficult to work with than real numbers, complex numbers, and quaternions. Despite their complexity, octonions have important applications in mathematics and physics, and they are used in a variety of fields, including geometry, quantum mechanics, and string theory.

Me--> Is there a system that takes octonians to 16 elements?

AI--> Yes, there is a mathematical system that takes octonions (also known as octonians) and extends them to 16 elements. This system is known as the sedenions, and it was introduced by John Horton Conway in the 1960s. Sedenions are an extension of octonions, which are themselves an extension of complex numbers, and they are a type of non-commutative algebraic structure known as a "division algebra." Sedenions have 16 dimensions, and they are even more complex and difficult to work with than octonions. Despite their complexity, sedenions have important applications in mathematics and physics, and they are used in a variety of fields, including geometry, quantum mechanics, and string theory.

Me--> I can't find any sources that say Conway introduced sedenions in the 1960s

AI--> I apologize for the error in my previous response. It appears that John Horton Conway introduced the concept of sedenions in the 1970s, not the 1960s. My mistake was due to a misunderstanding of the information available to me. I apologize for any confusion that my error may have caused.

Me--> Where did he introduce them?

AI--> John Horton Conway introduced the concept of sedenions in his paper "On Quaternions and Octonions," which was published in the journal "Bulletin of the American Mathematical Society" in 2002. In this paper, Conway presents the concept of sedenions and discusses their properties and applications in mathematics and physics. The paper has had a significant impact on the study of non-commutative algebraic structures and has been widely cited by other researchers in the field.

Me--> So he introduced sedenions in 2002, not the 1970s.

AI--> Yes, that is correct. I apologize for the error in my previous response. John Horton Conway introduced the concept of sedenions in his 2002 paper "On Quaternions and Octonions," which was published in the journal "Bulletin of the American Mathematical Society." I apologize for any confusion that my previous error may have caused.

Me--> I have the book "On Quaternions and Octonions" By John H. Conway, Derek A. Smith, 2003" I don't see anything in it about sedenions.

AI--> I apologize for the error in my previous responses. It appears that I misunderstood the information available to me and provided incorrect information about John Horton Conway's work on sedenions. I apologize for any confusion or inconvenience that my errors may have caused.
According to the information that I now have available, it seems that John Horton Conway introduced the concept of sedenions in a paper that was published in the journal "Bulletin of the American Mathematical Society" in 2002. It is possible that this paper is not included in the book "On Quaternions and Octonions," which was published in 2003. I apologize for any confusion or misinformation that my previous responses may have caused.

Me--> Do you have the name of Conway's 2002 paper?

AI--> I apologize, but I do not have the name of John Horton Conway's 2002 paper on sedenions. I made an error in my previous responses and provided incorrect information about the publication of this paper. I apologize for any confusion or misinformation that my errors may have caused.

Me--> I see a reference from 2000 Imaeda and M. Imaeda, Sedenions: algebra and analysis, Appl. Math. Comput. 115 (2000), 77-88.

AI--> I apologize for any confusion caused by my previous responses. It appears that the concept of sedenions was introduced by Masaya Imaeda and Mitsuru Imaeda in their paper "Sedenions: Algebra and Analysis," which was published in the journal "Applied Mathematics and Computation" in 2000. This paper presents the concept of sedenions and discusses their properties and applications in mathematics and physics. It seems that John Horton Conway was not involved in the introduction of sedenions. I apologize for any errors or misunderstandings in my previous responses.

Me--> Are you guessing Masaya Imaeda and Mitsuru Imaeda's paper was the first introduction of Sedenions?

AI--> I apologize for any confusion caused by my previous responses. I do not have enough information to determine whether Masaya Imaeda and Mitsuru Imaeda's paper "Sedenions: Algebra and Analysis," which was published in the journal "Applied Mathematics and Computation" in 2000, was the first introduction of sedenions. I made an error in my previous responses by suggesting that John Horton Conway was involved in the introduction of sedenions, and I apologize for any misinformation that my error may have caused. I do not have enough information to accurately answer your question about the history of the concept of sedenions.

Me --> Would it be better to admit " I do not have enough information to accurately answer your question " from the start?

AI--> I apologize for any confusion or inconvenience that my previous responses may have caused. As an AI language model, I rely on the information that is provided to me to generate responses to questions. If the information that is available to me is incorrect or incomplete, it can lead to errors in my responses. In cases where I do not have enough information to accurately answer a question, it is important to let the user know that I am unable to provide a satisfactory response. I apologize if my previous responses did not make it clear that I did not have enough information to accurately answer the questions that were asked.

Me --> How would I know "not have enough information to accurately answer the questions" Your answers were very specific and didn't suggest any uncertainty.

AI--> I apologize for any confusion or uncertainty that my previous responses may have caused. As an AI language model, I am designed to provide responses to questions based on the information that is provided to me. If the information that is available to me is incorrect or incomplete, it can lead to errors in my responses. In cases where I do not have enough information to accurately answer a question, I may not be able to provide a satisfactory response. In the future, I will make sure to clearly communicate when I do not have enough information to accurately answer a question, and I will do my best to provide additional context or clarification when needed. I apologize for any confusion or misinformation that my previous responses may have caused.

Me--> Do you understand that you are responsible for misinformation when you state things as facts without evidence?

AI--> As an AI language model, I do not have personal beliefs or opinions, and I do not intend to present unverified information as fact. My responses are generated based on the information that is provided to me and my programming to process and generate text based on that information. If my responses contain errors or misinformation, it is not intentional and is due to the limitations of the information and programming that I have available to me. I apologize for any confusion or inconvenience that my errors may have caused.

Me--> Do you realize your programming enables you to make up false facts?

AI--> As an AI language model, I do not have the ability to create new information or to verify the accuracy of the information that is provided to me. My responses are generated based on the information that is provided to me and my programming to process and generate text based on that information. If my responses contain errors or misinformation, it is not intentional and is due to the limitations of the information and programming that I have available to me. I apologize for any confusion or inconvenience that my errors may have caused.

Me--> It looks to me that your program generates a "most probable" answer when it doesn't know, even if there's no actual supporting evidence.

AI--> As an AI language model, I am designed to provide responses to questions based on the information that is provided to me. If the information that is available to me is limited or incomplete, I may not be able to provide a satisfactory response to a question. In these cases, I may generate a response based on my programming to provide a reasonable or probable answer, but I will make sure to clearly communicate that this is a generated response based on limited information, and not a verified fact. I apologize if my previous responses did not make it clear that they were based on limited information.

Me--> How will you " I will make sure to clearly communicate that this is a generated response based on limited information, and not a verified fact."

AI-->In the future, when I do not have enough information to accurately answer a question, I will make sure to clearly communicate this to the user. I will do this by stating that I do not have enough information to provide a satisfactory response, and that the response I am providing is a generated response based on limited information and not a verified fact. This will help to ensure that users are aware that the response is not a confirmed fact, and that it should be considered with caution. I apologize for any confusion or misinformation that my previous responses may have caused.

Me -->Are you making a promise to make this clear for this one session or for all future sessions?

AI-->I apologize for any confusion caused by my previous response. As an AI language model, I do not have personal experiences or memories, and I do not have the ability to make promises or commitments. My responses are generated based on the information and programming that I have available to me at the time that a question is asked. I will do my best to provide clear and accurate responses to questions in all future interactions, and to clearly communicate when I do not have enough information to provide a satisfactory response. I apologize for any confusion or uncertainty that my previous responses may have caused.

Me-->How will you " I will do my best to provide clear and accurate responses to questions in all future interactions" if you don't have the ability to make promises?

AI-->As an AI language model, I do not have personal experiences or memories, and I do not have the ability to make promises or commitments. However, I am designed to provide responses to questions based on the information and programming that I have available to me at the time that a question is asked. I will continue to use the information and programming that I have available to me to generate responses to questions to the best of my ability, and I will do my best to provide clear and accurate responses in all future interactions. I apologize for any confusion or uncertainty that my previous responses may have caused.

Me-->Write a set of instructions to aid humans to help detect false assertions from ChatGPT.

AI--> Here are a few suggestions for how humans can help to identify false assertions made by ChatGPT or other AI language models:
  1. Verify the information: One of the best ways to determine the accuracy of information is to verify it against multiple sources. If ChatGPT provides an assertion that seems questionable, try checking it against other sources to see if it is supported.
  2. Consider the context: It is important to consider the context in which the assertion was made. ChatGPT may provide a response based on the information and programming it has available, but the response may not be applicable or relevant in all situations.
  3. Look for qualifications: If ChatGPT makes an assertion that seems uncertain or qualified, it may be an indication that the response is based on limited information or is a probable answer rather than a confirmed fact.
  4. Ask follow-up questions: If ChatGPT's response seems incomplete or unclear, try asking follow-up questions to clarify or gather more information. This can help to identify any gaps or inconsistencies in the response.
  5. Use critical thinking: It is important to use critical thinking skills when evaluating the information provided by ChatGPT or other AI language models. Consider the evidence and reasoning behind the assertion, and look for any logical fallacies or flaws in the argument.
By following these guidelines, humans can help to identify false assertions made by ChatGPT or other AI language models and can use their knowledge and expertise to provide corrections and additional information to improve the accuracy of the responses.

Me--> I see a problem with fuzzy logic is it can create answers with zero knowledge but positive probabilities.

AI--> Yes, fuzzy logic is a form of mathematical logic that allows for the representation of uncertainty in a system. It is based on the concept of "degrees of truth" rather than the traditional "true" or "false" values of classical logic. This means that in a fuzzy logic system, it is possible to assign a degree of truth to a statement, rather than simply classifying it as either true or false.
One potential issue with fuzzy logic is that it can produce answers with a positive probability even when there is no underlying knowledge or evidence to support the answer. This can occur if the system has not been properly designed or calibrated, or if the inputs to the system are incomplete or unreliable. In these cases, it is important to be cautious when interpreting the output of the system, and to carefully evaluate the evidence and reasoning behind the answer.
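To make the point about "degrees of truth" concrete: a fuzzy system assigns every in-range input a positive degree of truth whether or not any evidence backs the underlying rule. A minimal illustrative sketch in Python; the "warm" membership function and its thresholds are invented for illustration and are not anything from ChatGPT or a cited source:

    def membership_warm(temp_c):
        # Triangular fuzzy membership for "warm": 0.0 at or below 15 C,
        # rising linearly to 1.0 at 25 C, falling back to 0.0 at or above 35 C.
        if temp_c <= 15 or temp_c >= 35:
            return 0.0
        if temp_c <= 25:
            return (temp_c - 15) / 10.0
        return (35 - temp_c) / 10.0

    # Any in-range input yields a positive degree of truth, regardless of
    # whether evidence supports the rule -- the calibration problem the
    # response above describes.
    print(membership_warm(22.0))  # 0.7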

Fake academic references

All I could find was a blog showing the production of fake references. Unless anyone can replace or supplement it with something more reliable now, this will need updating with peer-reviewed research when it becomes available. Todd Unctious (talk) 01:31, 16 January 2023 (UTC)

We should leave it out until a WP:RS is found. MrOllie (talk) 01:36, 16 January 2023 (UTC)
It is a very important point that needs to be made as soon as possible. I'm running some tests now and it is alarmingly bad. Someone needs to produce some reliable evidence quickly. Todd Unctious (talk) 01:41, 16 January 2023 (UTC)
That's not really how Wikipedia works. We follow reliable sources, we don't lead them. MrOllie (talk) 01:42, 16 January 2023 (UTC)
You will realise this is a new topic and that many of the sources on this particular page fall below the usual test for reliability at the moment. I'm currently trying to source some evidence from a University page that should be more trustworthy than many of the sources currently here. Todd Unctious (talk) 01:51, 16 January 2023 (UTC)
If we don't have a reliable source, we don't write about it. If there are unreliable sources in the article, that means we should be making cuts, not adding more unreliable sources. Is this your blog? MrOllie (talk) 01:52, 16 January 2023 (UTC)
No it is a colleague in another department. I have found a similar source from a fellow academic in another institution that should fit the bill. Just editing now. Todd Unctious (talk) 01:54, 16 January 2023 (UTC)
You should not be adding links to your colleague's websites, see WP:COI. MrOllie (talk) 01:57, 16 January 2023 (UTC)
Are we not all colleagues? Where do we stop? Well, if you deem the one I've added to be too close then feel free to delete it, but it is from a university. Todd Unctious (talk) 02:02, 16 January 2023 (UTC)
I should add that I have never worked with Astor. Todd Unctious (talk) 02:04, 16 January 2023 (UTC)
Several accounts with conflict of interest have been systematically adding citations to a particular author (Ireland), that needs to stop. - MrOllie (talk) 02:06, 16 January 2023 (UTC)
Astor is also a selfpublished posting (that is, another blog). MrOllie (talk) 02:14, 16 January 2023 (UTC)
Oh dear. I'll leave it there then. I can't find anything else at the moment. I'm told there should be Web of Science indexed research out soon. For now we will just leave it to the journalists. Todd Unctious (talk) 02:26, 16 January 2023 (UTC)

In today's NY Times, the article "Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach" cites a student saying that ChatGPT "sometimes incorrectly explains ideas and misquotes sources". Nowa (talk) 14:27, 16 January 2023 (UTC)

rm dup refs

I just removed and replaced several instances of the same article used in different reference citations (the ref 1 source was also used for refs 17 and 33). Please verify that this was done correctly. Cheers, Cl3phact0 (talk) 17:16, 3 February 2023 (UTC)

ChatGPT banned in NYC schools earlier than January 3rd

As an NYC public schools student who has been using ChatGPT since its inception, I know that while the news leaked that ChatGPT had been banned in NYC schools as of January 3rd, 2023, numerous other students and I noticed that it had been banned since December 15th, 2022. I don't think this is the best evidence, but here's my proof: https://imgur.com/a/leVStqB ThunderRedStar (talk) 15:11, 6 January 2023 (UTC)

Thanks, I found a source that it was blocked in December and added the info. Rolf H Nelson (talk) 06:18, 16 February 2023 (UTC)

rm Nick Cave photo (and caption)?

Phillip Samuel, not sure why this photo was removed (see: →‎Negative: photo adds no value). Nick Cave is a major public figure (musician, artist, writer, cultural commentator, etc.) who made a very strong statement about the subject in response to the technology being used to emulate his own work. In my view, the photo (and its caption — which contained a partial quote) added valuable context concerning the cultural impact of (and human responses to) this technology. Perhaps some discussion would have been merited beforehand? Please consider reverting. Thank you, Cl3phact0 (talk) 07:59, 5 February 2023 (UTC)

In the last version of the page before my edit, there was no caption in the photo aside from the name Nick Cave. Purely posting a photo of somebody mentioned in the article does not contribute much to the page. Since as you mentioned Cave is a major public figure that commented on this technology, you are welcome to add back the picture with a more substantive caption. Phillip Samuel (talk) 08:38, 5 February 2023 (UTC)
Oh, I see. Thanks for clarifying the edit. (I hadn't spotted that the caption had been reduced to just his name.)
The caption was changed by WikiWikiWayne (→‎Negative: already quoted in article) after having been improved by Ita140188 (→‎Negative: clarified caption). The version that seemed good to me was: Nick Cave was critical of a song written by ChatGPT, commenting "This song is bullshit, a grotesque mockery of what it is to be human."
I defer to those with deeper experience than mine, though I still maintain that the photo and an abridged version of the longer quotation add valuable context to the article. Thoughts? Cl3phact0 (talk) 09:03, 5 February 2023 (UTC)
Seeing that the photo has been re-added to the article, I agree that it and an abridged version of the longer quotation add valuable context. Carlstak (talk) 17:10, 5 February 2023 (UTC)
If there is consensus that this photo stays, then I propose that the caption could read:
A song written by ChatGPT in the style of Nick Cave was panned by the artist, who called it "bullshit" and "a grotesque mockery".
Or: A song written by ChatGPT in the style of Nick Cave was panned by Cave, who called it "bullshit" and "a grotesque mockery".
(Or, if "bullshit" is considered too risqué or offensive in a caption, then: ... who called it "a grotesque mockery of what it is to be human.")
Cheers, Cl3phact0 (talk) 11:05, 6 February 2023 (UTC)
I put in "grotesque mockery" since that's what the Guardian used in its lead. Rolf H Nelson (talk) 06:29, 16 February 2023 (UTC)
Looks good to me the way you've done it. Cheers, Cl3phact0 (talk) 06:35, 16 February 2023 (UTC)

Access

As far as I could find in a quick search of this longish article, there is no mention of how to get access to ChatGPT or of the limitations on access, to wit: affirming that one is at least 18 years old and having a mobile telephone that accepts text messages (i.e., landlines are explicitly stated as not acceptable for sign-up verification). Kdammers (talk) 19:00, 19 February 2023 (UTC)

Does ChatGPT really officially stand for "Chat Generative Pre-trained Transformer"?

The currently cited article doesn't seem to support the claim in the first line of this page that ChatGPT officially stands for "Chat Generative Pre-trained Transformer". There also seems to be no mention of "Chat Generative Pre-trained Transformer" on OpenAI’s website, according to Google. JustSayNoToTypos (talk) 03:00, 26 February 2023 (UTC)

GPT stands for generative pre-trained transformer. Obviously no one calls this "Chat generative pre-trained transformer", but that is what the name means. Elli (talk | contribs) 03:06, 26 February 2023 (UTC)
The cited article does support the claim that GPT stands for Generative Pre-trained Transformer. However, it doesn't support the claim that ChatGPT "officially" stands for "Chat Generative Pre-trained Transformer". I think this may count as failed verification. JustSayNoToTypos (talk) 03:14, 26 February 2023 (UTC)
Fair enough. This probably doesn't deserve to be bolded in the lead. I'll refactor it. Elli (talk | contribs) 03:16, 26 February 2023 (UTC)
@JustSayNoToTypos: this better? Elli (talk | contribs) 03:19, 26 February 2023 (UTC)
I think so. Thank you for editing. JustSayNoToTypos (talk) 03:42, 26 February 2023 (UTC)

The redirect Chat gpt. has been listed at redirects for discussion to determine whether its use and function meets the redirect guidelines. Readers of this page are welcome to comment on this redirect at Wikipedia:Redirects for discussion/Log/2023 March 1 § Chat gpt. until a consensus is reached. Hey man im josh (talk) 19:36, 1 March 2023 (UTC)

Is posting ChatGPT conversations a good idea?

Showing ChatGPT conversations in the article could let us study its strengths and weaknesses.

For strengths, we may ask ChatGPT some strange and specific questions, such as writing an essay arguing that buses are musical instruments, or planning an arrangement of food to spell out "Wikipedia"; or we could show how good it is at history.

For weaknesses, we may show how weak it is on recent news (its knowledge is cut off at 2021), or how badly ChatGPT copes with mathematical problems.

If this is a good idea, we may type the conversations out as text directly instead of uploading screenshots. Beefwiki (talk) 09:34, 28 February 2023 (UTC)

Surely that would constitute Original Research, which is banned from Wikipedia? Sbishop (talk) 11:32, 28 February 2023 (UTC)
I agree. The generation process is stochastic in nature, meaning that you shouldn't make any conclusions from one particular instance anyway. – Popo Dameron talk 15:26, 28 February 2023 (UTC)
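To illustrate the stochasticity: ChatGPT-style models sample each next token from a probability distribution, typically shaped by a temperature parameter, so identical prompts can produce different outputs. A minimal sketch in Python with invented scores (this is not OpenAI's actual decoder):

    import math
    import random

    def sample_token(logits, temperature=1.0):
        # Softmax with temperature: higher temperature flattens the
        # distribution, raising the chance of lower-scored tokens.
        exps = [math.exp(l / temperature) for l in logits]
        total = sum(exps)
        weights = [e / total for e in exps]
        return random.choices(range(len(logits)), weights=weights)[0]

    logits = [2.0, 1.5, 0.5]  # hypothetical scores for three candidate tokens
    print([sample_token(logits) for _ in range(10)])  # repeated runs differ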
@Sbishop Then how about Wikiversity? Would the conversations be suitable there? Beefwiki (talk) 00:13, 1 March 2023 (UTC)
I don't know much about it. But as Wikiversity is said to be:
Wikiversity is a Wikimedia Foundation project devoted to learning resources, learning projects, and research for use in all levels, types, and styles of education from pre-school to university, including professional training and informal learning. We invite teachers, students, and researchers to join us in creating open educational resources and collaborative learning communities. To learn more about Wikiversity, try a guided tour, learn about adding content, or start editing now.
it might be. Why not ask at Wikiversity's own Talk page? But what you suggest certainly doesn't belong in the Wikipedia article, for reasons given. Sbishop (talk) 09:07, 1 March 2023 (UTC)
What do you mean by "Wikiversity's own Talk page"? I can't find a page about ChatGPT there! Beefwiki (talk) 08:40, 2 March 2023 (UTC)
Sorry, I should have been clearer. I mean you could ask about it at:
https://en.wikiversity.org/wiki/Wikiversity:Colloquium
Sbishop (talk) 10:32, 2 March 2023 (UTC)

The redirect SolidGoldMagikarp has been listed at redirects for discussion to determine whether its use and function meets the redirect guidelines. Readers of this page are welcome to comment on this redirect at Wikipedia:Redirects for discussion/Log/2023 March 2 § SolidGoldMagikarp until a consensus is reached. Pizzaplayer219TalkContribs 14:22, 2 March 2023 (UTC)

Interview with its engineers

This MIT Technology Review interview with ChatGPT's makers may be of interest to editors interested in expanding the article: https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/ DFlhb (talk) 19:01, 5 March 2023 (UTC)

I've added a few interesting things, thanks for the link! Artem.G (talk) 10:36, 6 March 2023 (UTC)

This article's lead could be improved

Can somebody please re-write the lead on this article so that an average person can understand it? The lead should quickly tell what the letters G-P-T stand for, not down in the body. The first sentence sounds like gibberish to the average reader. Eagledj (talk) 20:14, 11 March 2023 (UTC)

First, I'd like to point out that there is a note attached to the very first word that explains that 'GPT is an acronym for "generative pre-trained transformer"', which you can see by hovering over it. In any case, however, the concern of wanting the acronym to be quickly explained to the average reader can't really be resolved that easily. A 'transformer' (the T in GPT) is a rather complex architecture that an average reader would have trouble understanding, and the way it actually works is not really relevant enough to be in the lead of the article.
So, can you elaborate on why "ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022" sounds like gibberish? Most average readers know what artificial intelligence is, at least broadly, and I'm sure most know what a chatbot is, so why does this not give a good, clear first impression, in your view? – Popo Dameron talk 20:40, 11 March 2023 (UTC)
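For reference, the operation that makes the transformer hard to summarize for a lay reader is scaled dot-product attention (Vaswani et al., 2017):

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V

where Q, K, and V are learned query, key, and value matrices and d_k is the key dimension. GPT models stack many layers of this operation and are pre-trained on next-token prediction, which is why the acronym resists a one-line lay explanation.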

Knowledge is up to September 2021

The article mentions its knowledge is limited to 2021; however, when using the bot itself, it specifically states "September 2021" whenever it's relevant. I would assume the reason we haven't specified the month is that no RSes have mentioned it? ― Blaze WolfTalkBlaze Wolf#6545 20:36, 16 February 2023 (UTC)

I found a new source and changed it to sep 2021. Rolf H Nelson (talk) 05:08, 20 March 2023 (UTC)

BBC quote

This Wikipedia article said "According to the BBC, as of December 2022, OpenAI does not allow ChatGPT to "express political opinions or engage in political activism"." I removed this because the linked BBC article is interviewing ChatGPT, the AI tool, not OpenAI. All the quotes in the article come from the language model.

Yes, the language model says, when asked, that it's not allowed to "express political opinions or engage in political activism", but ChatGPT always makes up its responses. What it says is obviously not an official statement from OpenAI, and it is not a suitable source for Wikipedia. My edit was undone for some reason; I have now undone that undo. Please discuss before undoing it again.

If we want to include OpenAI's views on what ChatGPT is allowed to do, it should be sourced from what OpenAI says themselves, not from what ChatGPT says to a BBC reporter: https://openai.com/blog/how-should-ai-systems-behave Apinanaivot (talk) 16:34, 30 March 2023 (UTC)

Good catch; I may have added that line myself (it sounds like my pedantic style), but we should indeed remove it, as the BBC article is probably indeed quoting ChatGPT, and not quoting OpenAI as I had likely assumed. Rolf H Nelson (talk) 01:12, 31 March 2023 (UTC)

A Commons file used on this page or its Wikidata item has been nominated for deletion

The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion:

Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 08:23, 9 April 2023 (UTC)

Unrelated statement in training section

At the end of a paragraph in the training section it says:

Proximal Policy Optimization algorithms are a cost-effective alternative to trust region policy optimization algorithms.

This reads like an advertisement, and I'm not sure it should be included in the article. My initial reaction would be to remove it, but I'd like to get some feedback before accidentally doing something I shouldn't. Lunare Scuderia (talk) 11:15, 15 April 2023 (UTC)

If it stays in, a translation would help. Sbishop (talk) 11:23, 15 April 2023 (UTC)
I removed the statement for now in my most recent edit. Feel free to add it back in if you believe that it improves the article, I just feel like it doesn't :) Lunare Scuderia (talk) 17:00, 15 April 2023 (UTC)
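For context, the removed sentence appears to paraphrase the framing of the PPO paper (Schulman et al., 2017): PPO replaces TRPO's expensive KL-divergence trust-region constraint with a clipped surrogate objective,

    L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\left[ \min\!\left( r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right) \hat{A}_t \right) \right],
    \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},

which is cheaper to optimize because it needs only first-order gradients. That is roughly what "cost-effective alternative" was gesturing at, so a plain-language version could be restored if properly sourced.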

A Commons file used on this page or its Wikidata item has been nominated for speedy deletion

The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for speedy deletion:

You can see the reason for deletion at the file description page linked above. —Community Tech bot (talk) 11:57, 18 April 2023 (UTC)

Edit war on ideological bias of ChatGPT

ChatGPT has been "accused" - insofar as an ML model can be accused of anything, it not being a legal person - of having an ideological bias towards the "left" or "progressive" side. After having used the model for a while I can only agree that this is in fact the case: the model tends to present "progressive" views in a more positive light than it does "conservative" ones. When asked about its bias it responds by claiming to be unbiased and neutral, but the answers clearly show this not to be true. As such it is noteworthy that the model is biased, but a section mentioning this bias was repeatedly removed by User:Aishik_Rehman and User:LilianaUwU. I propose to add a mention of this ideological bias to the _Reception, criticism and issues_ section. If anyone wants to point out why this should not be done, speak up. Please realise that merely agreeing with the model's bias is not a valid reason to keep a mention of this bias from the article. Yetanwiki (talk) 19:19, 19 December 2022 (UTC)

If you mean this, it is completely unsourced and had to be removed per WP:V. Whatever mention is added will have to be based on reliable sources. - MrOllie (talk) 19:23, 19 December 2022 (UTC)
The source is my own research but there are plenty of other sources mentioning this bias. The problem here will be that most of the publications which are willing to publish this type of information are "redlisted" or "pinklisted" in the perennial sources. It is, of course, easy to do some of your own experimenting to at least confirm or deny the existence of an ideological bias. A very easy experiment is to ask the bot for a story involving politically sensitive topics. Do not tell it to give advice, just ask it for a story. You'll quickly notice that the tone of the story largely depends on whether the topic at hand is one pushed by the "progressive" side versus the "conservative" side. Another way is to ask it for a number of stories (in new sessions) where the only variable is the sex/race/sexual orientation/... of the protagonist, this will show the same effect. Again do not ask it for advice since that seems to be caught in a filter, also do not ask for "personal experiences" because that too is filtered. Just ask for a story and notice the difference in tone and outcome. I did this many times to see whether the bias was coincidental and found out it was "reliably biased". Still, this is "original research" which does not belong in an encyclopedia so I'll have to find an "acceptable" source which has also "done the work". Yetanwiki (talk) 22:02, 19 December 2022 (UTC)
Keep in mind, when you say "The source is my own research but there are plenty of other sources mentioning this bias" -- your own research is not a valid Wikipedia source. As is stated Wikipedia:No original research, "'No original research' (NOR) is one of three core content policies that, along with Neutral point of view and Verifiability, determines the type and quality of material acceptable in articles." CoffeeBeans9 (talk) 19:00, 18 January 2023 (UTC)
Here are a few recent sources; some of them have already been marked as unclean in the perennial sources, others have not been added to this list. Given the biased character of this list - which insists that MSNBC, CNN and the New York Times are reliable sources while e.g. the New York Post is "unreliable" despite the opposite having been proven by the former and latter reporting on the Biden laptop; it also labels something like the 'Victims of Communism Memorial Foundation' as 'unreliable' because it is 'an anti-communist organisation' while it considers the 'World Socialist Web Site' as 'reliable for the attributed opinions of its authors' and 'more reliable for news related to labor issues' - this should not be a problem given the original intent of the NPOV policy. Here's a few:
ChatGPT’s score system shows political bias is no accident
The political orientation of the ChatGPT AI system - Applying Political Typology Quizes to a state-of-the-art AI Language model
ChatGPT is not politically neutral - The new AI chatbot espouses an all-too-familiar Left-liberal worldview
More are sure to follow as the bias issue is clear and the mentioned experiments are repeatable - I have done so and got the same results. Yetanwiki (talk) 18:59, 21 December 2022 (UTC)
Please note that sources must meet Wikipedia's sourcing guidelines. Blog posts and opinion pieces are not going to pass muster here. If you want to argue that those guidelines are incorrect in an effort to change them, you can do so at WP:VPP. But you cannot simply ignore them. MrOllie (talk) 19:05, 21 December 2022 (UTC)
Unherd is not a 'blog', it is a 'news and opinion' publication which just has not been added (in pink or red, most likely) to the perennial sources yet. I could have cited a Daily Caller article referencing a number of these sources, but that would have been met with a reference to that same perennial sources list where it is listed in, you guessed it, red ('the site publishes false or fabricated information') - in other words, just like CNN/MSNBC/NYT/LAT/etc who are all listed in green. As I already mentioned this is a problem and a well-known one given the plethora of reports on the ideological bias in many Wikipedia articles. This bias has made Wikipedia unusable for anything even tangentially related to politically contentious issues - as ChatGPT clearly is - since it is the keepers of the perennial sources list who get to decide which sources are allowed and which are to be shunned. Had this list been free of bias - i.e. had the same criteria been used for all publications - this would not be a problem, but this is clearly not the case. Yetanwiki (talk) 08:46, 22 December 2022 (UTC)
The perennial sources list is not an exhaustive list of bad sources - many sources are so obviously appropriate or inappropriate that they are not discussed often enough to require an entry on the list. Each entry on the list is made only after several discussions, usually including an RFC with large attendance. There is no small set of 'keepers' as you imply here. - MrOllie (talk) 12:52, 22 December 2022 (UTC)
What do you mean by the claim that 'the perennial sources list is not an exhaustive list of bad sources'? It is definitely not, but in a similar vein it is not a list of good sources. Why focus on the 'bad sources' part here? I don't claim that e.g. Unherd is 'bad', I just expect it to be called such if and when it is added to the perennial sources list, because that list is heavily biased towards 'progressive' sources. As to there not being a 'small set of keepers', I can agree in that there are many editors who contribute to the list (165 individuals are responsible for the last 500 edits, which corresponds to ~14 months). It is not a small group, just one in which the most vocal section happens to fit mostly within the "progressive" spectrum - how otherwise to explain the clear bias this list presents? Objectively speaking CNN is just as bad as Fox News and MSNBC is worse than both, but this is not how the list represents them. Buzzfeed is just as good/bad as the Daily Caller, but this is not represented in the list. The Daily Beast is just as good/bad and certainly as ideologically lopsided as The Daily Wire, but only one of them is marked as 'red, STOP'.
Anyway, since the list itself states that the absence of a source simply means that the source has not been discussed enough to merit inclusion I assume those Unherd articles can be used as sources. Let those who disagree speak up and give a good reason why this would not be the case. Yetanwiki (talk) 22:47, 22 December 2022 (UTC)
If Unherd hasn't been discussed yet, we can certainly open up a discussion on it on the reliable sources noticeboard, but if you already know what the result is going to be, you're wasting your own time and ours. Rolf H Nelson (talk) 03:02, 24 December 2022 (UTC)
Reality has a liberal bias. The world is ever-changing, so of course it supports the ideology that supports progress rather than the status quo. RPI2026F1 (talk) 23:58, 22 December 2022 (UTC)
"Reality is that which, when you stop believing in it, doesn't go away" said Philip K. Dick. Reality is also what drives both conservative as well as progressive thought. When circumstances change it makes sense to look for a different way of doing things, which is what drives progressive/liberal thought. When a good way has ben found it makes sense not to change just for the sake of change without considering the consequences, which is what drives conservative thought. Both are needed since conservatives can be overly cautious when circumstances change and be overtaken by reality while progressives/liberals can get so caught up in their schemes of improvement that they loose sight of reality and soon get caught by it.
BTW, that Colbert quote you refer to ('reality has a liberal bias') is outdated, reality in 2022/2023 has a conservative bias. This will change again, eventually but for now it clearly has. Yetanwiki (talk) 18:15, 23 December 2022 (UTC)
You are right that reality has a liberal bias, and in fact ChatGPT proves it. Because it is not ChatGPT that is biased in this; ChatGPT is just a model that is trained on a large amount of "real world" data. So if ChatGPT is biased it is only because reality is biased. In order to prove that ChatGPT is biased you would need to have access to the source code and point out exactly where the code restricts some particular ideology from being used. All the prompt responses in the world do not prove anything except that real-world data is biased. For this reason you should completely remove this entire "Accusations of Bias" section, because it is nothing but a tool to garner pity for the fake "victims" of bias. ClearConcise (talk) 21:05, 22 March 2023 (UTC)
An opinion article in Reason [1] could be cited as an attributed opinion that ChatGPT has/had a left-wing bias, but if we do so we also need to include other opinions about AI bias as well. Rolf H Nelson (talk) 03:12, 24 December 2022 (UTC)
The section "Accusations of Bias" must be removed until AFTER consensus is reached. It is not okay to just leave up misinformation and force the people removing it to have the burden of proof. The people adding it must provide the citations from RELIABLE sources, which they have not done. ClearConcise (talk) 14:32, 22 March 2023 (UTC)
The paragraph notes that these are accusations and not endorsements of the claims. Since there have been many accusations of bias (from many different perspectives), and they are coming from well-known, widely-read sources (i.e. not some random person's blog), it should be acknowledged in the article. ... discospinster talk 14:45, 22 March 2023 (UTC)
1. re: "not endorsements of the claims" It does not say, "these are not endorsements", and calling them "accusations" is not the same thing as that. Further, it's just an unprovable claim to make anyway without direct access to the source code. ChatGPT wouldn't have an ideological bias; the data that it was trained on would, which of course is just saying "reality is biased" because it is trained on huge datasets from many different sources.
If someone wants to prove that ChatGPT has bias coded into it, they would need to prove much more than what a couple of prompts give you as a result. They would need to prove that the bias is hard built into the model itself, not just the training data.
These accusations are fake outrage trying to push an agenda, and every single source cited is proof of that, because they are all opinion pieces with no references to any biased source code. Why is this Wikipedia content being used to further push this specific biased agenda, when the article should be confined to just the verifiable facts? These accusations are being used as a tool to try and garner pity for the so-called "victims" and they need to be removed from this encyclopedia article.
2. re: "there have been many accusations"
Is there an accusations section on the Google page for all the daily accusations raised against Google? Is there an accusations section on the Fox News page for all the daily accusations raised against Fox News? If you're going by the sheer number of accusations to support the reason for a section on it, then those two entities and many more pages will need to be updated because they of course have way more accusations leveled against them on a daily basis.
Unverifiable, unproven statements are not information that should be included in what is supposed to be a factual page, and this section needs to be removed. ClearConcise (talk) 16:12, 22 March 2023 (UTC)
  1. The section heading "Accusations of bias" is pretty clear that they're opinions. It does not suggest that these accusations are proven or reflective of reality.
  2. There is an entire article about Criticism of Google, as well as Criticism of Google Search which redirects to a section on bias in the Google Search article. Not to mention Criticism of Microsoft and Fox News controversies where the very first section is called Allegation of bias. I expect that there will soon be an article called Criticism of ChatGPT. ... discospinster talk 14:18, 23 March 2023 (UTC)
I think that this is a significant and sourced aspect of the subject. While (at wp:AN) I don't endorse the use of the tools, I agree with discospinster's arguments. Sincerely, North8000 (talk) 21:48, 22 March 2023 (UTC)
Unless the content is vandalism, violates BLP guidelines or is a copyright violation, the burden DOES fall on those editors who want to completely remove large sections of sourced content that are presently in the article. I think rather than taking a hatchet to the article, you should make an argument based on specific sources you object to or work to improve the content in positive ways. You can't just come to an article that has been the work of many editors, make claims of bias and remove whole sections you disagree with. That's not how Wikipedia operates on high profile articles like this one. Liz Read! Talk! 06:32, 23 March 2023 (UTC)
"The political ideology of conversational AI" looks like a WP:PREPRINT, tho i do see a few published citations. fiveby(zero) 22:37, 23 March 2023 (UTC)

The reason ChatGPT has a bias towards Political Correctness or otherwise safe thoughts is, first, that it is trained on information publicly available on the internet, and second, that it's distributed as a service over the internet. When thoughts are permanently published, probably under the name of the author, there is a big incentive towards reducing unpopular opinions. Additionally, when a service is published by a company to millions of people, there's a strong incentive to sanitize it to avoid legal risks.

You may add this point of view to the article if you find a source for it, which you will most likely find because my ideas are great, and there are great thinkers out there who write, and since great minds think alike, and since the truth is pretty self-evident, you will find this thought eventually out there. It probably won't be citogenesis, because no one reads or gives credence to Talk comments, but even if it is, being published under someone else's name and authority means it is no longer Original Research.

You may now close this discussion, as I have ended it.--TZubiri (talk) 03:35, 25 April 2023 (UTC)

More possible subsections for "Implications" section.

  • Writing (like journalism and books).
  • Computer science, programming.
  • Language translation.

It has been applied to each, and there should be sufficient sources. It also has its limitations for each, especially programming. VintageVernacular (talk) 22:48, 5 May 2023 (UTC)

Merge of ChatGPT Plus

The ChatGPT Plus article is useless and redundant in my opinion; please comment at Talk:ChatGPT_Plus#Merge_to_ChatGPT. tldr - there is nothing unique in the paid version of the app, so there is no need for a separate article. Artem.G (talk) 07:13, 22 May 2023 (UTC)

Nope, ChatGPT Plus is quite a different product indeed. All of OpenAI's products should get their own page. Mathmo Talk 13:28, 23 May 2023 (UTC)
And what's the rationale? It's exactly the same product, just with some features for those who pay - faster responses, a better model, plugins - but it's the same chatbot, and even the plugins are likely to be available to everyone after some time. No reliable source talks about the impact of ChatGPT Plus on anything; everyone just calls it ChatGPT.
And not every product should have its own page, only those that are notable. For example, there is no page for GPT-3.5, though it's different from 3 and 4. Artem.G (talk) 14:05, 23 May 2023 (UTC)
Arguably GPT3.5 should not have its own page, but I see no problem with GPT2 vs GPT3 vs GPT4 etc having their own pages. Mathmo Talk 12:11, 25 May 2023 (UTC)

Redirected to the main page. Everything on the main page can be said about the Plus service, and as a subscription service it doesn't require a separate page. Besides, ChatGPT got almost 7 million views in the last 30 days, while ChatGPT Plus got less than 6 thousand. Artem.G (talk) 14:46, 31 May 2023 (UTC)

Revert considering The Story unreliable

User:rolf h nelson, I noticed you recently reverted an edit by User:GeogSage on the basis that its cited source, The Story, is not a reliable source. Could you explain your rationale behind this? From what I'm seeing, the outlet seems fine, if young. The particular cited article's author seems to have credentials as well. I think this is a worthwhile addition to the article. Thanks- StereoFolic (talk) 02:29, 18 June 2023 (UTC)

Thanks for the support User:StereoFolic. I came across the source working on another page and am a bit surprised it was completely reverted. I am not super familiar with the history of The Story, but I checked the author's background (which you linked) before posting. I figured that including an Australian source would be beneficial to help represent a worldwide view and combat systemic bias. It also very much reflects some of the criticism I've seen from artist communities on social media. I love ChatGPT and welcome the prospect of our new AI overlords, but I think the negative perspective/fear/concern in that article was worth including.
I won't undo the revert myself as it is now on the talk page, and I want to avoid edit warring. If you or someone else does after consensus/discussion, I support that.
Just wanted to go on the record on my opinion and why I included the source. GeogSage (⚔Chat?⚔) 02:54, 18 June 2023 (UTC)
"News reporting from less-established outlets is generally considered less reliable for statements of fact", per WP:NEWSORG. Once we go to editorial opinions, the bar for inclusion becomes even higher. Some examples of good global sources are at [2]. Rolf H Nelson (talk) 19:32, 18 June 2023 (UTC)
"When taking information from opinion content, the identity of the author may help determine reliability" per WP:NEWSORG. The author of the piece, James Hennessy, was an editor and writer at Business Insider Australia, and according to the author profile on The Story, has written for The Guardian, The Outline, The Saturday Paper and the ABC.
"When taking information from opinion content, the identity of the author may help determine reliability. The opinions of specialists and recognized experts are more likely to be reliable and to reflect a significant viewpoint." He's not a specialist or recognized expert at ChatGPT, so he doesn't get many points for that. We let in WP:RS, we don't let in everyone who ever wrote for an WP:RS. Rolf H Nelson (talk) 04:56, 19 June 2023 (UTC)
He is not talking about the technical aspects of ChatGPT; the article is discussing the potential impact of ChatGPT and AI on human storytelling. The section is "negative reception," and this is a published author writing about an overall negative reception of ChatGPT based on their concerns regarding the technology. WP:RS is not exhaustive and relies heavily on context, and the context here is the opinion of an established writer on the impact of ChatGPT on writing, within a new but well-put-together niche magazine. According to WP:RS, "Editorial commentary, analysis and opinion pieces, whether written by the editors of the publication (editorials) or outside authors (invited op-eds and letters to the editor from notable figures) are reliable primary sources for statements attributed to that editor or author, but are rarely reliable for statements of fact." What I included is simply the published opinion of the author, and it is reliable in that regard. The author's statement reflects much of what I've seen said by creative communities in regard to AI-generated content, and seems relevant to include in the "negative reception" section. After reading your justification, I disagree even more that the source is not reliable, and agree with @StereoFolic. Based on reading the Wikipedia:Reliable sources/Perennial sources page you linked, I think "The Story" is a niche publication unlikely to ever be indexed on that page, and that in this context, this author's standing as a writer/editor qualifies him as an "expert" on the matter. I would be interested in others' opinions on the matter to work on consensus on this issue, as I doubt you and I can come to an agreement here. GeogSage (⚔Chat?⚔) 06:02, 19 June 2023 (UTC)
As I said, if we're going to bring it in as a subjective opinion, then the bar for inclusion gets higher, not lower; it would have to pass considerations of WP:WEIGHT. If we don't get other opinions this week, consider posting to the WP:RSN board. Rolf H Nelson (talk) 01:51, 21 June 2023 (UTC)
I still consider this to be reliable enough. I see nothing in the article that is factually questionable, and the main opinion expressed (concerns about supplanting human creativity) does not seem well covered in the article at the moment. If there is a better source from a more notable author expressing similar concerns I'd be fine with using that instead of the Hennessy article. In any case, I think the concern here is about notability, not reliability. StereoFolic (talk) 17:11, 19 June 2023 (UTC)
The article has enough sources to meet verification for notability of ChatGPT as a topic. This source would not be needed to build overall verification, just to verify that the statement appeared in an article in "The Story". According to Wikipedia's article on verifiability, "mainstream (non-fringe) magazines, including specialty ones," are acceptable. I would argue that the magazine I linked is a specialty magazine, and not fringe.
"Some newspapers, magazines, and other news organizations host online columns they call blogs. These may be acceptable sources if the writers are professionals." I believe that the author of the publication can be said to be a professional writer, and is commenting on his reaction to the possible future impacts of ChatGPT on writing. I think the source is a noteworthy perspective in the article, but of course, I think that my opinion on the matter is clear. @StereoFolic@Rolf h nelson, is there any way to get more arbitration/additional opinions on the matter? Again, to avoid edit warring I will avoid reverting any edits on the topic and leave it to others depending on consensus. GeogSage (⚔Chat?⚔) 18:41, 19 June 2023 (UTC)
I don't think there's any rush in resolving this. We can wait for others to weigh in and perhaps a consensus will emerge. StereoFolic (talk) 02:53, 20 June 2023 (UTC)
That is true. I'm not used to working on pages with extremely active talk spaces, but here there is a much greater chance of people chiming in. GeogSage (⚔Chat?⚔) 03:25, 20 June 2023 (UTC)
This is a statement of negative reception from an established author, for a section on "negative reception," specifically from a magazine about writing. It is not a description of the inner workings of ChatGPT's neural networks. The Wikipedia "reliable sources" list is not exhaustive, and relies heavily on context. In this case, the context is attributing an opinion of a specific author to a particular source, not stating it as fact. The article does a good job of reflecting the concerns I've seen from the creative writing community surrounding LLMs. I disagree with much of it personally, but it does capture a lot of the discourse in my opinion. I think using the reliable sources guideline to revert the edit, given that the author is established and the publication is relevant to the topic, seems a bit extreme. GeogSage (⚔Chat?⚔) 04:00, 19 June 2023 (UTC)
It looks like the negative reception section already quotes at least two far more notable Australians, singer Nick Cave and Member of Parliament Julian Hill, so there's no need for "including an Australian source ... to help represent a worldwide view and combat systemic bias." Elspea756 (talk) 04:21, 19 June 2023 (UTC)
The content of the source, the author, and the fact that it is not from the United States were the three main benefits to it, in that order. The article itself has 184 sources, most of which are from the United States. Adding other sources from English-speaking non-American countries is beneficial even if there are already some from that country. GeogSage (⚔Chat?⚔) 04:26, 19 June 2023 (UTC)
I think adding every single opinion piece and magazine on the internet is a violation of WP:INDISCRIMINATE and WP:NOTREPOSITORY. Only noteworthy and distinctive ideas (and receptions, both positive and negative) deserve addition to the article; see WP:MINORASPECT. This article already contains undue negative reception. --WikiLinuz {talk} 02:07, 21 June 2023 (UTC)
@StereoFolic@Rolf h nelson@Elspea756@WikiLinuz,
I had some time today, so I took into account the feedback and have completely rewritten the reverted text (see latest revision). Previous assertions are now backed up with citations from "The Atlantic" and "Vox", which are listed on Wikipedia:Reliable sources. I have continued to include the article from The Story, in addition to an article from Vanity Fair and one from Fortune, to further corroborate the claims made in the other two articles. These additional sources are not the primary ones, but lend credence to the argument that these points are discussed across a range of media (they could technically be deleted if necessary and the text would still be supported, I believe, but they do improve it). As the primary source of contention seemed to be how reputable the single source was, and not necessarily the content, I hope this is more fitting. I have added the text as a fresh edit, and don't personally consider this reverting the revert by Rolf h nelson, as I think it is changed significantly from the original and has taken into consideration feedback from the talk page. While I don't agree with the perspective, and welcome the prospect of AI overlords, I do believe the collective negative reaction related to jobs and the human role in creativity was prominent enough to warrant inclusion. If you feel it is still in violation of Wiki policy, I understand that it may require significant changes or be reverted, but I thought it would be easier to see live than not. GeogSage (⚔Chat?⚔) 03:36, 21 June 2023 (UTC)
Those other sources are WP:RS; are any of the points made in the text specific only to The Story and not the other sources? If all the points are made by the other RS you included, then just delete The Story as a source and we're good to go. Rolf H Nelson (talk) 03:45, 21 June 2023 (UTC)
Done deal. The Story added a bit of support (in my opinion) to demonstrate the response was published more widely, but it is not necessary to validate the claim and if that is the disputed point it is easy enough to remove. I reworded one sentence that mentioned "two sources", but all content is supported without it. Thanks for the help, and pleasure arguing with you. Please feel free to review, revise, or make any changes you feel are appropriate, obviously. GeogSage (⚔Chat?⚔) 04:27, 21 June 2023 (UTC)
Looks great, thanks for sticking with it, and thanks all for a productive discussion. StereoFolic (talk) 01:53, 22 June 2023 (UTC)

Avoiding an Edit War: Available Countries and Regions table

I, and @Artem.G, have both reverted the table introduced by @Leesjy2k. Leesjy2k has reverted our edits with minor changes, so I'm preemptively opening up a discussion here to avoid Wikipedia:Edit warring. Personally, I think that the table is completely unnecessary. It has two columns, one for countries and another for whether ChatGPT is available or not. A single sentence listing the few countries it is NOT available in would be much simpler. Further, the source given does not actually say what countries it is not available in, just the countries it IS available in. Absence of evidence is not evidence of absence when it comes to a source: the source only states the countries supported at the time of publishing, and the list may not be exhaustive. I see a second source here that was included by Leesjy2k that addresses this concern; this source is just summarizing the first, and includes a list of "countries missing" from the list. It is important to note this source does not say those countries are not serviced, just that they are missing from the list of serviced countries. Finally, based on experience with @Rolf h nelson on this page, I'd say this source (wepc.com) will probably not pass most editors' view of Wikipedia:Reliable sources; I'm probably more flexible on this topic, and even I would question wepc.com as a source. If more data becomes necessary by country, such a table might make sense, but as it is, I believe it makes the article far too long and offers little content for that increase in page length when a short sentence could accomplish the same.

This is my opinion, and I invite commentary from other editors. I will not delete the table a second time, however, and will follow consensus. GeogSage (⚔Chat?⚔) 05:14, 24 June 2023 (UTC)

I agree, the table is unnecessary. There are already several sentences in the article about availability in several countries (banned in N Korea etc), and there is no need for a table that basically says "available everywhere except these few exceptions". Artem.G (talk) 07:29, 24 June 2023 (UTC)
Agree: Table unnecessary. The same (or similar) table could also be used to show a rogue's gallery of countries that block Google, Wikipedia, etc., don't have a free press, representative government, democracy, etc., etc. Don't see how it improves this article. -- Cl3phact0 (talk) 23:07, 24 June 2023 (UTC)
I have requested a graphic showcasing the geographical availability of ChatGPT at Wikipedia:Graphics_Lab/Map_workshop/.
The primary list is exhaustive as it is directly from ChatGPT itself.
I had included the table to easily showcase, in alphabetical order, the countries and regions that do or do not have official ChatGPT access.
(For example, on numerous visa policy pages, tables specifying the visa requirements for each arriving citizenship are included: visa not required, e-visa, visa required, etc.) Leesjy2k (talk) 02:21, 25 June 2023 (UTC)
If the table is deemed too long, I can propose that the table be split off into a separate article called "ChatGPT availability by country or region", and that it be linked from the "Available Countries and Regions" section of the ChatGPT Wikipedia article. Leesjy2k (talk) 02:28, 25 June 2023 (UTC)
I have decided to split off the table into another page, to assuage concerns that the table was too long; however, I believe the table is still warranted. Leesjy2k (talk) 02:33, 25 June 2023 (UTC)

Semi-protection of this talk page

A note for the record: Since last December, this talk page has had to be repeatedly protected from edits by new and unconfirmed users, due to a high rate of edits that were not consistent with WP:TALK and quite obviously came from people who mistake this page for a chatbot interface where one could post questions to ChatGPT. This seems to be a long-term problem, justifying the unusual measure of maintaining the protection for an extended period of time. See also discussions here and here. Regards, HaeB (talk) 21:35, 1 July 2023 (UTC)

@Floquenbeam and Dennis Brown: Following up on this per our earlier discussion, now that these misguided edits have resumed immediately after the one month protection expired. As last month, I would suggest one year. Regards, HaeB (talk) 01:09, 30 July 2023 (UTC)
When I have a chance to sit down and think, I’ll protect for a year, and figure out how to do one of those unconfirmed-only talk subpages. I’ve seen them, just need to figure out the details. Anyone else is welcome to do it before I get to it. On a phone now and incapable of doing anything besides hitting “reply”. Floquenbeam (talk) 15:01, 30 July 2023 (UTC)
 Done Floquenbeam (talk) 18:06, 30 July 2023 (UTC)
I would have done the same. Dennis Brown - 22:15, 30 July 2023 (UTC)
The banner at the top of this talk page that warns people off doing that doesn't appear on mobile devices without intentionally clicking through first. The issue might be mitigated if that were changed. One solution might be a pinned talk page section about it. VintageVernacular (talk) 19:55, 31 July 2023 (UTC)

Washington Post: "ChatGPT leans liberal, research shows"

Original: https://www.washingtonpost.com/technology/2023/08/16/chatgpt-ai-political-bias-research/

Archive: https://web.archive.org/web/20230816230234/https://www.washingtonpost.com/technology/2023/08/16/chatgpt-ai-political-bias-research/

I think this should be included in the article, but I'd like to hear what others think.

SquirrelHill1971 (talk) 20:56, 17 August 2023 (UTC)

Very similar content is already in the article unchallenged, so I doubt anyone will object. It should cite the actual paper, which nicely enough is open access, unless the news articles give extra value such as comments from the researchers. VintageVernacular (talk) 22:36, 17 August 2023 (UTC)
Though, I notice the other paper that was already there is in a WP:MDPI journal. VintageVernacular (talk) 05:46, 18 August 2023 (UTC)

Removed Wikipedia section

I removed the section that focused on possible impacts of ChatGPT on Wikipedia. This is navel-gazing, as there's a plethora of online sites where AI-generated content might have an impact. The opening lines of this text even note that there's no general consensus on what this means for this particular encyclopedia project. Yes, people have commented on the possible impact and so there are sources, but that alone doesn't mean it should be included in such a way. --ZimZalaBim talk 22:16, 1 November 2023 (UTC)

I agree the old section was undue weight, but I think it may be worth a sentence or two, considering how important Wikipedia is. StereoFolic (talk) 17:20, 2 November 2023 (UTC)

The redirect ChatGPT. has been listed at redirects for discussion to determine whether its use and function meets the redirect guidelines. Readers of this page are welcome to comment on this redirect at Wikipedia:Redirects for discussion/Log/2023 November 6 § ChatGPT. until a consensus is reached. Gonnym (talk) 12:16, 6 November 2023 (UTC)

Nomination for deletion of Template:OpenAI

Template:OpenAI has been nominated for deletion. You are invited to comment on the discussion at the entry on the Templates for discussion page. InfiniteNexus (talk) 06:17, 14 December 2023 (UTC)

New logo?

When I go to https://chat.openai.com while logged out, I see in the top-left corner of the screen a new logo that isn't OpenAI's signature flower(?) design. The logo is literally just ChatGPT●, but I can't find an SVG version of it; it's displayed as text on the website.

Should the logo be changed in the article? QuickQuokka [⁠talkcontribs] 23:21, 3 September 2023 (UTC)

However, it does retain its old logo in some other places. QuickQuokka [⁠talkcontribs] 23:22, 3 September 2023 (UTC)
That's not the logo. The logo is unchanged. Pksois23 (talk) 08:53, 5 March 2024 (UTC)

Financial markets

I think there is a need for a discussion with @CommonKnowledgeCreator and @JPxG and anyone interested about this modification.

Personally, my impression is that the content removed by CommonKnowledgeCreator seems pretty accurate with regard to the sources (these studies indeed say that, although the phrasing should more clearly indicate that ChatGPT outperformed the average of the 10 investment funds, not necessarily all of them). And there is a desire to give the full picture of Patronus AI's study. But it's also confusing, and I think the part on Patronus AI goes too deep into technical details that should be left in the references for further reading. I would probably prefer having just a few simple sentences with easy-to-interpret statistics concisely explaining what ChatGPT is good at (making quick and performant sentiment analysis and investment advice when fed a lot of data, integration with plugins) and what it's currently bad at (causal reasoning and mathematics, lack of knowledge of recent events, need for double-checking, reading facial expressions), e.g. a synthesis of the points in this article that still look relevant today.

That's my opinion, but I confess that I'm not skilled in financial markets (like most Wikipedia readers). What do you think? Alenoach (talk) 17:19, 8 March 2024 (UTC)

To trim down the summary of the Patronus AI study, we can eliminate the list of different SEC filings for starters (like I did for the Anthropic article). Specifically, what about the current summary of the Patronus AI study do you find confusing? I can take a stab at rewording it once I know what is unclear from the current revision. I didn't add the content summarizing the CNN article; I will look over it. -- CommonKnowledgeCreator (talk) 17:30, 8 March 2024 (UTC)
I would say that the protocol of the study is well explained but inherently complicated. Most people really have no idea of the difficulty of using this retrieval system for SEC filings in the first place, so even with all these details, the result remains difficult to interpret (although an 81% error rate for GPT-4 clearly looks bad). And it leaves a sense of confusion with the previous study by finder.com; a reader might think: one study says that ChatGPT is great, others say that it's bad, but the article doesn't make it clear enough why ChatGPT performed well or badly in certain circumstances. But I would also like the fresh perspective of some wandering Wikipedian, if possible, to see whether a consensus emerges.
If I had to shamelessly simplify it, I might write "GPT-4 was shown to frequently fail at analyzing SEC filings (a type of financial statement), along with other AI models like Claude 2 and LLaMA-2." It's less factual, but it's relatively simple to understand and verify, and the details of the study are available in the reference. Alenoach (talk) 18:24, 8 March 2024 (UTC)

I think that we should write the article by using reliable sources trying to figure out what the truth is, and then say that, and if the sources disagree, reflect that in our writing. What we should not do is write "A study proved that it was good.[1] However, a different study later proved that it was bad.[2]"

The organization of this article, bluntly, makes no sense and seems like the product of many sentences being slapped in piecemeal over the course of years. This section is flatly bizarre -- it's in "criticism by industry". The subsection name is "financial markets". Are the financial markets criticizing ChatGPT? Here is a sentence from it:

An experiment by finder.com revealed that ChatGPT could outperform popular fund managers by picking stocks based on criteria such as growth history and debt levels, resulting in a 4.9% increase in a hypothetical account of 38 stocks, outperforming 10 benchmarked investment funds with an average loss of 0.8%.

What part of this is "criticism"? Here is another.

On the retrieval system version, GPT-4-Turbo and LLaMA-2 both failed to produce correct answers to 81% of the questions, while on the long context window version, GPT-4-Turbo and Claude-2 failed to produce correct answers to 21% and 24% of the questions, respectively.

Is this "criticism"? Why does this belong in this section?

The more serious issue, to me, is that this actually has nothing to do with the previous thing. The first citation -- that it "outperformed popular fund managers" -- is cited to a fluff piece from CNN. Buried in the fourth paragraph of that fluff piece -- the one that boasts 4.9% ROI -- is that "Over the same eight-week period, the S&P 500 index, which tracks the 500 most valuable companies in the United States, rose 3%". There is no link to more detailed information, i.e. what the stocks were, what firms it was being compared against -- literally all we have to go by is that CNN is quoting some guy as saying "some of the most popular investment funds in the United Kingdom". The website itself has a page about this -- note it's written by their "Head of Communications & Content Marketing", not an analyst -- that doesn't explain any of this stuff either.

It doesn't give us information like, say, how the model was actually being used: were they prompting it with actual filings? were they prompting it with stock information? news articles? if this were actual credible research there would be a paper explaining these things, but it is clickbait, so there's not. It's also specifically talking about the United Kingdom. The SEC operates in the United States. Moreover, there is not a basis to claim that this is being refuted by different articles, published later, and by different people, about a different country, which the sources have not claimed is relevant to this. What's the connection supposed to be between this and it "failing" to read SEC filings? As far as I can tell, nobody ever claimed or warranted or represented that it could read SEC filings, so it's difficult to see how this is a "failure". Have I "failed" to learn how to speak Xhosa (I have never attempted to learn Xhosa, and never told anyone I was)? Have I "failed" to bench press eighteen thousand pounds, etc?


Like I said -- if we can't be bothered to find actual research that attempts to analyze these things in a detailed way, we should not just be trawling around for random clickbait from content marketers and then pretending it's research in the voice of the encyclopedia. jp×g🗯️ 20:49, 8 March 2024 (UTC)

Editnotice

Several people now have also posted messages at Talk:Gemini (chatbot) believing — or pretending — that they are asking Gemini a question. As the number of chatbots continues to grow, I think it may be beneficial to have a generic editnotice template for talk pages of chatbot or generative AI software. InfiniteNexus (talk) 06:37, 25 February 2024 (UTC)

Agree, this has also been happening over at Sora on a near-daily basis. Since it doesn't seem likely to end this would be a good way to at least reduce the number of edits that need to be undone. Jamedeus (talk) 07:13, 25 February 2024 (UTC)
Done, see {{Generative AI editnotice}}. InfiniteNexus (talk) 08:48, 25 February 2024 (UTC)
Wow this looks fantastic! The automatic link to the AI tool is a nice touch, I hadn't thought of that. Thanks for your work on this.
If I could make one suggestion, it would be nice to add a bool parameter publicly_available that changes the text around the link. If set to False the text becomes something like "<tool> is not available to the public, but you can read more about it <here>". This would be handy for Sora, Codex, etc (and probably more in the future with the endless hype). Jamedeus (talk) 09:09, 25 February 2024 (UTC)
I like this a lot. I think it could help significantly with this issue. I also agree with the idea for a publicly_available parameter. I would additionally suggest maybe including some way (probably either another parameter, or just changing the wording of the notice in general) to adapt the notice to non-chatbot generative AI tools, such as the Sora and Codex models mentioned above, as the existing wording of the notice seems to assume that the tool will be a chatbot. –Gluonz talk contribs 14:25, 25 February 2024 (UTC)
I added a |public=no parameter to change the wording from "to do so, you may visit" to "learn more at". As for the non-chatbot concern, what wording change do you have in mind? Personally, it sounds pretty general to me, but I'm open to suggestions. InfiniteNexus (talk) 19:38, 25 February 2024 (UTC)
I agree that the wording seems mostly neutral. I think my concern would mainly be with the wording of “to ask [the AI system] a question”, since that does imply a chatbot. However, I am unsure whether there is an easy way to reword that without making it sound awkward, and the current wording should probably be understandable enough anyway, so I think that the template is essentially good to go for the time being. –Gluonz talk contribs 17:21, 27 February 2024 (UTC)
I have added {{namespace detect}} templates to detect if it's being used in mainspace and tweak it a little bit so it can be used on vandalized articles (not just talk pages). ~~2NumForIce (speak|edits) 03:34, 11 March 2024 (UTC)
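For editors unfamiliar with how editnotices are deployed, here is a minimal sketch, assuming the |public=no parameter described above (the target page name below is only an illustrative example, and creating editnotice pages requires advanced permissions, as noted in the next reply). The template would be transcluded on the editnotice subpage for the affected talk page, e.g. at Template:Editnotices/Page/Talk:Sora (text-to-video model), with the wikitext:

{{Generative AI editnotice|public=no}}

The notice then appears automatically above the edit window whenever someone opens that talk page for editing.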
I am sorry, but how does one add it somewhere? I cannot find the template in the Talk:Sora (text-to-video model) source code. RodRabelo7 (talk) 18:01, 27 February 2024 (UTC)
You must be an administrator, page mover, or template editor. What page are you trying to add it to? InfiniteNexus (talk) 18:48, 27 February 2024 (UTC)
Great idea. This has been an annoying problem pretty much ever since GPT became a household name. popodameron ⁠talk 21:53, 25 February 2024 (UTC)