Talk:Stochastic parrot
This article was nominated for deletion on 12 May 2023. The result of the discussion was keep.
This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to WikiProject Artificial Intelligence, WikiProject Computer science (Unknown-importance), and WikiProject Philosophy (Low-importance; philosophy of mind and contemporary philosophy task forces).
POV
I think that a bit of re-examination may be warranted for this article. I think a major issue is that -- well -- is it true? The article just repeats a bunch of conjectural claims as though they're fact, without any real evidence for or against. To wit: LLMs "ultimately do not truly understand the meaning of the language they are processing" -- well, okay, what does it mean to "understand"? What does it mean to "truly understand"? Do we have any evidence that somebody has empirically tested this claim by coming up with an operationalized definition of "understanding" and then measuring whether or not these models do it? If not, then it is not a real thing; it's a hypothesis.
Compare: "A bloofgorf is a Wikipedia editor who has rough green skin and lives under a bridge and likes to argue on talk pages and is stupid". Wouldn't it be pretty relevant whether or not anyone had ever seen one of these, or taken a picture of it, etc? We would want to see some evidence that these things really existed, or that someone at least had a reason for thinking that they existed besides publishing a paper saying "Wouldn't it be crazy if these were real???". jp×g🗯️ 06:12, 17 January 2024 (UTC)
- Your tags are both inappropriate. All the AI producers acknowledge that LLMs do not actually understand language. Most of the sources are not the original article, which is cited only twice, while there are 13 other sources! Skyerise (talk) 11:26, 17 January 2024 (UTC)
- Hi, this is an editor from the Chinese Wikipedia. This article is being translated into the Chinese Wikipedia. However, I personally have some difficulty following the Debate section. It looks like the opinions are scattered and the paragraphs are not well connected to one another.
- For example, two paragraphs discuss ''linear representation''. Take ''Othello-GPT'' as an example: I understand the technical side of ''Othello-GPT'', and I think the main point of including it here is to support the claim that an LLM can have some knowledge of space (the physical world, the 8x8 board). But the sentences in this article lost me when they brought in ''linear representation'' and ''mechanistic interpretability''.
- Then some paragraphs discuss ''shortcut learning'', which to me diverges a little from the main topic (whether the output is randomly generated or the model really understands things).
- I really appreciate your efforts in editing this topic. May I ask if the debate part can be further divided into subsections, each with a strong and clear position? Peach Blossom 20:21, 24 May 2024 (UTC)
- I added subheadings and clarified the paragraph with Othello-GPT, thanks for reporting these issues. Alenoach (talk) 02:30, 25 May 2024 (UTC)
- For this reason, I don't really understand the article, and I don't much like it: it presents the idea "LLMs just probabilistically predict the next token" as some sort of shocking and groundbreaking research, when in reality this was never controversial in the field.
- I won't edit it because I waste too much time here already and I'm not sure my objection really warrants much editing (it would just be lots of rephrasing to avoid a misleading impression, but not much substantive difference) -- just a bit puzzled that it has apparently sort of "slipped through the cracks" in the first place, I guess.
- Himaldrmann (talk) 14:34, 25 August 2024 (UTC)
- My "understanding" is that the consensus in the field is that a LLM cannot do problem solving, and therefore is not said to "understand", but that does get into the Turing test and definition of "understanding" and no, I don't have sources saying what the consensus in the field is. (though maybe some could be found, otherwise "opinion" in the lead works). ---Avatar317(talk) 01:08, 18 January 2024 (UTC)
- I don't think it is correct to say that LLMs cannot do problem solving, or that there is a consensus to that effect. GPT-4 can add two integers of many digits. While this is not particularly an argument for understanding (any more than one claims a calculator understands), it is certainly a solution to a problem. It also would seem to undermine the parroting argument: the model is unlikely to have memorized all integer combinations (see the rough count sketched below).
- The article seems to be structured as Stochastic Parroting vs. Understanding, which looks like a false dichotomy; neither may be true. A calculator has neither, for example, and yet it solves problems. 161.29.24.79 (talk) 21:54, 22 May 2024 (UTC)
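The "unlikely to have memorized" point lends itself to a quick back-of-the-envelope count. The sketch below is not from the discussion itself, and the corpus size in it is an assumed figure purely for scale; it just tallies how many distinct n-digit addition problems exist:

```python
# Rough count of distinct n-digit addition problems, illustrating why
# memorizing them all is implausible. The corpus size is an assumed
# figure for scale, not a measured value.

def addition_pairs(n_digits: int) -> int:
    """Ordered pairs (a, b) where both operands have exactly n_digits digits."""
    n_values = 9 * 10 ** (n_digits - 1)  # e.g. 90 two-digit numbers
    return n_values ** 2

ASSUMED_CORPUS_TOKENS = 10 ** 13  # hypothetical training-corpus size

for n in (5, 10, 20):
    pairs = addition_pairs(n)
    print(f"{n}-digit pairs: {pairs:.2e} "
          f"(~{pairs / ASSUMED_CORPUS_TOKENS:.0e}x the assumed corpus)")
```

Already at 10-digit operands the space of problems (~8.1e19 ordered pairs) dwarfs the assumed corpus, so exhaustive memorization is not a live option.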
- WP:OR means you need to find a reliable source that says that; then we can include that perspective. We cannot use your own thoughts and musings unless they are directly backed up by sources. Bluethricecreamman (talk) 14:48, 25 August 2024 (UTC)
Wiki Education assignment: WRIT 340 for Engineers - Spring 2024
This article was the subject of a Wiki Education Foundation-supported course assignment, between 8 January 2024 and 26 April 2024. Further details are available on the course page. Student editor(s): Maria Panicali, Alleeejandro, Nchan103, Quiescent Quail (article contribs).
— Assignment last updated by 1namesake1 (talk) 23:24, 18 March 2024 (UTC)
- Education editors are advised not to change the referencing style. Once works are listed in a "Works cited" section and linked using {{sfn}} templates (this is called CS1 style referencing), they should never be moved back into ref tags inline in the article. Since the first student editor (@Alleeejandro:) did this, I have had to revert all the subsequent additions. I may try to fix them later, but it is the responsibility of new editors to educate themselves about referencing styles and rules before editing articles. Skyerise (talk) 12:18, 6 April 2024 (UTC)
- Thank you for helping us with our contributions; we will do our best to apply what we've learned and not let this happen going forward. 1namesake1 (talk) 16:48, 11 April 2024 (UTC)
- Great. Thanks! Skyerise (talk) 19:03, 11 April 2024 (UTC)
Might be offensive to zoologists
Nancy Shute, the editor in chief of Science News, wrote in the magazine that, "Real parrots, and the scientists who study them, may take offense at that term. ... Now, scientists are discovering that parrots can do much more, ... and sometimes even understanding what we say." ——🦝 The Interaccoonale Will be the raccoon race (talk・contribs) 12:23, 28 September 2024 (UTC)
- Fun anecdote. But on the other side, parrots don't beat humans on language understanding benchmarks (e.g. SuperGLUE) like frontier AI models do. If LLMs were stochastic parrots, it would be a huge limitation (notably due to the combinatorial explosion), and it would probably be really easy to craft a benchmark with private data that demonstrates it. Alenoach (talk) 12:46, 28 September 2024 (UTC)
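If it helps, here is a minimal sketch of the kind of private probe described above. It is an illustration under stated assumptions, not anyone's actual benchmark: `query_model` is a hypothetical stand-in for whatever model API is under test, and exact-substring matching serves as a crude grading rule.

```python
# Sketch of a "private data" benchmark probe: arithmetic problems built
# from fresh random operands, so the exact prompt strings almost
# certainly never appeared in any training corpus.
import random

def make_probe(n_digits: int = 12) -> tuple[str, str]:
    """Return a (prompt, expected_answer) pair with unseen operands."""
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    a, b = random.randint(lo, hi), random.randint(lo, hi)
    return f"What is {a} + {b}?", str(a + b)

def accuracy(query_model, n_items: int = 100) -> float:
    """Fraction of fresh problems answered correctly. `query_model` is a
    hypothetical callable mapping a prompt string to a reply string."""
    correct = 0
    for _ in range(n_items):
        prompt, answer = make_probe()
        if answer in query_model(prompt):  # crude substring grading
            correct += 1
    return correct / n_items
```

A pure parrot limited to retrieving previously seen strings should score near zero on such probes, whereas a model that has learned the addition algorithm would not.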
- I have no view on Alenoach's point or on settling the debate, but the Science News article is definitely interesting to include in the article. Bluethricecreamman (talk) 17:12, 28 September 2024 (UTC)