Talk:Artificial general intelligence/Archive 3
This is an archive of past discussions about Artificial general intelligence. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3
REFERENCES! what a mess.
There are really messed up references throughout the page. The style of citation here is inconsistent with WP as a whole and with other articles from respective projects. I'll slowly work on cleaning it up. Help is appreciated. Cliff (talk) 17:39, 7 April 2011 (UTC)
- I took a pass through and fixed a few things. I made sure that all the shortened footnotes were linked using the
{{harv}}
family of templates, and I found a number of citations that were missing or broken. There are still a large number of embedded links that should probably be fixed. ---- CharlesGillingham (talk) 10:17, 8 April 2011 (UTC)
- Good work, it looks a lot better. The notes section is quite large. Are all of these notes necessary? Which references are for which statements? I'm not used to a citation style like this. I'd like to help, but don't know what we're aiming for as far as style goes. Cliff (talk) 16:28, 8 April 2011 (UTC)
- This appears to be the Clocksin 2003 reference: http://rsta.royalsocietypublishing.org/content/361/1809/1721.short I haven't changed the article because I haven't read the paper yet. pgr94 (talk) 18:42, 8 April 2011 (UTC)
To reflect the importance of glial cells, especially astrocytes, to faithful simulation of the human brain, somebody (better oriented than me, that is) could cite some neurobiology articles describing tripartite synapses, for example. — Preceding unsigned comment added by 80.240.162.190 (talk) 17:35, 25 June 2012 (UTC)
IBM has developed a nanotechnology-based working hardware implementation of a mammalian brain.
"Researchers at IBM have been working on a cognitive computing project called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE). By reproducing the structure and architecture of the brain—the way its elements receive sensory input, connect to each other, adapt these connections, and transmit motor output—the SyNAPSE project models computing systems that emulate the brain's computing efficiency, size and power usage without being programmed.
IBM is combining principles from nanoscience, neuroscience and super-computing as part of a multi-year cognitive computing initiative. The Defense Advanced Research Projects Agency (DARPA) has awarded approximately US$21 million in new funding for phase 2 of the SyNAPSE project. For this project, a world-class, multi-dimensional team has been assembled, consisting of IBM researchers and collaborators from Columbia University; Cornell University; University of California, Merced; and University of Wisconsin-Madison." http://www.ibm.com/smarterplanet/us/en/business_analytics/article/cognitive_computing.html — Preceding unsigned comment added by 173.32.56.170 (talk) 07:04, 31 August 2011 (UTC)
Sourcing, verifiability, truth, and removing content
I don't see any compelling reason to remove the statement about misuse. If this were some controversial or questioned fact, getting specific sourcing might be good. There are/were sourcing issues with the article, but in general I don't think much of the content needs to be redacted for that. Probably better at the moment to work on getting better sources, then reworking the content based on those sources instead of just deleting content. If better sources can't be found in the long term, then the more dubious and trivial statements should be removed. aprock (talk) 19:25, 14 September 2011 (UTC)
Sentient computers in science fiction
The link in the article on SF computers directs to that article, rather than this article.
Should there be a brief section in the Strong AI article on sentient computers in SF (who range from helpful to contrary or patronising towards their human companions (their term for 'pets'))?
Given the way some actual computers do #what they want# rather than what you tell them, many people would argue that the feeling that sentience is beginning to emerge is not just a pathetic fallacy. Jackiespeel (talk) 15:43, 9 November 2012 (UTC)
- I agree that this article should also cover the way the term is used in science fiction. ---- CharlesGillingham (talk) 19:01, 9 November 2012 (UTC)
- There is a fairly clear distinction between 'sentient robots' (including androids, gynoids (sp?) and zoo-oids (Dr Who's K9) - I don't know if plant/mushroom/other category equivalents exist in SF and what they would be called) and 'sentient computers.' List of fictional computers is just that and Artificial intelligence in fiction describes specific 'constructed intelligences.' An overview of the topic would be within WP's remit (but a longer discussion probably falls outside it). Jackiespeel (talk) 22:51, 14 November 2012 (UTC)
- Thanks for the link to AI in fiction—what a cool article! I agree with your assessment that an overview is appropriate. Perhaps we could find a way to focus on certain 'stronger' instances of artificial intelligence (even though most of the AIs on this list probably meet criteria for "strong" as defined here). HAL 9000 and Wintermute strike me as relevant (especially to the "criticisms" section) and well-known. groupuscule (talk) 00:07, 15 November 2012 (UTC)
Difficulty of evolving AI, and education within evolved AI
One of the biggest obstacles I anticipate with AI is not the processing power required to run an intelligent machine, but the processing power required to tune the AI.
Even with processing power equivalent to the brain's, there are massive differences between individual brains, solely as a result of how each brain grows and develops, resulting in the huge disparity between the intelligences of human beings.
AI will be no different. The processing power must be arranged effectively.
A genetic algorithm will be needed to tune the AI, but therein lies a problem. Perhaps an evolution focused solely on the brain would need far fewer iterations, but it will nonetheless be required. And to test the AI, it would need to have lived (or have executed) long enough to measure its learning capacity. So we might have to wait hundreds or even thousands of years before an AI is tuned to (near) "perfection"!
And that's not to mention the complexity of having a system of education within which to train and test the AIs.
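For concreteness, here is a minimal sketch in Python of the genetic-algorithm tuning loop described in the comment above. It is purely illustrative, and evaluate_lifetime() is a hypothetical placeholder for the expensive step being worried about: running a candidate AI through an entire "lifetime" to measure its learning capacity.

import random

POP_SIZE = 50      # candidate AIs per generation
GENERATIONS = 100  # evolutionary iterations
GENOME_LEN = 128   # abstract "wiring" parameters of a candidate AI

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def evaluate_lifetime(genome):
    """Hypothetical fitness function: run the candidate AI long enough to
    measure its learning capacity. In reality this single call is the
    dominant cost, potentially years of (simulated) experience."""
    return sum(genome) / GENOME_LEN  # stand-in score

def crossover(a, b):
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    return [random.random() if random.random() < rate else g for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # One fitness evaluation (one "lifetime") per candidate per generation.
    ranked = sorted(population, key=evaluate_lifetime, reverse=True)
    parents = ranked[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

The point of the sketch is the cost structure: the loop calls evaluate_lifetime() POP_SIZE × GENERATIONS times, so if each "lifetime" takes years, the total tuning time is exactly the bottleneck the comment describes.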
94.185.85.171 (talk) 06:49, 12 March 2013 (UTC)
Rename proposal: AGI
- The following discussion is an archived discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review. No further edits should be made to this section.
The result of the move request was: move. We'll have a simple disambiguation page for now, although I don't think there is any objection here to making Strong AI/Strong artificial intelligence into a broad concept article or something; please feel free. ErikHaugen (talk | contribs) 17:44, 7 February 2014 (UTC)
Strong AI → Artificial general intelligence – Disambiguation Silence (talk) 23:49, 11 January 2014 (UTC) I suggest that we move this article to Artificial general intelligence, and make 'Strong AI' redirect here, with a dab note telling people that Searle's Strong AI thesis is discussed on the page Chinese room. Reasons:
- 'AGI' is mostly unambiguous (at least, it constitutes a unified enough topic to discuss in one article), whereas 'Strong AI' is highly ambiguous (Searle's thesis isn't primarily discussed here).
- Researchers in the field seem to be moving toward preferring 'AGI'.
- "artificial general intelligence" gets 804k Google hits. "strong artificial intelligence" gets only 269k hits ("strong AI" gets even fewer), in spite of soaking up a lot of irrelevant hits from Searle's usage.
- It's confusing to have a 'strong AI' article but no 'weak AI' article. The antonym of AGI is a bit less obvious, so fewer readers will be sent on a wild goose chase.
Any objections? -Silence (talk) 09:13, 8 January 2014 (UTC)
- Oppose. My opinion is that strong AI is more established than AGI and it's not for Wikipedia to predict where a field is heading. Looking at google scholar occurrences: "artificial general intelligence"(1730), "strong artificial intelligence" (3580) pgr94 (talk) 11:08, 8 January 2014 (UTC)
- If you look at the hits you got for "strong AI", the vast majority of them aren't about the topic of this article. Looking at the first three pages of Scholar hits I get in Incognito Mode, they fall into these categories:
- About this article's subject: Copeland 1993, Looks & Goertzel 2006
- About Chinese room instead: Gilder 2002, Sharkey 2001, Searle 1990, Preston & Bishop 2002, Searle 1990, Rey 2002, Melnyk 1996, Amoroso 1995, Sloman 1986, Cam 1990, Penrose 1990, Searle 1982, Wakefield 2003, Bringsjord 1998, Searle 2001, Rey 1986, Puccetti 1980, Sloman 1985, Harman 1974, Sober 1992, Harnad 2001
- About modeling humans as smart machines, as contrasted with building intelligent machines ('weak AI'), instead: Hobbs 2006
- Refers to strong 'athletic identity' (= AI) instead: Horton & Mack 2000
- Refers to people named 'Norman Strong' and 'Abdullah Iqbal' (= AI) instead: Iqbal et al. 2006
- About strong 'aluminum' (= Al) instead: Parfitt et al. 1991
- About strong 'A_1' modes in physics instead: Shifman et al. 1979, Zeiger et al. 1992
- About a different topic, mention rather than use 'strong AI' in this article's sense: Berger et al. 2000
- So, at a cursory glance, only 2 out of every 30 articles using our article's title are actually about what we're talking about. (And one of those two articles, Looks & Goertzel 2006, mainly uses 'AGI' instead of 'strong AI'.) It looks like the case for moving is much, much stronger than I thought. Even if the numbers of contemporary articles about our topic using 'strong AI' and using 'AGI' were the same, the sheer number of articles using 'strong AI' with other meanings makes the title unsuitable here. -Silence (talk) 01:38, 9 January 2014 (UTC)
- Assuming we're seeing the same list, it looks like "strong artificial intelligence" is a well-established term. pgr94 (talk) 18:56, 10 January 2014 (UTC)
- Yes, and it is one that evidently should be the name of Chinese room, if of any article. Do you see the difference between the topic of Chinese room and the topic of this article? -Silence (talk) 17:09, 11 January 2014 (UTC)
- My new recommendation is to turn 'Strong AI' into a disambiguation page, noting the following meanings:
- Artificial general intelligence, a hypothetical machine that exhibits behavior at least as skillful and flexible as humans do.
- The research program of building an artificial general intelligence.
- Computational theory of mind, the view that human minds are (or can be usefully modeled as) computer programs.
- The more general thesis, usually associated with physicalism and mechanism, that human minds are (or can be usefully modeled as) machines.
- Searle's Strong AI thesis, the view that syntactic rules suffice for understanding (or semantics, or consciousness), criticized in the Chinese room thought experiment.
- What do you think? -Silence (talk) 01:48, 9 January 2014 (UTC)
- Is there a reliable source, e.g. an AI textbook, that makes this distinction? I find this too subtle a difference to warrant significant changes. If you're convinced changes are needed, perhaps get some more opinions from editors over on artificial intelligence? pgr94 (talk) 18:56, 10 January 2014 (UTC)
- Which distinction are you asking for a source for? I made several distinctions above. -Silence (talk) 17:09, 11 January 2014 (UTC)
- A couple of clarifications please:
- Is strong AI and Searle's strong AI thesis the same thing?
- Is the proposed change due to an evolution in the meaning of the terms?
- Is the proposed change due to a change in popularity of the labels AGI/strong AI?
- What do contemporary AI textbooks say?
- Thanks for helping me understand.
- The AGI community used to be disproportionately vocal; perhaps this perception is no longer accurate. I think there is reason for caution.
- pgr94 (talk) 17:36, 28 January 2014 (UTC)
- 'Strong AI' is ambiguous. Sometimes it refers to AGI, sometimes to entirely different things. But it seems to most often be used for Searle's thesis, yes.
- Somewhat. I think 'AGI' has pretty much always had the same meaning, and 'Strong AI' has been somewhat ambiguous for a long time. But as 'AGI' has become the more mainstream term used by AI researchers, 'Strong AI' has increasingly become philosophers' jargon specific to debates in the vicinity of Searle's.
- I believe so.
- Artificial Intelligence: A Modern Approach discusses 'Artificial General Intelligence' on page 27. It also follows the philosophical literature in using 'weak AI' to mean 'an artificial system exhibiting intelligent behavior' and 'strong AI' to mean 'an artificial system that can really and truly think / experience / mean things / etc.'. In other words, the article we're currently calling Strong AI is exactly what Russell & Norvig call 'Weak AI'. -Silence (talk) 03:32, 30 January 2014 (UTC)
- A couple of clarifications please:
- Support I like the idea of a disambiguation page (similar to the page we have for Weak AI), and I think that the term "AGI" has gained a lot of ground since we last tried to sort this out; even Kurzweil's people are using it now. I support the rename, and the disambiguation.
- I have to criticize your concrete proposal above, however. I think that a two-way distinction between "AGI" and John Searle's "strong AI hypothesis" is sufficient: you don't need the three-way division. First note that "Physicalism" or "mechanism" are different from and more general than "computationalism" or "functionalism." Beyond that, any distinction between other forms of computationalism and "Searle's strong AI" is unnecessary; according to Searle, strong AI is computationalism or functionalism. (Searle writes in The Rediscovery of the Mind that his "Strong AI" is the same thing as Dennett's "computer functionalism", which is a form of computationalism).
- Similar searches that I have done in the past have shown that the vast majority of academic sources (including Russell & Norvig's standard textbook) use "strong AI" to refer to Searle's strong AI, rather than Kurzweil's. Thus it seems like a mistake to name this article after Kurzweil's definition, especially when AGI is available and popular and unambiguous.
- As to sources: Kurzweil makes a very clear definition in his books and websites, Searle makes a very clear definition in his papers and books. These are very well sourced in both this article and over at Chinese room. Their definitions are different and mutually incompatible (unless one assumes Searle is wrong -- see below). Kurzweil's "strong AI" is AGI. Searle's "strong AI" is the assumption that AGI will be conscious. (Searle agrees with Kurzweil that "AGI" is possible and may one day exist. He just disagrees it will necessarily be conscious.) With both of them using the same term for different things, it can be very confusing. A quick search of both names shows that Searle and Kurzweil have been at war for a long time,[1] so we are advised to be careful.
- Kurzweil assumes intelligent function requires consciousness, and so, for Kurzweil, saying "as intelligent as a human" (as in Kurzweil's definition of "strong AI") implies "has consciousness" (as in Searle's definition of "strong AI"). Thus, if you agree with Kurzweil and disagree with Searle, then the "strong AI hypothesis" and "AGI" are close to the same thing (one is a philosophical position, the other is a capability, but close enough). This source [2] assumes Kurzweil is using the term this way.
- I would venture to speculate that most of the people who use the term "strong AI" to mean "AGI" make the same assumptions as Kurzweil: they assume that intelligent behavior requires consciousness, and thus a machine with "artificial general intelligence" would need a simulation of consciousness, and they assume that a simulation of consciousness would actually experience consciousness as we do, and thus any machine with "AGI" would be conscious. These assumptions are Searle's strong AI. (And these are the assumptions that Searle does not make; he would call them an "ideology".) So calling "AGI" by the name "Strong AI" is, in my view, unfair to Searle: it can be taken as an assertion that Searle is wrong.
- If this argument is correct, then it is a mistake to name an article about AGI "Strong AI". AGI is clearly defined in terms of intelligence, and leaves consciousness out of it, allowing Searle's "strong AI" to be proven or disproven by later work.
- Forgive me for going on about this. I support the name change, and the creation of a disambiguation page. ---- CharlesGillingham (talk) 05:34, 18 January 2014 (UTC)
- Glad we agree! Pgr94's evidence seems pretty dead, so if no one comes up with a new argument against the move, I'll make it in 4 days. -Silence (talk) 09:05, 28 January 2014 (UTC)
- Oppose creation of a disambiguation page, as these are not wholly unrelated concepts. bd2412 T 22:16, 28 January 2014 (UTC)
- Does it bother you that almost no academic sources define "strong AI" the way it is defined on this page? (As shown above by Silence and shown by me several years ago? See the footnote 54 under origin of the term.) I think you could argue pretty persuasively that this article uses the "wrong" definition of "strong AI" -- it uses a definition only prevalent in science fiction and Raymond Kurzweil. This is what Silence and I would like to correct. ---- CharlesGillingham (talk) 01:45, 30 January 2014 (UTC)
- bd2412, do you also oppose renaming this article? My main proposal here is to rename 'Strong AI' to 'artificial general intelligence', which is a good idea even if we don't make a dab page. If we don't make it a dab page, then we'll need 'Strong AI' to redirect to 'Chinese room' rather than this page ('Artificial general intelligence'), since it appears that the great majority of people using the term 'Strong AI' are referring to Searle's definition (which is discussed in Chinese room, not here), not to artificial general intelligence. If you're suggesting that we merge Chinese room into this article, then I'm afraid I don't understand; the topics are distinct, and there's too much material on each to squeeze both into a single article. -Silence (talk) 07:39, 30 January 2014 (UTC)
- I have no opposition to changing the title, but a disambiguation page will offer no clarity on why different people have made different uses of the term within the same field. What is needed is an article (or a section redirect) succinctly explaining what the boundaries of "Strong AI" and "Weak AI" are according to different people. A disambiguation page can't accomplish that, as it can have no extended discussion, and no references or external links. bd2412 T 14:59, 6 February 2014 (UTC)
- Oppose per bd2412. This sounds like nitpicking, and I'm not sure "artificial general intelligence" really is unambiguous anyway. If there's enough material to create spin-off articles, great, but a move isn't merited IMHO. SnowFire (talk) 23:14, 29 January 2014 (UTC)
- Artificial general intelligence is defined in terms of "intelligence". "Strong AI" is defined in terms of "having a mind" and "consciousness". This is not the same thing. If we spin off an article on artificial general intelligence then ALL the material in this article should go there, and this article will properly contain nothing except the section on Strong AI#Origin of the term: John Searle's strong AI. ---- CharlesGillingham (talk) 02:03, 30 January 2014 (UTC)
- It still doesn't sound to me like the topics are ambiguous; it sounds like a topic with subtopics. Ambiguous is Mercury, which might mean a planet, a Roman god, or a car line. bd2412 T 16:55, 3 February 2014 (UTC)
- They aren't subtopics. Norvig and Russell's AI textbook actually calls what we're calling 'Strong AI' in this article Weak AI. What they call Strong AI is the topic we discuss on Chinese room. Perhaps we should rename that article to "Strong AI", but we certainly shouldn't leave this article with the wrong name. The fact that we can also avoid ambiguities and equivocation in the process is an important bonus. -Silence (talk) 07:22, 4 February 2014 (UTC)
- Do you oppose the rename I proposed? It's confusing to conflate 'Oppose creation of disambiguation page' with 'Oppose rename proposal', and bd2412 only opposed the latter. 'Artificial general intelligence' doesn't need to be 100% unambiguous; it only needs to be unambiguous enough to warrant a single article. 'Strong AI' doesn't meet that criterion, as shown by the fact that Chinese room (which discusses Searle's concept of strong AI) is currently a separate article from this one. The two topics aren't totally unrelated, but they are distinct. -Silence (talk) 03:11, 30 January 2014 (UTC)
- Assuming the proposal is unsuccessful, I think a move to Strong artificial intelligence would be prudent. Per WP:TITLEFORMAT, abbreviations and acronyms should be avoided in titles, even if AI is sufficiently primary to redirect to Artificial intelligence. --BDD (talk) 23:15, 30 January 2014 (UTC)
- Support either proposed title or Strong artificial intelligence per BDD Red Slash 02:06, 6 February 2014 (UTC)
- The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page or in a move review. No further edits should be made to this section.
Re-examining rename/move
I commented below the closed move discussion, but I figured this should have its own section, since the move discussion is closed and archived. My points (some the same as above) are that: 1.) I don't think that consensus was achieved in the original discussion, and a disambiguation page was not warranted. 2.) Nearly ALL links to Strong AI are referring to this article, and there are only two targets on the disambiguation page - this weighs heavily in favor of making this the main redirect. 3.) These concepts are closely related and not even entirely distinct as far as I can tell. This makes it much harder to tell what page you want to look at just based on the new article titles and the brief introductory sentences on the disambiguation page. As above, I suggest that for starters we change Strong AI to a redirect page to Artificial general intelligence and possibly add a soft-redirect template to the top of the page linking to either Computational theory of mind or Strong AI hypothesis. 0x0077BE (talk) 17:53, 10 February 2014 (UTC)
- Regarding the close to move: can you elaborate on why you think the close was inappropriate? Closing discussions is not simply counting heads, of course; the arguments to move were quite compelling. ErikHaugen (talk | contribs) 19:23, 10 February 2014 (UTC)
- It's not that it was counting heads, it's just that I didn't see any consensus emerging, other than for a rename to either AGI or Strong artificial intelligence, rather than the DAB. I found bd2412's arguments compelling about why a disambiguation page is not adding any clarity, and didn't see any significant rebuttal on that notion. New users coming to the discussion (SnowFire, Red Slash and BDD) similarly did not seem persuaded by arguments about the DAB, but rather about renaming the article. So I guess I would say I think a close was likely appropriate for a move to AGI, but not for adding the disambiguation page. 0x0077BE (talk) 19:56, 10 February 2014 (UTC)
- Well, my closure does not preclude BD2412's proposal, on the contrary I tried to encourage it, although I am likely not going to be writing the article myself. Please feel more than free to make this change. BDD and redslash didn't seem to object to what I did, did they? Compelling arguments, not really refuted by anyone, were made that AGI is not the primary topic, if there is one, for the term Strong AI. I don't think I could have fairly closed the discussion by setting up a redirect from Strong AI to this article. ErikHaugen (talk | contribs) 20:18, 10 February 2014 (UTC)
- Regarding your claim about the links ALL (in caps!) coming here, having fixed a bunch of them I find this claim surprising. Do you have numbers, or is this just a general impression that you got somehow? ErikHaugen (talk | contribs) 19:23, 10 February 2014 (UTC)
- It's quite possible that it's a selection bias or that I just happened to get more AGI hits than Searle-type hits. I think what's more likely is that, since this wasn't a fork but a move, most direct links to Strong AI would be intended for AGI, since until now that was the title of this article. 0x0077BE (talk) 19:56, 10 February 2014 (UTC)
- Regarding only two articles on the disambiguation page: per WP:TWODABS, I'm assuming your point is that, since you believe "Strong AI" almost always refers to AGI and not conscious machines, Strong AI should redirect here? ErikHaugen (talk | contribs) 19:23, 10 February 2014 (UTC)
- This could be a skewed perception from the fact that most of the links I've been disambiguating have been pointed at AGI, but yes, I would say that that is accurate. If the Searle-type is the more common form, a redirect from that to this article would also be fine. Still, I've seen it explicitly mentioned in articles about the Searle-type that his notion of "Strong AI" is not to be confused with the notion of AGI, but I have not seen the reverse. 0x0077BE (talk) 19:56, 10 February 2014 (UTC)
- Ok, well I think if a case can be made that there is a primary topic (i.e., WP:PRIMARYTOPIC) for strong AI then let's set up a redirect, but if not then a disambiguation page with only 2 entries is what we usually do in these cases. ErikHaugen (talk | contribs) 20:18, 10 February 2014 (UTC)
- I agree, I may have to defer to people who can take a more objective handle on things, though. I had assumed that AGI was the primary topic because of the balance of the links that I was seeing. I think maybe if we go based on Silence's list of references, Strong AI should redirect to Chinese Room with a hatnote to AGI and/or computational theory of mind. 0x0077BE (talk) 04:50, 11 February 2014 (UTC)
- I too think the move was a little rushed. Terminology does indeed change over time, but Wikipedia should be conservative and terminology should follow respected AI textbooks.
- Looking at p.27 of Russell and Norvig (3rd ed) the same weight is given to "Human-level AI" as "Artificial General Intelligence". Should we therefore have a Human-level AI article too? At the end of the day, the burden of evidence should be on those proposing the change and should be based on textbooks. pgr94 (talk) 23:41, 10 February 2014 (UTC)
- Poole & Mackworth (2010) don't appear to mention "artificial general intelligence" at all. pgr94 (talk) 01:33, 11 February 2014 (UTC)
Artificial Intelligence and Infinity
I added this subject, in good faith, as it is an important issue concerning Artificial Intelligence with finite machines. It has been suppressed with the argument that it needs references...
Well, as can easily be seen below, the text is self-sufficient, like many other paragraphs we can read in Wikipedia. For instance, check the paragraphs written about the power set: they have no references, and none are needed, as it is just simple mathematical reasoning. This is mathematics, not literature or other matters of opinion.
Mathematical reasoning has a problem... it is either true or false by itself. It does not need a citation to become true. --AlvoMaia (talk) 16:59, 22 June 2014 (UTC)
Infinity Paradox on Artificial Intelligence (... Suppressed ...)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
A finite device can never generate the notion of infinity.
Therefore the notion of infinity cannot be acquired by a computer or by any other device that can only produce a finite set of results in a finite lapse of time.
This is a simple mathematical paradox, as the combinations from a finite set S are in the finite power set 2^S. This applies either to computer states or neuron states. If the notion of infinity could be obtained from some finite combination, this would mean that infinity would be equivalent to that finite sequence, which is obviously a contradiction.
Thus, infinity and other abstract notions have to be pre-acquired in any finite device.
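For reference, the counting fact this argument relies on can be stated precisely; the notation below is mine, not the original poster's:

\[
  |S| = n \;\Longrightarrow\; \bigl|2^{S}\bigr| = 2^{n} < \infty,
\]

so a device whose outputs are combinations (subsets) of a finite state set S can produce at most 2^n distinct results in any finite lapse of time; for example, n = 3 allows only 2^3 = 8 combinations.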
--AlvoMaia (talk) 16:59, 22 June 2014 (UTC)
- As you've been told by multiple editors, please provide sources. Your unwillingness to do so, and instead writing reams about why you don't need a source, is an indication sources may not exist for your interpretation. --NeilN talk to me 17:15, 22 June 2014 (UTC)
--NeilN It is as much an interpretation as any other statement written in any language. Either you understand it or not. I don't, Neil. I already pointed out that references are absent from most mathematical work in Wikipedia, and that strict policy now seems to be followed only for some topics. For instance in this page, I can add "citation needed" to several sentences, starting from the first. But I would not suppress it. I'm sorry, but if you insist on suppressing material just with the "citations needed" argument, then to be consistent with yourself, you would need to suppress most of Wikipedia's paragraphs. Otherwise, I'm sorry to say that you seem to be just randomly suppressing information, even if you don't realize it. --AlvoMaia (talk) 17:57, 22 June 2014 (UTC)
- @AlvoMaia: See WP:OTHERSTUFFEXISTS. Just because other unsourced text exists does not give you a free pass to add yet more unsourced text. If we did as you wanted, Wikipedia would be quickly overwhelmed by unsourced "interpretations". --NeilN talk to me 18:15, 22 June 2014 (UTC)
--NeilN I understand you Neil. It is the well-known argument, "let the Chaos inside, if I can control it". Sorry, I don't have citations for that power argument! I just hope you understand that those kinds of justifications were better presented by the Spanish Inquisition when willing to avoid publication of undesired material. — Preceding unsigned comment added by AlvoMaia (talk • contribs) 18:43, 22 June 2014 (UTC)
- @AlvoMaia: A comparison to the Spanish Inquisition instead of providing a proper cite. Wow. BTW, adding "--" before my name causes the software to think that that's a signature and won't trigger a notification. --NeilN talk to me 21:21, 22 June 2014 (UTC)
- @NeilN: You're right, "Spanish" is probably surplus. Thanks for the warning. As you probably understand, I don't kneel to that. --AlvoMaia (talk) 22:01, 22 June 2014 (UTC)
- Just to clarify your concerns, the paragraph being self-contained is completely irrelevant. There are numerous issues on why we need references: is inclusion encyclopedic? Even if the argument is self-contained, the suggestion that the issue is central to the subject of the article needs citation. Otherwise we come across the question of: according to whom? According to the Wikipedia article on Artificial general intelligence?
Wikipedia does not publish original research, even if that research might be of high quality. I will only mention it once, but the only reason no one has reported you for breaking the WP:3RR rule is because they most likely believe you could contribute properly if you only took time to get acquainted with what Wikipedia is, and what it is not. I suggest you refrain from edit warring so that hopefully no ban needs to be dealt out.-- CFCF (talk · contribs · email) 21:44, 22 June 2014 (UTC)
- @CFCF: I insist, a single paragraph is not a question of "original research"; it is a question of common sense. No citation is needed for this: if you understand it... just copy it. Otherwise, if you do not understand it, you should not cite it. Anyhow, you can cite a webpage, even if it is a webpage history file. Thanks for your constructive answer. --AlvoMaia (talk) 22:01, 22 June 2014 (UTC)
- Please, these policies are not here arbitrarily, but exist to ensure that illegitimate content is not added to Wikipedia. Wikipedia does not cite itself, this goes against the policy WP:Circular. If you wish to take up these policies or suggest we are incorrectly implementing them you can start a Request for comment. -- CFCF (talk · contribs · email) 22:23, 22 June 2014 (UTC)
- @CFCF: Yes, I understand that and I agree in general. However, above the rule there is human judgment, to check whether the rule applies or not. Otherwise you will have machine-like human behavior. Rules are not blind, but it seems that here they are. Notice the time lost discussing the procedure and not the issue... just because of one blind rule. --AlvoMaia (talk) 23:03, 22 June 2014 (UTC)
- @AlvoMaia: The only one wasting our time is you with your refusal to provide a reference. The material you added isn't going to stay in the article unless you provide a proper reference. Unless you have something else to say other than, "I don't have to provide a reference", I'm done here. --NeilN talk to me 23:47, 22 June 2014 (UTC)
- @NeilN: I understand that without an ID citation card (issued by "reliable sources"...) you can shoot. Go ahead, do what you want, you are the man with the gun. Did you see me undoing the suppression? No! So what is your problem? You cannot kill ideas, even if you kill the one who expresses them. Best wishes. --AlvoMaia (talk) 00:51, 23 June 2014 (UTC)
- Whether you think Wikipedia is run following human judgement or not is completely irrelevant, and to be frank I do not care. Sure, you can make an analogy to shooting someone, but no one is saying you're wrong, or that you should be shot/banned for trying to improve the wiki, etc. What we are saying is that you need to follow the rules, just like everyone else. There are no exceptions!
I'm closing this discussion, seeing as talk pages are for discussing the article, and ways to improve it, not discussing policy etc. etc. which is handled on policy pages. This is not a general forum. -- CFCF (talk · contribs · email) 11:07, 23 June 2014 (UTC)
Probability of an intelligence solving any given problem
A problem is a time-limited interaction with an environment, with a goal that is either achieved or not achieved.
- The intelligence probability is the probability of an intelligence solving any problem given to it.
For an intelligence there is a probability of it being posed any given problem. The problems may form a continuum, so the probability must be considered as a probability measure.
So,
- The measurement of intelligence for each problem is either,
- 1 - The intelligence will achieve the goal.
- 0 - The intelligence will not achieve that goal.
- The intelligence probability is,
- The weighted sum over the probability measure of problems of the measure of intelligence for each problem.
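A compact way to write the definition above (this formalization is mine, not the original poster's): let P be the set of problems, μ the probability measure by which problems are posed, and s(p) the per-problem score. Then

\[
  s(p) =
  \begin{cases}
    1 & \text{if the intelligence achieves the goal of problem } p,\\
    0 & \text{otherwise,}
  \end{cases}
  \qquad
  I = \int_{P} s(p)\, d\mu(p).
\]

This has the same shape as Legg and Hutter's universal intelligence measure cited in the next section, which weights computable environments by the complexity prior 2^{-K(μ)} rather than an arbitrary measure.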
Thepigdog (talk) 11:46, 17 November 2014 (UTC)
Covered briefly in Turing test
Measuring universal intelligence: Towards an anytime intelligence test, José Hernández-Orallo and David L. Dowe
An Approximation of the Universal Intelligence Measure, Shane Legg and Joel Veness, arXiv:1109.5951 [cs.AI][1]
Universal Intelligence: A Definition of Machine Intelligence, Shane Legg and Marcus Hutter, Minds and Machines, December 2007, Volume 17, Issue 4, pp. 391–444
Thepigdog (talk) 13:32, 17 November 2014 (UTC)
References
- ^ An Approximation of the Universal Intelligence Measure, Shane Legg and Joel Veness, 2011 Solomonoff Memorial Conference
Origins and usage of "Artificial General Intelligence" are just WRONG
Although I must distance myself from the editing of this article, being an AGI researcher myself, I am nevertheless obliged to point out the conflict between what this article says about AGI and what people who created the field consider it to be about. I will offer the following as pointers for objective editors to pick up and research.
1) The term "Artificial General Intelligence" first gained widespread currency through the discussions that took place online, in the 2000-2005 period, among a group of researchers who felt frustrated by the way that mainstream AI researchers in the 1990s and 2000s had redefined "AI" to be something that no longer made reference to actual, complete thinking machines of the sort that were very much at the fore during the first decade or two of AI. For many people inside and outside the AI community, back in the 1990 - 2006 timeframe, the term "AI' had been diluted so much that it meant little more than "computer program that did some things that were fairly sophisticated" (for references, look at the textbooks published in this period). For people who wanted to go back to the original ambitions of AI (to build complete thinking systems, at about the human level of domain-independent competence) there was no way to refer to what they were trying to do, without getting confused with mainstream AI. Indeed, the label "AI" had already become a commonplace, being slapped on any computer product, willy-nilly, if it just contained some code that featured a few algorithms that were of above-average cleverness.
2) By about 2005, Ben Goertzel was one of the main people pushing the term. He had formed his own small group called "Artificial General Intelligence Research Institute" or AGIRI, and he, along with Pei Wang, organized the 2006 AGIRI Workshop in Bethesda MD. This was the immediate precursor to the first conference on Artificial General Intelligence, which happened one year later. Online discussions of AGI topics were first known to me on the SL4 discussion list, and then after the flame war that occurred there in late 2006, many people left and the discussion list whose name was actually "Artificial General Intelligence" was created - by Ben Goertzel - as a venue for people who disliked the cult-like nature of the old SL4 list (the latter quickly declined and then died, after that).
3) This article refers to John Searle and his Chinese Room paper as the origin of the term "Artificial General Intelligence" .... but the actual text mentions ONLY the term "strong AI", and nowhere in his writings at that time did Searle ever use "AGI"!! This entire section on the origin of the term - and indeed, ALL CONFUSIONS between the term AGI and Strong AI - should be removed from this article. Among AI researchers and commentators, the term "Strong AI" has been around for a long time, and it means "The claim that an AI program can ACTUALLY "think", in some sense of the word "think" that puts it on a par with the human mind" as contrasted with the term "Weak AI" which means "The claim that an AI program is simply doing things that resemble "thinking" when those same things are observed in a human being, but with no implication that this "really is" thinking". These usages of Strong and Weak AI can be found in many references (at the very least, check out Hofstadter and Dennett's book The Mind's I), and it is quite clear to people in the AGI community that their usage of "AGI" is not synonymous with "Strong AI". LimitingFactor (talk) 16:36, 23 February 2015 (UTC)
- @LimitingFactor: Agree on all counts.
- I think your first point is covered in the "Mainstream AI" section. It isn't a direct critique, but it attempts to tell the same story. Take another look and see if you see it that way -- I'd be interested if you disagree.
- I agree we should make more prominent mention of Goertzel, perhaps in the lede. (More on this in a moment.)
- The problems you mention are mostly due to the fact that this article was originally called "strong AI", and was only recently renamed to "artificial general intelligence". This explains the Searle section -- it's talking about the origin of the term "strong AI", not the origin of the term "artificial general intelligence".
- Finally, I wonder if you would agree on one other point. The subject of this article is "human-level-or-higher intelligence in machines" -- so it's not just about "artificial general intelligence" of the 2000s, it's also the article about "strong AI" (as used by Kurzweil and science fiction), or Newell & Simon's "general intelligent action", or any other attempt over the years at general intelligence. In fact, most of the links into this article come from science fiction and point through the term "strong AI". So my question is this: is this a bad idea? What should we call this article, if we can't call it "AGI" or "Strong AI"? How could we fix the lede to make it clear that this is about "strong AI", "AGI" and "GOFAI", all three? ---- CharlesGillingham (talk) 09:31, 24 March 2015 (UTC)
Found this article while looking for possible missed remnants of Wikipedia:Articles for deletion/UNICE global brain project. I am leaving this note here that I blanked this section because it is basically the same content as that deleted article. 野狼院ひさし u/t/c 12:23, 4 March 2015 (UTC)
- Effectively reverting this diff, basically. 野狼院ひさし u/t/c 12:25, 4 March 2015 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to one external link on Artificial general intelligence. Please take a moment to review my edit. If necessary, add {{cbignore}}
after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}}
to keep me off the page altogether. I made the following changes:
- Added archive http://web.archive.org/web/20090411050423/http://www.singinst.org:80/upload/LOGI/LOGI.pdf to http://www.singinst.org/upload/LOGI//LOGI.pdf
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}
).
An editor has reviewed this edit and fixed any errors that were found.
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—cyberbot II (Talk to my owner · Online) 13:06, 28 February 2016 (UTC)
Why is there no entry about mathematically proven & critically acclaimed AIXI formalism for AGI?
There is a proven-optimal (though incomputable) AGI formalism called AIXI: https://en.wikipedia.org/wiki/AIXI It is strange that there is not a single mention of this result in this article. There is also a half-century-old approach to AGI (related to AIXI) called Solomonoff's theory of inductive inference: https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference Again, not a single reference.
I think it would be in the interest of the reader to know about rigorous approaches to AGI. — Preceding unsigned comment added by 92.100.136.123 (talk) 21:05, 11 July 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified one external link on Artificial general intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20100118120034/http://www.bbsonline.org:80/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html to http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}
).
An editor has reviewed this edit and fixed any errors that were found.
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 02:39, 19 October 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified 3 external links on Artificial general intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm to http://www.transhumanist.com/volume1/moravec.htm
- Added archive https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm to http://www.transhumanist.com/volume1/moravec.htm
- Corrected formatting/usage for http://www.bbsonline.org/documents/a/00/00/04/84/bbs00000484-00/bbs.searle2.html
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 00:36, 10 July 2017 (UTC)
Distinction "Artificial general intelligence" and "AI-complete"
In my opinion we need a clear distinction between Artificial general intelligence and AI-complete in both articles. Currently they are described as being pretty much the same; even so, it's somehow not easy to distinguish between them.
--89.204.154.200 (talk) 20:47, 10 November 2017 (UTC)
- "AI complete" is a class of problems, "AGI" is a capability of machines. An "AI-complete" problem is a problem that requires AGI to be solved perfectly. However, "weak" or "narrow" solutions to AI-complete problems can be useful, if not perfect. ---- CharlesGillingham (talk) 19:15, 18 November 2017 (UTC)
strong AI
ASI does not stand for strong AI. It is well known that ASI stands for artificial super intelligence. The article is in error. https://en.wikipedia.org/wiki/Intelligence_explosion 108.93.181.106 (talk) 05:01, 25 March 2018 (UTC)
Proposed Edit
I propose to add the following short article:
Artificial general intelligence
The concept of Artificial Intelligence says that some day inventors can build a machine that has the same smartness as a human being. This is the logic behind Artificial General Intelligence (AGI). Futurist and author Martin Ford interviewed 23 well-known personalities involved in Artificial Intelligence. Each one made a guess as to what year there will be at least a fifty percent chance that AGI will be developed. (Vincent, James, "This is when AI's Top Researchers Think Artificial General Intelligence will be Achieved", November 27, 2018, https://www.theverge.com/2018/11/27/18114362/ai-artificial-general-intelligence-when-achieved-martin-ford-book; Ford, Martin, Architects of Intelligence, 2019, http://book.mfordfuture.com/) According to the Artificial General Intelligence Society, AGI refers to the "emerging field that aims to build thinking machines or general-purpose systems that has intelligence comparable to the human mind." (http://www.agi-society.org/)
Thank you!
LOBOSKYJOJO (talk) 02:46, 21 January 2019 (UTC)
Snasci AGI
DeepThought News (https://deepthoughtnews.wordpress.com) has, over the last several weeks, been releasing research from Snasci AGI. At present, 17 articles have been written, spanning from database solutions right through to higher brain functions. Taken together, these articles outline the bare bones of a functional AGI. The series focuses mainly on the various engines used to power an AGI, as well as how to wire the brain. Interestingly, the approach in the articles tends to gloss over AI-complete problems; however, it is very clear how these solutions fit into the overall picture, and the structure of this AGI may lend itself to alternative solutions. This probably needs a mention in the Wikipedia entry. — Preceding unsigned comment added by 154.59.156.85 (talk) 13:18, 28 October 2019 (UTC)
Speculation
I added 'speculative' in the first sentence because AFAIK no system that claims to fulfill the definition here has been presented or published until now. Any sources to prove me wrong? --Bernd.Brincken (talk) 13:58, 3 March 2020 (UTC)
- I agree that no AGI system exists today. I think practically everyone agrees with that ... certainly everyone with a clue of what they're talking about would agree with that. The article should certainly make it clear to readers that you can't go out and buy an AGI today. But I'm not sure that "speculative" is the right way to describe that fact. To my ears, "speculative" kinda has a connotation of "unlikely to be even possible". But I think practically everyone thinks that building AGI is possible, and will happen sooner or later if technological development continues (although some think it's many centuries away).
- Maybe "hypothetical" would be a better term? Not sure. --Steve (talk) 19:44, 7 March 2020 (UTC)
- Let me remind you that the hypothesis of AGI 'soon to come' was expressed as early as 1956 by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon at the Dartmouth Conference, proposing a project: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Several waves of AGI-enthusiasm followed, see also AI winter, among them the prediction of intelligence explosion, later Kurzweil's Singularity. You may also have a look at this recent discussion.
So calling AGI 'speculative', in 2020, after decades of experience with the hypothesis, is perfectly reasonable. --Bernd.Brincken (talk) 14:24, 13 March 2020 (UTC)
- Speculative is a WP:CLAIM word. It should only be used if there are much stronger sources arguing against the claim than arguing for the claim. Since there is strong sourcing on both sides, we should use a more neutral word than 'speculative'. Rolf H Nelson (talk) 20:16, 15 March 2020 (UTC)
- I agree with Rolf. The hypothesis of "AGI soon to come" has always been controversial, and remains controversial today. The hypothesis of "AGI will never ever happen, not in a million centuries, because it is fundamentally impossible or incoherent" is a quite rare view compared to its opposite. What reliable sources have you seen that in? Is this really what you believe?
- More generally, the argument "We haven't invented X yet, despite considerable effort over decades, and despite many good people making optimistic promises and then failing... therefore X will never ever be invented, not in a million centuries" seems like a pretty bad argument in general. Is that your argument? Hasn't this argument been falsified over and over and over again through history? --Steve (talk) 13:11, 19 March 2020 (UTC)
- No, I did not say and I don't think that "AGI will never be invented". It might be possible, we don't know. What we do know is that it has been predicted by the brightest minds of the field - see above list - several times, and their predictions failed.
This is a different situation from, for example, the warp drive - where no reputable scientists predicted it to be realised within a certain time span, AFAIK. It is also different from fusion energy, which seems to be quite close to being realised, with little objection to its feasibility in principle, and where several experiments on the way to a success state have been conducted with positive results.
Hypothesis would be misleading, IMHO, as it implies an "explanation for a [real] phenomenon". What would be that phenomenon in the AI field?
Not even insect intelligence - for example 1 Mio Neurons of a cockroach - has been simulated with artificial neural networks successfully.
So yes, the idea of realizing AI on a par with humans is speculative; I see no better adjective for this situation, but I am always willing to learn. --Bernd.Brincken (talk) 19:26, 22 March 2020 (UTC)
- A hypothesis also has a broader dictionary meaning of "a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation". A supposition is "an uncertain belief". So hypothesis doesn't seem misleading in this context; it's used in the "supposition" rather than the "proposed explanation" meaning. Rolf H Nelson (talk) 20:51, 22 March 2020 (UTC)
- So what does AGI suppose? AFAIK there is no specific or unspecific (systematic? philosophic?) question or supposition that it offers to solve, which would justify the term "hypothesis", except its own possibility. And the supposition has failed several times before - how do you assess this fact? --Bernd.Brincken (talk) 11:44, 25 March 2020 (UTC)
- I disagree that "the supposition has failed several times before". The claim that "AGI is coming soon" has failed several times, but that is not what we're talking about here. The claim that "AGI is possible" has not failed. Quite the contrary, it is widely accepted, and indeed I think it's awfully hard to deny that AGI is possible, given the existence of (1) human brains and (2) Turing-complete artificial computers.
- "Hypothetical" means exactly the obvious thing you suggest at the end: "We hypothesize that it could exist". Analogies are hypothetical star, hypothetical chemical compound, hypothetical species, hypothetical protein, etc., though they're not perfectly exact analogies. Maybe "theoretical" would also be OK, as another option worth considering. --Steve (talk) 01:04, 26 March 2020 (UTC)
- AGI proponents hypothesize (or suppose, or have an uncertain belief) that scientists will, in the foreseeable future, be able to build "a machine that has the capacity to understand or learn any intellectual task that a human being can", or some similar notion. Others hypothesize that such a machine is impossible, and still others that such a machine is possible but will remain unfeasible even for the next centuries or millennia, depending on their level of skepticism. I'm not sure what the disconnect is about the definition, but AGI as "hypothetical" is well-sourced in any case. Rolf H Nelson (talk) 04:48, 26 March 2020 (UTC)
- Proposal for compromise:
- We put 'hypothetical' in the first sentence, adding two or more reputable sources for this term, researched by you. Just to avoid any WP:OR suspicion.
- We add a clause in the intro chapter ".. while some scientists consider AGI to be speculative" - with the two sources I added on March 3.
- Okay? --Bernd.Brincken (talk) 13:11, 29 March 2020 (UTC)
- So you felt no need to answer here, but inserted one source. Ok, I re-inserted the "speculative" & refs, now in the last sentence. --Bernd.Brincken (talk) 16:00, 2 April 2020 (UTC)
- I'm not following you, who inserted what? Rolf H Nelson (talk) 04:22, 3 April 2020 (UTC)
- It's not going to be clear to all readers what the difference is between hypothetical and speculative, or whether there even is a difference to the reader apart from connotation. What specific point do you want the lede or article to make to readers that it's currently not making? We can certainly have a couple of sentences summarizing different schools of thought, or a sentence stating that AGI does not currently exist. Rolf H Nelson (talk) 04:41, 3 April 2020 (UTC)
Edits
- Without offering his arguments here, User:Mithoron chose to replace 'speculative' with 'hypothetical' on March 7. This is not how WP is meant to work, so I reverted his edit. For why 'hypothetical' cannot be the right term here, see the edit above. --Bernd.Brincken (talk) 19:37, 22 March 2020 (UTC)
- @Bernd.Brincken Correct me if I'm missing something but from the history it looks like you added 'speculative' on Feb 28 and were reverted Mar 3 and Mar 7. You need to build WP:CONSENSUS on this page to make the change. — Preceding unsigned comment added by Rolf h nelson (talk • contribs)
- @User:Rolf h nelson, on March 3 I reacted to the argument "no good disambiguation for speculation", which was correct, and inserted the word without the internal link, while adding two sources. Then I started the discussion here. While it was going on, User:Mithoron changed the article without offering his arguments here. Okay?
BTW, would you kindly consider signing your posts? --Bernd.Brincken (talk) 11:38, 25 March 2020 (UTC)
- The rejection was also due to "inclusion only confuses the first sentence". That said, even if Jaydavidmartin turns out to be OK with your change, you're still claiming consensus based on the edit remaining for (most of) 8 consecutive days, which comes across as hypocritical, since Mithoron's later edit remained for 15 days and thus would have an even greater claim to consensus. Rolf H Nelson (talk) 04:48, 26 March 2020 (UTC)
- I was not claiming "consensus", just ongoing discussion. --Bernd.Brincken (talk) 13:07, 29 March 2020 (UTC)
Nice try
"Today's AI is speculated to be many years, if not decades, away from AGI" :-))
So the speculation is not AGI, but that it is not yet achieved.
"Todays rocket propulsion is speculated to be many years, if not decades, away from Warp drives."
This is really funny; let's leave this here as a nice illustration of how AI evangelists promote their cause in online media. --Bernd.Brincken (talk) 10:23, 12 April 2020 (UTC)
- Bernd.Brincken Fantastic to have some kind of consensus on this finally, not from an AI evangelist, but someone with an MA in Applied Linguistics (it's all in the semantics, right?) who is working on global public administration of nuclear fusion and of military AI ;-). If only I could tell you everything I knew about militarized AI... I have now added this key source to justify the rewording: 'When Will AI Exceed Human Performance? Evidence from AI Experts'. P.S. The secret to warp may be in the (revised) Natário variant of the Alcubierre drive, as I have suggested in my edit to the page. Johncdraper (talk) 08:36, 13 April 2020 (UTC)
- Cpt. Crunch, that's nice, we met one time in Berlin at the CCC club, or conference :-)
About 'AI exceeding human performance' - of course there are various fields where this is already the case, starting with chess and not ending with Go. But IMHO the real challenge, as I have explained in my book (Künstliche Dummheit), lies in the social abilities of humans. And these are learned slowly, over decades, in constant exchange with other humans. For anything artificial entering this sphere, the dominant phenomenon we (/it) would likely experience is: racism. --Bernd.Brincken (talk) 10:08, 13 April 2020 (UTC)
- Bernd.Brincken I am actually another John Draper working on military AI right now, but that's another story. The article I cite for that sentence points to around another three quarters of a century for an AGI/ASI. I don't discount that we could get something like an AGI by 2045-2050 in the case of an AI arms race where, e.g., China and the US go for full-blown militarization and weaponization of AI, but the end result would likely be an abomination and, as you point out, the socialization problem means that it would be racist or speciesist. Johncdraper (talk) 13:17, 13 April 2020 (UTC)
- I guess that's a resolution for now, then, one way or another. Feel free to drop by again if you change your mind and want to discuss further in the future. Rolf H Nelson (talk) 06:42, 14 April 2020 (UTC)
- No resolution, just a form of sarcasm. A sentence "xx is speculated to be many years, if not decades, away from xxx" is manipulative language, basically unfit for any encyclopedia. If you support this, consider more research on the concept of WP, like WP:NPOV. --Bernd.Brincken (talk) 22:02, 14 April 2020 (UTC)
- I'm glad we can get back on topic then. I don't understand the relevance of warp drives, or racism, or breakfast cereal to the objectionable sentence; thank you for providing a more straightforward explanation of your point. Also please replace the section title with something informative if you get a chance. Rolf H Nelson (talk) 06:57, 17 April 2020 (UTC)
- Is there alternative wording summarizing the given sources in a non-editorial manner that you would find not manipulative? Rolf H Nelson (talk) 06:57, 17 April 2020 (UTC)
Deletion of talk content
User:Rolf h nelson, you deleted a part of the conversation. Bad idea. --Bernd.Brincken (talk) 21:46, 14 April 2020 (UTC)
- This is a clear violation of WP rules - Wikipedia:Talk_page_guidelines#Editing_others'_comments:
- The basic rule, with exceptions outlined below, is to not edit or delete others' posts without their permission.
- Please bring back the content in the above chapter, in order to show that you accept WP rules.
--Bernd.Brincken (talk) 21:57, 14 April 2020 (UTC)
- I added back the first of the three paragraphs since you seem to have no problem with it. For the other two, I've asked for an admin opinion on whether to add them back; if I don't get an opinion in 24 hours, then I'll add them back. Rolf H Nelson (talk) 06:59, 16 April 2020 (UTC)
- My apologies, then, Oversight judged it doesn't qualify as a WP:dox, so I restored the material. Rolf H Nelson (talk) 06:57, 17 April 2020 (UTC)
Functional Modeling Based Approaches
This note is to encourage feedback on functional modeling based approaches, to ensure that any mention of such approaches meets community standards. Numerous edits have been made to remove any mention of functional modeling based approaches. To avoid edit warring, please raise any objections to such approaches here rather than simply deleting any mention of them, particularly when they refer to multiple peer-reviewed sources, and particularly when some researchers believe that functional modeling is potentially the most important direction in AGI research today. CognitiveMMA (talk) 16:09, 9 March 2021 (UTC)
- I want to try to be helpful here. While I'm not qualified to address questions of WP:UNDUE, I do think that the addition in question reads more like an opinion piece or original research commentary than an encyclopedic overview. An example is "If it's true that no one completely understands how cognition works, any estimates for how long it might take to create an artificial system of cognition like an Artificial General Intelligence since incremental progression towards a destination can’t be measured if the destination is unknown." MaxwellMolecule (talk) 16:18, 9 March 2021 (UTC)
- This section seems to largely be built around promoting a very recent (2020) conference paper by Andy Williams. As far as I can tell this paper has been cited only once, by a preprint article also written by Williams. This is WP:UNDUE - we need some evidence that unrelated researchers have picked this concept up and are publishing on it before we can accept that this is 'potentially the most important direction in AGI research'. I also agree with MaxwellMolecule's POV concerns. - MrOllie (talk) 16:35, 9 March 2021 (UTC)
Proposed new section on Functional Modeling Approaches to Artificial General Intelligence
Comments or feedback are invited.
Functional Modeling Approaches
Functional modeling is a significant departure from mainstream modern Artificial Intelligence research. Being based on the human-observable functions of cognition, Human-Centric Functional Modeling is essentially a first-person approach to modeling cognition. The opinion that research in the cognitive sciences, and by extension research into systems of artificial cognition such as AGI, is dominated by third-person approaches has been widely expressed in the cognitive science literature: "According to a widespread view, first-person methods were largely discarded from psychology after the fall of introspectionism a century ago and replaced by more objective behavioral measures"[1]. However, this has also been called "a step that some authors have begun to criticize"[1]. The deep polarization "between investigations of third-person, objective, correlates (e.g., neuroscience and cognitive science) and investigations of first-person, subjective experience and phenomena (e.g., introspection and meditation)"[2] has been described as a source of bias that causes proponents of third-person approaches to reject first-person approaches, and therefore to reject all functional modeling approaches as mere "opinion". If it is true that functional modeling approaches define the only reliable incremental path towards AGI, and if it is true that a functional modeling approach is required to maximize the capacity to achieve AGI, then this fundamental bias constitutes a roadblock to the actual attainment of AGI.
Functional modeling of cognition,[3][4] however, is a potentially promising and perhaps essential approach to Artificial General Intelligence. Any dynamical system with repeatable behavior can potentially be modeled in a human-centric way in terms of the minimal set of functions that can represent all the human-observable behavior of the system. All the states that the system can access through such functions then form a "functional state space" which the system moves through. Such models have the potential to represent all observable behavior of the system, even where the mechanisms and structures implementing those functions are unknown. Applied to human cognition, this creates the potential to model all known functions of that cognition. Furthermore, because such functional models of cognition are independent of implementation, one might be used to implement an artificial cognition (an Artificial General Intelligence)[5].
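To make the "functional state space" idea above concrete, here is a minimal sketch assuming nothing beyond that paragraph; the toy system, its two functions, and all names (State, eat, learn, functional_state_space) are invented for illustration and are not taken from the cited papers:

```python
# A toy dynamical system modeled purely by its observable functions.
# The "functional state space" is the set of states reachable by
# composing those functions, regardless of how they are implemented.

State = tuple  # e.g. (hunger, knowledge), each level in 0..2

def eat(s: State) -> State:
    hunger, knowledge = s
    return (max(hunger - 1, 0), knowledge)

def learn(s: State) -> State:
    hunger, knowledge = s
    return (min(hunger + 1, 2), min(knowledge + 1, 2))

FUNCTIONS = [eat, learn]  # the minimal set of observable functions

def functional_state_space(start: State) -> set:
    """Enumerate every state reachable from `start` via FUNCTIONS."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for f in FUNCTIONS:
            t = f(s)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

print(sorted(functional_state_space((2, 0))))
```

The point of the sketch is only that the model is defined by what the functions do to observable state, not by how they are implemented, which is the implementation-independence claimed above.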
If it is true that the current understanding of how cognition works is incomplete, then no estimate of how long it might take to create an artificial system of cognition such as an Artificial General Intelligence can be relied upon, since incremental progress towards a destination can't be measured if the destination is unknown. However, the recent development of what is suggested to be the first complete functional model of artificial cognition[5] potentially changes this.
If such a functional model is indeed complete, if all models of AGI being developed by any researcher and all narrow problem solving processes implemented by any AI researcher can be decomposed into these functional components, and if the “fitness” of these implementations at achieving these observed functions can be measured, then a functional modeling approach must provide a reliable incremental path towards AGI. Furthermore, if this fitness of each component can be compared, then an optimal set of components must be obtainable, meaning that such an approach maximizes the collective ability of all AI and AGI researchers to reliably converge on the set of components that best implements AGI. This has been suggested to create the potential to effectively combine all AGI research into a single meta-project with the potential to more rapidly and reliably converge on the functionality required for AGI.
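As a purely illustrative reading of the convergence argument above (the component names, team labels, and fitness scores below are hypothetical and not drawn from the cited paper), selecting the "optimal set" would amount to a per-component argmax over measured fitness:

```python
# Hypothetical example: if every implementation can be decomposed into
# the same functional components, each scored by a comparable fitness
# measure, the "optimal set" is simply the best scorer per component.

candidate_implementations = {
    "memory":     {"team_a": 0.71, "team_b": 0.64},
    "reasoning":  {"team_a": 0.58, "team_c": 0.80},
    "perception": {"team_b": 0.90, "team_c": 0.77},
}

optimal_set = {
    component: max(impls, key=impls.get)
    for component, impls in candidate_implementations.items()
}
print(optimal_set)
# {'memory': 'team_a', 'reasoning': 'team_c', 'perception': 'team_b'}
```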
- If the WP:UNDUE concerns raised by others could somehow be addressed, I would ask for this text to be shortened substantially, cutting out the sentences with a non-neutral point of view and/or original research. Some sentence on critical reception by independent researchers would be ideal. Again, this is all assuming WP:UNDUE could be addressed, which is not guaranteed. MaxwellMolecule (talk) 16:59, 9 March 2021 (UTC)
References
- ^ a b Rigato, J., Rennie, S.M. & Mainen, Z.F. The overlooked ubiquity of first-person experience in the cognitive sciences. Synthese (2019). https://doi.org/10.1007/s11229-019-02136-6
- ^ De Quincey, Christian. "Intersubjectivity: Exploring consciousness from the second-person perspective." Journal of transpersonal psychology 32.2 (2000): 135-156.
- ^ Metzler, Torsten, and Kristina Shea. "Taxonomy of cognitive functions." DS 68-7: Proceedings of the 18th International Conference on Engineering Design (ICED 11), Impacting Society through Engineering Design, Vol. 7: Human Behaviour in Design, Lyngby/Copenhagen, Denmark, 15.-19.08. 2011. 2011.
- ^ Helms, Michael, Swaroop Vattam, and Ashok Goel. "The effect of functional modeling on understanding complex biological systems." International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Vol. 44137. 2010.
- ^ a b Williams A.E. (2020) A Model for Artificial General Intelligence. In: Goertzel B., Panov A., Potapov A., Yampolskiy R. (eds) Artificial General Intelligence. AGI 2020. Lecture Notes in Computer Science, vol 12177. Springer, Cham. https://doi.org/10.1007/978-3-030-52152-3_38
Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 25 January 2021 and 16 May 2021. Further details are available on the course page. Student editor(s): Regulinecoast1. Peer reviewers: C.robinrcbc.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)
Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 29 January 2019 and 8 March 2019. Further details are available on the course page. Student editor(s): Tgs847.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 14:47, 16 January 2022 (UTC)
Proposed removal of confusing/unclear tag to *Possible explanations for the slow progress of strong AI research*
I propose removing this tag as the section is now largely coherent, though admittedly not complete. Johncdraper (talk) 13:38, 29 August 2022 (UTC)
Proposed removal of needs expansion tag to /Feasibility/
I have now better contextualised this section, including by moving in a paragraph mentioning the Dreyfussian perspective from the previous section. This section does not need to be complete because it is a summary of another page and only indicative, and I think that, while imperfect, it achieves that. Johncdraper (talk) 13:44, 29 August 2022 (UTC)
Quotes from social media
Wikipedia should not be a dumping ground for opinion quotes gathered from social media, even if some of those quoted are notable people. Please stop adding these over and over. MrOllie (talk) 20:32, 17 October 2022 (UTC)
- @MrOllie: I am not convinced of the wisdom of this approach. Of course, it would be bad to turn Wikipedia into a "dumping ground for opinion quotes gathered from social media", but the content at the center of this edit war does not look like that to me at all. Yann LeCun is not just some random guy who happens to pass GNG -- he's one of the foremost researchers in the field of neural networks. I realize that the fact of his comments being posted on one website rather than another presents us with a bit of a difficult situation. However, templates like {{cite tweet}} exist for a reason: sometimes notable stuff happens on websites that we don't typically consider to be reliable sources. jp×g 21:09, 17 October 2022 (UTC)
- We can quote him from his papers, we don't need to be digging into the comment sections of facebook posts. MrOllie (talk) 21:20, 17 October 2022 (UTC)
- I don't understand what you mean by "digging into". Is there a factual issue with the source? jp×g 00:33, 18 October 2022 (UTC)
- I think one issue here is that Wikipedia looks dimly on social media posts, e.g., Twitter, even if factually correct. If a scientific breakthrough is worth announcing, at least one news source should be carrying it, complete with a quote. There are exceptions, e.g., if a future US president tweets that the US has created an AGI before actually declaring the achievement via a formal announcement. Otherwise, encyclopedias simply don't do social media quotes. It's part of the filter for good sources, so to speak. Johncdraper (talk) 08:14, 18 October 2022 (UTC)
This line feels like a sentence fragment. Is it connected to the preceding text? "in this world where intelligent behaviour is to be observed." 71.34.86.238 (talk) 05:25, 26 February 2023 (UTC)
The Pranav Test
I will remove this due to not being able to find anything about the author. It seems like a random dude just added his test in this article without being notable. Mrconter1 (talk) 10:25, 7 March 2023 (UTC)
Ability to make edits without reversion: Integral Mind completed, proven development of first AGI, first superintelligence (proven by US Govt)
After 25 years of work, and many years in the US Govt, Integral Mind developed and proved out the first AGI and first superintelligence. All required capabilities and properties were proven via multiple methods and extensive past performance.
This reality necessitates multiple changes to the AGI Wikipedia page and allows us to answer many of the questions posed therein with real-world results. In addition, this is no longer an unsolved question in Computer Science.
We could simply make all necessary changes, but we are concerned that this will lead to reverts and edit wars, and we want to avoid this.
- Our first trial edit was reverted by WatkynBassett, who noted their understanding that consensus was required and that there would be a very high bar for this type of edit.
The edit at issue accurately reflected the state of the art, but we certainly want to work via community processes to ensure the edit (and subsequent appropriate edits) won't be reverted in future.
How does one go about establishing such a consensus, and how is the bar defined? If either of these are fundamentally arbitrary, that would be an issue of some concern.
But I believe we have an excellent solution.
- General and Strong AI must be proven by showing that certain properties hold in all cases and that the AI implementation is capable of certain core capabilities. Our implementation exhibits additional desirable properties such as transparency, safety, self-control, and nuance, which make the AI ideal for use in situations where life is at stake and/or the human cannot be expected to supervise the system. This will eventually be true of any superintelligence.
After our internal discovery that this was in fact the first AGI, we worked closely with the US Government for many years. During that time, multiple personnel spent years understanding and evaluating the system in depth and using it to perform various tasks. A DoD-compliant Validation and Verification (V&V) process was developed to prove that all necessary properties held and all core capabilities were present. This process was completed successfully. These personnel then completed signed documentation attesting to the fact that all requirements had been met.
- The environment in which the AI was validated could not have been more stringent; absolutely everyone wanted this to fail. Understanding the inner workings of the AI requires highly significant learning and paradigm shifts, time, and openness, and also raises IP and proliferation concerns. Given that it took years for the government to understand how the AI worked and to run all pertinent tests, and that we have documentation that these tests were successful, we submit that it would be neither fair nor proper to force us to repeat those years of work here.
If we can prove the Government was satisfied, we believe that Wikipedia should, in good faith, be satisfied as well.
To that end, I don't want to doxx the people who signed the documents, but would be happy to show them to a selected editor who could attest to their authenticity in this forum and in so doing allow us to make our edits.
Would this be an acceptable path forward for the community?
Thank you very much. — Preceding unsigned comment added by Daniel.olsher (talk • contribs)
- In a word: no. Wikipedia runs on verifiability through citations to reliable sources; we can cover information only when they can be cited to published, reliable sources, and not before. Writ Keeper ⚇♔ 06:31, 16 April 2023 (UTC)
- Please stop adding mentions of your company here - Wikipedia is not a venue for advertising or self promotion. If you have independent, peer-reviewed sources about your company's work, you can list them here on the talk page. Repeatedly inserting mentions of your company directly into the article violates our policies and guidelines and may lead to your account being blocked. - MrOllie (talk) 12:21, 9 May 2023 (UTC)
- It is essential that Wikipedia be accurate - the AGI page makes many statements that we have shown to be incorrect and speculates about a great many questions for which we have demonstrable answers. It is a demonstrable fact that we have developed and proved out the first AGI, and we are currently participating in a process in order to demonstrate that to the community's satisfaction.
- Please respect that process and our participation in it.
- It is improper to suggest that any mention of us or our work is improper; others have their work noted on the AGI page supported by fewer peer-reviewed articles and less demonstrated proof than we have.
- If you believe that it's permissible for their work to be present but not ours, it is incumbent upon you to explain why. Otherwise, it appears to be a personal bias. Daniel.olsher (talk) 12:56, 9 May 2023 (UTC)
- The way we determine what is accurate is via WP:V. We cannot and will not change the article without the required sourcing. If you really don't understand the difference between your company and one that has hundreds (possibly thousands) of independent sources available, I don't know how to explain it more clearly. MrOllie (talk) 12:58, 9 May 2023 (UTC)
- The entire purpose of this topic is to provide the venue for the proper sourcing. We started the topic ourselves to that end. That is not and has not ever been the issue. Indeed, we began by proposing a first method of sourcing (approval documents), and are now moving to a second method (peer-reviewed publications) as that is clearly a better fit for the venue.
- The concern is that you appear to be applying an idiosyncratic, personal 'notability' standard to who may and may not be mentioned in the article itself - I found your statement 'this is not the Yellow Pages' not to be in keeping with the professionalism this forum deserves. We are actively engaging on this and offering proof of our work, and it should go without saying that we deserve at least a minimum level of decorum and respect.
- When it comes to the core claims at issue, we're working here to get those sourced to everyone's satisfaction. With respect to the sole statement that we were deeply associated with AGI, that is plainly true but you also reverted that - if the other companies there need only be sourced by their Website, we should not be an exception. If you feel we should be, it is incumbent upon you to explain why. Daniel.olsher (talk) 13:17, 9 May 2023 (UTC)
- My concern is that you keep talking about providing sources but not actually providing sources. There isn't anything more to discuss here until you do so. MrOllie (talk) 13:40, 9 May 2023 (UTC)
- There's an entire list provided on our Website, and a simple Google search returns many of them, but I think that everything will make a lot more sense if we introduce those sources to you properly via the synthesis. Happy to pause here until that's complete. Daniel.olsher (talk) 13:45, 9 May 2023 (UTC)
Moving forward, we will make use of the multiple peer-reviewed publications that exist in the literature with respect to this technology in order to support the necessary edits. We are currently completing a new paper providing the necessary synthesis. Once complete, we will submit a link to that paper here in order to catalyze the necessary discussion. We anticipate that that body of work, taken as a whole, will provide all necessary support for this page and with respect to other relevant topics. — Preceding unsigned comment added by Daniel.olsher (talk • contribs) 13:03, 9 May 2023 (UTC)
What does "LLM" mean?
I searched the page for this acronym. It's used several places, e.g.,
A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree. LLMs can now pass university degree-level exams without even attending the classes. Gil (talk) 19:47, 11 November 2023 (UTC)
- LLM means Large Language Model. I edited the article to introduce this acronym. Alenoach (talk) 23:09, 3 December 2023 (UTC)