Talk:Artificial intelligence/Archive 7

From Wikipedia, the free encyclopedia

Deep Learning

It appears that User:CharlesGillingham removed a section on deep learning, and that User:FelixRosch restored it. Was the questioned material copied from another article? If so, should a link rather than a copy-and-paste be used? What is the issue? Let's resolve the issue rather than edit-warring. Robert McClenon (talk) 16:55, 21 October 2014 (UTC)

Sorry, I reverted the edit before I saw this new section. There is also a section above on Deep Learning where I put a new comment. I thought we had consensus already which is why I reverted the last edit on deep learning. Another editor has mentioned that there are "63 books and articles on Deep Learning" which IMO is not at all a convincing argument. Let's take another esoteric part of AI that I was actually very involved in: the Knowledge-Based Software Assistant. There are well over 63 articles and books on that topic. They used to have a yearly conference with around 50 articles or so just in that one conference. So "63 articles and books" is not a strong argument. This article should not be a collection of every approach to AI that has ever been tried. It should be a high level overview of the field that focuses on the main topics. A link to Deep Learning in the section on Neural Nets (or wherever it is most appropriate) seems far more in keeping with the relative weight of the topic in the field rather than having a whole sub-section. I think to merit a sub-section the topic should receive significant coverage in at least one major AI overview book such as R&N. --MadScientistX11 (talk) 17:12, 21 October 2014 (UTC)
I am inclined to agree with the removal of the section. There is a full article. We do not need to largely duplicate a separate article. Robert McClenon (talk) 18:54, 21 October 2014 (UTC)
@Madruss also had just voiced a similar sentiment in the old section above. My own orientation was based on the large number of articles in the mainstream press last month (Wired magazine, etc) about deep learning as in: [1][2][3]. @RobertM, after you look at the link make a judgment call if it is worth discussing, or, a simple one sentence link from one of the existing pertinent sections, or other option. @MadScientist, should everyone be starting to integrate their comments on this here in this new section? FelixRosch (talk) 19:00, 21 October 2014 (UTC)
Please note that this has been discussed twice on this page already. There was consensus for lowering the weight of "deep learning", based on the fact that it does not appear in the leading AI textbook. I added a one-sentence mention of deep learning in the appropriate section (deep learning is a neural network technique).
Keep in mind that AI is a very large field; we are trying to summarize hundreds of thousands of technical papers, thousands of courses, and thousands of popular books. A few magazine articles are not nearly enough weight to merit that many paragraphs.
I would say that "weight" is the most difficult part of editing a summary article like this one, especially because AI is an interesting topic and there are thousands of "pet theories", "interesting new ideas", "new organizing principles" and "contrarian points of view" out there. Occasionally they get added to this article, and we have to weed them out.
That's why we have used the leading AI textbooks as the gold standard for inclusion in this article. (See Talk:Artificial intelligence/Textbook survey to see how this was originally done.) There's no better way to have an objective standard that I can think of. Russell and Norvig is almost 1000 pages and covers a truly vast amount of information, far more than we could summarize here. We have even skipped topics that have their own journals.
At any rate, "deep learning" is not in those 1000 pages, and I need more than a magazine article to consider it important enough to cover here, but, as a compromise, I added a sentence under neural networks. ---- CharlesGillingham (talk) 01:10, 22 October 2014 (UTC)
Undid revision 630536769 Someone keeps section blanking on this section with 63 linked citations. If you are making a bold edit by deleting the entire section which uses 63 cites for books and articles then state this plainly on the Talk page. New cites added. There has not been a full discussion of the deletion of an issue that is abundantly supported by 63 books and articles. This topic is well past the normal guidelines of verifiable sources. Referring to an old 2008 textbook for a high tech field which they have not kept up with is evidence that the 2008 source is antiquated and should be effectively supplemented in a 2014 on-line encyclopedia article which keeps up with well-documented progress in the field. FelixRosch (talk) 17:06, 23 October 2014 (UTC)
The pure number of sources is not what decides if something should be in the article.
Neither is it necessarily the point of an article (especially a summary article like this one) to provide in-depth news coverage of whatever books have come out this year.
This is a topic spanning over a half a century. Even if Felix is correct (and I don't really think he is) that "Deep Learning" has become dominant in the field in the last six years, we should beware recent-ism and not give it undue weight. APL (talk) 17:13, 23 October 2014 (UTC)
I agree absolutely with APL and I know CharlesGillingham agrees. FelixRosch, you keep harping on these 64 papers and books. 64 doesn't mean a thing. There are hundreds of articles and books on AI sub-topics like Knowledge-Based Software Engineering and Case Based Reasoning. Should we have separate sections for those as well? --MadScientistX11 (talk) 17:51, 23 October 2014 (UTC)
That's a point which is well-rehearsed. May I ask you to read the 3 (only 3) new cites which I added at the very start of the section. Scientific American is not usually considered a silly magazine, one not to be dealt with seriously. Also, investments of the size being made by Google and others should not be ignored as irrelevant. Are you including the Sci Am article as just another tabloid piece for the popular press? FelixRosch (talk) 17:57, 23 October 2014 (UTC)
@Felix: Your unfamiliarity with the field is evident. The 2013 edition of Russell and Norvig's Artificial Intelligence textbook has 1 chapter out of 27 covering probabilistic models and statistical machine learning which encompasses deep learning. Your editing is disruptive; enough of your edit warring and pushing your own favourite fields. Pick up an AI textbook and broaden your horizon. pgr94 (talk) 19:06, 23 October 2014 (UTC)
@Pgr94, Reading old textbooks, even if periodically updated, is not the same as keeping up with the 2014 literature in a hi-tech field subject to rapid growth and innovation. A 2014 on-line encyclopedia is capable of keeping pace with rapid innovation and the growing technical literature which is not limited to one or two textbooks which you may have read previously. If you have a 2014 article or book from a scholarly or well-established source to quote then present it here. FelixRosch (talk) 21:02, 23 October 2014 (UTC)
@Felix: Most top tier universities are using Russell and Norvig to teach artificial intelligence. I suppose they are behind the times and you know better? (Stanford, CMU, Cambridge).
Please stop edit warring. The evidence overwhelmingly points to deep learning not being as important as you make out and the burden of proof is on you to demonstrate otherwise. If you provide sufficiently strong evidence we'll reach a consensus. But instead you are repeatedly failing to follow WP:BRD. pgr94 (talk) 21:32, 23 October 2014 (UTC)

@Felix. Please read WP:UNDUE and read my posts; you keep restating points which I have already refuted, and you offer no counter argument against me -- you just keep restating your points as if you haven't read my posts, which is disruptive. I've already explained why the 66 footnotes in the article deep learning and the three additional magazine articles don't count for much in an article like this one. But I'll repeat it in more detail if that helps:

3 or 66 citations are incredibly tiny numbers and mean almost nothing here. Pretty much every section in this article summarizes thousands or sometimes hundreds of thousands of technical papers and books (especially the sections Goals, Approaches, Tools and Philosophy). You can't change anything based on just one more article; adding three or ten or a hundred new citations does nothing to prove WP:UNDUE weight. That's just not going to convince us.

The citations you are finding are useless because there is no point in trying to count hundreds of thousands of possible citations to determine the right weight. We're not talking about a local band here trying to establish notability. We're talking about a multi-billion dollar industry, thousands of university departments, tens of thousands of active researchers and a vast body of published research. We can't count citations. We have to rely on textbooks and introductory courses to help us do the summarizing and weighting. I use Russell & Norvig because it's the most popular AI textbook.

You claim that Russell and Norvig is out of date, but this isn't true: it is currently, today, being used in over 1200 University courses -- it's currently, today, the most popular textbook. And, if you buy the top ten textbooks (as I did when I first started editing this article) you'll find that Nilsson and all the other popular textbooks use very similar weights. I found that there is widespread agreement about what "AI" is all about.

Instead of citing magazine articles, you should buy the textbook or take an introductory course in AI. If your argument had the form "the last three chapters of Hawkins' Introduction to AI are all about 'Hierarchical Temporal Memory', and now Stanford, Harvard and MIT are teaching Hawkins' book, and, what's more, Hawkins' company now has majority market share and thus is the most successful AI company in history", then I would agree we should bump up the profile of Hierarchical Temporal Memory. But this isn't the case, and, more to the point: this isn't the form of your argument.

Your argument does not convince me and does not appear to convince many of the others above. Please don't drop that giant blob of text about "deep learning" back in; it will just be removed again. ---- CharlesGillingham (talk) 05:04, 24 October 2014 (UTC)

@CharlesGillingham, Arbitrary deletion of edits without reading them first is against Wikipedia policy. If you are making a bold edit and section blanking an edit citing 63 books and articles on Deep Learning, then you need to state this plainly and be held responsible for it. You apparently have not read one single 2014 citation which I provided. You appear to have no knowledge that Peter Norvig is one of the principal supporters of Deep Learning research at Google in his function as a director there, where Krizhevsky was hired: "In running deep learning algorithms on a machine with two GPU cards, Alex Krizhevsky could better the performance of 16,000 machines and their primary CPUs, the central chips that drive our computers. The trick involves how the algorithms operate but also that all those GPUs are so close together. Unlike with Google’s massive cluster, he didn’t have to send large amounts of data across a network. As it turns out, Krizhevsky now works for Google—he was part of a deep learning startup recently acquired by the company—and Google, like other web giants, is exploring the use of GPUs in its own deep learning work." Norvig defers to Krizhevsky for expertise on Deep Learning, not the other way around. Your lack of reading in this area is profound. If you are making a bold edit and section blanking an edit citing 63 books and articles on Deep Learning, then you need to state this plainly. FelixRosch (talk) 14:53, 24 October 2014 (UTC)
@Felix: Again, you seem to be missing my point and talking past me. It's like having an argument with someone who doesn't listen.
I'll respond to all your points, as quickly as I can: I am absolutely responsible for my edits -- I reduced the four paragraphs down to one sentence and put it under "neural networks". I looked at most of the citations -- they're not really on topic (i.e. they don't really address WP:UNDUE weight), and many of them don't mention deep learning at all. Everybody knows that Peter Norvig does AI for Google, and that Google is deeply into statistical machine learning. Note that Norvig doesn't use the term "deep learning" in his textbook, which we are using as the gold standard. I am well read in AI, having a degree in it, having worked in the field, and having followed it fairly carefully for the last thirty-five years; at any rate, argument from authority doesn't count in Wikipedia. I removed the four paragraphs on "deep learning" at least twice, always signed my name (unless I forgot to login, I guess). I am always WP:BOLD when boldness is called for. I am proud of my edit and can't imagine why I wouldn't want to "state this plainly". Is that plain enough?
Finally, as I said at the top, please respond to my points so I know you are listening. Do you disagree with the method we are using to determine WP:UNDUE weight? Do you agree that we can't count citations to determine weight in this case? I.e. that every sentence in this article has thousands of potential citations and 100 more or less doesn't matter? ---- CharlesGillingham (talk) 15:45, 24 October 2014 (UTC)
@CharlesGillingham; As you are finally indicating that you are making a Bold edit in section blanking "Deep learning" then I am Reverting formally in order that Discussion can now take place on Talk in the section now designated. Since you are the only one of the editors to finally take responsibility for your bold edit, I have re-read your Norvig comment and think that there may be a way to address the multiple issues on this page during this BRD. In order to show good faith, I will add the first comment of the BRD process to see if there is agreement that the current outline of this article for AI is substantially inferior to the Norvig outline for AI found in any one of the editions of his book. If there is agreement on this, although I do not think that the Norvig outline is the most ideal one in 2014, then it may still be possible to bring the multiple issues on this page into some better order and resolution. FelixRosch (talk) 21:47, 24 October 2014 (UTC)
This has been discussed, and everyone agreed that you were wrong.
Sorry that you didn't get your way, but you don't get to pretend otherwise.
You don't own the article. APL (talk) 22:07, 24 October 2014 (UTC)
@Felix: We already did BRD, about three or four times. Not sure what you mean. ---- CharlesGillingham (talk) 22:45, 24 October 2014 (UTC)
Who are you replying to? From your indentation, it looks like you're replying to me, but I think that reply was intended for Felix. APL (talk) 14:27, 27 October 2014 (UTC)
Yes, sorry. Gave it an "@". ---- CharlesGillingham (talk) 22:54, 27 October 2014 (UTC)

BRD following Bold edit by CharlesG of section blanking and Revert for Discussion

After re-reading the CharlesG comment on Norvig made directly above, there is reason to think that there may be a way to address the multiple issues on this page during this BRD. There is virtually no agreement between the outline of the current Wikipedia article for AI with the outline used for AI by Norvig which CharlesG is strongly supporting. In order to show good faith, I will add the first comment of the BRD process to see if there is agreement that the current outline of this non-peer reviewed AI article ("B"-class) is substantially inferior to the Norvig outline for AI found in any one of the editions of his book. If there is agreement on this, although I do not think that the Norvig 2008 outline is the most ideal one in 2014, then it may still be possible to bring the multiple issues on this page into some better order and resolution. FelixRosch (talk) 21:47, 24 October 2014 (UTC)

I think that you've got a very ... non-standard ... idea of what WP:BRD means?
It certainly doesn't mean that you get to revert-war against consensus for as long as you can prolong the discussion. APL (talk) 22:12, 24 October 2014 (UTC)
@Felix: I think the bold edit was when you dropped it in, the revert was me removing, the discussion was up above. The next bold was you adding it back in, the next revert was by someone else, the next discussion was up above, a little lower. And so on. About four or five times. Every time you add it back in, we all discuss it and decide it's a bad idea, you ignore the discussion and put it back in. BRD was already happening. The discussion is over, the article is fixed. There won't be anything to discuss until the next time you boldly drop it back in and we have to revert and discuss again. ---- CharlesGillingham (talk) 22:45, 24 October 2014 (UTC)
The BRD sequence consists of: the first, "bold", edit; the second "revert" edit; then discussion if the original editor does not agree with the reversion (perhaps followed by some form of dispute resolution if necessary). There is no second bold edit etc: restoring a reverted edit without discussion is edit warring. --Mirokado (talk) 00:05, 28 October 2014 (UTC)
Second, to rebut your point about R&N: please see Talk:Artificial intelligence/Textbook survey. This article uses very similar weights as the major AI textbooks, except that we emphasize history a little more, unsolved problems a little more. ---- CharlesGillingham (talk) 22:50, 24 October 2014 (UTC)
@CharlesG; There is no knowledge on my part of any brd ever taking place. You did not ping my account nor inform me of any dispute. The deletion appeared a few days ago on the edit history summary of the AI article which is when I was informed. I had spent two days trying to find one editor who would take responsibility for the section blanking as something more substantial than a drive-by section blanking by a random editor. You finally stepped up and took responsibility for it as a Bold edit of section blanking. I then Reverted under BRD policy and guidelines so that BRD Discussion could proceed in good faith. If you re-install the edit as I placed it in my Revert according to BRD policy and guidelines, then I will support your re-installing it for purposes of BRD and good faith discussion can proceed. If you do not honor the BRD policy and guidelines by not honoring the revert then there is no reason for me to assume good faith on your part for further discussion. The BRD policy and guidelines for honoring my right to Revert needs to be re-installed as the section before it was deleted by you in order for good faith BRD discussion to be initiated. FelixRosch (talk) 14:35, 28 October 2014 (UTC)
You've totally ignored the fact that four separate editors reverted your addition. And you're still going on about your "rights"? [4] --NeilN talk to me 14:46, 28 October 2014 (UTC)
I agree that the content was removed properly, in accordance to policy, and in good faith.
Felix's understanding of both the situation, and the BRD essay is wrong. APL (talk) 19:57, 28 October 2014 (UTC)
My understanding of BRD is to follow Wikipedia policy and guidelines, and all 4 editors can have their say as long as the BRD policies and guidelines are followed with the restore of the blanked out section, in order for good faith discussion to continue. Otherwise, there is no basis to believe that good faith discussion is possible by CharlesG for following Wikipedia policy and guidelines. He initiated the section blanking and he can restore it in order for good faith discussion to continue for all participating editors. FelixRosch (talk) 20:58, 28 October 2014 (UTC)
Regardless of which edit was the "B" and which was the "R", the discussion has now occurred. It's done. The consensus is to not include the material.
Maybe you missed the discussion because you were busy arguing about BRD, RFCs or who knows what, but you're not allowed to hold up the process by pretending that the discussion hasn't occurred. It's just disruptive editing. APL (talk) 21:29, 28 October 2014 (UTC)
No answer from @CharlesG; There is no knowledge on my part of any brd ever taking place. You did not ping my account nor inform me of any dispute for deep learning. The deletion appeared a few days ago on the edit history summary of the AI article which is when I was indirectly informed of it. I had spent two days trying to find one editor who would take responsibility for the section blanking as something more substantial than a drive-by section blanking by a random editor. You finally took responsibility for it as a Bold edit of section blanking with harsh words. I then Reverted under BRD policy and guidelines so that BRD Discussion could proceed in good faith. If you re-install the edit as I placed it in my Revert according to BRD policy and guidelines, then I will support your re-installing it for purposes of BRD and good faith discussion can proceed. If you do not honor the BRD policy and guidelines, by not honoring the revert, then there is no reason for me to assume good faith on your part for further discussion. The BRD policy and guidelines for honoring my placement of the Revert needs to be re-installed as the full section before it was deleted by you in order for good faith BRD discussion to be initiated. Each editor at Wikipedia is entitled to discussion under the terms of BRD, yet you seem to have singled me out from being allowed to do this. You appear not to wish to honor my request to make my case for Deep Learning following the accounts of its leading expositors in 2014 and not your old version from a 2008 book. The assumption of good faith on your part calls for the section which you blanked out to be Restored under BRD rules in order for good faith discussion of the section to commence under Wikipedia BRD policy and guidelines. FelixRosch (talk) 21:07, 29 October 2014 (UTC)
"Each editor at Wikipedia is entitled to discussion under the terms of BRD"
What? That is not true at all. Nothing about BRD is personal. You don't own your edits. The issue has been discussed, you don't personally get some kind of trial.
Anyway, it's pretty clear that you're not convincing anyone at all. If you think we've behaved improperly, go report us at WP:ANI.
APL (talk) 21:22, 29 October 2014 (UTC)

Going forward on Deep Learning

There appears to be consensus against the inclusion of the four paragraphs proposed by User:FelixRosch on deep learning. Is Felix willing to accept that consensus and leave the material out of this article? If so, should a link to the article on deep learning be included in this article? If Felix does not agree that there is consensus, then does he want to go forward with another RFC, or does he want to take this issue and possibly any other issues to moderated dispute resolution? Robert McClenon (talk) 15:35, 27 October 2014 (UTC)

@RobertM; Not so fast. You still have not explained your odd about-face on the ANI. You made an agreement with your "Done" check mark to me, and then you did an about-face needlessly exposing several good faith editors to the hazards of increased scrutiny for mis-step during the ANI process. When the several editors there followed the ANI rules and did not commit a mis-step you suddenly did an about-face nullifying your previous agreement to attain consensus for your own personal reasons. The editors you exposed to ANI were all good faith editors and there was no visible reason for you to needlessly put them into the hazard of increased scrutiny during an ANI, and for you to foment a controversial ANI, for your own personal reasons of seeking your bias in this AI article. Your about-face requires some explanation. Please explain. FelixRosch (talk) 14:48, 28 October 2014 (UTC)
"exposing several good faith editors to the hazards of increased scrutiny for mis-step during the ANI process."
Oh, please. (::RollEyes::)
Nobody has been put into any "hazard". If the only thing stopping you from edit-warring is the threat that someone at ANI might notice, then you should not be editing Wikipedia ever.
(In any case, forming consensus is not a sin. We're supposed to be flexible, and not stubborn.)
APL (talk) 15:02, 28 October 2014 (UTC)
You were not the principal editor involved in the ANI, so you should not laugh at the other good faith editors. If you enjoy having your positions and edits on Wikipedia misrepresented, then roll your eyes again. FelixRosch (talk) 15:08, 28 October 2014 (UTC)
Oh, I am. APL (talk) 15:18, 28 October 2014 (UTC)
First, I wasn't the author of the ANI. Second, if Felix is implying that making mistakes is sanctionable behavior, then isn't defacing an RFC also sanctionable behavior? Third, this thread isn't about the ANI or the previous RFC. It is about deep learning. Fourth, I misunderstood what you had offered. I thought that you thought that the RFC was non-neutral, and that you were willing to change its wording. So I struck the bot template for the previous RFC, and expected that you would file a new RFC. I didn't realize that you expected that you were trying to establish a consensus and then use that to stop further discussion. I made a mistake, in that I didn't understand what you were offering. Fifth, how do you want to go forward on deep learning? Robert McClenon (talk) 15:54, 28 October 2014 (UTC)
My answer to CharlesG is posted above in the BRD section. Your odd about face is still unexplained and is still pending. My comments said nothing about a new RfC at that time and I did exactly my part of what was stated:
"Offer to stipulate. You appear to be saying that of the 4 options from various editors which I listed above as being on the discussion table, that you have a preference for version (d) by Steelpillow, and that you are willing to remove the disputed RfC under the circumstance that the Steeltrap version be posted as being a neutral version of the edit. Since I accept that the editors on this page are in general of good faith, then I can stipulate that if (If) you will drop this RfC by removing the template, etc, that I shall then post the version of Steeltrap from 3 October on Talk in preference to the Qwerty version of 1 October. The 4 paragraph version of the Lede of Qwerty will then be posted updated with the 3 October phrase of Steelpillow ("...whether human-like or not") with no further amendations. It is not your version and it is not my version, and all can call it the Steelpillow version the consensus version. If agreed then all you need do is close/drop the RfC, and then I'll post the Steelpillow version as the consensus version. FelixRosch (talk) 17:46, 16 October 2014 (UTC)"
 Done - Your turn. Robert McClenon (talk) 18:30, 16 October 2014 (UTC)
 Done Installing new 4 paragraph version of Lede following terms of close-out by originating Editor RobertM and consensus of 5 editors. It is the "Steelpillow" version of the new Lede following RfC close-out on Talk. FelixRosch (talk) 19:58, 16 October 2014 (UTC)
Please explain your about-face. Please remove/close your defective and biased RfC. FelixRosch (talk) 16:25, 28 October 2014 (UTC)
1) He did explain. It was an error. It needs no explanation beyond that. (It didn't even need that much explanation. People are allowed to change their minds without first getting permission from User:FelixRosch.)
2) You have no authority to issue demands to other users.
3) It would be inappropriate for him or anyone else to remove an RFC in progress.
4) Felix, you are making a fool of yourself. Everyone can see it except you.
APL (talk) 16:45, 28 October 2014 (UTC)
As User:APL explains, and as I explained, I misunderstood the offer, which I thought was an offer to reword the RFC, not to replace it with a two-editor "consensus". If I had understood the offer, I certainly would never have taken an RFC down based on a private agreement. I made a mistake based on misunderstanding. I didn't, for instance, deface an RFC. User:FelixRosch appears to be imposing a higher standard for other editors than he holds himself to. Robert McClenon (talk) 18:41, 28 October 2014 (UTC)
In any case, you haven't yet answered how you want to go forward on deep learning. Do you have a constructive suggestion, or do you want to ridicule other editors and to carp? Robert McClenon (talk) 18:41, 28 October 2014 (UTC)
As to the demand of User:FelixRosch to remove the RFC, there is consensus that it should not be removed, and that its removal, based on a miscommunication, was improper. Your continued demands that it be removed are disruptive editing. Because of your tendentiousness in editing, I have gone to extraordinary lengths to satisfy your demands, and have just gotten more demands, and will no longer pay any attention to your demands. You said that the RFC was biased, so I was willing to have its wording changed, but you took that as a private consensus. I am still trying to work toward consensus, and I have opposed a suggestion that the RFC should be snow closed. Would you rather have the RFC run its course, or have it snow closed? Demanding that it be removed is not a reasonable option. Robert McClenon (talk) 20:41, 28 October 2014 (UTC)

Summary of Penrose's views inaccurate

This passage summarizes the views of Penrose's critics, not his own views:

John Lucas (in 1961) and Roger Penrose (in 1989) both argued that Gödel's theorem entails that artificial intelligence can never surpass human intelligence,[186] because it shows there are propositions which a human being can prove but which can not be proved by a formal system. A system with a certain amount of arithmetic, cannot prove all true statements, as is possible in formal logic. Formal deductive logic is complete, but when a certain level of number theory is added, the total system becomes incomplete. This is true for a human thinker using these systems, or a computer program.

Penrose's argument is that artificial intelligence, if it is based on programming, neural nets, quantum computers and the like, can never emulate human understanding of truth. And Gödel's theorem indeed shows that any first order formal deductive logic strong enough to include arithmetic is incomplete (and a second order theory can't be used on its own as a deductive system). But Penrose's point is that the limitations of formal logic do not apply to a human, because we are not limited to reasoning within formal systems. Humans can indeed use Gödel's very argument to come up with a statement that a human can see to be true but which can never be proved within the formal system. Viz. the statement that the Gödel sentence cannot be proved within the formal system, and that there is no Gödel encoding of its proof within the system.

And he doesn't argue that it is impossible for artificial intelligence to surpass human intelligence. It clearly can in restricted areas like playing chess games.

Nor does he argue that it is impossible for any form of artificial intelligence to ever emulate human understanding of mathematical truth. He just says that it is impossible for it to do that if based on methods that are equivalent to Turing computability, which includes hardware neural nets, quantum computers and probabilistic methods, as all of those are shown to introduce nothing new; they are just faster forms of Turing-type computability.

He says that before AI can achieve human understanding of mathematical truth, new physics is needed. We need to understand first how humans are able to do something non-computable, to understand maths. He thinks that the missing step is to do with the way that the collapse of the wave function happens, based on his ideas of quantum gravity. So - you still could get artificial intelligence able to understand maths based on either using biology directly - or else - based on some fundamental new understanding of physics. But not through computer programs, neural nets or quantum computers.

I think this is important because, whatever you think of Penrose's views, they introduce an interesting new position according to which both weak AI and strong AI can never be achieved by conventional methods. So - it would add to the article to acknowledge that in the philosophical section, as a third position, and highlight it more - rather than to just put forward the views of Penrose's critics. At any rate the section as it stands is inaccurate as it says that it is summarizing the views of Penrose and Lucas - but then goes on to summarize the views of their critics instead.

His ideas are summarized reasonably accurately in the main article Philosophy_of_artificial_intelligence#Lucas.2C_Penrose_and_G.C3.B6del

"In 1931, Kurt Gödel proved that it is always possible to create statements that a formal system (such as a high-level symbol manipulation program) could not prove. A human being, however, can (with some thought) see the truth of these "Gödel statements". This proved to philosopher John Lucas that human reason would always be superior to machines.[26] He wrote "Gödel's theorem seems to me to prove that mechanism is false, that is, that minds cannot be explained as machines."[27] Roger Penrose expanded on this argument in his 1989 book The Emperor's New Mind, where he speculated that quantum mechanical processes inside individual neurons gave humans this special advantage over machines."

Except - that more precisely, he speculated that non computable processes occurring during collapse of quantum mechanical states, both within neurons and also spanning more than one neuron, gave humans this special advantage over machines. (I'll correct that page).

Robert Walker (talk) 11:17, 16 October 2014 (UTC)

Agreed. Please be WP:BOLD and fix it. (It needs to be short, of course.)---- CharlesGillingham (talk) 16:00, 16 October 2014 (UTC)
Okay thanks, will do. I've just fixed the issues in the Philosophy_of_artificial_intelligence#Lucas.2C_Penrose_and_G.C3.B6del section as best I could. Will give some thought about how to summarize that in a shorter form for this article and give it a go after I've had time to reflect on how best to present it here in short form. Robert Walker (talk) 13:47, 19 October 2014 (UTC)
Done my best. It is a little long, but not much longer than the para it replaced, at least I've got it down to a single para. Hope it is okay. Will take another look in a day or two see if I can do a rewrite to reduce it further. If you wanted to trim it right down, you could stop just before "This is quite a general result, if accepted" - by that stage it has presented the essential argument in a nutshell but not mentioned its implications or the controversy that arose from it. Artificial_intelligence#Philosophy Robert Walker (talk) 22:08, 21 October 2014 (UTC)
And you feel happy that all the other issues are covered in philosophy of AI? ---- 172.250.79.167 (talk) 00:51, 22 October 2014 (UTC)
Sorry about the delay in reply. I'm okay with the section on Lucas and Penrose there - there are many more counter arguments and responses to them, but the basics I think covered okay, as best I understand it.
As for the rest of the article, when it says "Few disagree that a brain simulation is possible in theory" - of course - Penrose thinks that his argument proves that brain simulation of the type described is not possible even in theory - at least not if you use the currently known laws of physics. He thinks that some other laws are needed. And if the resulting laws have some element that is in some essential way non computable, then they can't be simulated in an ordinary Turing machine equivalent computer - perhaps can only be simulated using the same physics or similar.
Just saying, not sure it needs to be edited, he could count as one of the "few people" who disagree here. And the next section makes his views clear enough hopefully. It's the only other thing that springs to mind. Robert Walker (talk) 03:10, 1 November 2014 (UTC)

RfC: Should this article define AI as studying/simulating "intelligence" or "human-like intelligence"?

Deleting RFC header as per discussion. New RFC will be posted if required. Robert McClenon (talk) 15:03, 1 October 2014 (UTC)

Argument in favor of "intelligence"

The article should define AI as studying "intelligence" in general rather than specifically "human-like intelligence" because

  1. AI founder John McCarthy (computer scientist) writes "AI is not, by definition, a simulation of human intelligence", and has argued forcefully and repeatedly that AI should not simulate human intelligence, but should focus on solving problems that people use intelligence to solve.
  2. The leading AI textbook, Russell and Norvig's Artificial Intelligence: A Modern Approach, defines AI as "the study and design of rational agents", a term (like the more common term intelligent agent) which is carefully defined to include simple rational agents like thermostats and complex rational agents like firms or nations, as well as insects, human beings, and other living things. All of these are "rational agents", all of them provide insight into the mechanism of intelligent behavior, and humans are just one example among many. They also write that the "whole-agent view is now widely accepted in the field."
  3. Rodney Brooks (leader of MIT's AI laboratories for many years) argued forcefully and repeatedly that AI research (specifically robotics) should not attempt to simulate human-like abilities such as reasoning and deduction, but instead should focus on animal-like abilities such as survival and locomotion.
  4. The majority of successful AI applications do not use "human-like" reasoning, and instead rely on statistical techniques (such as Bayesian nets or support vector machines), models based on the behavior of animals (such as particle swarm optimization), models based on natural selection, and so on. Even neural networks are an abstract mathematical model that does not typically simulate any part of a human brain. The last successful approach that modeled human reasoning was the expert systems of the 1980s, which are primarily of historical interest. Applications based on human biology or psychology do exist and may one day regain the center stage (consider Jeff Hawkins' Numenta, for one), but as of 2014, they are on the back burner.
  5. From the 1960s to the 1980s there was some debate over the value of human-like intelligence as a model, which was mostly settled by the all-inclusive "intelligent agent" paradigm. (See History of AI#The importance of having a body: Nouvelle AI and embodied reason and History of AI#Intelligence agents.) The exceptions would be the relatively small (but extremely interesting) field of artificial general intelligence research. This sub-field defines itself in terms of human intelligence, as do some individual researchers and journalists. The field of AI, as a whole, does not.

All of these points are made in the article, with ample references:

From the lede: Major AI researchers and textbooks define this field as "the study and design of intelligent agents"
First footnote: Definition of AI as the study of intelligent agents:
  • Poole, Mackworth & Goebel 1998, p. 1, which provides the version that is used in this article. Note that they use the term "computational intelligence" as a synonym for artificial intelligence.
  • Russell & Norvig (2003) (who prefer the term "rational agent") and write "The whole-agent view is now widely accepted in the field" (Russell & Norvig 2003, p. 55).
  • Nilsson 1998
Comment: Note that an intelligent agent or rational agent is (quite deliberately) not just a human being. It's more general: it can be a machine as simple as a thermostat or as complex as a firm or nation.
From the section
Approaches:
A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?
From the corresponding
footnote
Biological intelligence vs. intelligence in general:
  • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
  • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
  • Kolata 1982, a paper in Science, which describes John McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real"[5]. McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).

FelixRosch has succeeded in showing that human-like intelligence is interesting to AI research, but not that it defines AI research. Defining artificial intelligence as studying/simulating "human-like intelligence" is simply incorrect; it is not how the majority of AI researchers, leaders and major textbooks define the field. ---- CharlesGillingham (talk) 05:54, 23 September 2014 (UTC)

Comments

I fully support the position presented by User:CharlesGillingham.

User:FelixRosch says The article in its current form, in all eight (8) of its opening sections is oriented to human-like intelligence (Sections 2.1, 2.2, ..., 2.8). I fail to see how sections 2.3 planning, 2.4 learning, 2.6 perception, 2.7 motion and manipulation relate only to humans. Could you please quote the exact wording in each of these sections that give you this impression? pgr94 (talk) 22:18, 23 September 2014 (UTC)

User:FelixRosch says "User:CharlesG keeps referring abstractly to multiple references on the article Talk page (and in this RfC) which he is familiar with, and continues not to bring them into the main body of the article first". The references are already in the article. The material in the table above is cut-and-pasted from the article. ---- CharlesGillingham (talk) 06:12, 24 September 2014 (UTC)

I support the position in favor of "intelligence", for the reasons stated by User:CharlesGillingham. Pintoch (talk) 07:27, 24 September 2014 (UTC)

The origins of the artificial intelligence discipline did largely have to do with "human-like" intelligence. However, much of modern AI, including most of its successes, has had to do with various sorts of non-human intelligence. To restrict this article only to efforts (mostly unsuccessful) at human-like intelligence would be to impoverish the article. Robert McClenon (talk) 03:16, 25 September 2014 (UTC)

  • If I'm understanding the question correctly the answer is obvious. AI is absolutely not just about studying "human like" intelligence but intelligence in general, which includes human like intelligence. I mean there are whole sub-disciplines of AI, the formal methods people in particular, who study mathematical formalisms that are about how to represent logic and information in general not just human intelligence. To pick one specific example: First Order Logic. People are all over the map on how much FOL relates to human intelligence. Some would say very much, others would say not at all, but I don't think anyone who has worked in AI would deny that FOL and languages based on it are absolutely a part of AI. Or another example is Deep Blue. It performs at the grand master level but some people would argue the way it computes is very different from the way a human does and -- at least in my experience -- the people who code programs like Deep Blue don't really care that much either way, they want to solve hard problems as effectively as possible. The analogy I used to hear all the time was that AI is to human cognition as aeronautics is to bird flight. An aeronautics engineer may study how birds fly in order to better design a plane but she will never be constrained by how birds do it because airplanes are fundamentally different, and the same goes for computers: human intelligence will definitely impact how we design smart computers but it won't define it, and AI researchers are not bound to stay within the limits of how humans solve problems. --MadScientistX11 (talk) 15:14, 25 September 2014 (UTC)
  • Comment The sources cited by CharlesGillingham do not all contradict the article. Apparently the current wording is confusing, but:
  • "Human-like" need not be read as "simulating humans". It can also be read as "human-level", which is typically the (or a) goal of AI. Speaking from my field of expertise, all the Bayes nets and neural nets and graphical models in the world are still trying to match hand-labeled examples, i.e. obtain human-level performance, even though the claim that they do anything like human brains is very, very tenuous. (Though I can point you to recent papers in major venues where this is still used as a selling point. Here's one of them.)
  • More importantly, the article speaks of emulating, not simulating, intelligence. Citing Wiktionary, emulation is "The endeavor or desire to equal or excel someone else in qualities or actions" (I don't have my Oxford Learner's Dictionary near, but I assure you the definition will be similar). In other words, emulation can exceed the qualities of the thing being emulated, so there's no need to stop at a human level of performance a priori; and emulation does not need to use "human means", or restrict itself to "cognitively plausible" ways of achieving its goal.
  • The phrase "though other variations of AI such as strong-AI and weak-AI are also studied" seems to have been added by someone who didn't understand the simulation-emulation distinction. I'll remove this right away, as it also just drops two technical terms on the reader without introducing them or explaining the (perceived) distinction with the emulation of human-like intelligence.
I conclude that the RFC is based on false premises (but I have no objection against a better formulation that is more in line with reliable sources). QVVERTYVS (hm?) 09:01, 1 October 2014 (UTC)
The simulation/emulation distinction that you are making does not appear in our most reliable source, Russell and Norvig's Artificial Intelligence: A Modern Approach (the most popular textbook on the topic). They categorize definitions of AI along these orthogonal lines: acting/"thinking" (i.e. behavior vs. algorithm), human/rational (human emulation vs. directed at defined goals), and they argue that AI is most usefully defined in terms of "acting rationally". The same section describes the long-term debate over the definition of AI. (See pgs. 1-5 of the second edition). The argument against defining AI as "human-like" (see my post at the top of the first RfC) is that R&N, as well as AI leaders John McCarthy (computer scientist) and Rodney Brooks all argue that AI should NOT be defined as "human-like". While this does not represent a unanimous consensus of all sources, of course, nevertheless we certainly can't simply bulldoze over the majority opinion and substitute our own. Cutting the word "human-like" gives us a definition that everyone would find acceptable. ---- CharlesGillingham (talk) 00:51, 9 October 2014 (UTC)

Argument in favor of "human-like intelligence"

Please Delete or Strike RFC

I am requesting that the author of the RFC delete or strike the RFC, because it is non-neutral in its wording. Robert McClenon (talk) 22:30, 23 September 2014 (UTC)

Just as the issue with the article is only with its first sentence, my issue with the RFC is with its first sentence. Robert McClenon (talk) 22:31, 23 September 2014 (UTC)
Let's face it, I don't know how to do this .... is the format above acceptable? Can you help me fix it? ---- CharlesGillingham (talk) 05:32, 24 September 2014 (UTC)
Much better. Robert McClenon (talk) 03:12, 25 September 2014 (UTC)

Neither one

To avoid a possible false dichotomy (and because neither seemed right to me) I will add an area for alternative proposals and make an initial one. Markbassett (talk) 05:27, 3 November 2014 (UTC)

  • Just Go With The Cites Stop trying to reinvent the words. The general definition for such a prominent field should be quoted from authoritative external source, not recrafted by random editors. Maybe Webster "an area of computer science that deals with giving machines the ability to seem like they have human intelligence" or IEEE or whoever, but get what is out there that folks have been using and stop trying to make up something. That's how it got to the tangled phrasing "study which studies the goal of creating intelligence" from a simple "Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it". Again, just go with whatever the cites say. Markbassett (talk) 05:39, 3 November 2014 (UTC)

Good Article

I've never put this article up for WP:Good article because I don't have time to fix all the little editorial changes that this always requires. Since there are so many eyes on this article now, maybe we have sufficient editor-power to do this now.

It's very comprehensive, it has extremely reliable sources for almost every paragraph and the weight is good as we can make it. I think the writing is fine (although you never know what's going to bother them this year over at MOS).

The only section that is a little short on comprehensiveness is Applications -- it's just scattershot -- no one has made an effort to cover the most widely used and important applications first, no one has made an attempt to organize the applications by industry and so on.

Another section that could use help is AI in fiction. It's just a bullet list, and it's always growing. We should probably cut the whole thing -- it's a pain to keep knocking everybody's favorite book off it, so I gave up a long time ago and just let it grow. I liked the way this section was several years ago, when science fiction and speculation were mixed in the same section, because they really do make the same points. If you try to organize the section on science fiction by topic, you will find that many of the paragraphs cover the same topics as the speculation section.

Anyway, those are my thoughts. Anyone interested? ---- CharlesGillingham (talk) 02:42, 5 November 2014 (UTC)

This dovetails nicely with the "Human-like", question. As, unlike most real AI, fictional AI is almost always indistinguishable from human intelligence. (Except smarter and/or crazier, somehow.) APL (talk) 15:13, 5 November 2014 (UTC)

Cognitive computers

Cognitive computers combine artificial intelligence and machine-learning algorithms, in an approach which attempts to reproduce the behaviour of the human brain.[1] An example is provided by IBM's Watson machine. A later approach has been their development of the TrueNorth microchip architecture, which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers.

  1. ^ Dharmendra Modha (interview), "A computer that thinks", New Scientist 8 November 2014, Pages 28-29

Any thoughts on where to drop this - or something like it - into the article? Or, since it "combines AI with..." does it belong elsewhere? — Cheers, Steelpillow (Talk) 16:04, 7 November 2014 (UTC)

[update] Just found the stub article on the Cognitive computer. The paragraph above obviously belongs there, but how much mention should be made here? — Cheers, Steelpillow (Talk) 18:12, 7 November 2014 (UTC)
As I understand (from what little I read) this is a chip that accelerates an algorithm called Hebbian learning, which is used on neural networks. So, if you want to add a (one sentence) mention of cognitive computer, it would have to go in the section Neural networks of this article. (I would prefer it if we added it the neural network sub-article, however, because there isn't room for everything here.) ---- CharlesGillingham (talk) 04:35, 8 November 2014 (UTC)
I looked at neural networks, and neuromorphic computing (a different kind of hardware designed to run neural networks faster) is described under Artificial neural network#Recent improvements - so it seems to me that "cognitive computer" belongs there. Someone should create a section in that article on hardware approaches to neural network computing, and find out all the current hardware. It may even deserve its own article. ---- CharlesGillingham (talk) 04:52, 8 November 2014 (UTC)

The thing vs the academic study of it

The structure and scope of this article show a conceptual muddle between "AI" as machine intelligence and "AI" as the associated academic discipline. We get an opening definition of the thing itself, a restatement that it is the academic field, and various sections on one or the other of these with no indication as to which or why. The Outline of artificial intelligence is no better, while the History of artificial intelligence carries barely one paragraph on the wider aspects.

It is claimed above that the term "artificial intelligence" was coined by academia to describe a field of research. But so too were words such as "biology", "philosophy", "nuclear physics" and a hundred other terms which have become synonymous with the object studied. Several broader aspects of AI have their own articles - Philosophy of artificial intelligence, Ethics of artificial intelligence, Artificial intelligence in fiction. It is nonsense to suggest that these are studying the philosophy, ethics and fictional aspects of the academic discipline, they are patently aspects of the thing itself. So we see that Wikipedians have already embedded this broader usage of the term in our encyclopedia. To suddenly turn round and claim that this article's present title references primarily the academic discipline is inconsistent with the established wider usage and utterly untenable.

It seems to me that one way out of this dilemma is to create a new page title for Artificial intelligence research. Probably 90% of the present article's content would be appropriate there, the remainder - plus a summary of the former - should remain here. Options would seem to be:

  1. Move this article across, then cut-and-paste back the 10% or so that does belong here.
  2. Create Artificial intelligence research from scratch and heavily rebalance the present article.
  3. Refactor the present article to cover both usages unambiguously and not create a new page.

What do folks think? — Cheers, Steelpillow (Talk) 10:24, 5 November 2014 (UTC)

I don't see the need for two articles. Refactor this article to differentiate clearly between artificial intelligence (the product or software) and the study of artificial intelligence (the discipline). Robert McClenon (talk) 15:20, 5 November 2014 (UTC)
@Steelpillow I'm having a lot of trouble understanding what you have in mind. Pretty much every technical topic is both a subfield and a type of AI. Neural networks is a subfield and a kind of program. Machine learning is a subfield and a class of programs. Statistical AI is a subfield and a description of a kind of program. Certainly you wouldn't want to write slightly differently worded versions of each section in both articles? How would you split up the history section? I guess I'm not sure what you mean by the "object of study" -- it's AI programs in the real world, right? Or are we talking about something else? ---- CharlesGillingham (talk) 09:17, 6 November 2014 (UTC)
Personally I agree with you. But some editors have been adamant that they want the article to cover the academic literature to the exclusion of other sources. For example the fact that the idea of "human-like" does not appear in the primary literature is being used as an argument that it should not appear in the article. On the "object of study", consider biology. This is an academic field of study, but we also talk happily about "the biology of" a thing, meaning what is actually going on inside the thing rather than meaning merely the study of the thing. Similarly, when we philosophise about AI we are philosophising about the end product, the thing created, not about the academic activity that surrounds it. This kind of issue has been bubbling along for some time so I was wondering whether the apparently irreconcilable views of some editors might be met by forking the article. This would allow each camp to develop their material in relative peace - at least until the first Merge proposal. I felt the idea worth floating. — Cheers, Steelpillow (Talk) 10:21, 6 November 2014 (UTC)
I'm more interested in what you actually want to do to the article; how the fork would work. We already have an article on "human-like intelligence" --- artificial general intelligence, so the fork is already there. But the fork you are proposing is between AI research vs. AI programs, and, actually, we also already have an article artificial intelligence applications which describes AI programs. So the forks are pretty much there. I'm having trouble visualizing what material would go into an article on "the intelligence of machines" unless it's the same as applications of AI. I'm also having trouble seeing how we could describe the field of AI without most of the text just describing different classes of AI programs (and the subfields of AI that study and create them). ---- CharlesGillingham (talk)
I should have looked for those articles, thank you. I am coming at this primarily as a Wikipedian, and my knowledge of the subject is no more than that of an interested layman. What I have been groping towards is the idea of a top-level article, akin to a portal page, which introduces the whole field of created intelligence, cognition, mind, whatever, and across all of hard research, philosophy/ethics and fiction. I had assumed that the present article was it, but then found that other editors had a more specialised view of its scope. For example philosophy, ethics, popular culture and fiction are most strongly concerned with the human-like aspects of artificial general intelligence so a portal would give that more equal emphasis with the academic study of weak AI. The present article focuses on the academic study, which is primarily of weak AI. I started the present discussion as a way to tease apart those two foci and see how best to create that overarching introduction. (In this context I am treating the programs as part of the research effort, as a substrate to the intelligence created rather than as enshrining the intelligence, any more than a surgeon sees the intelligence under the scalpel when he cuts into a living brain. It is the information constructs bandied about by the programs and the brain which comprise the intelligence itself. Perhaps academia disagrees with me?). I am quite prepared to be told that my proposed fork is wrong-headed and that there is a better way. But as a Wikipedian and a layman, I do need that introductory material. — Cheers, Steelpillow (Talk) 11:36, 8 November 2014 (UTC)
Would it be appropriate to assist readers by adding a disambiguation section at the very top of the article along the lines of "This article concerns research into artificial intelligence and its applications. For information related to human-level artificial intelligence see Artificial General Intelligence, for the feasibility of AI see Philosophy of AI."
Another option might be to create an AI navigation template like the one in Programming paradigm. Both options may make it easier for readers to find what they are looking for. I'd much rather this kind of change than the article evolving into something that doesn't match the textbooks. pgr94 (talk) 15:17, 8 November 2014 (UTC)
@Steelpillow appears to be suggesting something like this for a refactoring of the main sections in the article (in addition to the four options of newer outlines for refactoring sections in the new section below). This would be done for the benefit of readers to have a more useful version of the article. The preliminary version of @Steelpillow appears to move in the direction of something like the following (for discussion/amendment/revision, etc.):
AI (continue to tease apart the two foci)
1. Created intelligence, cognition, mind
2. Academic study of Weak-AI
3. Hard research
4. Philosophy/ethics
5. Fiction
Is this any closer to a workable refactoring, or, should the 4 new outlines provided below still be consulted. FelixRosch TALK 17:53, 8 November 2014 (UTC)

That looks quite close to what I have in mind, although I don't know enough to comment on the division between academic study and the hard stuff, while a History section might also be appropriate. I am currently chewing over the idea that the entry page should be the new article, titled something like Introduction to artificial intelligence and allowing the present one the opportunity to focus more exclusively on the technical side of things, omitting for example the section on Fiction. This seems to be the way that several other science topics are treated, see for example Quantum mechanics and the Introduction to quantum mechanics. This reply may also be taken in the context of the thoughts expressed by JonRichfield (talk · contribs) below here. — Cheers, Steelpillow (Talk) 21:11, 8 November 2014 (UTC)