Talk:Neural network (machine learning)/Archives/2020/July
This is an archive of past discussions about Neural network (machine learning). Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Abject Failure
This article fails. Wikipedia is supposed to be an encyclopedia, not a graduate math text comprehensible only by math geeks. More plain text for normal people is sorely needed. I could not make heads or tails of this article, and I hold two degrees in computer science. —Preceding unsigned comment added by 24.18.203.194 (talk) 11:27, 8 May 2010 (UTC)
- Feel free to fix it. The problem partly stems from the fact that there is no concrete agreed definition of what an ANN is. It seems to me that it was a fancy word that researchers used for a few decades before it went out of fashion. The lack of coherence in the article is partially a reflection of this. User A1 (talk) 12:27, 8 May 2010 (UTC)
- Feel free to fix it??? What if the poster of that comment doesn't know anything about neural networks? What good would it do for them to try to "fix it"? Your comment can be interpreted only as a snarky smug attempt to criticize someone who is giving Wikipedia and its writers extremely important constructive feedback. The problems with this article go way, way beyond the virtually irrelevant fact that not all definitions of artificial neural networks are identical.
- The problems with this article are at root because the people who wrote it have no idea what an encyclopedia article is. Daqu (talk) 01:49, 8 May 2015 (UTC)
- A1's response can also be interpreted as a polite invitation to help. QVVERTYVS (hm?) 09:30, 8 May 2015 (UTC)
- I have to agree with this. I didn't understand this article, so I came to the talk page and this confirms my hunch. The issue is not that it contains complexity, "comprehensible only by math geeks". Rather, it needs a section to bridge between no knowledge of this concept and expert knowledge. I agree that "more plain text for normal people is sorely needed". This should come from experts, not from someone like me or the first commenter, because we lack a sufficient understanding to explain the concept. JJCaesar (talk) 04:15, 5 September 2015 (UTC)
- Could another article not just be written, such as "Introduction to ANNs" or "Overview of ANNs", or the depth split out into more specific articles?
It's worse than abject failure. It's a complete mess. I would recommend anybody who comes here to run away as quickly as possible. There are several excellent books about NNs that range from popsci to university-level textbooks. There is plenty of free online material. There are also plenty of excellent free lectures available from MIT and others. Some excellent youtube videos. Wikipedia should be ashamed of this disaster. The article should be taken down until it's readable. — Preceding unsigned comment added by 2601:646:8580:19:5D95:B046:D9AD:5FA3 (talk) 22:46, 22 October 2017 (UTC)
Merge suggestion
Consensus is to not merge. NN, BNN and ANN are three separate entities. Consensus is to keep three separate articles and slim each down to a more specific version by removing NN from ANN and ANN from BNN etc.
95% of the article Neural network is about artificial neural networks. Keeping these two articles is unnecessary forking and only supports continued divergence/overlap of the texts. The articles must be merged and then restructured according to reasonable subtopics. The page Neural network must be a disambig page for ANN, BNN and Neural Networks, the Official Journal of the International Neural Network Society, European Neural Network Society & Japanese Neural Network Society. Twri (talk) 23:32, 24 November 2008 (UTC)
- I could go either way on this. Leave Neural network (disambiguation) where it is. Then either remove most of the ANN stuff from the NN article, or merge the two. Dicklyon (talk) 03:48, 25 November 2008 (UTC)
- Obviously there is a distinct difference between biological neural networks and artificial neural networks. Merging the two does not seem appropriate. Instead, the "neural networks" article should only describe biological neural networks and refer to the ANN article as computational models. 84.244.141.35 (talk) 08:27, 19 December 2008 (UTC)
- I propose that the subsection "Neural networks and neuroscience" should be moved to Neural network (Neuroscience) as a Neuroscience stub and the rest should be merged. Neuroscience is only interested in ANNs that model BNNs, not in ANNs as a general AI tool. There is a significant difference between the two uses, and having them as one page would make the page long and cumbersome for both audiences. JonathanWilliford (talk) 15:49, 25 January 2009 (UTC)
- Artificial neural networks are a specific case of neural networks. I agree that "neural networks" should only describe biological neural networks. There must be two pages, one for "neural networks" and another for the specific "artificial neural networks". —Preceding unsigned comment added by Sergioledesma (talk • contribs) 18:59, 26 January 2009 (UTC)
- I think that the ANN article should be kept separate from the NN article. They are different things, and merging could cause confusion.
- Strongly agree. The term 'Neural Network' in common parlance, in all other encyclopaedias I have sampled, and in academic environments nearly always refers to Artificial Neural Networks, and virtually never to networks of biological neurons. The proposal by JonathanWilliford seems reasonable as long as the merged article is called 'Neural Networks', and I support that. Having the 'Neural Networks' article only refer to biological neural networks goes against the accepted, widely used meaning of the term. 203.173.41.22 (talk) 04:38, 29 January 2009 (UTC)
- Questionable Oppose Aren't neural networks and artificial neural networks two different things? 64.222.85.86 (talk) 21:38, 7 October 2009 (UTC)
- Strongly oppose per above; proposal is a year old. Bearian (talk) 01:34, 22 January 2010 (UTC)
- I think we should migrate the biological aspects of the neural network article to the biological neural network article, keep the artificial neural network article for the computing/algorithmic aspects of neural networks and use the neural network article as a small abstract article about both types of neural networks and how they are related. Nicholas Léonard 04:14, 10 March 2010 (UTC) —Preceding unsigned comment added by Strife911 (talk • contribs)
Oppose Chaosdruid (talk) 03:26, 12 July 2010 (UTC)
- I agree also - these three articles are pretty much the same thing with slight variations. NN should in no way have as much detail as it does about ANN. Neural network should cover the cognitive aspects of both ANN and BNN. ANN and BNN should then take their theories and applications and expand upon them. The NN page should only have history, background and a summary of the branches of research and apps, and maybe a little piece saying how "in modern times NN is mostly used to refer to ANN", to correct the misconception that NN IS ANN - and that's all really.
- I have tried to do this in the disambiguation page also - maybe that's the place to start - it took me an hour and a half to get the wording but I think I have it all in there :¬)
- Chaosdruid (talk) 03:26, 12 July 2010 (UTC)
Done - Weblink suggestion: Free bilingual PDF manuscript (200 pages)
I am currently at the RoboCup 2009 competition in Graz, where I found the site http://www.dkriesel.com/en/science/robocup_2009_graz because, unlike the RoboCup site ;), it presents recent news and pictures about RoboCup.
What I found there might be something for this wiki page: at http://www.dkriesel.com/en/science/neural_networks a neural networks PDF manuscript is presented that seems to be extended often, is free, contains lots of illustrations and (this is special) is available in both English and German. I also noticed that its German version is linked from the German Wikipedia. I want to start a discussion about whether it should be added as a weblink in this article. If there is no protest, I will try to add it in the next few days. 91.143.103.39 (talk) 07:42, 4 July 2009 (UTC)
- Looks like a good resource. I would prefer to link to the PDF directly, however the author has stated they do not wish this to be done. User A1 (talk) 09:05, 4 July 2009 (UTC)
- They say they don't wish this to be done because of the extensions they make, which even include filename changes 91.143.103.39 (talk) 10:54, 4 July 2009 (UTC)
- As an aside, in my opinion, it would be better if the author made it cc-by-sa-nc, rather than the somewhat vague licensing terms given. User A1 (talk) 09:08, 4 July 2009 (UTC)
- Yeah, does someone want to mail and explain that to him? Not everyone is aware of such licenses. 91.143.103.39 (talk) 10:54, 4 July 2009 (UTC)
- Another small thing, just to let you know: if I place a link, I will just copy and translate that of de.wikipedia.org ... 91.143.103.39 (talk) 10:56, 4 July 2009 (UTC)
- Placed the link as a rough translation of that from the German Wikipedia. Has anyone mailed the authors concerning the license issue? RoadBear 80.136.224.193 (talk) 08:45, 7 July 2009 (UTC)
Very Complicated
Does anyone else feel like this page is incomprehensible? Paskari (talk) 16:38, 13 January 2009 (UTC)
Yeah, reading the article one doesn't know what all of this stuff has to do with neurons (I mean, the article apparently only talks about functions). —Preceding unsigned comment added by 80.31.182.27 (talk) 11:32, 4 March 2009 (UTC)
Against Merging
I prefer leaving "Neural network" as it is, because the content under the heading "Neural network" gives a basic understanding of biological neural networks and differs, in a great way, from artificial neural networks and their understanding.
- I agree. Neural network must talk about the generic term and Biological NN. Pepe Ochoa (talk) 22:17, 26 March 2009 (UTC)
The main discussion in Neural network is about artificial neural networks, so they should be merged, with a discussion of natural neural networks in the introduction. Bpavel88 (talk) 19:03, 1 May 2009 (UTC)
I would agree that substantial differences lie between the two types, and that there is specific terminology used for the artificial types that would not be appropriate for the non-artificial page 220.253.48.185 (talk) 03:49, 7 June 2010 (UTC)
There are a lot of common concepts in artificial and biological neural networks, but they are also quite different. We don't really know how biological neural networks operate, and at the same time artificial neural networks use methods that we don't know are appropriate for biological neural networks. I would say keep them apart for now, and probably for the foreseeable future. Jeblad (talk) 13:56, 31 March 2019 (UTC)
Types of Neural Networks
I think this page should have 2-3 paragraphs tops for all the types of neural networks. Then we can split the types up into a new page, making it more readable. Oldag07 (talk) 17:21, 20 August 2009 (UTC)
- It's a good idea. Right now "Feedforward neural network" has only a 3-sentence description, and less-known types have much more... julekmen (talk) 12:13, 23 October 2009 (UTC)
Broken citation
I came to this page to find out about the computational power of neural networks. There was a claim that a particular neural network (not described) has universal Turing power, but the link and DOI in the citation both seem to point to a non-existent paper. 120.16.58.115 (talk) 04:17, 15 October 2009 (UTC)
- I've fixed it. Thanks for pointing out the error. User A1 (talk) 07:48, 15 October 2009 (UTC)
Remarks by Dewdney (1997)
The remarks by Dewdney are really from a sour physicist missing the point. For difficult problems you first want to see the existence proof that a universal function approximator can do (part of) the job. Once that is the case, you go hunt for the concise or 'real' solution. The Dewdney comment is very surprising, because it came about six years after the invention of the convolutional neural MLP by Yann LeCun, still unbeaten in handwritten character recognition after twenty years (better than 99.3 percent on the NIST benchmark). If the citation to Dewdney remains in there, balance requires that (more) success stories are presented more clearly in this article. — Preceding unsigned comment added by 129.125.178.72 (talk) 15:48, 3 October 2011 (UTC)
Dewdney's criticism is indeed outdated. One should add something about the spectacular recent successes since 2009: Between 2009 and 2012, the recurrent neural networks and deep feedforward neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won eight international competitions in pattern recognition and machine learning[1]. For example, the bi-directional and multi-dimensional long short-term memory (LSTM)[2][3] by Alex Graves et al. won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three different languages to be learned. Recent deep learning methods for feedforward networks alternate convolutional layers[4] and max-pooling layers[5], topped by several pure classification layers. Fast GPU-based implementations of this approach by Dan Ciresan and colleagues at IDSIA have won several pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition[6], the ISBI 2012 Segmentation of Neuronal Structures in Electron Microscopy Stacks challenge[7], and others. Their neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance[8] on important benchmarks such as traffic sign recognition (IJCNN 2012), or the famous MNIST handwritten digits problem of Yann LeCun at NYU. Deep, highly nonlinear neural architectures similar to the 1980 Neocognitron by Kunihiko Fukushima[9] and the "standard architecture of vision"[10]
can also be pre-trained by unsupervised methods[11][12] of Geoff Hinton's lab at Toronto University.
Deeper Learning (talk) 22:23, 13 December 2012 (UTC)
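For readers unfamiliar with the architecture being described above, here is a minimal numpy sketch of one convolution/max-pooling stage (the sizes, the ReLU nonlinearity and the random data are illustrative assumptions, not the setup of any system cited here):

```python
import numpy as np

def conv2d_valid(img, kernel):
    # Naive single-channel 'valid' convolution (really cross-correlation,
    # as in most deep learning code).
    H, W = img.shape
    kH, kW = kernel.shape
    out = np.empty((H - kH + 1, W - kW + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r + kH, c:c + kW] * kernel)
    return out

def max_pool(fmap, size=2):
    # Non-overlapping max-pooling: keep the strongest response in each patch.
    H, W = fmap.shape
    H2, W2 = H // size, W // size
    return fmap[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))    # stand-in for an input image
kernel = rng.standard_normal((5, 5))     # one learned filter
features = max_pool(np.maximum(conv2d_valid(image, kernel), 0.0))
print(features.shape)                    # (12, 12)
```

Real systems stack several such stages (with many filters per stage) and finish with fully connected classification layers, as the comment describes.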
References
- ^ http://www.kurzweilai.net/how-bio-inspired-deep-learning-keeps-winning-competitions 2012 Kurzweil AI Interview with Jürgen Schmidhuber on the eight competitions won by his Deep Learning team 2009-2012
- ^ Graves, Alex; and Schmidhuber, Jürgen; Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th–10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552
- ^ A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, J. Schmidhuber. A Novel Connectionist System for Improved Unconstrained Handwriting Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, 2009.
- ^ LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proc. IEEE, 86, pp. 2278-2324.
- ^ Scherer, D., Müller, A., Behnke, S. (2010). Evaluation of pooling operations in convolutional architectures for object recognition. ICANN 2010, pp. 82-91. Springer.
- ^ D. C. Ciresan, U. Meier, J. Masci, J. Schmidhuber. Multi-Column Deep Neural Network for Traffic Sign Classification. Neural Networks, 2012.
- ^ D. Ciresan, A. Giusti, L. Gambardella, J. Schmidhuber. Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images. In Advances in Neural Information Processing Systems (NIPS 2012), Lake Tahoe, 2012.
- ^ D. C. Ciresan, U. Meier, J. Schmidhuber. Multi-column Deep Neural Networks for Image Classification. IEEE Conf. on Computer Vision and Pattern Recognition CVPR 2012.
- ^ K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4): 193-202, 1980.
- ^ M Riesenhuber, T Poggio. Hierarchical models of object recognition in cortex. Nature neuroscience, 1999.
- ^ http://www.scholarpedia.org/article/Deep_belief_networks
- ^ Hinton, G. E.; Osindero, S.; Teh, Y. (2006). "A fast learning algorithm for deep belief nets" (PDF). Neural Computation. 18 (7): 1527–1554. doi:10.1162/neco.2006.18.7.1527. PMID 16764513.
Cleanup
The entire article is absolutely terrible; there are so many good facts, but the organization is atrocious. A team needs to come in, clean up the article, and word it well; it's quite a shame given how developed it's become. If anyone wants this article to at least reach a B-class rating on the quality scale (which is extremely important given the article's importance in Wikipedia), we really need to clean it up. It's incomprehensible, and as someone pointed out above, it just talks about the functions of an artificial neural network rather than how it's modelled upon biological neural networks, which is the principal purpose of this article: to explain how the two are related, and the history/applications of the system. Even worse, there are NO citations in the first few sections, and they are scarce elsewhere. There is an excessive number of subsections, which are themselves mere paragraphs.
Final verdict: This article needs to be re-written!
Thanks, Rifasj123 (talk) 22:47, 19 June 2012 (UTC)
- I agree with the above opinion. A lot of the statements in the article just come across as complete nonsense. The lead section is just crammed with terminology and hardly summarizes the article. Take the first sentence in the body of the article:
“The original inspiration for the term Artificial Neural Network came from examination of central nervous systems and their neurons, axons, dendrites, and synapses, which constitute the processing elements of biological neural networks investigated by neuroscience.”
This almost seems as if it has deliberately been written to confuse the reader. Again, going down to Models,
“Neural network models in artificial intelligence are usually referred to as artificial neural networks (ANNs); these are essentially simple mathematical models defining a function f : X → Y or a distribution over X or both X and Y, but sometimes models are also intimately associated with a particular learning algorithm or learning rule.”
This just doesn't make any sense! Nobody can get anywhere reading this article, it's just babbling and jargon glued together with mumbo-jumbo. JoshuSasori (talk) 03:50, 14 September 2012 (UTC)
- I've done a bit of work on cleaning up the article & will now see what response this gets. If there are no problems then I will continue cleaning up and removing the babbling and nonsense. JoshuSasori (talk) 03:36, 22 September 2012 (UTC)
- I think part of the problem is that a good portion of the editors are grad students/postdocs procrastinating from reading academic papers that sound exactly like this. We have to keep trying I guess. SamuelRiv (talk) 17:40, 28 February 2013 (UTC)
- Support a total rewrite per Rifasj123 above. This article is just laden with errors and original research. Just zap it. And merge in Neural Network in the process. History2007 (talk) 23:53, 19 March 2013 (UTC)
- Please discuss merging with Neural network over at Talk:Neural network#Proposed merge with Artificial neural network. QVVERTYVS (hm?) 16:40, 4 August 2013 (UTC)
Proposed merge with Deep learning
"Deep learning" is little more than a fad term for the current generation of neural nets, and this page describes neural net technology almost exclusively. The page neural network could do with an update from the more recent and better-written material on this page. QVVERTYVS (hm?) 11:12, 4 August 2013 (UTC)
- I am against merger. I disagree that deep learning is merely a fad - there are fundamental differences between distributed representation implementations (e.g. deep belief networks and deep autoencoders) and they all step further from the term neural network than simply being artificial. On the basis that deep learning is just another neural network term, we'd end up merging anything to do with machine learning into one page. However, I agree the related articles need work and balance. p.r.newman (talk) 13:54, 20 August 2013 (UTC)
- Oppose – I basically agree with Mr. Newman. Deep learning is a rather specific conception in the context of ANNs. Obviously, a short and concise section on the topic should be a welcome part to ANN. Kind regards, 㓟 (talk) 22:47, 30 August 2013 (UTC)
- I oppose the proposed merger. The term describes a theory that is more general than any particular implementation, such as an ANN; to wit, a big chunk of the current article describes how DL might be implemented in wet(brain)ware, which presumably can't be tucked into ANN since the brain ain't "A" :-) Jshrager (talk) 03:44, 9 September 2013 (UTC)
- I am against the merger. Even if several traits are shared between "classical" neural networks and deep learning's networks they are sufficiently different to deserve their own page. Also, merging would create a single massive article regarding all neural-net-like things. However, I agree the articles could be better organized. Efrenchavez (talk) 02:59, 15 September 2013 (UTC)
- Against - big time. This is the correct term, and is as separate from neural networks as it is from deep learning.
- Deep learning is one method of ANN programming, and so a sub-topic of ANN, which covers all aspects of programming, hardware and abstract thought on the matter. Chaosdruid (talk) 20:22, 15 September 2013 (UTC)
- Comment: The deep learning article basically includes a claim in its own lead section that implies it is a content fork. Chaosdruid aptly points out above that deep learning is a sub-topic of artificial neural networks. But, that contradicts the quote included in the lead section of the deep learning article. There is no (clear) explanation anywhere in the article on how deep learning is related to neural networks, so the readers are left to figure it out on their own. If they take the lead section's word for it, they will go away with the belief that deep learning is not a sub-topic of neural networks, which is what the lead strongly implies. See Talk:Deep learning#"Deep learning" synonymous with "neural networks"?. The Transhumanist 02:00, 25 September 2013 (UTC)
- I don't think there's any clear-cut definition of "deep learning" out there, but all the DL research that I've seen revolves around techniques that would usually be considered neural nets; the remark in deep learning's lead that it's not necessarily about NNs is, I think, OR. (And "neural nets", in computer science, is also a vague term that nowadays means learning with multiple layers and backprop.) QVVERTYVS (hm?) 16:39, 25 September 2013 (UTC)
- Withdrawn. QVVERTYVS (hm?) 22:55, 22 October 2013 (UTC)
- Oppose - Not only does deep learning cover at best a small proportion of neural computation; it has also already risen to stardom, more than deserving its independent article. MNegrello (talk) 12:12, 28 August 2017 (UTC)
Rename and scope
There is a major problem with this article. It only covers the use in computer science. There are biological neural networks that are artificially created. See here for an example: Implanted neurons, grown in the lab, take charge of brain circuitry.
Also, in computer science, the term "neural network" is very well established. Major universities use NN instead of ANN as the name of subjects. Here is an example: http://www.cse.unsw.edu.au/~cs9444/ It should be renamed to neural network (computer).
My views of merge with other articles can be found on the talk page of neural network. Science.philosophy.arts (talk) 01:45, 20 September 2013 (UTC)
- Perhaps neural network (computer science) or neural network (machine learning) is more appropriate then? But I agree; the neural network articles are currently a mess and don't have clearly defined scopes. I've been trying to move content from neural network to this article and remove all non-CS-related materials to get a clearer picture, but at some points my efforts stalled. QVVERTYVS (hm?) 12:26, 20 September 2013 (UTC)
- We should make the title as short as possible. Science.philosophy.arts (talk) 15:03, 20 September 2013 (UTC)
Last section should be deleted
While looking at the article, I realized that the "Recent improvements" and the "Successes in pattern recognition contests since 2009" sections are very similar. For instance, a quote from the former section:
Such neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance[21] on benchmarks such as traffic sign recognition (IJCNN 2012), or the MNIST handwritten digits problem of Yann LeCun and colleagues at NYU.
And the latter:
Their neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance[21] on important benchmarks such as traffic sign recognition (IJCNN 2012), or the MNIST handwritten digits problem of Yann LeCun at NYU.
Wow. Since the former section is better integrated into the article and the latter section seems to be only something tacked on at the end, beginning with the slightly NPOVy phrase "[the] neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won eight international competitions", I would strongly recommend that the latter section be deleted and its content merged into the former section (this process seems to have been halfway carried out already). Comments? APerson (talk!) 02:20, 21 December 2013 (UTC)
Backpropagation didn't solve the exclusive or problem
"Also key later advances was the backpropagation algorithm which effectively solved the exclusive-or problem (Werbos 1975).[6]"
The backpropagation algorithm doesn't solve the XOR problem; it allows efficient training of neural networks. It's just that a multi-layer neural network can solve the XOR problem while a single neuron/perceptron can't.
217.140.110.23 (talk) 13:07, 15 April 2014 (UTC) Taylor
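To make the point concrete, here is a minimal numpy sketch of a 2-2-1 threshold network that computes XOR with hand-picked weights; the weights are illustrative, and no training at all (let alone backpropagation) is involved:

```python
import numpy as np

def step(z):
    # Heaviside threshold unit: fires (1.0) when its net input is non-negative.
    return (z >= 0).astype(float)

# Hidden unit h1 computes OR, hidden unit h2 computes AND,
# and the output computes "OR but not AND", i.e. XOR.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])     # W1[i, j]: weight from input i to hidden unit j
b1 = np.array([-0.5, -1.5])     # thresholds making h1 an OR gate, h2 an AND gate
W2 = np.array([1.0, -2.0])      # output net input: OR - 2*AND
b2 = -0.5

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    h = step(np.array(x, dtype=float) @ W1 + b1)
    y = step(h @ W2 + b2)
    print(x, int(y))            # prints 0, 1, 1, 0
```

A single threshold unit cannot produce this truth table because XOR is not linearly separable; what backpropagation added was an efficient way to train such hidden layers instead of picking the weights by hand.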
it's not clear to what degree artificial neural networks mirror brain function
I would take out this sentence, as it is 100% clear the brain doesn't compute gradients. Mosicr (talk) 16:10, 13 September 2016 (UTC)
Machining application of artificial neural network
Artificial neural networks have various applications in production and manufacturing[1], where they are capable of machine learning[2] and pattern recognition[3]. Various machining[4] processes require predicting results on the basis of the input data or quality[5] characteristics[6] provided in the machining process, and similarly back-tracking the quality characteristics required for a given result or desired output characteristics.--Rahulpratapsingh06 (talk) 12:39, 5 May 2014 (UTC)
References
- ^ pratapsingh, rahul. https://en.wikipedia.org/wiki/Manufacturing.
- ^ pratap singh, rahul. https://en.wikipedia.org/wiki/Machine_learning.
- ^ pratap singh, rahul. https://en.wikipedia.org/wiki/Pattern_recognition.
- ^ pratapsingh, rahul. https://en.wikipedia.org/wiki/Machining.
- ^ https://en.wikipedia.org/wiki/Quality.
- ^ https://en.wikipedia.org/wiki/Characteristic.
Relationship between quality characteristics and output
The relationship between various quality characteristics & outputs can be learned by the artificial neural network design, on the basis of algorithms and programming over the data provided, which is machine learning or pattern recognition.--Rahulpratapsingh06 (talk) 12:29, 5 May 2014 (UTC)
Types of artificial neural networks
I removed this part: "Some may be as simple as a one-neuron layer with an input and an output, and others can mimic complex systems such as dANN, which can mimic chromosomal DNA through sizes at the cellular level, into artificial organisms and simulate reproduction, mutation and population sizes.[1]" because dANN is not popular. What do you think? --Vinchaud20 (talk) 10:05, 19 May 2014 (UTC)
- Absolutely right. This seems to be a plug for dANN, a rather minor project. QVVERTYVS (hm?) 09:14, 20 May 2014 (UTC)
Also " Artificial neural networks can be autonomous and learn by input from outside "teachers" or even self-teaching from written-in rules." should be remove because it is a reformulation of the learning process. And here, we speak about the "Type of Neural network" and not the "learning process" --Vinchaud20 (talk) 10:12, 19 May 2014 (UTC).
References
- ^ "DANN:Genetic Wavelets". dANN project. Archived from the original on 21 August 2010. Retrieved 12 July 2010.
Fluidization
I want to know about fluidization
recent improvements and successes since 2009
The "Recent improvements" and "Successes in pattern recognition contests since 2009" sections are nearly identical. I think the "since 2009" section is obsolete.
LuxMaryn (talk) 13:26, 26 November 2014 (UTC)
- Agreed. Feel free to merge the two. QVVERTYVS (hm?) 14:24, 26 November 2014 (UTC)
This is a stupendously bad article
I am trying to imagine someone like a very bright high school student who heard that neural networks might be interesting, and visited this article to learn at least a little about them.
The student will learn nothing whatsoever about neural networks from this article. They will learn, however, that many people who write for Wikipedia have not the slightest idea of what an encyclopedia article ought to be like.
The text does not explain anything to anyone who doesn't already know what neural nets are. There is not even one — not even one — example of a simple neural net for someone who has never seen one before. All the inscrutable definitions and diagrams do nothing at all toward helping a newcomer to the subject understand what a neural net is. Daqu (talk) 01:40, 8 May 2015 (UTC)
- @Daqu: Which changes to this article would you propose, then? Jarble (talk) 03:02, 20 July 2015 (UTC)
- The answer to your question, Jarble, is in what I posted. If I were any kind of expert on neural nets, I would fix the problems myself. But I am not. That did not prevent me from noticing some important things that the article does not have. Daqu (talk) 00:03, 7 August 2015 (UTC)
I know a lot of math and statistics, but I honestly still don't understand the concept of a neural network after looking at this article. Looking at this talk page, it's obvious that a lot of people are dissatisfied with it. I think that, as suggested by the original poster in this thread, a lot of improvement could be made simply by putting in, immediately after the lead, a very simple example of a neural network, with all details included—each variable defined explicitly, each neuron defined explicitly, etc. Loraof (talk) 20:34, 14 September 2015 (UTC)
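For what it's worth, a fully explicit example of the kind proposed could look like this numpy sketch (all numbers are made-up illustrative values; sigmoid units and squared error are one conventional choice among many):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# A 2-input, 2-hidden-unit, 1-output network with every variable spelled out.
x  = np.array([1.0, 0.0])     # input vector; x[0] and x[1] are the two input neurons
W1 = np.array([[0.5, -0.3],   # W1[i, j]: weight from input neuron i to hidden neuron j
               [0.8,  0.2]])
b1 = np.array([0.1, -0.1])    # one bias per hidden neuron
W2 = np.array([0.4, -0.6])    # W2[j]: weight from hidden neuron j to the output neuron
b2 = 0.05
target = 1.0                  # desired output for this particular input

# Forward pass: each neuron takes a weighted sum of its inputs, then a nonlinearity.
z1 = x @ W1 + b1              # pre-activations of the two hidden neurons
h  = sigmoid(z1)              # hidden activations
y  = sigmoid(h @ W2 + b2)     # the network's output, a single number in (0, 1)

# One step of "learning": gradients of the squared error 0.5 * (y - target)**2,
# obtained by the chain rule (this is backpropagation), then a small weight update.
delta_out = (y - target) * y * (1.0 - y)    # error signal at the output neuron
delta_hid = delta_out * W2 * h * (1.0 - h)  # error signals at the hidden neurons
lr = 0.5                                    # learning rate
W2 -= lr * delta_out * h
b2 -= lr * delta_out
W1 -= lr * np.outer(x, delta_hid)
b1 -= lr * delta_hid
```

Repeating the forward pass and update over many input/target pairs is all that "training" means in this basic setting.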
Neural networks are so hot right now, there are so many incredibly exciting applications out there, and this article barely mentions one. Examples, we need examples. Stupid practical examples to depict how a simple NN works, and interesting examples of possible applications to show what it is possible to achieve with NN (self-piloting helicopter anyone?) 193.205.78.232 (talk) 16:47, 7 October 2015 (UTC)
- It seems that one application of neural networks is data mining from groups of people, and subsequently showing related content on TV. Content is pulled out of a large library using the mined info. RippleSax (talk) 20:30, 1 February 2016 (UTC)
- Or advertising (contextual advertising), using a user model and the user's HTTP cookies for mining. RippleSax (talk) 21:04, 1 February 2016 (UTC)
Connection to graphical models
The Wikipedia article states that neural networks and directed graphical models are "largely equivalent". While I know both feed-forward neural networks and directed graphical models, I don't understand where this equivalence should come from (admittedly, both models seem similar enough to suspect something like this). Could anyone elaborate on this, or at least add a source with the precise meaning of this statement? — Preceding unsigned comment added by 77.9.130.141 (talk) 07:03, 6 November 2015 (UTC)
Explanation
See Reverse engineering. Example: modeling of the nervous system of reptiles (Russian) (and kangaroo): F-22 Raptor. The main article is bad: much math without physical meaning. RippleSax (talk) 16:00, 1 December 2015 (UTC)
- Such a network modulates/reflects (as a model, like a graph or other structure) bioelectrical templates/patterns in a biosystem: like On the Origin of Species or Eugenics RippleSax (talk) 01:49, 10 December 2015 (UTC)
Inconsistent and Uncited Timeline Events
The article cites Hebbian learning as one of the origins of Artificial Neural Networks. The only related citation in this article, the Wikipedia entry for Hebbian learning, and my own research indicate that Hebb's oldest work on this topic was published in 1949.
Simultaneously, the article states that "Researchers started applying [Hebbian learning] to computational models in 1948 with Turing's B-type machines."
Neither Hebbian learning prior to 1949 nor the 1948 models are cited, and the text seems to imply that the ideas published by Hebb in 1949 were already being applied a year earlier, in 1948. Thedatascientist (talk) 16:59, 20 January 2016 (UTC)
Facts, not opinions
The "Theoretical" section of the article is badly in need of revision. Specifically, this excerpt: "Nothing can be said in general about convergence since it depends on a number of factors. Firstly, there may exist many local minima. This depends on the cost function and the model. Secondly, the optimization method used might not be guaranteed to converge when far away from a local minimum. Thirdly, for a very large amount of data or parameters, some methods become impractical. In general, it has been found that theoretical guarantees regarding convergence are an unreliable guide to practical application."
- Logical inconsistency: "Nothing can be said in general...[but,] in general, it has been found that..."
- There are no citations for this section. This information "may" or "might not" be entirely fictional.
- Lack of any specific information or elaboration:
* "There may exist many local minima." How does this affect convergence? * "The optimization method used might not be guaranteed to converge when far away from a local minimum." Example? * "Some methods become impractical." Which ones? * "In general, it has been found that theoretical guarantees regarding convergence are an unreliable guide to practical application." By who?
97.79.173.131 (talk) 01:44, 7 June 2016 (UTC) j_Kay
Dr. Gallo's comment on this article
Dr. Gallo has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:
This article is well organized and written, with an adequate level of detail. No inaccuracies or errors seem to be present.
We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.
Dr. Gallo has published scholarly research which seems to be relevant to this Wikipedia article:
- Reference : Crescenzio Gallo, 2007. "Reti Neurali Artificiali: Teoria ed Applicazioni Finanziarie," Quaderni DSEMS 28-2007, Dipartimento di Scienze Economiche, Matematiche e Statistiche, Universita' di Foggia.
ExpertIdeasBot (talk) 13:41, 11 June 2016 (UTC)
Suggestions for clarification
Maybe these are buried somewhere in the article, but I can't figure them out. I think a new section should be added near the beginning to address these questions.
- Is each arrow in the flowchart estimated separately? Or are they all estimated as one combined function or by some joint estimation technique?
- How is an arrow's function estimated? Is a functional form assumed and a prespecified set of parameters estimated (and if so, by what estimation technique)? Or is there some way in which the data determine the functional form?
- Does "learning" mean re-estimating each function using a data set augmented with the latest data? Or is there also learning about the functional forms?
- How are the nodes and the number of hidden layers chosen—are they pre-specified, or do the data determine them (and if so, how)?
Loraof (talk) 17:08, 17 July 2016 (UTC)
L. Ron Hubbard?!?!?!
When I read this article just now, the article referred (in History-->Improvements since 2006) to the simple and complex cells in the visual cortex discovered by David Hubel and L. Ron Hubbard. I'm not an expert in the field, but as far as I can tell, Hubel's partner in that research was Torsten Wiesel. I can't find any reliable source that mentions Hubbard studying neurology or vision, and I suspect that it was either a mistake or vandalism. I've corrected it to Torsten Wiesel. — Preceding unsigned comment added by 91.135.102.171 (talk) 15:39, 26 September 2016 (UTC)
I will try to add a basic introduction
I agree with all the comments that this is a lot of detailed information without a good overview. I will work on an introduction to it all that makes it a bit more clear. — Preceding unsigned comment added by Anximander (talk • contribs) 08:57, 16 October 2016 (UTC)
Cellular automata?
I can see a number of similarities between cellular automata and neural networks. Is there a known relationship between these two models? Are they computationally equivalent/inequivalent? 75.139.254.117 (talk) 13:47, 18 March 2017 (UTC)
This Article Does Not Do A Good Job of Explaining the "Neural" part of "Neural Network"
I have a degree in mathematics. I am understanding the math parts of this just fine, as I think anyone with a general grasp of college level mathematics will. However, I have found it very difficult to understand how the math set forth in this article corresponds to what is actually happening when an ANN-modeled computer is trying to compute something. Mainly because this article doesn't explain what activation or inhibition mean mathematically, or even what they are conceptually. I think. Are the neurons in the input layer observing individual elements of a vector, or are they observing the whole vector but competing with each other because they are taking slightly different values? What's going on in the "hidden" layers? Are the output layers putting out individual elements of a vector, or something else? I'm not even sure if these questions make sense. 100.33.30.170 (talk) 16:06, 5 April 2017 (UTC)
Update: I tagged the models section to reflect this and added some explain tags to some sentences. 100.33.30.170 (talk) 16:17, 5 April 2017 (UTC)
Update: What would help most is a very basic practical example of a neural network computing something. 100.33.30.170 (talk) 16:20, 5 April 2017 (UTC)
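As a sketch of the standard textbook formulation (which seems to be all the article's Models section is trying to say): each input neuron simply holds one component $x_i$ of the input vector, and a neuron $j$ in the next layer computes

$$a_j = \varphi\Big(\sum_i w_{ij}\, x_i + b_j\Big),$$

where the $w_{ij}$ are learned weights, $b_j$ is a bias, and $\varphi$ is a fixed nonlinearity such as the logistic sigmoid. A positive weight is excitatory ("activation"), a negative weight inhibitory. Each hidden layer applies the same rule to the previous layer's activations, and the output neurons emit the components of the output vector, so the whole network is just a composition of such functions.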
Addition of Components Definitions in "Models" but conflicting notation
Very helpful in improving the clarity of what the neural network is actually computing. However, the next section seems to refer to similar mathematical functions using different notation. For instance, in the Components section the activation function is written as a_j(t), whereas in the next section the activation function is referred to as K. Perhaps this article should be edited to unify the notation, as I think this creates unnecessary confusion. Or, if they are different, explain why they are different. 38.140.162.114 (talk) 16:15, 3 August 2017 (UTC)
Redirect and Disambiguation: Neural Computation
The Neural Computation redirect sends here, but the concept is well established in its own right. It is more closely related to the page 'Models of Neural Computation' or 'Biological Neural Networks'. Artificial neural networks, as in this present article, are but a subset of neural computation, one which took flight through its applications in machine learning.
I propose editing the disambiguation page for Neural Computation and splitting the concept among this page, the eponymous journal, and 'Models of Neural Computation'. The redirect eclipses the good article 'Models of Neural Computation', which is of more interest to neuroscientists.
MNegrello (talk) 12:26, 28 August 2017 (UTC)
- First, to clarify, it is currently a redirect page, and so you are proposing converting it into a multi-branched disambig page? North8000 (talk) 13:16, 28 August 2017 (UTC)
- Good and important suggestion! But I don't think this can be a pure disambiguation page, because neural computation does not mean ANN. That relation is rather loose. I tried to put your suggestions in the article. Please improve as you find suitable. --Ettrig (talk) 14:11, 30 August 2017 (UTC)
Edited templates - talk page type opinions in the body of the article
A large number of editing templates were inserted that essentially put talk-page opinions into the body of the article. These opinions should be moved to the talk page.
I think that the comments look like something I myself would write... in essence, that the text does not do a good job of teaching the topic. However, when writing such things I've received responses that Wikipedia does not teach; it provides information. I disagree with a categorical interpretation of that. So, while I agree with the comments (once moved to the talk page), they are iffy in Wikipedia.
Finally, persons with such thoughts should work on the article.
Sincerely, North8000 (talk) 23:38, 23 October 2017 (UTC)
Removed final sentence on ANNs
I read this paragraph 3 times and did not find anything of meaning or merit in it. So I made an edit removing it. Please re-add it if you will take on the task of actually making it say something meaningful and well cited.
Sincerely Patriotmouse (talk)
Too many redundant cleanup tags
User:Nbro recently added a few dozen cleanup tags to this article, but they appear to be mostly redundant. Would it be possible to rollback these redundant tags instead of removing them manually? Jarble (talk) 18:58, 15 November 2017 (UTC)
- Per my post above, I agree with reduction or removal of the tags, especially due to the text that was included in the tags... misplaced and somewhat high-handed sounding. But in general I agree with the criticisms expressed by the tagger. North8000 (talk) 13:00, 22 November 2017 (UTC)
Applications: pattern recognition
As the article Pattern recognition itself suggests, arguably any type of machine learning can be understood as pattern recognition, so I don't know what the category "pattern recognition (radar systems, face identification, signal classification,[203] object recognition and more)" in the list of applications is supposed to tell me. — Preceding unsigned comment added by 141.84.69.83 (talk) 18:29, 11 July 2018 (UTC)
- Not arguing either way about inclusion, but I think that it is using the common meaning of the term which is patterns on images. North8000 (talk) 12:00, 24 August 2018 (UTC)
Some possible edits to the article
1) Basically, the recent papers which affect the advances in neural nets should be mentioned - specifically papers like "properties of neural nets", which discusses how neural nets can be fooled by just adding noise, so that any image could be classified as an ostrich.
2) New architectures like attention-based models, and new areas like forensics where neural nets are applied, are not discussed in the article. — Preceding unsigned comment added by Shubhamag97 (talk • contribs) 03:27, 13 September 2018 (UTC)
Dump from backpropagation
I have dumped two sizable sections from backpropagation into this article. I am not sure that this improved this article. I am sure, though, that it improved Wikipedia, because the texts became neither worse nor better, and the material does belong in the topic of this article. I know the problem of dividing material up between ANN and deep learning is severe. But again, learning is not part of BP. Maybe we should have a separate article about artificial neural network learning. --Ettrig (talk) 13:59, 28 November 2018 (UTC)
Capsule networks
Someone has dumped a small paragraph about capsule networks into the section about convolutional networks, "A recent development has been that of Capsule Neural Network (CapsNet)…"[1] This is not quite right, as the capsule network in Hinton's variant has a routing component, and this does not fit convolutional networks very well. The convolving action in the first capsule layer is somewhat similar to a convolutional network, but the routing has more in common with recurrent networks. There are other differences too, but the previous is perhaps the simplest way to see how they diverge.
An alternative way to describe the differences is to say that a convolutional network is a stateless network (the transform from the input to the output is a stateless function), while a capsule network manages an internal state. It needs the internal state to converge to a solution.
I would say Hinton's capsule network is a specific type from a new class of networks, where the class is "networks that correlate normalized data". The normalization can be done in several ways over several layers. It can even be learned in deep networks.
[Addition: A fun thing is that Hinton's capsules do the correlation in the routing and nearly avoid the normalization, but by doing so they perpetuate the core problem: they must learn the new overall pose. It seems like the cortex avoids this problem altogether and learns a general pose.] Jeblad (talk) 13:18, 31 March 2019 (UTC)
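To make the stateless/stateful distinction above concrete, here is a minimal numpy sketch of the routing-by-agreement loop from Sabour, Frosst and Hinton's 2017 paper (the shapes and iteration count are illustrative); the logits b are exactly the internal state that a plain convolutional forward pass does not have:

```python
import numpy as np

def squash(s, eps=1e-9):
    # Capsule nonlinearity: keeps a vector's direction, shrinks its length into [0, 1).
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def routing_by_agreement(u_hat, n_iters=3):
    # u_hat[i, j]: prediction vector from lower capsule i for upper capsule j.
    n_lower, n_upper, dim = u_hat.shape
    b = np.zeros((n_lower, n_upper))  # routing logits: the internal state
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over upper capsules
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted votes per upper capsule
        v = squash(s)                                         # upper-capsule outputs
        b = b + np.einsum('ijd,jd->ij', u_hat, v)             # agreement strengthens the route
    return v

# Toy use: 6 lower capsules voting for 3 upper capsules in 4 dimensions.
v = routing_by_agreement(np.random.default_rng(1).standard_normal((6, 3, 4)))
print(v.shape)  # (3, 4)
```

The convolution-like part is only in how u_hat would be produced; the loop itself is the recurrent, state-carrying piece the comment describes.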
- I'd urge you to fix or delete it. Also that addition sort of has the look of a reference spam addition. North8000 (talk) 13:27, 31 March 2019 (UTC)
- No biggee here, but Jeblad you should avoid changing your posts (as you just did) after people have responded to them. You can use a new post to add, or strikethrough if removing something. North8000 (talk) 13:45, 31 March 2019 (UTC)
Wrong Author for Memory network papers
I noticed that the first three papers in the section on Memory networks are cited as authored by Jürgen Schmidhuber, while he does not appear as an author of those papers.
Meerpirat (talk) 09:36, 4 April 2019 (UTC)
Merger proposal
- Merge with Neural network and rename as Neural network (machine learning). The article Neural network contains only information about artificial neural networks. In 2009, when the previous merge discussion happened, the article contained enough information about biological neural networks. But after the advent of deep learning, "neural network" is synonymous with "artificial neural network". I strongly feel the articles must be merged. Dheerajmpai23 (talk) 10:56, 14 May 2019 (UTC)
- Shouldn't biological neural networks have a place somewhere? North8000 (talk) 11:13, 14 May 2019 (UTC)
- Strong oppose. Biological and artificial (computational) neural networks are very different topics. And this has been a perennial proposal that has consistently resulted in consensus that the two are different: Talk:Neural circuit#Proposal to rename and restart, Talk:Neural network/Archive 2#Merger proposal - sorta, Talk:Neural network#Proposed merge with Artificial neural network, #Merge suggestion, #Against Merging. --Tryptofish (talk) 18:31, 14 May 2019 (UTC)
- Oppose This has been discussed in depth before. They are two different subjects with only some overlap. North8000 (talk) 23:52, 14 May 2019 (UTC)
- Oppose In support of above. Though artificial neural networks have been garnering a great deal of press as of late, the current hierarchy is appropriate to acknowledge the distinction. I enjoy sandwiches (talk) 01:30, 25 May 2019 (UTC)
- It looks to me like the discussion has quieted down and that there is consensus against the merge, so I am going to remove the merge templates from the tops of the two pages. But if anyone wants an uninvolved closure, that's fine with me. --Tryptofish (talk) 21:13, 5 July 2019 (UTC)
- Comment @Tryptofish, I enjoy sandwiches, North8000, and Dheerajmpai23: The discussion linked above, from Talk:Neural circuit, proposes by consequence that neural circuit be moved to biological neural network, and that seems agreeable, as well as satisfying the request for a merger here, though they didn't quite get around to discussing that idea at the time. Alternatively, the "circuit" article could be split into circuit and biological, though there seems to be a recent trend regarding article length. But the move of circuit to biological seems so obviously sensible and non-contradictory to any of the relevant discussions, including this one. ~ R.T.G 00:06, 7 August 2019 (UTC)
Forward propagation of historic assumptions and comprehension failures
[2] — Preceding unsigned comment added by 14.177.144.110 (talk) 00:09, 3 June 2019 (UTC)
- Thanks for posting here. However, a comment board is not a reliable source that we can use on Wikipedia. --Tryptofish (talk) 21:01, 3 June 2019 (UTC)
- You're welcome. I just wanted to highlight that almost everyone is using the highly inefficient 'bubble sort' neural network algorithm.
- Unless they are using NEAT or another minority algorithm. You don't need to burn up the planet with CO2 emissions just to train neural networks. Just as there are more efficient algorithms for sorting than bubble sort, so there are more efficient algorithms for neural networks:
- https://github.com/S6Regen/Fixed-Filter-Bank-Neural-Networks
- https://discourse.numenta.org/t/fixed-filter-bank-neural-networks/6392
- https://discourse.numenta.org/t/distributed-representations-1984-by-hinton/6378/10 — Preceding unsigned comment added by 113.190.215.228 (talk) 04:25, 6 August 2019 (UTC)
- Not a comment board, not self-published company resources... but reports by respected third parties: newspapers/magazines (on or offline), science (and other) journals, books, similar spoken/televised reports, or anything with similar respectability for factual accuracy. We report that there were dinosaurs, but we do not (almost always, particularly about hard science) all come to the talk page and agree there were dinosaurs. We pull out a load of dinosaur books instead, and we say: this book says this, and that book says that. We come to the talk pages then to weigh the value and verity of each fact, usually meaning we just report stuff that has been published repeatedly elsewhere, or that can be agreed essential to the subject. By judging the value of a fact for describing a particular subject, facts may be reported without relying on third-party publishing, but these are usually trite or short; for significant facts, it makes sense to use a primary or secondary source where third-party publishing is unavailable or too numerous to decide which to reference. Quite frankly, to save us having to be qualified adjudicators, blogs and chat forums are flatly rejected, as the best respected third parties will professionally fact-check and either publish no mistakes or follow up mistakes in ways which come to light easily. We are an encyclopaedia which rejects commercial interest, but our contradiction in that is: we are strictly an advert for published information. ~ R.T.G 19:36, 6 August 2019 (UTC)
- one clarification: when you say "notability" you mean real-world notability. WP:Notability is something completely different and is a criterion for having a separate article. North8000 (talk) 21:30, 6 August 2019 (UTC)
- Apologies. Posted quickly without checking. Edited a little better now hopefully. ~ R.T.G 23:24, 6 August 2019 (UTC)
Reference to the neural network zoo
Hi @Tryptofish, I added a sentence at the top of the Variants section and linked to the neural network zoo. Why did you revert my edit? Your note mentions WP:COI, but I do not know the zoo authors, and I am not affiliated with them. The catalogue that they have created helps the reader gain a better understanding of different ANN architectures. --Habil zare (talk) 21:56, 26 June 2019 (UTC)
- There would still be an issue with WP:UNDUE. Perhaps it would be better to link to it in an External Links section. Sorry if in fact you don't have a COI, but the edit looked like a plethora of edits that we get where there is self-promotion going on. --Tryptofish (talk) 22:04, 26 June 2019 (UTC)
- I've added it back, as a WP:EL. --Tryptofish (talk) 22:42, 26 June 2019 (UTC)
- Thanks @Tryptofish. That is indeed appropriate. — Preceding unsigned comment added by Habil zare (talk • contribs) 18:12, 27 June 2019 (UTC)
Why it is impossible to improve this page: The concept will never be explained
I am going to stay on topic and simply state the reason why, in my view, this page will never be acceptable. In other words, I'll discuss improvements to the page and nothing else.
Since there is no such thing as an artificial neural network (a lot of stuff related to Artificial Intelligence is pure bull), no one can explain it in layman's terms, in simple terms, and all you're going to get is difficult explanations that don't make any sense, such as this one: https://arxiv.org/pdf/1609.08144.pdf?fbclid=IwAR21rxrFrNqJ3G-flYcqbpUbhG79ChD9DBG8uzo9htlnu-dXhAWaaKwBuGw
AI people will pretend that their concept is so complex that only super brains can understand it, which is bull of course. There is a lot of bull concerning artificial intelligence. Why? Researchers want to keep money pouring into research projects, so they pretend they have accomplished a lot, which is a lie. In my view, at one point governments will step in and investigate, for a lot of actions perpetrated by AI people are close to fraud, and thousands of investors are investing in a field that hypes itself and lives on false claims.
So in the end, you'll have the following choice:
• a) Publish a complex explanation that does not make sense
• b) Publish nothing for you won't find a simple explanation anywhere.
Since you don't want a) there is only b) left. You have no other choice.
Therefore the best thing to do, in my view, is to simply state in the article that since there is no reference available that explains the concept in layman's terms, you refrained from providing an explanation coming from the artificial-intelligence milieu because no one could understand it.
If you're expecting a simple and plausible explanation to insert in the article, an explanation you can reference, don't hold your breath, because it just won't happen in a million years. So forget it!
-- Robert Abitbol — Preceding unsigned comment added by 24.54.3.100 (talk) 18:20, 15 December 2019 (UTC)
- I don't agree with the negativity but there are good points in there. Einstein said "if you can't explain it simply you don't know it well enough." People who know technical topics that well (and have the empathy and interest to explain it simply) are scarce in Wikipedia, partly because Wikipedia discourages editing by experts. North8000 (talk) 13:31, 17 December 2019 (UTC)
- I agree on certain points, North8000. One of the GREAT things about Wikipedia is that it forces so-called experts to explain their discoveries and inventions in simple terms. If they can't, it is a sure sign they are coming up with a fraud, a fabrication. You write: People who know technical topics that well (and have the empathy and interest to explain it simply) are scarce in Wikipedia. I disagree on this one; here is why. There are a lot of people on Wikipedia who are able and interested to simplify. But here, when it comes to neural systems, this is pure FRAUD, and this is why no one can explain it in layman's terms. There is nothing to explain, for this claim is pure bull. When someone says to you that he or she has invented a concept so complex only super brains can understand it, it means it is PURE BULL. Look at it this way: a computer has the brain of a 6-month-old. How could it understand this super duper brainy concept?
- Bottom line is: people out there are very gullible, but editors at Wikipedia aren't. We just don't buy the it-is-so-complex-it-is-impossible-to-explain-it-in-layman's-terms line. THERE IS NOTHING IN THIS WORLD THAT IS SO COMPLEX THAT IT CANNOT BE EXPLAINED IN SIMPLE TERMS. Even rocket science can be explained simply.
- The axiom is simple: If a concept cannot be explained in simple terms, it means it is a fraud.
-- Robert Abitbol
— Preceding unsigned comment added by 24.54.3.100 (talk) 19:42, 19 December 2019 (UTC)
- No one can explain how the Risch algorithm works in layman's terms, therefore the Risch algorithm does not exist. Wolfram Mathematica is an illusion. - MrOllie (talk) 20:49, 19 December 2019 (UTC)
Ridiculous statement and irrelevant detail
This change changes two things. One of them has a motivation that is clearly wrong. That can be seen by a look at the detailed article about the history. That article also shows that the revert introduces an error. The other change back has a motivation that does not respond to the motivation for the change (in the change comment) that it reverts. --Ettrig (talk) 15:08, 18 May 2020 (UTC)
- I'm not following your post but I agree with your removal of that paragraph. It has no useful content for the article, it is pretty obviously promotional for the book and reference spamming. I'm going to take that paragraph out. North8000 (talk) 17:18, 18 May 2020 (UTC)
- I did it. North8000 (talk) 17:35, 18 May 2020 (UTC)