
Talk:Gaussian adaptation/Archive 1


Original research/unverified claims?

Does this count as original research or not? That is a tricky question. The author is published, but the articles are minimally cited and discussed according to research databases. The author describes his theories relating to evolution as "alternative" ( http://sv.wikipedia.org/wiki/Diskussion:Socialdarwinism ) (as user Rogerg) ( http://www.evcforum.net/cgi-bin/dm.cgi?action=msg&f=25&t=1887&m=4 ) (as user gregor). I think the answer is yes, it is OR, mainly because there are really no independent sources that discuss or support the claims that are made. Sjö 14:57, 19 April 2007 (UTC)

Some figures in the link to evcforum above are now visible again.--Kjells 12:03, 23 April 2007 (UTC)

Improvements have been made

The only ”research” made by me concerns two mathematical theorems. The theorem of Gaussian adaptation (here abbreviated GA) was discovered by chance in 1969, see http://www.evolution-in-a-nutshell.se/background.htm[dead link]. Information theory was used already in 1969, but the theorem of efficiency – based on information content – was discovered later, in 1991. The rest is more like discovering that 1 + 1 = 2, using results that are well known, cited and discussed.

I admit that there are unverified claims, so I have tried to fill the gaps by adding references to: Hartl, Kandel et al., Kirkpatrick, Levine, MacLean, Maynard Smith, Mayr and Zohar. Some other improvements have also been made. For instance, in order to avoid self-assertion, my name has been removed from the introduction. But some references must be made to our papers.

I admit that very few authors cite or discuss our work. The only technical paper and dissertation are due to Pinel & Singhal and Stehr. Pinel & Singhal wrote in 1981: “Only one general purpose algorithm has been successfully demonstrated on large examples”, with reference to GA.

In his dissertation Stehr gives a comprehensive discussion on GA. There are also references to Antreich and Lüder who may have references to us.

To my knowledge, the only biologists that have discussed GA are Brooks and Wiley.

The dissertation of my colleague, Taxén 2003, addresses problems with information systems used for engineering information management. One part focuses on the balance between chaos and order in developmental systems. This is also the main concern of the theorem of efficiency in the GA article. References to GA are made.

Gaines (on learning theory, 1997) gives no reference to us, but he arrives at the same conclusion as our theorem of efficiency – based on information content – from 1991: that it is advisable to be successful in about 100/e ≈ 37% of random trials.
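As a toy numerical check (an illustration only, not taken from any of the papers above): if the efficiency per trial is assumed to be proportional to -P·ln(P), where P is the probability of success, the maximum indeed falls at P = 1/e ≈ 37%.

<syntaxhighlight lang="python">
# Toy check (assumed model): an efficiency proportional to -P*ln(P) peaks at P = 1/e.
import numpy as np

P = np.linspace(0.01, 0.99, 981)
efficiency = -P * np.log(P)                 # assumed efficiency measure, up to a constant factor
best = P[np.argmax(efficiency)]
print(f"maximum near P = {best:.3f}; 1/e = {1 / np.e:.3f}")
</syntaxhighlight>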

Fisher’s fundamental theorem of natural selection has been discussed by me and others on different forums for instance: http://www.evolutionisdead.com/forum/viewtopic.php?t=330[dead link] and http://www.iidb.org/vbb/showthread.php?t=163730&page=5&highlight=rogerg[dead link] [ EDIT: Page moved to http://www.freeratio.org/thearchives/showthread.php?t=163730 ]

When it comes to evolution in the brain, the figure visualises the GA computation as a matrix multiplication. It fits very well with the MacLean brain model and with the facts that neurons may add, synapses may multiply and axons may delay signal values. That the updating of the moment matrix of the Gaussian in principle follows the well-known Hebbian theory of associative learning is perhaps not easily seen by the layman. Every such operation must also be uncertain to some degree, and according to Levine, signal values may appear as Gaussian distributed. This is well known from the theory of digital filters and neural networks. Kandel et al. also state that many neurons fire at random. In this sense GA may also contribute to the discussion about free will as an illusion, cp. Zohar 1990.
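As a rough sketch of what is meant by a Hebbian-style updating of the moment matrix (a toy illustration with made-up names, learning rate and acceptance rule, not code from any cited work), the centre m and moment matrix M of a Gaussian can be nudged by outer products of accepted signal deviations:

<syntaxhighlight lang="python">
# Toy sketch: incremental, Hebbian-like update of a Gaussian's centre and moment matrix.
import numpy as np

def update_gaussian(m, M, x, lr=0.05):
    """One incremental update from an accepted sample x (names and rate are arbitrary)."""
    d = x - m
    m_new = m + lr * d                           # move the centre a little towards the sample
    M_new = (1 - lr) * M + lr * np.outer(d, d)   # outer-product (correlation) update, Hebbian in spirit
    return m_new, M_new

rng = np.random.default_rng(0)
m, M = np.zeros(2), np.eye(2)
for _ in range(1000):
    x = rng.multivariate_normal(m, M)
    if x.sum() > 0.0:                            # toy acceptance rule standing in for a region of acceptability
        m, M = update_gaussian(m, M, x)
</syntaxhighlight>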

Even if GA is not discussed and cited very much, it spans many different fields in optimization, science and philosophy.--Kjells 10:46, 23 April 2007 (UTC)

The well-known Finnish brain researcher Matti Bergström developed an entropy model of the brain – based on information theory – as early as 1969. A reference to Bergström has now been included. --Kjells 06:23, 29 May 2007 (UTC)


Creationists have reason to doubt the classical theory of evolution

A discussion about Fisher's fundamental theorem has recently been held at ScienceBlog, where I have been encouraged to publish a new paper about it. Here is my blog post:

Submitted by kjellstrom on Sat, 2008-01-12 03:04. bioscience and medicine

Creationists have reason to doubt the theory based on Fisher's fundamental theorem of natural selection, published in 1930. It relies on the assumption that a gene (allele) may have a fitness of its own, being a unit of selection. Historically this way of thinking has also influenced our view of egoism as the most important force in evolution; see for instance Hamilton on kin selection, 1963, or Dawkins on the selfish gene, 1976, in http://en.wikipedia.org/wiki/Gaussian_adaptation#References

On the other hand, if the selection of individuals rules the enrichment of genes, then Gaussian adaptation will perhaps give a more reliable view of evolution (see the blog “Gaussian adaptation as a model of evolution”).

In modern terminology (see Wikipedia) Fisher’s theorem has been stated as: “The rate of increase in the mean fitness of any organism at any time ascribable to natural selection acting through changes in gene frequencies is exactly equal to its genic variance in fitness at that time”. (A.W.F. Edwards, 1994).

A proof as given by Maynard Smith, 1998, shows the theorem to be formally correct. Its formal validity may even be extended to the mean fitness and variance of individual fitness or the fitness of digits in real numbers representing the quantitative traits.

But, if the selection of individuals rules the enrichment of genes, I am afraid there might be a risk that the theory becomes nonsense, and that this is not very well known among biologists.

A drawback is that it does not tell us the increase in mean fitness (see my blog “The definition of fitness of a DNA- or signal message”) from the offspring in one generation to the offspring in the next (which would be expected), but only from offspring to parents in the same generation. Another drawback is that the variance is a genic variance in fitness and not a variance in phenotypes. Therefore, the structure of a phenotypic landscape – which is of considerable importance to a possible increase in mean fitness – can't be considered. So, it can't tell us anything about what happens in phenotypic space.

The image shows two different cases (upper and lower) of individual selection, where the green points with fitness = 1 – between the two lines – will be selected, while the red points outside with fitness = 0 will not. The centre of gravity, m, of the offspring is marked in heavy black, and that of the parents and of the offspring in the new generation, m* (according to the Hardy-Weinberg law), is marked in heavy red.

http://www.evolution-in-a-nutshell.se/image001.gif[dead link]

Because the fraction of green feasible points is the same in both cases, Fisher's theorem states that the increase in mean fitness is equal in both the upper and the lower case. But the phenotypic variance (not considered by Fisher) in the horizontal direction is larger in the lower case, causing m* to move considerably away from the point of intersection of the lines. Thus, if the lines are pushed towards each other (due to arms races between different species), the risk of getting stuck decreases. This represents a considerable increase in mean fitness (assuming the phenotypic variances stay almost constant). Because this gives room for more phenotypic disorder/entropy/diversity, we may expect diversity to increase according to the entropy law, provided that the mutation rate is sufficiently high.
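A rough Monte Carlo reconstruction of this effect (a toy sketch only; the wedge, the centre of the offspring distribution and the variances are invented numbers, and unlike the figure the selected fraction is not held exactly equal in the two cases) shows how a larger horizontal variance moves m* away from the intersection of the lines:

<syntaxhighlight lang="python">
# Toy sketch: offspring are Gaussian in phenotype space; points inside a wedge
# whose two lines meet at the origin are "green" (selected). A larger horizontal
# variance pushes the centre of gravity m* of the selected points further from
# the intersection point.
import numpy as np

rng = np.random.default_rng(1)

def centre_of_selected(sigma_x, n=200_000):
    x = rng.normal(2.0, sigma_x, n)   # horizontal trait; offspring centred at x = 2 (made up)
    y = rng.normal(0.0, 1.0, n)       # vertical trait
    ok = x > np.abs(y)                # selected: between the lines y = x and y = -x
    return x[ok].mean(), ok.mean()

for sigma_x in (1.0, 3.0):
    m_star_x, frac = centre_of_selected(sigma_x)
    print(f"sigma_x = {sigma_x}: selected fraction = {frac:.2f}, m*_x = {m_star_x:.2f}")
</syntaxhighlight>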

So, neither Fisher's theorem, the Hardy-Weinberg law nor the entropy law proves that evolution maximizes mean fitness. On the other hand, Gaussian adaptation, obeying the Hardy-Weinberg and entropy laws, may perhaps serve as a complement to the classical theory, because it states that evolution may maximize two important collective parameters, namely mean fitness and diversity, in parallel (at least with respect to all Gaussian distributed quantitative traits). This may hopefully show that egoism is not the only important force driving evolution, because any trait beneficial to the collective may evolve by natural selection of individuals.

Gkm

http://www.scienceblog.com/cms/creationists-have-reason-doubt-classical-theory-evolution-15214.html[dead link]

http://www.scienceblog.com/cms/blog/kjellstromk[dead link]

--Kjells (talk) 07:22, 9 February 2008 (UTC)


Gaussian adaptation used for other purposes

I have now been able to see that Gaussian adaptation is used for other purposes. One such algorithm is known as "the Stauffer and Grimson algorithm". See for instance page 2 in

http://lilaproject.org/imatge/_Montse/pub/ICASSP05_xu_landabaso_pardas.pdf[dead link]

Thus, some text and references have been included in the article. --Kjells (talk) 07:31, 9 February 2008 (UTC)

Clarification?

The theorem as stated on this page says:

If the centre of gravity of a high-dimensional Gaussian distribution coincides with the centre of gravity of the part of the distribution belonging to some region of acceptability in a state of selective equilibrium, then the hitting probability on the region is maximal.

This is somewhat vague. Here's my guess as to what it means: Firstly, I'm guessing that "center of gravity" of a region means center of gravity of a bounded region with "uniform" density (weights proportional to volumes, if you like). Secondly I'm guessing that what is meant is just that if you move the mean of the Gaussian distribution from the center of gravity of the region to a different location without changing the covariance structure of the distribution, then the Gaussian assigns lower weight to the region than it did when its mean coincided with the region's center of gravity. I.e. the maximization problem is to find the best location for the Gaussian distribution, NOT the best shape of the region or anything else about the region. Thus "hitting probability" does NOT mean what that term means in the field of stochastic processes. Am I OK so far? Michael Hardy (talk) 00:58, 15 June 2008 (UTC)

Sorry for the late answer and the misinterpretation. I didn't expect anyone to comment on the article. If you read the theorem carefully you will find that it is about two different centres of gravity: 1) the centre of gravity of a Gaussian distribution, and 2) the centre of gravity of the part of the distribution (the same Gaussian) belonging to some region of acceptability, NEVER the centre of gravity of the region. The shape of the region is given beforehand and its centre of gravity is irrelevant. The algorithm called Gaussian adaptation is referenced in the article about genetic algorithms. --Kjells (talk) 14:07, 5 August 2008 (UTC)
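To make the distinction concrete, here is a small Monte Carlo sketch (the rectangular region, the covariance and the numbers are invented purely for illustration). For a fixed region of acceptability and a fixed covariance, the hitting probability P and the centre of gravity m* of the accepted part of the Gaussian are estimated for two choices of the mean m; at the mean that maximizes P the two centres of gravity coincide, and away from it they do not:

<syntaxhighlight lang="python">
# Toy sketch: P(m) is the probability mass of a Gaussian with mean m inside a
# fixed region; m* is the centre of gravity of that accepted mass. Where P is
# maximal, m = m*; elsewhere, m* differs from m.
import numpy as np

rng = np.random.default_rng(2)
COV = np.eye(2)                                   # fixed covariance (arbitrary choice)

def in_region(x):
    # toy region of acceptability: a fixed rectangle
    return (np.abs(x[:, 0] - 1.0) < 1.0) & (np.abs(x[:, 1]) < 1.5)

def hit_prob_and_m_star(m, n=200_000):
    x = rng.multivariate_normal(m, COV, n)
    ok = in_region(x)
    return ok.mean(), x[ok].mean(axis=0)

for m in ([1.0, 0.0], [0.0, 0.0]):                # the first mean maximizes P for this region
    P, m_star = hit_prob_and_m_star(np.array(m))
    print(f"m = {m}: P = {P:.3f}, m* = {np.round(m_star, 2)}")
</syntaxhighlight>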

Tone of the Article

I just ran across this article on Gaussian adaptation. As I've never heard of it before (it's not directly in my field, but fairly close), I decided to read it. The tone of the article does not read like a Wikipedia/encyclopedia article. It reads like somebody desperately trying to get a journal paper published after going through several iterations with a skeptical editor/reviewer. After reading through a few things, it seems like the article has a fair amount of original research in it and is the brain-child of a single devout person. After skimming the discussion page, now I see why. As the author of the article and research himself points out: "The only ”research” made by me concerns two mathematical theorems. ... The rest is more like discovering that 1 + 1 = 2..." I would really like to see this article conform to quality standards, or be deleted altogether. Borky (talk) 13:06, 23 June 2008 (UTC)

It may be that the quality standard can be improved, but I am not sure I will do it. I am getting rather tired of Wikipedia. It does not react in the way I expect when I try different things. Another problem is that Fisher's fundamental theorem of natural selection was misinterpreted for many years. Biologists thought that it proved the maximization of mean fitness by natural evolution. But this was nonsense. The theorem of Gaussian adaptation, GA, proves the maximization of mean fitness at least with respect to all Gaussian distributed characters in a large population, which is better than nothing, I think. If you replace the Gaussian with some other distribution, then you can't prove the maximization of mean fitness any more, and the possibility may be lost until the theorem of GA is discovered again. References have been made to the article in many other articles on Wikipedia and in many philosophical discussions on blogs and different fora. But, if you would like to remove the article, you can do so, of course. I may write about it elsewhere on the internet. Besides, I have no intention to write any more papers for scientific journals than the two already published. The internet is sufficient for me.--Kjells (talk) 17:22, 5 August 2008 (UTC)
The real problem with this article is the lack of secondary sources. The pure reliance on primary sources means that the article is highly technical and lacks anything in the way of context for the general reader (as well as meaning that it fails to establish the notability of the topic -- meaning that it is highly likely that it will end up getting deleted). HrafnTalkStalk 17:40, 5 August 2008 (UTC)


Cite this quote

The theorem of Gaussian adaptation is presented in a pop-science version on Wikipedia and is not a quote. The proof has been given in the technical papers; see Kjellström 1970 and Kjellström & Taxén 1981 in the references. --Kjells (talk) 17:52, 5 August 2008 (UTC)

It used the <blockquote>-tag, making it look like a quote. Regardless, it should be cited to a (reliable) source. HrafnTalkStalk 18:02, 5 August 2008 (UTC) Do either of these two papers contain this articulation of the theorem (or a close paraphrase), or merely the proof? I.e. can this articulation be cited to these papers, or does it constitute original research? HrafnTalkStalk 18:05, 5 August 2008 (UTC)
The paper from 1981 gives definitions of the centres of gravity μ and μ* and a proof of the condition of optimality for GA, μ = μ*. A similar condition of optimality for the moment matrix M reads M = βM*, meaning that M should be proportional to M*. This more complex part of the theorem has been omitted in the pop-scientific Wiki version. The same definitions and proofs are presented in the paper from 1970, together with a discussion of the conditions of optimality. The algorithm was presented as Gaussian Adaptation for the first time in 1991. In the paper by Kjellström 1991 the region of acceptability is more generally replaced by a probability function s(x), and the mean fitness is P. Then the following version of the theorem is presented: "For any s(x) and for any value of P < q, there always exist a Gaussian p.d.f. that is adapted for maximum dispersion. The necessary conditions for a local maximum are m = m* and M proportional to M*. The dual problem is also solved: P is maximized while keeping the dispersion constant." And dispersion is, mathematically, the exponential of disorder/entropy/average information.--81.234.148.221 (talk) 07:18, 6 August 2008 (UTC)
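Read as an iterative procedure, the conditions quoted above (m = m* and M proportional to M*) suggest the following rough sketch (an illustration only; the expansion factor, the sample size and the region of acceptability are arbitrary choices, not taken from the 1991 paper):

<syntaxhighlight lang="python">
# Toy sketch of a Gaussian-adaptation-like iteration: sample from N(m, M), keep
# the accepted points, move m to their centre of gravity m*, and set M
# proportional to their moment matrix M* so that the dispersion keeps growing
# while points still land in the region.
import numpy as np

rng = np.random.default_rng(3)

def in_region(x):
    return (x ** 2).sum(axis=1) < 4.0      # toy region: a disc of radius 2

m, M = np.array([3.0, 0.0]), np.eye(2)     # arbitrary starting Gaussian
beta = 1.5                                 # expansion factor (arbitrary choice)

for step in range(20):
    x = rng.multivariate_normal(m, M, 5000)
    ok = in_region(x)
    P = ok.mean()
    if ok.sum() < 2:
        M = 0.5 * M                        # almost nothing accepted: shrink and try again
        continue
    m = x[ok].mean(axis=0)                 # m <- m*
    M = beta * np.cov(x[ok].T)             # M <- beta * M*  (M proportional to M*)
    print(f"step {step:2d}: P = {P:.2f}")
</syntaxhighlight>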
From this I will infer that the "pop-scientific Wiki version" of the theorem's articulation is original research, not traceable to any single paper. As such it is inappropriate for inclusion per WP:NOR. HrafnTalkStalk 08:26, 6 August 2008 (UTC)
Do you mean that it is o.k. to use the articulation used in the paper by Kjellström 1991?--81.234.148.221 (talk) 13:28, 6 August 2008 (UTC)
Yes. HrafnTalkStalk 15:01, 6 August 2008 (UTC)
I have now made some improvements. The conflict of interest has been removed, and the theorem of GA has been given in its most general form, even though the gradient of mean fitness and average information has not been presented. I also wonder if the corresponding tags may be removed?--Kjells (talk) 08:34, 7 August 2008 (UTC)


Some years ago a Swedish biologist told me that Gaussian adaptation (GA) could not contribute anything new to the theory of evolution; the theorem of Fisher (ToF) was sufficient. I recognized that both theorems were about the increase in mean fitness, but also that they told different stories. I expected ToF to tell the same story as GA in phenotypic space. But there was a difference.

Now, I see no conflict of interest. My interest was to find out why there is a paradox here. I now see why the theorem of Fisher differs from the theorem of Gaussian adaptation (GA), in which only one definition of mean fitness is used. In Fisher's theorem two different definitions are used: 1) the mean fitness of offspring (before selection) and 2) the mean fitness of the parents to offspring in the next generation (after selection). Thus far I have always used the same definition of a mathematical entity when trying to investigate its increase (see the definition used in the article). Therefore ToF tells me nothing about the increase in the mean fitness of offspring from one generation to the next (my main concern), or likewise for the parents. The entropy law is ignored, and without entropy there will be no evolution. GA will also be unable to work properly without a suitable increase in entropy. I look forward to seeing a new ToF that also considers the entropy law.--Kjells 08:38, 7 August 2007 (UTC) (Copied from http://en.wikipedia.org/wiki/Talk:Fisher%27s_fundamental_theorem_of_natural_selection)

You still haven't resolved the conflict of interest, see Wikipedia:Conflict of interest.Sjö (talk) 12:27, 7 August 2008 (UTC)
Agree -- you are too closely connected to the topic of 'Gaussian adaptation' for there not to be a WP:COI. Per this policy you should "avoid, or exercise great caution when: Editing articles related to ... projects ... [you] are involved with" HrafnTalkStalk 12:50, 7 August 2008 (UTC)

The most important sections improved

The most important sections have been improved with references to peer-reviewed papers by Kjellström, 1991 and 1996, accepted by the scientific community. I can't see any more unverifiable claims, or am I missing something?--Kjells (talk) 13:46, 7 August 2008 (UTC)