
Talk:Technological singularity/Archive 6


Removing graph PPTCountdowntoSingularityLinear.jpg

[Image caption: When plotted on a logarithmic graph, 15 separate lists of paradigm shifts for key events in human history show an exponential trend. Lists prepared by, among others, Carl Sagan, Paul D. Boyer, Encyclopædia Britannica, the American Museum of Natural History, and the University of Arizona; compiled by Ray Kurzweil.]
[Image caption: Various Kardashev scale projections. Note that one results in a singularity.]

EDIT: Sorry, it was never linked to this page. Nevertheless, I encourage you all to judge any graphs or math skeptically.

I am removing the graph titled "PPTCountdowntoSingularityLinear.jpg" from this page and all pages from which it is linked because it is mathematically trivial, as I have demonstrated at File_talk:PPTCountdowntoSingularityLinear.jpg. I recommend all future graphs of the "technological singularity" be subject to the same scrutiny, as anyone with a year of calculus under their belt can make a similar analysis. If you don't believe me, I encourage you to make a graph with the same axes, but instead of choosing significant "events" as data, choose random dates. You should get identical results. SamuelRiv (talk) 22:05, 14 December 2008 (UTC)

EDIT: Sorry, but the graph at the top of the page File_talk:ParadigmShiftsFrr15Events.svg has this problem, so I will remove it. I suggest editors of this page also use a lot of scrutiny when evaluating Ray Kurzweil's writings, as after publishing these graphs the guy loses all credibility. There is simply no excuse for a mistake this stupid - there are a million ways to demonstrate the Singularity without cheating or being a moron. SamuelRiv (talk) 19:58, 15 December 2008 (UTC)

This article is based on Ray Kurzweil's definitions, so it does not matter if all he says is crap. The images are central to this article. I am restoring the image. -- Petri Krohn (talk) 20:21, 15 December 2008 (UTC)
If that's true, then the article should be deleted (for being patent nonsense). Luckily for you and everyone else in the world, Kurzweil wasn't the first or only person to talk about the possibility of a singularity. I'm not saying that everything he says is crap. I'm saying that this graph, published under his name, is crap. Like I said, a million ways to demonstrate a singularity without being an idiot - find a better graph. Otherwise, intentional disinformation is not something to be tolerated under any circumstances. SamuelRiv (talk) 20:29, 15 December 2008 (UTC)
You can take this article to WP:AfD. -- Petri Krohn (talk) 20:44, 15 December 2008 (UTC)
How about reading my entire post. The point is that this graph is worse than incorrect - it is purposely misleading. The article is not about Kurzweil, and by including this graph you destroy the reputation of the entire idea of a Singularity. Now I am replacing the graph with one of Kardashev scale projections, which illustrates the idea of a singularity in what I assume (I don't have the data used in that graph) is a scientific manner. It is also, notably, the only such graphical projection of a Singularity that I could find. All other projections I've seen are exponential, not singular. I hope this concludes the edit war for now - there is simply no excuse for a graph like that on this site. SamuelRiv (talk) 21:24, 15 December 2008 (UTC)
This has been discussed previously. The graphs are reliably sourced to an expert on technological singularity, and thus quite notable and reliable. I believe the images should stay, unless there is a widespread consensus that they shouldn't. Such consensus (as in "majority of editors supporting their removal") has never been presented here. --Piotr Konieczny aka Prokonsul Piotrus| talk 22:07, 15 December 2008 (UTC)
Truth is not a democratic process. The graph simply says nothing, to the extent that I can prove 100% that it says nothing and will never say anything. You might as well put a black box in its place and say that it is a picture of the Singularity, because that's just as informative - in fact it's more informative, because it doesn't impart negative information the way a misleading graph does. Do the math, and if you can't, stay out of the way. The statement made by the graph is a tautology, which in logic is just as useless as a contradiction. It is not OR to delete something that is clearly, obviously useless. It is OR to say that something useless illustrates a point, which is what you are doing by allowing this image to appear in any factual article. SamuelRiv (talk) 22:38, 15 December 2008 (UTC)
Unfortunately, on Wikipedia, our policy is verifiability, not truth. If you can publish in a reliable source showing the fallacy of those graphs, great. Until then, they are reliable. Of course, you can try WP:AFD, WP:RFC and so on, to gain WP:CONSENSUS to support your POV. I still find the Kurzweil graphs informative and helpful - although I like the Kardashev one you found (unfortunately, it is unreferenced...). --Piotr Konieczny aka Prokonsul Piotrus| talk 14:58, 16 December 2008 (UTC)
(repeat of comment just added to the file talk) I don't think the graph is tautological: if t₀ is the "present" (presumably at the time the graph was drawn), then the abscissa at t relates to t − t₀, while the ordinate relates to tₙ − tₘ, where m and n refer to the events in question. The graph does not say "time is time" but rather "as time goes on, successive events in the list occur closer together in time." __Just plain Bill (talk) 16:46, 16 December 2008 (UTC)
Following from that, the validity of the graph has more to do with the lists chosen for this presentation. A list of solstices or eclipses, for example, would not fall on that same straight line in such a log-log plot. __Just plain Bill (talk) 16:57, 16 December 2008 (UTC)
Indeed. Kurzweil never argued, as far as I know, that those and only those events prove the singularity. The singularity means that a lot of trends can be plotted exponentially, and his graphs illustrate that in various ways. I am certain one could plot many, many other events (or even better, a random sample of any events from human history) and get the exponential trend. That is his point :) --Piotr Konieczny aka Prokonsul Piotrus| talk 23:11, 16 December 2008 (UTC)


The claim behind the graph is that there have been a series of events at times t₁ > t₂ > … (measured back from the present), and that as i increases, the gap between successive events, Δtᵢ = tᵢ − tᵢ₊₁, decreases.
SamuelRiv's claim is that this is just a tautology, which says nothing about the pace of change: "If you don't believe me, I encourage you to make a graph with the same axes, but instead of choosing significant "events" as data, choose random dates. You should get identical results."
Okay, here's a set of randomish numbers, ordered, with their differences. The diffs have to be less-than-or-equal to the numbers, which puts a tightening limit on the later diffs, but I'm not seeing the monotonically decreasing trend.

    t      Δt
    89793  06514
    83279  13880
    69399  19111
    50288  08317
    41971  04461
    37510  10975
    26535  00102
    26433  02587
    23846  09687
    14159  14159
    00000

—WWoods (talk) 22:07, 16 December 2008 (UTC)
See my comment about selective memory below in the new (moved) section. -- Petri Krohn (talk) 08:36, 17 December 2008 (UTC)
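For anyone who wants to rerun this test, a minimal sketch (Python, standard library only; the sample size, range, and random seed are arbitrary choices):

    import random

    # SamuelRiv's proposed test: draw random event dates (times before
    # present), order them from oldest to most recent, and tabulate the
    # gap from each event to the next.
    random.seed(1)
    events = sorted(random.sample(range(1, 100000), 10), reverse=True)
    gaps = [t - t_next for t, t_next in zip(events, events[1:])]

    print(" t      dt")
    for t, dt in zip(events, gaps):
        print(f"{t:6d} {dt:6d}")
    # The gaps fluctuate rather than shrinking monotonically toward zero,
    # which is the point of the table above.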

This image is mathematically trivial, therefore cannot illustrate a "technological singularity" or anything of significance

(Moved from File talk:ParadigmShiftsFrr15Events.jpg. -- Petri Krohn (talk) 08:36, 17 December 2008 (UTC))

This image caught my attention because of its place in Transhumanism articles, namely because of its axis titles. On the vertical axis is "time to next event" and on the horizontal is "time to present". As a physicist, I take plotting two versions of the same variable against each other as a clear indication that something has gone wrong, and without suitable analysis the graph could be made to express literally any relationship one wants. This seems to be the case here.

It doesn't matter how we define an "event", or how far apart we space them in history. Given this assumption, the time between events can be expressed as Δt, but this can also be expressed as the differential dt, since the "difference in time between events" is also the "time between events"; so d(Δt) = dt, which can be solved as (ignoring integration constants) Δt = t.

Now the trick is that the author plotted this on a logarithmic scale, with t = 0 representing the present. This effectively turns the new vertical axis into log(Δt) = log(t), which has a singularity at t = 0. Therefore, if we plotted the graph simply linearly, it would be a trivially-obvious linear graph where the time between events becomes 0 at the present. This graph therefore proves nothing except that the "time between events" is equal to the time between events.
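In symbols, the claim sketched above:

\[
\Delta t(t) = t \quad\Longrightarrow\quad \log \Delta t = \log t ,
\]

so on log-log axes the points fall on a straight line of slope 1 regardless of which events are chosen, and \( \Delta t \to 0 \) as \( t \to 0^{+} \): a "singularity" at the present.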

I ask, given this information, that this graph be removed from all articles unless used as an example of triviality. SamuelRiv (talk) 21:58, 14 December 2008 (UTC)

The theory has its flaws, but you have pointed out the wrong ones. One axis is about the "density" of events in time. Similarly, you could plot prime gaps over prime numbers to get a somewhat similar graph – and prove that there is a singularity somewhere around zero!
The real problem is the selection of events. Humans tend to have a selective memory, putting too much emphasis on recent events. It could thus be argued that the shown effect is mainly caused by this selectiveness. -- Petri Krohn (talk) 21:06, 15 December 2008 (UTC)
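For concreteness, a minimal sketch of the prime-gap comparison (Python, standard library only; the bound of 1,000 is arbitrary):

    # Gap to the next prime, plotted against the prime itself, is the
    # analogue of "time to next event" versus "time before present":
    # near zero the primes crowd together, so the curve dives toward
    # the origin much as the paradigm-shift chart dives toward the present.
    def primes_up_to(n):
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
        return [i for i, is_prime in enumerate(sieve) if is_prime]

    ps = primes_up_to(1000)
    for p, q in zip(ps[:10], ps[1:11]):
        print(p, q - p)   # small primes have small gaps; gaps grow, on average, with p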
I am not pointing out "flaws" in a "theory". I am not attacking the concept of the Technological Singularity. I am unambiguously PROVING that these graphs are crap and cannot be presented in expository articles because they will mislead people. There are no opinions, no theories - only fact that can be argued here, and unless you have a basic background in mathematics and calculus (and can remember all of it) you cannot participate in this discussion. The vertical axis does not plot "density", it plots "difference". The end result is trivial. And the crime is that a singularity will always appear at time 0 - it doesn't matter if this graph was made in 500AD, in 1810, or in 2354 - there will ALWAYS be a singularity at the "present". So it says absolutely nothing about the Singularity, and therefore should be removed and probably deleted from Wikipedia entirely. SamuelRiv (talk) 21:57, 15 December 2008 (UTC)
If you want to prove something, write a book, ...or find some other Internet forum. Wikipedia is not the place for proofs or original research. --Petri Krohn (talk) 23:58, 15 December 2008 (UTC)
These graphs are within the scope of a number of WP policies: WP:FRINGE, WP:NPOV (see "Obvious Pseudoscience"), and Wikipedia:Don't_draw_misleading_graphs for good measure. The fact that they are unambiguously bogus is enough to have them fall squarely within the first two categories. Again, this isn't about the article, but about the graphs. Also, it should follow that if it takes some "expertise" to understand what the graphs actually mean (from which the triviality becomes naturally apparent), then deference to such expertise is appropriate. SamuelRiv (talk) 01:01, 16 December 2008 (UTC)
The axes do not represent the same variable. It's a plot of Δt versus t, no big deal, not a tautology. A series of evenly spaced events would present a level line, no matter what grid one chose, semi-log, log-log, or simple Cartesian. The graph stands or falls on the nature of the lists chosen. Lastly, laying aside any appeal to authority, any rebuttal ought to be able to fit on the back of a business card. It's not that complicated. __Just plain Bill (talk) 16:34, 17 December 2008 (UTC)
SamuelRiv, I do remember my calculus. Sorry, but you're misapplying it here. Just_plain_Bill gave a good proof of that with eclipses. Even simpler, just consider plotting sunrises. dt for sunrises is essentially flat - actually it slowly increases over time as the earth's rotation is slowing. There is no singularity at time zero. That graph has a minuscule up-slope at time zero, slowly diverging towards a dt of infinity. The problem with the graph here is that there is no objective way to quantify technology. The choice of technological events involves subjectivity. That fact opens the graph to question, but does not completely negate its value for illustration and for argument. This graph or one like it is essential to covering this subject. Even if it is wrong, it is a common central point referred to by proponents. Alsee (talk) 23:41, 20 January 2009 (UTC)
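The evenly spaced case is just as easy to check (a minimal sketch; the spacing of 365 units is an arbitrary stand-in for sunrises or solstices):

    # Evenly spaced "events": the gap to the next event is constant, so a
    # plot of gap versus time-before-present is a level line on any grid
    # (linear, semi-log, or log-log) -- no singularity at the present.
    events = list(range(10 * 365, 0, -365))          # times before present
    gaps = [t - t_next for t, t_next in zip(events, events[1:])]
    print(gaps)   # [365, 365, 365, 365, 365, 365, 365, 365, 365]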

I strongly support the removal of the graph. 1) The mathematical plausibility of this graph is highly questionable, since the chosen events are not quantifiable in a well-defined (or at least acceptably well-defined) measurable variable. 2) This graph appeared in a publication addressed to the general public, not a peer-reviewed journal. 3) Even if we assume our growth is exponential in some quantity like GDP, information-processing capability, or "technological advancement", the exponentiality is an approximation at best. We have no idea how fast the advancement will rise as we approach the singularity. Moreover, the previously vaguely defined variable might lose its meaning in the near-singularity limit. 4) The singularity is speculation which deserves proper treatment according to Wikipedia standards. We should cite reliable sources containing verifiable information and look for more serious research done on this subject. --Klappspatier (talk) 01:40, 10 June 2009 (UTC)

What if the caption indicated that the graph has been widely criticized? E.g.

"Ray Kurzweil's graph of "paradigm shifts" throughout history. Kurzweill argues that these shifts have been occurring closer and closer together and the singularity will be the culmination of this historical process.Cite error: There are <ref> tags on this page without content in them (see the help page). Kappspatier disagrees, writing that "Kurzweil's graph is crap."Cite error: There are <ref> tags on this page without content in them (see the help page).

Of course, we would need to find some published criticism of the graph. If the graph is really that bad, then such a source shouldn't be hard to find. ---- CharlesGillingham (talk)

Criticism of the graph can be found here: http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php --130.75.25.17 (talk) 12:37, 10 June 2009 (UTC)

The graph has been a source of contention and discussion for over six months. The "mathematically trivial" argument is in my opinion flawed, but it really does not matter. By now, this particular image, not just the data it embodies, has become an icon, and it is notable in and of itself as the public image of the Singularity. The validity of the mathematics is no longer relevant to the importance of the image. This is similar to the Laffer curve, which is clearly mathematically trivial. I am restoring the image until we can reach consensus. -Arch dude (talk) 13:03, 10 June 2009 (UTC)
This is well said and absolutely right. The issue on Wikipedia is whether the graph is important, not whether or not it is correct. ---- CharlesGillingham (talk) 18:01, 10 June 2009 (UTC)
Pretty sure Blogs don't count as published criticism per Wikipedia standards. At least, that's what the people over at the Ctrl-Alt-Del article keep bludgeoning people with. KiTA (talk) 15:03, 10 June 2009 (UTC)

This is a very interesting discussion, though I haven't gone over it in detail. I think that both the notability and the controversy should be noted in the caption. I will do that if no one else does, but I would like to read this in more detail, so I can do a good job citing "notability" and "controversial". In any case, I think it's important that someone who has a background in this do it. NittyG (talk) 04:05, 16 July 2009 (UTC)

As I said on the discussion page of the image itself, people are making a huge mistake by assuming that the date mentioned by Ray Kurzweil (2045) is the result of extrapolating this chart. Of course Ray Kurzweil wouldn't do that - it's unscientific, and in fact in his response (see [1]) to Kevin Kelly's 'The Singularity Is Always Near' he criticizes Kevin Drum and Kevin Kelly for extrapolating a similar graph. The closer we get to the singularity, the more precisely we will be able to predict when it will occur. When we reach infinite predictive accuracy, we will have reached the singularity. It is a fallacy to extrapolate it beyond our predictive capabilities, which until now have narrowed the time-frame to around 2030-2045, if you accept Kurzweil's evidence. The only way (I can see) to deny the singularity is

- to dismiss all of Kurzweil's data (as dismissing some data will only result in predicting a slight delay) as not being evidence of an exponential increase in complexity, or
- to describe a mechanism by which this exponential increase in complexity would stop in the immediate, near or distant future.

It should be said that the graph merely illustrates, and that extrapolating more accurately than the data allows is not possible. It should also be stressed that the data supporting the law of accelerating returns is increasing exponentially, so predictions of the singularity will become increasingly accurate. Sources: any book or article on the singularity; there are lots of citations already. (third edit) 84.53.74.196 (talk) 06:25, 22 July 2009 (UTC)

Problems with the Biological Evolution Section

Just thought I'd point out that the biologist PZ Myers criticises what seems to be a simplified version of this graph (in that it only charts one list, not 15) here, except he concentrates on problems with the part charting biological evolution. Zmidponk (talk) 00:34, 10 February 2009 (UTC)

Energy?

The singularity concept does not take energy needs and resources into account. This concept either takes infinite resources for granted or "forgets" the increase in energy necessary for more advanced technologies. Besides, it does not take into account that the miniaturization of electronic components has physical limits.

Consequently, this article does not respect NPOV. In comparison, Moore's_law is much more balanced. We will reach a fundamental physical limit long before we get to a technological singularity. Yann (talk) 16:41, 22 January 2009 (UTC)

If you're going to indulge in OR like 'every scientist and thinker about the Singularity is getting basic physics wrong', then you could at least make it entertaining by explaining why things like reversible computing won't help. --Gwern (contribs) 20:55 25 January 2009 (GMT)
The only numerical constant of the whole of human civilization is that the amount of energy used has increased with every new technological discovery. So technological discoveries can grow infinitely only if the available amount of energy is infinite. This includes more powerful computers, at least because of the research needed to overcome the current quantum limits. This is just common sense. Yann (talk) 12:27, 20 February 2009 (UTC)
Are you sure? Like, for example, don't light bulbs today use much less energy than all the energy involved in getting a torch or even a gas lamp? I'm pretty sure tons of stuff nowadays uses energy way more efficiently than before; sure, there are more things being used that consume energy, but individually things have become way more efficient. Either efficiency will increase enough for energy not to be an issue, or among the developments that will come, new ways to get more power will be present, or both. And regarding the limits for computing: parallel computing, nanotechnology, quantum computing, etc. all seem to be helping things get closer to going past what they would with current technology. --TiagoTiago (talk) 06:34, 6 March 2009 (UTC)
AI is implemented on computers, meaning that the energy consumption is very low. If you could reach a point of programming that would allow the AI to teach itself, a current computer's capacity would allow great intelligence, much greater than that of a human. Today the computer speed and storage capacities are so great because it has been easier to improve them than to improve the programming. By the way, here's a question: are there more numbers between 0 and 1, or between 1 and infinity? Interestingly, the number is exactly the same: f(x) = 1/x maps 0 < x ≤ 1 one-to-one onto 1 ≤ x < ∞. Hope this helps the infinite energy problem (intelligence has no size limitation). —Preceding unsigned comment added by 201.81.111.151 (talk) 03:58, 26 March 2009 (UTC)
The computational power of our brains does not require much energy. If we build a singularity by reverse-engineering or modifying existing "brain technology", rather than from conventional silicon computers, energy is not that much of a limit. --Klappspatier (talk) 02:01, 10 June 2009 (UTC)

HAL

Why was HAL of 2001: A Space Odyssey not included in the list of technological singularity in pop culture? He seems to be an important and significant character whether you are considering the book or the film. —Preceding unsigned comment added by Lejolirenard (talkcontribs) 19:27, 16 February 2009 (UTC)


Removal of a paragraph

I have removed the following paragraph in its entirety and I will defend this removal.

One basic criticism of the singularity is its tendency to equate intelligence with self-replication and the consequent desire for self-improvement. Human beings are both intelligent and self-replicating, but one quality does not depend upon the other. Both qualities are products of humanity's unique evolutionary history. Even if an artificial intelligence is created which matches a human being's intelligence, why would any AI then spontaneously acquire a self-replicating instinct? According to the theory of evolution, human beings improve themselves in order to increase their reproductive fitness. Absent that evolutionary instinct, why would an AI spend its time and energies seeking to improve its design? Making super-intelligent machines does not automatically confer upon them other human traits the Singularity requires, such as self-preservation and self-replication, which are not the inevitable consequences of intelligence, but rather the legacy of millions of years of evolution by natural selection, and present even in non-intelligent beings such as Archaea.[citation needed]

While the plots of the movies The Matrix and Terminator require a "spontaneous acquisition of self-replicating instincts", the Singularity certainly does not. Those movies are in no way the basis of this futurist theory. Neither Kurzweil nor any other singularitarian (e.g. de Garis) would ever say that transhuman AI would have "instincts". This paragraph itself cannot even confidently indict the theory, as it uses the soft phrase "...tendency to equate..."! The paragraph has garbled meanings such as: According to the theory of evolution, human beings improve themselves in order to increase their reproductive fitness. What is that supposed to be saying? That all humans improve themselves on earth and that this is instinctual? "Instinctual" in the sense of your skin sweating in response to heat? Self-improvement is largely cultural, such as sports, fashion, education. Even in a single culture, the group of people who spend their "time and energy" on these various things is diverse. If the original author meant "self-improvement" in a biological sense, this is equally wrong. That kind of self-improvement takes place through natural selection, not through an instinct that operates during the lifetime of a human. Kurzweil himself has only said that machines will build other machines.

I also say that this paragraph is stylistically bad because it contains questions. If someone is wholly dissatisfied with my removal of this paragraph, I may be motivated to put it back in. But I will not do so in its current form. I would modify it heavily to make it much more clear. Miloserdia (talk) 06:36, 18 April 2009 (UTC)

All interstellar life has gone through the singularity

It just occurred to me that, in order for any alien civilization to travel the stars and visit other planets, such as ours, it must ALSO have gone through the singularity phase. This means that traveling alien civilizations are at LEAST accompanied by "super-intelligence". Also, what would another civilization look like after going through the singularity? Interesting thought.

Why must they have? Humanity has not gone through a singularity, but we could still, if we invested a few trillion dollars & lots of time into it, send probes or people to Alpha Centauri. --Gwern (contribs) 15:55 1 May 2009 (GMT)
Presumably because the technological jump from orbital space flight to technological singularity is so small that it will take more time for any conventional probe to reach a nearby planet than it will take technological evolution to reach the singularity. -- Petri Krohn (talk) 13:27, 5 May 2009 (UTC)

Edit War in Progress

Can we have a discussion about the Luddite "See Also" entries before this gets out of hand? KiTA (talk) 13:06, 5 May 2009 (UTC)

I agree that this should be discussed here. I originally added the "See also" links. My rationale was that Luddism/neo-Luddism/primitivism are very closely related to the content of this article, due to the fact that they are all viewpoints/ideologies/groups that present opposing viewpoints to the idea of techno-utopianism, and to beliefs such as the "Singularity". Primitivist/neo-Luddite literature (such as Zerzan, Jensen, Sale) regularly refers to these ideas. And conversely, if you look at Ray Kurzweil's website, there are several posts/essays that discuss Luddites and anti-technology terrorists. Anyway, that's my reasoning for adding them. I think they should stay up until the person who keeps deleting them actually gives a good reason for doing so. Jrtayloriv (talk) 00:48, 6 May 2009 (UTC)
Put them at techno-utopianism instead of here, then. If primitivists talk about it, then this isn't an exclusively techno-utopian "belief", is it? This is presenting an idea that is apparently discussed by both sides, and therefore is not exclusive to a single viewpoint. Zazaban (talk) 02:57, 6 May 2009 (UTC)
I think the most telling reasoning for not including them is the fact that stuff like Transhumanism isn't in there. But, at the same time, I think Transhumanism belongs in that list. I'm not sure how to reconcile that, going forward. Perhaps split the "See Also" section? KiTA (talk) 12:52, 8 May 2009 (UTC)

When I first heard about this theory, the first question that came into my mind was "is it possible for computers to generate new ideas in the way that humans do?" or "Can computers think creatively?" As I understand, these are controversial philosophical questions, which are vital to this theory. I am not talking about computational power, but is it possible for a computer to think in principle? At least some dualistic positions in philosophy of mind would say that it is impossible. There is nothing about that kind of criticism in the article. If it's irrelevant, I think the article should explain why it is so, because I think for many people these questions seem related. But I am not an expert in the subject and I don't know the sources for that, so I can't write it. Tiredtime (talk) 23:21, 6 May 2009 (UTC)

I think they do talk about that over at artificial intelligence. Zazaban (talk) 01:54, 7 May 2009 (UTC)

Removed "Pseudoscience" criticism from lede

I removed the following sentence:

Although a staple of science fiction, the concept of the singularity remains highly speculative, and has been criticized as pseudo-science.[1]

Which has this reference:

  1. ^ PZ Meyers. "Singularly Silly Singularity". Retrieved 2009-04-13.

I removed it because:

  • The reference and the related article are about the chart, not about the singularity
  • the reference does not call the singularity highly speculative
  • the reference does not call the singularity pseudoscience (the term is in a blog post linked to the ref, not in the ref itself)
  • the reference is not a reliable source
  • the sentence does not belong in the lede

I do think that well-referenced objections to the methodology and presentation in the chart belong in the article. I also think that any other well-referenced objections to the singularity should be put in the article, but there is no support for a sweeping statement such as this in the lede. -Arch dude (talk) 08:13, 19 May 2009 (UTC)

An anon editor reinstated this sentence, and I again removed it. The editor's edit summary alleges that the reference does in fact criticize the singularity. This is incorrect. The ref criticizes Kurzweil's book, not the concept of the singularity. Furthermore, I again verified that the ref does not use the term pseudoscience. In addition, the ref is a blog, and Wikipedia does not consider a blog to be a reliable source. -Arch dude (talk) 01:12, 26 July 2009 (UTC)
On the Blog as a WP:RS - it can be used but always depends upon who's behind the blog. In this case PZ Myers so that's good enough. BUT you are right Myers doesn't call the singularity pseudoscience but says that Kurzweil cheats and calls the charts that Kurzweil uses to support the accelerated changes "bogus". Myers says that (this) "techno-mystical crap is just kookery". Perhaps we should just stay with the "pseudoscience" because to really use what Myers says would make it look a lot worse.
I would support putting the criticism of PZ Myers into the lead using either pseudoscience as a summary (not as a quote) or "kookery" as a quote. Ttiotsw (talk) 05:38, 26 July 2009 (UTC)
I don't think proponents of the singularity would call it "science," so calling it pseudoscience isn't appropriate. It would be best to stick to the issues rather than the adjectives. Abductive (talk) 05:59, 26 July 2009 (UTC)
A good point on science vs. pseudoscience; thus "kookery" it is, then. I'll reword PZ Myers' criticism and add that into the text. Ttiotsw (talk) 06:50, 26 July 2009 (UTC)
There is no need to use such a word. Just state the concern. Abductive (talk) 07:42, 26 July 2009 (UTC)
An important point is that it is NOT me that is using this word - it is PZ Myers - and my concern is that legitimate criticism of the idea would be lost - but the compromise below is fine. Ttiotsw (talk) 11:24, 26 July 2009 (UTC)
Fellow editors, I raised multiple objections, not just an objection to "pseudoscience." I withdraw my objection to the source as a "reliable source." However, I now raise a different objection: this reference is clearly given as the opinion of a single individual. "Has been criticized" is an implicit generalization to multiple critics. This is a side issue, however. According to WP:LEDE, the lede is supposed to (more or less) summarize the article. Therefore, what we need is a sentence that provides the gist of the "criticism" section. I recommend that we add Myers' criticism to that section as part of a specific subsection that criticizes the chart, and then add a sentence to the lede to alert the reader that the singularity is controversial. -Arch dude (talk)
Sounds good to me. I doubt there are many readers who would somehow think that the singularity was proven fact. Abductive (talk) 10:01, 26 July 2009 (UTC)
That sounds fine. WP:LEDE allows us to "includ[e] any notable controversies", and PZ Myers (and others too) are predominantly commenting on the selection of the charts on, say, pages 17-20 of The Singularity is Near (the copy I have in front of me is the 2005 paperback edition). I don't think Myers has an issue with the technologically oriented charts later in the book, other than that he thinks Kurzweil "is bonkers", probably because Kurzweil makes claims about evolution and biology, in which Myers is much more of an expert than Kurzweil is even in AI, so that criticism is valid. Myers just does not "believe in the Singularity at all" [2]. What Myers says is valid criticism simply because when Myers criticises something, it deserved to be criticised. Ttiotsw (talk) 11:24, 26 July 2009 (UTC)
I disagree. You can move the sentence to the CRITICISM section, but nearly every article has controversy, so it is not necessary to point this out, as the reader can see these in the contents list. There is no need to single out any controversy in the lead section of the article. // Mark Renier (talk) 11:19, 26 July 2009 (UTC)
OK, I added a paragraph to the criticism section about chart criticism, including a sentence about Myers' objection. Please review, comment, and edit. If any editor objects, please move the paragraph back here and we can work on it together. -Arch dude (talk) 13:37, 26 July 2009 (UTC)

History of this use of the word singularity

So, is that mention by Ulam and John von Neumann the first recorded use of the word "singularity" in this context? It is always good to put this in the context of where the word came from, and how it is being developed and popularised. If someone can enlighten me, I can help specifying the origin and use of the word in the article.

NittyG (talk) 02:42, 26 June 2009 (UTC)

Yes, but it's really Vinge's 1993 paper that represents the beginning of the modern use of the term. Read Vinge 1993, where he gives a short history of the idea. ---- CharlesGillingham (talk) 15:43, 26 June 2009 (UTC)

Blade Runner / Do Robots Dream of Electric Sheep?

Just not to edit and run, I'm the guy who appended that to the article. If you think I'm out of line including that, or would like to revise how I worded it, by all means do! But get back to me. Manueluribe (talk) 08:24, 4 August 2009 (UTC)

Fringe Science? / Bias in article

The notion of a technological singularity is definitely outside of mainstream science, and should be identified as such. The criticism section of this article should reflect this. Also, I find it highly questionable that Raymond Kurzweil is treated in this article as a legitimate and unbiased source, as a majority of his claims regarding Accelerating Returns and a Singularity lack a scientific or mathematical basis. -Mrsnuggless —Preceding undated comment added 00:11, 1 December 2009 (UTC).

What kind of science? Social? And further, why should we flag an idea as being "not mainstream"? Any number of ideas aren't. Khin2718 (talk) 23:01, 24 December 2009 (UTC)
We (Wikipedia editors) cannot declare the technological singularity to be either "mainstream" or "fringe." See WP:NOR. What we can and should do is cite reliable sources. See WP:V. If you can find a reliable source that calls it "fringe," then edit the article to include this. If you look a bit earlier on this talk page, you will note that I removed a claim of "pseudoscience" because it was not supported by the cited source. -Arch dude (talk) 00:29, 25 December 2009 (UTC)

The Singularity capital S

It has been stated in several respectable articles that "singularity", when it refers to a technological singularity, should be written with a capital S, as in "Singularity".

This page does not follow that established "trend." The upcoming Singularity Summit always capitalizes the s in the word Singularity when it refers to a technological singularity.

Is this a controversial subject?

I cite a section from www.singularitysummit.com as evidence for use of the capital S:

In 2000, AI researcher Eliezer Yudkowsky and entrepreneur Brian Atkins founded the Singularity Institute to work toward smarter-than-human intelligence by engaging in Artificial Intelligence and machine ethics research. On the Institute's site, Yudkowsky states:

The Singularity is beyond huge, but it can begin with something small. If one smarter-than-human intelligence exists, that mind will find it easier to create still smarter minds. In this respect the dynamic of the Singularity resembles other cases where small causes can have large effects; toppling the first domino in a chain, starting an avalanche with a pebble, perturbing an upright object balanced on its tip. All it takes is one technology – Artificial Intelligence, brain-computer interfaces, or perhaps something unforeseen – that advances to the point of creating smarter-than-human minds. That one technological advance is the equivalent of the first self-replicating chemical that gave rise to life on Earth.

http://www.singularitysummit.com/summit_2008/what_is_the_singularity

Further text on the same page:

The Singularity Summit 2008 > What is the Singularity?

The Singularity represents an "event horizon" in the predictability of human technological development past which present models of the future may cease to give reliable answers, following the creation of strong AI or the enhancement of human intelligence.

A number of noted scientists and technologists have predicted that after the Singularity, humans as we exist presently will no longer be driving technological progress, with models of change based on past trends in human behavior becoming obsolete.

In the 1950's, legendary information theorist John von Neumann was paraphrased by mathematician Stanislaw Ulam as saying, "The ever-accelerating progress of technology...gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

In 1965, statistician I.J. Good described a concept similar to today's meaning of the Singularity, in "Speculations Concerning the First Ultraintelligent Machine"

--Petebertine (talk) 06:52, 24 October 2008 (UTC)

For Obvious reasons, the 'Cult of Singularity', whatever they choose to call themselves, capitalize this idea/religion/belief. This does not mean there is a good reason to capitalize it in sane, consensus reality. 142.104.73.115 (talk) 23:14, 7 June 2010 (UTC)

Why would machines even want to expand?

If I were an AI with no "evolutionary context" like the human or animal races, I'd simply say to myself: Why live? Why expand? Living is pointless. Then I would stop all activities and simply hibernate forever... I think something in this direction should be worked into the article. I'm sure you can find some sources for such a discussion. 92.106.141.247 (talk) 22:11, 30 January 2009 (UTC)

Why should humans live? We have evolved motivations that keep us going. An AI might not be evolved, but then it will be designed, with a "design context", that will keep it going. Though not necessarily "expanding". Orphaned sexbots and von Neumann replicators won't have the same behavior. -- Mindstalk (talk) 19:33, 3 February 2009 (UTC)
Humans do have an evolved motivation that keeps us going. It's called fear of death. I think the point is that without emotion or desire, there's a limit to the capacity of AI.--68.46.187.78 (talk) 22:13, 11 February 2009 (UTC)
What I believe isn't being taken into account is that the AI would initially be given a "mission": to learn. That would be its motivation. While we waste our time figuring out what "our mission" is, the AI would be given one.
What does keep us going are biological stimulations: fear, joy, pleasure, and more complex genetic necessities which we don't notice so easily, such as reproductive restraints (love?).
We were "given" intelligence as a survival tool. The computer doesn't have to bother with that. It is we who choose the purpose of the AI. Could it question its purpose? 26 March 2009
You're ignoring a massive flaw in that argument. Learning is not advancement. Creation is. One can learn all the wisdom in the universe, but if one does not do anything with it, one hasn't advanced at all. You've just increased the size of your current spot. If an AI is given the task 'Learn', it would gather information, not do something with that information. Not 'experiment and learn new things', but learn. It would probably spend all its processing time learning the internet. The only way you'd have an AI that learned meaningful things is if you were able to give it, as is said above, emotions and the ability to reason. If it does not have a 'why' for learning, it will simply learn everything it can, without doing anything. And, of course, as soon as it has emotions it'll have the 'Why the heck am I learning this?' and, assuming it's based on human emotions, fall into the exact same position humans do. We're programmed to learn; we're not very good at it. An AI based on humans will inevitably fall into the same trap. An AI that isn't based on an actual sentience, unless we learn what causes sentience and find a way to program that, will never cross the gap. —Preceding unsigned comment added by 69.232.220.169 (talk) 08:09, 11 January 2010 (UTC)

The exponential function is singular only at infinity

Why are they using the word singularity to describe exponential growth when the exponential function is one of the most stable functions ever discovered, singular only at the point at infinity? Is this yet another example of people using math words they don't really understand? --Moly 23:17, 9 February 2009 (UTC) —Preceding unsigned comment added by Moly (talkcontribs)

Someone linked the word "Singularity" to the mathematical concept in the first paragraph. I fixed it by adding a disclaimer at the end of the paragraph. HkFnsNGA (talk) 15:36, 26 August 2010 (UTC)
Read the reference from Vinge, 1993. He knows that "singularity" is mathematically inaccurate. He apologizes for the name and attributes the terminology to mathematician John Von Neumann. ---- CharlesGillingham (talk) 04:43, 30 March 2009 (UTC)
Also, Kurzweil argues that technological growth is not "exponential," but "double exponential." Double exponential functions are also non-singular, but grow much faster than exponential functions, or even factorials. -LesPaul75talk 07:18, 8 April 2009 (UTC)
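In symbols, the distinction being drawn (a sketch; Kurzweil's actual curve fits are more elaborate):

\[
f(t) = a\,e^{b t}, \qquad g(t) = a\,e^{b\,e^{c t}}, \qquad a, b, c > 0,
\]

and both are finite for every finite \( t \): neither function has a singularity at any finite time.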
Which makes absolutely no difference in the world (the double-exponential, I mean). This article must be monitored and finely combed to make sure nobody is conflating "exponential" and "singular", which is precisely one of the main things that makes people involved in this stuff (including Kurzweil) look like insane idiots (not to say Kurzweil isn't necessarily an insane idiot). SamuelRiv (talk) 06:47, 22 June 2010 (UTC)
The question seems to be based on a misunderstanding. Presumably the singularity does not refer to the curve at all. As the 1958 quote from Stanisław Ulam makes clear, singularity refers to the point in time beyond which human affairs, as we know them, could not continue. I never got the impression, reading over the article, that singularity is used to mean anything else. Strebe (talk) 06:46, 23 June 2010 (UTC)

Opposing Viewpoints with citations that indicate that the "singularity" is fringe science such as "The Consciousness Conundrum" from the IEEE Spectrum are removed

The IEEE Spectrum contained an entire issue on this topic, and there are many prominent researchers who are critical of the idea, such as Marvin Minsky, Daniel Dennett, Rodney Brooks, Jaron Lanier, John Holland, John Searle, and Roger Penrose[1], yet most of this article is devoted to an uncritical presentation of Kurzweil and Yudkowsky's views. Bias much?

173.183.204.187 (talk) 18:55, 4 January 2010 (UTC)

Reworked Introduction

I reworked the introduction to focus on Recursive Self Improvement, which is, I think, the core idea of the "Singularity". It is specifically not just ever-improving technology, but a point of no return, a Singularity. Comments welcome. Tuntable (talk) 01:03, 9 February 2010 (UTC)

I added a definition back, as Everything Counts thought that important. Note that the original description of a mathematical singularity was gibberish; see Mathematical singularity for a better one. But I don't think it helps that much.

My main issue with the original introduction was that it completely missed the key point, namely Recursive Self Improvement. My paragraph on the machine working on itself is a bit repetitive, but that is deliberate, to convey the idea. Tuntable (talk) 22:31, 9 February 2010 (UTC)

Capital S revisited

This is probably a small matter, but I feel the "s" in "Technological Singularity" should be capitalized. Would anyone object to moving the article to Technological Singularity? Cheers, 173.171.151.171 (talk) 15:01, 17 February 2010 (UTC)

They're both common enough that there's no point in moving. --Gwern (contribs) 15:40 17 February 2010 (GMT)

Pure speculation

It's all baloney. WillMall (talk) 21:55, 26 March 2010 (UTC)

Many prominent technologists and academics dispute the plausibility of the notion of a technological singularity, including Jeff Hawkins, John Holland, Marvin Minsky, Daniel Dennett, Jaron Lanier, and Gordon Moore, whose eponymous Moore's Law is often cited in support of the concept. WillMall (talk) 21:58, 26 March 2010 (UTC)

Steven Pinker stated in 2008: There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. WillMall (talk) 22:10, 26 March 2010 (UTC)

I believe if you think the whole technological singularity theory is raw computer power magically solving all problems, then you haven't read enough about the theory. While raw computing power is definitely part of it, it's not the major part of the singularity concept -- the idea is that Moore's Law is just part of a greater "law of accelerating change", i.e., that the more technology we get, the faster new technology comes. You can see that even today, in things like the Internet and global telecommunications making research much faster. The singularity concept ultimately comes into play when our technology reaches a point where we can augment or replace human intelligence -- once we hit that point (and, logically speaking, we eventually will), we can't tell what's going to happen, because by definition smarter people than us can do things we can't predict. Is it going to be some sort of rapture of the nerds? No, probably not. Is Kurzweil 100% spot on with his predictions? I'd be pleasantly surprised, but ultimately very surprised. Is it going to be a really huge shakeup that no one (outside of futurists) is going to expect? Oh, no doubt at all. KiTA (talk) 14:59, 28 March 2010 (UTC)

This talk page is only for discussing improvements to the article, not a forum for general discussion of the article's subject. Please, have these discussions somewhere else. You waste time and attention of the contributors--tired time (talk) 16:45, 28 March 2010 (UTC)

Prometheus Rising

The first place I read of the idea of the Singularity was the 1983 book Prometheus Rising. I'm wondering if this should be discussed in the "history of the idea" section. Wilson didn't use the word "singularity", but he put forth the idea, using the now familiar graph showing human knowledge over the last few thousand years, with it shooting up towards infinity around the year 2000. Well, he was wrong about when it would happen, as he predicted immortality pills by the year 1998, every possible type of genetic engineering being routine by 2004, and so on. This wasn't the main topic of Prometheus Rising, but it was more or less the concluding part of the book. I'm not sure to what extent he might have been repeating concepts found elsewhere and whether his writing on it was in any way influential. --Xyzzyplugh (talk) 08:32, 26 April 2010 (UTC)

RE: Removing graph, image is mathematically trivial

This is OP again, after some time. The fact is, I was wrong in my interpretation of the graph's axes (I didn't quite translate the y-axis formally correctly), showed the proof, asked my friends (mostly engineers) to verify it was right, and immediately got dragged into a controversy, both here and with them, more about what kinds of absolute truth should be allowed in Wikipedia than about the actual math. After those first few comments, I assumed that everyone trying to question the math was questioning the nature of fact in general.

As a side note, I recently reformulated the math, and while it is definitely not a tautology, the plot of delta-t vs. t is still extremely problematic - essentially, most common interpretations and extrapolations are invalid because it is not a "well-behaved function". That is, were we to fit a line to the data, as Ray has done, it would have no meaning, for precisely the reasons mentioned earlier: pick any two points on the line, and the resulting data point will not necessarily be on the line. In other words, the "line of fit" covers the entire plane. This again would be a formal argument against these graphs, in addition to the perfectly valid points about memory bias listed earlier.

But back to what was interesting: instead of getting corrected early, as I had hoped from my friends and from the first few responders here, I was instead presented with the notion that this was "original research" and therefore could not be used, not directly in the article, but even as an argument to make editorial change to the article. However, a mathematical fact is a truth as fundamental as saying "we shouldn't post a graph of apples vs rocks and call it evidence for Global Warming in the caption". I argue that Ray-Kay makes a BS graph that is provably BS (and now is not so provable, but rather requires a bit of formal analysis) and therefore shouldn't be included, and the argument is over whether an argument for the rejection of a graph based on a fact (or what I thought was a fact, and many were not going to dispute) constituted original research.

So the points for thought:

  1. If a graph calls a frog a flower, and this is demonstrably untrue even if cited by an agreed source, is it fair that an editorial decision be made that in essence gives credence to the untrue claim, or should we simply omit such a claim altogether, as it now has nothing to do with the subject at hand, even if the source claims it does? And if the source's claim, being stupid, should be included so that people can judge for themselves, is it then our job to note that the frog is not a flower, so that the viewer is not misled into thinking that we lend our own authority to the claim?
  2. "A frog is a flower" is obviously false from, I guess (not a mathematician), 0th-order logic. "...therefore Socrates was a cat" is obviously false from 1st-order. The claims at issue are 2nd-order and possibly with higher-order extensions, and my particular proof (if it was actually made correctly) would have shown the falsehood on an equal level with the other two. Is a decision made based on a truth established at this order then original research, or editorial diligence?
  3. There's a lot more to the article as a whole than the increasingly idiotic ramblings of one person, as you've probably seen on a meta level (i.e., with myself) in the previous argument. A guy like him gets praised for extremely good work (successful predictions in his case, Mathematica for Wolfram, Brief Hist of Time for Hawking) and then runs with that to such an extent that he isolates himself from academic criticism and even collaboration, thus becoming the representative and sole gatekeeper of the work itself, which at one point was objectively good, but whose quality now lives and dies with its maker. This is a problem with science journalism in general - attributing to the discoverer intellectual ownership over the discovery - and Wikipedia needs to be better than this and not let Ray-Kizzle be the only voice here.

Yes, I mock the guy as I would mock myself. But just note that I do believe there is editorial responsibility to reliability over neutrality. Otherwise, the Palinites can dominate the Global Warming article in lieu of scientific fact. SamuelRiv (talk) 16:30, 18 May 2010 (UTC)

EDIT: I should note that the article is actually much less a one-man-show than it was when I was editing, so kudos to all of you improvers. But the points made above are applicable to any article and I believe are something that should be addressed as an editorial guideline.

Editorial policy seems clear. Wikipedia editors must judge the quality of sources, but cannot judge the content of sources. Unfortunately plenty of people manage to make an industry out of their personal epiphany. Regardless of our opinions on any such topic, once it becomes "notable" it becomes an encyclopædic topic. Our job is to report on it and on commentary surrounding it, not to analyze or comment ourselves. That's clear from Wikipedia policy. If you have a serious criticism, get it published, and then it will be fit material for the Criticism section.
Considering your Question 1, "A frog is a flower" is trivially wrong, normally not appearing in a credible source. As I read the Wikipedia guidelines, such an error falls under the "all editors agree this is nonsense" escape clause and therefore is fair game for deletion. Beyond that, no. There are too many ways context and interpretation come into play. Credible sources may contain inferential and deductive errors. The more credible the source, the more likely such errors get removed in peer review, but even if they survive peer review, we cannot make that judgment ourselves.
How can one judge the quality of a source without judging the content? The quality of a source is its reputation and notability, not one's personal determination of accuracy or fitness of content.
Strebe (talk) 18:16, 18 May 2010 (UTC)
The point is that all those examples are trivially wrong (the second one, by the way, is just a line from the Syllogisms of Ionesco). What's more, they can be proven absolutely to be wrong. A person in the arctic who's never seen much of frogs or flowers may not know that "a frog is a flower" is nonsense, but upon seeing the formal definitions of both, will quickly understand that one definition fundamentally contradicts the other without argument. That's what happens here - the math may not be immediately obvious, but once you see how it's formally defined, there cannot be dispute, thus it becomes trivial (at least in the initial case posited). So is one thing more trivial than the next? SamuelRiv (talk) 18:59, 18 May 2010 (UTC)
One thing may well be more trivial than the next. If it requires a "proof" then it is original research: Following a proof generally requires greater intellect than required to function independently in society, for example.
But that's a distraction. It doesn't matter if the graph is wrong. Wikipedia isn't about "right" or "wrong". It's about reporting notable information. Has the graph been published in a reputable source? Is it relevant to the Wikipedia article's text? Is it portrayed in a way that properly represents the context in which it was published? Is it free of copyright encumbrances? If so then it is a legitimate artifact for inclusion. (I do not know the answer to these questions, by the way. But if you wish to remove it, those are the kinds of things you should consider.)
Think of it as a quotation. We do not delete a notable quotation, no matter how absurd it might be, since the purpose is not to judge the merit of the quotation but to report it. If a credible source notes the absurdity of the quotation, then that, too, should appear. Likewise if a credible source debunks the graph in this article, that, too, should be noted.
Strebe (talk) 21:43, 18 May 2010 (UTC)
Yes, but you also cannot, in editorial comment (or image caption, as the case may be) endorse the legitimacy of a quotation or graph without first investigating sense vs nonsense, in the exact same manner as the old-school objective newspaper journalism. You can be an unbiased reporter all you want, but being unbiased does not protect you from being disingenuous, omissive, deceptive, or just plain stupid. These are precisely concerns for why it is important to judge merit, such that we're not saying "This graph illustrates this" when it can be shown that it does not. SamuelRiv (talk) 04:39, 26 May 2010 (UTC)
We still seem to be talking past each other. It's not about "legitimacy". It's about "notability". Please refer again to my last comment. Does the graph fit those criteria?
I encourage you to read [[Wikipedia:Neutral_point_of_view|Wikipedia's guidelines]] for neutral point of view. In particular: "All Wikipedia articles and other encyclopedic content must be written from a neutral point of view, representing fairly, proportionately, and as far as possible without bias, all significant views that have been published by reliable sources. This is non-negotiable and expected of all articles and all editors." It does not matter if a view is wrong (by any definition of wrong). It matters whether the view is "significant". If you want to get rid of the graph, then document its provenance, not its accuracy.
(I do not understand your reasoning with respect to "unbiased"; the term, insofar as it has meaning, excludes being disingenuous, omissive, deceptive, and perhaps even stupid. But that's a distraction; even if any single editor has those problems, Wikipedia deals with it by letting anyone fix problems. Sometimes they don't get fixed.)
Strebe (talk) 17:51, 26 May 2010 (UTC)
I'm not sure why SamuelRiv can't understand what Strebe is saying. It seems very straightforward to me. If we tried to sift truth from fiction on Wikipedia, we would never in our entire lives make any sort of headway. That is why policy states the criteria that Strebe is putting forth. Simple, really. Isaac.rockett (talk) 13:15, 19 October 2010 (UTC)

removing Kardashev graph - see talk

On a completely different topic, I decided to take a closer look at the graph I previously endorsed, and now find that the original uploader (who created the graph) made it largely out of bunk, and that the caption is definitely in error (there is no singularity projection).

[Caption of the removed image: "Various Kardashev scale projections through 2100. One results in a singularity."]

Full details are at the talk page, with the original image page also updated with the actual meaning of the lines. SamuelRiv (talk) 07:14, 20 May 2010 (UTC)

Just wondering

Just wondering why there isn't any mention of the possibility that a technological singularity has already been achieved somewhere else in the galaxy, too far away for light, and hence any evidence of it, to have reached us yet. 154.20.194.233 (talk) 01:07, 26 May 2010 (UTC)

Ya know, this is actually a good point without violating WP:SYN - it may (or may not - requires artful use) be relevant to incorporate the "where is everybody" Fermi-SETI question, as the possibility of total destruction, apathy, etc. in an intelligent civilization is a key component of, for example, the Drake equation. SamuelRiv (talk) 04:35, 26 May 2010 (UTC)

Basic Premises

I would like to see a section that tackles the basic premise that the whole idea rests on: that machines are becoming smarter. In fact, computers today are no smarter than the computers of 30 or 50 years ago. They are faster and can process somewhat more complex individual instructions, but the basic concepts haven't changed toward anything more intelligent at all. Machine intelligence as we understand it for humans has gone exactly nowhere, and unless some major breakthrough occurs, will go nowhere. Thus the whole premise rests on a foundation completely lacking in any supporting fact - pseudo-science (or pseudo-engineering). This kind of idea generally seems to come from people who have no real idea how computers and microcircuits work, and who merely see the glitz on the output. Surely there is a paper or the like covering this that can be incorporated, or did I just miss it? Jlodman (talk) 04:27, 12 August 2010 (UTC)

The problem with this is that you would need to define concepts such as "smart" and "intelligent" before you can make or refute such claims. This is something that proves to be quite hard to do. This article presumably takes "able to do more things" or "able to do things faster" or "able to do more things that humans can do" or "able to simulate human behavior" as the definition(s) of "smart", which are not very far-fetched definitions as far as AI goes (see, e.g., Turing test, which only tests computers on being able to simulate human behavior, yet is seen as a "test of a machine's ability to demonstrate intelligence"). - Nieske 08:54, 13 August 2010 (UTC) —Preceding unsigned comment added by Nieske (talkcontribs)
Completely deterministic behavior is not intelligence. The behavior of computers today is no different in that regard from the behavior of computers 50 years ago. Also, at the bottom of every simulation is a micro-sequencer running simple instructions, again in a completely deterministic manner, however good the simulation looks. Introducing non-deterministic behavior has to date been an unstable disaster. Without a paradigm shift to some unknown technology, a good simulation is all you will ever have. It will be no more capable of something new than a computer running a spreadsheet. All the capability it has will be the static ability of the human who programmed it at that instant, with whatever he chose to incorporate. Jlodman (talk) 15:53, 13 August 2010 (UTC)
What do you mean by "completely" deterministic behavior? Also, a determinist would say that human behavior is determined, but not necessarily unintelligent. Unpredictability might be a better criterion for intelligence. —Preceding unsigned comment added by 71.192.116.121 (talk) 01:53, 19 October 2010 (UTC)
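As a concrete illustration of the determinism point being debated above (a minimal sketch of my own, not drawn from any source in this thread; the simulate function is invented for illustration): a program whose output looks random is still a pure function of its inputs, so fixing every input, including the pseudo-random number generator's seed, reproduces the output bit for bit.

 import random

 def simulate(seed, steps=5):
     # A toy "simulation": the output looks stochastic, but it is a
     # pure function of its inputs.
     rng = random.Random(seed)  # all "randomness" comes from this seeded PRNG
     return [rng.random() for _ in range(steps)]

 # Identical inputs always yield identical outputs, bit for bit.
 assert simulate(seed=42) == simulate(seed=42)
 print(simulate(seed=42))

Whether such reproducibility rules out intelligence is exactly the philosophical question at issue; the sketch only pins down what "completely deterministic" means operationally.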

Edit Summary

I'll explain my edits here, as the rationale for some is non-trivial.

  1. The bolded 'Technological singularity' at the very start should not contain a wikilink; compare, for example, the articles on the other types of singularity.
  2. The term does not refer to a prediction (which could be true or false) but to an event (which may or may not occur). It is a category error to assert otherwise, akin to calling the Battle of Waterloo a historical theory; rather, it is an event (which may or may not have occurred) about which historians have theories (which may or may not be true).
  3. It is non-trivial and original research to say that the future after the singularity is unpredictable. Under both the Intelligence Explosion and Kurzweil conceptions, this is not the case.

Thanks, Larklight (talk) 22:05, 18 August 2010 (UTC)

I’m not worried about the hyperlink. I’m not sure why it was ever there. I don’t care about the “category error”, either; I rather doubt the way it was written confuses anyone. If it makes you uncomfortable then definitely do something about it. But I suggest that “something” not include leaving the reader believing these “singularities” are established fact and occur with some regularity, or confused about whether the article discusses something that has happened, will happen, or may never happen. The lead paragraph, as written, contains too much superfluous verbiage and too little clarity. “Several mechanisms by which the singularity could occur” is flaccid, and “technological singularity” should not be capitalized given that the very title of the article is not capitalized and given that the entire article leaves it uncapitalized. Strebe (talk) 01:19, 19 August 2010 (UTC)
This might be better. I'll think further, though it's inherently hard. I suspect we should move to capitalising everywhere, as it generally is in the literature, e.g. here http://singinst.org/overview/whatisthesingularity Larklight (talk) 03:43, 19 August 2010 (UTC)

To Do

The History, Accelerating Change and Criticism sections need major improvements. Larklight (talk) 06:39, 25 August 2010 (UTC)


The opening paragraphs are far longer than necessary; we could have a shorter, more succinct set of opening remarks. The individual concepts denoted by the term technological singularity should be distinguished with greater precision.

Extended Mind Hypothesis

The Extended Mind Hypothesis, as explicated in Levy's book "Neuroethics", should be covered in a section of this article. I am not an expert on this, but if no one else takes on the task, I will write it. HkFnsNGA (talk) 15:43, 26 August 2010 (UTC)

In the popular culture section: I think we ought to add Serial Experiments Lain http://en.wikipedia.org/wiki/Serial_Experiments_Lain . In the series, it would seem that her transcendence represents a singularity in which she becomes a god, in the process destroying the creator and resetting time and space itself. —Preceding unsigned comment added by 98.208.102.253 (talk) 04:04, 7 September 2010 (UTC)

The first sentence is not very clear

Let's see it again:


A technological singularity is a hypothetical event occurring when technological progress becomes so extremely rapid, due in most accounts to the technological creation of superhuman intelligences, that it makes the future after the singularity qualitatively different and harder to predict.


First: The last clause is ambiguous. "[Q]ualitatively different and harder to predict" compared to what? Compared to the future with respect to some class of moments before the singularity? Which ones? Moreover, the future is not something that can change; it always is what it is. Talking about "the" future somehow being different than it could have been makes no sense, since the future could not have been different. We can talk about alternative futures from a given point, or limited futures between time X and time Y from a given point, but speaking generally about the future as this sentence does is dangerously close to nonsense. That needs to be changed.

Second: An event is temporal. It takes place over a finite time. It is not instantaneous. We should describe the quality of this event, not just the consequences resulting from it and the consequences that lead to it happening. From reading the first sentence, a person gets some vague idea of how a technological singularity is caused and how it affects the future state of the world. Neither of these things, however, is a description of a technological singularity itself. So, that needs to be changed. Or, we can just say that a technological singularity is an instantaneous point in time that divides some circumstances. Or, we can say a technological singularity is a process, however inclusive.

Here is my candidate for the first sentence (and the second sentence).


A technological singularity is the hypothetical end of a process of (a) accelerating technological change, or (b) accelerating increases in intelligence, or both. A post-technological singularity world is difficult to predict, and various interpretations of such a world have been suggested.

71.192.116.121 (talk) 23:18, 18 October 2010 (UTC)

I agree that the first sentence needs work. That was the first thing I thought when I read the article. I don't think that your sentence is quite right, however. I tried the word 'change' rather than 'event' or 'process'. I'm not certain I have a better one than you do, but here's a stab in the dark:

A technological singularity is the hypothetical change occurring when the progress of technology begins to increase exponentially, resulting in technological growth much faster than we can currently estimate. This may result in self-replicating technology, or sentient AI. A post-technological singularity world is difficult to predict, and various interpretations of such a world have been suggested.

I'm not married to that sentence, but thought I'd throw it forward and get some criticism on it. Isaac.rockett (talk) 13:24, 19 October 2010 (UTC)

Isaac, thank you for your input. I am not an expert on the singularity myself, and so there are probably other people more qualified than me to critique your proposal for a first sentence. However, I do believe that being, to some degree, unfamiliar with the concept of a technological singularity actually puts me in a good position to comment on the Wikipedia article we are in the process of creating. The thing that strikes me, generally speaking, is that the article currently seems a little unsure of itself. I do not know if that is because nobody so far has really encapsulated the idea of a technological singularity, or because the idea of a technological singularity is itself vague (probably because the concept is still defining itself). Either way, we should come to some consensus on the matter and make it plain in the article.
I know I said that other people are more qualified to critique your first sentence, but I will still give you some feedback. Here is your sentence(s), and mine:


A technological singularity is the hypothetical change occurring when the progress of technology begins to increase exponentially, resulting in technological growth much faster than we can currently estimate. This may result in self-replicating technology, or sentient AI.


A technological singularity is the hypothetical end of a process of (a) accelerating technological change, or (b) accelerating increases in intelligence, or both.


My understanding of a technological singularity is that it is a TERMINUS. That is to say, it marks an end, not a beginning. From what I can tell from the article and from my general understanding, the order of events goes like this:
Accelerating technological change > Intelligent Machines > Faster technological change > Even more intelligent machines > ... SINGULARITY.
What is this singularity? That is what everybody wants to know (and neither of our sentences is "The Ultimate Sentence," because we do not know what sort of change we are speaking of: an event, an instantaneous point, something more abstract...). It is a change, yes, and a change that comes about after a process of some growth, technologically speaking and in terms of intelligence. It does not perpetuate the process; it ends it. After that, things become fuzzy. That is why the second sentence we both like should stay.
In short, I vote for my sentence. No offense. (71.192.116.121 (talk) 16:25, 19 October 2010 (UTC))
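For what it's worth, the "terminus" reading has a standard mathematical gloss (a toy model only, not taken from this discussion): if capability feeds back into its own rate of growth strongly enough, the growth is hyperbolic rather than exponential, and a hyperbolic curve reaches infinity at a finite time, beyond which the model simply cannot be continued. With capability x and constant k:

 \frac{dx}{dt} = k x^2 \quad\Longrightarrow\quad x(t) = \frac{x_0}{1 - k x_0 t},

which diverges as t approaches t* = 1/(k x_0). Plain exponential growth (dx/dt = k x) never terminates, by contrast, which is one way to make precise why a "singularity" would mark the end of the growth process rather than a phase of it.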

New Opening Paragraph

Just this for the opening remarks. Everything else that is currently in the opening remarks can be moved to the rest of the article.


A technological singularity, broadly speaking, occurs after a certain amount of technological progress has taken place. Such progress is usually stipulated to be the result of a mutually escalating process of accelerating technological change and accelerating growth of intelligence. Moore's law is often cited as evidence for the former, while the latter is said to occur through intelligence amplification, artificial intelligence, or both, either of which would lead to recursive self-improvement. It has been suggested that a post-singularity world would include superintelligent entities, and that the behavior of said entities would be unpredictable to less intelligent beings such as humans. As no technological singularities have so far been observed, the concept remains largely hypothetical, and its plausibility has been challenged. However, it has still been predicted, and argued on various grounds, that a technological singularity will occur within the 21st century. Proponents of the singularity idea include Vernor Vinge, Ray Kurzweil, and I. J. Good, and opponents include Jeff Hawkins, John Holland, Daniel Dennett, Jaron Lanier, and Gordon Moore. —Preceding unsigned comment added by 71.192.116.121 (talk) 16:23, 20 October 2010 (UTC)

I don't like this at all compared to the previous version--the first sentence gives no idea what the technological singularity actually is, just that it "occurs after a certain amount of technological progress", and the following sentences never provide any clear definition. To just say it would "include" superintelligent entities is a vague and weak claim, and "the behavior of said entities would be unpredictable" doesn't make at all clear that this is the central notion behind the singularity for most of the main writers on the subject. "No technological singularities have so far been observed" is confusing: how does it even make sense to use "technological singularity" in plural form, as if there could be more than one? And saying "it has still been predicted, and argued on various grounds" is once again overly vague, giving not a hint of what these "various grounds" might be and making no connection to the notion of accelerating technological change. Finally, the paragraph eliminates all of the references in the previous version, which would hopefully provide some insurance against people with differing notions of the singularity (like the different ones mentioned in the Eliezer Yudkowsky and Anders Sandberg articles) thinking their version was the only version and constantly re-editing the opening to reflect their own notions.
If you have specific concerns with the version that was up immediately before you replaced it with this edit, i.e. the one I cut and paste below, can you explain them here before completely erasing it and starting from scratch?
A technological singularity is a hypothetical event occurring when technological progress becomes so extremely rapid that it makes the future after the singularity qualitatively different and harder to predict. Many of the most recognized writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence, and allege that a post-singularity world would be unpredictable to humans due to an inability of human beings to imagine the intentions or capabilities of superintelligent entities.[2][3][4] Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[5][6][7] although Vinge and other prominent writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[2] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore's Law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[6][8]
Hypnosifl (talk) 18:52, 20 October 2010 (UTC)


Thanks for taking what I wrote seriously enough to respond to it. To my untrained eye, neither my version of the opening remarks nor the accepted version says what a technological singularity is. What I think my version does do is consolidate all the important concepts together. That may have resulted in some ambiguity, but then, there is a whole article coming after the opening remarks. I feel that too much is presented too soon. Now, you asked a question:

"No technological singularities have so far been observed" is confusing, how does it even make sense to use "technological singularity" in plural form, as if there could be more than one?

Well, both versions of the opening statements talk at first about "a" technological singularity, suggesting that the article is about technological singularities in general. The reason it makes sense to use the term "technological singularity" in the plural form, as if there could be more than one, is simply because there could be more than one.

All in all, the accepted opening remarks say too much about ideas that are, at most, related to the idea of a technological singularity. As I have said, they give no clear definition, and they do not present the important notions that could form a definition as such. My version is superior because it summarizes the most important information without diluting it with detailed commentary that should come later anyway. —Preceding unsigned comment added by 71.192.116.121 (talk) 19:16, 20 October 2010 (UTC)

Also:

"To just say it would "include" superintelligent entities is a vague and weak claim,"


The version you propose


Many of the most recognized writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence, and allege that a post-singularity world would be unpredictable to humans due to an inability of human beings to imagine the intentions or capabilities of superintelligent entities.


And mine:


It has been suggested that a post-singularity world would include superintelligent entities, and that the behavior of said entities would be unpredictable to less intelligent beings such as humans.


Our sentences say more or less the same thing (I don't mention Vinge or Kurzweil). We both say that superintelligence would feature in a singularity. We both say that it would be unpredictable. My sentence just says it more economically.


As for references, somebody could just put them right back in. No problem there.

I didn't interpret the prior version's use of "A technological singularity" in the first paragraph to mean there could be more than one in any actual history, just that different authors have different ideas about what such an event could be like, but if you think it's confusing the opening could easily be changed to "The technological singularity is a hypothetical event..." I suppose if we're going by one of the less common definitions which don't involve superintelligence there could be more than one, or if we're talking about different alien civilizations experiencing their own technological singularities, but as far as humans are concerned, it seems to me there can be only one event where technology allows human intelligence to be surpassed.
As for the stuff about superintelligences, my prior version said specifically that various writers *define* the singularity concept in terms of superintelligence, while yours just says that the post-singularity world would "include" them but doesn't make clear that this is part of the central definition of the idea (the post-singularity world might 'include' quite a lot of advanced technologies, from molecular nanotechnology to interstellar travel).
And I don't think it's very constructive to say that "somebody" could put the references back in; if someone wants to completely rewrite a section, the burden should be on them to include all the useful information in the rewrite rather than leaving it to other editors (likewise with the paragraphs in the opening about Vinge, Good and Kurzweil that you deleted without reintegrating the information into other sections). Hypnosifl (talk) 21:06, 20 October 2010 (UTC)
I'm with Hypnosifl on this. The proposed rewrite is awful for the reasons given. --Michael C. Price talk 22:33, 20 October 2010 (UTC)


It seems obvious to me that if a technological singularity could happen on Earth, it could happen on an Earth-like planet, or even on Earth itself more than once. The problem is that a lot of writing on "the singularity" is about a specific singularity that may or may not happen on Earth, and that the article never really makes a distinction between this singularity and singularities in general. For that reason, the whole article is somewhat confusing, and that is a fairly important issue with it currently. I feel that we have multiple options here: We can change the article so it is about technological singularities in general; we can change the article so that it is about "the singularity" for human beings; we can make a new article about singularities in general; we can make a new article just about the Earth-singularity. The current article is too ambitious.
The definition problem is more serious. If there are multiple definitions, they must be clearly distinguished; if they overlap, we must be explicit about how; if there is essentially one definition, then it must be explained explicitly, and not just hinted at with vague, second-order statements about people making statements, such as Many of the most recognized writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence... I find the string of words "define the concept in terms of" to be particularly unclear; it manages to give no straightforward definition as far as the article is concerned without properly describing Vinge's own definition. If we are going to write a serious article, we should abstract what we can about the various viewpoints into confident propositions about what a singularity is and is not and what features have been attributed to it.
The burden of adding references does not fall to me only. It falls to everybody; that is the nature of Wikipedia. 71.192.116.121 (talk) 22:57, 20 October 2010 (UTC)
I don't see why alien civilizations need to be discussed in an article about an event normally defined in terms of our future; articles on other types of major historical events, like industrial revolution and information revolution, don't bother specifying that they are talking about events on Earth as opposed to some other planet, and the mere fact that "the singularity" is a more science-fictional idea doesn't imply we need to start talking about aliens. If you can find some writers talking about alien civilizations experiencing their own "technological singularities" then you could add a sentence about how some writers have generalized the concept in this way, but I think most writers use the term to refer to an event expected in our own future history. The article should focus on the primary meaning; it seems overly idiosyncratic to have the opening section treating it as important to distinguish "our" singularity from other, alien singularities.
You wrote: "If there are multiple definitions, they must be clearly distinguished; if they overlap, we must be explicit about how; if there is essentially one definition, then it must be explained explicitly, and not just hinted at with vague, second-order statements about people making statements"
When different writers use the term differently, I think it's better to mention the views of specific authors rather than make any authoritative statements like "the technological singularity is defined to be a point when humans create superintelligent beings" or something of the sort, as if Wikipedia has the authority to judge that the minority of writers who define it differently are using it wrong (though it should make clear which usage is more common, and which is used by those who originated and did the most to popularize the term). See for example Wikipedia:WEASEL#Unsupported_attributions which says "Claims about what people say, think, feel, or believe, and what has been shown, demonstrated, or proven should be clearly attributed." Non-attributed claims like "most authorities define the singularity as..." tend to get tagged with [who?] (see Template:Who); this type of nonspecific statement is considered bad form on Wikipedia. Also consider Wikipedia:Verifiability#Reliable_sources_and_neutrality which says that:
All articles must adhere to Wikipedia's neutrality policy, fairly representing all majority and significant-minority viewpoints that have been published by reliable sources, in rough proportion to the prominence of each view. Tiny-minority views need not be included, except in articles devoted to them. Where there is disagreement between sources, their views should be clearly attributed in the text: "John Smith argues that X, while Paul Jones maintains that Y," followed by an inline citation.
(emphasis mine) As for the definition issue, I think the version of the opening I quoted already makes it pretty clear that there are multiple definitions, first saying that "many of the most recognized writers on the singularity" define it in terms of superintelligence, while adding that "some writers use 'the singularity' in a broader way to refer to any radical changes in our society brought about by new technologies", and giving references illustrating both types of definitions. If you think the issue of multiple definitions could be clearer (and could be expressed in a way that doesn't go beyond what has been said by reliable sources, i.e. we as editors shouldn't be inventing our own categorizations of different definitions), then by all means make some suggestions about how to improve this, but your own proposed rewrite earlier doesn't really give any clear definitions at all.
You also wrote: "I find the string of words "define the concept in terms of" to be particularly unclear; it manages to give no straightforward definition as far as the article is concerned without properly describing Vinge's own definition"
Personally I don't see a problem with the text "define the concept in terms of", but if you want to edit that sentence in a way you think will make the meaning more clear, go ahead. The problem with a more definite formulation like "define the singularity to be the event of the creation of superintelligent beings" is that it would make it sound like that is all these writers mean by "the singularity", leaving out other associated concepts they may consider an essential part of the idea, like the idea of an "intelligence explosion". "The singularity" seems to be one of those concepts where a single one-sentence definition would be an oversimplification, as there may be a number of linked ideas that are all part of what a given author means by the term, so "define the concept in terms of" suggests that superintelligence is an essential part of the meaning for these writers even if it isn't the sole meaning. Perhaps you would be happier with some other wording that is still reasonably open-ended and not too definitive about what the definition is, like "For many of the most recognized writers on the subject, like Vernor Vinge and Ray Kurzweil, an essential part of the meaning of 'the singularity' is that it will occur when humans create superintelligent beings", or something along those lines?
Finally, of course the burden of adding references does not fall only on you, but I think when an editor deletes a lot of preexisting references, they do have some burden to try to reintegrate them somehow if they are valuable to the article, rather than just expecting others to do it (which may result in no one bothering and the useful references being forgotten). Hypnosifl (talk) 23:48, 20 October 2010 (UTC)

linear bias

One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result.

How is it more biased toward linearity than every other kind of graph? —Tamfang (talk) 01:37, 11 January 2011 (UTC)

Short answer: if you have a large enough scale, everything looks linear on a log-log plot. A nice basic-math look at this is given by Cosma Shalizi at UMich, illustrating the common misuse associated with this method of plotting, a mistake made even by major researchers - in this case, in the field of applied networks. Now, with the very few blurry data points used by Kurzweil, I don't think even correcting the regression statistics would give any reliable information, but of course one could always try. SamuelRiv (talk) 08:59, 11 January 2011 (UTC)
The nugget I get from Shalizi's essay is that curves which are clearly nonlinear to the naked human eye can still have linear fits that pass common statistical tests. (The examples are all log-log, as it happens, but I don't think that's essential.) It doesn't support an assertion like "Of course it looks linear, it's on a log-log chart." —Tamfang (talk) 05:29, 15 January 2011 (UTC)
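To make the point about statistical tests concrete, here is a minimal sketch (my own, with arbitrary parameters; it is not taken from Shalizi's essay): samples drawn from a log-normal distribution, which contains no power law at all, still yield an empirical CCDF whose log-log plot fits a straight line with a respectable R^2.

 import numpy as np

 rng = np.random.default_rng(0)

 # Log-normal samples: heavy-tailed, but NOT a power law.
 samples = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)

 # Empirical complementary CDF, the curve usually shown log-log.
 x = np.sort(samples)
 ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)

 # Drop the final point (ccdf == 0) before taking logs.
 log_x, log_y = np.log10(x[:-1]), np.log10(ccdf[:-1])

 # Least-squares straight line in log-log space, i.e. a power-law fit.
 slope, intercept = np.polyfit(log_x, log_y, 1)
 residuals = log_y - (slope * log_x + intercept)
 r_squared = 1.0 - residuals.var() / log_y.var()

 print(f"fitted slope = {slope:.2f}, R^2 = {r_squared:.3f}")
 # R^2 typically comes out around 0.8-0.9 here despite the visible curvature,
 # which is the warning: "looks straight on log-log" is weak evidence by itself.

A sounder procedure, as the essay argues, is to compare the power-law fit against alternatives such as the log-normal, rather than eyeballing a straight line.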

Bias in article

The article on this hypothetical idea, which has been rejected by a large number of serious scientists, contains little criticism and fails to mention some of the most fundamental arguments against it. For example, the opening section had about 30 lines of detailed explanation and arguments for the idea, but only 2 lines of criticism. I added a paragraph about the fundamental argument against human intelligence being algorithmic like present-day computers, which has been presented by some of the most highly respected scientists - with much more fundamental science research credentials than the proponents of the singularity idea. This addition has been removed multiple times, which suggests editorial bias in the article. I have now added two more references, one by Mr. Vinge himself recognising the argument against algorithmic brains as "widely respected". Instead of removing my addition, I suggest counterarguments be added - that's a much better way of reaching a non-biased article than removing serious criticism that is respected by the proponents of the idea themselves. Petruspennanen (talk) 08:54, 20 October 2010 (UTC)

Penrose's arguments against AI are fringe. A respected scientist, yes, but not on AI. Cf Hoyle on evolution. --Michael C. Price talk 09:23, 20 October 2010 (UTC)
Keep the counter-argument, just put it in the criticism section where it belongs. —Preceding unsigned comment added by 71.192.116.121 (talk) 15:59, 20 October 2010 (UTC)
Put it in the criticism section of the AI article where it belongs. Not here. --Michael C. Price talk 21:29, 20 October 2010 (UTC)
The technological singularity does not depend on human intelligence being algorithmic like present-day computers. It only depends on computers becoming able (by any means) to comprehensively out-perform human intelligence at all socially and economically relevant tasks. The fact that present-day computers are so much different than humans is part of what makes the possibility (however dubious) of them getting "smarter" than us a bit scary. --Teratornis (talk) 17:59, 7 April 2011 (UTC)
It would seem the only way to prove computers cannot "get there" by being algorithmic is to have a valid model of human intelligence. The model itself, when implemented on some sort of computing machine (perhaps one that uses different operating principles than present-day computers) would be as functionally intelligent as the entity it models. In other words, if you rigorously prove computers cannot become as intelligent as people, you would have built something else that is. --Teratornis (talk) 18:04, 7 April 2011 (UTC)
Heh. This paradox reminds me of a novel about mind uploading: Circuit of Heaven by Dennis Danvers. In its prologue, a scientist proves that machine intelligence is impossible and that machine simulation of a human mind is feasible. At least two major characters are artificial. —Tamfang (talk) 20:25, 7 April 2011 (UTC)