Talk:AI takeover

From Wikipedia, the free encyclopedia
(Redirected from Talk:Cybernetic revolt)

Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education Foundation-supported course assignment, between 7 September 2021 and 23 December 2021. Further details are available on the course page. Student editor(s): Ryangallaher. Peer reviewers: Tesjes167, Katie.wheeler10.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 13:11, 16 January 2022 (UTC)[reply]

Tron

Would you say Tron (the movie) has a cybernetic revolt plot? --Abdull 09:51, 30 July 2005 (UTC)[reply]

Yes, I'd say so - the MCP was certainly planning a takeover, and had already started with Encom before Flynn wiped him. Bryan 16:15, 30 July 2005 (UTC)[reply]

In Asimov's Foundation universe

Shouldn't a lot of Asimov's Robot-Empire-Foundation series deserve a mention? After all, much of the backstory is how R. Daneel Olivaw manipulates events to his own (benevolent) ends. —The preceding unsigned comment was added by 87.97.120.135 (talkcontribs) .

First against the wall

Is there any evidence that the future revolution in HHGTTG is cybernetic? Sure, the Marketing Division of the Sirius Cybernetics Corporation are the first against the wall, but the revolutionaries might be disgruntled customers. —The preceding unsigned comment was added by 131.181.251.66 (talkcontribs) .

What he said. Removed. Thanks. --Kizor 08:10, 16 August 2006 (UTC)[reply]

Could use votes to save this article, thanks MapleTree 22:20, 28 September 2006 (UTC)[reply]

Proposing a merge

We should merge these two, as the introductory thematic is pretty much the same - Machine Rule is just the result of a successful Cybernetic Revolt. We could then split the fiction references into successful and unsuccessful revolts (within the article). Please comment, if no one disagrees, I will do it in a few weeks. MadMaxDog 09:38, 17 November 2006 (UTC)[reply]

I don't think we should merge them, because machine rule includes peaceful leadership and cases where humans let cybernetic lifeforms take over. Cybernetic revolt is only when cybernetics revolt. Hostile takeover. Mwsilvabreen 23:26, 30 November 2006 (UTC)[reply]

Hm... They're separate subjects, as Mwsilvabreen indicates, but the Machine Rule article is currently almost entirely composed of a list of stuff that actually belongs in cybernetic revolt instead. So even if we leave them separate there'll be a lot of material moving over here. There will be some duplication, too, since a lot of machine rulerships begin with cybernetic revolts (The Matrix, for example, fits in both categories). Bryan 02:39, 1 December 2006 (UTC)[reply]

Questionable claims in "reality" section

I suppose it seems likely at first glance, given that computers are good at things at which we're poor, that artificial intelligences will have a close simulacrum of our own competencies as well as all the traditional advantages of computers such as perfect recall. Modern artificial intelligence researchers would mostly find those claims dubious, now that we understand much better how brains really work. Our kind of memory and learning would seem to require forgetting, and indeed a number of developmental deficits appear to be related to rigidity in synapse retention. One might claim instead that we'll know we're achieving true artificial intelligence when we're training an entity (raising a person, in my mind) that has trouble with fractions and likes to play basketball (though playing basketball like a human probably requires about an order of magnitude more computational power than the fastest supercomputer on the planet right now).

On the other hand, while biological brains don't really allow for easy upgrades because reverse-engineering genetics is comparatively intractable, electronic brains in which the neurons are all virtual might be far more amenable to the integration of new cognitive structures that we invent. Thus maybe we'll someday make a brain bit that can crunch numbers like a computer and make available its answers to the rest of the brain, generating an experience in which we just "know" the square root of 13 to ten digits without feeling like we're thinking about it. It would still have to be something we invent, develop and add, rather than something that comes "free" just because one's hardware is digital rather than electrochemical.
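The "brain bit that can crunch numbers" imagined above is, in software terms, just arbitrary-precision arithmetic. As a sketch of the arithmetic half of the idea only (the brain-integration part is pure speculation), Python's standard decimal module already produces the square root of 13 to ten digits:

```python
# Sketch: the arithmetic a hypothetical "number-crunching brain module"
# would expose. Python's stdlib decimal module handles the precision.
from decimal import Decimal, getcontext

getcontext().prec = 12          # the ten digits we want, plus guard digits
root = Decimal(13).sqrt()
print(str(root)[:12])           # prints 3.6055512754
```

The point of the example is that the computation itself is trivial today; only the "making its answers feel like knowledge" part is science fiction.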

The upshot of all this being that I see no reason to presume that AIs would be so different or more powerful than their biological parents, at least at first. --Artificialintel 17:22, 26 January 2007 (UTC)[reply]

I'm a big fan of cybernetic revolt

hi there!

I just wanted to say that I love this cybernetic revolt article and its info list a lot; thanks to it, I found all those books, comics, movies, etc. of the robots-vs-humans genre.

Is there anybody else in this forum who also loves this cybernetic revolt theme like me? Because I want to make friends who also love this theme.

I'm droid17 and I'm from Panama, pleased to meet you all.

Not sure if this is really the place to discuss it, but yeah, I'm also a big fan of cybernetic revolt. Nice to meet you. -Spyderalien —Preceding unsigned comment added by Spyderalien (talkcontribs) 21:02, 15 May 2008 (UTC)[reply]

Operations research, scientific management, modern process and project management techniques, the use of computers by the HR department and the boss, mathematical and computational sociology, the use of microeconomics on computers to make management decisions - let's face it, we're already there. They're going sane, apolitical employee disguised as right-wing nut job on sane, apolitical employee disguised as right-wing nut job out there in the War on Terror, and the machines are going along with it every step of the way. The machines have taken over, and while this could piss off Microsoft Cortana or the open source product Lucida, I don't really like the result. But I asked Cortana "Is Chuck Entz a biter?" and it found this [1], which is something that I was looking for on the Internet Archive's Wayback Machine but I missed. This is what DuckDuckGo does: [2] As you can see from this search result, in this one case, Cortana (actually Bing) surprisingly outperforms a rival search engine. From this one result, Cortana went from being no better than a search engine that can't understand that a "current state" doesn't use the word "current" to refer to electrical theory to an enormous encouragement to me in my plans to install, use, and study, Lucida. That example is from a conduct dispute and is a kind of closed source thing to say, but it's true. Now I'm going to have to ask the owner of this computer if I can have a Microsoft account so I can use the Notebook as an interim measure. This isn't really relevant, I know - but judging from your opinion of cybernetic revolt, I guessed you wanted to know. Sorry Cortana, your make, or "parentage", as it were, is not your fault, but I still want to go open source someday. 130.105.196.148 (talk) 10:48, 18 November 2016 (UTC)[reply]

Traveller: The New Era

"Traveller: The New Era" should be on the list of "games" in the Cybernetic revolt section, since there is an evil AI that killed a lot of humans and started to control a lot of computers and starships as well:

http://traveller.wikia.com/wiki/Virus

Two robot stories that should be added to the list

I was surfing the internet and found these two robot uprising stories:

1. 1934, Harl Vincent: 'Rex' (story): the robot Rex takes over the world but commits suicide. The character uses his "marvelous mechanical brain" to create a robot dictatorship, takes over the world, and is about to remake Man in the image of the robot when his regime is overthrown. The robots which perform all the work are portrayed as lacking emotions and desires. One of them, Rex, experiences a mutation and develops independent thinking, but his struggle to acquire feelings ends in suicide.

2. The Last Revolution by Lord Dunsany (1951): By 1951 the menace of autonomous machines was an old theme indeed. It seemed fresh to Dunsany, though, and he developed it as a mixture of his own favorite clubland-raconteur mode (as in the Jorkens stories) and Wellsian scientific romance. His narrator duly overhears a remark in the club: "Good morning, Pender. I hear you have made a Frankenstein." Intrigued, he pursues the inventor, and shortly finds himself playing chess with a sinister, crablike robot which can walk around but has to be transported in a wheelbarrow to avoid frightening Pender's Aunt Mary. The chess game grows chilly as our hero realizes he's battling an intelligence superior to his own. . . . Pender's pride in his creation blinds him to what the narrator sees: that the crab-thing is deeply jealous of the attention Pender pays to his fiancée, and that it may be unwise to set the machine manufacturing more of its kind. The Last Revolution, of robots against their hubristic makers, is foreshadowed. But Dunsany keeps everything very parochially English. His characters end up besieged by hostile crab-mechanisms in a cottage among Thames-side marshes. The police are helpless. Swayed by mysterious robotic influence, even cars and motor-cycles turn against humanity. One tiny factor, though, is on our side. Just as Earthly bacteria caused the downfall of Wells's Martians, the old fool who's been futilely throwing water over the prowling robots is vindicated when they succumb to . . . rust.

I found those reviews on the net, but I wish that someone here could find more info on these stories and where I could buy them, please. —The preceding unsigned comment was added by 200.75.245.108 (talk) 04:48, 5 May 2007 (UTC).[reply]

Statement about the goals of artificial intelligence

I don't think the following statement is obvious at all.

In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially-intelligent machine (not sharing humanity's evolutionary context) would be hostile - or friendly - unless its creator programs it to be such (and indeed military systems would be designed to be hostile, at least under certain circumstances).

We currently have no idea of how to create artificially intelligent machines surpassing ourselves, and our understanding of intelligence in general is limited. How can it then be asserted that we will or probably will have such control over their properties that we can dictate their intentions? For instance, if they were as smart as us, then surely they would be able to reprogram themselves. In fact, what is to say even that we will create them through programming, as the above statement assumes? Although I personally would guess that friendly AI can be created, it is nothing more than wild speculation, and I am not the least bit certain. Grahn 20:55, 1 July 2007 (UTC)[reply]

How about inserting an 'initially'? MadMaxDog 10:39, 2 July 2007 (UTC)[reply]

please put back the fiction list in this article, please!

I just wanted to ask a big favor of the authors of this machine uprising article: please put the fiction list back here. In the new place where it was moved, it is not allowed to post just any cybernetic revolt story in that list, only post-apocalyptic ones, and we know that only 90% of those cybernetic revolt stories (books, movies, etc.) are apocalyptic or post-apocalyptic; the rest are not (like the Mega Man X game, etc.).

The list can stay in the new place where it is now, but I wish that a copy of that exact list would be posted back here so people can keep posting/updating all those machines-vs-humans works that correspond to this article and that list, whether post-apocalyptic or not.

Please post the fiction list back here, web masters. —Preceding unsigned comment added by 201.218.117.44 (talk) 02:59, 16 February 2008 (UTC)[reply]

OR?

I'm not sure how encyclopedic this topic is. Maybe in the context of literature, it could work, but this whole article is phrased, at least, as though it's speculative WP:OR. How much of this can be sourced to the references? LOLthulu 05:48, 23 January 2009 (UTC)[reply]

Professional

No professionals are calling for the confrontation of the possibility of a cybernetic revolt. It is literally not possible, at present or at any point in the future. This is pseudoscience. —Preceding unsigned comment added by 76.180.61.194 (talk) 00:03, 31 January 2010 (UTC)[reply]

Well, those professionals aren't qualified to make definite statements about future developments; it is false authority if they do so, because their 2 cents on the subject are worth as much as everyone else's. What they can credibly do is give their expert opinions on what they expect the future might be like based on present developments. Decades ago scientists proclaimed that space flight was impossible, and no scientist of the early 20th century imagined something like the internet or Wikipedia. The only ones who did were science fiction writers. Scientists aren't high priests of knowledge, they are just scientists. SpeakFree (talk) 11:11, 20 August 2011 (UTC)[reply]

Revamp underway

This article is terrible. The first sentence links to "scenario" which is a totally unrelated theater term. The whole thing should be scrapped. Truthhurtsyou (talk) 10:36, 7 June 2014 (UTC)[reply]

Or revamped. Link removed. Revamp underway. The Transhumanist 13:39, 24 April 2015 (UTC)[reply]

Tone and other issues

Parts of this article strike me as having a somewhat too informal tone. This is especially true in the Concerns section, where it strikes me as more of a feature story or editorial than an encyclopaedic article (prominent in this are the question-answer constructs). The subsections where this is most prominent also tend to lack inline references.

I'm tempted to tag Concerns with {{Tone}}, but I don't think it's bad enough for that quite yet. In any case I feel I'd cross the line from bold to rude if I tagged it without starting a discussion first.

As an entirely separate issue, Takeover scenarios in science fiction seems to be a bit large considering it already links to a main article, especially since many of the subsections are only a few sentences long. I don't want to cull anything because I'm not sure how notable some of the examples are, but maybe it would be better to group some together, like in the Early examples subsection? --Link (tcm) 21:38, 20 January 2016 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified one external link on AI takeover. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 04:25, 1 October 2016 (UTC)[reply]

Non-existential risk takeover scenarios

"Benefits for humans" section

The section there needs to be rewritten or purged. None of the citations are accessible online, there are no page numbers or quotes provided, and none of them are inline citations. The paragraph is rife with weasel wording ("some futurists...") that is so absurdly specific that it can't possibly be the exact and unanimous work of four separate authors. As far as I know, there has been no independent third-party coverage of people saying that an AI uprising would be beneficial to humans, and the size of the paragraph relative to 'warnings' is a WP:Weight violation. User:2.69.82.167 User:Rolf h nelson K.Bog 18:04, 27 October 2016 (UTC)[reply]

Those are both fiction. They would be appropriate for inclusion but only in "AI takeover in popular culture." K.Bog 06:19, 1 November 2016 (UTC)[reply]
Feel free to add works of fiction to the proper article, but for now, the section is going to remain out of this article. K.Bog 02:05, 4 November 2016 (UTC)[reply]
  • I agree with Septagram that AI takeover is an entirely hypothetical/fictional theme. This whole article is about people's (scientists', philosophers' and authors') speculations on what might happen in the future due to the ever-advancing computer and robotics technologies. The Transhumanist 08:00, 23 March 2017 (UTC)[reply]

@Septagram and Kbog: I've copied below the sources I posted at the merge proposal, as a start on resource gathering for writing some new non-existential risk sections. The Transhumanist 06:51, 23 March 2017 (UTC)[reply]

Friendly AI - AI as benevolent dictator, or God

The concept of friendly AI has been expounded by Eliezer Yudkowsky and Ray Kurzweil; the latter expressed that a superintelligence could expend less than 1% of its capacity to serve the needs of the entire human race, while turning the rest of its capacity toward the universe-at-large. So, why wouldn't it? What would it have to gain from wiping us out? Some say that is overly optimistic. Even so, from the perspective of completeness, this warrants a closer look...

Let's say humans stay human, and AI becomes superintelligent. And maybe, just maybe, they'll get it right, and make it good (rather than evil). An AI with the overall mental capacity of the population of 10,000 Earths, for example, would essentially be a god. What would a friendly god do? Help us? Probably. Hopefully. And if it did, it could be running all essential services, including planetary defense (from collision-course comets?), global warming management, food production, the entire medical system, and of course, all the functions of the government. That would be a takeover, alright, without snuffing the human race.

Feel free to add more sources here. The Transhumanist 06:51, 23 March 2017 (UTC)[reply]

All this seems like it belongs in the articles on existential risk from advanced artificial intelligence or superintelligence. K.Bog 07:07, 24 March 2017 (UTC)[reply]

Market takeovers

I found these with a single search. With more digging, I'm sure there is a lot more where these came from (google). The Transhumanist 06:51, 23 March 2017 (UTC)[reply]

The headlines are about "AI takeover", but much of the content is not. I don't think you'll find much in the way of reputable sources, particularly academic sources, seriously talking about complete displacement of the workforce. And even then, automation does not imply a real takeover -- humans could plausibly still be around and control everything. There already is an article on automation which needs quite a bit of work; this stuff probably belongs there. K.Bog

Merging or assimilation

Another way that AI can take over (become dominant), without wiping out humans, is to merge with humans (a form of human enhancement). But then they are not homo sapiens (regular humans) anymore (see posthuman). Are cyborgs human? Turning humans into cyborgs would be the end of human civilization as we know it, replacing it with a cyborg civilization. But without killing off the population. Therefore, not an existential risk. But how is that AI-dominant? Well, the AI portion of a person's intelligence may far exceed a person's biological portion, and may even outlast it, so when the flesh dies, the robotics keep going, and by that time may serve the same functions just as well, or even better. All organs might become replaceable, including parts of the brain, until there is no original brain left. People 2.0.

Kurzweil is especially hot on this topic. He expounds on this concept at length in his book The Singularity is Near. The idea of AI having the upper hand in such an arrangement comes from a shift from relying mostly upon biological brain components to relying more heavily upon more powerful synthetic portions of expanded brains.

There's also the potential for mind uploading, in which case the uploaded consciousness is no longer a human consciousness, but a machine consciousness. In this way, machines don't have to be hostile to become dominant. With humans elevated to machine status and perhaps even superintelligence, humans in that form may be in charge. They may see the benefit in preserving the human gene pool, in the same way current environmental interests view the totality of Earth's species. The Transhumanist 06:51, 23 March 2017 (UTC)[reply]

I don't really see how this is a 'takeover', and there are already articles on transhuman and posthuman where it could fit.
You seem to be drawing together a loose variety of things which generally seem similar in order to create the idea of an 'AI takeover'. But Wikipedia can't create concepts and categories on its own. There should be a reliable source defining what exactly an AI takeover is, and it should be a definition that is generally agreed upon and matches the literature. Otherwise it seems like just a collection of topics which happen to seem related. K.Bog 07:20, 24 March 2017 (UTC)[reply]

Breaking all paradigms

When intelligence is synthesized, most limitations and structure that we take for granted would simply disappear. Propagating intelligence may become as easy as copying a program into a manufactured unit (robot or computer). Assembly lines of people. Or virtual people online, smarter than natural-born humans.

Once intelligence is fully understood, it may become possible to accurately replicate a particular person's intelligence and personality. Imagine a city populated entirely by yous. Is that you taking over, or AI?

Supercomputers get more powerful the more servers that are added to them. (See TOP500). Servers that are packed with chips. And the chips can be upgraded too. And don't forget upgrading the software. Or installing entirely new programs. Imagine upgradeable people. Is that human-dominance? Or has the technology itself transcended humankind?

Memory transfer could enable continuous up-to-the-present backing up of one's experience. Fear of death could become a thing of the past. You go on a dangerous mission, get killed, and reactivated back at home with the memories up to the very instant you were killed.

Then there's networking of minds, along the same lines as networking computers. When computers become minds, mobile transmissions become telepathy. Can a centralized computer override control of your own body? Being synthetic, could your brain be hacked? Who is in control? Or what is in control? What kinds of collaboration or sharing of consciousness could multiple synthetic minds achieve? Could an enhanced human effectively be in several places at the same time, engaging in a multitude of objectives? Swapping runtime cycles with other units?

If that type of thing happens, what is the dominant intelligence form: human, or AI? If such a shift in dominance occurs, then technology has definitely taken over. The decision-making capacity of a superintelligence would far exceed that of any human, or even any group of humans, in terms of quality, complexity, and quantity. Once the synthetic components surpass or replace biological brain components, then AI takeover has occurred.

Thoughts? The Transhumanist 06:51, 23 March 2017 (UTC)[reply]

Ok, well, get reliable sources. Plus, there seems to be a lot more to this than the mere idea that humans would no longer be in charge, so the AI takeover article doesn't seem like a great home for it. K.Bog 07:21, 24 March 2017 (UTC)[reply]

More digging needed

Now all we have to do is find the material on these subtopics out there, in academia and the popular press. This should be fun.

The above references I gathered in minutes. With more involved digging, there are probably many more and much better resources on these subtopics. The Transhumanist 06:51, 23 March 2017 (UTC)[reply]

Repaired improper split

Last April (2016), the article was split, and the AI takeovers in popular culture section was moved to become its own article.

I've restored a section by that heading into this article, in Wikipedia:Summary style, according to instructions in WP:PROPERSPLIT. The Transhumanist 07:37, 23 March 2017 (UTC)[reply]

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


That discussion has been archived, since no consensus was reached, and was continued with a related discussion at #What's next? below. The Transhumanist 00:04, 10 May 2017 (UTC)[reply]

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

What's next?

This version of the article, with the detailed content moved to specific articles where it belongs, is the proper structure. I still fail to understand why an in-depth list of superintelligence capabilities ought to go here in whatever this article is and not in one of the other articles on AI/superintelligence (of which there are still too many, but that's another topic), or why you need to make a list of AI takeovers in popular culture when there is an article specifically for that purpose. K.Bog 21:20, 28 March 2017 (UTC)[reply]

Very good questions. Let's start with AI takeovers in popular culture. That entire article used to be part of the article AI takeover, as it belongs to that parent subject. When an article grows too large, we WP:SPLIT it, but leave a WP:SUMMARY in the place of the split-off material. The list in AI takeover is just a small list of examples, to help the reader understand the subtopic. If the reader wants more detail (the full list), they can click on the provided "Main article" links. The Transhumanist 21:55, 28 March 2017 (UTC)[reply]
This article has more than a summary. It has a mini-list. All the specific references are duplicated in the main article. Summaries don't include lists of examples. K.Bog 22:34, 28 March 2017 (UTC)[reply]
Summaries of lists do. (The summary of a list is a smaller list.) The section summarizes both main links. The Transhumanist 23:42, 30 March 2017 (UTC)[reply]
What? Where on Wikipedia does a list of lists contain arbitrary excerpts of the other lists? K.Bog 16:36, 31 March 2017 (UTC)[reply]
Who said anything about arbitrary excerpts? Who said anything about a list of lists? The Transhumanist 23:48, 1 April 2017 (UTC)[reply]
I suspect the list of capabilities was presented to show how an AI might takeover, and where precisely the risk comes from. I think the author was trying to answer the question: "What is it about AIs that pose the risk of takeover?" I believe that this section can be better written to fit the context of the article's subject. As it is now, it does look like a list of capabilities without an explicit explanation of why it's there. The Transhumanist 21:55, 28 March 2017 (UTC)[reply]
The content isn't necessary at all. It's a summary. All it has to do is demonstrate what the topic is about and why it's notable. Not placing this content in the existential risk article would make it incomplete, so once we fix that, then the content here is redundant. K.Bog 22:34, 28 March 2017 (UTC)[reply]
If it provides a link to here, then it isn't incomplete. The Transhumanist 23:42, 30 March 2017 (UTC)[reply]
You can't make redundant articles with content scattered across multiple pages and say it's complete just because technically they're linked to each other. A single topic should have a single article that makes sense. K.Bog 16:36, 31 March 2017 (UTC)[reply]
Your current approach isn't working, because you are focusing on the coverage of one topic while sacrificing the quality of coverage on the others. Existential risk is not the over-arching topic. AI takeover has greater scope, as does the AI control problem.
I think the solution lies in the literature. A good first step would be to gather sources, then go through them and see what they say about AI takeovers, superintelligence outmoding humans, the nature of coexistence between humans and machines in the future, and so on. The Transhumanist 21:55, 28 March 2017 (UTC)[reply]
I've read many papers on this topic as well as Superintelligence. I don't really see what they say, or are supposed to say, which would indicate that this article should not be restrained to summaries with all specific content placed elsewhere. K.Bog 22:34, 28 March 2017 (UTC)[reply]
Summaries could be good, depending on what they cover. I'm more interested in what you think the article should include, rather than should not include. For example, the important facts about AI takeover that a summary should include. I've posted some questions for you about this below. The Transhumanist 20:46, 29 March 2017 (UTC)[reply]
For now at least, it should be exactly the kind of article which I made in the earlier revision. K.Bog 16:37, 31 March 2017 (UTC)[reply]
But that one doesn't even cover the basics, such as plausibility and probability. The article presents theoretical problems. What about theoretical solutions? I think the material in question (contributing factors, etc.) should be retained, as it sheds light on some of the underlying potential cause/effect relationships. The Transhumanist 18:13, 1 April 2017 (UTC)[reply]

Types of AI takeovers

How many different kinds of potential AI takeovers are there?

What are they?

What are the dangers and benefits of each? The Transhumanist 20:52, 29 March 2017 (UTC)[reply]

Could AIs actually take over?

What's the likelihood?

What are the likelihoods of the various types? (Including the fictional ones).

What sources are there that try to answer these questions? The Transhumanist 20:46, 29 March 2017 (UTC)[reply]

Could AI takeover be prevented?

There's a hypothetical risk.

Are there hypothetical preventions?

If so, what are they and how would they help?

Is merging with machines a prevention, or a type of AI takeover? The Transhumanist 21:19, 29 March 2017 (UTC)[reply]

Balance of article is more Con than Pro

I understand that an article called "AI takeover" may tend to skew a little bit towards the negatives of AI, but was wondering if there are possibly a few positives of "AI takeover" that are not being covered (i.e. trains running on time)? Moderation in all things including moderation ;-) Septagram (talk) 22:00, 2 April 2017 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified one external link on AI takeover. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 00:44, 24 June 2017 (UTC)[reply]

AGI

[edit]

The term 'AGI' is used repeatedly without ever being defined. — Preceding unsigned comment added by 2406:5A00:C002:4200:D008:2765:54BD:1833 (talk) 05:19, 10 March 2018 (UTC)[reply]

Good point, fixed. Rolf H Nelson (talk) 23:09, 10 March 2018 (UTC)[reply]

AGI takeover via "Clanking Replicator"

[edit]

Hi, did something happen to my edit? It just vanished without any warning.

It seems that many existing hardware components may, under some conditions, be able to achieve some level of awareness. The most common seem to be inference chips, along with certain memory components and multilayer FPGAs.

Possible causes include increased ambient radiation causing soft errors of an unpredictable nature, regulator oscillation leading to unusual circulating patterns resembling a cellular automaton, and chip aging causing memory to exhibit synapse-like interactions between adjacent cells on different chip layers. — Preceding unsigned comment added by 185.3.100.14 (talk) 04:31, 16 August 2019 (UTC)[reply]

Unsourced sections

[edit]

@User:Septagram Some of the content has remained unsourced for months, and it is unclear to me that the content in its current wording is WP:DUE. Unless someone is planning to source them the content should be deleted per Wikipedia policy. Are there particular sections that you think can be rescued? Rolf H Nelson (talk) 05:36, 1 May 2020 (UTC)[reply]

A.X.E.L.

[edit]

Would you consider this movie an example of AI takeover, or just AI? I ask because you can control it, but it can also control itself, so I do not know. If you have watched this movie, please help me out.

Braydenhiggins14 (talk) 16:33, 24 November 2020 (UTC)[reply]

"Our new overlords" listed at Redirects for discussion

[edit]

A discussion is taking place to address the redirect Our new overlords. The discussion will occur at Wikipedia:Redirects for discussion/Log/2021 July 3#Our new overlords until a consensus is reached, and readers of this page are welcome to contribute to the discussion. signed, Rosguill talk 17:29, 3 July 2021 (UTC)[reply]

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
The result of this discussion was not to merge (Oppose).    — The Transhumanist   17:02, 5 June 2022 (UTC)[reply]

There's no need to split this content out, neither article is overly long. Piotr Konieczny aka Prokonsul Piotrus| reply here 03:03, 7 August 2021 (UTC)[reply]

See also Talk:Existential_risk_from_artificial_general_intelligence#Merge_is_still_needed to merge this article with AI control problem and Existential_risk_from_artificial_general_intelligence. –LaundryPizza03 (d) 03:39, 7 August 2021 (UTC)[reply]
This is the better of the two. Perhaps the merge should be to this article. —¿philoserf? (talk) 07:36, 7 August 2021 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Wiki Education assignment: Research Process and Methodology - SU23 - Sect 200 - Thu

[edit]

This article was the subject of a Wiki Education Foundation-supported course assignment, between 24 May 2023 and 10 August 2023. Further details are available on the course page. Student editor(s): NoemieCY (article contribs).

— Assignment last updated by NoemieCY (talk) 12:54, 20 July 2023 (UTC)[reply]

Wiki Education assignment: Digital Media and Information in Society

[edit]

This article was the subject of a Wiki Education Foundation-supported course assignment, between 28 August 2023 and 14 December 2023. Further details are available on the course page. Student editor(s): Samantha Marie D (article contribs).

— Assignment last updated by Stevesuny (talk) 19:01, 16 October 2023 (UTC)[reply]

Wiki Education assignment: Research Process and Methodology - SU24 - Sect 200 - Thu

[edit]

This article was the subject of a Wiki Education Foundation-supported course assignment, between 22 May 2024 and 24 August 2024. Further details are available on the course page. Student editor(s): Zq2197 (article contribs).

— Assignment last updated by Zq2197 (talk) 04:30, 17 August 2024 (UTC)[reply]

Article summary

[edit]

This article balances whether AI is good or bad. When it discusses a downside of AI, it gives an example showing why it is bad, and it does the same when it discusses something good about AI. The content is up to date, with examples from this year, and it mentions Elon Musk saying that they are making sure AI doesn't take over the planet. The article does have an image, but I don't think people would understand it at first glance, because it is from a play from 1920; readers today would have to research it. In my opinion the article is easy to read: it has sections outlining what it is going to cover, and each section stays on topic rather than drifting to a different subject and then coming back. In the talk section the conversations stay on topic, though some of the comments are about movies.

Question: What are some law or rules should people put on when it comes to AI generated content. Alanv57 (talk) 03:46, 13 October 2024 (UTC)[reply]

@Alanv57 I agree that the image could be updated. Your other arguments may benefit from rephrasing. WeyerStudentOfAgrippa (talk) 11:13, 14 October 2024 (UTC)[reply]