Talk:Three Laws of Robotics

Three Laws of Robotics is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
This article appeared on Wikipedia's Main Page as Today's featured article on July 5, 2006.
Article milestones
Date | Process | Result
May 1, 2005 | Featured article candidate | Promoted
December 18, 2009 | Featured article review | Demoted
November 27, 2010 | Peer review | Reviewed
December 24, 2010 | Good article nominee | Not listed
Current status: Former featured article

The third law

What is the usefulness of this law? The first two laws make sense as crucial requirements, but the third? 82.77.116.227 (talk) 09:59, 16 January 2011 (UTC)[reply]

So you buy a robot to protect your bank and one to help around your house. You come home and find the robot in the garden on the ground sparking all over from where it was hit by a falling tree while weeding. You pay £1000 to get it fixed thinking "stupid robot - why did it just stand there while it got squashed?". A few days later a burglar comes to your home and tries to break in. The robot sees him and sits there while the burglar bashes its positronic brain in and then steals all your stuff. The next morning you go to your bank and find that the same burglar decided to break into the bank where he bashed in the brains of all the robots and walked off with all your gold bullion. You say to yourself "Wow - I wish the robots had tried to protect themselves by running away and alerting me, the police or someone" Chaosdruid (talk) 10:24, 16 January 2011 (UTC)[reply]
Also I remember reading (long ago so I have no source) that Asimov wanted exactly three laws in imitation of the Three laws of thermodynamics. As a biochemistry professor, he was very aware of thermodynamics. Dirac66 (talk) 12:28, 11 September 2012 (UTC)[reply]
Yep, and ditto for Newton's three laws. But the thermodynamics analogy is deepened by Asimov later adding in a zeroth law (later both in his writing career and, more importantly, "later" in the in-universe sense). 2A01:CB0C:CD:D800:A1A9:5881:DD7C:11F9 (talk) 14:20, 29 January 2023 (UTC)[reply]

Three laws in real life

According to the back of The Complete Robot, the laws were once programmed into real computers at the Massachusetts Institute of Technology with "interesting" results. What were these results, and are they notable? Dalek9 (talk) 16:23, 17 March 2011 (UTC)[reply]

LOL! As of 2014, there is no (known) robot capable of higher (abstract) thought. That is, there is no platform on which implementation of these "Laws" is possible. 173.189.78.173 (talk) 12:43, 4 September 2014 (UTC)[reply]
Exactly, it will be a long time before there is AI capable of understanding these laws the way we humans do. Till then, everybody can muck about with simulacra of their own choosing, which may be interesting, in the way any other pastime is interesting. Whoever wrote the blurb wanted to emphasise the obvious connection between real-life robotics and Asimov: one would be hard-pressed to find a roboticist who hasn't read them as a child! 2A01:CB0C:CD:D800:A1A9:5881:DD7C:11F9 (talk) 14:18, 29 January 2023 (UTC)[reply]

Original Research

(Moved from my talkpage:) When you reviewed the article there were several areas where you thought that OR was prevalent. Is there any chance you could take a quick look and tell me whether or not you think those areas have been addressed? My intention is to put it up for GAN again in the next 4 weeks, so I would appreciate your input in particular. Thanks Chaosdruid (talk) 00:47, 17 June 2011 (UTC)[reply]

I'll respond on the article talkpage. SilkTork *Tea time 08:34, 17 June 2011 (UTC)[reply]
  • Having dipped into a few sections of the article I found some statements which are not securely cited. I tidied a few statements and have tagged a few others as examples. This was fairly random and should not be seen as comprehensive. Sorting out the tagged statements may not be enough, and it would be appropriate to go through the entire article carefully. I still feel that the more appropriate approach would be to scrap this article and start again from scratch. I think the framework and focus of the article are grounded in OR, and encourage editors to join in with opinion and speculation based on personal observation and knowledge. What is required are appropriate reliable sources which significantly discuss the Three Laws. Comments based on editorial observation of various novels, and comparisons made between them, are OR and are to be avoided. SilkTork *Tea time 08:58, 17 June 2011 (UTC)[reply]
At the start of the section Three_Laws_of_Robotics#History_of_the_Three_Laws you have placed a {{cn}} on "Before Asimov began writing, the majority of artificial intelligence in fiction followed the Frankenstein pattern,[citation needed] one that Asimov found unbearably tedious:"
Can you tell me why you think the quote and ref which follow it are not good enough? Chaosdruid (talk) 13:00, 18 June 2011 (UTC)[reply]
@Chaosdruid: I agree with you, and I believe that this section adequately explains the Frankenstein reference. I'm going to remove the {{cn}} until a good explanation for it is provided. HappyGod

Roboassessment - B class

I have reinstated the Robotics project "B" class. If anyone who is not familiar with the Robotics assessment scale would like to check it out before deciding whether they should reassess the article for us, they would note that the article clearly falls within the B class parameters. If they feel those parameters need changing, there is a forum for discussion at the Robotics project. I did not make those guidelines, but I am following them. Chaosdruid (talk) 10:06, 10 February 2012 (UTC)[reply]

In Translation

Very good Wikipedia editorial cabal: You've replaced Asimov's own words with your own inaccurate paraphrase of them, in addition to removing the French words that provide the evidence for your claim. Well done. Just what I would expect.

See http://en.wikipedia.org/w/index.php?title=Three_Laws_of_Robotics&diff=next&oldid=607488250

Modern Understanding?

I have seen references to Asimov's Three Laws of Robotics in AI (academic) literature. My understanding is that the Laws have fundamental problems which should be addressed in this article. The most obvious, from a practical pov, is that each of them requires a huge body of data and computation - effectively an infinite amount - which precludes any robot ever acting.

Second is the implication that harm is black and white. The probability of harm is rarely either 100% or 0%. There is a huge body of relevant neuropsychological literature on moral judgements (the child on the train tracks, the fat man on the bridge, etc.). It turns out that our behavior is generally rationally justified AFTER the act, which is often NOT rational.

There are several other problems (and this is OR, in the sense that I've not seen it in print, although it is extremely unlikely to actually be original!). What is meant by "harm"? What is the meaning of "human being"? What is the definition of "humanity"? E.g., would a robot force you to eat exactly what is "best" for you? Would it take away your car keys and make you walk to the store for exercise? (I forget who wrote the series about the robots occupying the galaxy(?) and becoming our caretakers until we 'matured' enough to make 'better' choices.) There is harm to our cells, harm to our organs, harm to our bodies, harm to our minds, harm to our careers... If you were tired, would a robot steal you some amphetamine? Would a robot tell a child that Santa Claus doesn't exist? (That your wife is cheating on you?) The idea of "harm" requires a clear model of what a "healthy" state requires. It conflicts with free will. A person ages and develops; is this "harm"? Is education/learning potentially harmful? (imho, yes - as well as beneficial.) As far as "humanity" goes, would/should a robot prevent reproduction by people carrying the sickle cell gene? (Which is harmful in many circumstances but imparts protection from malaria.)

The same applies to a robot harming itself. For most machines, movement (as well as use of electronic circuitry) erodes, wears, and ages the mechanism. How does a robot protect itself? And finally, how likely is it that "orders" can be comprehensive enough so that compliance is feasible? It requires not just a theory of mind, but a very accurate model of what the person means. Just my two cents. 173.189.78.173 (talk) 13:26, 4 September 2014 (UTC)[reply]
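
To make the "harm is not black and white" point above concrete, here is a purely illustrative sketch in Python (every function, number and threshold in it is an assumption invented for this illustration, not anything drawn from Asimov, the article, or real robotics): a toy "First Law" gate that has to reduce harm to a probability estimate and then pick an arbitrary cut-off before the robot can act at all.

def estimated_harm_probability(action, world_model):
    """Hypothetical stand-in: an estimate of P(harm to a human | action), 0.0 to 1.0."""
    return world_model.get(action, 0.5)  # unknown actions default to a coin flip

def first_law_permits(action, world_model, threshold=0.01):
    # The threshold is an arbitrary modelling choice: is 1% risk acceptable? 0.001%? The Laws give no answer.
    return estimated_harm_probability(action, world_model) < threshold

# A made-up world model: both available actions carry some nonzero risk of harm.
world_model = {
    "hand over car keys": 0.02,    # owner might crash
    "withhold car keys": 0.004,    # owner walks to the store instead
}

for action in world_model:
    verdict = "permitted" if first_law_permits(action, world_model) else "forbidden"
    print(action, "->", verdict)

With the numbers above, handing over the keys is "forbidden" and withholding them is "permitted", yet moving the arbitrary threshold changes both verdicts; that, in miniature, is the objection raised above.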

Asimov was a writer. Flaws in the laws were the whole point. 2A01:CB0C:CD:D800:A1A9:5881:DD7C:11F9 (talk) 14:14, 29 January 2023 (UTC)[reply]

Or just consider aliens different from humans. I think these rules are quite human-centric and apparently the robots are allowed to wipe out any other civilizations they might come across. 213.46.51.199 (talk) 08:13, 17 September 2015 (UTC)[reply]

Merge

I think this should be merged with Laws of robotics. They seem to be the exact same topic. — Preceding unsigned comment added by 87.210.58.57 (talk) 01:54, 23 September 2014 (UTC)[reply]

No merger needed. https://en.wikipedia.org/w/index.php?title=Laws_of_Robotics&redirect=no is a WP:Redirect to this article. Which is why they are identical, and the exact same topic. — Lentower (talk) 02:49, 24 September 2014 (UTC)[reply]

Google's amendments

Google is reported to have recently made its "5 amendments" to the laws: http://www.cnet.com/news/google-goes-asimov-and-spells-out-concrete-ai-safety-concerns/ I think this should be included here, or elsewhere? Or is it already included somewhere? I am not well enough versed in the topic at the moment, but I may try to add this later. --ssr (talk) 18:50, 28 June 2016 (UTC)[reply]

Is the Superman Robot Picture Applicable?

It's been a very long time since I saw that particular episode of Superman, but I'm like 99% certain it was *not* acting of its own volition, and was instead remotely controlled by a human villain. The screen capture is attached to a paragraph that talks about robots destroying their own creator(s), so it seems that image is not a very good example of the kind of story Asimov was trying to avoid.

In fact, you can view the entire episode here, and it's much as I remembered it. The robots are semi-autonomous, but never turn on their creator or any similar trope. 162.252.201.32 (talk) 12:09, 25 September 2016 (UTC)[reply]

4th law

A robot should not alter or delete the above laws. Filippos2 (talk) 07:01, 30 November 2016 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified 3 external links on Three Laws of Robotics. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 01:37, 12 May 2017 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Three Laws of Robotics. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 21:08, 20 May 2017 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Three Laws of Robotics. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 01:18, 6 December 2017 (UTC)[reply]

Dempsey's 5 questions

Perhaps mention Dempsey's 5 questions (see https://qz.com/559432/robots-are-learning-to-say-no-to-human-orders-and-your-life-may-depend-on-it/ )? They include: Do I know how to do that? Do I have to do that based on my job? Does it violate any sort of normal principle if I do that?
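
Purely as an illustration (the class, the method names, and the walking example below are hypothetical placeholders, not Dempsey's or the linked article's actual formulation), such questions amount to a chain of gates a robot would run through before accepting an order:

from dataclasses import dataclass

@dataclass
class StubRobot:
    """Hypothetical stand-in for a robot's self-knowledge."""
    skills: set
    duties: set
    forbidden: set

    def knows_how(self, order):
        return order in self.skills

    def within_role(self, order):
        return order in self.duties

    def violates_principle(self, order):
        return order in self.forbidden

def should_comply(order, robot):
    # Each question is a gate; the first failed gate is reported back as the reason for refusal.
    checks = [
        ("Do I know how to do that?", robot.knows_how(order)),
        ("Do I have to do that based on my job?", robot.within_role(order)),
        ("Does it violate any sort of normal principle if I do that?", not robot.violates_principle(order)),
    ]
    for question, passed in checks:
        if not passed:
            return False, question
    return True, None

robot = StubRobot(
    skills={"walk forward", "walk off the table"},
    duties={"walk forward", "walk off the table"},
    forbidden={"walk off the table"},
)
print(should_comply("walk forward", robot))        # (True, None)
print(should_comply("walk off the table", robot))  # (False, 'Does it violate any sort of normal principle if I do that?')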

Historical technological context

IMHO the article should note that Asimov created the "three laws" before the advent of digital general-purpose computing, and that his "robotic brains" are quite different from today's technology. Rather than being constructed and programmed down to the last circuit and the last if/then/else, they are more akin to the analogue, special-purpose computers prevalent in the mid 20th century. His "positronic brains" are mathematically formulated in broad outline, then grown like crystals in a single piece. In this fictional technological context, abstract rules like the three laws can be inextricably built into the basic functions of a robot. It is even implied that the three laws are a mathematical necessity for creating functioning robots at all. When such a robot malfunctions, it cannot be "reprogrammed", but is sent to a "robot psychologist" who will try to re-balance abstract potentials. In comparison, today's manner of computing is open to diverging from any abstract or ethical goal at every single step in millions of lines of program, at the whims of the programmer. -- Theoprakt (talk) 05:41, 1 June 2021 (UTC)[reply]
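
A toy contrast, to make that last point concrete (entirely hypothetical code, not taken from any real robot controller): in ordinary imperative software a "law" is just one more statement that the programmer is free to include, weaken, or skip, unlike the inextricably built-in potentials described above.

def actuate(command):
    print("executing:", command)

def control_step(command, check_first_law=True):
    if check_first_law and command == "strike human":   # the "law" lives in a single, optional guard clause...
        print("refused:", command)
        return
    actuate(command)                                     # ...and nothing in the machinery enforces that the guard stays in place

control_step("strike human")                            # refused
control_step("strike human", check_first_law=False)     # executed: the safeguard was a programming choice all along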

Correct, except in your assertion that this was "before the advent of digital general-purpose computing" - a more pertinent question would be how much Asimov knew about both the theory and the state of the art when he first formulated the laws. 2A01:CB0C:CD:D800:A1A9:5881:DD7C:11F9 (talk) 14:13, 29 January 2023 (UTC)[reply]

"Perhaps ironically, or perhaps because it was artistically appropriate..."

Given that Asimov was a writer, and stories require tension and a twist, it is a pretty safe bet that mutual inconsistencies and unexpected outcomes were the point of these Laws: a manufacturer trying to bake safeguards into these machines, and unusual circumstances causing matters to go all haywire. What is ironic, though, is that Asimov explains at length how he wanted to wean us off the so-called Frankenstein complex, and then goes on to write stories that ultimately corroborate just such fears. 2A01:CB0C:CD:D800:A1A9:5881:DD7C:11F9 (talk) 14:10, 29 January 2023 (UTC)[reply]