Talk:Knowledge representation and reasoning/Archive 1

KR Language

Should there be a category for knowledge representation languages like KM, OWL, CycL and KIF? Would it be a subcategory of Category:logic programming languages or Category:declarative programming languages? What other examples are there? Bovlb 08:39, 2005 Apr 9 (UTC)

Tree of Porphyry

Will someone add an image of the Tree of Porphyry here? It's a great example of knowledge representation in the natural domain, making a nice counterpoint to a lot of current interest in knowledge representation from the "artificial intelligence" community... Plus, this article needs a picture. Chris Chatham

Sanskrit & Knowledge Representation ??

Sanskrit & Artificial Intelligence / Knowledge Representation

Any ideas?

-- 172.174.175.119 11:34, 22 July 2006 (UTC)

see http://en.wikipedia.org/wiki/Talk:Sanskrit#Sanskrit_.26_Artificial_Intelligence_.3F.3F

In the References section, the link http://citeseer.nj.nec.com/context/177306/0 for Ronald J. Brachman, "What IS-A Is and Isn't: An Analysis of Taxonomic Links in Semantic Networks", doesn't respond. —The preceding unsigned comment was added by Y2y (talk • contribs) 11:51, 4 March 2007 (UTC).

Criticism/Competing fields

I have noticed that there is not a section about competing fields of AI and criticism of knowledge representation. At some universities, professors believe knowledge representation is not a promising field in AI (at Stanford, at least). I am not an AI expert, and it would be useful to have some comparison between KR and other schools of thought. 128.12.108.147 01:30, 5 March 2007 (UTC)

I've worked with people from the AI group at Stanford and went there a long time ago. Who specifically thinks knowledge representation "is not a promising field in AI"? I've not heard that from any Stanford researcher that I know of. RedDog (talk) 13:15, 7 December 2013 (UTC)

Mycin? Hunh?

This passage is confusing: "In the field of artificial intelligence, problem solving can be simplified by an appropriate choice of knowledge representation. Representing the knowledge using a given technique may enable the domain to be represented. For example Mycin, a diagnostic expert system used a rule based representation scheme. An incorrect choice would defeat the representation endeavor...."

That's all very blurry, at least to a layman.

Agree. It's poorly worded; I think I get what they are saying, but it's not at all clear. I'm not sure that passage is even still there; if it is, I will change it (I'm planning a rewrite and am just reading through the Talk page first). RedDog (talk) 13:13, 7 December 2013 (UTC)
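
As an illustrative aside (not MYCIN's actual syntax, which was Lisp with certainty factors attached to each rule), here is a minimal hypothetical sketch in Python of what a rule-based representation along these lines might look like:

    # Hypothetical, simplified rendering of one diagnostic rule; MYCIN itself
    # was written in Lisp and attached certainty factors, which are omitted here.
    def rule_gram_negative_rod(findings):
        """IF the stain is gram-negative AND the morphology is rod AND the
        organism is anaerobic, THEN suggest Bacteroides."""
        if (findings.get("stain") == "gram-negative"
                and findings.get("morphology") == "rod"
                and findings.get("aerobicity") == "anaerobic"):
            return "bacteroides"
        return None

    # The inference engine would apply each such rule to the case data:
    case = {"stain": "gram-negative", "morphology": "rod", "aerobicity": "anaerobic"}
    print(rule_gram_negative_rod(case))  # -> "bacteroides"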

Lots of errors

After a quick read, I noticed a number of factual errors in this article. I've marked it with "expert" and "nofootnotes" tags to warn the reader. I will fix them eventually, but I have a lot on my plate. Are the original authors still watching this article? ---- CharlesGillingham (talk) 11:30, 21 November 2007 (UTC)

I'm just starting to review it and plan on doing some major reworking. I agree it's not in great shape now. RedDog (talk) 13:11, 7 December 2013 (UTC)

New Intro

I took a pass at a revised introduction based on some KR lectures I've given. I think the article should be revised around the notions of expressivity and complexity, with links possibly to separate topics on general problems in KR (and in some cases how they were solved), for instance the frame problem, the symbol grounding problem, etc. Gorbag42 (talk) 18:51, 22 September 2008 (UTC)

FYI, there is an article on the Frame problem. I agree it may be worth a mention, but it's rather tangential to the main issues of KR as discussed in mainstream AI, IMO. RedDog (talk) 13:09, 7 December 2013 (UTC)

Dubious implications

"Science has not yet completely described the internal mechanisms of the brain to the point where they can simply be replicated by computer programmers."

The word 'yet' implies that science will or is capable of completely describing the internal mechanisms of the brain. Certainly there is no scientific basis for this implication.

Even if we concede that it can, the further implication is that computer programmers would be able to replicate them. This is even more dubious than the first assertion. —Preceding unsigned comment added by 74.37.134.51 (talk) 06:47, 12 December 2009 (UTC)

Just to state my bias: I think they can and will eventually be "replicated", but I agree we are so far from that point that it's just pointless speculation, and there is no reason to include it. Also, most of the leading people in AI would be careful not to make such grandiose and vague claims. RedDog (talk) 13:08, 7 December 2013 (UTC)

Reasoning

This article was renamed from "knowledge representation" to "knowledge representation and reasoning". If the reasoning part is to be kept, then a merge with automated reasoning should be considered. Representation and reasoning are inherently related but they seem to be split into different academic communities/conferences etc. Any opinions? pgr94 (talk) 18:16, 21 June 2010 (UTC)

Strongly oppose merging with automated reasoning. The two terms are related, for sure, but they don't mean the same thing at all. Automated reasoning is a vaguer term that usually refers to things like theorem provers, inference engines, etc. Knowledge representation refers more to languages such as LOOM, KL-One, and KEE; it is more about the structure of the data than about the inferencing. I think we should just drop the "and reasoning"; the term people use in AI is just knowledge representation. RedDog (talk) 13:05, 7 December 2013 (UTC)
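
To make the distinction concrete, here is a toy sketch with invented names, loosely in the spirit of frame languages like KL-One or LOOM rather than their actual syntax: the concept hierarchy is the knowledge representation side, while the small subsumption routine below it plays the role of the automated reasoning that operates over it.

    # Knowledge representation side: a toy frame-style concept hierarchy.
    concepts = {
        "Animal": {"subsumed_by": None},
        "Mammal": {"subsumed_by": "Animal"},
        "Dog":    {"subsumed_by": "Mammal"},
    }

    # Reasoning side: a trivial reasoner that derives subsumption by
    # walking up the hierarchy.
    def subsumes(general, specific, kb=concepts):
        while specific is not None:
            if specific == general:
                return True
            specific = kb[specific]["subsumed_by"]
        return False

    print(subsumes("Animal", "Dog"))  # True: Dog is subsumed by Animal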

Low quality article

I would advise readers not to even read 3 lines of this article. It has so many errors it is toxic. It is beyond repair, and needs a rewrite. History2007 (talk) 23:39, 9 January 2012 (UTC)

I actually didn't think it was quite THAT bad, but I'm grading on a curve; I've found a few articles in the AI and OO space that were really bad. There are some articles with code examples for OO that are just plain wrong, written as if by someone who wanted to say "here is what people who don't understand OO write as their first program". But I digress. Anyway, I've redone the Overview and plan to redo more of the article as well. MadScientistX11 (talk) 15:38, 24 December 2013 (UTC)
I've now rewritten the entire article except for the Ontology Engineering section which I left mostly as it was. MadScientistX11 (talk) 16:53, 25 December 2013 (UTC)

Plan to do some major rewriting of this article

I'm currently working on another related article which is smaller and easier to fix, but when I finish that one I plan to work on this one. Just giving anyone watching this page a heads up in case they want to start or restart some discussion. So far I've only taken a quick look, but I think this article needs a lot of work. I am an expert in the field: I've been a principal investigator for DARPA, USAF, and NIST research projects and worked in the group doing KR research at the Information Sciences Institute. I think this article should just be called Knowledge Representation; there is no need for the "and reasoning" (reasoning is what you do with KR, but when people talk about KR they usually just use those two words). More later. RedDog (talk) 13:02, 7 December 2013 (UTC)

Expressivity: I think this is wrong as currently stated

The overview section of the article currently says: "The more expressive a KR, the easier and more compact it is to express a fact or element of knowledge within the semantics and grammar of that KR. However, more expressive languages are likely to require more complex logic and algorithms to construct equivalent inferences. A highly expressive KR is also less likely to be complete and consistent." I think this is wrong. By "more expressive" I assume what is meant is "closer to a complete representation of first-order logic". That is the holy grail for KR, held up by researchers as the ideal that we can never quite achieve in practice but that we aim to get as close to as feasible (e.g., see Brachman's papers on the topic in his book Readings in Knowledge Representation). But the more expressive a language is, the less complex any individual statement actually needs to be, because you can't get more expressive than FOL. It's true that understanding HOW TO USE the system may be more complicated, but that is a different issue from the complexity of any specific statement in the language. The same goes for completeness and consistency: the closer you get to FOL, the MORE likely it is that you can automate things like completeness and consistency checking. The problem is that if you have full FOL, then we know (it's been proven mathematically) that there will be some expressions (e.g., quantification over infinite sets) whose evaluation can never terminate, even in theory, and hence if you try to prove the completeness or correctness of a system containing such statements, your program won't terminate. Again, this is all covered by Brachman in the book I mentioned, which is in my experience one of the best collections of influential KR papers. I plan to change this but wanted to document the issue in case anyone wants to discuss it before I edit the article. MadScientistX11 (talk) 22:03, 23 December 2013 (UTC)
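
As a concrete, purely illustrative sketch of the termination point above (a toy rendering, not how real theorem provers work): checking a universally quantified first-order statement by naively enumerating an infinite domain can refute the statement by finding a counterexample, but can never confirm it, so the check never terminates on true statements.

    from itertools import count

    def naive_forall(predicate, domain):
        """Return False at the first counterexample; over an infinite,
        counterexample-free domain this loop simply never returns."""
        for x in domain:
            if not predicate(x):
                return False
        return True  # unreachable when the domain is infinite

    # "For all natural numbers n, n + 1 > n" is true, so this call would
    # run forever; it is left commented out deliberately.
    # naive_forall(lambda n: n + 1 > n, count(0))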

I've rewritten it to make it more accurate and have added several references to classic papers on the topic of KR in AI. One aspect that I would like to see added at some point is something on the neural net view of things. I think those guys refer to their neural networks as a "knowledge base" as well, and their approach to representing knowledge is diametrically opposite to the symbolic AI view represented so far. But I don't know as much about the neural net side, so right now I'm not going to write that. I've been reading up on the topic, and if I feel competent to add something later, I will. MadScientistX11 (talk) 15:35, 24 December 2013 (UTC)

Reference number four

Currently, reference number four is a link to this site: http://aitopics.org/ This is just a general site with AI papers; it's unclear what specific paper is being referenced, if one ever was, so the reference is really meaningless and I'm going to delete it. MadScientistX11 (talk) 04:26, 24 December 2013 (UTC)

Ontology Engineering section

I've now rewritten the entire article except for the Ontology Engineering section, which I left with some minor edits. There is a big chunk of the Ontology Engineering section that I still don't necessarily agree with. It says:

"As a second example, medical diagnosis viewed in terms of rules (e.g., MYCIN) looks substantially different from the same task viewed in terms of frames (e.g., INTERNIST). Where MYCIN sees the medical world as made up of empirical associations connecting symptom to disease, INTERNIST sees a set of prototypes, in particular prototypical diseases, to be matched against the case at hand."

Now, saying whether something is "substantially different" is a judgement call, and I don't think there can be a black-or-white answer. But it seems to me that this argument actually contradicts the point that was made earlier in the same section. That point (which I agree with) is that frames, rules, objects, semantic nets, Lisp code, etc. don't ultimately matter; what matters is the actual knowledge. My guess is that if we actually looked at the way INTERNIST and MYCIN work, the medical concepts underneath them are essentially the same. What is different is the knowledge representation scheme, which I thought the section was arguing earlier is really not that critical. Actually, that isn't entirely true either: try implementing complex rules in C code. It can be done, but it will take a lot longer. I think these issues need to be teased apart better than in the current section, but I'm not sure how to do that right now, so I'm leaving it and documenting the issue in case someone else agrees and wants to give it a shot. MadScientistX11 (talk) 16:52, 25 December 2013 (UTC)
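
For contrast with the rule view sketched earlier on this page, here is a rough sketch (invented disease and finding names, not INTERNIST-I's actual knowledge base) of the frame/prototype view: a disease is represented as a prototype of typical findings, and diagnosis is a matter of scoring how well the case at hand matches that prototype.

    # Frame/prototype-style representation: a prototypical disease.
    prototype_flu = {
        "name": "influenza",
        "typical_findings": {"fever", "cough", "muscle aches"},
    }

    def match_score(case_findings, prototype):
        """Fraction of the prototype's typical findings present in the case."""
        typical = prototype["typical_findings"]
        return len(case_findings & typical) / len(typical)

    case = {"fever", "cough", "sore throat"}
    print(match_score(case, prototype_flu))  # ~0.67: a partial match to the prototype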

WikiProject Religion, WikiProject Transhumanism, etc.

@Dimadick: A few months ago, you added this article to many other WikiProjects, including WikiProject Religion, but I don't see its relevance here. How is this article related to religion or transhumanism? Jarble (talk) 16:16, 14 July 2016 (UTC)

This is an article related to artificial intelligence, and Transhumanism covers topics (to quote from the relevant article) such as "nanotechnology, biotechnology, information technology and cognitive science (NBIC), as well as hypothetical future technologies like simulated reality, artificial intelligence, superintelligence, 3D bioprinting, mind uploading, chemical brain preservation and cryonics." And the relevant article quotes theological arguments on transhumanism and its emerging technologies. Dimadick (talk) 16:25, 14 July 2016 (UTC)

Dimadick, I agree with Jarble: knowledge representation and reasoning is not directly relevant to either religion or transhumanism. Artificial intelligence is relevant to transhumanism, but KR is one of many, many sub-domains of AI (e.g., machine learning, logic programming, lambda calculus, the frame problem, expert systems, ...). It makes no sense to link every (or any) of the sub-domains; a link to Artificial intelligence is more than adequate. I don't even see the argument for linking it to religion, unless you are making some kind of transitive-closure argument: transhumanism is relevant to AI, and transhumanism is relevant to religion, therefore AI (and all its sub-domains) is relevant to religion? By that kind of argument you would soon have everything in Wikipedia linked to everything else. But perhaps I'm missing what you meant there. If there isn't a stronger argument for including Transhumanism and Religion, I'm going to delete them. --MadScientistX11 (talk) 02:40, 19 July 2016 (UTC)

Hello fellow Wikipedians,

I have just modified 2 external links on Knowledge representation and reasoning. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 05:45, 7 May 2017 (UTC)

Hello fellow Wikipedians,

I have just modified 4 external links on Knowledge representation and reasoning. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 17:54, 11 December 2017 (UTC)

Incorrect Reference

Currently, this sentence in the introduction: "Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets" has a reference (number 2) titled "Knowledge Representation in Neural Networks - deepMinds". deepMinds. 2018-08-16. First of all, the link goes to an archived page that is just an outline of topics, and when one clicks on the topic "Knowledge Representation in Neural Networks" the link just goes back to the same page, so there is no reference there. But even if there were, judging from the title it would be completely irrelevant to the sentence. Neural networks don't use logic or set theory. There are two broad approaches to AI: symbolic AI and machine learning. Machine learning (of which neural networks are an example) is based on mathematical algorithms from statistics, linear algebra, and calculus. Such systems don't explicitly represent knowledge as sets or rules that a human expert would recognize; their knowledge is encoded in layers of nodes with connections of various strengths, or in other kinds of mathematical structures developed by iterating an algorithm over large sets of training data. Typically, even the developers don't know how to map the various nodes and links (or the parameters of the algorithms) back to the training sets; providing explanation capabilities is one of the biggest open issues with ML right now. It is completely irrelevant to the topic of this article, which is about symbolic AI, where the knowledge is encoded explicitly in high-level languages that are designed to be intuitive to domain experts, not just programmers. I'm just going to remove the reference. It's typically not required to have references in the introduction, since everything in the introduction is covered in more detail in the article, and the body of the article is where the references typically are. I'm going to check to make sure that statement is supported by the references later in the article, and if not I will add one that is relevant. --MadScientistX11 (talk) 20:15, 20 March 2020 (UTC)

I removed that reference, and I checked: there are other good references to support that statement in the body of the article, e.g., the refs to KL-One, Loom, the Semantic Web, etc. --MadScientistX11 (talk) 20:23, 20 March 2020 (UTC)
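
A small, purely hypothetical illustration of the contrast described above: the set/subset reasoning the sentence refers to is explicit and traceable, whereas a trained network's "knowledge" lives in numeric weights with no rules or sets a domain expert could point to.

    # Symbolic side: explicit sets, with subset relations that can be
    # checked and explained step by step.
    dogs = {"fido", "rex"}
    mammals = dogs | {"whiskers"}
    animals = mammals | {"tweety"}
    print(dogs.issubset(animals))  # True, and we can say exactly why

    # Sub-symbolic side: the same sort of "knowledge" is implicit in learned
    # weights (made-up values); nothing here is readable as a rule or a set.
    weights = [[0.21, -0.65], [0.08, 0.93]]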