Neuro-Symbolic Artificial Intelligence
Neuro-symbolic AI attempts to integrate neural and symbolic AI architectures in a complementary fashion that addresses the strengths and weaknesses of each, in order to support robust AI capable of reasoning, learning, and cognitive modeling. As argued by Valiant[1] and many others,[2] the effective construction of rich computational cognitive models demands the combination of sound symbolic reasoning and efficient (machine) learning models. Gary Marcus, similarly, argues that: "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning",[3] and, in particular: "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol-manipulation in our toolkit. Too much of useful knowledge is abstract to make do without tools that represent and manipulate abstraction, and to date, the only machinery that we know of that can manipulate such abstract knowledge reliably is the apparatus of symbol-manipulation."[4]
Henry Kautz,[5] Francesca Rossi,[6] and Bart Selman[7] have also argued for a synthesis. Their arguments are based on a need to address the two kinds of thinking discussed in Daniel Kahneman's book, Thinking, Fast and Slow. Kahneman describes human thinking as having two components, System 1 and System 2. System 1 is fast, automatic, intuitive, and unconscious. System 2 is slower, step-by-step, and explicit. System 1 is the kind used for pattern recognition, while System 2 is far better suited for planning, deduction, and deliberative thinking. In this view, deep learning best models the first kind of thinking, symbolic reasoning best models the second kind, and both are needed.
Garcez describes research in this area as being ongoing for at least the past twenty years,[8] dating from his 2002 book on neurosymbolic learning systems.[9] A series of workshops on neuro-symbolic reasoning has been held every year since 2005; see http://www.neural-symbolic.org/ for details.
In their 2015 paper, Neural-Symbolic Learning and Reasoning: Contributions and Challenges, Garcez et al. argue that:
The integration of the symbolic and connectionist paradigms of AI has been pursued by a relatively small research community over the last two decades and has yielded several significant results. Over the last decade, neural symbolic systems have been shown capable of overcoming the so-called propositional fixation of neural networks, as McCarthy (1988) put it in response to Smolensky (1988); see also (Hinton, 1990). Neural networks were shown capable of representing modal and temporal logics (d’Avila Garcez and Lamb, 2006) and fragments of first-order logic (Bader, Hitzler, Hölldobler, 2008; d’Avila Garcez, Lamb, Gabbay, 2009). Further, neural-symbolic systems have been applied to a number of problems in the areas of bioinformatics, control engineering, software verification and adaptation, visual intelligence, ontology learning, and computer games. [2]
Kinds of Approaches
Approaches for integration are varied. Henry Kautz’s taxonomy of neuro-symbolic architectures, along with some examples, follows:
- Symbolic Neural symbolic—is the current approach of many neural models in natural language processing, where words or subword tokens are both the ultimate input and output of large language models. Examples include BERT, RoBERTa, and GPT-3.
- Symbolic[Neural]—is exemplified by AlphaGo, where symbolic techniques are used to call neural techniques. In this case the symbolic approach is Monte Carlo tree search and the neural techniques learn how to evaluate game positions.
- Neural|Symbolic—uses a neural architecture to interpret perceptual data as symbols and relationships that are then reasoned about symbolically. The Neuro-Symbolic Concept Learner[10] is an example.
- Neural:Symbolic → Neural—relies on symbolic reasoning to generate or label training data that is subsequently learned by a deep learning model, e.g., to train a neural model for symbolic computation by using a Macsyma-like symbolic mathematics system to create or label examples.
- Neural_{Symbolic}—uses a neural net that is generated from symbolic rules. An example is the Neural Theorem Prover,[11] which constructs a neural network from an AND-OR proof tree generated from knowledge base rules and terms. Logic Tensor Networks[12] also fall into this category.
- Neural[Symbolic]—allows a neural model to directly call a symbolic reasoning engine, e.g., to perform an action or evaluate a state.
These categories are not exhaustive; for example, they do not consider multi-agent systems. In 2005, Bader and Hitzler presented a more fine-grained categorization that considered, e.g., whether the use of symbols included logic and, if it did, whether the logic was propositional or first-order.[13]
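As a toy illustration, the Symbolic[Neural] pattern can be sketched as a symbolic search procedure that defers leaf evaluation to a learned model. The game, move generator, and evaluator below are hypothetical stand-ins for illustration only, not AlphaGo's actual components.

```python
# Toy sketch of Kautz's Symbolic[Neural] pattern: a symbolic search
# procedure (here, depth-limited negamax) calls a "neural" evaluator on
# leaf positions. The evaluator is a deterministic dummy standing in for
# a trained value network.

def neural_value(position):
    """Stand-in for a learned value network scoring a position in [-1, 1]."""
    return (hash(position) % 200 - 100) / 100.0  # dummy score

def symbolic_search(position, moves, depth):
    """Symbolic component: depth-limited negamax over legal moves,
    deferring leaf evaluation to the neural component."""
    if depth == 0 or not moves(position):
        return neural_value(position)
    # Best move for the current player; opponent values are negated.
    return max(-symbolic_search(m, moves, depth - 1) for m in moves(position))

# Hypothetical toy game: a position is a string; a "move" appends a letter.
def toy_moves(position):
    return [position + c for c in "ab"] if len(position) < 4 else []

score = symbolic_search("", toy_moves, depth=2)
print(-1.0 <= score <= 1.0)  # True: negamax preserves the value range
```

The division of labor mirrors the taxonomy entry: the search tree is built and traversed symbolically, while position quality is estimated by the (stand-in) learned function.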
As a Prerequisite for Artificial General Intelligence
Gary Marcus argues that "…hybrid architectures that combine learning and symbol manipulation are necessary for robust intelligence, but not sufficient",[14] and that there are:
"...four cognitive prerequisites for building robust artificial intelligence:
- hybrid architectures that combine large-scale learning with the representational and computational powers of symbol-manipulation,
- large-scale knowledge bases—likely leveraging innate frameworks—that incorporate symbolic knowledge along with other forms of knowledge,
- reasoning mechanisms capable of leveraging those knowledge bases in tractable ways, and
- rich cognitive models that work together with those mechanisms and knowledge bases."[15]
Open Research Questions
Many key research questions remain, such as:
- What is the best way to integrate neural and symbolic architectures?
- How should symbolic structures be represented within neural networks and extracted from them?
- How should common-sense knowledge be learned and reasoned about?
- How can abstract knowledge that is hard to encode logically be handled?
Implementations
Some specific implementations of neuro-symbolic approaches are:
- Logic Tensor Networks—these encode logical formulas as neural networks and simultaneously learn term neural encodings, term weights, and formula weights from data.
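A rough sketch of the real-valued ("Real Logic") semantics underlying this approach follows. It is a hand-rolled toy, not the actual Logic Tensor Networks API: predicates are grounded as parametric functions into [0, 1], and logical connectives become fuzzy operations such as the product t-norm, so that formula satisfaction is a differentiable quantity that could be maximized during learning.

```python
# Illustrative sketch of the real-valued semantics behind Logic Tensor
# Networks: predicates are grounded as functions into (0, 1), and
# connectives become fuzzy operations. Hand-rolled toy, not the ltn library.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Grounding of a unary predicate P as a parametric scoring function;
# in a real LTN, w and b would be learned weights.
def P(x, w=2.0, b=-1.0):
    return sigmoid(w * x + b)           # truth degree in (0, 1)

def AND(a, b):                          # product t-norm
    return a * b

def NOT(a):                             # standard fuzzy negation
    return 1.0 - a

def FORALL(truths):                     # universal quantifier as a mean
    truths = list(truths)
    return sum(truths) / len(truths)

# Satisfaction of "forall x: P(x) and not P(-x)" over sample data;
# training would adjust w and b to maximize this satisfaction value.
data = [0.5, 1.0, 1.5, 2.0]
sat = FORALL(AND(P(x), NOT(P(-x))) for x in data)
print(0.0 < sat < 1.0)  # True: truth degrees stay strictly inside (0, 1)
```

Because every operation here is smooth, gradient-based optimization can push the satisfaction value toward 1, which is the sense in which such systems "learn formula weights from data."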
Citations
- ^ Valiant 2008.
- ^ a b Garcez et al. 2015.
- ^ Marcus 2020, p. 44.
- ^ Marcus 2019, p. 17.
- ^ Kautz 2020.
- ^ Rossi 2022.
- ^ Selman 2022.
- ^ Garcez et al. 2020, p. 2.
- ^ Garcez et al. 2002.
- ^ Mao 2019.
- ^ Rocktäschel, Tim; Riedel, Sebastian (2016). "Learning Knowledge Base Inference with Neural Theorem Provers". Proceedings of the 5th Workshop on Automated Knowledge Base Construction. San Diego, CA: Association for Computational Linguistics. pp. 45–50. doi:10.18653/v1/W16-1309. Retrieved 2022-08-06.
- ^ Serafini, Luciano; Garcez, Artur d'Avila (2016), Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge, arXiv:1606.04422, retrieved 2022-08-02
- ^ Bader & Hitzler 2005.
- ^ Marcus 2020, p. 50.
- ^ Marcus 2020, p. 48.
References
- Bader, Sebastian; Hitzler, Pascal (2005-11-10), Dimensions of Neural-symbolic Integration - A Structured Survey, arXiv:cs/0511042, retrieved 2022-08-12
- Garcez, Artur S. d'Avila; Broda, Krysia; Gabbay, Dov M. (2002). Neural-Symbolic Learning Systems: Foundations and Applications. Springer Science & Business Media. ISBN 978-1-85233-512-0.
- Garcez, Artur; Besold, Tarek; De Raedt, Luc; Földiák, Peter; Hitzler, Pascal; Icard, Thomas; Kühnberger, Kai-Uwe; Lamb, Luís; Miikkulainen, Risto; Silver, Daniel (2015). Neural-Symbolic Learning and Reasoning: Contributions and Challenges. AAAI Spring Symposium - Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches. Stanford, CA: AAAI Press. doi:10.13140/2.1.1779.4243.
- Garcez, Artur d'Avila; Gori, Marco; Lamb, Luis C.; Serafini, Luciano; Spranger, Michael; Tran, Son N. (2019), Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning, arXiv:1905.06088, retrieved 2022-07-12
- Garcez, Artur d'Avila; Lamb, Luis C. (2020), Neurosymbolic AI: The 3rd Wave, arXiv:2012.05876, retrieved 2022-07-14
- Honavar, Vasant (1995). Symbolic Artificial Intelligence and Numeric Artificial Neural Networks: Towards a Resolution of the Dichotomy. The Springer International Series In Engineering and Computer Science. Springer US. pp. 351–388. doi:10.1007/978-0-585-29599-2_11.
- Kautz, Henry (2020-02-11). The Third AI Summer, AAAI 2020 Robert S. Engelmore Memorial Award Lecture. Retrieved 2022-07-06.
- Kautz, Henry (2022). "The Third AI Summer: AAAI Robert S. Engelmore Memorial Lecture". AI Magazine. 43 (1): 93–104. doi:10.1609/aimag.v43i1.19122. ISSN 2371-9621. S2CID 248213051. Retrieved 2022-07-12.
- Mao, Jiayuan; Gan, Chuang; Kohli, Pushmeet; Tenenbaum, Joshua B.; Wu, Jiajun (2019), The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision, arXiv:1904.12584, retrieved 2022-08-12
- Marcus, Gary; Davis, Ernest (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Vintage.
- Marcus, Gary (2020), The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence, arXiv:2002.06177, retrieved 2022-07-12
- Rossi, Francesca (2022-07-06). "AAAI2022: Thinking Fast and Slow in AI (AAAI 2022 Invited Talk)". Retrieved 2022-07-06.
- Selman, Bart (2022-07-06). "AAAI2022: Presidential Address: The State of AI". Retrieved 2022-07-06.
- Serafini, Luciano; Garcez, Artur d'Avila (2016-07-07), Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge, arXiv:1606.04422, retrieved 2022-08-02
- Valiant, Leslie G (2008). "Knowledge Infusion: In Pursuit of Robustness in Artificial Intelligence": 8.