Lighthill report


Artificial Intelligence: A General Survey, commonly known as the Lighthill report, is a scholarly article by James Lighthill, published in Artificial Intelligence: a paper symposium in 1973.[1] It was compiled by Lighthill for the British Science Research Council (SRC) as an evaluation of academic research in the field of artificial intelligence (AI). The report gave a very pessimistic prognosis for many core aspects of research in this field, stating that "In no part of the field have the discoveries made so far produced the major impact that was then promised". It "formed the basis for the decision by the British government to end support for AI research in most British universities",[2] contributing to an AI winter in Britain.

Publication history

The report was commissioned by the SRC in 1972, with Lighthill asked to "make a personal review of the subject [of AI]". Lighthill completed it in July of that year. The SRC discussed the report in September and decided to publish it, together with some alternative points of view by Stuart Sutherland, Roger Needham, Christopher Longuet-Higgins, and Donald Michie.[1]: preface  The SRC's decision to commission the report was partly a reaction to high levels of discord within the University of Edinburgh's Department of Artificial Intelligence, one of the earliest and biggest centres for AI research in the UK.[3]

On 9 May 1973, Lighthill debated the report with several leading AI researchers (Donald Michie, John McCarthy, and Richard Gregory) at the Royal Institution in London.[4]

Content

While the report was supportive of research into the simulation of neurophysiological and psychological processes, it was "highly critical of basic research in foundational areas such as robotics and language processing".[1] The report stated that AI researchers had failed to address the issue of combinatorial explosion when solving problems within real-world domains: AI techniques might work within small problem domains, but they would not scale up to solve more realistic problems. The report reflects the pessimistic view of AI that set in after the field's early excitement.
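
The report's scaling objection can be made concrete with a toy calculation. The following Python sketch is not taken from the report; it simply counts, for an arbitrary illustrative task (choosing an ordering of n items, as in naive planning or route-finding), how many candidates an exhaustive search would have to examine as n grows.

    # Illustration only (not from the report): exhaustive search over all
    # orderings of n items must examine n! candidates, so a method that is
    # feasible on a toy domain becomes intractable on a realistic one.
    from math import factorial

    for n in (5, 10, 15, 20):
        print(f"n = {n:2d}: {factorial(n):,} candidate orderings")

Already at n = 20 there are roughly 2.4 × 10^18 orderings, which is the kind of growth the report refers to as combinatorial explosion.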

The report divides AI research into three categories:

  • Advanced Automation ("A"): applications of AI, such as optical character recognition, mechanical component design and manufacture, missile perception and guidance, etc.
  • Computer-based Central Nervous System research ("C"): building computational models of human brains (neurobiology) and behavior (psychology).
  • Bridge, or Building Robots ("B"): research that combines categories A and C. This category is intentionally vague.

Projects in category A have had some success, but only in restricted domains in which a large quantity of detailed knowledge is built into the design of the program. This was disappointing to researchers who had hoped for general-purpose methods. Because of combinatorial explosion, the amount of detailed knowledge required quickly grows too large to be entered by hand, confining such projects to narrow domains.

Projects in category C have had some measure of success. Artificial neural networks were successfully used to model neurobiological data. SHRDLU demonstrated that human use of language, even in fine detail, depends on semantics and knowledge rather than on syntax alone, a finding that was influential in psycholinguistics. Extending SHRDLU to larger domains of discourse, however, is impractical because of combinatorial explosion.

Projects in category B have been failures. One important project, that of "programming and building a robot that would mimic human ability in a combination of eye-hand co-ordination and common-sense problem solving", has been entirely disappointing. Similarly, chess-playing programs perform no better than human amateurs. Because of combinatorial explosion, the run-time of general algorithms quickly becomes impractical, so detailed problem-specific heuristics are required.
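
As a rough illustration of the game-tree growth behind this point, the sketch below estimates how many positions a depth-limited brute-force search must visit; the branching factor of 35 is the conventional textbook estimate for chess, not a figure quoted by Lighthill.

    # Illustration only: a full game tree searched to depth d with branching
    # factor b contains roughly b**d positions at the deepest level. The value
    # b = 35 is the usual textbook estimate for chess, not a number from the report.
    def brute_force_nodes(branching_factor: int, depth: int) -> int:
        """Total positions visited by an exhaustive search to the given depth."""
        return sum(branching_factor ** d for d in range(depth + 1))

    for depth in (2, 4, 6, 8):
        print(f"depth {depth}: {brute_force_nodes(35, depth):,} positions")

At depth 8 the count already exceeds 2 × 10^12 positions, which is why the report argues that general search methods need problem-specific heuristics to be usable at all.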

Because there are no successful projects in category B, the report argues, there is no coherent AI research program. Work that appears to be in category B, such as SHRDLU, is actually in category C. Consequently, what appears to be a single AI research program is in fact two. The report expects that within the next 25 years category A will simply become the engineering of applied technologies, category C will be absorbed into psychology and neurobiology, and category B will be abandoned.

References

  1. Lighthill, James (1973). "Artificial Intelligence: A General Survey". Artificial Intelligence: A paper symposium. UK: Science Research Council.
  2. Russell, S. J.; Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
  3. Howe, Jim (June 2007). "Artificial Intelligence at Edinburgh University: a Perspective". UK: University of Edinburgh. Retrieved 29 September 2022.
  4. Emanuel, Jeff (2024-10-01). Dicklesworthstone/the_lighthill_debate_on_ai. Retrieved 2024-11-20.