Functionalism (philosophy of mind)

From Wikipedia, the free encyclopedia

In the philosophy of mind, functionalism is the thesis that each and every mental state (for example, the state of having a belief, of having a desire, or of being in pain) is constituted solely by its functional role, which means its causal relation to other mental states, sensory inputs, and behavioral outputs.[1] Functionalism developed largely as an alternative to the identity theory of mind and behaviorism.

Functionalism is a theoretical level between physical implementation and behavioral output.[2] It therefore differs from its predecessors, Cartesian dualism (which posits independent mental and physical substances) and Skinnerian behaviorism and physicalism (which admit only physical substances), in that it is concerned only with the effective functions of the brain, through its organization or its "software programs".

Since a mental state is identified by a functional role, it is said to be multiply realizable; in other words, it can be manifested in various systems, even perhaps computers, so long as the system performs the appropriate functions. While a computer's program performs the functions via computations on inputs to give outputs, implemented via its electronic substrate, a brain performs the functions via its biological operation and stimulus responses.

Multiple realizability

An important part of some arguments for functionalism is the idea of multiple realizability. According to standard functionalist theories, a mental state corresponds to a functional role. It is like a valve; a valve can be made of plastic or metal or other material, as long as it performs the proper function (controlling the flow of a liquid or gas). Similarly, functionalists argue, a mental state can be explained without considering the state of the underlying physical medium (such as the brain) that realizes it; one only needs to consider higher-level functions. Because a mental state is not limited to a particular medium, it can be realized in multiple ways, including, theoretically, with non-biological systems, such as computers. A silicon-based machine could have the same sort of mental life that a human being has, provided that its structure realized the proper functional roles.

While most functionalist theories accept the multiple realizability of mental states, some functionalist theories, such as the Functional Specification Theories (FSTs), reject this view.[3] FSTs were most notably developed by David Lewis[4] and David Malet Armstrong.[5] According to FSTs, mental states are the particular "realizers" of the functional role, not the functional role itself. The mental state of belief, for example, just is whatever brain or neurological process realizes the appropriate belief function. Thus, unlike standard versions of functionalism (often called Functional State Identity Theories), FSTs do not allow for the multiple realizability of mental states, because the fact that mental states are realized by brain states is essential. What often drives this view is the belief that if we were to encounter an alien race whose cognitive system is composed of significantly different material from humans' (e.g., silicon-based) but performs the same functions as human mental states (for example, its members tend to yell "Ouch!" when poked with sharp objects), we would say that their type of mental state might be similar to ours but is not the same. For some, this is a disadvantage of FSTs. Indeed, one of Hilary Putnam's[6][7] arguments for his version of functionalism relied on the intuition that such alien creatures would have the same mental states as humans do, and that the multiple realizability of standard functionalism makes it a better theory of mind.

Types

Machine-state functionalism

The broad position of "functionalism" can be articulated in many different varieties. The first formulation of a functionalist theory of mind was put forth by Hilary Putnam[6][7] in the 1960s. This formulation, which is now called machine-state functionalism, or just machine functionalism, was inspired by the analogies which Putnam and others noted between the mind and the theoretical "machines" or computers capable of computing any given algorithm which were developed by Alan Turing (called Turing machines). Putnam himself, by the mid-1970s, had begun questioning this position. The beginning of his opposition to machine-state functionalism can be read about in his Twin Earth thought experiment.

In non-technical terms, a Turing machine is not a physical object but an abstract machine built upon a mathematical model. Typically, a Turing machine has a horizontal tape divided into rectangular cells arranged from left to right. The tape itself is infinite in length, and each cell may contain a symbol. The symbols used for any given "machine" can vary. The machine has a read-write head that scans cells and moves left or right. The action of the machine is determined by the symbol in the cell being scanned and a table of transition rules that serve as the machine's programming. Because the tape is infinite, a traditional Turing machine has an unlimited amount of time to compute any particular function or any number of functions. In the example below, each cell is either blank (B) or has a 1 written on it. These are the inputs to the machine. The possible outputs are:

  • Halt: Do nothing.
  • R: move one square to the right.
  • L: move one square to the left.
  • B: erase whatever is on the square.
  • 1: erase whatever is on the square and print a '1'.

An extremely simple example is a Turing machine that writes out the sequence '111' after scanning three blank squares and then stops, as specified by the following machine table:

        State One                   State Two                   State Three
B       write 1; stay in state 1    write 1; stay in state 2    write 1; stay in state 3
1       go right; go to state 2     go right; go to state 3     [halt]

This table states that if the machine is in state one and scans a blank square (B), it will print a 1 and remain in state one. If it is in state one and reads a 1, it will move one square to the right and go into state two. If it is in state two and reads a B, it will print a 1 and stay in state two. If it is in state two and reads a 1, it will move one square to the right and go into state three. If it is in state three and reads a B, it prints a 1 and remains in state three. Finally, if it is in state three and reads a 1, it will halt.
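
The machine table above can be rendered as a short simulation. The following sketch is illustrative only and is not part of Putnam's formulation; the state numbering, tape encoding, and function name are choices made here:

```python
# A minimal sketch of the three-state '111'-writer described by the
# machine table above. The tape is a list of 'B' (blank) and '1' cells.

def run_machine(tape, max_steps=100):
    """Run the machine table until it halts or max_steps is reached."""
    # Transition table: (state, scanned symbol) -> (action, next state).
    # 'write1' prints a 1 and stays put; 'right' moves the head one cell.
    table = {
        (1, 'B'): ('write1', 1), (1, '1'): ('right', 2),
        (2, 'B'): ('write1', 2), (2, '1'): ('right', 3),
        (3, 'B'): ('write1', 3), (3, '1'): ('halt', 3),
    }
    state, head = 1, 0
    for _ in range(max_steps):
        action, state = table[(state, tape[head])]
        if action == 'write1':
            tape[head] = '1'
        elif action == 'right':
            head += 1
        else:  # halt
            break
    return tape

print(run_machine(['B', 'B', 'B']))  # → ['1', '1', '1']
```

Note that in this sketch each state is nothing over and above its row in the transition table: the dictionary keys and values exhaust what "state one" is, which is the point the functional definition trades on.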

The essential point to consider here is the nature of the states of the Turing machine. Each state can be defined exclusively in terms of its relations to the other states as well as inputs and outputs. State one, for example, is simply the state in which the machine, if it reads a B, writes a 1 and stays in that state, and in which, if it reads a 1, it moves one square to the right and goes into a different state. This is the functional definition of state one; it is its causal role in the overall system. The details of how it accomplishes what it accomplishes and of its material constitution are completely irrelevant.

The above point is critical to an understanding of machine-state functionalism. Since Turing machines are not required to be physical systems, "anything capable of going through a succession of states in time can be a Turing machine".[8] Because biological organisms "go through a succession of states in time", any such organism could also be equivalent to a Turing machine.

According to machine-state functionalism, the nature of a mental state is just like the nature of the Turing machine states described above. If one can show that the rational functioning and computing skills of these machines are comparable to the rational functioning and computing skills of human beings, it follows that Turing machine behavior closely resembles that of human beings.[9] Therefore, it is not a particular physical-chemical composition that is responsible for the particular machine or mental state; it is the programming rules that produce the effects. To put it another way, any rational preference is due to the rules being followed, not to the specific material composition of the agent.

Psycho-functionalism

A second form of functionalism is based on the rejection of behaviorist theories in psychology and their replacement with empirical cognitive models of the mind. This view is most closely associated with Jerry Fodor and Zenon Pylyshyn and has been labeled psycho-functionalism.

The fundamental idea of psycho-functionalism is that psychology is an irreducibly complex science, that the terms we use to describe the entities and properties of the mind in our best psychological theories cannot be redefined in terms of simple behavioral dispositions, and, further, that such a redefinition would not be desirable or salient were it achievable. Psychofunctionalists view psychology as employing the same sorts of irreducibly teleological or purposive explanations as the biological sciences. Thus, for example, the function or role of the heart is to pump blood, and that of the kidney is to filter it and to maintain certain chemical balances; it is these roles that matter for scientific explanation and taxonomy. There may be an infinite variety of physical realizations of these mechanisms, but what is important is only their role in the overall biological theory. In an analogous manner, the role of mental states, such as belief and desire, is determined by the functional or causal role designated for them within our best scientific psychological theory. If some mental state postulated by folk psychology (e.g. hysteria) turns out not to have any fundamental role in cognitive psychological explanation, then that state may be considered not to exist. On the other hand, if it turns out that theoretical cognitive psychology posits states as necessary for the explanation of human behavior that are not foreseen by ordinary folk-psychological language, then those entities or states exist.

Analytic functionalism

A third form of functionalism is concerned with the meanings of theoretical terms in general. This view is most closely associated with David Lewis and is often referred to as analytic functionalism or conceptual functionalism. The basic idea of analytic functionalism is that theoretical terms are implicitly defined by the theories in whose formulation they occur and not by intrinsic properties of the phonemes they comprise. In the case of ordinary language terms, such as "belief", "desire", or "hunger", the idea is that such terms get their meanings from our common-sense "folk psychological" theories about them, but that such conceptualizations are not sufficient to withstand the rigor imposed by materialistic theories of reality and causality. Such terms are subject to conceptual analyses which take something like the following form:

Mental state M is the state that is caused by P and causes Q.

For example, the state of pain is caused by sitting on a tack and causes loud cries, and higher-order mental states of anger and resentment directed at the careless person who left a tack lying around. These sorts of functional definitions in terms of causal roles are claimed to be analytic and a priori truths about the mental states and the (largely fictitious) propositional attitudes they describe. Hence, their proponents are known as analytic or conceptual functionalists. The essential difference between analytic functionalism and psychofunctionalism is that the latter emphasizes the importance of laboratory observation and experimentation in determining which mental-state terms and concepts are genuine and which functional identifications may be considered genuinely contingent and a posteriori identities. The former, on the other hand, claims that such identities are necessary and not subject to empirical scientific investigation.

Homuncular functionalism

Homuncular functionalism was developed largely by Daniel Dennett and has been advocated by William Lycan. It arose in response to the challenges that Ned Block's China Brain (a.k.a. Chinese nation) and John Searle's Chinese room thought experiments presented for the more traditional forms of functionalism (see below under "Criticism"). In attempting to overcome the conceptual difficulties that arose from the idea of a nation full of Chinese people wired together, each person working as a single neuron to produce in the wired-together whole the functional mental states of an individual mind, many functionalists simply bit the bullet, so to speak, and argued that such a Chinese nation would indeed possess all of the qualitative and intentional properties of a mind; i.e. it would become a sort of systemic or collective mind with propositional attitudes and other mental characteristics. Whatever the worth of this latter hypothesis, it was immediately objected that it entailed an unacceptable sort of mind-mind supervenience: the systemic mind which somehow emerged at the higher-level must necessarily supervene on the individual minds of each individual member of the Chinese nation, to stick to Block's formulation. But this would seem to put into serious doubt, if not directly contradict, the fundamental idea of the supervenience thesis: there can be no change in the mental realm without some change in the underlying physical substratum. This can be easily seen if we label the set of mental facts that occur at the higher-level M1 and the set of mental facts that occur at the lower-level M2. Then M1 and M2 both supervene on the physical facts, but a change of M1 to M2 (say) could occur without any change in these facts.

Since mind-mind supervenience seemed to have become acceptable in functionalist circles, it seemed to some that the only way to resolve the puzzle was to postulate the existence of an entire hierarchical series of mind levels (analogous to homunculi) which became less and less sophisticated in terms of functional organization and physical composition all the way down to the level of the physico-mechanical neuron or group of neurons. The homunculi at each level, on this view, have authentic mental properties but become simpler and less intelligent as one works one's way down the hierarchy.

Mechanistic functionalism

Mechanistic functionalism, originally formulated and defended by Gualtiero Piccinini[10] and Carl Gillett[11][12] independently, augments previous functionalist accounts of mental states by maintaining that any psychological explanation must be rendered in mechanistic terms. That is, instead of mental states receiving a purely functional explanation in terms of their relations to other mental states, like those listed above, functions are seen as playing only one part of the explanation of a given mental state, the other part being played by structures.

A mechanistic explanation[13] involves decomposing a given system, in this case a mental system, into its component physical parts, their activities or functions, and their combined organizational relations.[10] On this account the mind remains a functional system, but one that is understood in mechanistic terms. This account remains a sort of functionalism because functional relations are still essential to mental states, but it is mechanistic because the functional relations are always manifestations of concrete structures—albeit structures understood at a certain level of abstraction. Functions are individuated and explained either in terms of the contributions they make to the given system[14] or in teleological terms. If the functions are understood in teleological terms, then they may be characterized either etiologically or non-etiologically.[15]

Mechanistic functionalism leads functionalism away from the traditional functionalist autonomy of psychology from neuroscience and towards integrating psychology and neuroscience.[16] By providing an applicable framework for merging traditional psychological models with neurological data, mechanistic functionalism may be understood as reconciling the functionalist theory of mind with neurological accounts of how the brain actually works. This is due to the fact that mechanistic explanations of function attempt to provide an account of how functional states (mental states) are physically realized through neurological mechanisms.

Physicalism

There is much confusion about the sort of relationship that is claimed to exist (or not exist) between the general thesis of functionalism and physicalism. It has often been claimed that functionalism somehow "disproves" or falsifies physicalism tout court (i.e. without further explanation or description). On the other hand, most philosophers of mind who are functionalists claim to be physicalists—indeed, some of them, such as David Lewis, have claimed to be strict reductionist-type physicalists.

Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is as with what characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral dispositions; physicalism asserted the existence of just one substance and characterized the mental states as physical states (as in "pain = C-fiber firings").

On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.

In the case of David Lewis, there is a distinction in the concepts of "having pain" (a rigid designator true of the same things in all possible worlds) and just "pain" (a non-rigid designator). Pain, for Lewis, stands for something like the definite description "the state with the causal role x". The referent of the description in humans is a type of brain state to be determined by science. The referent among silicon-based life forms is something else. The referent of the description among angels is some immaterial, non-physical state. For Lewis, therefore, local type-physical reductions are possible and compatible with conceptual functionalism. (See also Lewis's mad pain and Martian pain.) There seems to be some confusion between types and tokens that needs to be cleared up in the functionalist analysis.

Criticism

In a 2020 PhilPapers survey, functionalism emerged as the most popular theory of mind, with 33% of respondents accepting or leaning towards it, followed by dualism at 22% and identity theory at 13%.[17] Nevertheless, functionalism has counterintuitive implications, which critics often press using thought experiments.

China brain

Ned Block[18] argues against the functionalist proposal of multiple realizability, on which hardware implementation is irrelevant because only the functional level matters. The "China brain" or "Chinese nation" thought experiment involves supposing that the entire nation of China systematically organizes itself to operate just like a brain, with each individual acting as a neuron. (The tremendous difference in the speed of operation of each unit is not addressed.) According to functionalism, so long as the people are performing the proper functional roles, with the proper causal relations between inputs and outputs, the system will be a real mind, with mental states, consciousness, and so on. Block contends that this scenario is implausible, suggesting that there must be a flaw in the functionalist thesis if it allows such a system to be considered a legitimate mind.

Some functionalists believe the China brain would have qualia, but that because of its size it is impossible for us to imagine it being conscious.[19] Indeed, it may be that we are constrained by our theory of mind[20] and will never be able to understand what Chinese-nation consciousness is like. Therefore, if functionalism is true, either qualia will be present in any system that performs the correct functions, regardless of its physical structure, or qualia do not exist at all and are merely illusions.[21]

The Chinese room

The Chinese room argument by John Searle[22] is a direct attack on the claim that thought can be represented as a set of functions. The thought experiment asserts that it is possible to mimic intelligent action without any interpretation or understanding, through the use of a purely functional system. In short, Searle describes a person who speaks only English and who is in a room with only Chinese symbols in baskets and a rule book in English for moving the symbols around. The person is then instructed by people outside the room to follow the rule book for sending certain symbols out of the room when given certain symbols. Further suppose that the people outside the room are Chinese speakers and are communicating with the person inside via the Chinese symbols. According to Searle, it would be absurd to claim that the English speaker inside knows Chinese simply on the basis of these syntactic processes. This thought experiment attempts to show that systems which operate merely on syntactic processes (inputs and outputs, based on algorithms) cannot realize any semantics (meaning) or intentionality (aboutness). Thus, Searle attacks the idea that thought can be equated with following a set of syntactic rules.

One common response to Searle's thought experiment is that there is a form of mental activity going on at a higher level than the man, and that the whole system needs to be considered. This suggests that the system does understand Chinese, even though the man in the room doesn't. The man is analogized to a CPU in a computer system. In response, Searle suggested the man in the room could memorize all the rules and symbol relations. Even if he internalized all the rules and performed the operations in his mind, he would still be manipulating symbols without understanding their meaning, according to Searle. Some critics consider that this symbol-manipulating subsystem of the brain can be viewed as a kind of separate, virtual mind, which would understand Chinese.[23]

Functionalists also argue that it would be possible in theory to build a system that emulates on digital hardware each neuron of the brain of someone who understands Chinese, and that such a brain emulation would have the same mental processes and would thus understand Chinese.[23]

Inverted spectrum

Another main criticism of functionalism is the inverted spectrum or inverted qualia scenario, most specifically proposed as an objection to functionalism by Ned Block.[18][24] This thought experiment involves supposing that there is a person, call her Jane, who is born with a condition that makes her see the opposite spectrum of light to that normally perceived. Unlike normal people, Jane sees the color violet as yellow, orange as blue, and so forth. So suppose, for example, that you and Jane are looking at the same orange. While you perceive the fruit as colored orange, Jane sees it as colored blue. However, when asked what color the piece of fruit is, both you and Jane will report "orange". In fact, all of your behavioral as well as functional relations to colors will be the same. Jane will, for example, properly obey traffic signals just as any other person would, even though doing so involves color perception. Therefore, the argument goes, since there can be two people who are functionally identical yet have different mental states (differing in their qualitative or phenomenological aspects), functionalism is not robust enough to explain individual differences in qualia.[25][26]

According to David Chalmers, all "functionally isomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences. He calls this the principle of organizational invariance. For example, it implies that a silicon chip that is functionally isomorphic to a brain will have the same perception of the color red, given the same sensory inputs. He proposed the "dancing qualia" thought experiment to demonstrate this. It is a reductio ad absurdum argument that starts by supposing that two such systems have different qualia in the same situation. It involves a switch that alternates between a chunk of brain that causes the perception of red and a functionally isomorphic system that causes the perception of blue, for example implemented as a silicon chip. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. Therefore, he concludes that dancing qualia are impossible in practice, and that the equivalent digital system would not only experience qualia, but would have conscious experiences qualitatively identical to those of the biological system (e.g., seeing the same color). He also proposed a similar thought experiment, named fading qualia, which argues that it is not possible for qualia to fade when each biological neuron is replaced by a functional equivalent.[27][28]

A related critique of the inverted spectrum argument is that it assumes that mental states (differing in their qualitative or phenomenological aspects) can be independent of the functional relations in the brain. Thus, it begs the question of functional mental states: its assumption denies the possibility of functionalism itself, without offering any independent justification for doing so (functionalism says that mental states are produced by the functional relations in the brain). This same type of problem—that there is no argument, just an antithetical assumption at their base—can also be said of both the Chinese room and the Chinese nation arguments.[citation needed]

Twin Earth

The Twin Earth thought experiment, introduced by Hilary Putnam,[29] is responsible for one of the main arguments used against functionalism, although it was originally intended as an argument against semantic internalism. The thought experiment is simple and runs as follows. Imagine a Twin Earth which is identical to Earth in every way but one: water does not have the chemical structure H2O, but rather some other structure, say XYZ. It is critical, however, to note that XYZ on Twin Earth is still called "water" and exhibits all the same macro-level properties that H2O exhibits on Earth (i.e., XYZ is also a clear drinkable liquid that is in lakes, rivers, and so on). Since these worlds are identical in every way except in the underlying chemical structure of water, you and your Twin Earth doppelgänger see exactly the same things, meet exactly the same people, have exactly the same jobs, behave exactly the same way, and so on. In other words, since you share the same inputs, outputs, and relations between other mental states, you are functional duplicates. So, for example, you both believe that water is wet. However, the content of your mental state of believing that water is wet differs from your duplicate's because your belief is of H2O, while your duplicate's is of XYZ. Therefore, so the argument goes, since two people can be functionally identical, yet have different mental states, functionalism cannot sufficiently account for all mental states.

Most defenders of functionalism initially responded to this argument by attempting to maintain a sharp distinction between internal and external content. The internal contents of propositional attitudes, for example, would consist exclusively in those aspects of them which have no relation with the external world and which bear the necessary functional/causal properties that allow for relations with other internal mental states. Since no one has yet been able to formulate a clear basis or justification for the existence of such a distinction in mental contents, however, this idea has generally been abandoned in favor of externalist causal theories of mental contents (also known as informational semantics). Such a position is represented, for example, by Jerry Fodor's account of an "asymmetric causal theory" of mental content. This view simply entails the modification of functionalism to include within its scope a very broad interpretation of inputs and outputs to include the objects that are the causes of mental representations in the external world.

The Twin Earth argument hinges on the assumption that experience with an imitation water would cause a different mental state than experience with natural water. However, since no one would notice the difference between the two waters, this assumption is likely false. Further, this basic assumption is directly antithetical to functionalism, and thereby the Twin Earth argument does not constitute a genuine argument: the assumption entails a flat denial of functionalism itself (which would say that the two waters do not produce different mental states, because the functional relationships remain unchanged).

Meaning holism

Another common criticism of functionalism is that it implies a radical form of semantic holism. Block and Fodor[24] referred to this as the damn/darn problem. The difference between saying "damn" or "darn" when one smashes one's finger with a hammer can be mentally significant. But since these outputs are, according to functionalism, related to many (if not all) internal mental states, two people who experience the same pain and react with different outputs must share little (perhaps nothing) in common in any of their mental states. But this is counterintuitive; it seems clear that two people share something significant in their mental states of being in pain if they both smash their finger with a hammer, whether or not they utter the same word when they cry out in pain.

Another possible solution to this problem is to adopt a moderate (or molecularist) form of holism. But even if this succeeds in the case of pain, in the case of beliefs and meaning, it faces the difficulty of formulating a distinction between relevant and non-relevant contents (which can be difficult to do without invoking an analytic–synthetic distinction, as many seek to avoid).

Triviality arguments

According to Ned Block, if functionalism is to avoid the chauvinism of type-physicalism, it becomes overly liberal in "ascribing mental properties to things that do not in fact have them".[18] As an example, he proposes that the economy of Bolivia might be organized such that the economic states, inputs, and outputs would be isomorphic to a person under some bizarre mapping from mental to economic variables.[18]

Hilary Putnam,[30] John Searle,[31] and others[32][33] have offered further arguments that functionalism is trivial: the internal structures that functionalism tries to describe turn out to be present everywhere, so that functionalism reduces either to behaviorism or to complete triviality, and hence to a form of panpsychism. These arguments typically assume that physics leads a system through a progression of distinct states, and that a functionalist realization is present whenever there is a mapping from the proposed set of mental states onto the physical states of the system. Since the successive states of a physical system are always at least slightly different from one another, such a mapping will always exist, and so any system would count as a mind. Formulations of functionalism that stipulate absolute requirements on interaction with external objects (external to the functional account, i.e. not defined functionally) reduce to behaviorism rather than to absolute triviality, because the input-output behavior is still required.
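The mapping step at the heart of these triviality arguments can be illustrated with a short sketch. (The state names and the `build_mapping` helper below are hypothetical, introduced purely for illustration; this is an informal restatement of the mapping step, not any cited author's formal construction.) Because the physical states in the trace are pairwise distinct, a mapping onto any desired sequence of mental-state labels can always be constructed:

```python
# Illustrative sketch of the trivial-mapping step: if a physical system
# passes through pairwise-distinct states, any desired sequence of
# "mental" state labels can be realized simply by pairing them up.

def build_mapping(physical_trace, mental_trace):
    """Map each (distinct) physical state to the mental state required
    at the same time step. Fails if the physical states are not unique."""
    if len(set(physical_trace)) != len(physical_trace):
        raise ValueError("physical states must be pairwise distinct")
    return dict(zip(physical_trace, mental_trace))

# A rock's (hypothetical) microphysical states over four moments:
rock_states = ["p0", "p1", "p2", "p3"]
# Any "mental" trajectory we wish to ascribe to it:
mind_states = ["belief", "desire", "pain", "belief"]

mapping = build_mapping(rock_states, mind_states)
# Under this mapping, the rock trivially "realizes" the mental trajectory:
realized = [mapping[p] for p in rock_states]
assert realized == mind_states
```

The point of the sketch is that nothing constrains the choice of `mind_states`: so long as the physical states are distinct, any mental-state sequence whatsoever can be "realized" this way, which is exactly the trivialization the argument alleges.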

Peter Godfrey-Smith has argued further[34] that such formulations can still be reduced to triviality if they accept a seemingly innocent additional assumption: that adding a transducer layer, that is, an input-output system, to an object should not change whether that object has mental states. The transducer layer is restricted to producing behavior according to a simple mapping, such as a lookup table, from inputs to actions on the system, and from the state of the system to outputs. However, since the system will be in a unique state at each moment and for each possible input, such a mapping will always exist, and so there will be a transducer layer that produces whatever physical behavior is desired.
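The transducer layer itself can be sketched in the same spirit. (All names and states below are hypothetical, for illustration only; the table stands in for the "simple mapping" the argument describes.) Because the wrapped core system occupies a distinct internal state at each step, a lookup table keyed on those states can emit whatever outputs one likes, so the combined system exhibits any desired behavior:

```python
# Sketch of a lookup-table transducer layer wrapped around a core
# system whose internal states are pairwise distinct at each step.
# Names and states are hypothetical, for illustration only.

def make_transducer(core_states, desired_outputs):
    """Build a lookup table from the core system's (distinct) internal
    states to the outputs we want the wrapped system to produce."""
    if len(set(core_states)) != len(core_states):
        raise ValueError("core states must be pairwise distinct")
    return dict(zip(core_states, desired_outputs))

# The core system simply ticks through distinct states:
core_trace = ["s0", "s1", "s2", "s3"]
# Any behavior we wish the combined system to display:
wanted = ["greet", "wave", "wince", "greet"]

table = make_transducer(core_trace, wanted)
produced = [table[s] for s in core_trace]
assert produced == wanted  # the table reproduces the chosen behavior
```

Since such a table exists for any choice of `wanted`, adding the transducer layer lets an arbitrary object display arbitrary input-output behavior, which is why Godfrey-Smith takes the extra assumption to reintroduce triviality.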

Godfrey-Smith believes that these problems can be addressed using causality, but that it may be necessary to posit a continuum between objects being minds and not being minds rather than an absolute distinction. Furthermore, constraining the mappings seems to require either consideration of the external behavior as in behaviorism, or discussion of the internal structure of the realization as in identity theory; and though multiple realizability does not seem to be lost, the functionalist claim of the autonomy of high-level functional description becomes questionable.[34]

See also


References

  1. ^ Block, Ned. (1996). "What is functionalism?" a revised version of the entry on functionalism in The Encyclopedia of Philosophy Supplement, Macmillan. (PDF online)
  2. ^ Marr, D. (1982). Vision: A Computational Approach. San Francisco: Freeman & Co.
  3. ^ "Functionalism". Stanford Encyclopedia of Philosophy. 2023.
  4. ^ Lewis, David. (1980). "Mad Pain and Martian Pain". In Block (1980a) Vol. 1, pp. 216–222.
  5. ^ Armstrong, D.M. (1968). A Materialistic Theory of the Mind. London: RKP.
  6. ^ a b Putnam, Hilary. (1960). "Minds and Machines". Reprinted in Putnam (1975a).
  7. ^ a b Putnam, Hilary. (1967). "Psychological Predicates". In Art, Mind, and Religion, W.H. Capitan and D.D. Merrill (eds.), pp. 37–48. (Later published as "The Nature of Mental States" in Putnam (1975a).)
  8. ^ Putnam, H. (1967). “The Mental Life of Some Machines,” in H.-N. Castaneda (Ed.), Intentionality, Minds, and Perception. Detroit, MI: Wayne State University Press, p. 183.
  9. ^ Putnam, H. (1967). "The Mental Life of Some Machines", in H.-N. Castaneda (Ed.), Intentionality, Minds, and Perception. Detroit, MI: Wayne State University Press, pp. 179–180.
  10. ^ a b Piccinini G (2010). "The mind as neural software? Understanding functionalism, computationalism, and computational functionalism" (PDF). Philosophy and Phenomenological Research. 81 (2): 269–311. doi:10.1111/j.1933-1592.2010.00356.x.
  11. ^ Gillett, C. (2007). “A Mechanist Manifesto for the Philosophy of Mind: The Third Way for Functionalists”. Journal of Philosophical Research, invited symposium on “Mechanisms in the Philosophy of Mind”, vol.32, pp. 21-42.
  12. ^ Gillett, C. (2013). “Understanding the Sciences through the Fog of ‘Functionalism(s)’”. In Hunneman (ed.) Functions: Selection and Mechanisms. Dordrecht: Kluwer, pp.159-81.
  13. ^ Machamer P.; Darden L.; Craver C. F. (2000). "Thinking about mechanisms". Philosophy of Science. 67 (1): 1–25. doi:10.1086/392759. S2CID 121812965.
  14. ^ Craver C. F. (2001). "Role functions, mechanisms, and hierarchy". Philosophy of Science. 68 (1): 53–74. doi:10.1086/392866. S2CID 35230404.
  15. ^ Maley C. J.; Piccinini G. (2013). "Get the Latest Upgrade: Functionalism 6.3.1". Philosophia Scientiae. 17 (2): 135–149. doi:10.4000/philosophiascientiae.861.
  16. ^ Piccinini G.; Craver C. F. (2011). "Integrating psychology and neuroscience: Functional analyses as mechanism sketches". Synthese. 183 (3): 283–311. CiteSeerX 10.1.1.367.190. doi:10.1007/s11229-011-9898-4. S2CID 6726609.
  17. ^ "Survey Results | Consciousness: panpsychism, dualism, eliminativism, functionalism, or identity theory?". PhilPapers. 2020.
  18. ^ a b c d Block, Ned. (1980b). "Troubles With Functionalism", in (1980a).
  19. ^ Lycan, William (1987). Consciousness. Cambridge, Massachusetts: MIT Press. ISBN 9780262121248.
  20. ^ Baron-Cohen, Simon; Leslie, Alan M.; Frith, Uta (October 1985). "Does the autistic child have a "theory of mind"?". Cognition. 21 (1): 37–46. doi:10.1016/0010-0277(85)90022-8. PMID 2934210. S2CID 14955234.
  21. ^ Dennett, Daniel (1990), "Quining Qualia", in Lycan, William G. (ed.), Mind and cognition: a reader, Cambridge, Massachusetts, USA: Basil Blackwell, ISBN 9780631160762.
  22. ^ Searle, John (1980). "Minds, Brains and Programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–424. doi:10.1017/s0140525x00005756. S2CID 55303721.
  23. ^ a b "The Chinese Room Argument". Stanford Encyclopedia of Philosophy. 2024.
  24. ^ a b Block, Ned and Fodor, J. (1972). "What Psychological States Are Not". Philosophical Review 81.
  25. ^ "Qualia". Internet Encyclopedia of Philosophy. Retrieved 2024-10-08.
  26. ^ Block, Ned. (1994). Qualia. In S. Guttenplan (ed), A Companion to Philosophy of Mind. Oxford: Blackwell
  27. ^ Chalmers, David (1995). "Absent Qualia, Fading Qualia, Dancing Qualia". Conscious Experience.
  28. ^ "An Introduction to the Problems of AI Consciousness". The Gradient. 2023-09-30. Retrieved 2024-10-05.
  29. ^ Putnam, Hilary. (1975b). "The Meaning of 'Meaning'", reprinted in Putnam (1975a).(PDF online Archived June 18, 2013, at the Wayback Machine)
  30. ^ Putnam, H. (1988). Reality and representation. Appendix. Cambridge, MA: MIT Press.
  31. ^ Searle J (1990). "Is the brain a digital computer?". Proceedings and Addresses of the American Philosophical Association. 64 (3): 21–37. doi:10.2307/3130074. JSTOR 3130074.
  32. ^ Chalmers D (1996). "Does a rock implement every finite-state automaton?". Synthese. 108 (3): 309–333. CiteSeerX 10.1.1.33.5266. doi:10.1007/bf00413692. S2CID 17751467.
  33. ^ Copeland J (1996). "What is computation?". Synthese. 108 (3): 335–359. doi:10.1007/bf00413693. S2CID 15217009.
  34. ^ a b Godfrey-Smith, Peter (2009). "Triviality Arguments against Functionalism". Philosophical Studies. 145 (2). Archived from the original (PDF) on 2011-05-22. Retrieved 2011-02-06.

Further reading

  • Armstrong, D.M. (1968). A Materialistic Theory of the Mind. London: RKP.
  • Baron-Cohen S.; Leslie A.; Frith U. (1985). "Does the Autistic Child Have a "Theory of Mind"?". Cognition. 21 (1): 37–46. doi:10.1016/0010-0277(85)90022-8. PMID 2934210. S2CID 14955234.
  • Block, Ned. (1980a). "Introduction: What Is Functionalism?" in Readings in Philosophy of Psychology. Cambridge, MA: Harvard University Press.
  • Block, Ned. (1980b). "Troubles With Functionalism", in Block (1980a).
  • Block, Ned. (1994). Qualia. In S. Guttenplan (ed), A Companion to Philosophy of Mind. Oxford: Blackwell
  • Block, Ned (1996). "What is functionalism?" (PDF). a revised version of the entry on functionalism in The Encyclopedia of Philosophy Supplement, Macmillan.
  • Block, Ned and Fodor, J. (1972). "What Psychological States Are Not". Philosophical Review 81.
  • Chalmers, David. (1996). The Conscious Mind. Oxford: Oxford University Press.
  • DeLancey, C. (2002). "Passionate Engines - What Emotions Reveal about the Mind and Artificial Intelligence." Oxford: Oxford University Press.
  • Dennett, D. (1990) Quining Qualia. In W. Lycan, (ed), Mind and Cognition. Oxford: Blackwells
  • Levin, Janet. (2004). "Functionalism", The Stanford Encyclopedia of Philosophy (Fall 2004 Edition), E. Zalta (ed.). (online)
  • Lewis, David. (1966). "An Argument for the Identity Theory". Journal of Philosophy 63.
  • Lewis, David. (1980). "Mad Pain and Martian Pain". In Block (1980a) Vol. 1, pp. 216–222.
  • Lycan, W. (1987) Consciousness. Cambridge, MA: MIT Press.
  • Mandik, Pete. (1998). Fine-grained Supervenience, Cognitive Neuroscience, and the Future of Functionalism.
  • Marr, D. (1982). Vision: A Computational Approach. San Francisco: Freeman & Co.
  • Polgar, T. D. (2008). "Functionalism". The Internet Encyclopedia of Philosophy.
  • Putnam, Hilary. (1960). "Minds and Machines". Reprinted in Putnam (1975a).
  • Putnam, Hilary. (1967). "Psychological Predicates". In Art, Mind, and Religion, W.H. Capitan and D.D. Merrill (eds.), pp. 37–48. (Later published as "The Nature of Mental States" in Putnam (1975a).)
  • Putnam, Hilary. (1975a). Mind, Language, and Reality. Cambridge: CUP.
  • Searle, John (1980). "Minds, Brains and Programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–424. doi:10.1017/s0140525x00005756. S2CID 55303721.
  • Smart, J.J.C. (1959). "Sensations and Brain Processes". Philosophical Review LXVIII.