Chinese room

The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness,[a] regardless of how intelligent or human-like the program may make the computer's behavior appear. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences.[1] Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since.[2] The centerpiece of Searle's argument is a thought experiment known as the Chinese room.[3]

In the thought experiment, Searle imagines a person who does not understand Chinese isolated in a room with a book containing detailed instructions for manipulating Chinese symbols. When Chinese text is passed into the room, the person follows the book's instructions to produce Chinese symbols that, to fluent Chinese speakers outside the room, appear to be appropriate responses. According to Searle, the person is just following syntactic rules without semantic comprehension, and neither the human nor the room as a whole understands Chinese. He contends that when computers execute programs, they are similarly just applying syntactic rules without any real understanding or thinking.[4]
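The procedure the room's occupant follows can be caricatured as pure symbol lookup. The sketch below is a minimal, hypothetical illustration (the rule-book entries and function name are invented for this example, and a real rule book would be vastly more complex): the operator matches incoming shapes against the book and copies out the listed reply, at no point interpreting what any symbol means.

```python
# A toy "rule book" for the room: incoming Chinese strings mapped to
# canned replies. These entries are hypothetical; to the operator they
# are just uninterpreted shapes, not meaningful sentences.
RULE_BOOK = {
    "你好吗": "我很好",
    "你是谁": "我在房间里",
}

def operate_room(incoming: str) -> str:
    """Follow the book's purely syntactic instructions: match the
    incoming symbols, copy out the listed reply. No step requires
    understanding what the symbols mean."""
    return RULE_BOOK.get(incoming, "")
```

On Searle's view, even a system that passed every behavioral test this way would still be doing only what this sketch does: manipulating symbols by form, not by meaning.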

The argument is directed against the philosophical positions of functionalism and computationalism,[5] which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls the strong AI hypothesis:[b] "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[c]

Although its proponents originally presented the argument in reaction to claims made by artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research, because it does not show a limit on the amount of intelligent behavior a machine can display.[6] The argument applies only to digital computers running programs and does not apply to machines in general.[4] While widely discussed, the argument has been subject to significant criticism and remains controversial among philosophers of mind and AI researchers.[7][8]

  1. ^ Searle 1980.
  2. ^ Harnad 2001, p. 1.
  3. ^ Roberts 2016.
  4. ^ a b Searle 1980, p. 11.
  5. ^ Searle 1992, p. 44.
  6. ^ Russell & Norvig 2021, p. 986.
  7. ^ Russell & Norvig 2021, section "Biological naturalism and the Chinese Room".
  8. ^ "The Chinese Room Argument". Stanford Encyclopedia of Philosophy. 2024.