4

The Chinese Room setup is as follows, quoted from an earlier question on the same topic:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

If the boxes of symbols and the book of instructions contain only a handful of rules, I can see the argument that the man inside the room doesn't need to understand any of their meaning to follow the instructions correctly.

But going from there to a set of rules complex enough to encode Chinese requires, to me, a leap similar to the pile-of-sand (sorites) paradox: three grains of sand are not a pile, and adding a single grain to a non-pile doesn't make it a pile. Nevertheless, piles of sand do exist.

By analogy, three symbols and rules require no understanding to carry out, and adding a single rule doesn't change that. But this is not sufficient to argue that one can manipulate a set of rules complex enough for Chinese without any understanding.

I find questionable the idea that one can encode a language, including the ability to use it in a real-world context, just by writing down a finite list of logical manipulation rules. It is not clear to me why one can or should assume that using a language can be written down as a list of rules that behaves just the same way as a handful of individual rules. In the analogy, there might be a pile of sand, and the Chinese Room argument seems to require that considering individual grains of sand is all there is.

Has this argument been made, discussed, or refuted by philosophers?

quarague
  • 143
  • 5
  • 1
    I’ve broken down your question into paragraphs to aid reading. Just to clarify, your essential thought is something like “manipulating individual rules doesn’t constitute understanding, but manipulating the whole system might; it’s just that language systems are really intricate”? – Paul Ross Nov 09 '21 at 08:36
  • @PaulRoss Thanks for the edit. Essentially yes, added two more sentences to hopefully make my question more clear. – quarague Nov 09 '21 at 09:00
  • There is an analogy similar to the pile one in the Churchlands' and Pinker's [intuition reply](https://plato.stanford.edu/entries/chinese-room/#IntuRepl), but based on speed rather than complexity. They compare the Chinese room to waving a magnet without producing light and then concluding that light is not electromagnetic waves: "*The thought experiment slows down the waves to a range to which we humans no longer see them as light. Similarly, Searle has slowed down the mental computations to a range in which we humans no longer think of it as understanding (since understanding is ordinarily much faster).*" – Conifold Nov 09 '21 at 09:42
  • Your intuition is correct. See [emergent properties](https://plato.stanford.edu/entries/properties-emergent/). No single one of our neurons can formulate an English sentence, yet together they form an entity that can. Similarly, the person in the room may not understand Chinese, but together with the room and the set of rules, they form an entity that does, for all intents and purposes, read and speak Chinese. – armand Nov 09 '21 at 10:57
  • A universal Turing machine can have [very simple rules](https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/turing-machine/one.html) for jumping from one location on the tape to another and changing or not changing the digit it finds there, with nearly all the complexity of the program being in the arrangement of digits itself. And a universal Turing machine can in principle execute any computable program whatsoever if it has a long enough tape, including say a detailed atom-by-atom simulation of an entire human brain. – Hypnosifl Nov 11 '21 at 00:05
  • So, I think Searle is correct that we can imagine a long-lived humanlike being hand-executing a program that acts as though it understands Chinese even though the being itself doesn't understand it. But I would say the larger process of cause-and-effect that takes place during the execution of the program might understand Chinese even if the being shuffling the symbols doesn't--this is the "systems reply" which I don't think Searle has any very good argument against. – Hypnosifl Nov 11 '21 at 00:09
  • @Hypnosifl That looks like a satisfying answer to my question. If you write it up I will accept it. Thanks. – quarague Nov 11 '21 at 08:07

6 Answers

2

Searle is probably thinking of the fact that the rules needed to execute an arbitrarily complex program can be fairly simple, as demonstrated for example by Alan Turing's abstract model of computation, which has become known as the "Turing machine". There are various online outlines of the idea you can read like this one, but the basic idea is that the machine can read and edit data on a linear "tape" divided into a sequence of squares, with each square containing one of some finite number of possible symbols, like 1s and 0s for a tape filled with binary code. The machine views only one square at a time, and it has a finite collection of internal states, with these states giving rules for how the machine will behave next depending on what digit it reads on the current square. For example, a machine might have a numbered list of possible states, and state #8 could be something like "if you read a 0 on the current square, move 3 squares to the left and transition to state #17; if you read a 1 on the current square, write a 1 back, then move 2 squares to the right and transition to state #3." The rules are such that the behavior of the machine on each step depends only on the current internal state and the digit in the square the machine is currently reading; no other information about its history is needed. Turing also assumed there would be a special "halting state" that determines when the machine is finished, and the pattern of symbols on the tape at that point would be the program's output.
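
To make the rule format concrete, here is a minimal sketch in Python of what such a state table and step loop might look like. The two example rules, the state numbers, and the halting convention are invented for illustration; they are not taken from Turing or Searle.

```python
# A minimal Turing-machine sketch with two invented example rules.
# Each rule maps (current_state, symbol_read) -> (symbol_to_write, head_move, next_state).
# As described above, each step depends only on the current state and the
# symbol under the head; no other history is needed.

RULES = {
    (8, "0"): ("0", -3, 17),  # read 0 in state 8: leave it, move 3 left, go to state 17
    (8, "1"): ("1", +2, 3),   # read 1 in state 8: write a 1 back, move 2 right, go to state 3
    # ... a real machine would list an entry for every (state, symbol) pair
}

HALT = -1  # stand-in for Turing's special halting state

def run(tape, state=8, head=0, max_steps=1000):
    """Apply RULES repeatedly until the machine halts (or we give up)."""
    tape = list(tape)
    for _ in range(max_steps):
        if state == HALT:
            break
        rule = RULES.get((state, tape[head]))
        if rule is None:      # undefined in this tiny sketch; just stop here
            break
        write, move, state = rule
        tape[head] = write
        head += move
        if not 0 <= head < len(tape):  # off the finite sketch tape; a real TM tape is unbounded
            break
    return "".join(tape)      # the tape at halting is the program's output
```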

If we define different Turing machines by their different lists of internal rules, Turing found that some of these machines qualify as universal Turing machines (UTMs), which can simulate any other possible Turing machine given the appropriate string of symbols on the input tape. So the set of all possible "computable functions" can be understood as the set of functions that can be calculated by a UTM. Anything that we conventionally think of as a "computer program" could be run on a UTM, including any arbitrarily complex program for answering questions in Chinese (say, a detailed simulation of the brain of a Chinese-speaking person). All the complexities and unique characteristics of different programs would be due to differences in the symbols on the input tape; the list of internal rules for the UTM would be the same for every program. And these internal rules can be quite simple: if I'm understanding the chart from this answer on the Computer Science Stack Exchange correctly, a UTM that includes a halting state and acts on tapes with only two symbols needs just 19 internal rules.

So Searle is just asking us to imagine a person playing the role of the Turing machine, jumping around between squares on the input tape and editing them according to the internal rules, with some physical stand-in for the tape divided into squares (say, a bunch of note cards laid out in a row with erasable symbols written in pencil). One could imagine the person has memorized the 19 internal rules, but even this isn't really necessary; they could also have something like a clock face with the different rules written at different positions on the rim, and a pointer that can be moved by hand, with the rules saying things like "Rule #8: if you read a 0 on the current square, move 3 squares to the left and shift the pointer to rule #17; if you read a 1 on the current square, erase it and write a 1, then move 2 squares to the right and shift the pointer to rule #3." And as mentioned before, the person wouldn't need to retain any memory of the symbols they'd seen on previous steps while executing the program.

So, I think there is no problem with Searle's claim that a hypothetical person with a mind basically like yours or mine (though perhaps with a much longer lifespan) could execute an arbitrarily complex program for answering questions in Chinese without the person themselves having any understanding of Chinese, since there is no correlation between the complexity of the program and the complexity of the rules the person has to understand.

Hopefully the above addresses your question. But as an aside, I'd like to add that there seem to be two distinct claims Searle is making about "understanding": 1) the claim that a person could execute a complex Chinese-speaking program without the person having any understanding of Chinese, and 2) the further claim that there is no entity or process within the room that can be said to understand Chinese. I understood your question to be dealing only with 1), so my above answer was defending the plausibility of 1). But that doesn't mean I would defend Searle's overall argument, because I think 2) is wrong. As I said in this answer to another question about Searle's thought experiment, I think the main flaw in his larger argument is that the network of causally interrelated events created by the person interacting with the cards using the rules could be said to have its own understanding, separate from the person's.

Hypnosifl
  • 2,837
  • 1
  • 16
  • 18
  • 1
    You went through a lengthy explanation of TMs to finally devote only your final sentence to the claim that the "network of causally interrelated events" etc., "could be said to have its own understanding." Checkmark notwithstanding, I didn't find this responsive to the question. "You made a vague statement and didn't back it up. Not even wrong," as it were. Hence my downvote. – user4894 Nov 12 '21 at 20:47
  • @user4894 That last sentence wasn't *intended* to answer the OP's question, I already answered the question by saying I didn't think there was anything wrong with Searle's claim that a human could execute a complex program without understanding, based on the simple rules of a UTM being all the human would need to know and the point that "there is no correlation between the complexity of the program and the complexity of the rules the person has to understand". – Hypnosifl Nov 12 '21 at 20:54
  • (cont.) That last sentence about the "network of causally interrelated events" was just an aside, meant to explain that though my answer was defending the plausibility of the part of Searle's argument that OP focused on, I do think there is a *separate* problem with Searle's argument which I discussed more in the other linked answer, so that I don't agree with his ultimate conclusions (if I hadn't put in that last sentence, the OP might get the idea that I support Searle's overall argument about the larger *system* not understanding Chinese, as opposed to just the human in the room). – Hypnosifl Nov 12 '21 at 21:02
  • Ok you talked me into it. When I went to upvote it said, "Your vote is locked in unless the OP edits their post" and didn't let me fix it. – user4894 Nov 12 '21 at 22:32
  • @user4894 I edited it to make more clear that the last part (now in a separate paragraph) was just an aside about Searle's larger argument, not directly relevant to the part of Searle's argument that the OP was asking about. – Hypnosifl Nov 12 '21 at 22:53
  • Ok I upvoted to cancel my downvote. What you're articulating is the "systems" response, which is that the human in the room doesn't understand Chinese but the "system of the room" does. I have never found this argument compelling or even coherent, but lots of others do. – user4894 Nov 13 '21 at 00:48
  • @user4894 - I talked about the systems response more in the other answer I linked to, but basically I favor the idea there are psychophysical laws that relate physical processes to conscious experiences (I don't think consciousness is a property of objects like my brain, only of *processes* consisting of causally related events spread out over some region of spacetime), and from this perspective it seems perfectly natural that there are patterns of causally related events created by the person jumping between cards which are different from any patterns of events within the person's brain. – Hypnosifl Nov 13 '21 at 04:32
  • I’ve only listened to Searle speak but I think he has established at least one strong anti-systems defense, largely because he’s a physicalist: Mental processes are to the brain like digestion to the stomach (he says), meaning we have no scientific reason to suspect digestion and subjective experience occur elsewhere. The brain is a necessary component, only it has the right physics. Chalmers wants a blurrier line between types of physical systems, yet Searle has much more cognitive science and biology at his disposal to characterize their differences. – J Kusin Sep 04 '22 at 16:24
  • @JKusin If he really believed mental processes were just like digestion, shouldn't he be an eliminative materialist? If on the other hand he believes there is some distinct metaphysical reality of experience, doesn't that make consciousness fundamentally different from digestion, assuming he doesn't believe in some metaphysical truth of what "digestion" is which is more than just a high-level description of certain physical processes? – Hypnosifl Sep 04 '22 at 17:11
  • @Hypnosifl boo me for the ambiguity about digestion. *The brain causes consciousness like the stomach causes digestion*. Distinct physical processes and maybe distinct metaphysical being. From Searle, The Mystery of Consciousness vol 1 pg 110, “we can make clear some of the differences between Dennett's approach to consciousness…I believe that the brain causes conscious experiences.” and “we do know that any other system capable of causing consciousness would have to have causal powers equivalent to the brain's to do it. This point follows trivially from the fact that brains do it causally.” – J Kusin Sep 04 '22 at 23:29
  • And his famous anti eliminative materialism quote (as far as I understand it) two pages later “No, you can't disprove the existence of conscious experiences by proving that they are only an appearance disguising the underlying reality, *because where consciousness is concerned the existence of the appearance is the reality.*” (emphasis his) – J Kusin Sep 04 '22 at 23:30
1

One can make the Chinese room rather simple by reducing it to a Turing machine, with a tape head and some kind of internal state that can be swapped out, say physically. At any given point in time, each entry in the program looks like the following:

  1. Erase the symbol at the cell the tape head is pointing to, or write a new symbol in its place.
  2. Move the tape head one cell to the left, one cell to the right, or keep it at the current cell.
  3. Change the state of the machine to a new state (which can be the same state).

These rules are obviously very simple to follow, and require no understanding.
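
As a rough Python sketch (the states and symbols here are invented for illustration, not taken from the answer), each program entry can be written as a triple whose fields correspond directly to the three actions listed above:

```python
# Sketch of a Turing-machine program as a lookup table of entries.
# Each entry holds: the symbol to write (writing a blank counts as erasing),
# the head movement (-1 left, 0 stay, +1 right), and the next state
# (which may be the same as the current state).
from collections import namedtuple

Entry = namedtuple("Entry", ["write", "move", "next_state"])

program = {
    ("A", "0"): Entry(write="1", move=+1, next_state="A"),
    ("A", "1"): Entry(write="1", move=0,  next_state="HALT"),
}

def step(tape, head, state):
    """Carry out one entry mechanically: write, move, change state."""
    e = program[(state, tape[head])]
    tape[head] = e.write
    return head + e.move, e.next_state
```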

Gabriel
  • 51
  • 3
1

It is important to remember that the whole point of the Chinese Room argument is to criticize the position that a computer that can mimic human language has understanding. Searle's opponents are the ones arguing that:

  1. A finite set of rules can encode a language.
  2. A computer with these rules has understanding.

Searle is arguing that even if we grant that statement #1 is true, statement #2 does not follow. If you argue that the man in the room could never perfectly replicate a native Chinese speaker because statement #1 is false, then you also have to reject the computer scientists' position.

E Tam
  • 1,024
  • 5
  • 11
1

Searle's thought experiment was directly aimed at the pile-of-sand paradox. The point of it is that knowledge is not achieved by any pile-of-sand style of accumulation.

There are a variety of things we know are needed to have knowledge:

  1. Functional ability to do the translation. Searle puts that in the user manual. This manual IS a general AI capable of passing the Turing test. However, the manual lacks each of the other features listed below.
  2. Awareness. Searle puts that in the operator.
  3. I/O capability. Searle puts that in the operator plus room structure.
  4. Agency. Searle puts that in the operator.

Awareness is an active capability, which includes conscious awareness, which involves understanding. Searle's room has capability, it has awareness in the operator, it has I/O, and it has agency. But the consciousness of the operator does not itself contain understanding.

To assert that there is understanding of Chinese exemplified by the room, one must do one of two things that most philosophers have balked at:

a) Explicitly toss out consciousness/awareness as irrelevant to understanding. This is to embrace behaviorism, which philosophy tried once, and the vast majority now consider to have been a long dreadful mistake.

b) Ascribe consciousness to the inert code of the manual, OR as an emergent property to the manual/room/person collective.

Searle is a fan of consciousness being an emergent property of our neurology, but he, and the vast majority of his colleagues, consider the room assemblage NOT to be the sort of thing that can have an emergent property of consciousness.

And few even dedicated algorithmicists would consider the manuals themselves to be conscious. Algorithmicists generally try to ascribe emergent consciousness to the act of stepping through an algorithm, not to the algorithm itself. But that claim is much more psychologically plausible when one can picture the algorithm and the stepping process as embedded in the same "thing", such as a computer or a brain. Decouple the stepping process from the algorithm, as Searle did, and the plausibility of the person/room/algorithm complex somehow being collectively conscious is directly in the bullseye of the thought problem.

Searle's thought problem has significantly reduced the number of active philosophers who believe in a Functional Identity Theory of consciousness. Searle himself is an emergent physicalist, who holds that neurology has some unique properties OTHER than function, which somehow led to the emergence of consciousness. This is now the prevailing view of consciousness, basically because of the effectiveness of the Chinese Room thought problem in attacking both the pile-of-sand functionalists and the emergent functionalists.

Dcleve
  • 9,612
  • 1
  • 11
  • 44
0

I'd relate this to the Private Language Argument, and look to Hofstadter's strange loops for a structural difference between rule intelligence and agent intelligence.

Children raised by wolves, without human company past puberty, never learn to speak. They may perhaps make more subtle inferences than a wolf. But without language and the tools that go with it (at least those beyond mimicry), a human is not a very special animal, and without creche-rearing and education is likely less good at problem solving than corvids or cephalopods.

Wittgenstein analysed the intelligence of language as contained in modes of life: contextual cues, lived practices, associated experiences, and the sharing of these. The power of language begins in its intersubjectivity: if I were you, if you were me. It groups things, highlights things, allows continuities to be narrated in salience landscapes, and enables cognitive grip for socially useful manipulation of concepts, in the way that, say, mathematics elaborates subitising (a generalisation of the idea of sets of 'identical' things, another symmetry operation like intersubjectivity).

Language requires rules, time-saving abstractions that facilitate learning. Infants intuitively grasp grammatical rules even when their vocabulary is minimal. The rules contain learning: many people have contributed to mathematics, just as to all languages, and the education of children consists not just of learning the rules but of connecting them with the modes of life that make them intelligible, with experiences and applications.

Rule following is not a private experience. The sand is a pile or not, not privately but socially; its 'pileness' is not a direct experience but a token we use to relate to it, functionally, for purposes. An intersubjective rule of a widespread community seems like an 'objective' rule, one that's 'out there'. But even though rule-following, to be communicable, cannot be private, it still begins with subjective experiences: the ones that can be shared, transferred, abstracted.

The Dunbar Number points to humans developing their neocortex primarily for understanding the motivations of others, not for problem solving; that is, for formulating our observations of community rule-following in the form 'if I were them'. The massive specialisation in this blinds us to the complexity involved.

Rule-creativity takes place in the mental 'global workspace': we become conscious of the exceptions, the rule-failures, the challenges, and integrate them there through our sense of agency, in relation to our understanding of ourselves in relation to others (well-understood familiar tasks go with the 'flow state' of unconscious competence).

That creative place, the global workspace of conscious working memory, is the intersubjective experience that allows rule-creativity to propagate: private playing with the rule, then public framing of the change into a regularised, structured abstraction (to be communicable) such as the joke, the new theorem, the sudden insight. Because the rules aren't fixed like objective rules, but are really intersubjective, affirmed or transformed by the enactment of the community, in modes of life.

So language rules contain intelligence, just like DNA: they distil useful shorthands about our environment, as part of our extended phenotype, in a memesphere. The rules can be followed by a computer. But rule-creativity requires a global workspace situated intersubjectively, which can watch modes of life alter, update, transmute and above all play with meanings, in relation to the agent's self, which interprets and conceptualises in relation to the other agents whose modes of life the rules were drawn from.

We need a structural shift for AI to 'understand' Chinese: not just rule following, but the pursuit of understanding of intentions, of which language is only the expression.

CriglCragl
  • 19,444
  • 4
  • 23
  • 65
  • When you talk about a program not being able to achieve understanding by rule-following in the last paragraph, are you just talking about high-level rules a la "symbolic AI", or are you including any rules whatsoever, including things like the computational rules governing simulated neurons in some type of artificial neural net? Searle intended his argument to apply to both types, see his comments on the "brain simulator reply" on [p. 420 of his 1980 paper](https://www.law.upenn.edu/live/files/3413-searle-j-minds-brains-and-programs-1980pdf). – Hypnosifl Nov 14 '21 at 16:04
  • @Hypnosifl: Either system is *capable of understanding*, but not by rule following alone, there are also structural requirements - engagement with being an agent among other agents, & seeking to understand their intentions, intersubjectively. This model helps account for cephalopod intelligence even though they are mostly solitary, because they try to understand a range of predator & prey strategies, & have been in an arms-race doing so since before mammals existed – CriglCragl Nov 14 '21 at 17:45
  • Sounds like the 'robot reply' which is also on p. 420--Searle would say an input-output relation with the real world (and other intelligent beings) isn't enough, since a person could carry out all the internal computations between sensory inputs and bodily outputs without the person understanding them. Also, what about something like a giant simulated world including multiple simulated agents that can interact with each other, but where the whole simulation is self-contained and not receiving any input from the 'real world'? Do you think agents can have intersubjective understanding there? – Hypnosifl Nov 14 '21 at 18:03
  • @Hypnosifl: & I'd agree with him, until the intersubjective integration space criteria is met, that allows - rule-creativity in community. Yes to simulation, absolutely no problem. In fact that's how I expect Artificial General Intelligence to occur. A bunch of Alphazeros interacting, say, plus some kind of evolutionary selection among them. – CriglCragl Nov 14 '21 at 20:34
  • But in his response to the "robot reply", Searle is imagining a scenario where you have a robot that has a body and is capable of interacting with others--in your view would that allow for "intersubjectivity" if its interactions seemed like those of a meaningful agent to people around it, having conversations about things going on around them, making plans and doing stuff together in the world? Searle thinks such a robot still wouldn't have any meaningful understanding if there was a humanlike agent inside the robot doing the computations by hand, and that agent didn't have such understanding. – Hypnosifl Nov 14 '21 at 23:16
  • @Hypnosifl: As I said, having a body or not isn't important. It's about trying to understand motivations of other self-similar agents in a dynamic way, in community with them - ie also seeking to be understood – CriglCragl Nov 15 '21 at 09:55
  • Having a body obviously isn't *sufficient* for intersubjective understanding, but I think to develop such understanding, agents need to have some kind of shared environment that they can all sense and interact with, so that their words can have shared referents--virtual bodies in a virtual environment would be fine, but I think one needs *some* kind of body. Do you disagree? – Hypnosifl Nov 15 '21 at 13:42
  • Either way, if you want to address Searle's argument you need to address the question of whether there could be a program (perhaps one controlling a body, and taking sensory input from that body) that could exhibit the *behaviors* of "trying to understand motivations of other self-similar agents in a dynamic way, in community with them", and whether you think those behaviors would be sufficient to conclude it really did have inner understanding. And if so, what's your response to Searle's argument that any program, including that one, could be hand-calculated by a person without understanding? – Hypnosifl Nov 15 '21 at 13:45
  • @Hypnosifl: Yes, shared environment - there needs to be a way in which agents reflect & contain pictures of each other, like the jewels in Indra's net. Your ATP cycle could be 'hand cranked' mechanically, & not provide understanding, eg for a coma patient. I feel my case is fully made already, understanding is related to participating in community creativity, through intersubjective interactions. This is an experimentally testable hypothesis, & I expect it to get tested. It can account for a range of known data, & fits with the Private Language argument & the strange-loop picture. – CriglCragl Nov 15 '21 at 17:53
  • What can be experimentally tested is the behavioral aspects of intersubjectivity, but there's no way to test whether something has a subjective inner consciousness or is a [philosophical zombie](https://plato.stanford.edu/entries/zombies/) since both have identical behavior. If there's an algorithm we can put in a robot such that it will show these behaviors, Searle would argue it'd be a zombie in the case where the algorithm was carried out by hand by a human-like being w/ no understanding of what the robot was doing/seeing. I don't think the argument is correct but you haven't addressed it. – Hypnosifl Nov 15 '21 at 18:05
  • @Hypnosifl: I don't see philosophical zombies as about stating they exist or we can't know others are them, but methodologically questioning how we know other humans have whatever this property is, how we know we ourselves have it. I see the 'inner' nature as a red-herring, that's the beetle-in-a-box. Again my point is understanding is structural, emergent, a type of organisation involving agents, & the algorithms are irrelevant. Any formulation of rules, will fail, will become outdated, will have contradictions, & contextual cues. Language is emergent in networks, not recursively enumerable. – CriglCragl Nov 15 '21 at 20:35
  • Would you say you favor some version of [eliminative materialism](https://plato.stanford.edu/entries/materialism-eliminative/), where there is nothing more to consciousness than the physical behaviors and computations associated with it, and similarly that semantic understanding *is just* the intersubjective behaviors that show that two agents functionally understand each other in communication? If so, that would be one perspective from which one could counter Searle's argument, you could put some explicit argument of this sort in your answer. – Hypnosifl Nov 15 '21 at 23:10
-1

The set of expressions a symbol system can produce can be arbitrarily large, but the rules for manipulating its symbols are generally quite limited. For example, arithmetic may have an infinite number of possible equations, but the rules for constructing a valid equation are few enough in number that most people master them as children:

  • a term defined as a sequence of symbols that begins and ends with operands, sometimes encased in parentheses to show precedence
  • a term sequence defined as a set of terms separated — sometimes implicitly, as in 12x or 2y — by arithmetic operators
  • an equation defined as two term sequences separated by an equals sign

Those three rules define any valid arithmetic equation. They do not say whether an equation is true or false, much less whether an equation has any practical meaning or use. But they do tell us that (say):

  • z4m = 14c 12 12

... is improperly formed. And yes, I know we could adopt some of that weirdness as rules in other systems (e.g., 14c as notation for carbon-14 in chemistry, or z4m as a valid variable name in a programming language), but that's beside the point. In arithmetic it's nonsense, and easily identified as such.
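
As a rough illustration (my own sketch, not the answer's method; it ignores parentheses and operator precedence), those three formation rules can be approximated by a short regular expression that accepts the well-formed examples and rejects the malformed one:

```python
import re

# Operand: a number, a number with an implicitly multiplied variable (12x, 2y), or a bare variable.
OPERAND = r"(?:\d+[a-z]?|[a-z])"
# Term sequence: operands separated by arithmetic operators.
TERMS = rf"{OPERAND}(?:\s*[+\-*/]\s*{OPERAND})*"
# Equation: two term sequences separated by an equals sign.
EQUATION = re.compile(rf"^\s*{TERMS}\s*=\s*{TERMS}\s*$")

def well_formed(equation):
    return bool(EQUATION.match(equation))

print(well_formed("2y + 12x = 14"))    # True: operands joined by operators on both sides
print(well_formed("z4m = 14c 12 12"))  # False: 'z4m' is not an operand, and '14c 12 12'
                                       # strings operands together with no operators
```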

The same holds true of language. Semantics is deeply complex and nuanced, but syntax is pretty simple. Our computers all have spell-checkers and grammar checkers that don't require deep-learning neural nets to work; they are simple algorithms that apply standardized rules and check a short list of irregular forms. There are ambiguities in meaning that make sentences confusing, but whether a sentence is syntactically well-formed is not ambiguous. For instance, the classic test phrase:

  • Time flies like an arrow, but fruit flies like a banana

... will pass through your grammar-checker perfectly fine. The confusion comes because there are several different ways of organizing the subject-verb-object structure of the sentence, all of which are grammatically correct, and we have to choose between them. There's a whole lot of language theory behind this, but it's mostly found in linguistics or the social sciences, not in philosophy proper. It's a bit too technical for most philosophers to bother with.

Ted Wrigley
  • 17,769
  • 2
  • 20
  • 51
  • 1
    Isn't this simplifying the task for the man in the Chinese room by a lot? He is not only supposed to create grammatically correct Chinese sentences (this can be done fairly well with a small set of rules in different languages, afaik Chinese is especially easy) but he is supposed to give meaningful answers to questions indistinguishable from a Chinese speaking human. – quarague Nov 09 '21 at 19:59