
Here's "Complete argument" from Wikipedia:

(A1) "Programs are formal (syntactic)." A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it doesn't know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.

(A2) "Minds have mental contents (semantics)." Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.

(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics." This is what the Chinese room thought experiment is intended to prove: the Chinese room has syntax (because there is a man in there moving symbols around). The Chinese room has no semantics (because, according to Searle, there is no one or nothing in the room that understands what the symbols mean). Therefore, having syntax is not enough to generate semantics. Searle posits that these lead directly to this conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds. This should follow without controversy from the first three: Programs don't have semantics. Programs have only syntax, and syntax is insufficient for semantics. Every mind has semantics. Therefore no programs are minds.
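The inference from (A1)–(A3) to (C1) is indeed formally valid. As a sketch (my own rendering of the argument's shape, not Searle's notation — the predicate names are mine), the step can be checked mechanically:

```lean
-- Toy formalization of the argument's shape (my rendering, not Searle's).
-- A1: programs are purely syntactic; A2: minds have semantics;
-- A3: syntax alone never yields semantics. C1 follows by pure logic.
theorem C1 {Entity : Type}
    (Program Syntactic Semantic Mind : Entity → Prop)
    (A1 : ∀ e, Program e → Syntactic e)
    (A2 : ∀ e, Mind e → Semantic e)
    (A3 : ∀ e, Syntactic e → ¬ Semantic e) :
    ∀ e, Program e → ¬ Mind e :=
  fun e hp hm => A3 e (A1 e hp) (A2 e hm)
```

So the validity of the step is not at issue; the substitution worry below is about which entities the premises apply to.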

Substituting programs with human bodies and symbols with fundamental particles doesn't seem to change the argument. Therefore no humans have minds.

If you find this too extreme, let's recall what the Chinese Room argument does:

  1. Here's a room that passes itself off as a fluent Chinese speaker.
  2. The room consists of parts (A, B, C, D, E, ...), one of which is capable of repeated conditional actions.
  3. Look at part A. It doesn't (even try to) understand Chinese.
  4. Repeat for all the remaining parts and conclude that there's no understanding happening.

My question is: why not apply the same reasoning to the actual human Chinese speaker? Imagine the human dissected into small enough parts, then notice that none of the pieces understands anything — mindless electro-chemical reactions only.

Does Wikipedia misrepresent what Searle actually means? I gather that Searle's stance is that a human-like mind requires some uncomputable process, but I don't understand how that follows from his argument.

Clarification

Thanks for the comments, but I apparently wasn't clear enough that I'm not looking to argue around the topic. What I'm looking for is a definitive reference showing what Searle has publicly said or written about applying the argument to humans.

The reason is that I've read that Searle rejects virtual-world-style explanations, which only makes sense if you require an uncomputable mind. The argument for an uncomputable mind must somehow follow from Searle's arguments, but from what I've read I can't see how to reach the same conclusion (unless you implicitly include the conclusion in the premises, as @causative said).

  • Yes, you are right. I suspect that the Chinese Room argument only appeals to people who already believe the mind has some irreducible part to do the understanding - i.e. a soul. They observe the Chinese Room lacks an irreducible part that understands, so they say it cannot understand at all. But they're just assuming the conclusion they wish to prove (i.e. that understanding requires an irreducible part). – causative Jul 06 '21 at 17:55
  • You're right about Searle. He does not go far enough. If the Chinese room argument is valid, it applies to any 3rd-person mechanistic system, whether it's a computer or the laws of physics. I personally think the CR argument is a valid argument, and therefore demonstrates that attempts to reduce understanding to 3rd-person mechanistic systems will always fail. For the reasons you describe, imo, a conscious being cannot be purely described by 3rd-person physical law. It's not that surprising to me... why should mental events be reducible to physical events? – Ameet Sharma Jul 06 '21 at 18:29
  • To address your questions: 1) It seems to me the Wikipedia passage accurately reflects Searle's argument. 2) You say "Substituting programs with human bodies and symbols with fundamental particles doesn't seem to change the argument"; this is wrong. Substituting humans does change things drastically; in fact Searle wants to argue humans can have minds because we have semantics (meaning to our thoughts). 3) You couldn't apply the CR argument to a Chinese human speaker because (Searle would argue) the person does have semantics, whereas the room doesn't. (1/2) – Fox Mulder Jul 06 '21 at 18:36
  • The distinction you are referring to, that the individual constituent pieces don't have semantics but that semantics is found at a higher level (or that the parts of the CR don't understand Chinese, but the whole system does), sometimes goes by the name "emergent property", and there's lots of literature there. At the end of the day, it seems Searle doesn't believe semantics is an emergent property, but that it requires another component which he argues simply cannot come from syntax alone. As far as I know, he doesn't specify the nature of this component, or who can have it, etc. (2/2) – Fox Mulder Jul 06 '21 at 18:40
  • This question seems to be on shaky ground, with no disrespect to the OP. Firstly, without a clear definition of what understanding means in the human sense it’s meaningless to ask whether a program can emulate it. Secondly, it’s evident that electronic computers can understand natural language in the sense that I can say “Alexa, order me a pizza” and one turns up at the door. If one were to argue that this doesn’t really correspond to understanding then one would have to make a case as to why not. – Frog Jul 06 '21 at 20:34
  • @Frog, "understanding" is difficult or impossible to define. It's like trying to define pain or knowledge. We know somewhat when we understand something or don't. It comes from a first person experience. The point of the CR argument is to demonstrate that the actions of a computer do not amount to what we usually mean by understanding. – Ameet Sharma Jul 06 '21 at 21:16
  • @Frog my question is "what we know about Searle's opinion on applicability of his argument to humans?" What's shaky about such a question? – Dmitri Urbanowicz Jul 07 '21 at 05:39
  • In response to both of the above, without any meaningful definition of what understanding is, firstly a claim that a computer program can’t do it is a claim made without evidence, which can therefore be struck down without evidence. Secondly, since Searle’s opinion is centred around the meaning of understanding, any question about it will necessarily come back around to the absence of a definition. – Frog Jul 07 '21 at 07:04
  • @Frog ironically, the question about what Searle has said publicly is purely syntactical. Questions about his opinion will follow, but having just a formal reference will settle this SE question. – Dmitri Urbanowicz Jul 07 '21 at 07:38
  • @frog: i'd say 'understand' means integrated into a cognitive map that includes the subjectivity as an element, so as to be able to predict outcomes of different ways of interacting with the thing understood. Made into part of a strange loop, in short. – CriglCragl Jul 08 '21 at 12:28
  • @CriglCragl - that’s probably the best definition I’ve seen, but in practice it says little about how to determine whether understanding is taking place in any particular situation. Indeed, any formal definition is, I suggest, a recipe for how to implement understanding in a conventional computer. – Frog Jul 08 '21 at 19:57
  • Maybe we need a sort of Turing test that computers can use to prove they have understanding. Any computer that spontaneously takes it up, wins. Understanding is proven by actions that cannot otherwise be explained. – Scott Rowe Aug 31 '22 at 01:39

4 Answers


You cannot substitute programs with the human body. A program is a set of rules to be executed, in parallel or in series, on data presented in parallel or in series. Both are implemented on physical matter. The process of the program acting on the data (whatever kind of computer is used) is inherently different from the processes taking place in the brain. In the brain there is no split between a program and data; it's all included in one non-separable process. The program and the data in a computer can be separated a million miles from each other, although this will drastically reduce its computing speed.

So what does this mean? It means that meaning can't exist for a computer. Only in the brain is there no separation between program and data, which is necessary for true understanding. Why does true understanding arise only if these are not separated? Because they are not separated in the real world either. Understanding the real world requires similarity. Thinking about a falling stone shows similarity with a real falling stone. One can in fact see a falling stone when looking at working neurons representing the falling stone.

You write:

My question is: why not apply the same reasoning to the actual human Chinese speaker? Imagine the human dissected into small enough parts, then notice that none of the pieces understands anything — mindless electro-chemical reactions only.

It is here that you make a basic error. Brain processes can't be dissected into smaller units the way a computer's can. The massive, parallel, coordinated neural firings are not directed by an outside program, as in a computer. The program lies in the pathways of information themselves. The connection strengths between neurons determine this flow, not a set of instructions stored somewhere in the brain separate from the information. There are countless different patterns possible for the same neurons, enough to represent all physical patterns in the real world. Of course single neurons don't know anything, but when they are part of a bigger process they contribute to understanding. A computer will never be able to understand in this way, by the very fact that computers compute rather than represent externals in a faithful way. Zeros and ones flowing in a circuit controlled by an external program is just not the way real physical processes unfold. Real physical processes unfold due to internal programs (the laws of Nature), just as processes in the brain do.

  • I'm pretty sure that by program the Wikipedia article means a running process, not a static thing (which can't do anything). As such, a process is not necessarily restricted by a predefined set of rules (other than the requirement of being computable), because it can evolve interactively. But just as a computer process is governed by some rules, so are the processes inside human bodies — by the laws of the universe. The distinction between program, data, process and running computer is also artificial and exists purely for engineering convenience. – Dmitri Urbanowicz Jul 06 '21 at 13:42
  • So, what's the inherent difference between programs and fundamental laws of nature? – Dmitri Urbanowicz Jul 06 '21 at 13:46
  • @DmitryUrbanowicz A program is written in the stuff of Nature. The laws of Nature are not written anywhere (except in the physics books or in our minds, that is). – Deschele Schilder Jul 06 '21 at 14:07
  • As I said, there are other ways to bootstrap a computation process which do not require a program to be written. The important thing is that the laws are fixed, not that we have written them down. And that they don't care about human meaning. – Dmitri Urbanowicz Jul 06 '21 at 14:22
  • @DmitryUrbanowicz If the computation process doesn't need a program then there is nothing computed. Every computation requires a physical implementation. Can you give an example of a non-programmed computation? I don't mean, say, a falling stone that calculates its path. – Deschele Schilder Jul 06 '21 at 15:05
  • @DescheleSchilder According to your answer, me using pen and paper and a printed out instruction set is a program. And to a third party behind a wall, the outputs of an electronic computer and me with pen and paper give the exact same answer. Searle's point is that only minds have meaning. You did not give an account of meaning behind the actions. – J Kusin Jul 06 '21 at 15:11
  • @DescheleSchilder I'm sorry, but your last response doesn't make sense to me. Why do you think a prewritten program is required for interpretation? Who said anything about evading physical implementation? – Dmitri Urbanowicz Jul 06 '21 at 15:17
  • @JKusin You using pen and paper is just you using pen and paper. That is something different than a program. A program is a set of instructions to be serial or parallel applied to serial or parallel data. The program is physically implemented in a computer chip. There is nothing like this implemented in the physical brain. There are no chips in the brain so to speak. The hardware of the brain is such that it can function without a program. Of course the processes must conform to physical law. – Deschele Schilder Jul 06 '21 at 15:22
  • An example of a computation is Game of Life. You may think that cell rules are the program, but there are many possible rules that allow to implement arbitrary computations (thus, not much different in spirit from the laws of nature). You may think that giving initial state is a program, but you could say the same about human's body growth. Also, it is not hard to imagine an adaptive computation that is a result of interaction with a user, who didn't write a single line of code. – Dmitri Urbanowicz Jul 06 '21 at 15:23
  • @ DmitriUrbanowicz But virtually all external processes can be represented by a process in the brain (a stone falling, a bird flying, in fact a whole dreamworld). When you programmize this feature is lost. – Deschele Schilder Jul 06 '21 at 15:24
  • @DescheleSchilder this is either ungrounded or outright false. – Dmitri Urbanowicz Jul 06 '21 at 15:27
  • @DescheleSchilder "A program is a set of instructions" -> so is a paper with instructions. A program is a set of instructions (as you say) for some physical object to carry out. Why isn't me reading a paper with instructions carrying out a program? – J Kusin Jul 06 '21 at 15:30
  • @JKusin There is no denying that you will *act* like a program when you order whatever stuff according to the written instructions. But that doesn't make you a program yourself. You can act like a computer, but a computer can't act like you. – Deschele Schilder Jul 06 '21 at 15:48
  • "The process of the program acting on the data is inherently different from the processes finding place in the brain" Is it? What if the architecture is mimicked? Is there a reason to think the brain can't be simulated by a Turing machine? Aren't sense-data, exactly, data? Considering https://en.wikipedia.org/wiki/Human_Brain_Project it seems these questions are answerable, but not answered. Yet you seem sure of the answers.. – CriglCragl Jul 08 '21 at 15:39
  • "Can you give an example of non programmed computation?" How about the adaptation of https://en.wikipedia.org/wiki/AlphaZero to a specific game? It learns. – CriglCragl Jul 14 '21 at 12:21

I think you make an excellent point. We have an internal experience of semantics, but judging other humans from the outside, aren't they 'Chinese rooms'? That is exactly the Philosophical Zombies issue. So it's absurd to insist it can be dismissed as a non-issue, that you 'just can't' treat the human body from the outside because we somehow know that we don't follow rules. The rules may be complex, and may even have elements of randomness, but they are there.

How do we 'know' other humans really have experiences like ourselves, and aren't just simulating them? Because linguistic intelligence arises from mirror neurons: if I do this action I mean thus, so when you do this action, a mind-to-mind transfer of intentionality happens. We know from the Dunbar number that the human neocortex didn't evolve for problem-solving, but to navigate our social landscape. Language is the product of that heightened intersubjectivity. And with it we have built a distributed collaborative intelligence, as we inherit mental tools and pass them on working better. Salience landscapes that order experience in useful ways are structured by language, and embody knowledge.

There's a great Feynman lecture on computer heuristics. He frames everything computers do as processes of sorting. One perspective is that minds explore topological transformations, to explore possibility or phase space - this is how I interpret Universal Constructor theory.

causative
CriglCragl
  • "Linguistic intelligence arises from mirror neurons"? https://en.wikipedia.org/wiki/Mirror_neuron "To date, no widely accepted neural or computational models have been put forward to describe how mirror neuron activity supports cognitive functions." – causative Jul 06 '21 at 20:16
  • @causative: Wikipedia is being excessively cautious there, as it is on many science topics. Working from the connectome of C. elegans, self-other distinction, and proprioception, we can see how this works. The point, regardless, is that we extend the minds of individual humans by treating others as like us. Intersubjectivity is how we participate in the experiences of others, and invite them into our minds. – CriglCragl Jul 06 '21 at 21:22
  • Well, it is true that we communicate that way, but I'd caution against ascribing too much importance to *linguistic* intelligence as opposed to other kinds. A person can be conscious - in fact more conscious - when meditating and clearing away verbal thoughts. – causative Jul 07 '21 at 06:39
  • @causative: Absolutely. And when discussing this I am careful to point to solitary intelligences like cephalopods, and relatively solitary like bears & corvids. But the neocortex seeming to be focused on reading the intentions of others, suggests the leap in human intelligence is a social one. Plus a single human is not dramatically more capable than other animals, it is the memesphere, rapid sharing of learning, that does that. – CriglCragl Jul 07 '21 at 07:14
  • I might add that people are not Chinese rooms. That is the only reason I downvote this question. The very assumption is wrong. – Deschele Schilder Jul 08 '21 at 13:04
  • @DescheleSchilder: I did not claim that. I only pointed to the https://en.wikipedia.org/wiki/Philosophical_zombie https://plato.stanford.edu/entries/zombies/ to say, "You cannot substitute programs with the human body" doesn't wash. That is not advocating that people are or aren't zombies, but saying there is philosophical work to do to decide - you can't say people are just special and we don't have to consider the issue. – CriglCragl Jul 08 '21 at 15:33
  • Ah yes. My mistake in reading. But still, bodies are not programs. These may evolve according to physical laws, but that's another issue. – Deschele Schilder Jul 08 '21 at 15:51
  • All physical processes that are simulated in a computer (by a program acting on data) do not resemble the physical processes themselves. Only the non-computational processes in the brain can bear this resemblance. There is a direct one-to-one correspondence. The brain is a physical process itself and cannot correspond one-to-one with a computer process. People can correspond to one another, just like animals with animals and people with animals. – Deschele Schilder Jul 08 '21 at 16:08
  • "simulated physical processes are not resembling the physical processes themselves" Why would that be impossible? If we understand them properly, physical processes like hormones and proprioception should be as simulatable as neurons. "The brain is a physical process itself and cannot correspond one to one with a computer process" Why? Seems like superstition. – CriglCragl Jul 08 '21 at 18:01

Quick Summary

No, humans are not obviously just a neuro-mechanical structure (or algorithmic structure, both have been proposed). That is explicitly the assumption of reductionism.

But yes, Searle's argument is as effective against neural reductionism as it is against the algorithmic reductionism it was originally aimed at.

Elaboration

Note this is a common realization. Searle's thought experiment, along with Mary the Color Scientist, What Is It Like to Be a Bat?, the Inverted Spectrum, etc., has convinced the majority of physicalist philosophers to abandon reductive physicalism with respect to consciousness, and instead adopt a non-reductive physicalism.

And this has not just happened in philosophy of mind -- science has mostly abandoned wholesale reductionism as a programme (reduction is still highly useful, but it must be coupled with holism and emergence, as they are also useful strategies for characterizing the world). See section 5 of this SEP article https://plato.stanford.edu/entries/scientific-reduction

Dcleve

It only applies if you assume brains are mechanical and their operations can be reduced to a syntactical program.

If you make no such assumption, then the argument forces you to admit one of two things:

  • The brain is not what gives rise to consciousness or intelligence
  • The brain gives rise to consciousness and intelligence using non-mechanical physical principles.

The latter is John Searle's interpretation (I'm a bit fuzzy on whether he thinks intelligence is mechanical or not; he seems to think it's only consciousness itself that isn't mechanical).

Gabriel