18

There are many refutations of John Searle's Chinese Room argument against Strong AI. But they seem to be addressing the structure of the thought experiment itself, as opposed to the underlying epistemic principle that it is trying to illustrate.

This principle is "syntax is not semantics" (See these lectures by John Searle): At the end of the day, computer software, even the most advanced AI conceivable, manipulates symbols according to a set of syntactic rules, regardless of their meaning.

Anybody who has studied formal logic knows that rules like De Morgan's laws or the laws of idempotency (e.g. A ∧ A = A) are independent of the meaning of the symbols being processed.
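To make this concrete, here is a minimal sketch (purely illustrative; the tuple encoding and the rewrite function are arbitrary choices, not taken from any of the cited sources) of a rewriter that applies De Morgan's law by matching only the shape of an expression, never consulting what the atoms stand for:

```python
# A purely syntactic rewriter: formulas are nested tuples of uninterpreted symbols.
# De Morgan's law  not(A and B) -> (not A) or (not B)  is applied by matching the
# *shape* of the expression; the atoms are never given any meaning.

def de_morgan(expr):
    """Rewrite ('not', ('and', p, q)) as ('or', ('not', p), ('not', q)), recursively."""
    if isinstance(expr, tuple) and expr[0] == 'not':
        inner = expr[1]
        if isinstance(inner, tuple) and inner[0] == 'and':
            _, p, q = inner
            return ('or', ('not', de_morgan(p)), ('not', de_morgan(q)))
    if isinstance(expr, tuple):
        return (expr[0],) + tuple(de_morgan(e) for e in expr[1:])
    return expr

# The atoms could be English words, Chinese characters, or gibberish;
# the rule applies identically because only the structure matters.
print(de_morgan(('not', ('and', 'raining', 'cold'))))  # ('or', ('not', 'raining'), ('not', 'cold'))
print(de_morgan(('not', ('and', '雨', '冷'))))           # ('or', ('not', '雨'), ('not', '冷'))
```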

This idea, that syntax is independent of semantics, and that therefore a computer can function perfectly without ever knowing the meaning of what it is computing, seems like a much stronger argument against AI, and against mind-body functionalism in general, than Searle's original Chinese Room argument.

What are the main refutations, advanced by proponents of functionalism and strong AI, specifically of the "syntax is not semantics" argument?

Conifold
Alexander S King
  • Doesn't this answer do this: http://philosophy.stackexchange.com/a/1109/3733 ? -- If you can construct a Chinese room, then you've already reduced semantics to syntax by constructing the universal Chinese ruleset. – Dave May 20 '16 at 17:27
  • @Dave I am looking specifically for refutations of the syntax is not semantics argument, not the other refutations like the systems refutation or the robot refutation. – Alexander S King May 20 '16 at 17:33
  • Since you mention computer science in this context, are you familiar with domain theory and denotational semantics, e.g., https://en.wikipedia.org/wiki/Denotational_semantics For example, the set of all functions {N-->N} comprises one possible domain of meanings (semantics), and denotational semantics is a formal, rigorous way to map the syntax of a computer program (in a given language) to its meaning as one of these functions. You do have to start off by providing meanings for the simplest (Backus-Naur form) syntax, and then denotational semantics compositionally gives more complex meanings –  May 21 '16 at 10:59
  • Searle's argument fails the same way every other argument against AI that I've seen fails: Humans are conscious, and unless you're a mystic, you must accept that we follow the same rules as computers. – Ask About Monica Jun 13 '16 at 18:08
  • It is worth pointing out that Searle's Chinese Room argument is a refutation of the computational theory of mind and he posits what he identifies as "strong AI" as an example of behaviorism, "[specifically the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition.](http://cogprints.org/7150/1/10.1.1.83.5248.pdf)" – MmmHmm Mar 07 '17 at 21:28
  • The statement 'syntax is not semantics' is a proposition, not an argument. The Chinese Room thought experiment is not an argument for this proposition - that would be circular, as his insistence that the room's operator is the only possible candidate for the entity that understands the questions tacitly depends on it. Therefore, before the argument can be refuted, we need to see an argument for the proposition, that is more than just reducible to a statement of what Searle does and does not consider to be plausible. – sdenham Apr 05 '18 at 22:25
  • @sdenham OK, proposed argument: suppose the premiss "syntax is not semantics" is false – and that syntax *is* semantics. If syntax is sufficient for semantics, then presumably having the syntax means we also have the semantics. But if so, why did we need the Rosetta Stone? Moreover, why do we need to *learn* foreign languages? Having the shapes of the foreign words would mean, by virtue of possessing the shapes (the syntax), that we would also possess their meanings. But we don't also possess their meanings. Hence, syntax is not semantics. Comments? – Roddus Aug 15 '18 at 09:24
  • @Roddus If it is so, then your argument has no meaning, and so may be dismissed. – sdenham Aug 31 '18 at 17:04
  • @sdenham Searle does argue that syntax is insufficient for semantics. He argues that meanings are observer relative - some observer interprets the symbol's shape (gives it meaning). The shape itself is literally meaningless. E.g., in his 1990 Scientific American article, *Is the Brain's Mind a Computer Program?*, and his New York Review of Books article, *What Your Computer Can't Know*. – Roddus Sep 02 '18 at 07:11

5 Answers

15

Wittgenstein in his intermediate period provided a response, before the age of AI research and Searle's objections. In a nutshell: semantics is another syntax. Words only mean as role players in a linguistic calculus, and their meaning reduces to the collection of rules governing their use in the calculus. Of course, he was thinking of mathematics and language at large rather than computers. Here is Wittgenstein on metamathematics as "semantics" of mathematics:

"What Hilbert does is mathematics and not metamathematics. It is another calculus just like any other. I can play with chessmen, according to certain rules. But I can also invent a game in which I play with the rules themselves. The pieces of my game are now the rules of chess, and the rules of the game are, say, the laws of logic. In that case I have yet another game and not a metagame... What is known as the ‘theory of chess’ isn’t a theory describing something, it’s a kind of geometry. It is of course in its turn a calculus and not a theory".

What Wittgenstein came to appreciate later, in Philosophical Investigations, is that realistic "language games" are not reducible to calculi; they are far too nuanced for that. But that did not mean reinstatement of "intentionality" and "meanings" as entities; it meant that even the "rules" are not entities that can be spelled out. "Meaning" is acquired in activity, linguistic practice. In a very different form and by a very different route the same conclusion was arrived at by others, who came to play an unexpectedly prominent role in AI research. Dreyfus, the perennial critic of what computers can do since the 1960s, gives a very interesting account in Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian:

"Using Heidegger as a guide, I began to look for signs that the whole AI research program was degenerating. I was particularly struck by the fact that, among other troubles, researchers were running up against the problem of representing significance and relevance – a problem that Heidegger saw was implicit in Descartes’ understanding of the world as a set of meaningless facts to which the mind assigned what Descartes called values and John Searle now calls function predicates. But, Heidegger warned, values are just more meaningless facts... One version of this relevance problem is called the frame problem. If the computer is running a representation of the current state of the world and something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which might have to be updated?

Merleau-Ponty’s work, on the contrary, offers a nonrepresentational account of the way the body and the world are coupled that suggests a way of avoiding the frame problem. According to Merleau-Ponty, as an agent acquires skills, those skills are “stored”, not as representations in the mind, but as a bodily readiness to respond to the solicitations of situations in the world. What the learner acquires through experience is not represented at all but is presented to the learner as more and more finely discriminated situations..."

Agre, Brooks, Wheeler, Winograd and other big names in AI eventually came to assimilate what Dreyfus was selling on behalf of Heidegger and Merleau-Ponty. This is now called "embodied-embedded cognition", and "Heideggerian AI" is a term of art too. Even Dennett's Cog incorporated some of these insights, although that did not save it. So Searle is right that semantics is not syntax, but he is unlikely to like the conclusions AI researchers drew from it. Namely, that meanings are not representational entities mysteriously connected to the real world, as Descartes would have it, and intentionality is not a special goo exuded by organic systems, as Searle would have it, but dynamic effects emerging in the process of interacting with the environment, including other actors. Semantics is not syntax because meaning and intentionality are prerogatives of active players. A computer cannot be such a player; it has to be an AI robot of some sort. In a way, this attitude shows in more recent versions of the Systems and Robot replies to the Chinese room.

Whether this "embodied-embedded intentionality" works out remains to be seen. As Dreyfus's title suggests, it is a work in progress, and he charges that all existing implementations are still too representational.

Conifold
  • Had you not explained it, I would have put "Heideggerian AI" in the same basket as "Quantum Hermeneutics". – Alexander S King May 20 '16 at 20:37
  • Hmmm: "Computer can not be such a player, it has to be an AI robot of some sort" A player (at a political level) can be a proxy for a coalition. At some level, we are programmed proxies for our selfish genes. And we are the primary players. So why would proxies for groups of humans not also qualify as players? Why is intention invested in something else and deployed independently no longer intention? –  May 20 '16 at 21:35
  • @Alexander That was my initial reaction too. I was incredulous even reading through Dreyfus until he quoted enough AI people I knew of mentioning Heidegger by name, and later looked through links on "embodied artificial intelligence". But I think Dreyfus is one of those "analytic translators" you once asked about, I seriously doubt that even philosophically inclined AI researchers could connect Heidegger to what they were doing. Never occurred to me before that either. – Conifold May 20 '16 at 21:38
  • Winograd's claim to be motivated by Heidegger arises in http://www.amazon.com/Understanding-Computers-Cognition-Foundation-Design/dp/0201112973/ref=asap_bc?ie=UTF8 which is an undergrad text normal folks should be able to read, if you want to see how genuine it is. –  May 20 '16 at 21:40
  • @jobermark It may well be that computers deploy human intentions the same way humans deploy "intentions" of their genes, the difference is that computers do not play. Their repertoire of moves is too poor, they are too inert and static, and they depend on their "deployers" too much in interacting with the environment for any separate "intelligence" to manifest. Perhaps, with enough play you need no pre-existing intentionality at all, borrowed or otherwise, it emerges. Dennett complained that with Chinese room Searle picked instead a degenerate case that masks the effect he uses it to deny. – Conifold May 20 '16 at 23:22
  • Good answer. I'm looking forward to reading those references. However, the thing that stands out the most to me is the idea that representation can somehow be avoided. I can't imagine how that could possibly be avoided, because the so-called dynamic effects from interacting with the environment can only be communicated by means of representation. –  May 21 '16 at 00:46
  • @PédeLeão Representational view of ourselves is deeply ingrained. Fortunately, we now have models for avoiding it. Neural network "learns" to recognize a face after being presented with many exemplars. It does not do so by forming a representation of face-in-general, but by adjusting weights regulating its firings. Perhaps this can be *converted* into a representation (we do not know exactly how), and perhaps our brains do so on occasion, but learning and then communication do not require such conversion. Many concluded that representational operation of our minds is also empirically doubtful. – Conifold May 21 '16 at 20:43
  • @Conifold. I understand how neural networks work, and computers in general represent information by means of data similar to the way that words do. It's also assumed that neurons function in a similar manner. For that reason, I don't see how any physical system can avoid representation, because that would involve transferring pure information independent of any digital medium. Thus, the problem of functionalism is that such representation is epistemically isolated from what it represents. –  May 21 '16 at 21:24
  • @PédeLeão Neural networks do not function like digital computers, they can be simulated by them, but unlike them they do not represent anything. Which is why we can not "download" what neural network "knows" into a file, or "upload" digital information into it. For example, neural network can be "taught" to distinguish male and female faces, but even most humans can not produce a representational description of the difference. Knowledge-how (skill, ability) does not require digitizable knowledge-that to function, so acting in and responding to environment does not require representing it. – Conifold May 21 '16 at 23:23
  • @Conifold. I never said anything about digitizable knowledge, but what you are describing is precisely what I would call representation. If you don't like my choice of words, we can speak of the data having some potentially inferential connection with its causes. In the case of the neural network, that connection leads to some programmed response. However, the point I'm making is that the connection is epistemically isolated from the cause. The computer has no ability to draw any inferences, so it responds without "knowing" that it has anything to do with distinguishing faces. –  May 22 '16 at 00:39
  • @PédeLeão I am afraid I do not follow "inferential connection with causes" and "epistemic isolation". You also mentioned information flowing without digital medium, could you explain how this is connected? – Conifold May 22 '16 at 02:09
  • Data has epistemological value in virtue of X (I don't know what word you would find agreeable for X), and this X is epistemically isolated from its causes in the same way that the people in the Chinese room are epistemically isolated from the Chinese message. The data they are processing doesn't provide the means for them to know anything about the message, nor is that knowledge necessary for them to process the information. If a neural network could "know" that it's distinguishing faces, then such isolation wouldn't exist, but there exists no means for such knowledge in a physical system. –  May 22 '16 at 02:29
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/40118/discussion-between-conifold-and-pe-de-leao). – Conifold May 22 '16 at 03:09
  • The 'they don't know how to play' argument is begging the question. Of course they don't *now*, but can they be improved to that degree, or not? That is the question, of hard AI to begin with, right? –  May 23 '16 at 20:30
  • @jobermark It is not an argument, just an observation. Computers that learned how to play I called "some sort of AI robot". I take no position on whether such a thing is achievable. – Conifold May 24 '16 at 01:34
3

The "syntax is not semantics" principle in the Chinese Room Argument (CRA) is based on the relationship between the Searle-computer and the Chinese symbols. Searle correctly characterizes this as a formal symbol processing relationship wherein the Searle-computer manipulates the symbols purely syntactically, according their shapes alone, without doing any subjective interpretation of them. This formal relationship is the linchpin of the CRA and Searle's rebuttal of computationalism (aka, computational functionalism, Strong AI).

Turing machine (TM) theory explains why this "linchpin" is merely a special case, and it exposes the huge gap in reasoning that comes from ignoring the most important part of the picture: the program. For example, the theory highlights this telling discrepancy:

If computers lack internal semantics, then why must the Searle-computer's programs be in English?

Searle never addresses this significant inconsistency in his position.

The Searle-computer is fully programmable, hence it is a universal TM (UTM). Every UTM has a two-part input: (1) a program and (2) a "nominal input" for the program to process. For example, if given program ADD, for addition, and nominal input "3, 4", the Searle-UTM would output "7". Because the digits "0-9" are just formal symbols to the Searle-UTM, they could be encoded as Chinese characters, and the Searle-UTM would still perform addition--just like the CRA. However, the same is not true for the Searle-UTM's other input, the ADD program. If it were written in Chinese, for example, then the Searle-UTM would fail.

Notice that the Searle-UTM can correctly process the Chinese symbols (#2) on a purely formal (syntactic) basis only because it also has a program input (#1) that is actually responsible for determining what to do with them. The program--not the Searle-UTM--determines how the Chinese symbols are actually processed, so the Searle-UTM need only manipulate them formally, acting as the program's vehicle or "middleman".

On the other hand, the Searle-UTM is the only thing responsible for correctly processing the program itself. The Searle-UTM must causally connect the program symbols with the physical entities and processes that they represent: a non-formal process that realizes symbolic representations as specific real events. Thus, the formal symbol processing that is Searle's linchpin is just a consequence of the special relationship a UTM has to its nominal (formal) input, which is mediated by a program input that is processed non-formally by the UTM.
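To see the asymmetry, here is a minimal sketch (an illustrative toy interpreter, not Searle's formulation; the instruction names and the rule table are invented for the example). The machine manipulates the nominal input purely formally, by looking up symbol shapes in a rule table, yet it must already understand the vocabulary of the program it executes:

```python
# Toy "universal machine": it runs a program (instructions it must already understand)
# on nominal input symbols that it manipulates purely formally, via lookup rules.

# Formal rule table: maps pairs of opaque symbols to an output symbol.
# The machine never treats these as numbers; they are just shapes.
RULES = {("3", "4"): "7", ("三", "四"): "七"}

def run(program, tape):
    out, pair = [], None
    for instr in program:
        if instr == "READ_PAIR":          # take two opaque symbols from the input
            pair = (tape.pop(0), tape.pop(0))
        elif instr == "APPLY_RULES":      # look the pair up in the formal rule table
            out.append(RULES[pair])
        else:                             # a program vocabulary the machine lacks fails here
            raise ValueError(f"unknown instruction: {instr!r}")
    return out

ADD = ["READ_PAIR", "APPLY_RULES"]

print(run(ADD, ["3", "4"]))    # ['7']  -- "addition" done by shape-matching alone
print(run(ADD, ["三", "四"]))   # ['七'] -- the nominal input can be re-encoded freely
# run(["读取", "应用规则"], ["3", "4"])  # raises ValueError: the program itself cannot be
#                                        # re-encoded, because the machine must understand it
```

The data can be swapped for Chinese characters without affecting the computation, but the program cannot be, which is the asymmetry the English rule book trades on in the thought experiment.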

"It's the program, stupid!"

Q: What is a program?
A: It is the specification of how some TM works--a kind of blueprint for instantiating a TM that typically is non-universal.

Q: What happens when a UTM runs a program?
A: Two significant TM computations occur: (1) the universal computation instantiates (2) the computation of the program's TM. (The Searle-UTM can only introspect on the first one, his own universal computation, which entails reading the program instructions in English and executing them.)

Q: What, if anything, is happening semantically inside the Chinese Room?
A: We don't know because we don't know how the program works. It is useless to ask the Searle-UTM because he doesn't know either. He doesn't know if his program is doing a Chinese Turing Test or tic-tac-toe. He only knows about his own universal algorithm: "read the program and execute its steps on the nominal input". To know the nature of the computation responsible for the externally observed behavior, the only thing that matters is the program, and it is left unspecified.

Searle completely ignores or dismisses the second TM computation, which arises from the program. Nevertheless, its existence is a mathematical fact, not just some philosophical assertion. It does not depend on anyone's subjective opinions or intuitions. It only requires an objective understanding of how UTMs work. This also explains why the Systems/Virtual-Mind reply has been the most popular kind of CRA rebuttal: http://www.scholarpedia.org/article/Chinese_room_argument#The_systems_reply

Searle's obsessive focus on a quirk in the nature of UTMs is somewhat understandable because UTM-computers and programs are so iconic in our culture. In philosophical discussions, however, it is crucial to focus instead on TM theory itself and on general TMs, not just UTMs. The failure to do so means that the CRA fails miserably as a refutation of computationalism while spawning decades of fruitless debate in the process.

UTMs vs. General TMs

Q: Aren't UTMs "universal" (i.e., representative of all TMs)?
A: While a UTM can instantiate any other TM via its program, a UTM's own internal algorithm and specialized input for this universal programmability is highly specific and not at all representative of TM computation in general. Focusing on this as the CRA does is an unhealthy distraction.

Q: What can a general TM do that a UTM can't?
A: A UTM must always act as a rote machine. It must faithfully ensure that the same given program will function the same way every time. In general, a non-universal TM could change its own behavior over time based on its input-output history.
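A rough way to picture that contrast (an illustrative sketch using simplified Python stand-ins rather than formal TM definitions): a universal machine re-runs the same program identically every time, while a special-purpose machine is free to fold its own input-output history into its next response.

```python
# Illustrative contrast: a "rote" universal machine vs. a special-purpose
# machine whose behavior drifts with its own history.

def rote_utm(program, data):
    """Same program + same data -> the same output on every run."""
    result = data
    for step in program:            # the program fully determines the behavior
        result = step(result)
    return result

class AdaptiveMachine:
    """A non-universal machine that changes its behavior based on what it has seen."""
    def __init__(self):
        self.history = []
    def respond(self, symbol):
        self.history.append(symbol)
        # the answer depends on the whole input-output history, not on a fixed program
        return f"{symbol} (seen {self.history.count(symbol)} time(s))"

double = [lambda x: x * 2]
print(rote_utm(double, 3), rote_utm(double, 3))   # 6 6 -- identical every run

m = AdaptiveMachine()
print(m.respond("hello"))   # hello (seen 1 time(s))
print(m.respond("hello"))   # hello (seen 2 time(s)) -- same input, different output
```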

For a more complete explanation and discussion of this topic, see the articles provided here: http://www.chineseroom.info/

EDIT: A later version of this explanation is here, several reply-levels down: https://www.reddit.com/r/askphilosophy/comments/50igj8/if_you_could_chat_with_john_searle/

Phil_132
  • I found this incoherent, checkmark notwithstanding. – user4894 Jun 12 '16 at 21:50
  • ps -- "The Searle-computer is fully programmable, hence it is a universal TM (UTM)." -- That's just false. The Chinese room is a big lookup table. It's not programmable and it's certainly not a UTM. Nor does the distinction between a TM and a UTM bear on the CRA. TMs and UTMs alike flip bits, that's all they can do. You haven't explained how semantics arises from syntax. – user4894 Jun 12 '16 at 22:02
  • Searle couldn't be more clear about placing himself in the shoes of a computer instantiating a program. _Minds, brains, and programs_: "...show how a human agent could instantiate the program..." ([Searle, 1980](http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=6573580)). "Imagine that I am locked in a room ... with a book of instructions in English for manipulating the symbols (the program). " ([Searle, 2009](http://www.scholarpedia.org/article/Chinese_room_argument)). Programmable computers are physical approximations of UTMs. – Phil_132 Jun 12 '16 at 23:47
  • To refute the CRA, it is sufficient to show the gaping hole in its logic: the failure to account for the 2nd TM computation (not to mention its inconsistency regarding program interpretation). Explaining how semantics arises from syntax would be a different discussion. – Phil_132 Jun 12 '16 at 23:48
  • Right. "A book of instructions." An algorithm. A TM in fact. Perhaps I did not follow your point but a "book of instructions" can only be a TM. Are we in agreement or disagreement here? But TMs do syntax, not semantics. Let me reread your response, maybe there's something in there I missed. "Explaining how semantics arises from syntax would be a different discussion." -- That's the ONLY discussion. That's Searle's entire point!!!! – user4894 Jun 13 '16 at 00:10
  • I'm re-reading. I agree with your first paragraph. The first place you lost me was with the program "written in English." And then claiming Searle somehow doesn't come to terms with that. I don't understand this at all. The program can be in any language whatever. It's best to think of it as a bitstring that encodes the program. After all, that's what programs are. Of course it has an English-language input/output module to communicate with Searle; but the program isn't in English and doesn't need to be in English. Can you help me understand your intention here? – user4894 Jun 13 '16 at 02:31
  • Next point. "The Searle-computer is fully programmable, hence it is a universal TM (UTM)" -- That's not necessarily true. The program that runs the Chinese room may well be a single-purpose custom computer that has its program etched into its chips at the factory and can't do anything else. That would be a TM. Or it might be a programmable general purpose computer, a UTM. Either would get the job done; and there's nothing about the CRA that indicates one way or another whether the algorithm is hardcoded or not. **It makes absolutely no difference to Searle's argument.** I shall keep reading. – user4894 Jun 13 '16 at 02:48
  • Are you trying to say that the intentionality lies with the program? In other words the meaning of the room lies in the program that happens to be loaded in the hardware? If so then at least I can say that I understand your point. However I don't agree with it. Say you have some program. What is a program? It's a string of bits. Say I give you a long string of bits and the manual for the machine code. *You can not tell me what the program is about.* That's because **it's not about anything**. It's just a set of instructions for flipping bits. – user4894 Jun 13 '16 at 03:00
  • How do you make the distinction between the part of the input that is the "program" and the part of the input that is the "data"? Searle would say that all the symbols on the input tape (program and data) are observer relative. – nir Jun 13 '16 at 07:22
  • @nir Are you asking me or the OP? A Turing machine consists of a program, which is a finite set of instructions; and a tape initialized with some data. The instructions act on the tape. This model is well understood and is what we mean by a computation. If you have a program at all, it's understood that the program has instructions along with initial data. Or you can give it new data as it executes, I don't think that makes any difference computationally. However the **meaning** of the Turing machine resides outside the TM, in the mind of the human programmer. TMs have no intrinsic meaning. – user4894 Jun 14 '16 at 03:18
  • @nir Any given UTM definition will specify how the TM-program and input data are distinguished, e.g., via a separator symbol. I think I agree that computations are observer relative--like triangles, really really complicated triangles. Could different UTM definitions be overlaid on the same physical system? Probably. – Phil_132 Jun 14 '16 at 03:34
  • @user4894 Regarding English. What Searle DOES say: (1) "Computers can **never** understand their input. PROOF: I could be a programmable computer, processing Chinese input, but I would never understand it!" What Searle DOES NOT say: (2) "Programmable computers must **always** understand their **primary** input: the *program!*" And yet, by requiring his programs to be written in English, he is tacitly acknowledging the truth of the second statement--which completely contradicts his thesis. – Phil_132 Jun 14 '16 at 04:38
  • @Phil_132 There is no requirement for the program to be in English. A program is a sequence of instructions in a formal language. You keep claiming this but it's not only wrong, it's absurd. Searle didn't mention TMs but we can put his argument in context of TMs and it's clear that a program need not be in English and in fact IS not in English. – user4894 Jun 14 '16 at 05:18
  • I don't understand why you are so sure that the Searle-computer's program must be in English (assuming he only speaks English). We could also, like a cruel version of the [Mao card game](http://bit.ly/2ayTnTD), make John Searle follow the right rules by Pavlovian conditioning with whiplashes. – viuser Sep 11 '16 at 01:32
  • Searle's knowing the English program but not the Chinese is the very heart of the CRA's logical structure, and--again--Searle has repeatedly made this abundantly clear: His original [1980 article](https://web.archive.org/web/20071210043312/http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html): _"...after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, ...the set of rules in English that they gave me, they call the 'program.' "_ – Phil_132 Sep 11 '16 at 03:11
  • And 29 years later, [Searle, 2009](http://www.scholarpedia.org/article/Chinese_room_argument#Statement_of_the_argument) makes exactly the same point: _"Statement of the argument: ... Imagine that I am locked in a room with boxes of Chinese symbols (the database) together with a book of instructions in English for manipulating the symbols (the program)."_ Obviously, the CRA's programs cannot be encoded arbitrarily because the computer is Searle himself, and he must understand the program rules so that he can execute them correctly. – Phil_132 Sep 11 '16 at 03:12
  • The question is not if in Searle's original formulation of the CRA the rules are in English. I don't deny that. It's about your claim that the rules *have* to be in English, that the CRA cannot easily be reformulated so no English rules are involved (as I've proposed in my last comment). You haven't shown why this can't work. – viuser Sep 12 '16 at 00:52
  • The analog to the instructions being in English is the actual electrical charges flowing through the processor mechanically and without semantic content nor the capacity for semantics. Note that Searle also makes the point that "computer" used to commonly be understood as "the person that computes" – MmmHmm Feb 18 '17 at 18:16
  • Yes, I agree that the CR's usage of English corresponds to circuitry design & function (CPU, memory, etc.). Is that not "semantic"? Clearly, it's short of human-level self-conscious understanding. Therefore, is it purely formal symbol manipulation (FSM)? No. FSM is *non-representational*: tokens identify their symbol-type **and nothing more**. But program symbols go beyond type-identities to represent things in the world. "+" and "-" represent particular physical machine processes, which the computer must correctly instantiate. While not human, it goes beyond FSM to real-world representation. – Phil_132 Feb 18 '17 at 19:46
2

You ask the obvious question. "If semantics is not syntax, then what is it?"

If the two are truly separate, you have a terrible difficulty explaining how semantics is teachable. You either end up in some kind of mandatory idealism where the basics of meaning needed to bootstrap semantics are already 'there', or with a functionalist model like that of Wittgenstein, de Saussure, or Lacan.

In the latter case there is not syntax and semantics, there is just a continuum of semiotics with two unreachable ends. (A 'cuter' way to put the case @Conifold makes. So I am not going to bother repeating the reasoning here.) Semantics is just the syntax of behavior in general, rather than the syntax of a specific narrow range of behaviors you do with your vocal apparatus, with text, or with gestures. If your distinction is a spectrum, and pure forms of neither extreme are real, the argument can no longer be made.

In the former case, the one more consonant with Searle, the argument becomes much larger, and takes too many forms to kill them all off at once. But in most forms of idealism that allow for a basic internal structure to the mind independent of its function in reality, semantics is not real, either.

The meaning, in the form in which it can occupy the mind, is real, and it is meaning, not semantics. And the connection between minds that transfers meaning through behavior is real. And it is, being made of behavior, syntax, not semantics.

Intelligence, then, as it is functionally displayed through behavior by humans, is just this generalized syntactic wrapper around an essentially different process. Maybe one cannot artificially reproduce that process, but that is a different statement. There is no reason why the wrapper itself, intelligence, cannot exist without the customary stuffing of mind and will.

  • "There is no reason why the wrapper itself, intelligence, cannot exist without the customary stuffing of mind and will." I wonder why no one threw that in Searle's court. Or have they and I have missed it? – Alexander S King May 20 '16 at 22:01
  • I think it falls under the same complaint I made to @Conifold's answer. There is an implicit assumption that intelligence without a mind behind it is not intelligence -- that *borrowed* purpose is not purposeful enough. But to me, that implicit assumption denies that we, as animals, are bundles of borrowed intention, driven by our drives, borrowed from our genes. Basically, if you follow it down, you have to attribute intelligence to cultures, to species and ultimately to genes, which people find ludicrous. (But I don't) –  May 20 '16 at 22:15
2

I believe the exact sentence by Searle in the Chinese Room paper was 'syntax is not enough for semantics'. He made the meaning of the sentence more precise by proposing a case in which the syntax of a language is perfectly operated without any semantic comprehension arising from the process. Now, semantics in Searle's sense is a kind of mental comprehension, and it is rather obvious that it is possible to operate the syntax of a language without a mental glimpse of its meaning emerging from the operation. Therefore, in Searle's sense, syntax is indeed not enough for semantics.

1

This idea, that syntax is independent of semantics, and that therefore a computer can function perfectly without ever knowing the meaning of what it is computing, seems like a much stronger argument against AI, and against mind-body functionalism in general, than Searle's original Chinese Room argument.

there's mysticism here. a simulation of a human is a human. it's the same thing. human minds are software running in a classical computer. that's what you already are – a computer running software. the hardware details don't matter to the computations. no soul or organic molecules required.

an intelligence software program has to do certain things. included on the list is create knowledge. the only known knowledge-creating process is evolution. knowledge can be created through replication with variation and selection. (in the case of ideas this would more normally be called brainstorming and critical thinking to eliminate errors). software that does this within various parameters, and does a few other things, would be a thinking person. that's all there is to it. stuff like emotions are emergent properties of software, they aren't tied to souls, hardware made of organic molecules instead of silicon, etc

curi