
The Chinese room reacts only to syntax, the shape of symbols (it is purely syntactic). But brains are full of structure. In the room, Chinese symbols sit scattered in "piles" on the floor, are moved around in "batches" or "bunches", or are stored jumbled up in "baskets", with no structural connections between the symbols.

The things computers process are called "symbols". Computers can build structure between symbols and react to, or follow, that structure, and often do. Virtual connections between memory locations can be established using pointers, and algorithms can follow those connections using direct memory addressing and indirection.
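
Here is a minimal sketch of that idea in C (the type and field names are my own, chosen only for illustration): two symbol tokens are connected by a pointer, and the connection is followed by indirection, with no test anywhere of what the symbols' shapes are.

```c
#include <stdio.h>

/* Illustrative only: a "symbol" token holding a shape, plus a pointer
   that acts as a virtual connection to another memory location. */
struct symbol {
    char shape;             /* the token's shape (stands in for a Chinese character) */
    struct symbol *next;    /* structure: a connection between symbols */
};

int main(void) {
    struct symbol b = { 'B', NULL };
    struct symbol a = { 'A', &b };      /* build structure: a is connected to b */

    /* Follow the structure by indirection. Note there is no conditional
       on the value of `shape`; the program only follows connections. */
    for (struct symbol *s = &a; s != NULL; s = s->next)
        printf("%c\n", s->shape);       /* emit whatever is found */
    return 0;
}
```

Building the connection (here done at initialisation) and following it (`s = s->next`) are operations on structure, not reactions to the shapes stored in the tokens.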

This structural, or relational, ability of a computer program can be mirrored in the Chinese room by adding a new object type to the room's ontology: string. Pieces of string in the room can then connect tokenised Chinese symbols. Every piece of string has the same characteristics, including length. The pieces of string are the embodiment of structure; they are relational elements of structure.

In the room, suppose the connections established between symbols are a causal consequence of temporal contiguity at the sensory surface: events contiguous in time cause contiguous sensory symbols to exit the sensor and then enter the room. The connections between those sensory symbols then record, as internal structure, the external instances of temporal contiguity at the sensory surface. Is such an internal structure an element of semantic content?
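
As a hedged illustration (again in C, with hypothetical names; stdin stands in for the sensor), each token arriving from the sensor stream can be chained to the token that arrived just before it, so that adjacency in the internal structure mirrors temporal contiguity at the sensory surface:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: a sensory token and a connection recording
   "this token arrived immediately after that one". */
struct token {
    int shape;
    struct token *came_after;   /* records external temporal contiguity as internal structure */
};

int main(void) {
    struct token *latest = NULL;
    int c;
    while ((c = getchar()) != EOF) {          /* getchar() stands in for the sensor */
        struct token *t = malloc(sizeof *t);
        if (t == NULL) return 1;
        t->shape = c;
        t->came_after = latest;               /* connect to the previously arrived token */
        latest = t;
    }
    /* Walk the recorded structure from the most recent token backwards
       (so the stream is emitted in reverse arrival order). */
    for (struct token *t = latest; t != NULL; t = t->came_after)
        putchar(t->shape);
    putchar('\n');
    return 0;
}
```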

In the computer, if the internal memory structures built with pointers are trees, a program can walk the trees and emit as output copies of the leaves (symbols), without reacting to (identifying) the shapes of the symbols. The program merely copies and emits whatever it arrives at that has no children. The program contains no conditionals indexed on symbol shape.
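
A rough sketch of such a tree walk in C (names invented for illustration): the only conditionals test whether a node has children, never which shape a leaf carries.

```c
#include <stdio.h>

/* Illustrative only: a binary tree whose leaves carry symbol shapes. */
struct node {
    char shape;                   /* meaningful only at leaves */
    struct node *left, *right;    /* the downward connections ("strings") */
};

/* Walk the tree and emit every leaf reached. There is no conditional
   indexed on symbol shape, only on the presence or absence of children. */
void emit_leaves(const struct node *n) {
    if (n == NULL) return;
    if (n->left == NULL && n->right == NULL) {
        putchar(n->shape);        /* copy out the leaf, shape unexamined */
        return;
    }
    emit_leaves(n->left);
    emit_leaves(n->right);
}

int main(void) {
    struct node l1 = { 'X', NULL, NULL };
    struct node l2 = { 'Y', NULL, NULL };
    struct node root = { 0, &l1, &l2 };
    emit_leaves(&root);           /* prints XY without ever testing a shape */
    putchar('\n');
    return 0;
}
```

Given a suitably built tree, the walk would emit a well-formed string while containing no rule of the form "if the symbol has shape S, do X".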

Suppose Searle is blindfolded and then walks a tree by following the string with his hands. When he arrives at a leaf (a card inscribed with a Chinese ideogram, a card with no downward strings attached), he emits the card and then continues his tactile tree walk. Since the rules he is following do not instruct any reaction to the shape of a Chinese symbol (and hence contain no example or description of any Chinese symbol shape), does this mean the program in the rule book is non-syntactic with respect to Chinese symbols, and that Searle manipulates the symbols non-syntactically?

In 2014, Searle says (his emphasis): "...a digital computer is a syntactical machine. It manipulates symbols and does nothing else" ("What Your Computer Can't Know", in The New York Review of Books, October 9, 2014, section 2, para 7). Pieces of string are not symbols. Is his careful avoidance of structure his fundamental mistake?

Roddus
  • In "Mind, Brains, and Programs" Searle is given "instructions" and "rules" so he can give back answers. He doesn't specify what these are so that they are general. In the end he doesn't understand Chinese and so the program he is imitating does not understand Chinese either. I don't think he is avoiding anything. If he did understand Chinese then a strong AI mind-body dualism would be justified and mind could be separated from the body. However, he did not understand Chinese. – Frank Hubeny Dec 27 '17 at 00:29
  • Strings of what? Strings are strings of symbols. They have no more semantic content than individual symbols do. The string "cat" has no more semantic content than the individual strings 'c', 'a', and 't'. It's the humans who assign meaning to that string. And for that matter, isn't an individual symbol just a string of length one? I don't follow the point you are trying to make. Symbols or strings of symbols are the same thing. Blindfolded walking of a data structure is nothing more than syntactic processing of symbols. – user4894 Dec 27 '17 at 01:24
  • @user4894. With text, sure, the relationship between the symbols, e.g. c, a, t (temporal contiguity (TC) as they pass through a surface, or spatial contiguity when stored), has no semantic properties. But for sensory symbols, the fact that one follows another into the computer mirrors (not denotes, not means) TC at the sensory surface between whatever caused the sensor to create the symbols. It's the same relation: TC in the environment, TC between sensory symbols. It's the same thing on the inside as on the outside. Isn't this a semantic element? (That might even be a component of representations.) – Roddus Dec 28 '17 at 01:26
  • @Frank Hubeny To me, the program doesn't understand the Chinese answers because all it contains is conditionals about the shapes of Chinese symbols. If Searle understood Chinese merely by virtue of identifying the Chinese symbol shapes, I don't quite see how this might imply dualism. The mind would still be a resident of the physical, not spiritual, plane. The mind (the program) could be separated from the body (the computer), but the program would still be a physical object. Do I understand your comment properly? – Roddus Dec 28 '17 at 01:40
  • @Roddus I'm not sure what you mean by "sensory symbols." If you mean the sequence of symbols generated by a sensor connected to the outside, how does the computer know anything about that? If a cpu sees a stream of bits, a human may know that those are the output of a physical sensor, but the computer has no such knowledge. It's just another bitstring to be manipulated according to rules. That's a perfect example of a human supplying the semantics. The humans know that the bitstring represents a temperature in the real world. The cpu only sees the bitstring and has no idea what it means. – user4894 Dec 28 '17 at 02:16
  • @user4894 The computer doesn't know. But then I don't know that pulses in certain fibres come from my eyes or ears. I don't know where these fibres are, I don't know if anything is pulsing along them; my mind is totally ignorant of the physics of the connections between my eyes and my brain. But in early learning, structures are created from each sensor bitstream in their own brain areas. Further learning connects the single-sense structures together by "binding". The connected structures could pass for representations of external objects. This is a really different picture compared to the Chinese room. – Roddus Dec 28 '17 at 03:19
  • @Roddus. But the structures *can't* pass for representations of external objects, as you claim, because they are in no way accessible as such by the computer. So, although it's a different picture from the Chinese room, it's not a realistic one. – Pé de Leão Dec 28 '17 at 14:18
  • @Pé de Leão By "*are in no way accessible as such*" do you mean that the computer can't access the structure? Or do you mean that the computer can access the structure but can't recognize it as a representation of an external object? I.e. the computer can't understand what the structure means? – Roddus Dec 29 '17 at 01:50
  • @Roddus. Structure is an abstract concept that exists in our minds, so the computer can neither understand it nor access it. As I said before, every bit in a computer is epistemically isolated, so the concept of relation is meaningless for the computer. It can't perceive a single bit, much less any relation between bits. – Pé de Leão Dec 29 '17 at 02:21
  • @Pé de Leão But a computer can realize the abstract concept of structure. Brains realize the abstract concept of structure by containing structures. Computers can realize the abstract concept, too. I agree that every bit (voltage pulse, magnetic domain, switch state) in a computer is epistemically isolated and we can't perceive bits or relations between them. But then I can't perceive neural pulses in biological brains either. I can perceive dendrites, neural connections, but not computer memory connections. But that's because memory connections are "virtual" and created by pointers... – Roddus Dec 29 '17 at 19:49
  • @Pé de Leão Cont... Algorithms can follow computer virtual connections by using pointers, which allow the algorithms to move from one memory location directly to another in linear computer memory. To me this seems the same as a process moving along an actual physical tube from one x-y-z 3-D coordinate location to another. Pointers have to be used in a computer because computer memory is linear, one-dimensional, while brain structure is three-dimensional. But I think that pointers do allow 3-D structures to be "mimicked" (if that's the right word) in 1-D linear memory in the needed respects. – Roddus Dec 29 '17 at 20:03
  • @Roddus. Brain states somehow get mapped to consciousness, which serves as an "output device" characterized by some unifying principle, making thinking and abstraction possible. However, the same isn't true for computers because there's nothing like consciousness to map the data to, so there's no way to decode it, and thus it remains forever inaccessible, kind of like the symbols in the Chinese room. You can *claim* that computers can realize abstract concepts, but you can't even begin to propose any mechanism as to how that might be possible. – Pé de Leão Dec 29 '17 at 20:35

2 Answers


The only way that adding structure could introduce semantics into Searle’s Chinese Room Argument (CRA) is if one could imagine Searle understanding Chinese by going through the programmatic process with this additional structure, whatever it is, included. Searle does not specify what a program might be asking him to do. It may be so advanced that it is beyond our imagination today. It may be highly successful and convince everyone it understands Chinese. Even with all this, Searle claims, and I would agree, that he would not understand Chinese after imitating the process. So I conclude that “adding structure” does not help: Searle has already implicitly added it.

Consider the final question: “Is his [Searle’s] careful avoidance of structure his fundamental mistake?” I don’t think Searle is making any mistake with the CRA. However, he may be making a mistake with his physicalism, but that is independent of the CRA. An idealist or a traditional mind-body dualist could use the CRA to get the same two results Searle does in his “Minds, Brains, and Programs”, namely, that machines cannot understand and the machine and its programs do not explain our human ability to understand. There may be many ways to explain our ability to understand besides Searle's preferred “certain brain processes”, but AI programs are not one of them.

Frank Hubeny
  • *It may be so advanced it is beyond our imagination today.* -- Actually we have a working implementation of the Chinese room in Google Translate. Does anyone think Google Translate understands Chinese? – user4894 Dec 27 '17 at 03:10
  • @user4894 Since we know there is a program underlying Google Translate we would not be tempted to anthropomorphize it. The same could be said for when our ebook reader opens up a file. Do we think this software reads with understanding what is in the book it opens for us to actually read? Or consider a physical book. Do we think the physical book understands the text it is presenting to us when we read it? – Frank Hubeny Dec 27 '17 at 15:32
  • "* Even with all this [added programmatic complexity and structure], Searle claims, and I would agree, he would not understand Chinese after imitating the process*". I haven't seen him talk about structures in the room. He seems to focus solely on the extrinsic meaning and intrinsic syntax of symbols. He uses the term "data base" and "database" (no space) but he is talking about "baskets" and "boxes" and there is no structure of symbols inside these (and in fact they are not databases). Just as he talks of "bunches" and "batches", there is no structure. – Roddus Dec 28 '17 at 02:34
  • Cont... I think structure plus syntax can explain semantics, but syntax alone can't. Which is an interesting point. If computer programs can create structure (which they can), then the Chinese room, and the Chinese room argument with it, fails to take into account all the things a computer can do. My argument is that structure is not syntax. If true, then Searle's premiss "computers are purely syntactic devices" is false and the CRA is unsound. So a good question seems to be: *Is structure syntax?* – Roddus Dec 28 '17 at 02:41
  • @Roddus "Is structure syntax?" suggests the "systems reply" that Searle addresses in "Minds, Brains, and Programs". To eliminate any influence from outside the individual (which is what I assume you mean by "structure"), he lets the individual internalize all the elements of the system and even moves the person outside. Regardless whether structure is syntax or not, Searle takes it into account. A question he asks is how would strong AI distinguish the mental from the non-mental should one accept strong AI? For example, does a thermostat understand temperature? – Frank Hubeny Dec 28 '17 at 15:29
  • @Frank Hubeny By structure I mean a program's ability to create (and follow) structure. If syntax is reaction to symbol shape, then the ability to build structure (relate symbols together without reacting to their shapes) is an extra super power that programs have, and Searle's premiss that computers are purely syntactic devices is false (and the CRA unsound). The internalized Chinese room is still a purely syntactic system according to Searle, but if it can create structure then the internalized room is not purely syntactic and Searle's premiss is false. That's the idea I'd like to offer. – Roddus Dec 29 '17 at 01:44
  • @Roddus I don't see how Searle hasn't covered "structure" in his answer to the systems reply to the CRA. From my viewpoint I see this structure as adding nothing new to syntax and those supporting strong AI see semantics as adding nothing new to syntax. Anything a program could do Searle could imitate and not reach understanding of Chinese. I suppose one way for strong AI to get around this is to claim that semantics doesn't exist, that is, that it is an illusion of some sort, however, I don't think that is true. – Frank Hubeny Dec 29 '17 at 15:19
  • @Frank Hubeny As I understand Searle (and Stevan Harnad, etc.), syntax is shape. But more generally it could be regarded as a property (different shapes being different values of the property of shape). The program is purely syntactic because its conditionals act only on the shapes of the Chinese symbols - an intrinsic *property* - with no reference to the meanings of the shapes - a 2-term relationship one term of which is the shape, the other a meaning (the relation is actually more complex than this since shape is a universal). Structure is a *relationship*, not a property. – Roddus Dec 29 '17 at 19:24
  • @Frank Hubeny Cont... So my argument is that since syntax is a property, and since structure is a 2-term relationship, and since properties are not 2-term relationships, structure is not syntax. And since computers (their programs) can create and follow structure, Searle's CRA premiss "computers are purely syntactic" is false and the CRA unsound. This opens up the possibility that though syntax is insufficient for semantics, syntax + structure might be sufficient, and computers might be able to build internal semantic structures. – Roddus Dec 29 '17 at 19:33
  • @Roddus Regarding "shapes" Searle mentions this: "these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch" ("Minds, Brains, and Programs"). He used "shape" to emphasize that he does not know what the Chinese characters mean nor their syntax. They are just shapes *to him*. The *program* is able to construct an output string. It doesn't matter how the program does this. When Searle imitates the program, he does not understand. – Frank Hubeny Dec 30 '17 at 02:08
  • @Frank Hubeny (Stevan Harnad talks a lot about shape, too, when talking about what computers do.) By "nor their syntax" do you mean nor their semantics? As I understand it, to Searle shape is syntax, so by seeing the symbol shape he would know the syntax. Might it matter how the program constructs the output? I think the program needs to manipulate the symbols only on the basis of their shapes. I agree that the whole point of the Chinese room is that Searle doesn't understand what the shapes mean. – Roddus Dec 30 '17 at 08:21
  • @Roddus As I read it, all Searle knows is the "shape". He knows neither the semantics nor the syntax of Chinese. He knows just enough to pick out an individual character by its shape. The program has a way to generate correct output that Searle can imitate. Where I might disagree with Searle is that the program doesn't really know the syntax any more than the semantics either, but the program does have a method to manipulate the symbols to get the right answer. – Frank Hubeny Dec 30 '17 at 13:28
  • @Frank Hubeny Yep, I agree the program doesn't know the syntax (because it doesn't know anything). But when Searle talks about syntax, what do you think he means exactly? Does he mean just shape, or is there more to syntax in the Chinese room (or computer) than just shape? – Roddus Jan 10 '18 at 00:13
  • @Roddus I think Searle associates "syntax" of the human language with the program's formal rules, but I am not sure. The idea of "shape" only applies to the human being in the Chinese Room. All that human knows is the shape of the characters because he sees them, nothing else. That person follows the formal rules of the program, not the syntax of Chinese which may be different. Humans break syntactical rules in colloquial speech which makes me think the syntax-semantic divide is not clear for us. But a programming language would only have formal rules, just its syntax. – Frank Hubeny Jan 10 '18 at 12:52

I am aware that this response is slightly off topic; however, I hope it still helps.

I think viewing Searle's Chinese room as an "intuition pump", a concept introduced by Daniel Dennett, is a useful approach. On this view, thought experiments are devices that give us better or worse intuitions about a certain phenomenon. By slightly changing parts of the thought experiment in question, one sees whether it is a good intuition pump or not, by analyzing whether the changed thought experiment sustains the same intuition.

My conclusion is that the CRA depends strongly on its initial form to create the intended intuition. That is, adding new entities as you suggest, e.g. "strings", shows the limited validity of the CRA for the analogous phenomena it tries to describe.

I disagree with your statement that there are:

no structural connections between the symbols in the CRA.

Ordering them, guided by the rulebook, creates a structure that contains meaning for the receiver. The key point seems rather to be the unawareness of, and lack of interest in, that structure on the part of the person in the room. This creates the clear cut between syntax and semantics. The clear cut is also caused by the rulebook containing two languages, superimposed by someone who isn't the person in the room; the person in the room understands only one of them and merely shuffles expressions of the other language around.

This lack of interest poses the question: given the temporal structure of the sensory input, does the person in the CRA have any desire to derive the semantic property? Seemingly not; he just does his work.

Note that in the part where you discuss software, you seem to distance yourself from what Searle seems to mean, since you are arguing about the structures used in the rulebook to transmit the desired semantic properties, not about the CRA itself.

To me it seems as if the CRA mainly focuses on the analogy of a single CPU core, so demanding interest from the mechanism flipping the bits seems problematic.

In light of the intuition-pump view mentioned above, your approach seems both appropriate and inappropriate. Appropriate, since you restructure the initial CRA to make it give better intuitions about possibly more complex computers. However, the initial CRA still holds for simpler systems like ordinary calculators.

Others have chosen similar approaches, e.g. trying to identify the overall system as what is relevant, laying more weight on the structure of the rulebook (the software). I myself tried this by reformulating the CRA to look more like a nerve cell and combining it with other modified CRAs to get a 3D brain-like structure.

My conclusion is that the CRA illustrates the wrong level of analysis for complex systems. Therefore I also view your approach as inappropriate, since choosing the CRA as the model seems unnecessary for the general questions you seem to be asking, such as: How does semantics arise in a system? What exactly is semantics? How does complexity affect semantics? And so on.

CaZaNOx
  • Thanks. Yep, the input questions and output answers comprise temporally contiguous symbols as the strings enter/leave the room (hence the symbols, as they enter/exit, are terms of instances of the relation of temporal contiguity). But once inside and before they leave, the symbols, along with the spares in the "boxes" or "baskets", don't seem to be parts of any structure; they are just thought of as individual tokens. There are no tree structures, for example, but brains are full of these (and the room is supposed to be a computer trying to be a brain - and programs can easily create tree structures). – Roddus Dec 28 '17 at 01:57
  • Cont... What exactly semantics is remains a huge problem, of course. Linguists and philosophers are fairly clear about it - as per the concepts of linguistics and philosophy of language, but *not* of computer science. One idea I wanted to discuss is that a semantics is a forest of trees: syntax (symbols or equivalents) alone is insufficient for semantics, but when the structural element of tree connections ("arcs") is added, the two types of component, symbols and connections, combine to yield semantic properties. Though there's quite a lot of resistance to such a simple idea. – Roddus Dec 28 '17 at 02:19
  • @Roddus Why do you focus your view on tree structures? Is it only because "brains are full of them"? Why not lists, arrays, etc.? Isn't it perceivable that the rulebook instantiates a program that creates a working of the person in the room that resembles a tree structure? Doesn't this just increase the speed with which output is generated, due to more efficient structuring of actions and tokens, rather than ascribe semantic properties at this level of analysis? Even if one is sympathetic to your idea, it's unclear why the mechanism adding the components together should be aware of this semantics. – CaZaNOx Dec 28 '17 at 08:23
  • Trees seem interesting for several reasons, but linked lists, etc. could be built too. Presumably the rule book could instruct Searle to create tree structures, if there were a way to relate symbols (and created nodes) together. When you say *the mechanism adding the components* do you mean the program and/or the CPU is the mechanism? I wasn't thinking that these might have semantic properties, but rather that the built structure itself is the semantics. Awareness is a high-level feature. I was looking more at the low level problem of making inner representations of external objects. – Roddus Dec 28 '17 at 09:01
  • @Roddus. By calling awareness a "high-level feature," are you suggesting that it's something that could just happen all by itself, without a programmer intending it or having any idea as to how to bring it about? – Pé de Leão Dec 28 '17 at 15:27
  • @Pé de Leão I think awareness is an algorithm which would be intentionally designed as an algorithm of awareness (i.e., not an emergent property, 'genetically evolved' algorithm or accident). Given that so little is known about really basic mental phenomena, it seems good to start pretty near the bottom with issues like: What is an internal representation of an external object? What sort of computer memory structure would realize a representation? And how do these structures develop through learning from experience? To have awareness, presumably many inner representations have to already exist. – Roddus Dec 29 '17 at 01:31
  • @Roddus Yes, I do. I understand the CRA as the processing unit of a single core, with the control unit (the rulebook), the ALU (the human in the CRA) and the registers (the heaps). If you think the built structure itself (the entire CRA, or the resulting output as software understandable to us) is the semantics, why are you adding new data types to the CRA? By "aware" I wasn't necessarily meaning awareness. I meant to ask where the semantic meaning becomes relevant, and to whom. In the case of the CPU we can say the programmers, but isn't the interpretation used by them arbitrary? – CaZaNOx Dec 29 '17 at 21:11