
I'm relatively new to philosophy.

This question is based on John Searle's Chinese Room Argument.

I find it odd that his main argument for why programs could not think is that programs can only follow syntax rules but cannot associate any understanding or semantics with words (or any other object or symbol).

This point seems contestable to me (although I can't quite word it well enough). How is Searle so certain that it would be impossible for a program to understand semantics? Is mimicking semantic understanding actually different from genuine semantic understanding?

What does philosophy say about whether it is truly impossible for mankind to one day develop a program capable of semantic understanding? According to Turing's Same-Evidence Argument, if a computer can pass the Turing test, we would have to assume that it is capable of understanding. Could we even distinguish between mimicked understanding and actual understanding?

Edit: wow this really blew up. Thank you for the answers!

Sidenote:

I posed this question partly because Searle's visualization of what a program looks like seems flawed to me. He is able to clearly break down and visualize a program (the room) with a 'core' (the man in the room) in the middle that handles the inputs and produces the outputs.

However, complex algorithmic programs aren't designed in such a simple manner. Take artificial neural networks, for example, which are said to be "black boxes" because we can't break a neural network down into components to deduce how it decides to give certain outputs. Searle's argument seems built on the assumption that we can 'peer' into how programs/algorithms make decisions, when that isn't necessarily true.

Chess algorithms such as the famous Deep Blue and AlphaZero sometimes produce moves that professional chess players fail to consider. Would Searle argue that these algorithms "fail to understand chess"? It seems flawed to say that a program fails at semantic understanding when it can display creativity which human chess players themselves may lack.

Abraham
    " Is mimicking semantic understanding actually different from genuine semantic understanding?"... that's what the Chinese Room argument is supposed to show. Where do you disagree with the argument? – Ameet Sharma Nov 08 '21 at 16:52
    Here's a related question that might interest you [Computers, Artificial Intelligence, and Epistemology](https://philosophy.stackexchange.com/questions/68915/computers-artificial-intelligence-and-epistemology/). – J D Nov 08 '21 at 18:44
    "What does philosophy say" is too broad a question for this site, encyclopedias are better for getting some general background. See e.g. [SEP, The Chinese Room Argument](https://plato.stanford.edu/entries/chinese-room/) on what Searle means by "semantic understanding" and how it is disputed by AI oriented philosophers. – Conifold Nov 09 '21 at 00:03
    It's not impossible. Semantics is just establishing a correspondence between a symbol (words like "apple" or "liberty", or signs like a "!" or a red triangle) and other symbols (a definition) or sensory input (the various contexts in which "apple" or "liberty" were uttered around you). There is no magic behind it, and an AI is perfectly able to do that. Current applications remain very specialized compared to humans but the difference is in degree, not quality. – armand Nov 09 '21 at 04:17
    Those arguments were made before a modern understanding of artificial neural networks, when computers were assumed to always be algorithmic. Any amount of interaction with GPT-3 will demonstrate a fair bit of semantic understanding and GPT-4 etc. will only get better. – Eugene Nov 09 '21 at 05:21
    @Eugene On the contrary, interaction with GPT-3 demonstrates the ability to *mimic* semantic understanding. Searle's (highly controversial) conclusion is that even perfect mimicry of semantic understanding does not imply actual semantic understanding. – Charles Staats Nov 09 '21 at 17:04
    @CharlesStaats When he was writing, Searle could only conceive of a pre-defined set of inputs/outputs, so any semantic understanding was mimicry. Neural nets have demonstrated the ability to produce creative results beyond their training data-sets and beyond the human best. We now know our brains are neural nets that we only know the subjectivity of because we can describe it from the inside (unlike artificial neural nets, so far), so how can one argue that one type of neural net has semantic understanding, while the other one is only mimicking it? – Eugene Nov 09 '21 at 23:00
    @Eugene - Searle did not restrict his argument to programs with "a pre-defined set of inputs/outputs", if you read the section on "replies" in [his original 1980 paper](http://www.cs.tufts.edu/comp/50cog/readings/searle.html), under heading III he discusses the "brain simulator reply" involving a detailed simulation of an actual human brain. He doesn't suggest any skepticism that such a simulation could *behave* just like a human brain, but he still thinks his argument shows it would have no understanding, since a conscious being could execute the program without understanding Chinese. – Hypnosifl Nov 10 '21 at 14:48
  • @Eugene In what way are neural nets not algorithmic? They run on perfectly conventional hardware. Of course they're algorithmic. You could execute a neural net using paper and pencil, albeit slowly. One instruction at a time. Weighting nodes is a pretty old technique, and the McCulloch-Pitts neuron dates to the 1940s. – user4894 Sep 16 '22 at 00:12
    @user4894 -- There are no algorithms that determine what sort of binning the neural nets will do, whether in our heads, or in one of our computers. We have written algorithms to do neural net processing, but the neural net processing itself is non-algorithmic. Note also, if we built an actual neural net, which did analog processing like our brains do, rather than simulating a neural net as we currently do, then we could not identify the "one instruction at a time" that actual neural net processing involves. – Dcleve Sep 16 '22 at 13:18
    "*It's **impossible**... to put a Cadillac up your nose, it's just impossible...*" - Steve Martin – Scott Rowe Sep 16 '22 at 15:09
    @Dcleve You could in principle execute a neural net one instruction at a time with pencil and paper. Of course it's algorithmic. It runs on conventional hardware. Neural nets aren't magic, they're computer programs running on off-the-shelf conventional computer hardware. Secondly, if you had an "actual neural net" as you call it that does analog processing, it would not be a digital neural net at all, so you're talking about something entirely different. And you haven't got one of those except between your ears, and we don't know how it works. – user4894 Sep 16 '22 at 15:26
  • @user4894 -- There are clear differences between an algorithmic approach to computing, where functions are identifiable, characterizable, and can be extracted and analyzed separate from the implementation, and "neural net" simulations, where the output functions are NOT characterizable, etc. Those differences are well understood in the AI community, and as the limitations of Deep Learning AI have become apparent, the latest research area in AI is focussed on how to do processing fusion between these two DIFFERENT methods of implementing AI. Declaring them to be the same is silly dogmatism. – Dcleve Sep 16 '22 at 15:40
    @Dcleve I'm sure undergrad CS majors intending to specialize in AI are relieved to find out they won't be using algorithms. LOL. I'm not engaging in silly dogmatism. I'm pushing back against AI mysticism that claims that neural nets don't use algorithms, aren't computer programs, "do things that we don't understand," which is true of 99% of the large commercial programs out there. To think clearly about AI you need to use words precisely. (ctd ...) – user4894 Sep 16 '22 at 16:52
    @Dcleve ... Neural nets are indeed algorithms, as a glance at their source code would reveal. They're complex as hell, but so is the global supply chain. Another collection of local algorithms whose global behavior nobody understands, but that nobody endows with mysticism and hype. – user4894 Sep 16 '22 at 16:52
  • @user4894 There are algorithms that describe the training and execution layer, but the randomness involved makes the result completely unpredictable and non deterministic, especially if the common technique of 2 adversarial nets working against each other is used. The original Chinese room thought experiment supposed an extremely large, yet still deterministic rule set, i.e. algorithmic – Eugene Sep 16 '22 at 21:21
    @Eugene Neural nets are simply not random and not nondeterministic. They run on conventional hardware. At the cpu core level they execute one machine instruction at a time, one after another, no different in principle than "Hello world." Surely you must know this. Unpredictable is not the same as nondeterministic. A coin flip is unpredictable but perfectly deterministic as a function of flip force and angle, air pressure, etc. Nothing to do with the Chinese room. Just pushing back on this mysticism that neural nets are not algorithms and not deterministic. – user4894 Sep 16 '22 at 21:38
    @Eugene A casual search gave this: https://phoenixite.com/are-neural-networks-stochastic-or-deterministic/ *Are neural networks deterministic? The answer to this question is pretty much straightforward; once trained, the internal working of a neural network becomes deterministic and not stochastic. Neural networks are stochastic before they are trained. They become deterministic after they have been trained. Training installs rules into a network that prescribe its behaviors, so an untrained model shows inconsistent behaviors. Training creates clear decision patterns within the network.* – user4894 Sep 16 '22 at 21:54
    @Eugene *Neural networks are **a series of algorithms** [my emphasis] with the incredible ability to extract meaning from imprecise or complex data and find patterns and detect trends convoluted for several computer techniques.* I'm sure other sources would back this up, since all computer programs running on conventional (non-quantum) hardware are deterministic. Who on earth can possibly claim otherwise? There are no ghosts in our machines. – user4894 Sep 16 '22 at 21:54
  • @user4894 a good test of whether something is deterministic is if you can predict the outcome given the inputs. If you ever try image or text generation neural nets, you'll see that you can repeat the same input literally 1000+ times (tried out of curiosity) and get related but still completely different results. If you have a copy of a neural net, short of forcing non-random generation, you cannot predict what output it will generate for a given input at all. That's as stochastic as you get. – Eugene Sep 17 '22 at 04:30
  • @user4894 I looked up the author of the article you linked on LinkedIn (https://www.linkedin.com/mwlite/in/zachary-gene-botkin-a05b341b2); he's an image analyst at a startup supplying data for AI image classification, so he's missed the forest for the trees. Image classification neural nets ARE deterministic: if you feed it one input 1000 times, it will tell you it's a cat with 97.86% certainty each time, but he's generalized that to all neural nets, including generative ones, which is completely wrong. – Eugene Sep 17 '22 at 04:37
    @Eugene "a good test of whether something is deterministic, is if you can predict the outcome given the inputs." Completely false. A coin flip is deterministic but unpredictable. All of chaos theory is about things that are deterministic but unpredictable. This isn't the right venue to get into all the rest of this but you are simply wrong. A neural net ultimately executes machine instructions one by one, deterministically. If you think a cpu core decides on its own to do something other than what it's programmed to do based on the high-level code it's executing, it's back to CS 101 for you. – user4894 Sep 17 '22 at 04:47
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/139272/discussion-between-eugene-and-user4894). – Eugene Sep 17 '22 at 16:45
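
On the determinism point argued above, a minimal sketch may help (Python with numpy assumed; the weights below are random stand-ins, not a real trained model). A trained network's forward pass is ordinary deterministic arithmetic, while a generative model adds randomness only at the sampling step, and seeding the sampler makes even that step repeatable. Whether this counts as "algorithmic" in the sense Searle's argument needs is, of course, exactly what the thread disputes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a "trained" network: fixed weights and biases.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def forward(x):
    """Plain arithmetic: matrix multiply, ReLU, matrix multiply, softmax."""
    h = np.maximum(0.0, W1 @ x + b1)
    logits = W2 @ h + b2
    return np.exp(logits) / np.exp(logits).sum()

x = np.array([1.0, -0.5, 2.0])
print(np.allclose(forward(x), forward(x)))  # True: same input, same output

# A generative step samples from the output distribution; the randomness
# lives entirely in the sampler, so seeding it makes even this repeatable.
sampler = np.random.default_rng(42)
print(sampler.choice(2, p=forward(x)))
```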

18 Answers


There is a blatant problem with Searle's argument, and it's quite hard to understand why it hasn't been pointed out before: none of Mr. Searle's brain cells understands English, yet he claims that he does. What argument can he make for his own understanding that an AI couldn't reverse and throw right back in his face?

gnasher729
    Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/131350/discussion-on-answer-by-gnasher729-why-is-it-impossible-for-a-program-or-ai-to-h). – Philip Klöcking Nov 11 '21 at 17:18
    Yes, perplexing why we let Mr. Searle get away with that. I saw him give this argument in person about 40 years ago. – Scott Rowe Aug 29 '22 at 10:46

I find it odd that his main argument for why programs could not think is that programs can only follow syntax rules but cannot associate any understanding or semantics with words (or any other object or symbol).

That was more his conclusion than his argument. His actual argument about the Chinese Room thought-experiment was that if the room was occupied by a conscious agent who is perfectly capable of semantic understanding, like a person, and they were to execute the syntactical rules of a Chinese-speaking program by hand (or from memory), they would nevertheless lack any semantic understanding of Chinese. For example, in the SEP article on the Chinese Room that you linked to, it quotes Searle giving a summary of the argument in 1999 where he says (emphasis mine):

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

And it quotes a later 2010 statement where he said:

A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker

I also found his original 1980 paper on the subject online here, where he imagined that he was a native English speaker in the room responding to English questions in a natural way, and responding to Chinese questions based on hand-simulating an elaborate computer program, and his argument was based on the contrast between his own understanding in the first case with his lack of understanding in the second:

Now the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding. But we are now in a position to examine these claims in light of our thought experiment.

  1. As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank's computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.

  2. As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding. But does it even provide a necessary condition or a significant contribution to understanding? One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same -- or perhaps more of the same -- as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't.
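
To make the "formal symbol manipulation" in these passages concrete, here is a deliberately crude sketch in Python (a lookup table is far simpler than anything Searle allows, since his argument covers any program, and the two entries are toy examples), but it shows the relevant feature: no step of the procedure refers to what the symbols mean.

```python
# A toy "book of instructions": incoming strings of symbols are matched
# against the table and the prescribed reply is copied out.  Nothing in
# this procedure depends on what the symbols mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def chinese_room(input_symbols: str) -> str:
    # Look up the incoming squiggles, copy out the prescribed squiggles.
    return RULE_BOOK.get(input_symbols, "对不起，我不明白。")

print(chinese_room("你好吗？"))
```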

There have been various responses to the argument by philosophers who don't find it convincing, see section 4 of the SEP article. The one I think is the most convincing refutation is the "systems reply", which basically says that the boundaries of "systems" are somewhat arbitrary and that a given named physical system may have multiple computational sub-processes going on within it that could be sufficiently independent that each might individually have semantic understanding of certain things and yet lack understanding of things that the other sub-process does understand. To pick an extreme case, imagine some alien species that is naturally two-headed, with independent brains that have no neural connections between them--although both brains might be considered to be part of a single biological "system" we wouldn't be surprised if one brain could understand something (say, the Chinese language) that the other was ignorant of. And even if there were some neural connections between them, they might not be of the right configuration to ensure that high-level conceptual understanding of any arbitrary topic would necessarily be shared by both brains.

Here is David Chalmers giving this type of argument on p. 326 of his book The Conscious Mind, where the agent inside the room is a "demon" who may have memorization capabilities far beyond those of a real-life human:

Searle also gives a version of the argument in which the demon memorizes the rules of the computation, and implements the program internally. Of course, in practice people cannot memorize even one hundred rules and symbols, let alone many billions, but we can imagine that a demon with a supermemory module might be able to memorize all the rules and the states of all the symbols. In this case, we can again expect the system to give rise to conscious experiences that are not the demon's experiences. Searle argues that the demon must have the experiences if anyone does, as all the processing is internal to the demon, but this should instead be regarded as an example of two mental systems realized within the same physical space. The organization that gives rise to the Chinese experiences is quite distinct from the organization that gives rise to the demon's experiences. The Chinese-understanding organization lies in the causal relations between billions of locations in the supermemory module; once again, the demon only acts as a kind of causal facilitator. This is made clear if we consider a spectrum of cases in which the demon scurrying around the skull gradually memorizes the rules and symbols, until everything is internalized. The relevant structure is gradually moved from the skull to the demon's supermemory, but experience remains constant throughout, and entirely separate from the experiences of the demon.

Hypnosifl
    *in practice people cannot memorize even one hundred .. symbols* - I must have misunderstood this because we don't even need to be discussing a reasonably well educated Chinese person knowing thousands of symbols; I'd say the average two year old could identify more than 100 different icons of basic animals, household objects etc – Caius Jard Nov 09 '21 at 09:26
    _in practice people cannot memorize even one hundred rules and symbols_ I'm not a well educated Chinese person, and yet I've memorized upper and lower case Latin characters, numbers, math symbols, most upper and lower case Greek letters, a few Cyrillic letters, some traffic signs and other abstract icons, all of hiragana and katakana, a few hundred kanji, and dozens of grammar rules, traffic rules, courtesy rules, and rules in several programming languages. Surely several well educated Chinese persons know all that plus 5k+ ideograms. :-) – Pablo H Nov 09 '21 at 15:06
    @CaiusJard Re: _the average two year old could identify more than 100 different icons [...]_ While (amazingly) true, your examples are not abstract icons but images of things. For abstract symbols such as letters, "forbidden", "biohazard" and so on it's more difficult. – Pablo H Nov 09 '21 at 15:11
  • Yeah, but really what's the difference between abstract and concrete, to a two year old? Show them ❤️ and they say "heart" even though they've never actually seen a heart, and if you did toddle them off to the meat counter in the local store, it looks nothing like that. I don't necessarily agree that "forbidden" or "biohazard" (pronunciation complexities aside) would be significantly more difficult for a child to memorize than anything else they've never experienced. I don't understand the 100 symbols assertion because Pictionary would be a very dull game if it were true. – Caius Jard Nov 09 '21 at 16:02
  • So Searle's argument is that passing a Turing test is not a proof of understanding. It does not prove that creating artificial understanding is impossible. (Though I'm under the impression that Searle did in fact argue the second point). – Dave Nov 09 '21 at 16:43
    @Dave: I've never heard anyone seriously claim that passing a Turing test is proof of understanding in the first place, so if that's what Searle meant, he is refuting a straw man. – Kevin Nov 09 '21 at 18:15
    @Kevin We could distinguish between a real Turing test administered by a human over some limited amount of time, vs. an ideal Turing test administered by some superintelligent agent who has an infinite amount of time to test *all* the behavioral capabilities of an entity to see if it really does have all the types of behaviors that in a human we would take as evidence of things like 'understanding', 'intelligence', 'creativity', 'empathy' etc. There are various flavors of "functionalism" in philosophy of mind that I think would see the ideal sort of test as a demonstration of mental states. – Hypnosifl Nov 09 '21 at 19:56
  • And as another example, although Chalmers does not *identify* mental states with functional states, he does argue that the "psychophysical laws" that he postulates relating physical states to mental states would likely respect a principle of functional invariance (functionally identical systems would have identical mental states), see [this paper](http://consc.net/papers/qualia.html) where he argues for the idea that "experience is invariant across systems with the same fine-grained functional organization". – Hypnosifl Nov 09 '21 at 20:18
  • @Hypnosifl: IMHO Searle was just using the Turing test as an illustration of the sort of behavior we would expect to see from an agent that understands, rather than as an actual test of understanding. I still think Searle is dead wrong, of course, but I prefer to interpret him as charitably as possible, and under my interpretation, he was trying to make a much deeper point about the relationship between syntax and semantics, not just discredit the Turing test. – Kevin Nov 09 '21 at 20:30
  • This is an excellent answer, but I would like to say something about Searle's claim, "a computer has a syntax but no semantics." He makes this claim twice in "Minds, Brains and Programs": firstly as a conclusion when he states what the Chinese Room is trying to show, secondly as a premise when he is trying to justify his dismissal of the systems reply. This is, of course, begging the question, and when you eliminate all the circular reasoning, all that you have left is the unargued-for assertion that "a computer has a syntax but no semantics." – sdenham Mar 14 '22 at 18:02
  • At the point where you imagine a two-headed alien, I think you could substitute the phenomenon of blindsight, where a person will deny having knowledge of something, but act in a way that shows they do. – sdenham Dec 21 '22 at 13:41

As I see it, Searle is getting at the point that syntax is algorithmic — a system driven by predefined rules and procedures — but semantics is (as far as we can tell) not. In other words, it's easy enough to create and recognize a syntactically well-formed sentence on purely procedural grounds, but judging the meaningfulness of a sentence requires something beyond that. I mean, compare the following utterances:

  • Jarod loves potato chips
  • Loves potato Jarod chips
  • Jarod chips potato loves

The first is syntactically correct and clearly meaningful. The second is syntactically incorrect (it doesn't follow the procedural sentence construction rules of English). The third is syntactically correct (treating 'chip' as a verb), but of questionable meaning. What does it mean to 'chip potato loves'? Now, if you imagine those three phrases passed into the inverse of the Chinese room (a room in which a Mandarin-only speaker is processing algorithmic rules for English), that man would recognize #2 as structural nonsense, but he would make no distinction between #1 and #3. How could he?
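
For concreteness, here is a rough sketch of how such a purely formal check could be mechanized, using a toy lexicon and a single hard-coded sentence pattern rather than anything like a real grammar of English: the procedure accepts #1 and #3 and rejects #2 without any notion of what the words mean.

```python
from itertools import product

# Toy part-of-speech lexicon; a word may carry more than one possible tag.
LEXICON = {
    "jarod":  {"PROPN"},
    "loves":  {"VERB", "NOUN"},   # "loves" as a noun, as in "my loves"
    "chips":  {"NOUN", "VERB"},   # "to chip"
    "potato": {"NOUN"},
}

# One toy pattern: ProperNoun Verb Noun Noun (noun-noun compound object).
PATTERN = ("PROPN", "VERB", "NOUN", "NOUN")

def well_formed(sentence):
    words = sentence.lower().split()
    if len(words) != len(PATTERN):
        return False
    # Accept if any assignment of tags to the words matches the pattern.
    return any(tags == PATTERN for tags in product(*(LEXICON[w] for w in words)))

for s in ("Jarod loves potato chips",
          "Loves potato Jarod chips",
          "Jarod chips potato loves"):
    print(s, "->", well_formed(s))   # True, False, True
```

Judging whether #3 actually means anything is precisely the part no such table settles.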

Note that this is akin to the distinction in logic between the validity and the truth-value of a series of propositions. The first tells us nothing about the second, and vice versa.

What's missing from syntactical analysis is the ability to make meaning from ambiguity (through non-procedural processes like extension, analogy, metaphor, simplification, correlation...). You and I can sit and ponder what it means to 'chip potato loves', and sooner or later we'll assign a meaning to it. But in order to be able to assign meaning we have to assess the meaning of the individual words and find some correspondence within them. That is more a function of the practical use of words than their syntactic structure or overt dictionary definitions.

This might be clearer to see if we think in terms of humor. For instance, if we take a couple of (clearly stupid) jokes:

  • Why should you never fight a dinosaur? 'Cuz you'll get jurasskicked!
  • Whoever invented knock-knock jokes should get a no bell prize.

... we can see that they are both syntactically correct, but the joke lies in their odd correspondences: the link from fights to getting your ass kicked, and dinosaurs to Jurassic; the similarity of 'no bell' (meaning no doorbell, hence the need to knock) to 'Nobel' (the archetypal prize for smart people). We can program a computer to repeat these jokes, obviously, but if we fed them into our inverse Chinese room the man inside would not laugh, and would not output 'hah-hah' unless he was explicitly told to do that for these sets of symbols. To get a computer to understand the humor of these jokes (or at least the stupidity of them), we'd have to make the computer capable of wide-ranging fuzzy associations between otherwise unrelated concepts, and no one has yet developed an algorithm to do that. If they do, it will require more than syntactical analysis, so Searle's Chinese Room problem will still hold.

Ted Wrigley
    You seem to be assuming some notion of [symbolic AI](https://towardsdatascience.com/rise-and-fall-of-symbolic-ai-6b7abd2420f2) where we program the AI with high level "concepts" and various kinds of associations between them, but that approach has largely fallen out of favor with AI researchers. The focus is on bottom-up approaches like neural nets, where classification of sensory inputs into high-level groupings resembling "concepts" emerges from experience rather than being programmed in at the outset. Searle would claim the argument works in this case too but many philosophers disagree. – Hypnosifl Nov 08 '21 at 20:38
  • Well, I wasn't going to go there with a critique like Hypnosifl's, but being grammatical is no guarantee of being sensical, as 'Colorless green ideas sleep furiously' shows. Even when syntax and parts of speech are accounted for, language still possesses intension and, collectively, the comprehension which allows topical inference to occur. While comprehensions can be supplied by epistemic ML algorithms and symbolic methods like ontologies, how these are brought to bear to form the basis of what is roughly equivalent to intuition isn't remotely understood. Meaningful language production is a highly normative thing. – J D Nov 08 '21 at 22:48
  • Oh, and computers can also be programmed to handle contingency heuristically too. – J D Nov 08 '21 at 22:49
  • Good answer, I think the jokes point is a good one, because they represent creative non-algorithmic uses of language. It's interesting to look at examples of where Watson failed with Jeopardy clues: https://www.thenewatlantis.com/futurisms/watson-can-you-hear-me-significance-of – CriglCragl Nov 09 '21 at 01:19
  • @Hypnosifl: For the purposes of my argument, it doesn't really matter whether the machine uses a top-down or bottom-up algorithm. The syntactic/semantic distinction still holds. For instance, if we take (say) a neural net that's designed to do facial recognition, the software is developing a set of algorithms (a *syntax*) for comparing 3D surface data. It doesn't know that it's a 'face', much less that this 'face' is a feature of a particular object. – Ted Wrigley Nov 09 '21 at 01:23
  • @Hypnosifl: As an example, consider what would happen if we tossed dog or cat faces in with human faces in the neural net training data. I doubt the net could *on its own* determine that there are two different *kinds* of faces, or that the net could so generalize the concept of 'face' that it would be subject to [pareidolia](https://en.wikipedia.org/wiki/Pareidolia). Yet humans do this naturally, and even hold contradictions (i.e., knowing that something looks like a face but isn't a face). That is an understanding of the concept 'face'. – Ted Wrigley Nov 09 '21 at 01:32
    If you're just saying current neural nets are too simple to qualify as understanding semantics, I'd agree, but Searle's is an in-principle argument--are you likewise arguing that even a simulated neural net as complex as a baby's brain couldn't develop semantic understanding through experience the same way people do? – Hypnosifl Nov 09 '21 at 02:45
    We have a Chinese room, it's called Google Translate, and if we feed it "Jarod chips potato loves" it is perfectly able to output a meaningful sentence (at least in French, Spanish, German and Japanese) by assigning "chips" to be the family name of Jarod (as in "that guy, named Jarod Chips, loves potato"). So an algorithm is perfectly able to assign meaning to an ambiguous sentence. Now, is it the correct meaning? Maybe not, but humans are no better. Nobody who assigns a meaning to the sentence "Jarod chips potato loves" can claim to have the actually correct interpretation of it. – armand Nov 09 '21 at 04:08
    @Hypnosifl: What I'm saying is that simulated neural nets as they are currently conceived have limitations that are both logical and technological (depending on how one wants to interpolate Searle). Current computers and software, simply put, are not capable of conceptualizing the way human brains do. Any claim beyond that is science fiction; realizing it will depend on the creation of an entirely new and speculative form of computation. I could make an argument for quantum computers, if you like... – Ted Wrigley Nov 09 '21 at 05:15
    @armand: Interpreting 'chips' as Jarod's last name is bad syntax; that would make 'potato' a verb, which is something potatoes definitely are not. You could make an argument that Jarod's last name is 'potato', with 'chips' as his nickname (i.e., Jarod "Chips" Potato). That would be grammatically correct, but really silly. But Searle's point still stands: Google Translate does not 'understand' the meaning of the sentence. It merely follows an algorithm for translation, and if we put garbage in, we get garbage out. ***We*** can make meaning out of garbage; Google can't. – Ted Wrigley Nov 09 '21 at 05:22
    No, it just assumes the verb is put at the end in order to make sense of the sentence, which is exactly what you assert it can't do. Well, it just does, obviously. What does "to chip potato loves" mean anyway? What is "loves" ? You assert your reinterpretation of a flawed sentence is better than Google's, but it's just as much syntactical garbage. What allows you this interpretation, if not the assumption that the AI will always be wronger than you because you are human? i.e. you are just begging the question. – armand Nov 09 '21 at 07:17
  • With regards AI and jokes, [they](http://joking.abdn.ac.uk/) have [tried](https://arxiv.org/abs/1805.11850) and [they](https://towardsdatascience.com/can-a-robot-make-you-laugh-teaching-an-ai-to-tell-jokes-815f1e1e689c) are not as bad as some humans. – Dave Nov 09 '21 at 14:44
    You are claiming a difference between algorithmic and semantic systems based on the behavior of those systems. This misses the point. Searle is arguing that even if an algorithm were created that could perfectly mimic human behavior (within the limits of a Turing test), it still would not have semantic understanding of the language it was producing. If an AI cannot explain the kind of jokes you have given, that gap could be used to make it fail the Turing test. – Charles Staats Nov 09 '21 at 16:58
    But when you say simulated neural nets as currently conceived have "logical" limitations, what would those be? Your argument above about a neural net mostly trained on human faces and getting confused if we threw cat faces in could be seen as just a lack of sufficient wide-ranging experience to develop good recognition of category boundaries (not so different from a kid seeing a seal for the first time and calling it a 'doggie'), along with lack of sufficient size to be able to hold such a wide range of experiences. – Hypnosifl Nov 09 '21 at 18:00
    As an analogy, if an alien visited Earth in the Cambrian era when the biggest brains were probably comparable to, say, lobsters, they might make empirical observations about how these brains were easily confused by some novel types of sensory stimuli, but they wouldn't be justified in saying there was some fundamental "logical" limitation on true semantic understanding for brains formed by Earth-style biological neurons. – Hypnosifl Nov 09 '21 at 18:02
    @CharlesStaats: Interesting point. That leads us to conclude either (1) that none of us 'understand' anything we do or say since we are all reducible to Turing machines, or (2) that a Turing machine cannot perfectly model human behavior. We should be wary of potentially false premisses. – Ted Wrigley Nov 09 '21 at 18:13
  • @TedWrigley I believe in Searle's view, a biological brain can 'understand' but a perfect computer simulation of that same biological brain cannot. See also [philosophical zombie](https://en.wikipedia.org/wiki/Philosophical_zombie). Note that my understanding of Searle's views is likely imperfect. In particular, I don't know for sure if he believes that simulated beings could have the same philosophical debates we have about 'understanding' without realizing they themselves have no 'understanding'. – Charles Staats Nov 09 '21 at 18:53
    "*syntax is algorithmic a system driven by predefined rules*" This is IMO quite dubious for natural languages, specifically for English. To the best of my understanding no one has yet devised a set of rules or an algorithm that can reliably distinguish valid from invalid English syntax covring the full range of the language, Look at the attempts to formulate "rules" over on ELL.SE which often reduce to "there is no general rule, that is simply the way native speakers use this word or construction". Saying "syntax is algorithmic" hand-waves a very hard problem, which may not be soluble at all. – David Siegel Nov 09 '21 at 22:49
  • @CharlesStaats a mind is a (huge set of) simulation(s) running in a brain. So no, brains don't understand a blessed thing. Just like my car doesn't drive to work. Even if it could guide itself to my workplace, it doesn't work there. "*Is it true that cannibals won't eat clowns because they taste funny?*" – Scott Rowe Aug 29 '22 at 16:49

Short Answer

There are a number of positions outlined in your SEP link on Searle's Chinese Room that make clear philosophy has not reached a consensus one way or the other on the question of human semantic understanding. The history of AI is, in fact, an ongoing debate about the question. A great introduction to that history is Nils Nilsson's The Quest for Artificial Intelligence. I'll caution you that anyone who answers you with a strong negative or a strong affirmative hasn't even picked up and read this book. Philosophy is undecided largely because philosophy has reached no strong consensus on what constitutes understanding, and science has not reached a consensus on semantics in the brain. That being said, computers have made gains in the last couple of decades, perhaps not demonstrative of human-level intelligence, but certainly enough to listen to you and fulfill some of your needs. Essentially, though, besides agnostics, there are two camps: those who believe in a Cartesian notion of understanding that rejects anything other than humans as being capable of human-like intelligence, and an upstart crowd that is interested in artificial general intelligence and believes it is possible in theory. (Warning: my bias is the latter.) Whatever the case, you are firmly in the philosophy of artificial intelligence, a relatively new branch of philosophical inquiry less than 100 years old, given the emergence of digital computation in the late 1930s and early 1940s.

Long Answer

Getting Your Feet Wet

The dream of animating the inanimate so that it conducts itself as a human goes back thousands of years, straight into the mythology of the Proto-Indo-Europeans. An abridged accounting thereof also seems to be the intro to every book today that seeks to introduce AI. Obviously, you have two questions at play: one about Searle and the Chinese Room, and the larger issue of what philosophy says about developing machines that think.

You've cited the Chinese Room Argument, which has a number of posts on this site; start with a review of those.

Understanding Searle's argument and the arguments that reply to it, particularly the systems reply, is necessary for orienting yourself. Once you've done so, if I were you, I'd reach out and get a copy of The Philosophy of Artificial Intelligence by Margaret Boden and What Computers Can't Do by Hubert Dreyfus. If you want to see what the AGI camp is cooking up, a good recent publication called Artificial General Intelligence by Ben Goertzel and Cassio Pennachin (Eds.) offers a look into some of the (IMNSHO) flailing attempts to create architectures to imbue software with human-level intelligence traits.

As to the question of how Searle can be so sure: well, John Searle is renowned for his philosophy, and that confidence may be a function of his success as a philosopher and his lack of technical sophistication as a computer scientist. John Searle's continuation of the successes of the linguistic turn in philosophy is tough to dispute. He's written a lot about how language reflects on reality, both personal and social, but I would point out that Searle has a tool he uses to deal with the complexity of the mind called The Background. He often dismisses details right into that nebulous, diffuse thing to simplify and make his points. Overall, it's an excellent strategy for narrowing down his argumentation to what deserves focus, but the downside is that it runs the danger of making it too easy to sweep aside relevant propositions, since informal argument is governed by non-monotonic logic and defeasible propositions.

Human-Level Intelligence and the Nature of Thinking

The other part of your question revolves around coming to terms with just what it means to simulate understanding, particularly of language. As you are likely aware, Alan Turing is famous for many things, but among them is his Turing Test, which is an attempt to operationalize human semantic intelligence. As we approach 100 years, no one has been able to build a machine that passes it, even though in the history of artificial intelligence success has often been touted as just around the corner, much as fusion reactors have constantly been "30 years away" (Discovery Magazine). In fact, when Hubert Dreyfus began attacking the AI program on campus and with RAND, he noted the outright hostility of his fellow thinkers almost immediately.

Why has the promise of AI been so slow to materialize (though gains in machine intelligence have accomplished some fantastic goals lately)? Well, it boils down to what the science of linguistics has discovered about semantics. The easiest way to explain it is to say that meaning is rooted in physical embodiment, and that processing strings in a serial ALU falls short of the connectionist nature of the human brain. These are the computational details where the question of human intelligence gets gritty and where an ignorance of the materials science and the mathematical structures of computation starts to have a bearing on the philosophy of mind.

In fact, the question of what constitutes human intelligence is not only an open question in the philosophy of mind, but in psychology itself, where there are, roughly speaking, two models vying for approval: the Cattell-Horn model, which is related to the g factor and is operationalized through intelligence quotient testing, and what might be called a pluralistic notion of intelligence, most famous through Howard Gardner's MI theory, which is popular with humanists and educators. As there are adherents to harder and softer "sciences of the mind", so too does this bias reflect itself in the notion of intelligence.

Semantics and Understanding

Ultimately, the question you are after is rooted in the philosophy of language more than anything else, because the discussion of the syntax-semantics dichotomy rests there with the philosophers and scientists of language. There are a number of competing models for how exactly this stuff, thing, experience called "meaning" happens, and if you really want to understand what's involved in how semantics functions with people, I'd recommend two books to start you on your way, though they aren't easy reads. First, Ray Jackendoff's Foundations of Language, which is highly technical but makes a specific architectural argument about how embodied systems of the brain give rise to what we recognize as meaning. The second is also a tough read, but worthwhile if you really want to understand why the promise of human-level AI and language use has failed to materialize: Cognitive Linguistics by Evans and Green, which offers a comprehensive picture of how language and meaning are grounded in bodily experience.

Summation

Now, what I've offered here isn't an easy answer so much as a blueprint for understanding why most philosophers are out of their element when discussing how to implement an aspect of actual human cognition, rooted in neural computation, on systems designed around the von Neumann architecture and the Turing machine. Searle's contributions to language, semantics, and intentionality are indisputable; however, in some regards the question of engineering semantic understanding has begun to move out of philosophy and into the scientific domains of machine learning, software engineering, and neurology. As such, you will see resistance to abandoning classical notions in philosophy, like truth-conditional semantics and Platonic mathematics, that are adduced in favor of transcendental forms of metaphysics. In fact, Searle himself concedes the brain is a biological computer, but remains skeptical that our current computer technology can mimic it, which is a measured conservatism.

J D
    As an amateur philosopher: Why is there no mention of the (possibility of the) soul? If such a thing exists, it would be unsurprising that the brain cannot be duplicated by a physical computer. There are certainly plenty of philosophers who would say that humans have a soul. Is it merely that your answer is already long enough, or is there some reason to disqualify the soul from this discussion? – Spitemaster Nov 09 '21 at 21:56
    @Spitemaster A soul is largely a theological concept due to ratioempirical philosophy. That is to say, a soul is rejected by most people who take modern scientific philosophy seriously because it is a supernatural idea. And many philosophers reject the supernatural. I think you'll find the last 100 years of the Continental and Anglo-American traditions are dominated by atheists. The soul not only lacks any empirical status, but like gods, is unnecessary to explain things. Naturalism is overwhelmingly advocated by contemporary, professional philosophers... – J D Nov 09 '21 at 22:36
  • https://philpapers.org/surveys/results.pl?affil=Target+faculty&areas0=0&areas_max=1&grain=coarse There's a poll on atheism. – J D Nov 09 '21 at 22:39
    [Vitalism was the last hurrah for the scientific defense of the soul and was dispatched.](https://en.wikipedia.org/wiki/Vitalism?wprov=sfla1) – J D Nov 09 '21 at 22:41
    I wouldn't say that 50% in favour of naturalism compared to 25% in favour of non-naturalism is "overwhelming". That being said, my intention isn't to get into an argument here, so thanks for the clarification. – Spitemaster Nov 09 '21 at 23:14
  • @Spitemaster That poll is hardly more than a straw poll, and offered because of its statistics regarding atheists, so no offense taken. You don't have to believe me, but philosophers are far more disposed to atheism than the general public, particularly in the US, which has an exceptionally religious population for the modernized world. I poked around and here's something that might put you on the track of data. https://www.psychologytoday.com/us/blog/logical-take/201402/why-62-philosophers-are-atheists-part-i – J D Nov 09 '21 at 23:35
  • http://www.atheismandthecity.com/2015/06/why-are-so-many-scientists-and.html?m=1 – J D Nov 09 '21 at 23:38
    @Spitemaster, modern philosophy may not deal much with the concept of the soul, but it does deal with the concept of mind. The entire debate surrounding the Chinese Room and related problems can be described as the question of whether or not the mind can be reduced to the brain or whether there is something more to it. – David Gudeman Nov 10 '21 at 05:04
    Do you really imagine that if science gained a full understanding of how the brain "does" semantics, someone like John Searle would change his views? The whole Chinese Room argument is a giant tautology, begging the question, with absolutely no predictive power whatsoever. It simply asserts Searle's beliefs. What could possibly convince him to change those? It certainly isn't going to be science, or a talk with a true AI (if we ever manage to build one; and unless it's a true super-intelligence, presumably :D). – Luaan Nov 10 '21 at 20:27
    @Luaan: I can give a better answer. The Chinese Room operation is incapable of learning it is in error. But it's still a straw man. The reality yet remains, we _don't know_ how to know if a machine is intelligent or not. – Joshua Nov 10 '21 at 22:35
    @Joshua That would be an interesting challenge - change the rules mid-experiment and then see how long it takes for the AI to fix itself. But it also sounds a bit unfair, seeing how _humans_ are notoriously bad at adapting to such challenges :D – Luaan Nov 11 '21 at 07:47
  • Why do we need a consensus? We just need one person to figure it out. Then, we need to accept the answer. "*A camel is a horse designed by a committee.*" – Scott Rowe Aug 29 '22 at 10:50
  • @ScottRowe Broadly speaking, consensus is required for facticity. This site, to the best of its capacity, serves the function of delivering facts about philosophy. – J D Aug 29 '22 at 14:12
  • Don't confuse me with the facticity :-). The first person to get something right is right, even if 7 billion others say it's wrong. – Scott Rowe Aug 29 '22 at 14:18
  • @ScottRowe True until it's not. – J D Aug 29 '22 at 14:20
  • Nice answer, but it would be nice if the references to AGI reflected the recent paradigm shift from logical "expert" systems to bottom-up connectionist systems, which is really only about a decade old. The arguments and critiques of AGI have consequently also shifted in response but I think it's fair to say that recent bottom-up AI attempts (e.g. GPT-3) have renewed the debate on syntactic vs semantic understanding in AI. – DerekG Sep 16 '22 at 15:14

TL;DR

If we view brains as computing machines (which, for all we know, they are), there is no basis for Searle's claim.

According to the Church-Turing thesis, which is widely accepted in computer science, there is no computation that cannot be performed by an ordinary computer.

You can view it as a challenge: show me a solvable problem that cannot be solved by a computer program. So far nobody has been able to do it.

The significance of that thesis is that (if we don't account for speed and space) any computer that might exist in the universe, be it electric, quantum, or one based on technology that we cannot imagine, would be just as capable of solving a problem as the phone in your pocket.

If we consider the brain as falling in that category, then the difference between the brain and any other computer is just the software that it runs.

If we agree with all that, we can easily refute Searle's claim that, because he can perform computation that outputs Chinese without understanding Chinese, the computer doesn't understand it. The response is that it is the software that "understands" anything, not the hardware; e.g. silicon chips cannot play chess, and neither can neurons, but if we arrange them in the correct way then they both can play chess.
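
The software-versus-hardware point can be made concrete with a minimal Turing-machine-style interpreter, sketched below (the rule table is a made-up example): the fixed loop plays the role of the "hardware", and everything task-specific lives in the table it is handed.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a transition table on a tape until the machine reaches 'halt'."""
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        # rules maps (state, symbol) -> (next_state, symbol_to_write, move)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# "Software" for one task: flip every bit of a binary string.
invert_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_tm(invert_bits, "10110"))  # -> 01001_
```

Swap in a different rule table and the same trivial loop does a different job, which is the sense in which the behaviour belongs to the software rather than the hardware.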

The Chinese room experiment only works if we don't consider brains as computing machines i.e. if we think that there is something happening in our brains that cannot happen in any other type of system. However, nobody has been able to provide evidence for such a thing happening.

Jencel
    Here is a list of problems that cannot be solved by a computer program: https://en.wikipedia.org/wiki/List_of_undecidable_problems – David Gudeman Nov 10 '21 at 04:57
    This answer shows no familiarity with the literature and no real understanding of the problem. I recommend deleting it. – David Gudeman Nov 10 '21 at 04:59
    If you have a specific criticism, voice it. Regarding the undecidable problems, those cannot be solved by human brains, so their existence by itself doesn't prove anything. – Jencel Nov 11 '21 at 10:50
    Notice that I say "there is no computation that cannot be performed by an ordinary computer." and not "there is no mathematical problem that cannot be solved by computer program" – Jencel Nov 11 '21 at 10:51
  • @DavidGudeman Philosophy-of-mind shows no familiarity with the modern CS and neuroscience literature and no understanding of the problems they address. I recommend deleting it. (That's sarcasm; my point is, I find value in both the field and this answer in particular.) – tsbertalan Nov 14 '21 at 16:03
  • I don't think Searle is actually claiming that a computer program couldn't exhibit all the same functional behaviors as a human (including passing the Turing Test in Chinese), his argument is more about internal subjective qualities like consciousness and understanding. Basically he seems to be arguing the room would be something like a [philosophical zombie](https://plato.stanford.edu/entries/zombies/). And this argument would be compatible with the idea that the functional behaviors of human brains can be fully explained in computational terms. – Hypnosifl Nov 14 '21 at 17:15
    I agree with you, and that is what I am trying to say with my answer: that it all boils down to the question of whether you believe that zombies/robots/animals are qualitatively different from humans. Everything else in Searle's argument, like the usage of a foreign language, seems superfluous. – Jencel Nov 14 '21 at 22:48
  • Otherwise, the question is one which cannot be answered, as one side wants proof that robots are humans (of which there can be none) and the other side wants proof that robots are not like humans (of which, again, there is none). – Jencel Nov 14 '21 at 22:52
  • 'Notice that I say "there is no computation that cannot be performed by an ordinary computer." and not "there is no mathematical problem that cannot be solved by computer program"' You also said: "show me a problem that cannot be solved by a computer program. So far nobody has been able to do it." This is false, and shows a lack of understanding of the issue. The space of computability is complex, and there are certainly problems that cannot be solved by any conceivable machine. Lots more problems cannot be solved (at scale) by ordinary computers--they are called the NP problems. – David Gudeman Nov 30 '21 at 05:35
  • Your discussion seems to think that Searle's argument assumes that the brain is not a computer. It does no such thing. If it did, it would be a circular argument. What he does is lay out a *specific* example where understanding is simulated without any understanding being present. This is not supposed to stand for all instances; it is only a single instance. As to your suggestion that "the software understands", this is pseudo-religious faith-based speculation. You have no evidence whatsoever for such a remarkable claim, other than that you think you need it to refute Searle's argument. – David Gudeman Nov 30 '21 at 05:44
  • Added the word *solvable* in the post. Again, the existence of problems that are unsolvable in principle is a trivial fact which is irrelevant to the argument. – Jencel Nov 30 '21 at 13:21
  • @DavidGudeman: "You can view it as a challenge: show me a solvable problem that cannot be solved by a computer program. So far nobody has been able to do it.". Your list of examples on Wikipedia are all _not solvable_. – gnasher729 Dec 01 '21 at 00:17
  • @gnasher729, the text of the answer has been changed to add the word "solvable". That word was not in there when I left my comment. If it had been, I wouldn't have been so certain that Marinov is in over his head here. However, you don't really know, without begging the question, that those problems are unsolvable, only that they are unsolvable by a machine. Could a mathematician with infinite time and infinite attention to detail solve the Halting Problem for every Turing machine? I don't know of any proof that he couldn't. – David Gudeman Dec 02 '21 at 01:31
  • Here is the proof: because you said "mathematician" I assume this person will be using some kind of algorithm to solve the problem, or a bunch of different algorithms (same thing). But every algorithm can be encoded in a Turing machine (Church-Turing thesis). If we encode this mathematician's algorithm, we would get a Turing machine that is capable of solving the Halting problem. But we know that there is no Turing machine that solves the Halting problem, so a human mathematician cannot do it either. – Jencel Dec 02 '21 at 07:03
  • @DavidGudeman There is no mathematician with infinite time and infinite attention to detail. – gnasher729 Dec 02 '21 at 16:16
  • @gnasher729 There is no machine with infinite time or infinite storage either, yet that abstraction is used in defining what "computable" means. – David Gudeman Dec 20 '21 at 17:21
  • Not true, Turing Machines must complete computation in finite time. – Jencel Dec 20 '21 at 22:09
  • @BorisMarinov -- the unsolvability of problems in our world shows that our world is not computable. The AI project assumes the world is computable, and that intelligence is computable; unsolvability refutes one key assumption of the AI project, and brings the other under massive doubt. – Dcleve Sep 16 '22 at 13:51
  • "The AI project assumes the world is computable". This is the first time I have ever heard that claim. What is the source? The "AI project" is to make an artificial intelligence that is as powerful as a human, not one that is capable of solving all problems in the world. – Jencel Sep 17 '22 at 14:11
3

I think the simplest way to explain it: syntax can be parsed computationally, yet computation can be abstracted to ridiculous or "funny" instantiations. Since we don't know how mental states (e.g. semantic understanding, consciousness, awareness, etc.) arise from the physical--"the unfathomable gap between physical process and subjective awareness which mocks our search for the filaments that bind the corporeal and the mental together" [1]--should we put any weight at all on the idea that computation produces mental states (e.g. semantic understanding)? "There is no more amazing and puzzling fact than that of consciousness" [1]. We claim ignorance about how neurons and the brain give rise to consciousness, but if there is one thing we strongly believe about consciousness, it is that the brain is involved: "there is little doubt that humans have a mental life, because we have brains" [1].

We are not prepared to make that jump for something other than brains giving rise to consciousness, even though we do not know how brains do so. Surely computation alone can't be it; just think of all the "funny instantiations" of computation beyond a person in a room shuffling cards: water troughs, grains of sand being moved around, etc. The Chinese Room experiment is just another "funny instantiation" of computation.

[1] all quotes taken from: Maudlin, T. (1989). Computation and Consciousness. The Journal of Philosophy, 86(8), 407. doi:10.2307/2026650

J Kusin
  • 2,052
  • 1
  • 7
  • 14
3

A machine could conceivably have its own semantics. This would only require that it had its own internal representation of the world. However, what would be the use of that? Each human obviously has his or her own private mental representation of the world. Despite this, we share most of it, simply because we are biologically very similar to each other and we are gregarious, so that we share large chunks of our lives. We all understand what the Sun is because there is only one Sun and we have broadly the same experience of it. Thus, we end up with broadly the same semantics. There are differences, but they represent a small subset of the whole--contrary to what controversies on the Internet, or indeed in real life, may suggest.

So, the problem is not so much that of a machine having its own semantics but of having semantics sufficiently close to those of a human being, at least if we want humans and machines to understand each other. The difficulty, then, is that the production of a human semantics remains largely an unknown process. It may not be impossible to do something comparable in principle, but it is, for now at least, probably well beyond our technical capabilities, in particular in terms of the massive data that the human brain processes continuously.

Speakpigeon
  • 5,522
  • 1
  • 10
  • 22
  • Re: massive data, just as a digital image or sound gets more convincing with more detail and variation, so our own experience seems more 'non-robotic' with more details, experience time, variation and so on. We are just bamboozled into thinking we can't be 'machines'. Everything that moves is a machine. We have to turn Searle's contention around and look closely at ourselves. Of course, we are so complex that we will stubbornly insist on our divinity! – Scott Rowe Aug 29 '22 at 10:24
  • 1
    @ScottRowe (1) "*bamboozled*" No. We may in the future come to call "computer" machines more like our brains and not much like today's computers, but for now no computer is like a human brain. This may be what is meant. The point is that it is fallacious to claim that a human brain is a machine when you don't know how it works. Humans don't care much about metaphysical claims. We are pragmatic, and reducing the brain to a dumb machine just seems a very bad idea. People don't want to have a computer telling them what they should do. (2) "*Everything that moves is a machine*" Private language. – Speakpigeon Aug 29 '22 at 10:48
  • Their brain is telling them what to do. Not liking it doesn't make it false. The sooner we accept that we are very small parts of a big whole, like ants, the sooner we will stop hitting each other with sticks and get along. Like ants. It's a big, not very friendly universe. At this point, the individualism that helped us survive until now will probably get us all killed. A little mechanical thinking would be an improvement. – Scott Rowe Aug 29 '22 at 10:54
  • @ScottRowe "*Their brain is telling them what to do*" Sure, and it is their brain, and most of the time they don't even know that. (2) "*the sooner we accept*" You might be able to read the runes but homo sapiens has survived longer and has more experience than you ever will. You're trying to use logical reasoning to second guess natural selection and life. You're massively outgunned. The human species is a much bigger machine than your brain. – Speakpigeon Aug 29 '22 at 11:11
  • I was agreeing with you. – Scott Rowe Aug 29 '22 at 13:03
  • 1
    @ScottRowe Sorry for that, I am very good at disagreeing with everybody. – Speakpigeon Aug 29 '22 at 16:54
2

Let us call a comprehending agent a thinking being possessing "semantic understanding" of the meaning of words arranged propositionally. Suppose now that the input stream of a comprehending agent contains a word the agent has not encountered before. When a translation machine encounters an "original" or nonsense word, one for which an appropriate translation might be inferred from contextual clues but which doesn't reflect any particular instance of text in the "real corpus" it is intended to translate, what should its output be? A translation machine might simply output a null result, or enter undefined behavior; yet what should the output of a comprehending agent be in that case? "Perfect" translation ability seems to imply, in other words, an extra creative step for which it would be challenging to specify an explicit rule.

The significance of translation, the difficulty and complexity involved, is in my view generally understated, and it involves all the problems that the phenomenologists, deconstructionists and psychoanalysts have raised surrounding the 'profound depths' at work in the genesis of local transcendental structure.

Perhaps the limits of so-called "undefined behavior" for software are suggestively similar here to those of language's own outer penumbra -- that is, of nonsense and hapax legomena, which may play a more important role in the construction of "sense" than we imagine. But suffice it to say that all these analyses of language do seem to me to raise the question of the origin and value of sense as a distinct entity; and moreover it seems to me that even the most diligent efforts of philosophers of mathematics, Frege and Russell, do not quite succeed in resolving the ambiguity at the heart of some of these axiomatic, sense-grounding systems of reasoning, like ZFC.

Joseph Weissman
  • 9,432
  • 8
  • 47
  • 86
2

Another argument I've seen against the experiment is that the "book of instructions for manipulating the symbols (the program)," one capable of producing responses like a native Mandarin speaker's, could not in fact exist. Natural human languages don't work that way, and there are infinitely many possible sentences in Chinese. Even if you somehow found a large enough subset of Chinese to fit into your book of rules, the man in the box would never be able to pass himself off as a native speaker, because he could only give identical answers to identical questions. This approach failed to work for English, and my understanding is that it would not work for Chinese either.

This might be more a limitation of the analogy than a decisive refutation of the underlying point, but it turns out that a system that works by looking up and following a list of grammar rules doesn't produce responses convincing enough for us to regard it as "understanding" a human language.
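For illustration, here is a toy sketch (not anyone's actual system, just a hypothetical lookup-table responder) of why a pure rule-lookup approach gives identical answers to identical questions and falls over on anything outside its finite rule book:

```python
# A toy lookup-table "conversationalist": every question maps to one fixed
# answer, and anything outside the finite rule book gets a canned fallback.
RULES = {
    "how are you?": "I am fine, thank you.",
    "what is your name?": "My name is Li Wei.",
}

def respond(question: str) -> str:
    return RULES.get(question.strip().lower(), "I do not understand.")

print(respond("How are you?"))  # always exactly the same answer
print(respond("How are you?"))  # ...no matter how many times you ask
print(respond("Tell me about your childhood."))  # falls outside the rules
```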

Davislor
  • 816
  • 5
  • 9
  • So how do Chinese babies learn to speak Chinese? They observe Chinese people and build up a set of rules in their brain. The list of "grammar rules" is just too short to handle the English language completely. – gnasher729 Nov 09 '21 at 10:33
  • 1
    @gnasher729 I think I’m saying something different here: that approach has been tried for English, and it doesn’t work. If you tried to make a book of rules that you could follow to write cogent English responses to arbitrary questions, it wouldn’t pass the Turing Test. And that’s not just a matter of the book needing to be billions and billions of pages long; the approach itself is flawed and not like how humans talk. – Davislor Nov 09 '21 at 18:42
  • @gnasher729 babies don't build up a set of rules, they form associations, links. Infinitely flexible links, like the connections between objects via fields, waves, etc. Binary logic doesn't cut it in the physical world, so it doesn't work for minds either. – Scott Rowe Aug 29 '22 at 10:41
2

The Chinese Chinese Room

The main problem with the Chinese Room argument is that it presupposes a massive, massive thing: an algorithm which provides "Chinese language responses". We are just supposed to accept this black box without question so we can focus on the "real issues" in the debate. But we can utterly destroy the Chinese Room argument with one simple trick! We just define where this algorithm comes from!

You see, the implicit assumption is that the "algorithm" is somehow unnatural...a cold and lifeless product of human ingenuity which cannot possibly reflect the beauty and glory of human consciousness. But why not? What if the "algorithm" were nothing more than a precise description of an actual Chinese brain??? What if the "Chinese Room" were nothing more than an actual, ordinary, Chinese-speaking brain whose operation was taken over by an English-speaking homunculus that executes exactly the same actions as the corresponding Chinese brain? Is Searle still going to insist that the homunculus-in-the-shell really doesn't "understand" Chinese? Of course, the homunculus doesn't necessarily understand Chinese, but it obviously doesn't need to.

The Searle Room

Of course, we don't need to bring Mandarin (or Cantonese, or any of the many other Chinese languages) into it. We can just replace John Searle's brain with an alien-speaking homunculus and an algorithmic description of his brain. Then we can turn his argument on its head and insist that brains don't understand English either. And if that's the case, then brains aren't special, and are thus on the same level as computers/AI.

Of course, Roger Penrose implicitly understood the danger of this argument, which is why he went down the long road of trying to show that there is no algorithmic description of the brain because brains are special, by harnessing quantum effects. That's a whole different thread, so I'll just leave it at that.

Lawnmower Man
  • 519
  • 2
  • 3
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/131364/discussion-on-answer-by-lawnmower-man-why-is-it-impossible-for-a-program-or-ai-t). – Philip Klöcking Nov 11 '21 at 23:49
  • Right, it is a kind of Ship of Theseus argument. But neurons and dopamine don't understand Chinese either. No *part* of the brain understands Chinese. No part of the computer / algorithm will understand it. – Scott Rowe Aug 29 '22 at 10:38
2

Searle's argument was framed at a time when we only had Symbolic AI, which is built on rule-based logic. This sort of system is inflexible and non-dynamic: it works statically through proof-theoretic systems and/or truth tables, and every extension or addition of rules needs to be implemented manually by hand. One example of a symbolic-logic environment that carried high hopes for AI at the time, but ultimately fell short, was the Prolog programming language.

The limitations of rule-based systems do not apply in this day and age. We now build machine learning systems that mimic the neural networks of the brain and are able to learn semantic context via supervised, semi-supervised, or even unsupervised learning processes. One example is the Python library spaCy, which works in the area of NLP (Natural Language Processing).

Through this computational-linguistics approach, the machine can use neural networks to continually learn the semantic context of, say, biological papers, scientific papers, newspapers, or any other piece of text. For instance, it can extract what you need based on the meaning of the text (semantic similarity), which is achieved by creating word embeddings: vectors that act like "maps" for terms.
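As a concrete but deliberately modest illustration, here is a minimal sketch using spaCy's built-in word vectors; it assumes the `en_core_web_md` model has been installed separately, and the similarity score it prints reflects vector geometry, not "understanding" in Searle's sense:

```python
# Minimal sketch of semantic similarity via word embeddings in spaCy.
# Assumes: pip install spacy && python -m spacy download en_core_web_md
import spacy

nlp = spacy.load("en_core_web_md")  # the medium model ships with word vectors

doc1 = nlp("The doctor prescribed an antibiotic to the patient.")
doc2 = nlp("The physician gave the sick man some medication.")
doc3 = nlp("The stock market fell sharply this afternoon.")

# similarity() compares averaged word vectors and returns a score
# (roughly 0 to 1); related meanings score higher than unrelated ones.
print(doc1.similarity(doc2))  # relatively high: related medical content
print(doc1.similarity(doc3))  # lower: unrelated topic
```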

To conclude, it is becoming more feasible to make "hard" claims about AI understanding semantics than it was in the era of Searle's Chinese Room argument. Moreover, the field of computational neuroscience now builds digital models of parts of the brain that capture human processes of learning, and even cognition. There is indeed hope that machines will be able to "understand" meaning in ways that go beyond Searle's argument.

bodhihammer
  • 1,066
  • 7
  • 15
2

The problem with the Chinese Room argument is that the man only receives input from a single source, the message, whereas semantic understanding requires that the message be associated with other inputs.

So, almost by definition, the room can't have semantic understanding.

However, if you added extra inputs, say time of day, weather, and memory, the man might soon come to associate "good morning" with sunny mornings.

You could go further and remove the external inputs, simply encoding the association in the instructions using English. This would amount to a translation, and the man, having learned the translation, would obviously have a semantic understanding of Chinese.

It's easy to see that a computer program can also be given external inputs and memory. Indeed, you can imagine a very simple program, "good morning" = sunPosition && noClouds, which on the face of it is semantic understanding.
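To make that toy idea concrete, here is a minimal sketch with hypothetical sensor inputs (the names `sun_elevation_deg` and `cloud_cover` are invented for illustration): the program ties the phrase "good morning" to conditions in the world rather than to other symbols alone.

```python
# Toy "association" of the phrase "good morning" with external conditions.
def is_good_morning(sun_elevation_deg: float, cloud_cover: float) -> bool:
    """Crudely: the sun is up but still low, and the sky is mostly clear."""
    return 0.0 < sun_elevation_deg < 30.0 and cloud_cover < 0.2

# A sensed state the program would label "good morning".
print(is_good_morning(sun_elevation_deg=12.0, cloud_cover=0.05))  # True
print(is_good_morning(sun_elevation_deg=60.0, cloud_cover=0.9))   # False
```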

Ewan
  • 373
  • 1
  • 6
2

Searle isn't making the point that programs cannot be semantic; he is starting from the definition of a computer program - which is formal, a.k.a. syntactic - and then drawing parallels to human usage of formal systems (mechanical instructions) and comparing that to actual understanding.

A good example would be learning the material for an exam by heart (this would be syntactic/mechanical) vs. taking your time to actually understand the material (this would be the semantics). Understanding is not the 'aha!' feeling you get from deeply engaging with the material, but rather it comes with that feeling. When you understand the material on the exam, it is absolutely not the same as having learnt it by heart.

This is basically the same case as in the Chinese room argument. One could give any other example of people following some kind of computer program - some kind of specific step-by-step set of instructions - and arrive at the same conclusion as Searle did with his Chinese room, but with some intuition from personal experience to back it up (we've all had to follow some kind of step-by-step list of instructions at some point, I am sure).

Gabriel
  • 51
  • 3
2

There are many answers already posted here, which offer significant insight. However, I don't see any that seem to fully understand the point of Searle's thought problem, and many that take implausible/untenable positions.

I fully answered this question for a different asker, here: How does the Chinese Room Argument handle the pile of sand paradox?

Searle's use of "semantics", which is a purely linguistic term, may be distracting some of the posters here from understanding his point. One CAN do language without consciousness, but Searle is working with a definition of understanding such that one cannot do "understanding" without consciousness, and he was using "semantics" as a stand-in for understanding.

Understanding involves competence, awareness of the answer provided by competence, and I/O. The "books" in the room have competence, but no awareness. The operator has awareness, and operator plus mechanisms in the room do I/O.

Most of the answers here that reject Searle's argument also reject that understanding requires awareness. They are explicitly behaviorist answers. They treat consciousness as irrelevant, and humans as reductionist machines. Note, however, that philosophy, psychology, and science in general have rejected behaviorism, and also reductionism; see section 5 of the SEP entry on scientific reduction: https://plato.stanford.edu/entries/scientific-reduction/ Note also that the very clear evolutionary tuning of consciousness requires that consciousness be causal, so this dismissal of causal consciousness is in contradiction with essential biological principles.

I also noticed that most of the rejecters of Searle's thought problem accused Searle of being anti-science and a dualist. This is explicitly untrue. Searle is an advocate of non-reductive physicalism. He considers consciousness to be emergent, and non-computational, hence consciousness cannot emerge from the computations that functionalism appeals to. His alternative position is that there is some as yet unknown feature of biological systems that allows consciousness to emerge from them.

Of the responses to Searle's thought problem cited here, the only ones that address the issue of awareness are the citations of Chalmers. Chalmers understands Searle's argument: that by separating the competence from the awareness and I/O, Searle was highlighting the implausibility of tying consciousness to demonstrated competence. Chalmers is the rare philosopher who grabbed this bull by the horns. He basically admitted that competence alone (the books) does not have consciousness, but instead claims the room ASSEMBLY has consciousness: that the room system is conscious when the operator implements the functions in the books and does the I/O. This is a rare view, as the room SYSTEM is not the sort of thing that most philosophers will admit can support consciousness.

Dcleve
  • 9,612
  • 1
  • 11
  • 44
1

tl;dr The Chinese room argument is pure silliness, on par with flat-Earth theory. Without having done a formal survey, it's my general understanding that those in the field largely disregard it as an anti-intellectual position.


The "Chinese room argument" is pure silliness.

Searle's argument is basically:

  1. Assume that AI are mindless machines.

  2. That assumption can't be disproven because any evidence to the contrary might be the consequence of a script-like algorithm that merely sounds human (or sounds like it understands Chinese).

  3. Because the assumption can't be disproven, it must be correct.

Searle's argument is just a copy/paste of the Solipsist argument:

  1. Assume that other people are mindless zombies.

  2. That assumption can't be disproven because any evidence to the contrary might be a consequence of a script-like behavior that merely sounds human (or sounds like it isn't a zombie).

  3. Because the assumption can't be disproven, it must be correct.

Wikipedia lists a bunch of criticisms of Searle's argument, not because it needs further debunking, but rather because there are so many things wrong with it.

Basically, it's Vitalism.

Nat
  • 1,930
  • 1
  • 10
  • 23
1

It's not at all impossible for an AI to have semantic understanding. All semantics are preceded by a strict syntax, of sorts, but instead of such an AI "reading" the input, it is the input.

If I prick your finger and your body has learned to react, has it not understood the semantics of the event? Such an event is associated with pain or damage, or whatever, yet it is strictly an operant-conditioned response -- there was no other "meaning".

This subtle inflection (reads the input vs. is the input) is the basis for understanding. Modify Searle's argument as follows: a conscious agent scans the Chinese characters and reads each brushstroke of each glyph -- in order to perform the right categorization (of the glyph), they will need semantic understanding of the glyph (via its "syntax" of brushstrokes).

Marxos
  • 735
  • 3
  • 12
  • 1
    "*Be the change you wish to see in the world.*" Like, conscious! – Scott Rowe Aug 29 '22 at 10:30
  • 1
    I'm leaving out the detail that such "is"-ness requires a separate dimension of interaction, like a primitive emotional feedback that would train the AI to read those brushstrokes by linking it to the meaningful events of Chinese history. – Marxos May 11 '23 at 19:19
1

At the risk of appearing naive, I would just note to the questioner that in all these very good answers, it is not entirely clear what is meant by "semantics" and "meaning."

I believe Searle was, many years ago, introducing into a much more buttoned-up analytical philosophy questions that are now more highly developed with the reintroduction of some versions of metaphysics and even idealism into Anglo-American curricula.

Questions of "meaning" are strictly excluded from Shannon's model of information, upon which computer "syntax" is still based. Like Newton's theory of gravity or Jevons's theory of utility, it offers a mathematical modeling and forbears from any "substantialist" or "essentialist" hypothesis of what the models quantify.

Searle's argument may have been necessary within the woefully constricted tradition in which he taught, but I believe would have been thought silly and redundant in the so-called "Continental" traditions since Husserl.

It is a bit like trying to compare Shannon's mathematical definition of information with Walter Ong's. In the latter, there is an irreducible bearer of "experience," the body, the physical vibrations produced by the Word spoken between bodies.

This experience is tragically firewalled. I cannot actually "feel your pain." It can be "sympathetically" but not syntactically transferred. Nor can the machine mimic actually "living" in the sense of undergoing the complex semantic interactions that enable it to reproduce itself both physically and "inexactly." (Von Neumann's machine replicators aside.)

In my own informal reading, Searle is merely indicating the exclusion of these many traditional philosophical issues from the path taken by analytical "philosophy" in his day. Though I have not read him much, Searle's fellow Californian Hubert Dreyfus may have somewhere written a contemporaneous rejection of the whole basis of the Chinese Room "problem."

Numbers are very mysteriously "equal" and can thus fit into "equations." But experiences of bodies over time at some level defy equivalence, cannot ever be exactly the same, and so can never be fully modeled in discontinuous quantities.

Nelson Alexander
  • 13,331
  • 3
  • 28
  • 52
0

What is consciousness?

In simplified terms, if you have an AI machine (no matter how complex it is), it can be described by three things: the state of the machine (both internal and external) S, the input into the machine I, and the output O. The output is a function of the input and the state, O = f(I, S), and any randomness is modeled through the state (pseudo-randomness). In other words, the output is deterministic.
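As a small illustration of this model (just a sketch, not a claim about any real AI system), the state machine below is fully deterministic: the "randomness" is a pseudo-random seed folded into the state S, so the same input and state always produce the same output.

```python
# O = f(I, S): the output is a pure function of input and state; any
# "randomness" lives inside S as a pseudo-random seed.
import random

def step(inp: int, state: dict):
    rng = random.Random(state["seed"])            # pseudo-randomness from S
    output = (inp + state["counter"] + rng.randint(0, 9)) % 10
    new_state = {"seed": state["seed"] + 1, "counter": state["counter"] + 1}
    return output, new_state

s = {"seed": 42, "counter": 0}
print(step(3, s))  # identical (I, S) ...
print(step(3, s))  # ... always yields an identical output and next state
```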

The model described above holds even for the most complex neural networks, and as we can see it simply has no free will. Even if we assume it could learn (i.e. change its own algorithm), this is still done deterministically: with a certain input and state our AI machine would change itself, but only in a prescribed manner. Again, note that even if we include randomness in this change, that randomness is still part of the state S, and therefore included in our basic equation.

Since our AI machine does not have free will, it follows that it cannot create; in other words, any new structure made by this machine would simply be pre-programmed into it. At any given point in time, the machine would have a set of patterns P and a set of modifications M. The machine could apply those modifications to the patterns, creating a set of "new" patterns Pn, but that set is already pre-determined by the initial sets P and M.

What does this have to do with semantics? Semantics is simply the study of meaning. You have certain phenomena (words, sounds, letters, pictures ...) standing in for something else. For example, the word "dog" (both written and spoken) symbolizes a certain animal species. Dogs have hair and four legs, and they are carnivorous mammals. The same is true of cats. Yet (almost) no human would call a dog a cat. Ships, on the other hand, have no legs or hair and are not animals at all, yet humans call some of them iron dogs! How would you explain to an AI what the word "dog" means?

Humans are irrational and illogical, and formal logic itself has limits, as shown by Gödel's incompleteness theorems and Tarski's undefinability theorem: in any sufficiently strong formal system you cannot completely define truth. You will have some truths (and some falsehoods) that are unprovable. Not everything can be defined. Yet, in some strange way (intuitively), humans do differentiate between the two. Zen Buddhists call that the sound of one hand clapping. It defies explanation because it cannot be defined; it is absurd but yet compelling. And it is completely incomprehensible to an AI because it cannot be translated into a formal language.

rs.29
  • 1,166
  • 4
  • 9
  • "Yet, in some strange way (intuitively) humans would would differentiate between the two." – Are you claiming that humans are capable of intuitively determining whether any statement is true or false? – Tanner Swett Nov 09 '21 at 12:34
  • @TannerSwett Yes, of course. That is a whole point of intuition ;) – rs.29 Nov 09 '21 at 19:55
  • @TannerSwett You should ask: Are you claiming that humans are capable of intuitively determining _correctly_ whether any statement is true or false? – gnasher729 Dec 01 '21 at 00:13
  • 1
    @gnasher729 Of course not (with 100% precision). But they are not capable of 100% precision when determining truth rationally either. In other words, both intuition and reason are limited in their capability. – rs.29 Dec 01 '21 at 11:17
  • 1
    Right, it only needs to be "good enough", not perfect. "*I don't have to outrun the bear, I only have to run faster than you.*" – Scott Rowe Aug 29 '22 at 10:45