2

I thought an important feature of the Turing test was that the situation was exactly equal for each contestant, human and computer. The interrogator communicates with each using a teleprinter. In his 1950 paper, when talking about the interrogator communicating with player A in the imitation game, Turing writes: "The ideal arrangement is to have a teleprinter communicating between the two rooms [interrogator's and player A's]", and then in the next paragraph: "We now ask the question, 'What will happen when a machine takes the part of A in this game?'".

So there's one teleprinter in the human's room and another in the computer's room, and the interrogator types the questions on their teleprinter and gets printed responses back from the contestants. Everything is equal except one contestant is a human and the other a machine.

But the computing machine has no sensory apparatus. It can't see the questions printed by the teleprinter in the computer's room. If it can't see the questions then it can't understand them. In fact the computer must be wired directly into the interrogator's teleprinter, and the computer gets voltages - not words. The computer might have its causality defined by a human programmer (by programming the computer) such that the computer sends voltages back to the interrogator's teleprinter and words are then printed by it, but still, the computer gets voltages, not words.
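
(To make the point concrete, here is a rough sketch in Python of the encoding chain I have in mind. The 8-bit, ASCII-style code is purely an illustrative assumption; a real teleprinter line would use Baudot codes and timed pulses, but either way the machine only ever receives high/low levels, never words.)

```python
# Rough sketch (illustrative assumptions only): what the machine "gets"
# when the interrogator types a question. We assume an 8-bit ASCII-style
# code for simplicity; a real teleprinter would use Baudot signalling,
# but the point stands: the machine receives bit patterns (voltage
# levels), not words.

def to_voltage_levels(text):
    """Encode each character as 8 high/low levels (1 = high, 0 = low)."""
    levels = []
    for ch in text:
        code = ord(ch)                                   # character -> number
        levels.extend((code >> i) & 1 for i in range(7, -1, -1))
    return levels

def from_voltage_levels(levels):
    """Group the levels back into 8-bit characters for the printer."""
    chars = []
    for i in range(0, len(levels), 8):
        byte = 0
        for bit in levels[i:i + 8]:
            byte = (byte << 1) | bit
        chars.append(chr(byte))
    return "".join(chars)

signal = to_voltage_levels("How are you?")
print(signal[:16])                   # the machine's input: just 0s and 1s
print(from_voltage_levels(signal))   # what the interrogator's printer shows
```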

Since the causality of the computer is defined by the human programmer, doesn't that mean that the Turing test, as Turing describes it, actually tests the intelligence of two humans, the human contestant and the computer programmer?

Roddus
  • "But the computing machine has no sensory apparatus." - create a robot with cameras, image from which is handled by the AI. Well, in order to [artificially] create a great intelligence ones the creators themselves should be very clever. – rus9384 Aug 01 '18 at 00:12
  • Yes, a robot with human-like sensory apparatus should be the computer contestant, but there is still the question of to what extent the behaviour of the robot is dictated by the human programmer. Even with a robot whose causation is defined or largely defined by a human, the TT is still testing the intelligence of two humans, isn't it? – Roddus Aug 01 '18 at 00:30
  • It's hard to say if it's simpler, harder or exactly as difficult to create an intelligence as good (or bad) as one's own. But if the third variant is false, the test will be unfair in comparing the intelligence of the creator with that of the contestant. – rus9384 Aug 01 '18 at 00:35
  • You have my vote. The Turing test tests the ability of programmers to pass it. If the programmer cannot pass it then they are not going to be able to build a machine that does. Suppose as the human I were to ask 'What makes you angry'. Nothing would, obviously, so to pass the test the machine would have to be programmed not to answer questions as an honest human being would. I suspect that it's generally agreed these days that it is not an effective test of anything more than the programmer's skill at deception, but I may have just stumbled on a few unrepresentative articles. –  Aug 01 '18 at 11:46
  • Empirically, it's easy to write a program with unexpected behavior. It's also possible to write a machine learning algorithm (like an artificial neural net) that mere humans can't figure out, because the knowledge is expressed as a collection of numbers bearing no obvious relationship to what the machine is doing. It's possible to write a program with unexpected and highly useful behavior, such as template metaprogramming in the C++ language. – David Thornley Aug 01 '18 at 15:02
  • @PeterJ: All the alleged Turing tests I've read about have been cases of people not being able to tell if something is a computer or a human, often with forewarning that the "human" has certain restrictions. Turing intended a session with a tester, a human, and a computer. Whether success in this case is deception or the creation of a real mind is far too large a question for a comment. – David Thornley Aug 01 '18 at 15:05
  • @DavidThornley, human-like behavior can hardly be described as unexpected. – rus9384 Aug 01 '18 at 18:03
  • @rus9384, human-like behavior can indeed be unexpected. I wouldn't expect it out of a mailbox, for example. In this case, I mean that the programmer(s) might have expected some behavior, but got better than they expected. It is possible to write a program, such as a neural net, that will get results the programmer(s) will not understand. – David Thornley Aug 06 '18 at 16:49
  • @DavidThornley, well, if NNs become that well developed, people will probably upload their minds into those NNs. – rus9384 Aug 06 '18 at 17:22

5 Answers

3

The Turing Test is perhaps best understood as a thought experiment aimed at answering the question "if something purely mechanical could display all the perceptible signs of consciousness/intelligence, would there be any valid reason to deny it possessed those qualities?" Or, to put it perhaps more correctly, "is there any meaningful definition of intelligence other than 'able to display the empirical signs of intelligence?'"

Turing's own answer is "no." Who constructs the machine, and the details of how the machine communicates with the world, are peripheral to Turing's aim, which, beyond the immediate question above, is to demonstrate that human intelligence itself admits a purely mechanical explanation; it doesn't require any mystical or supernatural soul to animate it. Turing isn't primarily concerned with the competitive aspect of the Test; it's merely a vehicle for this idea.

The Turing Test is most easily understood in the larger context of the 20th century British and American philosophical push towards redefining all concepts solely in terms of their empirical traces. There are many people who reject this, and for a variety of reasons. Most criticisms of the Turing Test, including your own, are perhaps best understood as disagreements with Turing's (still controversial) fundamental assumptions (since any practical quibbles about the implementation of his test are largely irrelevant to his larger point). He did anticipate some of these disagreements and formulated replies; you may find those of interest.

Chris Sunami
  • You say Turing seeks to show “human intelligence itself admits a purely mechanical explanation”. Do you mean *behavioral* explanation? Intelligence as internal process/structure (the common concept) might still be mechanical with no implication of behavior. My problem with intelligence-as-behavior is that it completely fails to explain the *inner* processes/structures that yield human-like general intelligence, and without that knowledge, how could AI create genuine machine intelligence? Doesn't accepting any process/structure that yields some intelligent behavior just avoid this problem? – Roddus Aug 03 '18 at 00:02
  • 1
    I'm not defending Turing's position, just trying to explain it. He believed that intelligence itself would eventually be shown to be an artifact of a Turing Machine. Towards that end, he sought a redefinition of intelligence wholly in terms of its outward signs. Your objections are rejections of Turing's position; they cannot be reconciled with it. – Chris Sunami Aug 03 '18 at 13:17
  • Yes, Turing wasn't trying to explain intelligence as commonly conceived (i.e., something internal) but to redefine the meaning of the term "intelligence". He says as much in his 1950 "prediction": "Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted". Which isn't a prediction but a redefinition. How can redefining "intelligence" as behaviour explain how to make a machine intelligent? Doesn't the TT let AI dodge this? – Roddus Aug 03 '18 at 22:56
  • It's a basic philosophical disagreement. For someone like Turing, the talk of something "internal" is incoherent anyway. Just as new definitions of logical operators cleared away centuries of fuzzy thinking, and paved the way for a science of logic, he thought he could do something similar for the concept of "intelligence" by redefining it. – Chris Sunami Aug 06 '18 at 13:39
  • @Roddus: How could you observe intelligence in others except by observing behavior? General educated opinion can and has changed dramatically over the centuries, and word use does also. Turing may have been predicting that the definition of "thinking" would be more exact by 2001. – David Thornley Aug 06 '18 at 16:53
  • @David Thornley We judge intelligence in others by their behavior. But what causes the behavior? If we knew the principles of human intelligence, including perception, we could try to realize them in a silicon-based system. But we don't know the principles. Human intelligence can define the causation of (i.e., program) computers so they behave intelligently in limited domains. But I think we really need to discover the principles of basic functions like learning from experience and generalization. E.g., no AI system can generalize like a human – not even close. We don't know the principles. – Roddus Aug 10 '18 at 09:15
  • @David Thornley In suggesting that the meaning of the term "thinking" will eventually be behavioral, I think Turing is dodging the main issue. We don't want to redefine words. We want to understand what happens inside humans when they think. What are the processes? What are the structures? So to me, Turing's proposed redefinition of "thinking" is, to some extent, smoke and mirrors that distract from the most important question: what is (internal) thinking? – Roddus Aug 10 '18 at 09:15
  • 1
    @Roddus Not everyone finds Turing compelling, and for those who don't, your line of argument is entirely typical. This is a central and live debate in the fields of biology, psychology, neuroscience and computer science as relates to the brain, the mind and the intellect. // With that said, I'm not sure there's much to be gained by simply repeating what -- to be honest -- were *already* the main objections to Turing at the time he first made his argument. – Chris Sunami Aug 10 '18 at 14:16
  • @Chris Sunami I agree this is old ground. My arguments that the intelligence is in the programmer, and the internal principles are what's most important, are hopelessly typical. But I'm looking more at Searle's uncritical acceptance of (a) computers are Turing machines, (b) computers process symbols (interpretable shapes), and (c) TMs process symbols. (b) is a premiss of the CRA, but is false. (c) is true, but taking (c) and (a) together disguises the falsity of (b). To me, it's a great idea to abandon the idea that computers process symbols. But if (b) is false, do computers compute? – Roddus Aug 12 '18 at 11:35
  • @Roddus - You have to accept some of your opponent's premises, or you're not debating them, you're just disagreeing. I think Searle tried to accept as many of Turing's assumptions as he could, in order to highlight what he thought were the most essential failings of the Turing argument. – Chris Sunami Aug 13 '18 at 14:21
  • @Chris Sunami I think you need to *understand* your opponent's premises, but maybe you could argue all of them are false? To me, Searle's CRA is a mixture of true and false premises: That symbols in themselves are semantically vacant: TRUE. That computers process only symbols: FALSE. That computers are Turing machines (what Turing called "logical computing machines" ('Intelligent Machinery', 1951)): FALSE. – Roddus Aug 14 '18 at 09:41
  • @Chris Sunami It would be great to debate these premises. I'd start by arguing that symbols are tokenised shapes that have meanings. "Have" here not meaning contains or physically possesses. A shape gets a meaning by a cognate observer assigning a meaning to it (by, say, learning a language). A human perceives the shape (which activates an internal neural representation (for want of a better word) of the shape), then (by, say, the learning process) this is connected to another neural structure – the meaning. So meanings are distinct from the shapes in the clearest way: internal v. external. – Roddus Aug 14 '18 at 09:41
  • @Chris Sunami That computers process only symbols. Searle: "a computer is a device that by definition manipulates formal symbols" (Mystery of Consciousness, p 9). A formal symbol is one I can identify by its shape alone (ibid, p 14). But do clocked voltage levels have shapes? It's not clear the idea of shape makes sense for clocked voltage levels. Then, has any human perceived and assigned a meaning to clocked voltage levels? No. Humans can't perceive them (lack the sensory apparatus). So computers don't process symbols (it's concluded). – Roddus Aug 14 '18 at 09:42
  • @Chris Sunami What about taking a more abstract view of what computers process? Searle: "digital computers insofar as they are computers have, by definition, a syntax alone" (Minds, Brains and Science, p 34). (interesting that Searle almost suggests computers might non-compute). Syntactic means reacts to a formal property (e.g., shape) of what is processed. But what about reacting to relations *between* symbols? Maybe this is syntactic too? But maybe not. Maybe reaction to symbols plus reaction to relations between them can build a semantics. These ideas I'm really keen to discuss. – Roddus Aug 14 '18 at 09:42
  • @Roddus https://philosophy.stackexchange.com/questions/50200/what-is-the-term-for-the-fallacy-strategy-of-ignoring-logical-reasoning-intended/50205#50205 – Chris Sunami Aug 14 '18 at 14:25
2

But the computing machine has no sensory apparatus. It can't see the questions printed by the teleprinter in the computer's room. If it can't see the questions then it can't understand them. In fact the computer must be wired directly into the interrogator's teleprinter, and the computer gets voltages - not words. The computer might have its causality defined by a human programmer (by programming the computer) such that the computer sends voltages back to the interrogator's teleprinter and words are then printed by it, but still, the computer gets voltages, not words.

Your thoughts are instantiated in electrical activity in your brain. So we know that a physical system that uses electricity can instantiate thoughts.

Now your brain receives electrical signals from your sense organs, does stuff to those signals, and sends other electrical signals to your muscles telling them what to do. So your brain receives signals, processes the information in those signals and sends out other signals. Your understanding of the world is a pattern of information processing.

The Turing machine is a universal computer - it can compute anything that can be computed by any other physical system and can simulate any other physical system to any desired level of accuracy. Your desktop computer can do the same operations as a Turing machine, so it can also simulate any physical system, including your brain. So a computer that is programmed the right way and receives information similar to the information you receive can think in a similar way. And it won't just reproduce the appearance of doing the same thing; it can also simulate all the internal processes leading up to whatever thoughts you come up with. So it will think in the same way a human being thinks. We don't currently know how to write such a program, but the laws of physics say that it can be written.
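
As a rough illustration of that claim, here is a minimal sketch of an ordinary program stepping through an arbitrary Turing machine's rules (the transition table below is a made-up toy, not anything from the references below); memory limits aside, a desktop computer can run any such table:

```python
# Minimal sketch: an ordinary program executing a Turing machine's
# transition table. The "increment" machine below is a made-up example
# (append one mark to a unary number); the point is only that a desktop
# computer has the same repertoire, memory limits aside.

def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Hypothetical machine: walk right over the 1s, append one more 1, halt.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_tm(increment, "111"))       # -> "1111"
```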

See "Godel,Echer,Bach: An Eternal Golden Braid" by Hofstadter, "The Fabric of Reality" by David Deutsch chapter 5, and "The Beginning of Infinity" by Deutsch, chapters 5-7.

alanf
  • TM is a model much more powerful than any physical system due to unlimited memory. – rus9384 Aug 01 '18 at 09:14
  • @rus9384 If a physical system runs out of memory you can add more. There is no known upper bound to how much memory you can add. – alanf Aug 01 '18 at 11:13
  • But you must add it, unlike in a TM. In fact, a real computer is a 2-way finite automaton. – rus9384 Aug 01 '18 at 12:44
  • 1
    No. Since memory can be added a real computer has the same repertoire as a Turing machine. – alanf Aug 01 '18 at 15:51
  • @alanf There are only 10^80 hydrogen atoms in the known universe. If you turn them all into memory chips you'll still run out. "No known upper bound?" What are you talking about? Where are you going to get an endless supply of stuff to make chips out of? – user4894 Aug 01 '18 at 20:58
  • @user4894 The relevant issue is how much stuff the laws of physics will allow us to use. We're not close to understanding enough about the laws of physics to settle that issue. We don't have a good understanding of cosmology and so can't say exactly what resources we will have access to. There are cosmologies that allow indefinitely large amount of computation https://arxiv.org/abs/gr-qc/0302076. There is dispute about what dark matter and dark energy are made out of and whether they exist. Nor do we know whether it will be possible to make new universes: https://arxiv.org/abs/1801.04539 – alanf Aug 02 '18 at 07:39
  • 1
    @alanf You're just waving your hands. You have no evidence you can build an infinite TM in the physical world. What you claim is contrary to known science. – user4894 Aug 02 '18 at 09:12
  • @user4894 My guess is that supplying a computer with an indefinite supply of information storage media is possible. This guess may be right or it may be wrong. My position has the merit of not requiring a replacement of the existing theory of computation, unlike your position. There are some unsettled issues that are relevant to the truth of my guess. There are unrefuted theories consistent with the truth of my guess. There are unrefuted theories consistent with the truth of your guess too. – alanf Aug 02 '18 at 13:17
  • @alanf My position IS the existing theory of computation. What you earlier presented as fact you now refer to as a "guess." You just conceded my point. Go learn some science. – user4894 Aug 02 '18 at 16:12
  • Whether or not it is possible to add memory indefinitely is another thing. There are weaker TMs which have their memory added when the input becomes longer. But they are weaker. – rus9384 Aug 02 '18 at 17:28
  • @user4894 I'm probably going to be called a complete idiot for saying this, but... computers aren't Turing machines. It might be convenient to think computers *are* Turing machines when human use is at issue. But it's a really bad idea when AI is the issue. Turing machines process symbols but computers don't. Because of the semantics of symbols, to think a computer is a Turing machine makes it almost impossible to think clearly about how a computer might be intelligent in its own right. (Searle makes the error, saying his Chinese room – which processes symbols – is a computer, but it's not.) – Roddus Aug 03 '18 at 01:38
  • @user4894 I referred to both of our positions as guesses because they both are guesses. Science consists of guesses controlled by criticism. – alanf Aug 03 '18 at 07:20
  • @rus9384 Computers to which memory can be added indefinitely have the same repertoire as a Turing machine: http://rspa.royalsocietypublishing.org/content/425/1868/73.short – alanf Aug 03 '18 at 07:23
  • @alanf Of course. But those are not physical computers. – user4894 Aug 03 '18 at 14:40
  • @Roddus: Turing machines are mathematical models of computers, useful because they can model anything reasonably described as computation and are simple enough to prove things on. Turing machines process symbols in exactly the same way computers do: otherwise meaningless configurations that can be assigned meaning. A Turing machine can model any sort of computer, and a computer can model a finite version of a Turing machine. (And why doesn't the Chinese Room count as a computer?) – David Thornley Aug 07 '18 at 18:20
  • @David Thornley Why isn't the Chinese room a computer? 1. The Chinese room processes symbols that have meanings. But the meanings don't come with the symbols (the man in the room processes Chinese ideograms, but he knows no Chinese (the meanings are not inside the man either) so he can't understand the ideograms). The man is forever a prisoner in a world of syntax – intrinsically meaningless shapes. Since nothing else in the room could conceivably understand Chinese, the room will never understand Chinese. And anyway, all the room gets is symbols, and symbols are semantically vacant. – Roddus Aug 08 '18 at 08:16
  • @David Thornley Why isn't the Chinese room a computer? 2. Humans use eyes to sense words and can understand them, but lack the sensory apparatus to sense clocked voltage levels, semiconductor switch states etc., so can't understand what computers process. Can computers themselves understand what they process? No, since computers also lack the sensory apparatus to sense clocked voltage levels, semiconductor states, etc. And why should they? We don't have the sensory apparatus to detect what our brains process (neural pulses, etc.). – Roddus Aug 08 '18 at 08:17
  • @David Thornley Why isn't the Chinese room a computer? 3. Also, if the computer situation mirrors the Chinese room situation, then clocked voltage levels exist outside the computer in the environment and inside the equivalent of books, and someone (or thing) has given these external clocked voltage levels meanings. But this doesn't seem even remotely plausible. – Roddus Aug 08 '18 at 08:17
  • @David Thornley That Turing machines are mathematical models. The problem I have with using Turing machines to think about the mind is that TMs process symbols (linguistic descriptions including abbreviations that Turing reformats into "Standard Descriptions", or "S, D."s). I think we need to completely forget about the idea that computers process symbols. The symbol-semantic issue causes so much trouble. Computers don't process symbols. The things they process don't have meanings. They have no semantics. The symbol-processing idea is just a giant red herring, as far as I can see. – Roddus Aug 08 '18 at 08:35
1

Here is the question:

Since the causality of the computer is defined by the human programmer, doesn't that mean that the Turing test, as Turing describes it, actually tests the intelligence of two humans, the human contestant and the computer programmer?

The OP also mentioned the teleprinter that takes information as input from one side of the Turing test, processes it, and delivers information to the other side.

Note that both the teleprinter and the computer set up for the Turing test are very similar. Both input information, process information, and output information.

The two humans, contestant and programmer, have similarities as well regarding understanding. Regardless of whether the teleprinter or the computer under a Turing test understand anything when they process information, there is no doubt that these humans do understand language.

There are at least three reasons to remain hesitant about claiming that machines understand just as humans do.

First, John Searle, in "Minds, Brains and Programs" (where he presented his Chinese Room Argument; reprinted in Mind Design, pages 291-2), mentioned:

If strong AI is to be a branch of psychology, it must be able to distinguish systems which are genuinely mental from those which are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental.

Second, Searle mentions in the same article (page 303) that what the computer, or the teleprinter, does when it "processes information" implies that it has "a syntax but no semantics":

Thus if you type into the computer "2 plus 2 equals?" it will type out "4." But it has no idea that "4" means 4 or that it means anything at all. And the point is not that it lacks some second-order information about the interpretation of its first-order symbols, but rather that its first-order symbols don't have any interpretations as far as the computer is concerned.
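
To see how little is needed to produce that answer, here is a minimal, made-up sketch (the patterns below are hypothetical, not from Searle or any actual program) of a program that replies purely by matching character strings; nothing in it refers to the number 4 at all:

```python
# Minimal sketch of "syntax without semantics": the program matches
# character strings and emits character strings. The rules are made up
# for illustration; nothing here "means" the number 4.

import re

rules = [
    (r"2\s*plus\s*2\s*equals\?", "4"),
    (r"how are you\?", "Fine, thanks."),
]

def reply(prompt):
    for pattern, answer in rules:
        if re.fullmatch(pattern, prompt.strip(), re.IGNORECASE):
            return answer
    return "I don't know."

print(reply("2 plus 2 equals?"))   # prints "4" -- pure symbol shuffling
```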

Third, there is the fallacy of anthropomorphism. Bradley Dowden describes this fallacy as:

This is the error of projecting uniquely human qualities onto something that isn't human. Usually this occurs with projecting the human qualities onto animals, but when it is done to nonliving things, as in calling the storm cruel, the Pathetic Fallacy is created.

Claiming that the computer understands as humans do because it processed information could be viewed as an example of the fallacy of anthropomorphism, or more specifically, the pathetic fallacy.


References

Bradley Dowden, "Fallacies", Internet Encyclopedia of Philosophy.

John R. Searle, "Minds, Brains and Programs" reprinted in Haugeland, J. (1981). Mind Design: philosophy, psychology, artificial intelligence (Mongtomery, VT, Bradford Books).

Frank Hubeny
  • Searle makes an awful lot of assumptions, in addition to his constant begging the question in "Minds, Brains, and Programs". Uniquely human qualities? Exactly what's unique to humans? It isn't necessarily intelligence. – David Thornley Aug 02 '18 at 19:57
  • 1
    Bradley Dowden used the phrase "uniquely human qualities" in the last passage I quoted, not Searle. He was defining the fallacy of anthropomorphism. There are a lot of assumptions in strong AI. Here are two: (1) processing information makes something conscious, and (2) processing information is what humans do in their brains to make them conscious. Both of these need to be justified. In particular, whatever models of information processing are offered for consciousness need to be biologically plausible in terms of how neurons actually behave in humans. @DavidThornley – Frank Hubeny Aug 02 '18 at 20:10
  • @FrankHubeny: Something makes people intelligent and conscious, and we don't know that it isn't possible on a computer. Unless you're an old-fashioned dualist, you must acknowledge that the brain is a physical device. Given that, it's conceivable that another type of physical device could do the same thing. Unless we know what consciousness is well enough to determine the mechanism(s), we can't say it's a "uniquely human quality". (Indeed, some animals do show intelligence, although we have no test for consciousness, so intelligence isn't uniquely human.) – David Thornley Aug 06 '18 at 17:00
  • We don't know that it is possible on a computer either. Having a link between what a computer does and what the human brains do is critical to claiming that a computer could be conscious by its "processing information". One of the problems is "biological plausibility". See Seanny123's question/answer on Psychology and Neuroscience SE for references on this issue: https://psychology.stackexchange.com/q/16269/19440 In general there is less problem with animals that have brains than there is with a computer. @DavidThornley – Frank Hubeny Aug 06 '18 at 18:10
  • @FrankHubeny, Searle is attempting to show that Turing machines and computers can't be conscious, and his reasoning is not sufficient to show that. Personally, I think it is possible to produce conscious computers that think and understand things, but here I'm just concerned with opposing Searle and noting that "uniquely human qualities" is very ill-defined. – David Thornley Aug 09 '18 at 17:24
  • Again, "uniquely human qualities" was a phrase used by Dowden. I don't know if Searle used it. Dowden makes no claims about AI to my knowledge, just anthropomorphism as an informal fallacy. I think Searle has successfully shown that Turing machines cannot be conscious. That is why I recommend that people be hesitant to accept any claim or assertion that they can be conscious. @DavidThornley – Frank Hubeny Aug 09 '18 at 18:52
0

You seem to be making two arguments here. Let me rephrase:

  1. A computer cannot see the words. It just gets voltages. Therefore, it cannot possibly understand the questions.

Response: Getting voltages is getting a kind of input. In fact, by your logic, you could argue we humans aren't seeing words either: we're just getting hit by light waves. But of course we are seeing words. And a computer is perceiving words as well ... just through a different sensory medium.

  2. It's the programmer that created the program. Therefore, any intelligence we attribute to the program when doing the Turing Test should really be attributed to the programmer, not the program.

Response: Why would it matter how the program was created? You and I were created by our parents ... should they get the credit for our abilities rather than us? If I build a fast car, does that mean that the car isn't fast, because I built it? Of course that doesn't follow. Yes, I built it ... but it is also true that the car is fast. Likewise, if I create a computer program that is able to solve problems, make decisions, do reasoning, etc. ... should the fact that I created it mean that the program is in fact not doing any of those things? No. Of course, the question is whether I can create a computer program that has all these cognitive and mental abilities, but if I can, the fact that I did it does not take away from its abilities.

Bram28
  • 1. Yes, maybe the interrogator's teleprinter is a sense organ of the computer. And "Getting voltages is getting a kind of input", which input to the computer is the output of the teleprinter. But in this case what is being sensed? Taking the keys to be in a keyboard, this sense detects press-release events at different locations within the keyboard, not words. – Roddus Aug 03 '18 at 02:13
  • 2. So the question is, does the program inherit the intelligence of the programmer? And if so, is the inherited intelligence about the same thing as the programmer's intelligence? Say I program the machine to print “fine thanks” in response to a human typing “how are you?”. By coding this program (of the form: if input = “A” then output = “B”) does the computer inherit my knowledge of the meanings of the words? – Roddus Aug 03 '18 at 02:13
  • 3. The “question is whether I can create a computer program that has all these cognitive and mental abilities”. I agree that if you could do this, then the computer would be genuinely intelligent (to the extent of those abilities). Though the issue seems more about data structure than program. The program of intelligence seems intractably complex and coding it, impracticable. But what if the complexity of intelligence is in structure, and the program, very simple? Maybe an adequate structure could be derived from the world via sensory detection, not from human design? – Roddus Aug 03 '18 at 02:14
  • @Roddus I say it is still completely analogous to the human case: you could argue that what we detect are bursts of light, not words. And those bursts of light are bouncing off other things (what? Ink on paper? LEDs inside a computer screen?) ... but if we 'zoom out', we all agree that this physical process is what is underlying the cognitive process of seeing words. With a computer we use different physical representations and different physical media, but in the abstract, the computer sees, or at least senses, words as well. – Bram28 Aug 03 '18 at 10:38
  • @Roddus 2. Yes, to some extent you could say the intelligence is inherited ... just like some of my intelligence has been inherited through the instruction of my parents and my teachers. But understanding the meaning of words will require a good bit more than hardcoding 'Fine, thank you' in response to 'How are you' ... just as a basic calculator isn't anywhere close to understanding what the numbers are that it is processing. If and how we can get a computer to get such genuine understanding is one of the big problems in AI. – Bram28 Aug 03 '18 at 10:42
  • @Roddus I agree with you that it is indeed a very difficult issue to make a computer genuinely cognitive, and that straight-up programming a computer is unlikely to do it. In his original paper, Turing himself figured that probably the best way to achieve it is to merely set up a computer in a way so that it can learn to understand the world for itself by interacting with it, much as you and I do. – Bram28 Aug 03 '18 at 10:45
  • 1. OK, so humans have neural pulses and structure, and there is a causal chain from external word through the eyes to neural pulses in neural structure, and this constitutes what we call understanding the word. And computers internally have electrical pulses, too, but in a different form, and also structure, but in a different form, but substrates aside, the computer system is analogous to the human system. This is fine (ignoring Searle's biology argument) but the computer needs to *perceive* the external word, and being wired to keys with letters printed on them isn't perceiving words. – Roddus Aug 03 '18 at 23:17
  • 2. I was looking at the intelligence problem from Searle's symbol/semantic perspective. It just seems clearer to say (as a starting position) that perception is such an important aspect of human intelligence that AI first needs to solve how to make a machine that perceives, and to do this, AI first needs to identify the principles of perception - which it hasn't done. I find it really instructive that Turing's 1950 paper about how to make a computer intelligent doesn't even mention sensory perception. I just wonder whether computation can't explain perception. Is it non-computational? – Roddus Aug 03 '18 at 23:27
  • 3. If you mean his 1950 paper, he had a problem with explaining how a machine could intrinsically learn: "The idea of a learning machine may appear paradoxical to some readers. How can the rules of operation of the machine change?", and he posited "ephemerally valid" rules - whatever they are. The problem was that the description of the machine (the description is the program) fully defines its behaviour, past, present and future. In a later paper he talks about learning "from experience" but he means humans typing words into a computer. Apart from his B-types, has Turing explained learning? – Roddus Aug 03 '18 at 23:39
  • @Roddus 1. I agree that perception (or at least perceiving something *as* something) requires more than just some kind of signal-transduction. For example, to perceive a chair requires a good bit of visual processing, and having some concept of chair in the first place. I wouldn't go so far as to say that seeing words requires one to understand those words though. And even if it does, I see no reason why a computer would not be able to understand words ... not that I know how a computer *does* understand words, but I don't think you can just assume they cannot and use that as a premise – Bram28 Aug 04 '18 at 00:49
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/81141/discussion-between-bram28-and-roddus). – Bram28 Aug 04 '18 at 00:50
  • 1
    @Roddus 2. Again, I largely agree: perception seems to be a really important aspect of intelligence and cognition; it is so much more than 'mere input', and likely much more closely integrated with other cognitive processes. For example, when we use imagination in order to think about something, we are presumably invoking perceptual processes. Interestingly, however, the first AI people did not think perception was such a big deal at all. Famously, Marvin Minsky put out a call for a graduate student to 'solve' perception as a one-summer project. :) – Bram28 Aug 04 '18 at 00:55
  • 1
    2. (cont'd) Of course, perception turned out to be much harder than the original AI people thought ... but does that mean it is not computational? Again, I think drawing that conclusion is premature. Maybe the computations are just a lot more complex than we think. – Bram28 Aug 04 '18 at 00:57
  • 1
    @Roddus 3. Yes, I mean his 1950 paper. Turing had absolutely no problem with the notion of learning machines. He was familiar with the work that had already been done with artificial neural networks, and had done research with them himself. In his 1950 paper, he explains that to many people, the idea of a machine that learns is paradoxical (how can a machine learn ... i.e., change its behavior ... if it is guided by a fixed program?) but explains how, by differentiating between the (low) level of the program and the (high) level of behavior, it is in fact easy to change behavior. – Bram28 Aug 04 '18 at 01:03
  • 1
    @Roddus 3. (Cont'd) So again, to Turing, there is nothing paradoxical or problematic with the idea of a learning machine. In fact, there already were learning machines in Turing's time, and nowadays much of AI is all about learning (Deep Learning has been the big hubbub the last decade or so) – Bram28 Aug 04 '18 at 01:06
  • Great Minsky reference. Checking his (as contributor and editor) hugely-popular-at-the-time 1968 *Semantic Information Processing* (which seems to have nothing whatsoever to do with semantic information), perception gets mentioned on one line on page 15 out of a total of 430-odd pages. Interestingly, around 1950 when radically different ideas were competing for dominance in emerging AI, James Gibson's ecological theory of visual perception was the only serious attempt to explain perception, and his theory is almost always regarded as non-computational (and also very weird). – Roddus Aug 05 '18 at 11:14
  • @Roddus Yes, I think Gibson was definitely going towards the right track, at least in so far as claiming that what we see is more affordances like 'something-I-can-climb-on' (something that certainly goes a lot more towards recognizing that perception is integrated with action, rather than mere input) rather than objects like chairs ... though his idea that there is no cognitive processing and construction going on seems nuts indeed. So, I think it is not implausible that we do parse up the world in terms of affordances ... and see no reason why such parsing isn't a computational process. – Bram28 Aug 05 '18 at 14:08
  • @Roddus In fact, neuroscience suggests that there are two pathways in our brain that process visual information differently [Two Streams Hypothesis](https://en.wikipedia.org/wiki/Two-streams_hypothesis). The dorsal pathway would seem to process information more in terms of affordances, whereas the ventral pathway would be more compatible with the more classical idea of labeling objects. And again, I see no reason to believe that what happens in these pathways is something other than information-processing, i.e. a computational process. – Bram28 Aug 05 '18 at 14:12
  • Apologies for the delay. I suppose the question "What is computation?" is pretty fraught. I mean: from Turing's 1936 human who computes with pen and paper, to pancomputationalism. I guess I need to explain what I think. Searle would presumably say computation is (interpretable-) symbol manipulation according to purely syntactic (i.e., formal) rules? The Chinese room presenting the (alleged) essence of the computer and also of computation. The room seems to play a dual role as an explanation of computers and of computation. I agree with Searle about computation (but not about computers) – Roddus Aug 10 '18 at 09:49
  • @Roddus Yes, I agree that 'computation', and consequently 'computationalism', can have quite a few different meanings. This is why I think Searle is putting up a bit of a straw man with his Chinese Room: he has a very narrow view of 'computation', pretty much focusing indeed on those symbol transformations alone. But most computationalists would argue that the computational system that they believe underlies human cognition involves a good bit more than pure symbol manipulation, e.g. long-term memory. Also the very functionality of the computational system in relation to its environment. – Bram28 Aug 10 '18 at 18:16
  • @Roddus Remember, computationalism is a specific instance of functionalism, and functionalism stresses not just the multiple realizability of the functional system, but also the functional relationships the system has with its environment. E.g. a chair is not a chair in what it is made of, but in its ability to support a human. Similarly, the computational system is said to support cognitive activity (which is always *about* things - Searle's intentionality) because of its connections to those things (again, perception and action are integral here). Searle completely misses this. – Bram28 Aug 10 '18 at 18:33
  • I agree about the long-term memory. One of Searle's omissions is structure. The Chinese room manipulates only symbols. Where are the structural elements? Searle says the baskets or boxes of symbols are databases, but there is no item the man can use to connect symbols together. In computers of course these items are usually pointers, which join locations, are relational and hence not symbols. Long-term memory being populated structure. The Chinese room ontology is deficient and needs relational elements (e.g. bits of string (+ pots of glue)), then the man could build structure. (imho). – Roddus Aug 13 '18 at 03:57
  • "functionality of the computational system in relation to its environment". How does computation explain perception? A programmer using their knowledge to pre-define the total causality of the system (fully describe the machine to be simulated whatever its history might be) quickly leads to combinatorial explosion and related frame problem. So the causal chain needs to be: environment to sensor to computer inner structure. No description of the machine to be simulated: no program, no programmer between environment and computer memory. Is a such non-teleological process computational? – Roddus Aug 13 '18 at 03:58
  • I can see how functionalism implies affordances. Though there seems to be much counter-evidence to Gibson's idea of simple information pick-up. But how does computationalism explain intentionality? The function of a chair is to be sat on (by a human-shaped, sized and articulated entity), but what is the answer to Searle's main Chinese room conclusion that what computers process is semantically vacant? Sensory symbols carry no indication of what in the environment caused them (via the sensor), and all a computer gets is the symbols. So how does the machine perceive the environment? – Roddus Aug 13 '18 at 03:58
0

I would like to address your main question "...how can the computer understand the interrogator?"
I am going to start by assuming that by "understand", you mean "communicate with."
Let's start with two isolated rooms, 1 & 2. There is an input device (keyboard) and an output device (monitor screen) in each room.

First scenario:

Two humans who understand English each sit in front of a screen and keyboard.
The human in room 1 (H1) sends a message to human in room 2 (H2). Something like "what is your name?"
The human in room 2 (H2) responds "my name is (random name), what's yours?"

Second scenario:

Two humans: one understands only English (H1), and the other understands only Spanish (H2).
H1 sends the same message as before, but now, in order to communicate with H2, there is an (E -> S) translator between H1's output and H2's input device. The same applies to the response from H2, which goes through an (S -> E) translator, so H1 never finds out he/she is communicating with a non-English speaker.

Third scenario:

Same as scenario 2, but the human in room 2 (H2) is replaced by a computer (Hal), and the translators are replaced with BCC (binary-encoded character) translators.
H1 sends the same message as before, and Hal responds with a random name. Again, H1 never finds out he/she is communicating with a machine.

The above should make it clear that communication between humans and machines (computers) is not a problem. All it takes is an appropriate "translator" so that they can communicate with each other.
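
A minimal sketch of the three scenarios might look like the following (the translators and responders are made-up stand-ins, not real translation or AI code); note that from H1's chair the two exchanges are indistinguishable:

```python
# Minimal sketch of the scenarios: H1 always calls the same ask()
# function; only the "translator" and the responder behind it change.
# All translators and responders here are made-up stand-ins.

def english_to_spanish(text):        # stand-in for the (E -> S) translator
    return {"what is your name?": "¿cómo te llamas?"}.get(text.lower(), text)

def spanish_to_english(text):        # stand-in for the (S -> E) translator
    return {"me llamo Hal.": "My name is Hal."}.get(text, text)

def human_h2(prompt_in_spanish):     # scenario 2: the Spanish speaker
    return "me llamo Hal." if "llamas" in prompt_in_spanish else "¿qué?"

def to_bcc(text):                    # scenario 3: binary-encoded characters
    return "".join(f"{ord(c):08b}" for c in text)

def from_bcc(bits):
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

def machine_hal(prompt_as_bits):     # scenario 3: the computer's side
    question = from_bcc(prompt_as_bits)
    answer = "My name is Hal." if "name" in question.lower() else "Pardon?"
    return to_bcc(answer)

def ask(question, translator_in, responder, translator_out):
    return translator_out(responder(translator_in(question)))

# From H1's side, both exchanges look the same:
print(ask("What is your name?", english_to_spanish, human_h2, spanish_to_english))
print(ask("What is your name?", to_bcc, machine_hal, from_bcc))
```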

Now, as to the matter of "understanding", at this time this is a quality that exists only in humans. However, progress in AI has come to the point where a human might be fooled into believing that he/she is communicating with another human rather than with a machine!

One final note. Although it is true that the computer receives and sends "voltages", the presence and absence of these voltages forms what is called a binary piece of information (a bit); groups of bits are encoded to form characters, and characters are grouped to form words!

Guill
  • You say, "I am going to start by assuming that by "understand", you mean "communicate with." Yet the idea of communication seems unclear. I press my garage door remote, it communicates with the door controller, the door opens. Neither remote nor controller has the causality of understanding. By "understands" I mean that a system contains a meaning of a shape, and this meaning is activated either by virtue of perceiving the external shape, or by the system directly identifying the shape - as with Chinese symbols entering the Chinese room. Though the Chinese room contains no meanings. – Roddus Aug 09 '18 at 10:45
  • First scenario: Two humans H1 and H2. Each understands the shapes caused (via the equipment) by the other. Second scenario: Three humans, H1, HT, and H2. HT perceives and understands the shapes caused by H1, then sends (mostly different) shapes to H2 that H2 perceives and understands. Third scenario: one human, one keyboard/screen, and one computer. H1 taps away on the K of K/S which transmits binary clocked voltages direct to C (no perception). C responds and sends clocked voltages back, which cause certain shapes on the screen of H1. – Roddus Aug 09 '18 at 10:46
  • I agree the K/S communicates with the computer but in the garage-door sense of "communicates". When we say humans communicate, we use a different sense that implies understanding and hence intelligence - but the garage-door sort implies neither. I agree that a human might be fooled by a system that has no understanding. Joseph Weizenbaum's 1960s ELIZA language simulator fooled quite a few people, as did Kenneth Colby's PARRY. I agree that hardware can be designed so that clocked voltages cause shapes that people understand. – Roddus Aug 09 '18 at 10:46
  • This is the reason I limited my answer to the "communicate" part. Not the "understand" part. You are right, your garage remote is "communicating" with the garage controller, but there is no "understanding" going on. Artificial Intelligence is trying to change that. It may take another 100 or 1000 years before the I-Robot machine becomes a reality. – Guill Sep 02 '18 at 03:42