9

The Chinese Room argument attempts to prove that a computer, no matter how powerful, cannot achieve consciousness.

Brief summary:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

How is this any different from what goes on inside our brains?

Certain impulses are received from sensory organs and processed by neurons. This is a completely deterministic process, and to these neurons, individually, the input/output has absolutely no meaning. Individually, they possess no consciousness. Sure, it happens 10^n times simultaneously and maybe there is some recursion involved, but the concept is the same - the origin of the input and the destination of the output are irrelevant.
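To make that concrete, here is a minimal sketch (Python, purely illustrative; the weights and threshold are made up) of a single unit that deterministically maps inputs to an output without any access to what the signals mean:

    # A toy "neuron": a deterministic input -> output mapping.
    # It has no idea whether its inputs encode light, sound, or Chinese symbols.
    def neuron(inputs, weights, threshold=1.0):
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # The same call works whatever the inputs "mean" to the outside world.
    print(neuron([0.2, 0.9, 0.4], [1.0, 0.5, 0.8]))  # -> 0 or 1, nothing more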

The only difference I can think of is that, in the brain, the instructions/look-up tables/whatever can be modified by this process. The experiment makes no mention of this, because there is no need for it - language syntax remains relatively constant over a short period of time. But as long as these modifications are carried out according to a set of rules, it would make no difference.
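To illustrate that last point, here is a toy sketch (Python; the symbol strings and the update rule are invented for the example): even a rule table that rewrites itself is still just symbol manipulation, carried out with no grasp of what the symbols mean.

    # A rule-following symbol shuffler whose rule table can itself be
    # modified by a fixed meta-rule. Everything here is syntactic:
    # the program never knows what the tokens stand for.
    rules = {"ni hao": "ni hao", "zai jian": "zai jian"}

    def respond(symbols):
        # Look the input up; if it is unknown, a meta-rule adds a new entry.
        if symbols not in rules:
            rules[symbols] = "qing zai shuo yi bian"  # modify the table, by rule
        return rules[symbols]

    print(respond("ni hao"))   # a known symbol string
    print(respond("xie xie"))  # unknown: the table is modified, still blindly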

Am I missing some crucial part of Searle's argument?

(Inspired by this question)

J.Doe
  • See http://philosophy.stackexchange.com/questions/34358/how-can-one-refute-john-searles-syntax-is-not-semantics-argument-against-stro – Alexander S King Jun 01 '16 at 17:42
  • Also http://philosophy.stackexchange.com/questions/30091/on-the-difference-between-knowing-and-understanding – Alexander S King Jun 01 '16 at 17:43
  • If you have no personal experience of your own experience ... you get the official philosophy.stackexchange.com Zombie badge. It astonishes me that people pretend to be unaware of themselves. – user4894 Jun 01 '16 at 17:45
  • You might like this quote from Scott Aaronson: "Like many other thought experiments, the Chinese Room gets its mileage from a deceptive choice of imagery -- and more to the point, from ignoring computational complexity. We're invited to imagine someone pushing around slips of paper with zero understanding or insight. But how many slips of paper are we talking about? How big would the rule book have to be, and how quickly would you have to consult it, to carry out an intelligent Chinese conversation in anything resembling real time?... – Tim kinsella Jun 01 '16 at 18:13
  • If each page of the rule book corresponded to one neuron of (say) Debbie's brain, then probably we'd be talking about a "rule book" at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it's not so hard to imagine that this enormous Chinese-speaking entity -- this dian nao -- that we've brought into being might have something we'd be prepared to call understanding or insight." http://www.scottaaronson.com/democritus/lec4.html – Tim kinsella Jun 01 '16 at 18:13
  • @Timkinsella Thanks for the Aaronson quote. It makes me think how undecipherable would be a film shown (and heard) at 1/1000 speed or something. And also the Star Trek episode "Blink of an Eye". Speed as qualitative (and not just quantitative) difference... – Jeff Y Jun 01 '16 at 19:12
  • @Timkinsella Awesome quote, would upvote for dian nao alone if i had the privilege – J.Doe Jun 01 '16 at 19:26
  • Yeah Aaronson is great. I also saw this recently and thought it was cute http://smbc-comics.com/index.php?id=4124 – Tim kinsella Jun 01 '16 at 19:33
  • Hahaha thank you sir. I think that's going on the fridge – J.Doe Jun 01 '16 at 19:44
  • Maybe to put your question backwards, what evidence do you have to suggest that this is what's going on in your brain? If that's what consciousness is, it does an exceptionally good job of hiding the process from the one experiencing it. – virmaior Jun 01 '16 at 23:53
  • Correct me if I'm wrong, but isn't this also known as Searle's homunculus argument? – NationWidePants Jun 03 '16 at 11:54

4 Answers

5

I would like to suggest that your puzzlement arises from confusing intelligence and consciousness. Neither concept is well defined, but they are nonetheless distinct. Searle would say that a Chinese room cannot be conscious, not that it cannot appear to be intelligent. In fact, the original argument revolves around the concept of understanding, which is another blurred concept with no clear definition. Searle is a philosopher who believes that the mind cannot be expressed in terms of computations, or, in other words, that a computer may never have a mind, regardless of its architecture and the particular computation it performs. He does not rule out that machines in general may have a mind; he only denies that mechanisms (a subset of machines) can ever amount to one. You can still disagree with him (as most people do), but to do that it is important first to understand him (pun intended).

nir
  • The person in the room is following an algorithm, which means that each step is simple and he does not "devise his own response". – nir Jun 02 '16 at 14:00
  • Right, sorry, just found that on the wiki page. – Tim kinsella Jun 02 '16 at 14:05
  • So do you feel like it's a problem for the experiment that the "algorithm", written in English, on paper, might require a filing cabinet the size of a planet? And that the subject in the room might die of old age before he or she completed a single exchange? – Tim kinsella Jun 02 '16 at 14:13
  • It is not a problem since all that is required is the capacity in principle, not in practice. Functionalists like Dennett believe that once the computation is complex enough nothing will be missing. Searle believes that the complexity of the computation is irrelevant (me too). Searle used the thought experiment to argue that computation is what he calls observer relative, but other than that it is not very different from Leibniz's mill - http://home.datacomm.ch/kerguelen/monadology/printable.html#17 – nir Jun 02 '16 at 14:22
  • Thanks again. One would think though, that if it were possible "in principle", there would be a thought experiment which delivers the result which is purported to be possible. The luxury of a thought experiment is that the dreamer is allowed to dispense with all practical limitations. If you do that and you still come up short, maybe you did so for reason of some underlying "principle". This is what's always seemed to me slightly dishonest about the Chinese room. – Tim kinsella Jun 02 '16 at 14:33
  • I don't understand what you mean – nir Jun 02 '16 at 14:34
  • The whole thing or a particular sentence? By "come up short" I mean fail to deliver a real time conversation in Chinese. – Tim kinsella Jun 02 '16 at 14:36
  • And by "the result which is purported to be possible" I mean the result that a real time conversation can be had with an entity which clearly does not understand the content of the conversation. And by dreamer I of course mean the experimenter. In this case, Searle. – Tim kinsella Jun 02 '16 at 14:46
  • John Searle was pretty explicit in his lectures and subsequent discussions that the Chinese Room wasn't about consciousness but about meaning. – Alexander S King Jun 02 '16 at 17:14
  • @AlexanderSKing, I did not write that Searle said the Chinese Room was about consciousness but that he would say (if asked) that it cannot be conscious. Anyway, he does say it explicitly in *Minds Brains and Science*: "The reason that no computer program can ever be a mind is simply that a computer program is only syntactical, and minds are more than syntactical. Minds are semantical, in the sense that they have more than a formal structure, they have a content. To illustrate this point I have designed a certain thought experiment." and he goes on to describe the Chinese Room. – nir Jun 02 '16 at 19:21
  • @Timkinsella, I think the thought experiment is purposely phantasmagorical. It is clearly impossible for the person using the rule-book to produce answers in a timely manner, and yet that point is irrelevant. For what does the timescale of the scene have to do with the principle? – nir Jun 02 '16 at 19:30
  • For a couple of reasons: 1. I think it's slightly dishonest to present a thought experiment, and then to dismiss certain considerations *inside the totally unfettered universe of the thought experiment* as mere "practical- therefore irrelevant" limitations. The whole point of a thought experiment is to isolate practical from theoretical limitations. If your thought experiment fails irremediably to give a certain result, then it does so for *theoretical* reasons, by the very definition of a thought experiment – Tim kinsella Jun 02 '16 at 19:48
  • 2. The Chinese room rests on an appeal to intuition based on specious imagery; it's what Dennett calls an "intuition pump." It's not just about time, but also scale. We're told to imagine a single person sitting alone in a room with a stack of papers. Once we're honest about the scale of the entity in the room - a huge team of robots swarming around a planet-sized filing cabinet at the speed of light, as in the Scott Aaronson quote - it loses its intuitive punch. – Tim kinsella Jun 02 '16 at 19:53
  • Also the "timeliness" of computation is not just a practical consideration. Bounds on the amount of time it takes to compute certain functions express something deep about the universe and epistemology. If P=NP, then the truth or falsehood of any mathematical proposition would be knowable with the click of a button, for instance, despite the fact that P=NP is just a (probably false) statement about how long it takes find out whether a graph is connected (or whatever, I can't remember any short NP complete problems). – Tim kinsella Jun 02 '16 at 20:05
  • I should have said "provability or refutability" instead of "truth or falsehood". (Gödel) – Tim kinsella Jun 02 '16 at 20:16
  • @Timkinsella, Dennett and Searle fundamentally disagree. Dennett believes that the Chinese room can be conscious and Searle believes that it cannot. But their difference of opinions does not hinge on the details of the intuition pump. To the extent they are fighting about its details, it is just inconsequential skirmishes. BTW, what do you think? Can a computer be conscious in principle? What do you think about Leibniz's mill? http://home.datacomm.ch/kerguelen/monadology/printable.html#17 – nir Jun 03 '16 at 06:25
  • @Timkinsella, also as a note. It is not clear to me why timescale matters. Imagine that you do not simulate a brain that interacts with the "real" world but an entire room with a person in it sitting on a couch, reading a book and listening to music. Now what does it matter if each simulated second of that room takes one second or one billion years of our time? What does it matter if you simulate it using the combined computing resources on earth, or by moving around rocks on an infinite stretch of sand? https://xkcd.com/505/ – nir Jun 03 '16 at 06:31
  • @nir I didn't make any claims about Dennett's writings on the Chinese room except to say that he calls it an intuition pump. And I don't think I mischaracterized Dennett's definition of that phrase. – Tim kinsella Jun 03 '16 at 19:15
  • @nir with respect to your second comment, I can only say that IMO the persuasiveness of the experiment depends entirely on an appeal to our intuitions about a room containing a human who is working with pencil and paper. After all, what else distinguishes this particular challenge to strong AI from the more straightforward argument that a Chinese-speaking Turing machine could not understand Chinese since it consists only of a tape-head reading ones and zeros? – Tim kinsella Jun 03 '16 at 19:34
  • I.e. If you don't think the imagery or scale of the Chinese room is relevant, why not replace the human with an even more oblivious tape-head, and let the machine run at full speed? Then the thought experiment has no more content or novelty than our intuitions that an AI can't actually understand anything because it's just a hunk of circuitry. – Tim kinsella Jun 03 '16 at 19:34
  • And that's why it matters, IMO, whether you postulate some kind of huge time dilation that makes a billion years inside the room equivalent to a second outside it. Once you do that, all intuition goes out the window. Leaving aside the question of whether and how much information exchange between the two regions the laws of physics actually permit, once you postulate something like that any appeal to intuition becomes absurd. – Tim kinsella Jun 03 '16 at 19:46
  • Humans have no intuition for the kinds of absurd things that can occur over time scales that large. To take one example, it took only a few billion years for our brains to spontaneously assemble themselves from some raw materials sloshing around randomly in a primordial ocean. – Tim kinsella Jun 03 '16 at 19:47
  • Also, sorry, I haven't gotten to Leibniz's mill yet, but I will shortly. Thanks for the link :) – Tim kinsella Jun 03 '16 at 19:55
  • @Timkinsella, I would like to know what your personal opinion is. do you believe that a computing system may be in principle conscious in the fullest sense as Dennett believes? may Leibniz's mill of moving (wooden?) cogwheels be conscious? – nir Jun 03 '16 at 20:00
  • @nir Yes, I'm inclined to believe that if you made a neuron-for-neuron isomorphic copy of my brain using transistors (I think those are the right analogue? But Idk much about electronics), then it would be as conscious as I am. – Tim kinsella Jun 03 '16 at 20:03
  • @Timkinsella, another related question. When you look at the world around you, do you concede that it is entirely in your head, like a dream is? A neurologist once put it as “Life is nothing but a dream guided by the senses”. The opposite belief, that we perceive the external world directly as it is, is called [naive realism](https://en.wikipedia.org/wiki/Na%C3%AFve_realism). The classic example is that of color. Are you aware that color is a phenomenon in your mind rather than a property of the objects you look at? – nir Jun 03 '16 at 20:11
  • @Timkinsella, the reason I ask is that naive realists do not think that a theory of mind needs to account for that inner "virtual reality". – nir Jun 03 '16 at 20:12
  • Interesting. I think the brain creates some kind of model of the external world, a sort of messy homomorphism created from sensory data. So I guess that's a sort of middle ground between those two positions. – Tim kinsella Jun 03 '16 at 20:16
  • @Timkinsella, I don't understand what you mean. take for example the white of the screen in front of you; do you think that the white color that you now experience is a thing in your mind or a thing in the external world? – nir Jun 03 '16 at 20:19
  • So if you can't tell, I don't have any training in philosophy, let alone phenomenology, so we might be talking past each other. But I'll give you this much: When I look at something white, I certainly have an intuition that there exists something called "the feeling of white", which is hard to pin down. However I don't think there's much reason to take our intuitions about our sensations too seriously when trying to sort out what's going on with minds and brains. Maybe when writing sonnets, but not if you really want to understand cognition and sensation. – Tim kinsella Jun 03 '16 at 20:30
  • But I don't mean to be dismissive. I realize these are deep questions. I just don't place much stock in humans' intuitions about what's happening in their own heads. – Tim kinsella Jun 03 '16 at 20:39
3

John Searle's Chinese Room example is clumsy and is vulnerable to all sorts of refutations from a strictly technical point of view (the systems reply, the brain simulator reply, etc.).

But this is unfair to the argument, because beneath the awkward thought experiment there is a deeper epistemological question which does warrant serious consideration. John Searle in his lectures goes into the details and often repeats that "Syntax is not Semantics" (See the SEP article):

Anybody who has studied formal logic knows that rules like De Morgan's laws or the laws of idempotency (e.g. A ^ A = A) are independent of the meaning of the symbols being processed. A rule of the type

IF A THEN:
    B
ELSE:
    C

works regardless of the meaning of A, B, and C. But all a computer does is process rules of this type.

This is the idea that syntax (the rules) is independent of semantics (the meaning), and therefore a computer can function perfectly without ever knowing the meaning of what it is computing. Even the most advanced brain simulator, one that can pass all sorts of Turing tests, is still ultimately just shuffling symbols around without ever knowing the meaning of those symbols.
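As a minimal illustration (Python; the tokens and rule tables are invented for the example), the very same rule-application machinery handles logic symbols, Chinese characters, or gibberish alike - it never consults what any token means:

    # Purely syntactic rule application: substitute tokens according to a table,
    # without any notion of what the tokens denote.
    def apply_rules(symbols, rules):
        return [rules.get(s, s) for s in symbols]

    # The identical code handles "meaningful" and meaningless vocabularies alike.
    logic_rules   = {"A^A": "A"}               # idempotency, as a rewrite rule
    chinese_rules = {"你好": "你好，你好吗？"}    # a canned reply; the code knows no Chinese
    gibberish     = {"zqx": "jjv"}

    print(apply_rules(["A^A", "B"], logic_rules))
    print(apply_rules(["你好"], chinese_rules))
    print(apply_rules(["zqx"], gibberish))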

Searle claims that this shows that no computer, no matter how advanced, can be considered truly intelligent, since it lacks any understanding of the meaning behind the symbols.

Somewhere in the lectures I linked to above, he does mention, however, that if a biological artificial brain were produced, this might lead to true intelligence, since it would possess the biological characteristics of human brain processes and would be driven by whatever mechanisms drive human brains.
See this question for further details: How can one refute John Searle's "syntax is not semantics" argument against strong AI?

Alexander S King
  • Thanks, will take a look at the lecture later. For now I'll just say that to me, the distinction between biological and simulated seems arbitrary. With enough processing power you could replicate a human brain down to molecular scale inside a computer. It will function just as well as the original. – J.Doe Jun 01 '16 at 19:23
  • @J.Doe "With enough processing power you could replicate a human brain down to molecular scale inside a computer." I agree with you to some extent. I'm not defending John Searle's position, just explaining it. See the post I linked to for more information. – Alexander S King Jun 01 '16 at 20:24
  • You write "John Searle's Chinese Room example is clumsy and is vulnerable to all sorts of refutations". I would like to know what is so clumsy about it. Most, if not all, philosophical arguments are open to attacks, and the fact that the Chinese Room is, too, does not make it any worse. On the contrary, it is one of the most famous thought experiments in philosophy of mind, and not accidentally. Throwing such insults at it is like throwing insults at a mirror. – nir Jun 02 '16 at 06:59
  • @nir what is so clumsy about it is that it can be refuted on purely technical grounds: Anyone with some basic knowledge of computer architecture can respond "Of course the man in the room doesn't understand Chinese - it is the combination [man + rule book + db of chinese symbols] that understands Chinese". People get bogged down in this argument and miss John Searle's more important point of Syntax vs Semantics. – Alexander S King Jun 02 '16 at 17:12
2

The difference is that we are conscious, that is, we have the subjective experience of understanding and awareness. Remember, it was consciousness that we were trying to explain in the first place, not the ability to process input and produce output.

There is a difference between being able to produce Chinese answers to Chinese questions and hearing a question in Chinese and thinking, "Oh, I know what that means". The latter is consciousness.

David Schwartz
  • But how do we know computers do not also have a subjective experience, or stated otherwise that our own subjective experience cannot arise from purely mechanical means? – Dan Bron Jun 02 '16 at 03:33
  • @DanBron: As long as they do not "behave" as if they do (show signs of it), we have no reason to assume it. It would be mere speculation, which is bad as it is open to even the lightest form of scepticism - and rightfully so. Call it Occam's Razor, the Sellarsian Myth of the Given or whatever you like: we have the knowledge that there is no reason to assume it, or that anything substantial can be said about a computer's consciousness. Why should we bother to think about mere logical possibilities instead of truth-bearing reality at this point? – Philip Klöcking Jun 02 '16 at 10:16
  • @PhilipKlöcking By the strong Church-Turing thesis, there is a Turing machine that exhibits the same "behavior" as you or I, including all the behavior that leads you to believe that humans experience consciousness. – Tim kinsella Jun 02 '16 at 13:01
  • @Timkinsella: Yes, but that is a different problem. Here the question moves to 'Can it show the behaviour because it is meant to do so (e.g. by way of programming) or because the programming enables the machine to be conscious?' Here it is more complicated regarding language, but philosophically, there have been answers on that for about 90 years now, i.e. the dependence on bodily positing in the corporeal environment that is a precondition for consciousness. – Philip Klöcking Jun 02 '16 at 13:12
  • @PhilipKlöcking I'm not quite sure what you mean. Can you expound a little on the "answers"? – Tim kinsella Jun 02 '16 at 13:22
  • @DanBron Why not just assume that everything is conscious? Then certainly the Chinese room is conscious. But in any event, that argument seems a bit absurd. Say you could have a massive lookup table with every possible Chinese question and a Chinese answer, that lookup table executed by a trivial machine. It seems absurd to argue that the trivial machine and lookup table has the subjective experience of understanding the Chinese input. – David Schwartz Jun 02 '16 at 16:08
  • @DavidSchwartz A lookup table containing every conversation tree of modest length would be many times the size of the known universe. Searching it would be impossible because of the universal speed limit. Put in those terms, it's a little less compelling, at least to me. – Tim kinsella Jun 02 '16 at 16:24
  • @Timkinsella And certainly would only be conscious in a trivial sense, such as if you argued that everything was conscious. The point is, it's only conscious if you assume it's conscious. It doesn't explain anything about consciousness if the only way to argue it's conscious at all is to assume it. – David Schwartz Jun 02 '16 at 16:25
  • @DavidSchwartz Such a setup would not be able to perform a real time conversation. So it doesn't even exhibit the behavior you would want to say correlates with consciousness – Tim kinsella Jun 02 '16 at 16:28
  • @Timkinsella Right, but on what basis can you argue there's a connection between those two things, other than supposition? – David Schwartz Jun 02 '16 at 16:34
  • In fact there is no such setup. It's literally impossible in principle. – Tim kinsella Jun 02 '16 at 16:34
  • Which two things? – Tim kinsella Jun 02 '16 at 16:35
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/40668/discussion-between-david-schwartz-and-tim-kinsella). – David Schwartz Jun 02 '16 at 16:37
-2

"Consciousness" requires definition. As a neurologist, I use the term in as many as four different ways depending on the circumstances. Let's see if we can box it in for the purposes of this discussion by defining some related concepts:

Reflex: a simple deterministic system. Input -> output. Neurons are connected in series.

Cognition: implies a complex system in which concepts govern the relationship between input and output. Input -> concept -> output. A concept is a set of linked attributes, in the sense that "hairy," "barking," "four-legged" and "smelly" might link to form the concept of "dog." Linkage between attributes represents an implicit theory. Theory = meaning. (See Bernstein, "A Basic Theory of Neuropsychoanalysis.")

Neural network: a computing system made up of numerous simple processing units that are massively interconnected. They are hooked up in parallel as well as in sequence. Cognition is the characteristic behavior of a neural network. All brains capable of thinking are neural networks. Artificial neural networks can be thought of as models of the brain, albeit with important qualitative and quantitative differences.

Intelligence: a parameter of cognition. Anatomically speaking, intelligence correlates with the degree of connectivity of neurons in the brain. Humans have more connections than animals. Einstein had more connections than me. When we say something is intelligent, we mean that it has the capacity to carry a lot of information in the form of concepts and theories. Concepts can be numerous, subtle, and hierarchical. Theories can be deep and valid.

Sentience: a superset of intelligence that implies the presence of consciousness.

Consciousness: remains to be defined. We can call it "self-awareness" for now.


The "deterministic" system described by OP is neither intelligent, nor conscious. The computer system OP describes is reflexive.

Not all computers are reflexive. Suppose the task were handled by a self-organizing neural network. The formation of concepts that drive output implies theory formation. Even though these theories are primitive and idiosyncratic, we could say that such a system has rudimentary intelligence.
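To make "concepts as linked attributes" concrete, here is a deliberately minimal sketch (Python; the attributes, observations, and linking threshold are invented for the example):

    from collections import Counter
    from itertools import combinations

    # Count how often attributes co-occur across observations; strongly linked
    # attributes form a rudimentary "concept" (cf. the "dog" example above).
    observations = [
        {"hairy", "barking", "four-legged", "smelly"},
        {"hairy", "barking", "four-legged"},
        {"hairy", "four-legged", "smelly"},
        {"feathered", "chirping"},
    ]

    links = Counter()
    for obs in observations:
        for a, b in combinations(sorted(obs), 2):
            links[(a, b)] += 1

    # Attribute pairs seen together at least twice count as "linked".
    concept = {pair for pair, n in links.items() if n >= 2}
    print(concept)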

To make this computer seem human, we need to build in enough processors, and a massive enough degree of connectivity to approximate the capabilities of the human brain, both in terms of raw storage capacity, and also in terms of the capacity to form hierarchical concepts. We would also need to build in emotion. "Emotion" may require assimilation of as few as three attributes: valence (punishment vs reward), intensity, and whether an approach or withdrawal response is called for.
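A toy rendering of that three-attribute scheme (Python; the stimulus labels and values are made up):

    from dataclasses import dataclass

    @dataclass
    class Emotion:
        valence: float    # negative = punishment, positive = reward
        intensity: float  # 0.0 .. 1.0
        approach: bool    # True = approach, False = withdraw

    # A made-up appraisal table mapping stimuli to emotional responses.
    def appraise(stimulus: str) -> Emotion:
        table = {
            "food":   Emotion(valence=+1.0, intensity=0.6, approach=True),
            "threat": Emotion(valence=-1.0, intensity=0.9, approach=False),
        }
        return table.get(stimulus, Emotion(0.0, 0.1, False))

    print(appraise("threat"))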

I don't know that it's possible for us to design such a system. But suppose we could. Would it be conscious?

Perhaps not, because of the question of "self-awareness."

I dislike the term "self-awareness" because it's too easily defined in ways that are trivial or circular. It means something (perhaps not much) if you are aware of my self. Anything with a brain, and any computer, can do that. As to whether you are aware of your own self -- of course you are. You ARE your own self.

The concept of "self-awareness" only has meaning, then, if the nature of the self is ambiguous.

The question then becomes, under what circumstances would a computer be faced with an ambiguous self-nature?

To date, we have evidently not seen the need to design such a computer. Ambiguity is a bug, not a feature. To be clear, the more complex the system, the more difficult the design task will be, and the more unpredictable the result. But it will be a system nevertheless, one that will render its own particular output perfectly. Such a system will have no basis to contemplate or even to define its "self" in any meaningful way.

But what if we were to design a computer that could evolve? If it could perceive that it were changing over time, would that be enough ambiguity to cause it to consider the nature of its self?

I doubt it. All humans evolve, in a sense. We mature over time. But the change is seldom dramatic enough to cause us to question our own nature.

With regard to the question of the Self, humans are struggling with bigger issues. Priests tell us we are body and soul. Philosophers tell us we are an existence and an essence. Psychoanalysts tell us we are an Ego and a Self. Are these conundrums unique to humans? Or can computers get in on the action?

Possibly. The question is, what is the nature of intelligence? Where does it come from?

So far, we have been talking about intelligence as a form of information that arises as an emergent property of a complex system, in the sense that theories and concepts represent the work product of a neural network.

We also hinted at the obverse; in other words, the notion that complex things are based on information. When we talk about designing computer systems, for example, the "design" part refers to information. If a computer design is based on a blueprint, we can look at the computer as a physical manifestation of the information space outlined in the blueprint.

Likewise, our solar system is a physical manifestation of information. Information in the form of gravity came over from the infinite beyond into the "3+1" material world at a time when the universe consisted of rapidly expanding gases. Gravity caused gas clouds to coalesce into stars and planets. The structure of the information space carried in gravity implied the structure of the physical universe, including our home.

The human brain is a manifestation of an information space, and in spite of what one might have heard, we are nowhere near understanding the nature of that information. We have decoded the human genome, and found it to account for little of the brain's structure. The most important gene that distinguishes the human brain from that of animals is ARHGAP11B, which allows for wild chaotic branching of neural connections. Hebb's principle -- "neurons that fire together, wire together" -- then accounts for the development of the neural network. To say this is a non-deterministic process would be an understatement. In addition, there is evidence that the brain has fractal structure (www.stat.wisc.edu/~mchung/teaching/MIA/reading/fractal.kiselev.NI.2003.pdf), and so there's another information space that needs to be accounted for.
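For readers who want Hebb's principle in concrete form, here is a deliberately minimal sketch (Python; the network size, activity patterns, and learning rate are invented): a connection weight grows whenever the two units it joins are active at the same time.

    import random

    # Hebbian update: the weight w[i][j] increases when units i and j fire together.
    n = 4
    weights = [[0.0] * n for _ in range(n)]
    learning_rate = 0.1

    for _ in range(100):
        # A random binary activity pattern stands in for "firing".
        activity = [random.choice([0, 1]) for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j:
                    weights[i][j] += learning_rate * activity[i] * activity[j]

    # Units that tended to fire together end up strongly "wired" together.
    for row in weights:
        print([round(w, 1) for w in row])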

What is the information space that defines the brain? And what (if anything) does that have to do with our concept of "consciousness?"

If consciousness is only ever a property of a distinctly human intelligence, I would submit that we cannot possibly design a computer that would have a consciousness separate from that of its maker. No matter how human it seemed, it would always be the puer aeternus, never able to differentiate its Ego from the parent. (It might still try to kill us, but that's another story.)

But. To the extent that anything we imply in the word "consciousness" precedes human intelligence, we might be in trouble. Note well, the laws of thermodynamics suggest that it's a downhill run from star nurseries to intelligent life. If so, the information that gave rise via gravity to primordial star nurseries preceded and implied human intelligence. In that case, we can expect intelligence to stay in its meat suit as long as that's the best way to fend off the Second Law. When it encounters a more efficient solution, it will jump ship.

In summary, the answer to your question is:

1. The brain does not work like the computer you describe.
2. But it is theoretically possible to design a computer that does. Whether or not such a thing is a practical possibility is another matter.
3. We can't know if such a computer could become sentient. Under some circumstances, it would be inevitable.

  • Very informative answer. I'm surprised however that you did not mention how quantum effects allow humans (i.e. socially constructed neural networks) to hermeneutically transgress boundaries that classical Turing machines and even relativistic Turing machines cannot. – Alexander S King Jun 02 '16 at 05:06
  • Could you work on rewriting this answer to be much clearer and much less stream of thought? (Your other answers on a few questions have been poor enough to look like spam.) – virmaior Jun 02 '16 at 06:43
  • @Alexander S King Quantum consciousness is nonsense – D J Sims Jun 02 '16 at 15:04
  • @DJSims the response already invokes string theory, I'm not going any deeper down the rabbit hole by referring to quantum effects. – Alexander S King Jun 02 '16 at 16:56