1

There are many famous philosophers who assume that "consciousness can be created by calculations in a computer". For example: Nick Bostrom with his simulation argument [I greatly respect and love his books and work]

Why do people assume that? Is there a good argument for it?

Here is why, as a computer scientist, I find that proposition to be absurd.

  1. Because all that your computer is doing is addition in a loop with if-else branching using electronic logic gates ... I can do the same calculations without electricity using mechanical gears, or better yet, I can do the same calculations by putting people in a hall and asking them to work out simple additions with pen and paper. They are Turing-complete, meaning they can calculate anything that your computer can, including running Windows Vista and any program possible ... now yes, using people with pen and paper to calculate what Windows Vista would output on the screen would run at 1 frame per day, but the point is, it is still running, no different from how any program runs.
  2. Let's say that we scan every atom in my brain with its precise position and momentum ... We run a program on a large computer that simulates the real physics of the atoms in my brain. The behavior, i.e. the apparent emotions and actions, of the real me and of the digital simulated brain of me would be identical. However, it is obvious to me that the digital simulated brain would not be conscious, i.e. it would be a "philosophical zombie", because the digital brain is just a calculation of what a real brain would do ... If we take my freezer and run a physics simulation of it, would you expect your CPU to get cooler or something? No, because it is just a calculation of where the atoms will end up in the simulated freezer.
  3. 2+3=5. Did I just create consciousness by doing that!!?? So is this website also conscious? I wonder what it is like to feel to be StackExchange.com's consciousness? I see this as an absurdity.
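Point 1 can be made concrete with a toy sketch (Python is used here purely as a convenient notation; this is an illustration, not a model of any real CPU): a single NAND truth table, realisable with transistors, gears, water valves, or people following rules on paper, is enough to build a working adder. Notice that the substrate never appears in the code.

```python
# Every arithmetic step a CPU performs can be reduced to one primitive gate
# (NAND here). Nothing about the reduction depends on the gates being
# electronic: the same truth tables could be realised with mechanical gears
# or people doing pen-and-paper lookups.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Standard gates derived from NAND alone.
def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor(a, b):   return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry):
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = xor(a, b)
    return xor(s1, carry), or_(and_(a, b), and_(s1, carry))

def add(x: int, y: int, bits: int = 8) -> int:
    """8-bit ripple-carry addition built only from NAND gates."""
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

Whether `nand` is implemented in silicon or by a person in a hall, `add(2, 3)` computes 5 by exactly the same procedure; that is the force of the Turing-completeness point.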
Mustafa
  • 153
  • 5
  • 9
    Who told you that your brain is NOT doing just "additions in a loop" using neuronal gates? – Rodrigo Jul 03 '20 at 14:47
  • 1
    @Rodrigo but you can do the additions in a loop on pen and paper ... would the universe decide that those calculations are not conscious but calculations with electricity are ... what would the universe think of calculations made with mechanical gears, or water logic gates, or light logic gates? – Mustafa Jul 03 '20 at 16:02
  • 8
    Who told you that a conscience will NOT emerge on calculations done in different media? – Rodrigo Jul 03 '20 at 16:06
  • The problem is that we still have no terminology. We don't know what consciousness is. We use this term, but we are still not able to define it. The same goes for all the other words: "world", "qualia"... What is "qualia"? A transfer function from an electromagnetic field to organic sensors or centers in the brain? Every robot has the same. Feeling? What is a feeling? Robots feel: the road, light... Modern philosophy still has not crossed the terminology threshold. There is no way to write a novel in an unknown language – RandomB Jul 03 '20 at 21:07
  • The 3rd is the best. You said "I just created". So, is the root reason the time? The process? What if it is written on paper, like static text? Without a live process? If calculation creates consciousness then static calculation creates it too. Prepared consciousness? Because, you know, all these calculations can be written as theorems, axioms, etc. Calculation can be logged and printed on paper, and if it's static, it's still calculation. Can consciousness be static? An uploaded mind with decreased thinking velocity? Paused? Still consciousness? – RandomB Jul 03 '20 at 21:15
  • 2
    It's pretty obvious that a simulation can become conscious. I mean, look at us. – Peter - Reinstate Monica Jul 03 '20 at 23:56
  • 1
    Humans are just a bunch of atoms jiggling around, obeying the laws of physics. If humans can be conscious, why not a computer? What is it that humans have that a computer is lacking? Is it something in biology? – littleO Jul 04 '20 at 02:24
  • 1
    Even philosophers that believe that consciousness can emerge from a "computer" do not have in mind the primitive devices we have today. They imagine some advanced AI that will be created in the future and share basic functional principles with current computers, but will be far more complex. And their reason for believing it is that they see neuroscience as succeeding, little by little, at explaining our minds by representing our brains as functioning on such principles. This is called [computational theory of mind](https://plato.stanford.edu/entries/computational-mind/). – Conifold Jul 04 '20 at 07:25
  • How do I know that *you* are conscious? Maybe you are an impressive AI that is using this site as an advanced Turing Test. The only thing that I can be certain of is that *I* am conscious. (v. Descartes) – chasly - supports Monica Jul 04 '20 at 09:30
  • 1
    It's not clear if this is a question or a statement, but as a question it is too broad. Summarizing the views of multiple philosophers in multiple writings would take a long time. The question needs focus. – tkruse Jul 04 '20 at 12:42
  • @littleO the answer may be the theory of Roger Penrose and Stuart Hameroff, which declares that the brain is not a machine producing consciousness but a quantum neurocomputer (search "Orch OR"). Their presentation was at this event - http://gf2045.com/program/, PDF: https://www.sciencedirect.com/science/article/pii/S1571064513001188. John Eccles even asserted that the brain is something like an interactive reality quantum decoder (see "psychons"), so due to Eccles's dualism it's not a computer-in-itself. – RandomB Jul 04 '20 at 17:47
  • @chaslyfromUK yes, there is a concept in "Simulation" that some of us may be NPCs - "philosophical zombies". But there is a more interesting theory, that at least `ALL - 1` human beings (or even `ALL`) are philosophical zombies ;) You can find a very good light novel (ranobe) about this - "Murasakiiro no Qualia, 紫色のクオリア" – RandomB Jul 04 '20 at 17:52
  • 1
    You might be interested my answer [here](https://philosophy.stackexchange.com/a/72799/10780) which is partly about Tegmark's mathematical universe theory, but also talks about philosophy of mind and the notion of "psychophysical laws" connecting physical processes to conscious experience, with some arguments about why if there are psychophysical laws, it seems plausible that they would say that a simulation of a brain would give rise to the same experiences as the original brain (see Chalmer's [Absent Qualia, Fading Qualia, Dancing Qualia](http://consc.net/papers/qualia.html) for an argument) – Hypnosifl Jul 06 '20 at 19:10
  • Why can't a sufficiently complex paper calculation be conscious? See XKCD: A Bunch Of Rocks https://xkcd.com/505/ – CriglCragl May 07 '23 at 22:55

8 Answers

5

Nobody has ever found any credible evidence that the human brain is anything besides a very complicated computer running strange software. Nobody has ever found any credible evidence that humans are not the same thing as p-zombies. In particular, you don't have any credible evidence that you're not a p-zombie.

"But that's ridiculous," you may say. "It's completely obvious that I'm not a p-zombie."

Yes, well, lots of falsehoods are completely obvious! For example:

  • It's completely obvious that the earth is motionless.
  • In the checker shadow illusion, it's completely obvious that the square in the shadow is lighter (meaning that it's printed with lighter ink, or that it's displayed using a greater pixel brightness) than the square outside of the shadow.
  • If you flip a fair coin a large number of times, it's completely obvious that the difference between the number of heads and the number of tails will oscillate around 0.
  • If a test for a given disease is accurate 99% of the time, and somebody takes the test and the result is positive, it's completely obvious that there's a 99% chance that the person has the disease.
  • Suppose that there is some man who is older than some woman. It's completely obvious that the woman can never, later on, be older than the man.
  • Suppose that there are two objects, and we want to increase the distance between them. It's completely obvious that the only way for that to happen is for at least one of the two objects to move.

All of these statements have seemed completely obvious to some person or another at some point, and yet all of them are false.
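The disease-test item, for example, can be checked with a few lines of arithmetic (it is an instance of the base-rate fallacy; the numbers below are illustrative, assuming 1% prevalence and a test that is 99% accurate in both directions):

```python
# Bayes' rule applied to the disease-test example from the list above.

prevalence = 0.01           # P(disease)
sensitivity = 0.99          # P(positive | disease)
specificity = 0.99          # P(negative | no disease)

p_positive = (prevalence * sensitivity
              + (1 - prevalence) * (1 - specificity))    # P(positive)

p_disease_given_positive = prevalence * sensitivity / p_positive  # ~0.5, not 0.99
```

With these numbers, true positives and false positives are equally common, so a positive result only raises the chance of disease to about 50%, despite the test being "99% accurate".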

So it's clear that merely being completely obvious isn't good enough reason to believe something. In order for us to be justified in thinking that humans have a kind of consciousness that p-zombies do not, we ought to have some kind of corroboration of this notion. Some additional evidence for it.

But there's no additional evidence for it.

Tanner Swett
  • 813
  • 4
  • 10
  • True, maybe I shouldn't have said that it's obvious ... I mean, I can view the world as if everyone else is a p-zombie robot and I'm the only conscious entity in the universe. :) – Mustafa Jul 04 '20 at 11:09
  • @TannerSwett What evidence is there that the brain literally runs software? – J D Jul 08 '20 at 14:30
  • 1
    @JD I don't know of a definition of the word "software" which is precise enough to make answerable the question of whether or not the brain literally runs software. – Tanner Swett Jul 08 '20 at 14:36
  • @TannerSwett A humble and intelligent admission, indeed. – J D Jul 09 '20 at 14:55
4

Materialism perspective

One of the arguments for the possibility of simulated minds comes from the assumption that in general, physics can be simulated. If all physics can be simulated, then it must be conceptually possible to simulate (among other things) all the processes happening on Earth, including biology, cells, organisms, living beings and also including human-like beings.

The materialist (or physicalist) perspective asserts, in essence, that "physics is all there is to reality", so if the physics of these beings - including the physics in their brains - is correct, then all their external and internal behavior would also be "correct". Thus, asserting that these simulated beings are substantially different from us in some respect requires asserting that the functioning of our minds involves something more than "just" the physics and chemistry happening in our brains, which the materialist approach fundamentally denies as an unreasonable, extraordinary assumption with no supporting evidence.

Peteris
  • 1,249
  • 8
  • 8
3

Saying that consciousness is not possible in a simulated being has strange consequences as well. Suppose that we create a simulation that is very much like a human being. Say we carry out your case 2 on someone who has not yet become interested in consciousness, and then unfortunately the real person dies, but the simulation carries on and later becomes interested in consciousness (the real person would have too, but passed away before that point). Let's say the simulation doesn't have access to any literature or anyone to talk with about this, but discovers many of the same concepts itself (since the real person might have done the same) -- it comes up with the idea of qualia, the inverted spectrum argument, the knowledge argument, etc. It argues fiercely in a similar way to you. Do we really think it could have discovered these concepts without having access to real consciousness?

It seems that when I first started thinking about consciousness, having immediate access to the real thing -- conscious experience -- very much affected my thinking; it was the data used by my (I hope) logical reasoning. So how can the simulation end up reasoning the same way without also having access to it?

present
  • 2,480
  • 1
  • 9
  • 23
  • I would say the simulated brain is mistaken but the calculation of what the real brain would think is correct. Otherwise we have a contradiction, because how would the universe decide which calculations to make conscious and which not? Why did the universe decide that my math test is not conscious but the calculator in my computer is? – Mustafa Jul 03 '20 at 15:53
  • 1
    @mustafa Why attribute agency to the “universe” to “decide” whether or not a process should be conscious...? – Joseph Weissman Jul 03 '20 at 16:31
  • @JosephWeissman Should I say which law of physics "decides" which entity becomes conscious or not ... What about saying Maxwell's equations of electromagnetism "decide" how radio waves are created ... I've used "decide" again; maybe I can use "determine" instead, but this also attributes agency to the laws of physics. – Mustafa Jul 03 '20 at 16:53
  • @JosephWeissman Maybe you mean something else by your question. I attribute agency to show more clearly the absurdity of the claim that calculation can create consciousness ... Example: I'm sitting at my desk doing calculations with pen and paper, and by magic the universe realizes that the calculations I'm doing are a simulation of what a brain would do if the brain thought it was conscious, and by magic, just because I'm doing the calculations on my paper, real consciousness is created ... this is what the claim that calculations can create consciousness entails – Mustafa Jul 03 '20 at 17:06
  • 2
    @Mustafa, this seems to be the wrong kind of argument against computational functionalism. the universe does not have to decide or determine that your calculation is a simulation of a brain. we do not deny that some calculations can be made to emulate cognitive functions. we do not wait for the universe to realize or decide or agree that these are cognitive functions, they just are cognitive functions, in action. some people would argue that there is nothing magical left out after all cognitive functions have been accounted for. – nir Jul 03 '20 at 20:22
  • @Mustafa, Alan Turing writes about this: “In considering the functions of the mind or the brain we find certain operations which we can explain in purely mechanical terms. This we say does not correspond to the real mind: it is a sort of skin which we must strip off if we are to find the real mind. But then in what remains we find a further skin to be stripped off, and so on. Proceeding in this way do we ever come to the ‘real’ mind, or do we eventually come to the skin which has nothing in it? In the latter case the whole mind is mechanical.” - you can find more about it in my paper. – nir Jul 03 '20 at 20:26
  • @Mustafa, I agree that it is strange to think that a calculation can come to be conscious (in line with Searle's argument), but I'm just arguing that the opposite also has strange consequences, which might explain why people think that way, answering your original question. When I first started thinking about consciousness it was essential to have access to the real thing, conscious experience, to make sense of it. That access affected my thinking; it was data used by my (I hope) logical reasoning. So how can the machine come to the same conclusion without also having access to it? – present Jul 06 '20 at 17:32
3

What is consciousness?

There isn't a commonly accepted definition of consciousness (that I'm aware of).

This lack of a precise definition presents the first and biggest problem in providing a concrete answer to "can artificial intelligence be conscious".

The Wikipedia page on animal consciousness goes into a bit of detail about this. It lists one example in 2004 where eight neuroscientists said the following in "Human Brain Function":

... we do not know whether consciousness can emerge from non-biological systems, such as computers... At this point the reader will expect to find a careful and precise definition of consciousness. You will be disappointed. Consciousness has not yet become a scientific term that can be defined in this way. ...

How can you say something is or is not conscious if you don't know what exactly consciousness is?

How do we measure consciousness?

So we don't have a definition, but let's consider how it might be measured.

Both of the below present some problems, but I don't think there are other ways to go about measuring consciousness (they cover a fairly broad range of possible ways to measure it).

So we may never be 100% sure something is conscious, even with a formal definition and a way to measure it.

But let's see what these ways of measuring might say about artificial intelligence.

Physical internal analysis

One way to measure it would be to look at the inner workings of an entity (e.g. a brain scan or map or looking at the architecture or flow of a computer or computer program).

Measuring it in this way would give the most conclusive answer, but it's also really, really hard. What if you can't see these inner workings in enough detail? What if you can't fully understand it? How would you even translate what you're seeing into an answer to "is this thing conscious"? If you can't fully understand it, would you be able to compare it with an entity that works fundamentally differently? There is still a ton we don't understand about what's going on in the human mind, so we're not really at a point where we can use that to tell whether something completely different is conscious.

Although one could actually make a fairly strong case here that some artificial intelligence can be conscious (in theory). Assume "souls" (as in non-natural parts of our minds) don't exist or aren't required for consciousness (this argument wouldn't work too well without this assumption). So brains are just biological systems that are routinely created in nature during reproduction. There doesn't seem to be any good reason to believe it's fundamentally impossible to artificially create something that can be naturally created, or to create something that functions exactly the same. In fact, artificial neural networks are, as the name suggests, created to be structurally similar to how brains work (but of course built from code instead of biology). They are, of course, much, much simpler, since there isn't enough computing power in the world to come close to what a human brain does (though this seems to be more of a practical challenge than a theoretical problem).
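To make the structural analogy concrete (a sketch only; this is not a claim about how real neurons compute, and the weights below are hand-picked for illustration): an artificial neuron is just a weighted sum of inputs passed through a nonlinearity, loosely modelled on a biological neuron integrating signals at its synapses.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, squashed by a sigmoid "firing rate".
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# With hand-picked weights, this single neuron computes logical AND:
# it "fires" (output near 1) only when both inputs are 1.
and_weights, and_bias = [10.0, 10.0], -15.0
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(neuron([a, b], and_weights, and_bias)))
```

Real artificial neural networks connect millions of such units and learn the weights from data rather than setting them by hand, but the basic unit is no more mysterious than this.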

Observational analysis

The other way to measure consciousness would be through observation, e.g. looking at how it behaves in response to stimuli.

We can compare this to the closely related topic of self-awareness. This is generally measured exclusively through observation. One common way to measure it is by putting the entity in front of a mirror and seeing whether it recognises itself. You could also potentially ask it questions (if it's able to answer) to see whether its responses demonstrate self-awareness.

This has what I'd call "the simulation problem". If I am conscious, and something acts exactly like I would, is that thing also conscious? Since we're only measuring consciousness through observation, the only possible answer we can give here is "yes".

Now artificial intelligence might still be a long way off from what humans are capable of, but it has made quite a lot of progress in many complex areas (such as natural language processing and computer vision). Of course there is still a long way to go, but I don't see a reason to believe that a computer or other artificial device is fundamentally incapable of simulating the behaviour of a human. For reference, I am a computer scientist working with artificial intelligence.

So, with this type of analysis, we would conclude some future artificial intelligence can be conscious (in theory).

NotThatGuy
  • 4,003
  • 13
  • 18
2

You ask for "a good argument" for the idea that consciousness is possible in a simulation. I'm not sure that any such argument exists, because in the absence of any agreement as to what consciousness actually is, or any objective way to measure or define it, it is impossible to meaningfully argue.

However, I will attempt to present one interpretation of consciousness in which it is clearly possible within a simulation; I do not expect to convince you that this interpretation is correct, but perhaps it will help you to understand what someone like Nick Bostrom might be thinking when writing about simulationism or AI.

The behavior IE the apparent emotions and actions of real me and the digital simulated brain of me would be identical. However it is obvious to me that the digital simulated brain would not be conscious [...] because the digital brain is just calculations of what a real brain would do ...

I think this is the key point of potential confusion; what is it that you think a real brain does? From my perspective, the relevant function of a brain is to process data in order to make decisions, and it seems obvious that consciousness is an essential part of that decision-making process.

(An actual brain also performs various autonomic functions such as regulating body temperature and heartbeat, but I assume we can agree that those functions are irrelevant in this context.)

If we take my freezer and run a physics simulation of the freezer would you expect your CPU to get cooler or something, no because it is just calculation of where the atoms will end up in the simulated freezer?

A freezer's action is physical in nature - it transfers heat from one place to another. Simulating a physical action is clearly distinct from actually performing that action.

On the other hand, we do not draw any distinction between "adding two numbers" and "simulating the addition of two numbers", or argue that a computer is only capable of doing the latter. This is because addition is a conceptual action, not a physical one. All that matters is the algorithm, not the physical implementation of it - whether you use an abacus, a brain, or a binary adder, you're still adding numbers.
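That substrate-independence is easy to demonstrate concretely (Python used only as a convenient notation): the three procedures below work by completely different mechanisms - a built-in operator, repeated counting, and pen-and-paper column arithmetic over digit characters - yet each one *is* addition, not an imitation of it.

```python
# Three implementations of the same conceptual action. None of them is
# "merely simulating" addition; addition just is whatever satisfies the
# input-output behaviour, regardless of mechanism.

def add_builtin(x, y):
    return x + y

def add_counting(x, y):
    # Addition as repeated successor: move one "bead" at a time,
    # the way an abacus (or a child counting on fingers) does it.
    while y > 0:
        x, y = x + 1, y - 1
    return x

def add_decimal_strings(x, y):
    # Pen-and-paper column addition over digit characters, with carries.
    a, b = str(x)[::-1], str(y)[::-1]
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        d = carry + (int(a[i]) if i < len(a) else 0) + (int(b[i]) if i < len(b) else 0)
        out.append(str(d % 10))
        carry = d // 10
    if carry:
        out.append(str(carry))
    return int("".join(reversed(out)))
```

If only one of these counted as "real" addition, we would need some principled way to say which - and there is none; the same puzzle arises if consciousness is a conceptual action.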

Decision-making - and therefore, in my interpretation, consciousness - is very clearly an action of the second kind, a conceptual one, and therefore independent of the physical implementation.

Again, I am not attempting to convince you that this interpretation is correct, merely that it is both self-consistent and plausible. Personally, I think Occam's Razor favours it over the idea that consciousness has some sort of independent physical existence, or is an aspect of the supernatural, but that's just one opinion. What I would ask is: do you have a good argument for your interpretation of consciousness?

  • I understand the opposing position better now ... I view consciousness as an unknown physical process, just like electromagnetic or nuclear radiation ... people didn't understand what those were before, and now they do. Just as you can't create nuclear radiation by simulating nuclear reactions, or electromagnetic waves by simulating them, you can't create consciousness with simulation. – Mustafa Jul 04 '20 at 11:20
  • @Mustafa, I gather that is John Searle's position too (the Chinese Room Argument) though I've never been quite sure that I understood what he was trying to say correctly. Anyway, my primary objection is that we now know a respectable amount about how the brain functions, and it does look far more like a machine that manipulates information than it does like a machine that implements some unknown physical process. I concede that we don't know enough to be certain. It just seems to me to be the better bet. – Harry Johnston Jul 04 '20 at 11:33
  • There are also big moral and ethical issues ... You don't need any tech ... I just imagined a little girl being tortured. The girl that I imagined should then be conscious, a girl with real consciousness ... because I imagined, or simulated, her in my brain - HEY, she is even being simulated on a biological substrate ... We have a moral catastrophe ... None of you actually believes that consciousness can be created just by simulation or calculations ... you're forced to accept this if you take the view that simulation creates consciousness seriously – Mustafa Jul 04 '20 at 11:37
  • That doesn't follow. Imagining something is not the same as simulating it. – Harry Johnston Jul 04 '20 at 11:45
  • I beg to differ, imagination is simulation ... The rules of physics, or the rules of the world you imagine, do not always reflect reality, but it is still a simulation ... just because she is being run on my brain doesn't mean she doesn't have real consciousness ... are you saying she must be running on an electronic computer and my brain is not good enough to simulate her? ... The brain is Turing-complete, by the way. – Mustafa Jul 04 '20 at 12:14
  • 1
    Huh. That's not what *I* mean when I use the word "simulation" and I don't think it is what Nick Bostrom has in mind either. A simulation involves actually doing the mathematics, figuring out in detail what would really happen in a given situation, not just guessing. A weather simulation, for example, is more than just imagining "well, perhaps it's going to rain". – Harry Johnston Jul 04 '20 at 12:46
1

This is a debate that has been going on for hundreds if not thousands of years, and will go on for many more.

There are many arguments on both sides of the "trench", and I assume that you are familiar with some of them, given that you know of the term philosophical zombie.

My take on this question is that it is possible that you are aware of something obvious that most people appear to be blind to. Religious traditions across the world, for thousands of years, have reported this "awareness" and this "blindness" under different names.

For example in Hindu Advaita Vedanta this is referred to as being able to see the divine nature of the mind and of ultimate reality. Such people are literally called seers - rishi - and there are parallels in other traditions.

However, having a position on this core question of philosophy of mind does not in itself entail seeing or not seeing since there are many "wrong" reasons and false arguments for either position. We have an infinite capacity to endlessly debate anything and bury it under mountains of arguments and jargon.

Finally, this explanation would naturally seem nonsensical to most people, but this unfortunately cannot be avoided.

A few days ago I uploaded a paper I have written about this, in which I review this problem and the multitude of opinions about it at length, and I shamelessly plug it here. It is called "A key hidden in plain view":

https://philarchive.org/rec/AIDAKH

I would love to hear what you think of it (note I am not a professional philosopher).

nir
  • 4,531
  • 15
  • 27
  • 4
    What is the actual answer you're giving here? That there are arguments on both sides? Like what? That the key is the (perceived?) difference between "awareness" and "blindness" maybe? I don't quite understand what you mean by that, can you expand on it a bit? I didn't read your paper, so I don't know if you go into more details there, but generally answers should be self-contained (although *additional* links are always helpful once the self-contained criteria is met). – NotThatGuy Jul 03 '20 at 22:06
  • Isn't it a given that "awareness" and "blindness" are different? They're virtually antonyms. Unless you're saying that the terms refer to different concepts as a whole. Also, it would be hugely beneficial to readers if you provided a summary of the solution / idea you expound in your paper. – awe lotta Jul 04 '20 at 02:07
1

You are correct in finding this position absurd. And it's not just philosophers and computer scientists but also physicists, like Brian Greene, who openly state that they think consciousness is "created" by computation/symbol manipulation. There are three points I'd like to make.

  1. They never define what this "consciousness" is that they hope to replicate. If you are under the impression that my consciousness is an accumulation of atoms in a very narrow range of trajectories, then, according to the people in the materialist camp, replication of that arrangement should replicate me. Here's the problem. I can take you as a person, completely disintegrate you into your atoms, then add more atoms, and reconfigure you into the same arrangement twice. This is happening all the time anyway: we are eating food, excreting, losing skin and hair, losing some neurons, etc. Now when I reassemble you into two people, whose eyes are you looking out of? If you say person 1, it makes sense to ask what the material difference between person 1 and person 2 is, in order to pinpoint the exact process that is causing you to be attached to the body of person 1 and not person 2. If this sounds silly to you, it's because it is. Consciousness is a process that isn't owned by us, nor created by us, because if that were the case, my consciousness would have a label that makes it different from the consciousness of another person, who'd have another label attached to theirs. But it's not like that.

  2. Take anesthetic gases and you are supposedly "unconscious", so it makes sense to postulate that your consciousness was caused by the processes hindered by the anesthesia. However, that process is also material, so there's nothing stopping us from replicating it. In fact, that process is found in every single brain. The contents of consciousness might differ, but the consciousness of the contents is universally identical. "You're conscious of X" means you're conscious. That's the bottom line. Consciousness can be said to be equivalent to knowing. Knowledge might be replicated in books, videos, etc. The knower might be replicated as a dog, a cat, a man or a woman. But the act of knowing itself is untouched by all identifiable parameters.

  3. Gödel's incompleteness theorems highlight the fact that no formal system of symbols powerful enough to express arithmetic can be both consistent and complete. This is suggestive even if you're not a trained logician. You can keep on questioning your axioms and construct sentences that pit two of your axioms against each other. The way I like to think about it is this: think of yourself as an observer, made of the same symbols that you're trying to observe, coming to a conclusion about the symbols that you're observing. Where is the ground of understanding to be found then? Once you come to the conclusion that leaves are green, you ask: what is green? Green is certainly not out there. Green is also not perceivable unless you have the right cone cells in your retina. Green is also not perceivable if your brain doesn't know how to form a context within which it can contrast what green means. To perceive green means to perceive that which is not green. But then how can you ever know fundamentally what green is, without its context or boundaries? You can't. Gödel shows exactly this semantics with logic. You cannot assume a set of axioms that explains everything. They will collapse somewhere, and if they do, you never have a complete symbolic explanation of everything. Then there's the other interpretation that Roger Penrose likes to give: if you can construct a liar's-paradox-like sentence, which is perfectly consistent with the rules for constructing sentences in your system, but which, nevertheless, the system can't prove or decide upon, then there's a factor in the human mind such that it can see the validity of the statement and step out of the infinite regress of proving that statement. A computer wouldn't be able to answer this and would go into an infinite loop. Computers cannot understand sarcasm.
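A toy rendering of the liar's-paradox point (only a loose analogy, not a model of Gödel's construction or of Penrose's argument): a "sentence" defined as the negation of itself gives a naive evaluator nothing to bottom out on, so the evaluation never terminates.

```python
# A "sentence" whose truth value is defined as the negation of itself.
# A naive evaluator recurses forever trying to settle it; Python raises
# RecursionError rather than literally looping without end.

def liar():
    """This sentence is true iff it is false."""
    return not liar()

try:
    liar()
    outcome = "decided"   # never reached
except RecursionError:
    outcome = "no decision: the evaluator never terminates"
```

Whether a human "seeing" the undecidability of such a sentence demonstrates a non-algorithmic faculty, as Penrose argues, is itself heavily disputed (see the comments below).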

The materialist paradigm makes the mistake of taking conscious behaviour as equivalent to consciousness. It's not, and there's no reason for it to be. The problem is, the other possibility - that consciousness is prior to matter - is so radical that it would turn fundamental science on its head: think of something more fundamental than physics, giving rise to physics as physics gives rise to chemistry and subsequently biology. Scientists are just not sold enough to give up on their paradigm, because they operate under one where they expect to shuffle symbols and get an answer. Consciousness reserves the power to completely take yes for a no and no for a yes. That kind of inconsistency is not tolerated in any science that I know of. So it's unlikely consciousness will ever be recognized as a radically different phenomenon within current science. The hard problem continues to persist because it's the result of not accepting consciousness as something different from matter.

Weezy
  • 389
  • 1
  • 10
  • Would the down-voter explain themselves? What is offending them? – Weezy Jul 03 '20 at 15:51
  • 1
    Minor note: you can have a formal system which is consistent and complete. However, the issue arises when it tries to prove the truthfulness of all of its statements (and of each statement's negation). A system described using symbols could potentially be what we call "conscious," but for it to prove that it was conscious would be as difficult as it is for us to prove we are conscious. – Cort Ammon Jul 03 '20 at 16:07
  • 6
    "A computer wouldn't be able to answer this and would go on an infinite loop. Computers cannot understand sarcasm." - from the point of view of a computer scientist working with artificial intelligence (AI), this seems to only consider what simple traditional computer programs are capable of. Programs would have no problem with sarcasm, paradoxes or lies if they are written with the possibility of such things in mind by capable programmers. Or they are programmed with some sort of capability for learning and exploration and they figure it out themselves (which is AI in a nutshell). – NotThatGuy Jul 03 '20 at 22:21
  • 3
    The whole trope where you give any robot/AI a paradox to break them is mostly fictional (or at least only partially true). Yes, programs can and do break, but this is also quite avoidable, especially if the programmers have the specific use case in mind that would be a problem. – NotThatGuy Jul 03 '20 at 22:28
  • 1
    You really ought to replace "you are correct" with "I agree with your opinion." – Artelius Jul 04 '20 at 03:32
  • 1
    "Now when I do reassemble you into two people, whose eyes are you looking out of?" - this question makes no sense; there are now two people, each of whom sees out of their own eyes. How would they see out of anyone else's? – Harry Johnston Jul 04 '20 at 04:38
  • @HarryJohnston Agreed. This is why I downvoted this Answer. – nick012000 Jul 04 '20 at 07:46
  • @HarryJohnston Consciousness is about "you" not two "other" people. You are conscious of only one body ie yours. If I make another body, and then kill you, would you consider yourself alive or dead? – Weezy Jul 04 '20 at 07:51
  • @NotThatGuy I agree with programming for edge cases, but that's already stepping out of your formalized system and supplementing for edge cases which you come up with with your conscious recognition of the logical paradox. When I say computation, I mean a strict adherence to a set of rules that are complete as well. Without extra glue code and drawing from your conscious understanding of a paradoxical logical statement, you cannot achieve that. – Weezy Jul 04 '20 at 07:54
  • @Weezy, it is possible to hypothesize that there is some objective sense in which any particular duplicate is or is not the same person as the original - basically you are talking about a soul, or something essentially equivalent. But without that hypothesis, your question is meaningless. After you duplicate me, there are two people with separate consciousnesses but starting out with identical personalities and memories. That's all you can say; your scenario deliberately breaks the usual assumption that personal identities are unique and continuous, so the word "you" becomes ambiguous. – Harry Johnston Jul 04 '20 at 08:28
  • @HarryJohnston The opposite: I intend to show that the materialistic paradigm cannot fully account for personal identity and consciousness. You cannot replicate first person experience by replicating third person objects, bodies in this case. That was my point. Materialism asks us to not consider a very innate part of our beings, our sense of awareness. And it also assumes that our bodies generate this awareness like a machine generating sound. Except your awareness is not a sound or a taste or a smell or anything at all that is perceivable. It is the perception of something. – Weezy Jul 04 '20 at 09:07
  • 1
    I'm still not sure I understand your claim. What does your paradigm say will happen if you perfectly duplicate someone? That the duplicate will be unable to think, because it lacks this immaterial "consciousness"? – Harry Johnston Jul 04 '20 at 10:42
  • @HarryJohnston Nope. I never claimed such. I'm saying my thinking is precisely about consciousness occupying a more fundamental aspect of reality than physics. Consciousness and contents of consciousness(particles, motion, correlation, memory) are two different types of things. My body ages and will die. But if some intelligent agency could repair me on a cellular level, then I could theoretically outlive all the galaxies in the universe. What does this say about consciousness? That rather than being created by matter, it is the knowledge of matter. – Weezy Jul 04 '20 at 11:19
  • @HarryJohnston It helps to think about consciousness as the state of being and 'about-ness' rather than correlating particle motion and geometry with it. You cannot model observation as a shape, any attempt to do so and you end up with a shape which is an object of consciousness rather than consciousness. Conscious behavior too is different from consciousness. A system of particles might act intelligently but ultimately it's your consciousness that can look above the rules it plays by. You treat your body similarly, like a robot. I hit a hammer on my hand, and I feel pain. But what is pain? – Weezy Jul 04 '20 at 11:22
  • Doesn't help me. :-) Can you explain what you expect to happen if you perfectly duplicated someone? In what way would the behaviour of the duplicates differ in your paradigm from mine? Or, if the behaviour would not be different, how would the internal experience of the duplicates be different? – Harry Johnston Jul 04 '20 at 11:25
  • @HarryJohnston You cannot talk about the duplicate's internal subjective experience any more than you can assert a dog sees blue the same way you do. Maybe in the case of qualia you can be sure that your dup. sees the same qualia 'blue' when he sees a blue led. You can even predict their behavior as being same as yours. Real life twins are an example of this. But you cannot become conscious of him at the level that 'he', whatever 'he' is, is about his own body from his POV. My duplication experiment only shows that consciousness and conscious behavior have a conceptual abyss not easily amenable. – Weezy Jul 04 '20 at 11:29
  • @HarryJohnston You can even think of creating two identical versions of yourself and placing them in exactly identical rooms. In that case if the neurological memory of the said experiment was encoded in both brains, when you wake up, you'd have no way of saying 'which' version are you. Unlike two identical robots who won't have this issue because everything that's observable by us about them is 1 to 1. With yourself on the line however, you can't be too sure. And this question is pointing to something deep about the nature of your identity. What should you identify with? Clearly not your body – Weezy Jul 04 '20 at 11:33
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/110195/discussion-between-harry-johnston-and-weezy). – Harry Johnston Jul 04 '20 at 11:35
  • not just sarcasm (but I suppose one day they will be able). But what about empathy which was not programmed/embedded? For example, John may like nation A, but to hate nation B. Then one day, John may change his opinion and say "Stop! It's propaganda! They are humans like me, I sympathize with them, I feel sorry for them. They seem to be good people. I must help some of them". Computer, I think, cannot `produce originally missing empathy`, just to execute programmed one. – RandomB Jul 04 '20 at 18:04
1

I think that it is certainly possible to construct a device that is conscious, and to construct a true Artificial General Intelligence (I don't know if the latter requires the former, and I don't think anyone else knows this either). After all, a human is itself such a device.

However, I also see no evidence whatsoever, and a lot of contrary arguments, for believing that our current technologies are anywhere near achieving this. I do not even see any reason to believe that any extension of our current technologies would achieve it. The only devices that we know achieve it are brains, and brains don't work like computers. It can even be argued that the more powerful we make our computers, the less like human brains they become, not more: they are already far faster than brains and far exceed them at any task involving calculation, and this has thus far made them no more conscious than an abacus, as far as I can see.

Note: a great book on this that I read recently is The Promise of Artificial Intelligence by Brian Cantwell Smith. It is a descendant of the phenomenological critique of AI that Hubert Dreyfus articulated in the 1970s, but more sympathetically phrased (I'm a massive fan of Dreyfus, but in my view his acerbic mockery needlessly alienated the AI community and prevented reception of his ideas). Note that Cantwell Smith focuses more on the challenge of building an AGI than on building a consciousness. As I said earlier, I don't think it is clear whether these are different challenges or the same one.

Rollo Burgess
  • 331
  • 1
  • 7