
So, ok, it's by definition impossible for an outsider to spot a philosophical zombie, but could a philosophical zombie introspectively look inside itself and realize that it has no qualia?

Paulo Raposo
    I think the first question is “can a philosophical zombie be intelligent or conscious?”, and then if so, the second question is “can he realize he has no qualia?”. I wonder if something without qualia can be conscious or intelligent. – Just Some Old Man Jul 14 '22 at 17:18
  • Philosophical zombies are supposed to be behaviorally identical to conscious people, by assumption they say and do everything exactly the same. So I think the only way you might be able to justify the idea of such a zombie "realization" would be a situation where a conscious person (say, Daniel Dennett) concludes they have no qualia, and in your philosophical view they are deluded, but the brain processes underlying this statement are such that when identical brain processes occur in the person's zombie double, you would say the zombie is actually expressing a correct self-realization. – Hypnosifl Jul 14 '22 at 20:25
  • A p-zombie can't realize anything. It's an automaton. At the green light it does not see green and decide to advance; the green light in its vision sensor provokes a reflex to advance. It can look like it realized something, even say "oh I realized that..." but by its own definition it's only the pretense of it. – armand Jul 14 '22 at 22:03
  • @armand You're presupposing that "realizing" something requires consciousness conceived of as something metaphysically distinct from reliable forms of information-processing used to get from some initial data to some conclusions. Certainly there are eliminative materialists like Daniel Dennett who don't believe it even makes sense to talk about consciousness distinct from some form of physical information-processing, and they are fine with talking about people "realizing" things. – Hypnosifl Jul 14 '22 at 22:09
  • @Hypnosifl Just like with my example with the green light, a p-zombie has no qualia of you explaining what qualia feel like. It does not hear you, become conscious of what you say and make connections. It's defined to be purely computational. And just like there is a quale of "how it feels to see red", there is a quale of "how it feels to have an idea occupying your consciousness", which p-zombies don't have. You could say it can realize something in the way a bank's IT system can realize I'm out of credit and send me an email. But not as in "this fact has risen to my consciousness". – armand Jul 14 '22 at 22:48
  • @armand If you "believe in" qualia as something separate from information-processing, that's of course true, but plenty of philosophers don't believe it, and so their concept of what it means to "realize" something would have nothing to do with qualia. If that's coherent, then even for someone who *does* believe in qualia that are metaphysically distinct from physical processes, they might still consider it coherent to talk about a functional definition of what it means for a system to "realize" something without any requirement that it have qualia/conscious experience. – Hypnosifl Jul 15 '22 at 00:02
  • @Hypnosifl I don't "believe" in qualia, it's right under my nose. If you don't get the difference between a routine triggered in a computer by a bit signaling a negative number and you realizing your balance is negative when you see the minus sign, it's probably that you don't understand what "qualia" stands for. It has nothing to do with whether the brain is just a signal processing machine or not. Now, p-zombies don't make sense in the first place, so it's quite normal that speculating on their everyday life leads to absurd discussions. – armand Jul 15 '22 at 00:12
  • @armand The fact that you are experiencing things is right under your nose, but the idea that these experiences are something more than just perceptions of complex associative relationships is not so obvious, and part of the philosophical meaning of qualia is the idea of "simple" or "monadic" experiences, like an elementary color experience that doesn't depend on associations one has with that color (a perception of green stripped of the background knowledge that green is the color of plants or that it is a 'cool' color, for example). Nothing in my direct experience is clearly non-associative. – Hypnosifl Jul 15 '22 at 02:36
  • See for example the paper [here](https://www.frontiersin.org/articles/10.3389/fpsyg.2014.00237/full) which argues "I will suggest that the apparently non-structural and monadic elements of consciousness, namely the qualia, are in fact compositional and have an internal structure." and "According to Crick and Koch, the structure of such reddish color experience (or the meaning of that experience) is a vast network of unconscious associations of all the countless encounters with red objects in that person’s personal history and of personal histories of her ancestors, embodied in her genes." – Hypnosifl Jul 15 '22 at 02:38
  • Chalmers had a famous logical conclusion: either the p-zombie is possible (thus materialism is false) or neutral monism is true. Your definition is not standard, see [here](https://en.wikipedia.org/wiki/Philosophical_zombie): *p-zombie argument is a thought experiment in philosophy of mind that imagines a hypothetical being that is physically identical to and indistinguishable from a normal person but does not have conscious experience, qualia, or sentience...* Thus by its definition it never will realize it has no qualia since *realization* is a conscious/sentient/intelligible experience... – Double Knot Jul 15 '22 at 04:51
  • I would argue more or less [along these lines](https://selfawarepatterns.com/2016/10/03/the-problems-with-philosophical-zombies/) about p-zombies – Nikos M. Jul 15 '22 at 17:07
  • Perhaps a much more interesting question could be "can a living human being realize she has no qualia?" This sounds impossible from Descartes' introspection principle; however, as the Yogacara classic [Shurangama sutra](http://www.cttbusa.org/shurangama/shurangama37.asp.html) hinted: *The light does not fade, and what was hidden before is now revealed. This is the region of the consciousness skandha... Seeing and hearing become linked so that they function interchangeably and purely... This is the end of the consciousness skandha... and that upside-down false thoughts are its source...* – Double Knot Jul 16 '22 at 04:49

3 Answers


Why are we discussing qualia right now?

  • If we discuss qualia because we experience them, a philosophical zombie would not have that reason to discuss them. They would thus be distinguishable from conscious people. (Under this interpretation, the philosophical zombie is externally distinguishable, which contradicts its definition.)
  • If we discuss qualia for some reason not related to consciousness, our concept of qualia is not grounded on actual consciousness: what we call "consciousness" is not true consciousness. (Under this interpretation, the philosophical zombie has what we call consciousness, but not "true consciousness", whatever that is.)

I find the strong form of a philosophical zombie – one that lacks conscious experience, but is otherwise identical to a conscious person, and is theoretically indistinguishable from one from the outside – incoherent.

Being able to realise that you lack certain classes of qualia is different to actually realising it, though. Aphantasia was probably first attested in 1880, but actually noticing this difference in qualia is rare enough that pretty much nobody studied it from 1900 until 2005, when one individual with the ability to visualise mental images suddenly lost it.

Plenty of people with congenital aphantasia assume that talk of "visualising" things is mere metaphor, until they learn that actually, some people can experience seeing imagined pictures. If the philosophical zombie learned about the concept of qualia and how people talk about it, and nobody ever really thought too hard about whether the zombie was conscious, they might never realise the difference. And that's assuming the zombie is acting in good faith.

Imagine a predictive text algorithm that learns how people use the referent "I" when discussing qualia, and can faithfully reproduce its part of the conversation about qualia: the insights about how it feels to be conscious have to come from somewhere, but they don't have to come from the zombie. The predictive text algorithm's "goal" (to the extent we can anthropomorphise it) is not to answer questions truthfully, but to answer them typically. To the extent it has a concept of truth, it's "what do people truly respond to this sequence of words?", not "what true proposition is relevant to this query?". "Am I conscious?" is not a question that the predictive text algorithm would ever ask itself; to the extent that "I" has meaning to it, the pronoun refers only to a character in the dialogue.

A sufficiently-advanced predictive text algorithm, trained on enough writings about consciousness, would appear conscious regardless of whether it was actually conscious. If it were a zombie, it would never realise it; there wouldn't be anything to do the realising.
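The distinction between answering *typically* and answering *truthfully* can be made concrete with a deliberately tiny sketch (the corpus, prompts and function name here are invented for illustration, not a real model):

```python
from collections import Counter

# Toy "training data": (prompt, human reply) pairs. Purely illustrative.
corpus = [
    ("are you conscious?", "Yes, of course I am."),
    ("are you conscious?", "Yes, of course I am."),
    ("are you conscious?", "I think, therefore I am."),
    ("do you have qualia?", "Seeing red feels vivid to me."),
]

def typical_reply(prompt: str) -> str:
    """Return the reply people most often gave to this prompt.

    Nothing here inspects the system's own state: the "I" in the
    output is just a token copied from the training data, a character
    in the dialogue rather than a self-reference.
    """
    replies = Counter(reply for p, reply in corpus if p == prompt)
    return replies.most_common(1)[0][0]

print(typical_reply("are you conscious?"))
```

The point of the sketch is that `typical_reply` produces a first-person claim about consciousness by frequency lookup alone; there is no code path in which the system asks the question of itself.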

Under this weaker definition of "philosophical zombie", the space of possible zombie minds is too large for me to know how large it is; I lack the hubris to claim anything about their properties in general.

wizzwizz4
  • I think I agree. Do you find philosophical zombies incoherent because physicalism absent qualia is incoherent? As in, you do believe physical states decide brain states; a zombie close enough to a human must think like a human – J Kusin Jul 14 '22 at 18:35
  • 1
    @JKusin Personally, I'm a physicalist – [here's a good explanation of why](https://philosophy.stackexchange.com/a/4549/30700) – but [as I've argued before, the incoherence of the philosophical zombie](https://philosophy.stackexchange.com/a/73747/30700) is also apparent in dualistic theories. It's less a physicalism thing, and more a causality thing. – wizzwizz4 Jul 14 '22 at 18:36
  • Okay thanks for clarifying. Fair to say in dualism p-bodies *are* zombies (which seems coherent but doesn’t solve any issues) and in (coherent) physicalism zombies are straight incoherent? – J Kusin Jul 14 '22 at 18:43
  • 1
    @JKusin In dualism, physically identical p-bodies are imaginable, but they behave differently (since souls have effects on human behaviour); a p-zombie indistinguishable from a human is still incoherent. In a dualistic theory where souls don't have effects on human behaviour, consciousness does not live in the soul and we're back to physicalism. – wizzwizz4 Jul 14 '22 at 18:46
  • I guess I disagree with “A separate (dualistic, metaphysical) consciousness is required for human behaviour” in your link. I would say “sufficient for” and “co-existing with” rather than “required” are also possible. But that’s a separate thread, all agreed here. – J Kusin Jul 14 '22 at 18:58
  • @JKusin It'd be worth posting that comment on that answer; I don't understand it, even though you've been very clear, which suggests a hole in my understanding that I should address later. – wizzwizz4 Jul 14 '22 at 19:00
  • 'If we discuss qualia for some reason not related to consciousness, our concept of qualia is not grounded on actual consciousness' One ambiguity here is that saying that behavior (like speaking about one's qualia) could be *predicted* from past physical states + laws of physics alone may not be the same as saying those past states + laws are the sole *cause* of the behavior if causality is treated as a basic metaphysical category--Chalmers discusses this as a way of avoiding [epiphenomenalism](https://plato.stanford.edu/entries/epiphenomenalism/) on p. 150-156 of *The Conscious Mind*. – Hypnosifl Jul 14 '22 at 21:44
  • @Hypnosifl I don't understand your point. If something is determined solely by past physical states + laws of physics, then surely those are the only things causing it? (Note that this answer doesn't make a physicalist claim: it doesn't matter whether the causal graph is purely physical or has metaphysical elements in it.) – wizzwizz4 Jul 15 '22 at 18:29
  • 1
    I think you have to be careful about words like "determined" (likewise with the phrase in your answer about whether the behavior of talking about qualia has a 'reason' not related to qualia). The fact that one set of facts would allow a sufficiently knowledgeable observer (like Laplace's demon) to infer or predict a set of later facts would not necessarily be equivalent to the idea that the earlier facts *caused* the later facts, not for philosophers who take causality as a basic metaphysical category. Note for ex. that in many physics models future states can be used to infer past states. – Hypnosifl Jul 15 '22 at 18:39
  • Also see Bertrand Russell's ["On the Notion of Cause"](https://en.wikisource.org/wiki/Mysticism_and_Logic_and_Other_Essays/Chapter_09) where he argues for eliminativism about metaphysical notions of causality mainly because they go beyond the type of law-based informational relationships (inferring one set of facts based on another set + laws of nature) that are the closest thing physicists have to "cause". He talks about the fact that such inferences can go in either time direction (unlike the notion of exclusively past-to-future causality), and involve a different sense of "determine": – Hypnosifl Jul 15 '22 at 18:48
  • *'The law makes no difference between past and future: the future "determines" the past in exactly the same sense in which the past "determines" the future. The word "determine", here, has a purely logical significance: a certain number of variables "determine" another variable if that other variable is a function of them.'* (from section 195 of the numbering scheme on the left side of that online version) – Hypnosifl Jul 15 '22 at 18:49
  • Nice! See my alternative answer as well, along these lines – Nikos M. Jul 17 '22 at 08:44
  • @Hypnosifl I made some updates to my answer, I think they cover your concerns. In any case one can skip #3, if uncomfortable – Nikos M. Jul 17 '22 at 08:53

I would like to offer a different line of reasoning:

  1. If a p-zombie can recognize that it has no qualia, then it is distinguishable from conscious beings, regardless of whether this act of recognition itself counts as a quale. So a p-zombie cannot recognize that it has no qualia.
  2. By the same token, either a p-zombie recognizes that it has qualia (ie not their absence), in which case either qualia or the p-zombie is an incoherent concept. Or it fails to recognize its qualia only when qualia are actually missing, in which case its lack of recognition betrays its lack of qualia, and it is distinguishable. Or it cannot recognize its qualia even when they are present.
  3. If a p-zombie cannot recognize its qualia even when they are present, it is conceivable that it might as well have qualia. Either qualia or the p-zombie is an incoherent concept.
  4. If qualia are such that their presence is always recognized, then their absence produces a distinct state, at least in terms of informational content, which is conceivably distinguishable, eg by consistent questioning about qualia and by checking for behavior consistent with the answers. If a being without qualia can nevertheless provide any such information, qualia is an incoherent concept. Else, if it can provide the information, it is conceivable it has qualia.
  5. In any case a p-zombie as defined is incoherent.

Further reading:

  1. Anti-zombie argument
  2. Zoombie argument
  3. Zimboe argument
  4. The Unsoundness of Arguments From Conceivability

[..] the inability of our ancestors in the fourteenth century to imagine, say, genetic engineering does not show that genetic engineering is impossible, any more than their belief that they could conceive of lead being transmuted into gold by the application of the appropriate chemical process (consistently with all the actual laws of physics) showed that this is in fact logically possible.

[..] No matter how these details are fleshed out, my main point for present purposes is that the following is a necessary condition on any plausible account of ideal conceivability: one’s conceiving of X can count as ideal only if the removal of any existing epistemic distortions (of the general sort described above) would not result in X’s ceasing to be conceivable. For example, if we could come to see X as inconceivable through acquiring a new piece of knowledge, then X cannot now be for us ideally conceivable. This is not by any means an overly ambitious criterion for ideal conceivability. In particular, it does not require that some X can only be for us ideally conceivable if we are already in relevantly epistemically ideal conditions—a tall order indeed. It merely requires that if we were in relevantly epistemically ideal conditions we would continue to see X as conceivable. Indeed, this condition is simply a requirement of the fact that ideal conceivability is supposed to be a reliable a priori indicator of logical possibility: this requires minimally that its judgements should be stable in the face of the acquisition of new knowledge, or the shedding of false beliefs.

[..] The systematic problem with arguments from conceivability, then, is the following: unless we are already in relevantly epistemically ideal conditions, the justification of the modal sub-conclusion of an argument from conceivability can in principle never be completed.

[..] I shall focus on the case of zombie worlds. Chalmers’ argument, then, rests on two claims:

  1. “If a physically identical zombie world is logically possible, it follows that the presence of consciousness is an extra fact about our world, not guaranteed by the physical facts alone” (1996, 123).
  2. Physically identical zombie worlds are logically possible.

What makes Chalmers’ argument an argument from conceivability is that his defence of both of these claims rests ultimately on considerations about what is and is not conceivable. In order to establish Claim 1—that the logical or conceptual possibility of zombie worlds is sufficient to falsify materialism—Chalmers must answer philosophers who claim that materialism is content to rule out metaphysically possible zombie worlds, and is consistent with the ‘logical possibility’ of zombie worlds. In other words, Chalmers must show that the relevant modal judgements are a priori rather than a posteriori. And in order to establish Claim 2, Chalmers must show that zombie worlds are in fact logically possible.

For further arguments regarding the (in-)conceivability of the p-zombie, see the related entry at the SEP

(adapted from my other answer)

PS: Currently there is no machine intelligence algorithm able to pass the Turing test consistently. Even if this becomes possible, it will still be different from human beings, eg by being structurally different (ie being a machine). So there is no issue of a sentient machine being an exact duplicate of me or anyone else.

Nikos M.
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/137833/discussion-on-answer-by-nikos-m-can-a-philosophical-zombie-realize-that-itself). – Philip Klöcking Jul 16 '22 at 21:57

The question is virtually equivalent to asking whether a color-blind person might realize that there are colors they cannot experience.

So, the answer is an emphatic Yes!

Daniel Asimov