16

By the “hard problem”, I’m referring to the exposition by David Chalmers.

He phrased the hard problem as “why objective, mechanical processing can give rise to subjective experiences.” I find it difficult to think of this as hard.

Imagine the following. People are really pre-programmed computers coupled with various sensory inputs. The computer has a “task manager” that monitors and controls all the software being run: the visual recognition software, the arithmetic software, the emotional perception and expression software, etc. Then, it seems like this task manager is “conscious”. Only the task manager itself is aware of the programs being run, and others don’t see the program status. Thus, the awareness is “subjective”.
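To make the analogy concrete, here is a minimal sketch of the kind of architecture I have in mind (purely illustrative; the module names and statuses are invented for the example and are not a claim about real cognitive architecture):

```python
# Purely illustrative sketch of the "task manager" analogy; module names are invented.

class Module:
    def __init__(self, name):
        self.name = name
        self._status = "idle"  # internal state, not exposed to anything but the manager

    def run(self, stimulus):
        self._status = f"processing {stimulus}"
        return f"{self.name} output for {stimulus}"


class TaskManager:
    """Monitors every module; only it can read their statuses ("subjective" access)."""
    def __init__(self, modules):
        self.modules = modules

    def report(self):
        # Only the task manager reads each module's private status.
        return {m.name: m._status for m in self.modules}


mind = TaskManager([Module("vision"), Module("arithmetic"), Module("emotion")])
mind.modules[0].run("red apple")
print(mind.report())  # {'vision': 'processing red apple', 'arithmetic': 'idle', 'emotion': 'idle'}
```

The point of the sketch is only that the manager's view of the running programs is available to it alone, which is what I mean by "subjective" here.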

The way David Chalmers talks about the problem of consciousness makes it seem I must be missing something in my description. What am I missing?

J Li
  • 656
  • 4
  • 9
  • 12
    What the behavior of other people or computers seems like is irrelevant to the hard problem; reproducing human behavior with AI is an "easy" problem. The reason we believe other people are conscious is the analogy with ourselves, and in our own case we experience "what it is like" first hand. The hard part is to explain why physical processes in computers or our brains should be accompanied by such first person feels at all when zombies lacking them could follow the same physical laws and manifest the same outward behavior without them. – Conifold Nov 18 '20 at 15:38
  • Conifold: the core of my question is about what is “subjective” experience. To me, subjective simply means “only I can experience it while others cannot”. If so, then the task manager in my example seems to be having a perfectly fine subjective experience. – J Li Nov 18 '20 at 17:00
  • http://consc.net/papers/facing.html – user76284 Nov 18 '20 at 22:22
  • 9
    That is not what it means at all. Private content can be and is easily explained by neuroscience models. People talking about the hard problem of consciousness talk about something else, the "experienced quality" nature of first person feels, which seems orthogonal to any third person descriptions of what they might accompany. Publicity/privacy is just one such description; that they happen to be private is just a side effect. – Conifold Nov 18 '20 at 23:25
  • Although, in philosophy, there doesn’t seem to be any breakthrough that can explain subjective experiences/qualia satisfactorily, in cognitive neuroscience there are currently quite a few theories that can explain them satisfactorily, at least to some extent. If interested, try checking out: [Information and the Origin of Qualia](https://www.frontiersin.org/articles/10.3389/fnsys.2017.00022/full), [Qualia: The Geometry of Integrated Information](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000462), and [Qualia & Experiences](https://mindtheory.net/chapter-3/). – user287279 Nov 19 '20 at 00:47
  • What is your reason for thinking the task manager has subjective experiences? – Ameet Sharma Nov 19 '20 at 02:57
  • @Conifold: I understand your point -- that the "experienced quality" feels special. My concern is about its supposed specialness, the only argument for which, it seems, is that "we feel it". If so: 1) neuroscience does provide an explanation for "feelings". By manipulating neurons, we can change people's subjective experiences, suggesting that the "feeling" is just a first-person perception of neurons firing. 2) If we are willing to believe that other human beings (other than ourselves) have subjective experiences, on what grounds can we say task managers don't have subjective experiences? – J Li Nov 19 '20 at 03:08
  • @user287279 Yes - this is exactly what I'm getting at. – J Li Nov 19 '20 at 03:08
  • @AmeetSharma What is your reason to think that other human beings (other than yourself) have subjective experiences? You don't feel what they feel. They can tell you what they feel, but so can task managers (be programmed to do so). – J Li Nov 19 '20 at 03:09
  • @JLi, the biggest reason is our physical similarities (biology) accompanied by similar behaviors. So I make a guess that the same type of mechanism causes subjective experiences in both of us. I see absolutely zero reason to think that task managers have subjective experiences, because their behavior is completely transparent and dictated by their program which is no mystery. So their behavior can't be a reason for ascribing subjective experiences to them. – Ameet Sharma Nov 19 '20 at 03:20
  • 4
    @AmeetSharma Yes. I think we have arrived at a conclusion in this discussion. The "hard" problem of consciousness is essentially equivalent to "we think human experience cannot be explained by mechanical processes" -- which again, is an assumption, even though a very intuitive assumption to all of us (it "feels" like the right assumption). Whether this problem is "hard" depends on the subsequent progress in cognitive sciences. – J Li Nov 19 '20 at 03:28
  • @JLi, consciousness may have a purely mechanical explanation. But I see no reasons to think those mechanisms are being implemented in computers. I mean, we're the ones designing the computers. We haven't put in any special mechanisms to cause consciousness, because we don't know what those mechanisms are. – Ameet Sharma Nov 19 '20 at 03:41
  • 1
    @AmeetSharma I agree, and I think we have reached a consensus on this topic. Thanks for the discussion! – J Li Nov 19 '20 at 04:28
  • 2
    Feels may well be perfectly correlated with neurons firing, but that would do nothing for the hard problem. It is not about relations and correlations. Their argument is simple: scientific explanation is based on modeling, models are matched to third person descriptions, and while feels can be correlated to something so describable, they themselves are not; therefore science cannot explain them as such. That's why the problem is "hard". It would require inventing some new mode of explanation, either to bridge the feel/description gap or to explicate why no explanation is called for. – Conifold Nov 19 '20 at 05:44
  • @Conifold I think you already fully understand what I'm disagreeing with. At the risk of being repetitive and over-extrapolating, let me propose that, following similar reasoning, we also have a "hard problem of living organisms". As of today, much of what it means to be a living organism has been figured out by biologists. We also know very well what you need to do to make a living organism dead. However, I claim there is a "hard problem". Liveliness may well be perfectly correlated with the studied biochemical mechanisms, but "liveliness" really feels different. – J Li Nov 20 '20 at 03:04
  • @Conifold I think I found a better way to phrase my question, and I just posted that as a new question. It might be more fruitful to continue the discussion over there. Thanks! – J Li Nov 20 '20 at 03:13
  • I do not see you and them as disagreeing, more as talking past each other. Your analogy is good up to a point, "vital force", whatever that is, is the hard problem of life, still unresolved. But both sides of the gap were always third person describable, so one can at least imagine a form of explanation that bridges it. Some non-physical add-on would do it too. The same goes for the free will problem. But the explanatory gap in HPC is of a different quality. It is not about something "feeling" different than its model; it is about feels cutting out *any* models as such as potential explainers. – Conifold Nov 20 '20 at 06:45
  • 1
    Complexity, that's the issue, complexity of the mathematical models. It's a problem of degree not kind. – Cristian Dumitrescu Nov 22 '20 at 05:41

12 Answers

20

What matters is not the fact that the experience is subjective per se; what matters is that there is no way to share the quality or quale of that subjective experience with anybody else.

If you see a shade of red, how do you know how others experience it? Some people have different photoreceptors for red and will experience it as different shades. Others are red-green colourblind but cannot tell the rest of us whether they experience all reds as greens or all greens as reds or something else. Some octopuses are sentient and have colour vision; their brains and eyes evolved entirely separately from ours (the last common ancestor was probably a flatworm), so how do they experience the redness of, say, a sea anemone?

Other animals have senses which we do not, such as electric and magnetic senses. Some birds sense geomagnetic fields with their eyes, and some birds are sentient. But we *homo sapiens* have no processing pathways for magnetic senses and so can never know, not even in principle, the subjective quale of looking at a magnetic field.

We sometimes talk of the "neural correlate" of a quale. Such correlates may be measured and recorded by an electroencephalograph (EEG), which is objective. But the mapping from EEG to quale is not as simple as that. No two brains are wired identically. There is no such thing as a blow-by-blow, synapse-by-synapse correspondence between two brains; a comparison of neural correlates can never be an exact match. Rather, we have to identify the information carried by those signals. The quale is thus more correctly understood to be the subjective experience of that information, not of the physical signal *per se*.

Even so, all the encephalography, signal reconstruction and computational simulation or sentient AI in the world cannot enable any quale of redness to be identified, recorded and communicated.

Consequently no law of physics, nothing founded on the laws of physics, nothing reducible to the laws of physics, can describe qualia (the plural of quale). There is no way you can objectively capture subjective experiential qualities, in order to compare them and see if they are the same or not. They are simply not open to objective science in the way that their neural correlates and information content are.

That is what is hard about the hard problem.

Guy Inchbald
  • 2,532
  • 3
  • 15
  • 1
    Guy, thank you for the answer. Every human or animal has a different brain, and what they “see” is fundamentally a brain-specific thing. Me seeing a color corresponds to a set of neurons firing in my brain. Mary seeing it corresponds to a set of neurons firing in her brain. What is hard about that? – J Li Nov 18 '20 at 17:16
  • 6
    This goes too far in the direction of impossible: "Consequently no law of physics, nothing founded on the laws of physics, nothing reducible to the laws of physics, can describe qualia (the plural of quale). They are simply not open to objective science." Perhaps, perhaps not. We don't know whether we will ever understand "qualia". Of course you can fight over definitions but what if someday we manage to really reproduce your vision of a painting in someone else's mind. (Through for example a complete understanding of neurons and their states and how to read/write them to a specific state.) – Kvothe Nov 18 '20 at 18:10
  • 4
    My point being, this is clearly a field of study where as of yet there are many things we cannot understand or test due to technical problems. I would therefore not lightly conclude we already know what we can ever know. For example perhaps the ability to simulate new conscious beings, i.e. beings convincingly claiming to be conscious, from scratch, will teach us a completely new understanding of the origin of consciousness. It is definitely a hard problem, but perhaps not impossible. – Kvothe Nov 18 '20 at 18:16
  • 10
    @JLi The key point is that the perceptual nature of the qualia is not open to confirmation. You and Mary have not the slightest idea how the subjective qualities of your experiences compare. For a scientific rationalist (often a materialist), that makes it a problem - and, worse, a hard problem with no apparent solution even in principle. Of course, if you are not an atheistic scientific rationalist then there is no problem, hard or soft. – Guy Inchbald Nov 18 '20 at 19:38
  • 1
    @Kvothe You can in principle transfer the perceptual information pattern of a painting from one mind to another, be it human or artificial. But there is no way you can **objectively** capture the **subjective** experiential qualities, in order to compare them and see if they are the same or not. Going on about neurons and AI shows that you have simply not grasped this point. – Guy Inchbald Nov 18 '20 at 19:43
  • 4
    "But we homo sapiens can never know, not even in principle, the subjective quale of looking at a magnetic field." Are you sure about that? – Joshua Nov 18 '20 at 20:52
  • @GuyInchbald I agree I can never know exactly what Mary felt like. But I guess this is something we have to live with -- I am not Mary, nor is Mary me. If this is all there is to the "hard problem", then I agree it is a hard problem, and I am not so concerned about it. It is just something we have to live with, that's all. – J Li Nov 19 '20 at 03:18
  • 3
    @Joshua Yes I am sure. The human body does not possess the necessary sensory pathways. Any human-like person capable of doing so would no longer be *homo sapiens* but some GM mutant. (bad jokes about birdbrains strictly forbidden!) – Guy Inchbald Nov 19 '20 at 09:22
  • 1
  • @GuyInchbald, and I don't agree with your certainty that it is not. Note that this is a field of study where we have not been able to do decent experiments yet. It is like saying that it is fundamentally impossible to understand what a proton is made of because we cannot look inside, before getting access to experiments that do probe its constituents, at which point you find out it is perfectly possible. Since we don't fully understand what our perception is and where it comes from, it is impossible at this point to say that we will never be able to understand it. – Kvothe Nov 19 '20 at 10:28
  • 2
    @GuyInchbald, I would say that you are the one that has not grasped the point of not judging too early that something is impossible to "objectively capture" when it is clear that the thing you want to capture is currently incredibly poorly understood due to technological constraints. Since we don't know what perception really is, we don't know whether there is any subjective quality to it that cannot be measured objectively by an outsider. You just feel like you know because you wrongly assume that your perception of your own perception must be the fullest understanding of it. – Kvothe Nov 19 '20 at 10:31
  • You say at the end "That is the hard problem". But what exactly do you mean by "that"? Your answer seems to imply that you mean comparing experiences of separate beings. But why would anyone call it a problem? Didn't you just prove that it is impossible to solve, even in principle? – Dmitri Urbanowicz Nov 19 '20 at 10:34
  • 8
    Enough has been said. Those who choose not to buy the hard problem are at liberty to disagree, but I have described it as best I can. – Guy Inchbald Nov 19 '20 at 11:21
  • 1
    Is this related to the "What is it like to be a bat?" problem? – Barmar Nov 19 '20 at 16:50
  • 2
    In theory, a sufficiently good understanding of the brain and the technology to observe (or change) exactly what certain parts of it are doing may allow us to understand (or experience) how others experience the world. Although the key phrase there is very much "in theory", as both our understanding and technology are very far from being able to accomplish that. – NotThatGuy Nov 19 '20 at 17:32
  • 1
    @Barmar I think it's the same problem. – user253751 Nov 19 '20 at 18:29
  • 1
    "But we homo sapiens can never know, not even in principle, the subjective quale of looking at a magnetic field." This is definitely wrong; I would downvote if I had the Reputation here. There are people who have implanted magnets into themselves to let themselves feel magnetic fields, and given the existence of implanted cameras that were designed to allow people to see, it should be possible to repurpose similar technology to allow you to "see" magnetic fields by hooking the brain implant to a magnetic sensor. Human brains are ridiculously malleable when given the proper stimuli. – nick012000 Nov 20 '20 at 08:49
  • 2
    @nick012000 magnetic implants and such devices have to wire into our existing sensory pathways. These are all sight/sound/touch/smell pathways, leading to the associated perceptual qualities. The human nervous system has no magnetic sensory pathways analogous to those of say a pigeon, consequently there is no way in which the pigeon's innate subjective magnetic sensory qualities can arise. It is a shallow understanding to mistake the fake for the real thing. – Guy Inchbald Nov 20 '20 at 10:34
  • Do you mind elaborating on 'some birds are sentient'? What do you mean by that exactly? As in, what is your working definition of sentience? – GettnDer Nov 23 '20 at 23:22
  • 1
    @GettnDer That is really a separate question. But it hinges on the presence of both the necessary neural substrate and associated cognitive behaviours; see for example the Cambridge Declaration on Consciousness: http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf – Guy Inchbald Nov 24 '20 at 08:36
  • As demonstrated by the discussion here, giving the hard problem metaphysical significance depends on taking certain epistemic claims as necessary truths, but thought experiments like Dennett's RoboMary challenge this. Arguably, so do the experiences of twins with conjoined brains, who seem to be able to experience some of each other's qualia, while still having independent minds. – sdenham Jun 22 '21 at 21:37
  • @sdenham Define "independent minds". Define "another's qualia". Explain how two minds can be "independent" if they are directly sharing the same experience; it appears an oxymoron. There is a lot of misleading sophistry spun by those with a point of view to push. – Guy Inchbald Jun 23 '21 at 08:56
  • @GuyInchbald The few cases of conjoined-at-the-brain twins that have thrived do, in fact, behave for the most part as independent minds, and your attempt to define away this awkward fact looks like an attempt to avoid rather than address it. A better response, IMHO, would have been to claim that the qualia themselves are localized to the individual minds, but the door has been opened to the possibility of such twins developing their own language in which they can communicate between themselves about qualia in ways that we cannot imagine. But this is merely support for the case made by Dennett. – sdenham Jun 26 '21 at 03:10
  • @GuyInchbald There is, indeed, a lot of misleading sophistry spun by those with a point of view to push. Did you have anything in particular in mind? – sdenham Jun 26 '21 at 03:28
  • @sdenham I would submit that tacitly pushing the assumption that shared qualia are consistent with independent minds is an excellent example of such sophistry. "to experience some of each other's qualia, while still having independent minds" is an oxymoron. Moreover it is wholly unfalsifiable - nobody can ever show whether Alice's subjective sensation of hunger is the same as Bob's. So the sophistry buries another hidden falsehood there, all in the one sentence. (All this was why the likes of BF Skinner reduced psychology to strict behaviourism. A step too far perhaps, but it makes the point.) – Guy Inchbald Jun 26 '21 at 09:51
  • @GuyInchbald It seems that, to preserve the dogma that qualia-sharing is impossible, you are prepared to deny that these children have independent minds, even though, to their doctors and their parents, they obviously do (and it is by no means obvious that the children themselves would disagree, to the extent they understand the question.) This is certainly tendentious, but is it sophistry? _I_ choose not to go there, as such talk does nothing to advance understanding - especially when, as in your first reply, it is done as innuendo rather than forthrightly. – sdenham Aug 18 '21 at 12:50
11

Q: … He phrased the hard problem as “why objective, mechanical processing can give rise to subjective experiences.” I find it difficult to think of this as hard. …

... Then, it seems like this task manager is “conscious”. Only the task manager itself is aware of the programs being run, and others don’t see the program status. Thus, the awareness is “subjective”.

The way David Chalmers talks about the problem of consciousness makes it seem I must be missing something in my description. What am I missing?

A: You seem to miss the most important word “experiences”.

What is hard about the hard problem of consciousness is why there is subjective experience occurring with consciousness (1-5), not why awareness or subjective awareness occurs with consciousness (as you seem to understand it). Chalmers himself says this:

The hard problem of consciousness is the problem of experience. Human beings have subjective experience: … There is something it is like to see a vivid green, to feel a sharp pain, to visualize the Eiffel tower, to feel a deep regret, and to think that one is late. Each of these states has a phenomenal character, with phenomenal properties (or qualia) characterizing what it is like to be in the state.

There is no question that experience is closely associated with physical processes in systems such as brains. It seems that physical processes give rise to experience, at least in the sense that producing a physical system (such as a brain) with the right physical properties inevitably yields corresponding states of experience. But how and why do physical processes give rise to experience? Why do not these processes take place "in the dark," without any accompanying states of experience? This is the central mystery of consciousness.” (1)

and

“For any physical process we specify there will be an unanswered question: Why should this process give rise to experience? Given any such process, it is conceptually coherent that it could be instantiated in the absence of experience.” (2)

For example, when we see a house, listen to a song, or smell a rose, in addition to the awareness (similar to the computer awareness) of those things, we have subjective experiences of what it is like to see the house, to hear a song, and to smell a rose occurring in our mind (see figure below) (6). The hard problem is “Why do these subjective experiences occur in our mind – why do we not just process these kinds of information in the dark without subjective experiences occurring as computers do in their information processing?”

[Figure: Subjective experiences]

You are right that computers can be subjectively aware of the image of the house, the sound of the song, and the smell of the rose, but so can we. Thus, subjective awareness is not the issue that makes the hard problem of consciousness hard and does not differentiate us from computers. In contrast, at present, there is no evidence that computers have subjective experiences as we do. Therefore, it is the subjective experiences that make the hard problem of consciousness hard and differentiate us from computers.

This is in contrast to the easy problems of consciousness:

“The easy problems of consciousness include those of explaining the following phenomena: the ability to discriminate, categorize, and react to environmental stimuli; the integration of information by a cognitive system; the reportability of mental states; the ability of a system to access its own internal states; the focus of attention; …

Although we do not yet have anything close to a complete explanation of these phenomena, we have a clear idea of how we might go about explaining them. This is why I call these problems the easy problems. Of course, "easy" is a relative term. Getting the details right will probably take a century or two of difficult empirical work. Still, there is every reason to believe that the methods of cognitive science and neuroscience will succeed.” (2)

And at present, a lot of progress has been made on the easy problems of consciousness. Although we still do not know all the details, we now have a good general idea of what the neural correlates of consciousness (7-9) are like. Complete knowledge of the neural correlates of consciousness would completely solve the easy problems of consciousness.

References:

  1. Chalmers DJ. Consciousness and its place in nature. In: Chalmers DJ, editor. Philosophy of mind: Classical and contemporary readings. Oxford: Oxford University Press; 2002. ISBN-13: 978-0195145816 ISBN-10: 019514581X.

  2. Chalmers DJ. Facing up to the problem of consciousness. J Conscious Stud. 1995;2(3):200-219.

  3. Chalmers DJ. Moving forward on the problem of consciousness. J Conscious Stud. 1997;4(1):3-46.

  4. Weisberg J. The hard problem of consciousness. The Internet Encyclopedia of Philosophy.

  5. Van Gulick R. Consciousness. In: Zalta EN, editor. The Stanford Encyclopedia of Philosophy.

  6. Ukachoke C. The Basic Theory of the Mind. 1st ed. Bangkok, Thailand; Charansanitwong Printing Co. 2018.

  7. Chalmers DJ. What is a neural correlate of consciousness? In: Metzinger T, editor. Neural Correlates of Consciousness: Empirical and Conceptual Questions. MIT Press, Cambridge, MA. 2000

  8. Koch C, Massimini M, Boly M, Tononi G. Neural correlates of consciousness: Progress and problems. Nature Reviews Neuroscience. 2016;17: 307-321. https://puredhamma.net/wp-content/uploads/Neural-correlates-of-consciousness-Koch-et-al-2016.pdf

  9. Tononi G, Koch C. The neural correlates of consciousness: An update. Annals of the New York Academy of Sciences. 2008;1124:239-61. 10.1196/annals.1440.004. https://authors.library.caltech.edu/40650/1/Tononi-Koch-08.pdf

user287279
  • 870
  • 5
  • 5
  • Thank you so much. My issue with the “hard problem” is about our understanding of “experience”. The typical neuroscience perspective is that “experience” is no more than patterns of neurons firing, and “experience” is simply an intuitive way human brains perceive such patterns. Thus, it seems that the “hard” problem boils down to us saying “I find it hard to imagine that my experience is just neurons firing”. To be a little dramatic (purely for exposition), this objection isn’t that different from “I cannot imagine humans evolving from animals”, and thus we have a “hard problem of evolution”. – J Li Nov 18 '20 at 17:08
  • 2
    I agree with you that the answer must be that it is somehow an effect of neurons firing. But you are completely wrong if you think that this "somehow" is currently understood. If you started simulating a bunch of neurons from scratch, not knowing our human experience, you would definitely not predict that those neurons would develop consciousness. – Kvothe Nov 18 '20 at 18:20
  • @JLi what exactly do you mean by "is" in "I find it hard to imagine that my experience *is* just neurons firing"? Therein perhaps lies the difficulty. Firing neurons "are" experience only in a similar sense to music "being" soundwaves or notes, or soccer "being" 22 people running around and kicking a ball. – henning Nov 18 '20 at 20:43
  • @Kvothe > "wrong if you think that this "somehow" is currently understood." -- well... I hate to pop your bubble, but, ["Most unnerving was that AlphaZero seemed to express insight. It played like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It played gambits and took risks. In some games it paralyzed Stockfish and *toyed* with it... Grandmasters had never seen anything like it"](https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html) – Yuri Zavorotny Nov 18 '20 at 22:08
  • @JLi > "*experience” is simply an intuitive way human brains perceive such patterns*" -- and that's exactly what a neural net is about: patterns and intuition -- 100% superficial, selfish and subjective, naturally curious and seeking novel experiences to gain new/better ideas/concepts, it relies on those and its sense of beauty to *guess* its next move. Always guesswork, it knows nothing and understands nothing. There is no truth in it -- a great pretender, it knows how to *be like*, but it never is... reminds you of someone? ;) – Yuri Zavorotny Nov 18 '20 at 23:35
  • .. everyone knows someone.. And that's why it's critical to stay rational, to keep asking "why's", to look for explanations -- because for all its style, creativity and insightfulness, NN's capacity for evil is also limitless: ".. *are unable to hear what I say. You belong to your father, the devil, and you want to carry out your father’s desires. He was a murderer from the beginning, not holding to the truth, for there is no truth in him. When he lies, he speaks his native language, for he is a liar and the father of lies. Yet because I tell the truth, you do not believe me!*" -- John 8:43-45 – Yuri Zavorotny Nov 18 '20 at 23:52
  • 1
    @Yuri, I don't see how that goes against what I was saying. First of all, this is far off from AlphaGo starting to output thoughts on how it was perceiving things and outputting awareness of itself as an entity and its thought process. Secondly, even if we did, we would not yet understand it (although we would probably get closer). An important step would be, I think, when you can predict from a microscopic model that the neurons would become self-aware/conscious. So not just knowing it has to happen because you saw it happen, but actually being able to predict it from the building blocks of neurons. – Kvothe Nov 19 '20 at 10:22
  • @Kvothe > "*So not just know it has to happen because you saw it happen but actually be able to predict it from the building blocks of neurons.*" -- We have done that already with AlphaZero and other AIs. It does not matter what neurons your network is built from, neuron cells or silicon or a combination of the two -- it would exhibit the [same behavioural traits](https://philosophy.stackexchange.com/questions/77517/what-is-hard-about-the-hard-problem-of-consciousness/77518?noredirect=1#comment215695_77525). Consciousness is not among them though. – Yuri Zavorotny Nov 19 '20 at 20:33
  • ... the Neural Network was responsible for what happened when Mary __*saw*__ the red, for the *actual experience*. But it is not conscious or self-aware, and it could not have done what Mary did in the lab, it could not *understand* what colour is. Knowledge, understanding, consciousness and self-awareness, and the very concept of truth are products of the rational mind. Which, unfortunately, is fast asleep in most humans. *They __don't think__ with their Rational Self*. Rather, they rely on their Neural Network *to make it __look__ like they do*. – Yuri Zavorotny Nov 19 '20 at 20:47
  • ... just like AlphaZero, for all its brilliance, doesn't really *know* how to play, it just *pretends* that it does – Yuri Zavorotny Nov 19 '20 at 22:01
  • @henning--reinstateMonica What I meant is slightly different. To clarify, let me propose the following. Suppose one day neuroscientists figured out how exactly to reproduce all (or virtually all) human experiences through stimulating neurons in a particular way. If you want to hear Beethoven's 9th symphony? No problem. The scientists can stimulate your neurons in a particular way so you exactly hear the symphony, and you cannot differentiate between scientists working on you or really hearing the symphony. Would this resolve the "hard problem"? – J Li Nov 20 '20 at 03:09
  • @JLi -- you can't solve the "hard problem" simply by stating that it boils down to neurons firing. You would have to explain how exactly those neurons would produce subjective experience. In other words, you'd have to produce a model explaining subjective experience -- and sure, it could explain it in terms of neuron firing, but it's got to be a model. – Yuri Zavorotny Nov 20 '20 at 04:45
  • @YuriAlexandrovich What you're concerned about regarding the rational soul is very similar to Leibniz's Mill Argument against materialism, and also like John Searle's Chinese Room Argument... – Double Knot Mar 10 '21 at 17:46
3

The difficulty is in explaining consciousness in terms of the kind of things that are in the physical world. No one has a clue how to do that.

Many people believe we will one day explain mental contents in physical terms. For example, we might one day be able to explain human deductive logic in terms of the physical characteristics of neurons like we can explain the logic of a computer in terms of its hardware. We might also one day be able to predict the behaviour of a human being from a brain scan like we predict the weather by looking at the Earth's atmosphere. Yet, nobody has a clue as to how the quality of our subjective experience could possibly ever be explained in terms of subatomic particles, quantum events or some such. We don't even know where we would have to begin.

Then again, I fail to see what would be the use of doing that. We don't seem to need to explain consciousness.

This isn't the only problem seemingly impossible to solve either. Any fundamental constituent of reality could not possibly be explained in terms of the physical world. Maybe subjective experience is just such a constituent.

Funnily enough, consciousness would then be the only such fundamental constituent of reality we actually know and will ever know. So not only do we probably not need to explain consciousness, but we seem to know all there is to know about it.

It is also likely that our qualia are the only things we will ever really know of the real world. So the real problem is not to explain our qualia and subjective experience, but to make sure our beliefs about the physical world are reliable enough for us to survive in it and prosper.

Speakpigeon
  • 5,522
  • 1
  • 10
  • 22
  • Your philosophy is more similar to Leibniz's idealistic monism than to Descartes' dualism; just curious why you put a Descartes pic as your avatar instead? – Double Knot Mar 10 '21 at 18:04
  • @DoubleKnot 1. There is nothing idealistic in my position, on the contrary: "*We don't seem to need to explain consciousness*" - 2. Descartes because of the Cogito, which explains why we cannot explain consciousness. – Speakpigeon Apr 08 '21 at 09:33
2

The Question

David Chalmers did not express it clearly in that quote (which is a loaded question, btw). What he meant to ask is "What did Mary learn, when she saw the red color for the first time?"

As the story goes, Mary is a brilliant scientist and a leading expert in everything color -- what colors are (bands in the EM spectrum), how they're sensed by the eyes, and how they're reconstructed by the brain. Amazingly, she accomplished all that without actually seeing a color. She is not colorblind, but she has been living in a black-and-white environment. Her lab, home, furniture, and screens are all monochrome, in shades of gray... until one day she got out and saw red leaves on the trees (it was a beautiful day in the fall).

And that was Chalmers's question -- what had Mary learned in that moment? She already knew everything there is to know about colors. Yet seeing red was not just a novel experience; it enriched her life in the most profound way -- which would not be possible unless she had learned something just from seeing the color... but what exactly did she learn? That, again, is the so-called "hard problem".

A (very short) Answer

Now if you think about it, the "hard problem" question is essentially about the nature of fundamental concepts -- also platonic forms, also John Locke's "simple ideas", also Immanuel Kant's "intuitions", etc... like your concept of a "chair", or a "jump", or, indeed, of what counts as "red".

It is a knowledge of sorts -- like, you know what a chair is, don't you? But try and give a precise definition of what is -- and what isn't! -- a chair in rational terms, and you will soon find yourself grasping for words and only becoming more frustrated, realizing... wait, you don't know what a freaking chair is!?..

Well, strictly speaking, you don't, for it is not rational knowledge.1 What you do have, instead, is a pretty good idea of what constitutes a chair. And, unlike knowledge, ideas/concepts are not products of your rational Self. They are created by your neural network AI, commonly referred to as your "subconsciousness".2

In fact, "getting ideas" of things is what neutral networks do as their way of processing experiences. Being, at its core, an image recognition system, a neural net treats everything as a picture,3 looking for similar patterns and anti-patterns in different depictions of the same class/type of things.

A concept of a chair, therefore, is but a collection of numerous patterns found in things classified as chairs by some trusted authority. Plus the anti-patterns, their presence strongly suggesting the thing is not a chair.
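To make the pattern-collection idea concrete, here is a toy sketch (the feature names, weights, and threshold are invented purely for illustration; this is not a model of how brains or real neural networks actually classify):

```python
# Toy sketch of the "concept = patterns + anti-patterns" idea; everything here is
# invented for illustration and is not a claim about real neural networks or brains.

CHAIR_PATTERNS = {"has_seat", "has_legs", "supports_sitting"}
CHAIR_ANTI_PATTERNS = {"has_wheels_and_engine", "is_alive"}

def looks_like_chair(features):
    """Crude score: matched patterns count in favour, anti-patterns count strongly against."""
    score = len(features & CHAIR_PATTERNS) - 3 * len(features & CHAIR_ANTI_PATTERNS)
    return score >= 2

print(looks_like_chair({"has_seat", "has_legs", "supports_sitting"}))  # True
print(looks_like_chair({"has_seat", "has_wheels_and_engine"}))         # False
```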

And that's your qualia, hopelessly subjective, as it should be, a sea of simple concepts. The rational Self then uses them as lego pieces to assemble three-dimensional mental models, each simulating a certain aspect of reality. If simulation correctly describes the real thing -- if it's true -- then it is promoted to the rank of knowledge. The individual models, in turn, become pieces of the ultimate jigsaw puzzle, the Big Picture -- a complete simulation of the world. Modeling ourselves, as a part of it, makes us self-aware and, thus, capable of conscious choice.

And... that's all there is to it. The real hard problem is not the consciousness -- it's us, creating obstacles upon obstacles, making something that everyone should have pretty much out of reach.

 
1 We can call it "irrational knowledge", but I'm afraid that would breed a lot of confusion.

2 In some way, it functions very similarly to a Flight Computer, first adopted in modern fighter jets (the F-16 was the first to take full advantage). At the time, they wanted to make them extremely agile, but that would also make them aerodynamically unstable, impossible for a human to control. Enter the Flight Computer. Capable of making minute adjustments to individual control surfaces every split second, it could fly a brick with winglets (and so it did with the Space Shuttle). The human pilot is still there, of course, but they can only access the FC. A good FC then makes the pilot feel like they are in control, by doing its best to interpret and accommodate the pilot's intentions. Or not, if the FC knows better, as happened with US Airways Flight 1549 (the "Miracle on the Hudson"), when, for the last minute of the flight, the FC diligently ignored the pilot's attempts to lift the plane's nose up, which would have ended in a stall like this...

3 the actual meaning of "being superficial"

Yuri Zavorotny
  • 590
  • 2
  • 10
  • Thank you Yuri. From a neuroscience perspective, it seems that the answer is clear? When Mary lived in a colorless environment, she learned such knowledge rationally (mostly in her frontal cortex). When she saw the color for the first time, it was a different set of neurons firing. Thus, these two are very different sets of neurons firing. For simplicity, we call the former “knowledge” and the latter “experience”. – J Li Nov 18 '20 at 17:13
  • That is correct! Tho as far as the answer goes, it barely scratches the surface. The most substantial part is the "two minds" concept -- humans having two independent centers of cognition: 1) the rational (tho seldom conscious) Self, and 2) the irrational (subconscious) neural net. The two are nothing alike -- both in terms of *what* they do and *how* they do it. Even their availability differs -- while the irrational mind is a given, the rational becoming *at all* operational is very much an option (achieving its nominal performance is next to impossible, grace of, umm... "civilization"). – Yuri Zavorotny Nov 18 '20 at 21:57
  • Isn't it obvious that she's learnt how her organism reacts to the color? There's a simple mathematical argument that shows that sometimes you cannot predict things, no matter how much you know. – Dmitri Urbanowicz Nov 19 '20 at 10:01
  • @Dmitri > "*Isn't it obvious that she's learnt how her organism reacts to the color?*" -- she didn't "react to the color"; she reacted to *seeing* it. She reacted *because* she learned something the moment she saw it for the first time. – Yuri Zavorotny Nov 19 '20 at 22:10
  • @YuriAlexandrovich I didn’t say “she reacted”. I said her body did, which is in no way contradictory with saying that her self reacted to her seeing. My point is that you can invoke the same argument about unexpected experience even if “Mary” is just a Turing Machine. – Dmitri Urbanowicz Nov 20 '20 at 04:50
  • "*I didn’t say “she reacted”. I said her body did*" -- ok, what is her body reacted to?... – Yuri Zavorotny Nov 20 '20 at 05:04
  • @YuriAlexandrovich It doesn’t matter as long as there was a change somewhere in the body. – Dmitri Urbanowicz Nov 20 '20 at 10:08
  • @DmitriUrbanowicz > "*It doesn’t matter...*" -- it depends on whom you ask... to some people [it matters](https://philosophy.stackexchange.com/q/77517/47003) – Yuri Zavorotny Nov 20 '20 at 12:42
  • @YuriAlexandrovich Not sure what you mean by that. I’m not denying that there’s something unexplained about subjective experiences. I’m just saying that thought experiment with Mary-the-Scientist isn’t convincing. – Dmitri Urbanowicz Nov 21 '20 at 07:52
1

Not only do we have something we call experiences, we are also aware of having experiences and we can reflect on them, have feelings about them, etc.

This meta-stuff is not (yet?) within the realm of what computers can do or are expected to do. So it's a hard problem both philosophically and scientifically.

  • The real hard problem is not about the consciousness itself. It's about *having it in the first place*, which most of us don't. We are supposed to be our conscious, rational Selves, but most of us are forced to commit a virtual suicide early in childhood. Alone and betrayed, their Selves effectively give up on thinking, on making conscious choices, on their agency. Once their neural net AI, their subconsciousness, takes over their thought process ('cause someone has to drive!), their Selves become their helpless, nagging Ego -- sometimes observing.... – Yuri Zavorotny Nov 19 '20 at 23:19
  • .... from the backseat, or fast asleep in there, leaving their neural net AI chatbot/autopilot to try and make it *look* like they are still conscious, still awake at the wheel... – Yuri Zavorotny Nov 19 '20 at 23:20
  • .... i don't know what else I can do, to wake them up – Yuri Zavorotny Nov 19 '20 at 23:21
1

When you are talking about a green apple, your experience is that green apple. When you are talking about the neurons switched on in your brain while one is talking about the apple, your experience is those neuron cells. You see: the objects of your consciousness are different.

You say, "but they correspond perfectly, the apple qualities and the cell's parameters", and you describe how in details. But then the object is the correspondence, yet a third and another object.

If you hope to substitute the equivalent neurochemical state for the perception of the apple, you will have to train yourself to visualize the former whenever somebody says "apple". That would be a substitution of objects, and that is all.

Moreover, the experience of an object is immersed in one's current project (expectations, wishes, mood, etc.). My impression of the green apple is very specific if my tooth is sore. Likewise, your attempt to link this experience of mine with brain cells is unique in another way, say, because you are preparing your thesis and are motivated by the prospect. But both "contexts", yours and mine, are not clearly apprehended most of the time and so, scientifically speaking, are hard to control for.

You might protest about this whirl of objects: "I believe the apple and the neurons reside in the world even when I don't think about them". You're right. Still, when you are thinking about their correspondence, you are keeping them apart. To relate or compare two things (even as equivalent) means to deny their identity. So your effort to map the green apple image onto the neuronal firing field strengthens and sharpens the distinction between the counterparts. Thus a physical explanation of an experience is self-defeating.

You may resist, "I'm talking about a correspondence between the intrapsychic image and the brain, not about the apple out there and the brain". Then you are a mystic. For there is nothing inside consciousness. Consciousness is void - it is just activity about things (material or imaginary) of the outer world (yes, imagination is an outer experience). By putting a spooky proxy of the apple in place of the apple and drawing links between neurons and the proxy, you are committing a forgery (called modelling), because you can move the spooky instance as close to the brain as possible while (groundlessly) claiming it "represents the experience/qualia".


Those were remarks against scientific reductionists. Now, about the hard problem of consciousness itself. Wikipedia describes it as follows: "The hard problem ... is the problem of explaining why ... we have ... phenomenal experiences".

Since I tend to be a phenomenologist, that is not a problem for me: everything in the world is just phenomena (which exist as experienced, i.e. apparent), and there is nothing besides phenomena.

So I would reformulate the hard problem by shifting the accent: "The hard problem ... is the problem of explaining why ... we have ... phenomenal experiences (rather than we are they)".

To have something is, in other words, not to be it, or (put differently) to be it by the mode of a lack or via a clearance. That is the hard problem which science-based reductions cannot help with, I suspect.

ttnphns
  • 420
  • 4
  • 11
  • My only argument against phenomenology is that it seems to lack a guiding metaphysical principle, since, as you mentioned, it denies either matter or the rational mind/ideal as a real ontological substance (something with real ontological existence) which, reflected through our sense organs, becomes our experienced phenomena, as materialism and idealism try to leverage. So your phenomenology is basically back to a layman just using one phenomenon to explain another phenomenon, totally within this perceived world... I'd like to see if you have anything to say about this lack-of-ontological-substance issue of phenomenology? – Double Knot Mar 10 '21 at 18:29
  • I have nothing to add because you said it yourself and accurately. Yes, phenomenology sees no need for matter or for spirit (idea). We _live_ in a "layman's" (your word; immediate or unprejudiced, my word) world where entities are just series of phenomena replacing each other. "Let's go back to the things themselves," as they appear, is the motto. The kernel is _in_ the obvious, not _under_ it. – ttnphns Mar 10 '21 at 19:31
1

You are on the right track here, and we can use Daniel Dennett's "What RoboMary Knows" thought experiment to continue this approach.  While this was developed in response to a well-known thought experiment from Frank Jackson's "Knowledge Argument", known by various names such as “Mary’s Room” or "Mary the Color Scientist", the Hard Problem claim is built on the same notions.

Here, Dennett posits a conscious, self-aware, qualia-experiencing type of robot, which knows all the relevant details of its own circuitry and programming, and also has the ability to make specific, targeted changes to its own internal state. In his reply to Jackson, Dennett tells a story in which one such robot, RoboMary, has been equipped with monochrome cameras instead of the usual color ones, but, using her* extensive knowledge of color in the environment and of color vision, she is able to calculate how color cameras would record the scene before her, deduce what changes this would cause to the state of her neural circuitry, and, using her fine-grained control of that circuitry, put it into the state it would have reached if she had color cameras. Given that her physical state is identical to the one which would have resulted from seeing in color, physicalists see no reason to suppose that this would be experienced by RoboMary any differently than actually seeing the scene in color, and would have the same consequences as doing so.

To some, this may look like begging the question, by asserting that consciousness can arise in a purely physical entity. Dennett anticipates this objection:

Hold everything. Before turning to the interesting bits, I must consider what many will view as a pressing objection:

"Robots don’t have color experiences!  Robots don’t have qualia. This scenario isn’t remotely on the same topic as the story of Mary the color scientist."

I suspect that many will want to endorse this objection, but they really must restrain themselves, on pain of begging the question most blatantly. Contemporary materialism–at least in my version of it–cheerfully endorses the assertion that we are robots of a sort–made of robots made of robots. Thinking in terms of robots is a useful exercise, since it removes the excuse that we don’t yet know enough about brains to say just what is going on that might be relevant, permitting a sort of woolly romanticism about the mysterious powers of brains to cloud our judgment. If materialism is true, it should be possible (“in principle!”) to build a material thing–call it a robot brain–that does what a brain does, and hence instantiates the same theory of experience that we do. Those who rule out my scenario as irrelevant from the outset are not arguing for the falsity of materialism; they are assuming it, and just illustrating that assumption in their version of the Mary story.  That might be interesting as social anthropology, but is unlikely to shed any light on the science of consciousness.

To fit this story to the hard problem, let us first see what its proponents claim, which is that, while figuring out the physics of how the brain works is a hard problem in the ordinary sense, there is a much harder problem lurking behind it: explaining how the physics of the brain gives rise to qualia. With an intuition pumped up by the Knowledge Argument, they propose qualia are intrinsic, ineffable and private, and assume this creates an unbridgeable "explanatory gap" between physical knowledge and knowing what it is like to have experiences.

To respond, we can suppose that, instead of deducing a state corresponding to seeing something in color, RoboMary is simply told what that state is by another robot of the same type, which has functioning color vision. Just as in Dennett's original story, RoboMary, after setting her internal state accordingly, now knows what it is like to see a scene in color, without having done so. For these conscious entities, their qualia are neither intrinsically private nor ineffable, as demonstrated by their transfer from one to the other.
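To make that state-transfer scenario concrete, here is a toy sketch under the physicalist assumptions above (all class and method names are invented for illustration; nothing here is meant as a model of a real conscious system):

```python
# Toy sketch of the RoboMary state-transfer scenario; all names are invented for illustration.
# The structural point: if "what it is like" just is an internal physical state, that state
# can be read out of one agent and installed in another, so it is not intrinsically private.

import copy

class Robot:
    def __init__(self, name, has_color_cameras):
        self.name = name
        self.has_color_cameras = has_color_cameras
        self.visual_state = {}  # stands in for the relevant circuitry state

    def look_at(self, scene):
        mode = "color" if self.has_color_cameras else "monochrome"
        self.visual_state = {"scene": scene, "mode": mode}

    def export_state(self):
        return copy.deepcopy(self.visual_state)

    def install_state(self, state):
        # RoboMary's fine-grained self-modification: set her circuitry to the given state.
        self.visual_state = copy.deepcopy(state)


color_robot = Robot("ColorBob", has_color_cameras=True)
robo_mary = Robot("RoboMary", has_color_cameras=False)

color_robot.look_at("red leaves")
robo_mary.install_state(color_robot.export_state())

# RoboMary's state now matches the one that color vision would have produced.
print(robo_mary.visual_state == color_robot.visual_state)  # True
```

On the physicalist premise, the transfer just is the communication of the quale; whether that premise is correct is, of course, exactly what the hard problem disputes.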

Of course, none of this solves the hard problem, as it depends on it being possible to make, at least in principle, a RoboMary - an artificial conscious machine - which is something not yet established. What it does show is that, contrary to what many anti-materialists believe (and perhaps hope is true), the hard problem need not be unsolvable regardless of what progress is made in neuroscience. RoboMary is plausible under the usual physicalist assumptions, which makes it equally plausible that the apparently ineffable nature of qualia is merely a consequence of our inability to examine and modify, at the neuron and synapse level of detail, the processes going on in our own brains. RoboMary has that ability, and if RoboMary is possible, so too is the communication of qualia by language.

Anti-materialists could simply assert that RoboMary is not possible, but, as Dennett says, that would not be arguing for the falsity of materialism; it would be assuming it. They might claim that RoboMary would still leave something unexplained, but to be plausible, they would have to be more specific than they have so far about what that is, and do so without tacitly begging the question by assuming qualia are not the result of physical processes (Dennett himself has made that point in various places, such as "Explaining the 'Magic' of Consciousness.")

Some anti-materialists would doubtless argue that RoboMary would be, at best, a p-zombie (something physically identical, at least neurologically and functionally, to a human, but lacking qualia.) Responding to that claim in detail (and all other claims that the definitive anti-materialist argument is to be found elsewhere than in the one we are discussing) is beyond the scope of this question; here, it is sufficient to note that not all of the many physicalism-inclined philosophers accept the argument's leap from p-zombies' conceivability to their modal possibility, despite Chalmers' closely-argued attempt to persuade them that it is not a claim that needs further justification.

One useful feature of this approach is that it avoids issues of what sort of event learning "what it's like" is. Whether it is learning a fact, or gaining an ability or phenomenal concept, the physicalist premise holds that all mental events involve, and are (in principle) causally explicable by, physical changes in the brain, and so they are communicable in the form of a sequence of physical changes to be made at specific locations (again, in principle, and only for conscious agents having the level of control of their physical state being proposed for RoboMary. Dennett's story is not, as some have mistakenly taken it to be, an argument that Mary herself would be able to do this.)

It has been said that the hard problem is only a problem for physicalists, but there is something of a double standard in so saying. Anti-materialists have been no more successful than physicalists in completing an explanation of how minds work; saying "well, it cannot be via physical processes alone" does not explain anything, and it does not mean that the question of how minds work goes away, even if it turns out that the anti-materialists are correct.


*I am following Dennett's lead in using gendered pronouns here.

A Raybould
  • 326
  • 1
  • 12
  • Chalmers himself would not deny that something like RoboMary is possible, or say she'd be a p-zombie--he takes for granted that the physical world is causally closed (no interactive dualism), and he thinks there are likely to be "psychophysical laws" relating physical patterns to phenomenal experience, and that these laws would respect a principle of "organizational invariance" meaning an accurate simulation of a brain would have the same type of phenomenal experience as the original. So he'd presumably agree RoboMary's self-alterations would give her the same experience as color cameras. – Hypnosifl Jul 15 '21 at 13:35
  • @Hypnosifl Indeed, and Chalmers's minimalistic dualism might be considered unsatisfactory by many anti-materialists. For any hard-problem proponent accepting that RoboMary's self-alterations would give her the same experience as color cameras, the question - what, specifically, is beyond science's ability to explain - becomes more pointed. – A Raybould Jul 15 '21 at 16:48
  • 1
    Chalmers's argument, like Nagel's, is that there are *facts* about first-person consciousness that go beyond all possible third-person physical facts. And on p. 144-145 of *The Conscious Mind* he argues that while a physicalist may say that Mary gains a new *ability* when seeing color for the first time, it doesn't make sense for them to say that Mary has learned any new facts (i.e. the fact of what it is like to experience color). So he might argue that similarly in the RoboMary thought-experiment, RoboMary's rewiring does not allow her to learn any new facts in the physicalist picture. – Hypnosifl Jul 15 '21 at 17:16
  • @Hypnosifl The challenge for anyone claiming that there is a fact of what it is like to experience color, and that one must learn it in order to know what it is like to see color, is that no-one who knows what it is like has been able to articulate this fact that they are supposed to know. – A Raybould Jul 15 '21 at 18:47
  • @Hypnosifl In Dennett's original version, RoboMary is deducing new facts, and not just gaining abilities, from what she already knows, and in the scenario in my reply, she is learning them discursively from another conscious agent. – A Raybould Jul 15 '21 at 19:08
  • @Hypnosifl ...however, the supposition is not that it is the learning of these facts that brings about qualia, but the setting of the physical state accordingly. – A Raybould Jul 15 '21 at 19:29
  • RoboMary must deduce new facts when calculating how input from color cameras would alter her circuits, but once she has made that factual deduction, I think a physicalist would have to say she learns no *additional* new facts when she actually alters her own circuits in a matching way. Whereas one who believes in additional phenomenal facts might say that after she has made the calculation, but before she has altered her own circuits, she does not really know what color qualia are like first-hand, but after the alteration she does know that. – Hypnosifl Jul 15 '21 at 20:46
  • @Hypnosifl If everything is experience, couldn't a physicalist say RoboMary could not have knowledge of her post-circuit-alteration state beforehand, because that is a new experience? Just as we don't experience the sun rising once and conclude that the sun must always rise; rather, we experience the sun rising, and experience it rising twice, and a third time, and so on, and then change our beliefs and knowledge. It is the perpetual novelty of every experience that changes our state of knowledge. RoboMary may have new knowledge from the new experience of confirming prior knowledge. – J Kusin Jul 15 '21 at 21:00
  • @Hypnosifl In essence, even in a physicalist, deterministic world, in the process of acquiring epistemic knowledge, *every* experience strengthens or denies some prior knowledge or beliefs, even those whose outcome we *believe* we already know. Until we have *the final theory of all*, this experience-first approach seems possible even in light of RoboMary. It seems like the default mode when we have incomplete knowledge imo. – J Kusin Jul 15 '21 at 21:09
  • @Hypnosifl Yes, you are actually recapitulating the point I was trying to make, perhaps not very clearly, in the last part of my previous reply. In RoboMary's case, she has undergone the same physical changes as occur when a robot learns what it is like to see colors, and so the physicalist premise is that she has done so - it does not matter whether she has learned a new fact (which I am still extremely skeptical of on the grounds that no-one can articulate it, despite their knowing it), or gained a new ability, or a phenomenal concept, or whatever you want to call it. – A Raybould Jul 15 '21 at 21:45
  • @Hypnosifl To put it another way, suppose RM learns what is indisputably a new fact, such as an acquaintance's new phone number, not by simply being told what that number is, but by being told what state would be exactly her current state except that it includes a memory of the acquaintance's new number. After she sets her state to correspond, she will now recall the new number when she asks herself what that acquaintance's number is. The change of state has given her knowledge of a new fact, yet the instructions for how to set her state did not explicitly state it. – A Raybould Jul 16 '21 at 01:54
  • @ARaybould I think that last argument is a red herring. How is "being told what state would correspond to being exactly her current state, except that it has a memory of the new phone number of the acquaintance" possible without..well...being told what that number is? Whether you encode it as "change memory register XY to 01001110101..." or via processing of vocal input of the number does not make any difference, it's just two different representations of the same information - for her. Otherwise, she would not understand (know how to process) the instruction with the same outcome. – Philip Klöcking Jul 16 '21 at 10:13
  • @PhilipKlöcking It is not a red herring, as it is a response to the suggestion by Hypnosifl that "I think a physicalist would have to say she learns no additional new facts when she actually alters her own circuits in a matching way."... Supposing that what Mary learns is a fact appears to be a way to avoid Churchland's charge, in "[Knowing Qualia: a reply to Jackson](https://philosophy.stackexchange.com/a/58939/33812)" that the Knowledge Argument equivocates over "knows about", but it only works if this supposed fact is communicable to Mary via words, which, empirically, it is not. – A Raybould Jul 16 '21 at 10:29
  • @PhilipKlöcking Putting it another way, suppose one is updating a database by writing directly to disk. One could do "change memory register XY to 01001110101..." without knowing that so-and-so's phone number is now such-and-such, but the "memory" would now be there (a short sketch of this follows these comments). – A Raybould Jul 16 '21 at 10:35
  • The thought-experiment of a language where all new knowledge is communicated in terms of instructions about how to alter your brain-state to gain new understanding is an interesting one, I'm not sure how Chalmers would respond. Personally I tend to disbelieve in "fundamental" or atomic qualia (the 'pure redness of red' or something) and prefer a structuralist or relational understanding of qualia (see [here](https://www.frontiersin.org/articles/10.3389/fpsyg.2014.00237/full)), so explicitly stating the updated mental relationships could uniquely define the qualia. – Hypnosifl Jul 16 '21 at 18:28
  • I do think that some kind of information theory like "theory of consciousness" might be needed for other reasons besides mapping computational states to atomic qualia though, for example precisely defining why some internal relationships are more salient and others more subliminal, and also perhaps to define some notion of subjective probabilities in identity-splitting cases like the many-worlds interpretation or the teletransporter thought-experiment. – Hypnosifl Jul 16 '21 at 18:45
  • @Hypnosifl Loorits' paper (the one you link to) sets out a position for which my answer here is merely an illustration. To its last sentence, "Namely, it is easy to understand and accept that having knowledge about some neural structure does not necessarily make that structure occur in one’s brains." one could add "...and physicalism does not imply that it must." ... It is not clear to me what your concerns are with respect to salient vs. subliminal (most animals and complex systems have a priority hierarchy for stimuli), or with many-worlds and teletransporter thought-experiments. – A Raybould Jul 21 '21 at 16:25
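As an aside on the disk-writing example in the comments above, the point can be made concrete with a short sketch. Everything in it is hypothetical and purely illustrative (the file name, record layout, and number are invented); it only shows that a "memory" can be put in place by following a byte-level instruction, without the agent applying the instruction being told, in ordinary terms, the fact that those bytes encode.

```python
import struct

STORE = "contacts.db"   # hypothetical flat-file "database"
RECORD_SIZE = 16        # fixed-width record: 8 bytes of name hash + 8 bytes of phone number

# Initialise a single empty record so the sketch is self-contained.
with open(STORE, "wb") as f:
    f.write(b"\x00" * RECORD_SIZE)

def apply_patch(offset: int, raw: bytes) -> None:
    """Blindly write raw bytes at an offset - the 'change memory register XY' step."""
    with open(STORE, "r+b") as f:
        f.seek(offset)
        f.write(raw)

def lookup_number(record_index: int) -> int:
    """Query the store for a phone number - only here is the fact read off as a number."""
    with open(STORE, "rb") as f:
        f.seek(record_index * RECORD_SIZE + 8)
        return struct.unpack(">Q", f.read(8))[0]

# The patch encodes some phone number, but whoever (or whatever) applies it is
# never told which number; it only follows the byte-level instruction.
patch = (8, struct.pack(">Q", 5551234567))   # prepared elsewhere, passed in opaquely
apply_patch(*patch)
print(lookup_number(0))   # 5551234567 - the "memory" is now there
```

Whether this counts as the writer "learning" the number before the lookup step is, of course, exactly the point in dispute above; the sketch merely separates setting the state from articulating the fact.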
0

What's hard about it is that no one can see an ontological distinction in the causal chain of sensory-cognitive processing that changes the objective relationship into the purely subjective one we agree is "consciousness" (even if MRI scanners can correlate the two).

The solution is that there is such an ontological distinction, akin to that between particles and anti-particles, which allows a separate chain of processing: each, in this example, would be in a separate dimension of time, and causality is tied to a single dimension. The relationship required is one between the neurons and the skin, or membrane, of the neuron. This is the ontological separation that allows subjective experience to be a different medium from the objective, even though the two are distinctly and necessarily related.

No one has shown scientifically that consciousness could be riding on the surface of the neuron, but this is probably the case, since the need for separation must exist if we are not robots.

Marxos
  • 735
  • 3
  • 12
0

@ChristianDumitrescu has given precisely the right answer in the form of a comment. I'll just expand on it, hoping I have captured his point.

As synthesized by @ChristianDumitrescu, it's a problem of degree, not kind.

To start, complexity is essentially the quality of a system by which it is simply difficult to understand. A complex system is not a system that has "more than 100 subsystems" (a definition I once read on the web, which would be equivalent to saying that a circle is a figure formed by at least 100 arcs), or one that exhibits a large number of intricate relationships. A complex system is a system that is difficult to understand, a model that is difficult to grasp as a simple concept (yes, I know that is much the same as the definition of a standard system, a group of interrelated parts, but there is no formal definition of a complex system; classical systems theory was developed precisely to address complexity).

A "simple" mathematical problem would usually be one that features a limited set of unknown variables. A "hard" problem would feature an unknown set of unknown variables - that is, precisely a complex problem, as defined above. Consciousness is a hard problem because the product exceeds by far a linear (or non-linear**!) combination of its constituent functions, which in turn are far from being comprehensible (i.e., are themselves complex).

** Non-linear: essentially, exhibiting emergent behaviors - as is commonly said, where the whole is more than the sum of its parts.

RodolfoAP
  • 6,580
  • 12
  • 29
0

The hard problem of consciousness is the "explanatory gap" between, on the one hand, the language of physics — which apparently governs everything that happens in the universe — and on the other hand the inner experiences that all sentient human beings have.

There seems to be no way to start with the laws of physics (as we know them) and the objects that they apply to (assemblages of particles and waves) and end up with a conclusion that any experience whatsoever is being experienced.

That is what the hard problem of consciousness is.

(The word "hard" is used to distinguish it from the so-called "easy" problems of consciousness, which are not really easy, but perhaps easier: These are the problems of describing the types of consciousness that occur and under what circumstances.)

Daniel Asimov
  • 646
  • 3
  • 11
0

I think you don't understand the problem.

There is definitely some correlation between parts of the brain and conscious activity. When you do mathematics or study IT, you use the left half of your brain more than the right, and when you paint it's the other way around. Alcohol activates part A of the brain, smoking dope activates part B, when we are in love it's part C, and so on. Understanding those correlations is the easy problem, and it's (most likely) only a matter of time before we fully solve it.

Your software-hardware example deals with the easy problem: it already presupposes consciousness, and it does not address the most important question of how consciousness arose in the first place. Also, this example is a poor one because the mind is not at all like a computer; such analogies come from cognitive science, which is full of wrong assumptions that go back to Husserl. Modern computers are just combinatorics machines.

The hard problem is very different. First of all, it has to do with spontaneity. In this context, "spontaneous" is used in the sense of chemistry. Sometimes you mix two chemicals and nothing happens (as with oil and water), and in other cases you mix two things and the mixture blows up or bubbles spontaneously. Basically, you mix A and B (or add C, or a thousand other chemicals), leave them on their own, and without any interference from your side a vigorous reaction occurs by itself.

What are you, fundamentally? Just a bunch of atoms. Your brain is also a bunch of atoms. Add two atoms together: what will happen? Nothing. Now add a third one? Still nothing. We go on billions of times: nothing. We keep going in this way, and it just so happens that when we mix 6,543,523,432,234 atoms of carbon, hydrogen, and so on, consciousness spontaneously arises. Atom + atom + atom + a billion more atoms -> consciousness. How? That's the hard problem.

This problem is hard for several reasons: It will probably never be solved. It is not even clear if we will ever solve 0.001% of this problem. Our modern science has no tools to deal with it and no conceptual framework to even approach it.

Consciousness, subjective experiences and other phenomena of the mind have nothing to do with atoms, even though the brain is nothing but atoms. No matter how many atoms you mix in, no matter in what sequence, shape or form, you will NEVER get consciousness, and the fact that it exists is literally a miracle.

Once again, if we already have consciousness, we can find some physical correspondence between some feeling and some firing neurons, and that is the easy problem.

Dennis Kozevnikoff
  • 1,247
  • 2
  • 15
-2

Referring to your computer example, I see this as the difference between a 1960s electronic system and a modern computer. In the 60s, switch A turned on light A. Now there are many layers, and when you press a button in your app there is far more happening before the light comes on.

The easy problem is the hardware, the physical parts of the brain: how an event triggers a sequence of neurons to fire and generate a response. The hard problem is understanding the high-level programs that are running, i.e. the sequences, timings, patterns, and feedback loops operating in that system.

Imagine trying to work out what specific action a user is taking on their smartphone by looking only at a fuzzy memory dump, and then reverse-engineering a web browser from that information.

Martin
  • 97
  • martin -- you have not understood the hard problem, which has nothing to do with the complexity of calculation algorithms. Our higher level programs are no more or less conscious than our simple ones. – Dcleve Feb 25 '21 at 23:36