
Recently I read a comment saying that most physicalists believe some threshold of complexity must be surpassed before any sentience is exhibited. I've often heard similar ideas, but I've never seen anyone try to defend that view ontologically.

Correct me if I'm wrong, but I believe that the properties things exhibit are rooted in their ontology. In other words, ontology constrains the range of possible properties that something can possess.

If that's the case, then it seems like there are primarily two possibilities for supporting the complexity theory, depending on whether sentience is a property of things or of relations between things:

  1. Sentience results from amplifying some sentient property which things already possess (but is not easily detectable without amplification).
  2. Sentience is a property of complex relations between things. I mean a real property, as opposed to a merely abstract description of a relation (such as a crystalline configuration, or the swirliness of a hurricane).

There might be other possibilities I haven't thought of. I considered a hybrid theory in which sentience is a property of certain things as well as a property of a certain relation, but that seemed rather far-fetched. The first possibility I listed sounds somewhat like a Leibnizian idea, so I'm guessing there wouldn't be many adherents to such a view.

I'm especially interested in the second possibility, because it's independent of any informational medium, and it seems to correspond to what people actually mean when they talk about supervenient properties arising from complexity. However, it strikes me as odd that a relation could be said to have any sort of ontology that could give rise to sentient properties (or any non-abstract property, for that matter).

My question: Are there any good arguments to support this theory, or are there any philosophers who have investigated the ontology of complexity such that it could give rise to sentience?

Edit (in response to comments):

jobermark: Traditionally, there was little or no distinction between properties and powers; properties were seen as being able to bring about effects in people (such as sensations) or other things.

Temperature is a measure of energy, but it's really the energy that brings about physical effects such as transduction in humans. Similarly, acidity is a measure of a compound's potential to bring about certain chemical reactions, so it's a description of its reactivity. For that reason, I'd say that temperature and acidity are descriptive of properties rather than being real properties themselves.

However, whatever definition is chosen, the following should be kept in mind: if the theory explains pain, for example, on the basis of nothing more than abstract relations, measurements, or descriptions, then it denies that the ontology of pain is anything more than an abstraction. You can suggest such a theory, but the question remains whether anyone would accept it. Most people believe that pain is something real that can't be explained away so easily.

Philip Klöcking: It would be more accurate to say that I'm speaking of secondary properties in the Lockean sense, which are those that do not resemble their causes. However, the causes to which they correspond are very real, so I would call those real properties of things. This is as opposed to abstract or conceptual properties, which only exist in virtue of the understanding. Sensations are real because they are prior to conceptualization and not the product of it. As Wilfrid Sellars pointed out, "...there is no reason to suppose that having the sensation of a red triangle is a cognitive or epistemic fact." (Although I agree with Sellars on this, it should be pointed out that there's a distinction to be made between triangular form and the concept of a triangle: the sensation of triangles is not cognitive, but the conception of them as triangles is.)

I'm speaking of real properties as either phenomena or the correlates of phenomena. This is similar to the way that Kant spoke of the "real in sensation":

"[Phenomena] contain, then, over and above the intuition, the materials for an object (through which is represented something existing in space or time), that is to say, they contain the real of sensation, as a representation merely subjective, which gives us merely the consciousness that the subject is affected, and which we refer to some external object." (Kant, CPR A166/B207)

John Forkosh: Here are five reasons why I'm certain that sensations are not merely conceptual:

  1. True vs. Theoretical — We say things like, "I saw it with my own two eyes," because our sensations provide the ultimate court of appeal for empirical truth. Concepts, on the other hand, don't even have to be perfectly true to be useful. In fact, they are more theoretical in nature. My concept of horse, for example, gets me by until I learn something new about horses, and then I update my concept. For that reason, it's reasonable to assume that almost all of our concepts are to some extent incomplete and inaccurate.
  2. Objective vs. Subjective — Sensations are the means by which we access the objectivity of the world, and so, they are beyond our control. Concepts, on the other hand, are subjective and, in order to be useful, must be subject to the control and revision of the understanding.
  3. Effectual vs. Ineffectual — Sensations could be thought of as being effectual in virtue of their role in engaging our dispositions and inclinations. In addition to that, they are the way that the effects of physical properties are made manifest in humans. Concepts, on the other hand, are tools of reasoning, useful for understanding the effects which come to pass, but they don't really play an effectual role themselves, except perhaps in a more abstract way, by being intermediate between sensation and judgment. The role of concepts tends to be more descriptive or discursive.
  4. Ends vs. Means — This idea is related to the last in the way that sensations engage our inclinations toward material ends. Concepts play an auxiliary or intermediate role in this process and can be thought of as the means to rationally evaluate the ends which sensation presents us.
  5. Original vs. Derivative — Sensations provide original content, and concepts provide derivative content. Our understanding of empirical concepts is ultimately dependent on sensation to provide their meaning. In fact, even abstract concepts must somehow be rooted in one or more of our fundamental faculties (volition, sensation or moral sense) in order to be meaningful. Concepts can be understood by means of other concepts, but somewhere in the mix there has to be something to ground them in a meaningful way. This is perhaps the biggest problem with claiming sensations are merely conceptual; it leads to an infinite regress. That is to say that representations could only represent other representations without ever having anything which is originally represented.
  • So you reject the possibility of it being an *emergent* property of complex systems? I know that this is a slightly frustrating one, as it basically says 'We cannot explain it, but it happens', but afaik it is the most common. – Philip Klöcking Dec 03 '16 at 14:12
  • @PhilipKlöcking. I'm not concerned about what such properties are called, but usually when people speak of *emergent* properties, they are simply descriptions of relations and have no real ontology. You could, for example, speak of a *beautiful* face as a property, but that's merely a description of how the whole is perceived. –  Dec 03 '16 at 14:16
  • @PédeLeão By this criterion for 'realness', are temperature and acidity 'real properties', or a 'mere abstract description' of the corresponding molecular configurations? (They are the primary examples of emergent properties. And we measured them both before we realized what they are or are not really properties of. So it is hard to claim they are abstracted away from what we eventually discovered to explain them.) –  Dec 03 '16 at 16:18
  • @jobermark. I edited the question in response to your comment. –  Dec 03 '16 at 16:58
  • So your theory basically asks whether sentience could be understood as a *primary property* in the understanding of Locke, as opposed to being a (mind/description/abstraction) dependent property, i.e. secondary? – Philip Klöcking Dec 03 '16 at 18:36
  • @PhilipKlöcking. I edited the question in response to your comment. –  Dec 03 '16 at 20:19
  • I'm unclear as to what the "real" properties referenced in the bullet numbered 2 are. What is the definition, or means of demarcation, between real and abstract? Examples? – Dave Dec 04 '16 at 04:25
  • @Dave. I use the term in the same sense throughout the post. My comment to PhilipKlöcking provides a definition and explanation. My comment to jobermark explains that any specific definition is secondary to the ontological implications that a given definition will have. What exactly is it that's not clear? –  Dec 04 '16 at 10:36
  • I think you're wrong about emergence being constrained by ontology. Consider the following trivial example (too trivial for brain-->mind, but illustrating the underlying point). An irregularly-shaped rock doesn't possess the property of a circle. But you can take a bunch of such rocks and arrange them in a circle. So the circle property "emerges" from the complex arrangement (trivially complex) of rocks, though none of them ontologically possess it. –  Dec 04 '16 at 10:45
  • There's a lot of stuff along these lines, pro and con, discussed at https://www.closertotruth.com/topics/consciousness/consciousness (not sure whether or not the discussions are as redundant as the url:) –  Dec 04 '16 at 10:51
  • @JohnForkosh. The ontology of the concept of circularity is conceptual or abstract, so emergence in your example is also conceptual or abstract. My question is: what is the ontological basis for sentience if it is said to emerge from complexity? Is it merely conceptual, or can you claim that there is something more to it? –  Dec 04 '16 at 11:00
  • "merely" conceptual -- **all** emergent phenomena are ontologically conceptual/abstract, but I wouldn't (necessarily) characterize that as "merely", though it's sometimes merely, to wit the circle example I gave. Maybe somewhat less trivial would be -- all thermodynamic variables (P,G,T,A,V,U,S,H -- google, e.g., "thermodynamic square") are emergent/conceptual. A single atom/molecule doesn't possess temperature or pressure, etc, itself. It posseses momentum, from which all that other stuff is emergent when a large number of molecules is considered. –  Dec 05 '16 at 02:29
  • @JohnForkosh. I say "merely" because if it's true, sensations are merely conceptual. If you're happy with that theory, post an answer. You didn't convince me, but maybe you'll convince someone else. –  Dec 05 '16 at 02:54
  • Yeah, sensations must be conceptual -- if you accept/advocate physicalism/emergence. Otherwise, they're...whatever it is you accept/advocate. But I'm not trying to convince anybody of anything. I don't even completely accept (and certainly don't advocate) physicalism myself. But I do think it's the most well-grounded. Emergence is pretty well-established for all complex phenomena except consciousness, so Occam's razor suggests it's your best first bet for consciousness, too, unless you come up with good reason otherwise. Contrariwise, e.g., panpsychism is completely ab initio, without evidence. –  Dec 05 '16 at 03:44
  • @JohnForkosh. I edited the question in response to your comment. –  Dec 05 '16 at 09:41
  • There's a distinction between "sense" and "sensation". A toy robot can be built with a photocell, such that when you shine a flashlight at it, it walks towards the light. So it surely "senses" the light, but I think we can agree it has no qualia-like "sensation" of what it's sensing. And that "robot-sense" isn't (pretty much can't be) conceptual. But the "qualia-like sensation", for which consciousness is prerequisite, is conceptual. Your body, sans consciousness, has "sense" mechanisms, from which your consciousness conceptualizes "sensations". (Or at least, that's how I'd interpret it.) –  Dec 05 '16 at 10:13
  • You seem to be making this needlessly difficult. Sentience doesn't just arise from complexity. It's necessary but not sufficient. Like a computer program requires a degree of complexity before being able to exhibit certain features, so does the brain. There's a lot going on to produce sentience. Ontologically it's reducible to constituent parts but that doesn't necessarily help understanding it any more than decompiling a program and we don't have the tools to do all the reverse engineering we can do with computers yet. – Tanath Dec 07 '16 at 21:01
  • @Tanath. Post an answer if that's what you believe, but you're right in that what you're saying doesn't explain anything. Parts + complexity = sentience? It sounds like you're just jumping to conclusions. –  Dec 07 '16 at 21:09
  • It's what fits with what we know and there's no evidence for it being any other way that I'm aware of. I'm not sure how to provide an answer for your question though. – Tanath Dec 07 '16 at 21:11
  • @Tanath. In my opinion it doesn't fit at all. –  Dec 07 '16 at 21:13
  • Only because you think sentience is somehow special. – Tanath Dec 07 '16 at 21:17
  • @Tanath Re "decompiling", that's not the relevant issue, since it just translates one language (machine code) into another (any higher-level language you have in mind). The real issue is semantics -- what does the program **do**, e.g., as a function integers-->integers. And the halting problem shows that even the much simpler question, does or doesn't the program halt, is unanswerable (at least by any other program). So the harder question, what does the program do as a function, is even less answerable. And consciousness would be what our brain **does**, regardless of language it's written in –  Dec 09 '16 at 04:21
  • What does your comment have to do with what I said? My point was that due to its complexity and our lack of tools the brain is currently too difficult to understand at that level. I'm not aware of anything that implies that sentience is magical or otherwise special in some way that makes it require something other than what we think the brain does and computers could do. – Tanath Dec 09 '16 at 06:53
  • @Tanath Yeah, I completely agree with your overall "My point was that...". I was only remarking on your "decompiling" remark. Oh, wait, I see your in-context remark was "doesn't help understanding any more than decompiling". Oops, my bad, don't see how I missed that context earlier. –  Dec 10 '16 at 03:18
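To make the thermodynamics comment above concrete: no single molecule has a temperature, but a temperature does emerge as a statistical average over many molecular motions. A minimal Python sketch with assumed toy numbers (a nitrogen-like molecular mass and an arbitrary velocity spread):

```python
import random

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26         # assumed mass of one N2-like molecule, kg

# A single molecule only has a velocity (hence momentum and kinetic energy).
velocities = [random.gauss(0.0, 500.0) for _ in range(100_000)]  # 1-D velocities, m/s

# "Temperature" only appears once we average over the whole collection.
mean_kinetic_energy = sum(0.5 * m * v * v for v in velocities) / len(velocities)

# Equipartition in one dimension: <E_k> = (1/2) * k_B * T
T = 2.0 * mean_kinetic_energy / k_B
print(f"emergent temperature of the ensemble: about {T:.0f} K")
```

The quantity `T` is well defined for the ensemble even though no individual entry of `velocities` possesses it, which is the sense of "emergent" the comment appeals to.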
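Similarly, the halting-problem remark can be made concrete with the classic diagonal construction, here as a sketch in Python. The oracle `halts` is purely hypothetical (it is not part of the original thread, and the argument shows it cannot exist):

```python
def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError  # the argument below shows no correct, total version can exist

def diag(program):
    # Do the opposite of whatever the oracle predicts about self-application.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# Consider diag(diag): it halts iff halts(diag, diag) is False,
# i.e. iff diag(diag) does not halt -- a contradiction.
# So no program can correctly decide halting for all programs.
```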

1 Answer


David Chalmers takes the reasoning you describe and flips it on its head: levels of complexity can never account for the purely ontological nature of consciousness (his famous "hard problem of consciousness"), and therefore, if physicalism is true, consciousness must be fundamental, i.e. it has to be a basic property of matter like electrical charge or mass. Otherwise some form of dualism is true. I suppose this is what you are alluding to in point (1).

For possibility (2), one can start from James and Russell's neutral monism. From what I understand, Russell arrived at this position from the idea that relations are more fundamental than substance.

The reason I think Russell's monism is relevant to your question is that the only way it would make sense to speak of a "real" ontology of properties is if one takes the stance that the fundamental constituents of nature are relations, not substances - see also structural realism. Then it becomes possible for different classes of relations (according to different levels of complexity) to have distinct "real" ontologies. To put it another way: if relations are what the world is really made of, then differences between classes of relations are ontologically real differences, not mere abstract differences, as you put it.

Alexander S King