
Hard AI is one of the perennial problems of philosophy, but it immediately becomes mired in notions like consciousness, qualia, and so on, and whether a machine can have them. This usually skips right over whether animals have those qualities and whether animals exhibit some variety of intelligence.

We talk a lot about mental content and its human dimensions, but if we are going to bother to ask about machines, there is no point unless we can come up with some functional limit on which biological processes constitute thinking.

From a rather direct Nietzschean point of view, power is the cutting point between what matters and what does not. From that point of view, a logical definition of thinking is very Freudian: 'The use of data to advance power.' (We do need to account for reserved power, so "advancing power" has to include developing the ability to advance it even if that power is never expressed, or making attempts at advances that fail. And merely surviving also needs to be seen as a kind of power.)

The problem is that this pushes thought way down the scale to bacteria, which pursue chemical traces, and even to genes. But it does seem to rule out mere chemistry and other active processes that are not goal-directed by nature. The chemical encoding in a gene clearly carries data, and the phylogenetic fund of genetic variation seems to collect data in a way that seeks domination of an environmental niche and increases the power of various genes to further shape the environment. The variations in a crystal or the specific shape of an ocean bed may be data, but they are never gathered up into anything that looks like information or decision making.

I am all for simply accepting that breadth, and coming up with some meaningful way to address a continuum of intelligence that extends that far down.

  1. Are there reasoned positions (involving non-circular, non-arbitrary definitions of thinking) that have been offered against so wide a notion of intelligence? (Before I get a dozen citations of older philosophers: we live in a world with lying, cheating chimpanzees, and dolphins and dogs that are capable of reasoning by the process of elimination, so language and logic won't cut it anymore.)

  2. How much traction do we lose on other important philosophical considerations if we adopt that wide a definition?

  3. Has anyone yet bothered to think through AI or other questions by looking that far down? (I find traces of that in Dennett, but he is mostly in the debunking business when he goes there.)

  • Are you talking about intelligence, which obviously is present in animals as well (behavioural biologists have known this for 100 years now), or sentience? Is it about adapting to environmental change other than through already learned reactions, or about intentionally changing environmental factors to achieve goals? Please be cautious about applying purpose-language too far down. – Philip Klöcking Dec 02 '16 at 19:18
  • "Thinking" is a first person "mentalistic" predicate, that is how it receives its meaning. I do not believe that conflating it with third order descriptions, whatever they are, will do much good. Certainly third person models of intelligence and correlates of "thinking" within them should be developed, but as we did with temperature vs "warmth" they are best kept apart and a clear sense of distinction between correlation and identification maintained. Much of the nonsense in the debates about philosophical zombies and the mind-body problem in general comes from overlooking the difference. – Conifold Dec 02 '16 at 19:19
  • @Conifold But the central related questions about hard AI, e.g. moral culpability for automated behavior, are third-person questions whose determinants are in first-person terms. So this has to be unified at some level. At some point, we are asking the same questions, so why pursue a divided vocabulary? –  Dec 02 '16 at 21:47
  • @PhilipKlöcking I am talking about whether there is a branch point where we can separate the two, or whether they are points on a single continuum. We can make all the artificial and somewhat arbitrary divisions we want between levels of a single continuum. But what is the unifying feature of the whole continuum? If we choose too broadly, we have presented ourselves with an impossible territory to map, but if we just cut it off for convenience we are simply lying to ourselves. –  Dec 02 '16 at 21:50
  • Why do you say "pushes thought way down the scale"? I think most physicalists would argue that thought/mind/consciousness is emergent from brain, so there's a "branch point" because you need some minimal complexity before sentience is exhibited at all. Ditto for intelligence, but that's maybe emergent with less complexity. Manmade machines can already exhibit some degree of "intelligent behavior", but not "intelligence" which carries a connotation of consciousness. –  Dec 03 '16 at 07:21
  • @JohnForkosh Again, 'consciousness', 'sentience', 'intelligence' are all terms that endlessly evade clarity, and don't seem to actually help. Is Koko 'sentient'? Was she before we gave her a language? What would be evidence? Are we asking those questions for some actual reason, or because we made decisions based on a word and now we can't give it up? It all becomes subjective to the point of pointlessness without some focus. If one proposes that focus is on *data* and *increased control*, you end up in a kind of degenerate state, but at least you know where you are. –  Dec 03 '16 at 16:03
  • I wasn't commenting about what they "are", just about your remark "pushes thought way down the scale". Whatever thought, etc. "is", the physicalism argument (as I understand it) would be that it's emergent from brain behavior. So let's just define thought, etc. as that which emerges from brain operation. Then "way down the scale" is wrong, because there's a minimal device complexity required such that what emerges from its operation can be characterized as "thought". And Koko would or wouldn't have "thought" depending on your minimal complexity requirement. (I'd say she easily passes the test.) –  Dec 04 '16 at 10:09
  • @JohnForkosh That just displaces the question onto "What is a brain?" Do insect brains count? What about basic bacterial chemical following mechanisms? And why then is the genetic selection process not just a distributed brain? It is surely complex enough to qualify, as it contains a lot of brains inside itself. So why bother with the indirection? It just feels better and does not accomplish anything. You still don't know whether a brain indeed has a minimum complexity unless you arbitrarily choose one. Indicating no branch point, and a single continuum. –  Dec 04 '16 at 23:34
  • Well, yes, but quoting your question somewhat more fully, "pushes thought way down the scale **to bacteria**", and I think the emergence view makes its clear that bacteria are way too far down the scale. But I agree you're right that exactly how far down isn't well-defined. Maybe the problem is you're thinking continuum, so no lower bound. But by mathematical analogy, a spectrum can have **both** a continuous part **and** a discrete part. We (humans and similar creatures) would be on the continuous part. But once you get down to the discrete part, then biff-boom-bang you can hit zero quickly. –  Dec 06 '16 at 00:06
  • @JohnForkosh First, I give the reason it gets to bacteria. Second, I also note a lower bound. –  Dec 06 '16 at 00:13
  • Oh, right. Okay, sorry. –  Dec 06 '16 at 00:17

7 Answers

2

A brain state in the visual cortex is said to contain information in virtue of its being downstream in the causal chain from photoreceptors in the eye. In the same way, any state of matter (such as a crystal) could be said to be downstream in some causal chain and thus to contain information. However, the idea of information presupposes some means of interpreting its significance.

In a paper published by Francis Crick and Christof Koch, this point is made clear:

"An important problem neglected by neuroscientists is the problem of meaning. Neuroscientists are apt to assume that if they can see that a neuron's firing is roughly correlated with some aspect of the visual scene, such as an oriented line, then that firing must be part of the neural correlate of the seen line. They assume that because they, as outside observers, are conscious of the correlation, the firing must be part of the NCC. This by no means follows, as we have argued for neurons in V1. But this is not the major problem, which is: How do other parts of the brain know that the firing of a neuron (or of a set of similar neurons) produces the conscious percept of, say, a face? How does the brain know what the firing of those neurons represents? Put in other words, how is meaning generated by the brain?" (Crick and Koch, "Consciousness and Neuroscience." italics added)

In answer to your questions:

  1. The citation above offers such a reasoned position: thinking requires some correlation between data and meaning. That, in turn, requires some means of perception.
  2. If we accept a wider definition, there is no reason we shouldn't say that any information-bearing form of matter, i.e. anything downstream in a causal chain (such as a crystal or a gene), "thinks". However, that's counterintuitive and stretches the word beyond any normal usage.
  3. Yes, it seems that Crick and Koch might be numbered among those who have bothered to think about it.
  • Is survival meaning or is it not? Is exerting control over the environment a meaningful activity? "Meaning" seems to be a convenient word for whatever you choose to assign it to. Talking about the parts of the brain, especially assuming thinking is whatever a brain does, without really defining a brain, is not the same question as talking about what constitutes thinking. –  Dec 04 '16 at 23:39
  • @jobermark. Survival is a concept, and like all concepts, it is made possible by generalizations about what can be predicated of a category. The basis of all conceptualization is ultimately rooted in the sensory information that we receive about the world and our dispositions toward it. Therefore, meaning has its basis in whatever makes the perception of sensation possible, i.e. how the neural code (NCC) is decoded by the mind. That is what Crick and Koch were talking about. –  Dec 05 '16 at 00:16
  • @jobermark. This question of meaning defines thinking by being an essential prerequisite. –  Dec 05 '16 at 00:19
  • This is still completely arbitrary and circular. Meaning is whatever thoughts are made out of, and senses are whatever provide meaning. Does the presence of sugar molecules mean something to the bacterium? There is rudimentary sensation involved. Likewise, if the phenotype changes because of some aspect of the environment, has that aspect not been sensed by the process of producing bodies and seeing which ones survive? –  Dec 09 '16 at 04:32
  • @jobermark. On the contrary, meaning cannot qualify as such unless it is *terminal* (as opposed to circular); eventually a referent must terminate the chain of references without itself referring to something else, as a sign, symbol or representation does. Simple effects do not qualify, because their information content is minimal with respect to their causes and they also remain unperceived. One thing that is special about perception is that it is *terminal* in its capacity to provide meaning. And there is no reason to believe bacteria experience any sensation at all. –  Dec 09 '16 at 08:38
  • Yeah. If you discard the whole deconstruction of Logical Positivism, you get back the old theory. But I don't intend to. We know that human sensation depends on a state formed from earlier perceptions and current expectations. We know that you do not in fact see the environment and pick out of it what you care about, but that your eyes see only very small parts of the environment and those parts are sought out according to expectations. Intention is more basic than meaning. –  Dec 09 '16 at 17:28
  • @jobermark. Sensation depends on perception? Does that mean you have to perceive pain before you feel it? –  Dec 09 '16 at 18:14
  • What part of "Earlier perceptions and current expectations" is ambiguous? How do you get from my statement to yours? Sensation and the body do not interface so cleanly that information flows in only one direction. The endorphin release that prevents pain from being experienced immediately after you break your arm, for instance, is real, and really prevents the pain receptors from triggering. This is a current expectation and a previous perception stopping a real sensation. Your theory requires a clear boundary that does not exist. –  Dec 09 '16 at 19:55
  • @jobermark. I didn't think it was ambiguous at all. Earlier perceptions are perceptions just as much as later perceptions are. Right? –  Dec 09 '16 at 20:04
  • @jobermark. The neural code representationally refers to meaning, and the *clear boundary* is the way the mind provides a meaningful referent for the code. Although neuroscience can't explain how exactly that is possible, there is nothing conceptually obscure about such a boundary. –  Dec 09 '16 at 20:32
  • (Ok, but there is no path from that idea to the idea you 'deduce' from it.) Just to give a very direct answer, and be clear. You do not need to perceive an event of pain before you sense that same event. But as in the example, you may not sense the pain at all if you perceive something that evokes shock before you encounter it. Perception affects state which affects sensation. So in what way is sensation, as part of this feedback loop, 'terminal'? We reach out into the environment in order to sense. So the OP presumes that *intention or need* precedes sensation, and involves information. –  Dec 09 '16 at 20:32
  • @jobermark. It's terminal in providing a referent for a reference, ending the infinite regress suggested by any theory that ignores the need for such a referent. –  Dec 09 '16 at 20:36
  • It is terminal by axiom. I have a referent -- the intention or need. I don't need a more basic referent that is ultimately less basic. We are obviously talking at cross-purposes here. I understand your axioms, I just think they miss the point of seeking a boundary instead of imposing one. –  Dec 09 '16 at 20:38
2

I am puzzled by why you mix what I view as three distinct concepts into one, namely consciousness, thinking and intelligence.

I believe that it only creates confusion. What is your justification?

I remember a scientist who said that, as he watched a seed of Senecio vernalis floating on the breeze one day, he had the epiphany that it was a form of intelligence. He was not saying that it was a form of thinking, nor a form of consciousness.

No one would protest if you insist that bacteria can be said to demonstrate intelligence, yet few people would claim that bacteria are thinking, or that DNA is thinking.

And more specifically, on the application of thinking to machines, Turing said in his famous paper Computing Machinery and Intelligence:

The original question, "Can machines think?" I believe to be too meaningless to deserve discussion.

And he meant it to be meaningless not in the sense of being trivial or a truism, but in the sense meant by Chomsky who compared such a question to asking whether submarines can swim:

Thinking is a human feature. Will AI someday really think? That's like asking if submarines swim. If you call it swimming then robots will think, yes.

That said, Turing continues the above quote with the following:

Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

The way we use language may change, but not because of new strict definitions, rather because of new conventions.

Finally, consciousness and thinking are distinct concepts. Consciousness is something you can have or be while thinking is something you can do. A philosopher may be thinking of being conscious. A meditator may be conscious of thinking.

Many people believe that computers today can be said to exhibit a form of intelligence. A subset of these people believe that computer intelligence will continue to develop to the point that we may one day conventionally speak of computers as thinking, and a subset of those people believe that, nevertheless, computers may never be conscious in the sense that people are, or, as Dennett* put it, "conscious in the fullest sense".

*Dennett is not one of these people, though.

nir
  • I'm not sure where this gets us as the same could be said the other way around, what justification is there for having the three distinct concepts? I think the point that's being made is that most uses of the terms beg the question. Chomsky is essentially saying thinking is that thing which only humans do, and then expecting those that wish to compare AI/animals etc to humans to come up with another word. This just renders the word "thinking" synonymous with "human" and so redundant as a useful word. I'm not sure I see the point in that. –  Dec 05 '16 at 11:38
  • This is what Wittgenstein had to say about the matter "When philosophers use a word — 'knowledge', 'being', 'object', 'I', 'proposition/sentence', 'name' — and try to grasp the essence of the thing, one must always ask oneself: is the word ever actually used in this way in the language in which it is at home? — What we do is to bring words back from their metaphysical to their everyday use." that is to say - you are entitled to invent your own private meaning for words and use them anyway you like, but don't complain when you generate confusion and no one else wishes to play along. – nir Dec 05 '16 at 12:01
  • There is a significant difference between what Wittgenstein was talking about in making sure we don't define words outside of their ordinary use, and the situation we have here, where ordinary use potentially has implications for a more accurate and insightful way of understanding the world. When Einstein changed the way we thought about Gravity, we did not require him to come up with a new word so that our intuitive version of Newtonian Gravity could continue to be called Gravity. –  Dec 05 '16 at 12:24
  • What's being argued here is that the line that's being used to define our word "thinking" doesn't exist. To my mind that would necessitate that we alter our use of the word. –  Dec 05 '16 at 12:25
  • @nir, your example is just as much a counterexample. Many people would object to the notion that the seed has intelligence in any sense other than the military one, and that 'intelligence' properly used outside that domain means 'capacity for thinking'. Others would say it is not actually even intelligence in the sense of coded communication (the military one), because although it conveys information, that information is not going to be interpreted by a thinking thing, but into a context where it plays out a program. So which is creating confusion? Ordinary use has to be **ordinary** use. –  Dec 06 '16 at 14:27
1

For (1), consider a Wittgenstein family-resemblance type of analysis. It is pretty clear that intentional, deliberate decision making is an activity that sits pretty close to the core of "intelligence"; culturally transmitting acquired knowledge is another example. Automatic reflexes, e.g. jerking away from a pain, though still information processing in the general sense, are not, by most people most of the time, really considered to be at the core of intelligence. Note that it is when animals exhibit behaviors indicative of these core facets that we most easily ascribe intelligence to them; when they do information processing that sits less obviously at the core (honey/bumble bee navigation comes to mind as a potential example), people are less willing to ascribe these behaviors to actual intelligence.

Of course, with this type of analysis there is no rigorous absolute boundary (the game of love? politics as a game? etc.), but an honest assessment of which aspects of information processing in humans (the only physical things to which intelligence has been universally ascribed) are and are not the prototypical exemplars of the class will put the boundary well up among multi-cellular organisms, i.e. "intelligence" is only a (proper) subset of the use of data to advance power.
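
One way to make this family-resemblance picture concrete is to treat "intelligence" as a graded, prototype-style score rather than a yes/no predicate. The sketch below is only a toy illustration: the facets, weights, and example feature sets are invented for the purpose, not an actual measure of anything.

    # Toy family-resemblance scoring: no single facet is necessary or sufficient;
    # membership in "intelligence" is graded by weighted overlap with facets near
    # the core of the human prototype. All facets and weights are invented.
    CORE_FACETS = {
        "deliberate_decision_making": 3.0,  # near the core, weighted heavily
        "cultural_transmission": 2.0,
        "flexible_problem_solving": 2.0,
        "navigation": 0.5,                  # information processing, but peripheral
        "automatic_reflex": 0.1,
    }

    def resemblance_score(facets):
        """Graded membership: sum the weights of the core facets a system exhibits."""
        return sum(w for facet, w in CORE_FACETS.items() if facet in facets)

    examples = {
        "human": {"deliberate_decision_making", "cultural_transmission",
                  "flexible_problem_solving", "navigation", "automatic_reflex"},
        "chimpanzee": {"deliberate_decision_making", "cultural_transmission", "navigation"},
        "honey_bee": {"navigation", "automatic_reflex"},
        "bacterium": {"automatic_reflex"},
    }

    for name, facets in examples.items():
        print(f"{name}: {resemblance_score(facets):.1f}")
    # The scores form a gradient with no sharp cut-off; where to draw the line is
    # a choice, which is exactly what the family-resemblance analysis predicts.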

Getting to question (2): beyond a merely descriptive and taxonomic function, the distinction between intelligence and other form(s?) of information processing shapes how we judge and address the issues that arise from them. Someone knocking over your drink by an automatic reflex is a different situation (warranting different responses) than someone deliberately knocking over your drink.

Maybe there are contexts where the most effective way to communicate your concepts to an audience is to lump all information processing together as "intelligence", to emphasize that it is all a continuum, but there also seem to be contexts where painting in such broad strokes obscures rather than reveals.

Dave
1

More of an extended comment than a proper response:

The original definition of thinking as information processing and symbol manipulation à la Turing was fine until functionalism came along and people started to see the metaphysical and ethical implications it carried. Then they started moving the goal posts, just because they were uncomfortable with the fact that humans had lost their privileged status as the only thinking beings. "so language and logic won't cut it anymore" is more an expression of people's fears and insecurities than of a better understanding of human mental processes.

I would take your definition and add a constraint: the data processing should at least partially involve symbolic representations which mirror the world in a way similar to Wittgenstein's picture theory of meaning. This way a bacterium following chemical traces of nutrients isn't thinking, but a cheetah running after a gazelle is, given that there is some homomorphism mapping the gazelle onto its neural patterns or memory states. This also allows us to say that advanced computers do some thinking but watches and light switches don't.


(Please note that my own thoughts on your question are evolving)

You mention in the comment: "Watches and light switches don't use data to empower themselves, so I am not sure I get the point, there."

The point is, a lot of energy-consuming and energy-transforming processes can be recast as the use of data to project power: the light switch processes the binary input of on/off and then projects power, in the form of visible electromagnetic radiation, onto an unsuspecting dark area. Similarly, the watch processes input data - the number of cycles of whatever periodic process drives it - and projects it as a signal with more elaborate information content.

What differentiates the above processes from thinking is intentionality. To qualify as thinking, the data processing has to be directed at that over which empowerment is sought, and for that to occur some representation of the target object or concept has to be involved, hence my reference to Wittgenstein's picture theory.

Remember Meinong's Jungle? He assumed that the targets of our intention must have some level of reality, even if they were fictional. Well, he was right, except that instead of the metaphysical menagerie he conjured up, their existence was real because they existed as models/representations in the minds of those who conceived of them.
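
To put that contrast in concrete (if cartoonish) terms - mere signal transformation on one side, data processing mediated by an internal representation of a target on the other - here is a minimal sketch. Everything in it (the class names, the crude dictionary "model") is a hypothetical illustration of the distinction, not a claim about how cognition or an AI is actually implemented.

    # A process that only transforms a signal: no representation of any target.
    class LightSwitch:
        def process(self, is_on):
            return "light" if is_on else "dark"

    # A process whose behavior is mediated by an internal model of its target,
    # i.e. a (very crude) homomorphism from the world onto an inner state.
    class ModelBasedAgent:
        def __init__(self):
            self.model = {}  # inner stand-in for the gazelle

        def perceive(self, target_position):
            self.model["target_position"] = target_position  # update the representation

        def act(self, own_position):
            # Action is directed at the *represented* target, not at the raw input.
            if "target_position" not in self.model:
                return "search"
            return "chase right" if self.model["target_position"] > own_position else "chase left"

    switch = LightSwitch()
    print(switch.process(True))             # "light": power projected, nothing represented

    cheetah = ModelBasedAgent()
    cheetah.perceive(target_position=12.0)
    print(cheetah.act(own_position=5.0))    # "chase right": empowerment directed via the model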

Alexander S King
  • Yeah, but when you are unconsciously drawn to someone, or you are simply experiencing pure time, do you have a model in mind? To me that still has to be thinking. If you have the full ability to model, and yet do not always use it when thinking, how wise is it to take it as a prerequisite? Watches and light switches don't use data to empower themselves, so I am not sure I get the point, there. –  Dec 09 '16 at 04:53
  • "when you are unconsciously drawn to someone" one might argue that you do have a model in your mind it is just running in the background - isn't that what subconscious states are all about? -- for the rest of your comment see edit to the reply. – Alexander S King Dec 09 '16 at 07:39
  • Hmmm. Don't like the idea of a target (sheesh, men! ;) ) Power is not always (or even very often) over something, or immediate. Survival is empowering in that it prevents your losing all power sooner rather than later. But I do see how objects are always involved when a subject acts (even if the self is the object), and you can go for the whole Klein bundle where objects are necessarily internal models... I have been tempted to do so (much of my previous profession presumes this) but what's the (preferably falsifiable) proof? –  Dec 09 '16 at 17:01
  • @jobermark I was a little queasy about the word target myself, I was just aping Dennett in his book "Kinds of Minds" which makes the claim that intentionality started out as the ability of creatures to engage in "target seeking behavior". – Alexander S King Dec 09 '16 at 21:50
  • Yeah, I am generally a big fan of Dennett, but as I read him I constantly feel 'surely you know seven counterexamples to the oversimplification you just made'... –  Dec 11 '16 at 03:19
  • As to the switches and watches, channeling power is not creating power. Genes, bacteria, and humans take actions that cause them to *have (or maintain) more effect on the environment over time*, light switches don't. –  Dec 11 '16 at 03:26
0

I can't usefully answer 1) because I agree with your definition of intelligence and I too have found nothing in the literature that really provides anything more than circular arguments. In support, however, you might find the work of Anthony Trewavas of interest (if you haven't already come across him). He works out of Edinburgh University, and has presented quite a forthright argument that plants should be seen as intelligent organisms for exactly the same reasons as you are suggesting we should expand our current definitions.

There has been a history of "moving the goal posts" with the definition of intelligence; it is a notion with which people have a peculiar relationship. We want it to be the preserve of humans only and to allow others into the club only on the basis of overwhelming evidence, but on the other hand, we don't want to allow it to be responsible for anything negative. People frequently ascribe their less worthy actions to "base instincts" and their more noble ones to "intelligence" without any neurological basis for distinguishing the two. In fact, the emotional responses of six-month-old babies (tantrums) have been shown, using modern fMRI nets, to be processed first through the cerebral cortex.

As for 2), I don't see any traction lost that would outweigh the advantage of removing the shackles of a poor definition, but I'm not optimistic about this ever happening. In a sense I think a good many epistemological questions may even evaporate when we take a wider view of what thinking means; certainly the more anthropocentric forms of dualism would have less traction.

With regard to 3), have you considered the work of James Lovelock as falling into the category of people who have considered ascribing a thinking kind of motive to organisms without brains? He was looking up the scale rather than down it, but he definitely considered that there was no good reason not to think of the global ecosystem (and each sub-system within it) as being alive and capable of intent by emergence. Taking the popular physicalist view of consciousness as an emergent property, this would require us to accept that ecosystems "think".

  • If bacteria and phenotypes qualify, surely plants do... And yeah, Lovelock is an influence here. Another is Tavistock group work, where one backs off and looks at the degree to which a group reacts as a single mind (which is open to much more direct manipulation than an individual.) But the nature of mob action is seldom part of even political philosophy. And neither Lovelock nor Bion are doing philosophy. They just describe what they are seeing as facts, and they are not very careful about it. –  Dec 09 '16 at 04:43
0

These are original ideas, I hope, since I write about them.

  • understanding is the capability of modeling a system in an abstract structure.

  • thinking is the capability of applying causal mechanisms (cause-consequence) to the models of understanding.

But your issue is not thinking; thinking is just making cause-and-effect calculations. It is more closely related to understanding, and even that will not be enough. Your problem is the concept of intelligence. Following my last bad book (ydor.org), intelligence can be generalised beyond a mechanism of thinking, or a mechanism of getting profit. A person who steals is not intelligent, because stealing gives profits only in the short term. Intelligence is related to permanence in time. Therefore,

  • intelligence is the capability of using thinking mechanisms to increase the probability of persistence in time (in the book, it is also related to the increase of physical order).

So, can it be artificial? If so, that would be fantastic, and such a system could really be understood as intelligent. But there is more: individual existence can conflict with society, and that is where morality enters the game:

  • morality is a set of rules that improves positive interactions (where the word positive is related, again, to physical order and permanence in time).

Morality and ethics (individual rules) are always considered a set of social conventions invented by grandparents. But maybe not: morality is the set of rules we follow to survive in groups. Morality is so important that we try to formalise it as law (formal rules that allow coexistence), but because law is intended to be objective, it can only be applied to punish in proven cases. Morality is the most important social regulation based on thinking.

AI should also include moral rules in order for machines to persist socially.

RodolfoAP
-1

In order to come up with the "most useful" boundary on the definition of "thinking," a committee (as large as possible) should get together and 1) put forth their individual definitions of how "broad" the definition of "thinking" should be, and 2) obtain the best definition that includes as many of the individual definitions as are in "close" agreement with each other. In other words, the most useful boundary definition is the one that is used the most, or by the largest group of people!

Guill
  • Politics is not logic. –  Dec 11 '16 at 03:15
  • Downvote: a clear case of Appeal to Majority. If you apply it over the whole of America, you will spend millions to get a definition like "thinking is using intelligence [most people would use that word] to follow God's [religious people need to be present] / Goddesses' [feminists will achieve a milestone here] will". – RodolfoAP Jan 02 '17 at 09:22