
A lighthearted topic after I came across this funny quote:

Intelligence is knowing that a tomato is a fruit, wisdom is knowing not to put tomatoes in a fruit salad.

This had me wondering: what are the philosophical definitions of wisdom and intelligence?

More specifically:

  1. Given that 'philosophy' translates as 'love of wisdom', what is the definition of wisdom for the purposes of defining philosophy?
  2. Have philosophers of mind examined the question of intelligence? (as opposed to the question of consciousness, which seems to be the current big topic)
  3. In the above quote "Knowing that a tomato is a fruit" is simply a question of quality of information storage and retrieval. If information storage, retrieval and processing are the main measures of intelligence, doesn't that imply a functional/computational theory of the mind?
  4. How would a Cartesian dualist deal with the question of intelligence? Is it based on the physical substrate of the brain or is it based on the non-physical mind? Is the loss of intelligence due to intoxication or injury an argument against Cartesian dualism?
  5. Has anyone suggested that proper measures of intelligence and wisdom would be the keys to solving the mind-body problem and the question of strong AI? (As opposed to Turing's proposal, implied in his Turing test, that behavior is the answer.)
Alexander S King
  • I can only contribute an observation: whenever we see behavior that we earlier thought required consciousness and intelligence, such as playing chess above master level, or walking on two legs in difficult terrain, we then reclassify it as neither intelligence nor consciousness, just smarts. So I think that what we *mean* by these terms may have slightly different connotations, but is at core the same. I.e., that you can't have true intelligence without consciousness, and vice versa. This way things also get simpler. No more really big questions. :) – Cheers and hth. - Alf Jun 08 '15 at 17:42
  • Oh, one more thing, I just read the last point. There are no really proper measures of intelligence yet. To wit, some years ago researchers at Uppsala created a program that scored sky-high on IQ tests, and it had not a shred of intelligence. ;-) Ditto for the Turing test. Depending on who you ask, it was passed last year (England, I believe), or the year before last (Singapore), or even earlier, all the way back to Eliza (1965). The Turing test is just too vague and short, and it depends on intelligent examiners, of which there is apparently a lack. – Cheers and hth. - Alf Jun 08 '15 at 17:45
  • out of curiosity, are you a reductionist? that is, in your mind there is nothing in your inner experience that cannot be reproduced by a correctly organized set of cog-wheels? or are you a dualist? – nir Jun 08 '15 at 19:44
  • @nir I am undecided. I started out as a materialist/physicalist reductionist, but I have since evolved. At this point the most likely position to me seems to be property dualism (as opposed to substance/Cartesian dualism). But I don't have a definitive position yet. Check these two questions if you want to know more about my opinions: http://philosophy.stackexchange.com/questions/22263/does-claiming-that-strong-ai-is-impossible-imply-a-belief-in-substance-dualism http://philosophy.stackexchange.com/questions/21983/can-someone-be-an-atheist-and-subscribe-to-substance-dualism-at-the-same-time – Alexander S King Jun 08 '15 at 19:52
  • according to the links, you subscribed to dualism because of your belief in free will, is that correct? does that mean that if, say, free will turned out to be an illusion, then in your mind there would be no other barrier for reductionism? for example, I am undecided on free-will; it would not shock me if it turned out to be an illusion, but I don't see how that would affect in anyway that thing in my experience which makes me a dualist; would you say that resonates? or would you disagree? – nir Jun 08 '15 at 20:12
  • @nir as I mentioned I am undecided. The things that I am stuck on are the issue of free will and the question of multiple realizability. – Alexander S King Jun 08 '15 at 20:31
  • Can you explain what you mean by multiple realizability, and in what way you are stuck on it? – nir Jun 11 '15 at 18:37
  • Multiple Realizability is the idea that the exact same mental state, for example pain, or the belief that 2+3=5, can exist in brains that are different from ours. If that is the case, then the mental state "believing that 2+3=5 is true" cannot be identical to a human neural configuration, since an AI or an alien life form can have the same belief while having a totally different brain structure. Check this question: http://philosophy.stackexchange.com/questions/23037/why-is-multiple-realizability-considered-to-have-refuted-the-type-identity-theor – Alexander S King Jun 11 '15 at 19:03
  • It seems MR is usually mentioned in terms of pain across different species; pain is more clearly an experience than a belief, which is arguably a concept or a cognitive function (what is the qualia of belief?); when it comes to pain I don't see why the pain an animal is having, or that of another human, should be exactly **identical** to mine, so I really don't get what MR is all about and why it generated so much literature... in what way are you stuck on it? why does it force you into dualism? – nir Jun 12 '15 at 20:01
  • @nir Agreed about the pain: In fact I have come up with a better analogy, the functional state of being blond can be reduced to a specific physiological state and yet be multiply realizable, for example in lions and barbie dolls. The same could be said about pain. What stumps me is more abstract beliefs. The mental states "knowing that 2+2=4" and "Believing that Sam is the person who ate the pizza that was on the table" seem to have some reality beyond the specific neural configuration they correspond to, and as such should be multiply realizable. Continued in next comment. – Alexander S King Jun 13 '15 at 03:33
  • Thus "knowing that 2+2=4" is not the same thing as "The neural configuration of a person who knows that 2+2=4", so per Leibniz's and more recently Kripke's results regarding identity, dualism holds. Why is "knowing that 2+2=4" different than pain or blondness ? Pain and blondness can be similar between different organisms, but still not identical. "knowing that 2+2=4" is identical, whether in a human, dolphin, robot or alien mind. I realize as I write this that I am veering towards a sort of platonic theory of forms here. I will have to think further about this. Still undecided :-(. – Alexander S King Jun 13 '15 at 03:40
  • This reminds me of the former Gov of Texas Rick Perry's comment about Austin, TX. He said, Austin is the blueberry in the tomato soup. Kinda funny... – Ronnie Royston Jun 19 '15 at 01:48
  • It's not a full answer to your question, but I personally find it meaningful to define intelligence as the ability to arrive at the correct answer/action given unlimited time and energy budgets, while wisdom is the ability to arrive at the best answer/action at the moment an answer is needed using what you have. – Cort Ammon Apr 27 '17 at 23:06
  • Wisdom is like experience, whereas intelligence is like becoming aware of it. In your tomato example, knowing the tomato is a fruit indicates awareness about the tomato. But the salad part gives the behavior of the tomato in given circumstances. Wisdom is like that: it can give detailed awareness of a thing's behavior in a given situation. That makes wisdom the boss compared to intelligence, and makes a man happy. – syamal Feb 25 '16 at 07:35
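  • Cort Ammon's distinction above maps neatly onto the idea of an anytime algorithm. A minimal sketch (the names and the Newton's-method example are illustrative, not from any commenter): "intelligence" iterates until the answer converges, with no time budget, while "wisdom" returns the best answer available when the deadline arrives.

```python
import time

def newton_sqrt_steps(x, guess=1.0):
    """Generator yielding successively better approximations of sqrt(x)."""
    while True:
        guess = 0.5 * (guess + x / guess)
        yield guess

def converge(x, tol=1e-12):
    """'Intelligence': unlimited budget -- iterate until improvement stops."""
    prev = None
    for g in newton_sqrt_steps(x):
        if prev is not None and abs(g - prev) < tol:
            return g
        prev = g

def best_by_deadline(x, seconds):
    """'Wisdom': return the best answer in hand when time runs out."""
    deadline = time.monotonic() + seconds
    best = None
    for g in newton_sqrt_steps(x):
        best = g
        if time.monotonic() >= deadline:
            return best
```

Both functions chase the same answer; they differ only in whether the stopping rule is quality (convergence) or resources (the deadline), which is exactly the contrast the comment draws.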

2 Answers


A few possible (very partial) answers:

On 1.: 'wisdom', if it can be anything in philosophy, is probably something like the ability to intuit the most applicable philosophical framework for understanding a given situation in its context, ideally within a societal and long-term moral framework rather than in a narrow, short-term, self-interested way.

This might be knowing how to use a tomato in cuisine (see 3 below); developing a workable analysis of the ethical use of animals for food (e.g. using Peter Singer's journey-of-life framework, rather than some humans-are-superior-beings idea); or knowing when post-modernism in the social sciences needs to be torpedoed à la Sokal's 'hoax'.

On 3.: knowing that 'tomato' is a 'fruit' is a simple ontological inference (following IS-A relations from the former to the latter), taking these terms as entities in any reasonable biological ontology. The information storage / retrieval (memory) is a side issue; it's the ontology you need. The implication of this is most likely a theory of mind based on scientific realism, i.e. one that allows for epistemic activities as well as true (approximate) knowledge of reality, roughly described by ontologies, aka textbook knowledge of mind-independent phenomena, as best we know them - some of which these days are formalised and computable.
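The IS-A inference described here can be sketched as a reachability check over a toy ontology. This is a minimal illustration (the terms and parent links are invented for the example, not taken from any real biological ontology):

```python
# A toy ontology: each term maps to its direct parents via IS-A links.
ISA = {
    "cherry tomato": {"tomato"},
    "tomato": {"fruit", "vegetable (culinary)"},
    "fruit": {"plant part"},
}

def is_a(term, category, ontology=ISA):
    """Return True if `category` is reachable from `term` via IS-A links."""
    seen = set()
    frontier = [term]
    while frontier:
        t = frontier.pop()
        if t == category:
            return True
        if t in seen:
            continue
        seen.add(t)
        frontier.extend(ontology.get(t, ()))
    return False

# is_a("tomato", "fruit") is True: one IS-A hop, no memory of the
# particular tomato required -- the structure does the work.
```

Note that nothing about the lookup depends on how the facts are stored or retrieved; the inference is carried entirely by the transitive structure of the IS-A links, which is the answer's point.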

A post-modernist alternative approach might refuse to accept the innateness of the tomato's being a fruit (in the botanic sense), and say that its 'fruitness' is no more real than its 'utility in Italian cuisine-ness'.

I think on intelligence v wisdom, 'intelligence' can reasonably be understood in some objective computational sense (e.g. the ability to attain better-quality outputs / solutions for a goal and a given set of inputs). Many problems-to-be-solved involve an 'I' term, and/or judgements about external stimuli (is this good ice cream?), and/or a need to generate an output in the real world (raise my hand in an auction). As soon as we say this, we are talking about an embodied mind, i.e. one whose ability to relate to the world requires awareness of a containing body, identity, and boundary, and abilities to receive input signals and generate output. Creating a truly intelligent computer means (for many, including myself) creating an embodied mind with both complex computation to handle the interface to the real world (the robotics & sensors part) as well as high-level abstract modules to do what we typically think of as 'intelligence', which is usually some kind of statistical inferencing.

If we could do all of this well enough that the resulting cyborg could a) successfully physically interact with the world, b) know what 'I' means, c) perform abstract computing using various representational forms, including language and d) could learn to create new knowledge about itself and the world over time, then I think we might have a machine that could theoretically be capable of 'wisdom'.

wolandscat

This is a response to question 3:

The first sentence is intelligence as theoretical rationality, keyed here to science as its symbol or image.

The second sentence is intelligence as practical rationality, keyed there to the practical common sense of knowing how to use things; this is, in a sense, phenomenology.

Wisdom is a problematic term to define; but given that the label 'wisdom literature' is applied to a certain genre of literature situated in the Levant and elsewhere (an example being Proverbs in the Bible), it is closer to the second than the first.

Mozibur Ullah