7

Do philosophers have working definitions of 'intelligence'? The Stanford Encyclopedia of Philosophy provides a lot of references, but all of them are related to artificial intelligence and other fields like animal studies, not to the human and philosophical spheres.

Futilitarian
luidam
    Nobody has a definition of "intelligence" in general. Usually, several types of "intelligence" are acknowledged. It's one of those words that people like to use but that lacks clarity. Better to not use it in favor of more specific definitions. – Frank Apr 16 '23 at 14:06
  • 2
    @Frank Psychologists have a very clear notion of intelligence called the [g-factor](https://en.wikipedia.org/wiki/G_factor_(psychometrics)) which forms the basis of a [scientific definition of intelligence](https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93Carroll_theory) which you should love since it's all about measurement and could survive even a moderate behaviorist interpretation. ; ) – J D Apr 16 '23 at 15:10
  • @JD Should we say that a grand master chess player who excels only at chess, but can't multiply 2 numbers is _intelligent_ or _not intelligent_? What about a prodigy human calculator who extracts roots of numbers in their head? If they are "intelligent", then I guess the rest of us are "not intelligent". What if somebody has excellent memory, but poor logical reasoning skills? What if someone is excellent at formal logic, but has no memory? – Frank Apr 16 '23 at 15:16
  • Another problem with "intelligence" is that it's not a fixed quantity throughout life. It's easy to get into a state where one is not "intelligent", and it is also possible through hard work to become more "intelligent" in some domain. It's not a given that one cannot change throughout their life, depending on the task. – Frank Apr 16 '23 at 15:17
  • @Frank Excellent questions and speculative statements all! You're born to be a philosopher, methinks. – J D Apr 16 '23 at 15:18
  • @JD It seems more fitting to start with questions, if one wants to be a philosopher, and to realize that most of the time, we don't know. There are so many authoritative sounding assertions on this Stack Exchange, one sometimes wonders if it's all really "philosophical". – Frank Apr 16 '23 at 15:37
  • https://en.wikipedia.org/wiki/Hard_problem_of_consciousness – Steve Apr 16 '23 at 23:43
  • This comment section confuses me, because it seems to be filled with questions that people think are difficult or present a challenge to the concept of intelligence, but are in fact trivial. Learned skills (chess or mental arithmetic) do not demonstrate "intelligence", although they do strongly correlate because the process of learning a skill is expedited by intelligence. Reasoning skill is relevant to the definition; memory less so. "Excellent at formal logic" seems ill defined to me. But really, we're just talking about the capacity for learning. This is a pretty simple concept. – Karl Knechtel Apr 17 '23 at 00:20
  • "Another problem with "intelligence" is that it's not a fixed quantity throughout life." It's true that intelligence is not a fixed quantity, but the intelligence of an individual in any one given moment correlates strongly with the intelligence of the same individual in some other moment, ceteris paribus. This is not in any way a "problem" with the concept, any more than the ability to dye fabric makes the concept of colour problematic. – Karl Knechtel Apr 17 '23 at 00:22
  • 1
    "There are so many authoritative sounding assertions on this Stack Exchange, one sometimes wonders if it's all really "philosophical"." It isn't, and it shouldn't be. Philosophy is not adequately equipped to tackle the concept of intelligence, which is defined in psychology - i.e., the hard science of the brain. – Karl Knechtel Apr 17 '23 at 00:53
  • 2
    AI and animals provide useful points of comparison and useful examples. AI relates to recreating or simulating intelligence and consciousness (for which you need to know what those things are), whereas animals seem to have a spectrum of intelligence and consciousness. Disregarding anything that references either would disregard most material on the subject. You'd likely be disregarding all philosophising on where intelligence might "begin", as well as most philosophising about what intelligence is. – NotThatGuy Apr 17 '23 at 08:29
  • 2
    *Intelligence* is what supposedly distinguishes humans from animals and machines - at least, most animals and most machines. Thus excluding animal studies and AI might be the proverbial throwing the baby out with the bathwater. – Roger Vadim Apr 17 '23 at 15:20
  • @KarlKnechtel I don't think that psychology is considered to be a hard science. The distinct discipline of Cognitive Psychology is the hard science of the brain. – nwr Apr 18 '23 at 02:50
  • I have a feeling the title of this question can be improved since "AI answers" may be ambiguous due to the existence of [banning "AI-generated answers" policy](https://meta.stackexchange.com/q/384922/241919). – Andrew T. Apr 18 '23 at 03:21
  • Are you trying to understand statements such as "humans have more intelligence than other animals", or "some humans have more intelligence than other humans"? These uses are likely to be philosophically inequivalent. For starters, they're scientifically inequivalent. – J.G. Apr 18 '23 at 07:41

7 Answers

12

Broadly speaking, the analysis of intelligence tends not to be conducted by philosophers, who are more devoted to writing about mind than about intelligence. For instance, none of the Stanford Encyclopedia of Philosophy, the Internet Encyclopedia of Philosophy, or the Encyclopedia of Philosophy has an entry for intelligence (besides AI). Unlike a topic such as Kant or Descartes, on which you can find an endless stream of books, philosophical analysis of intelligence is comparatively scant. After some digging, here are some works with a philosophical bent by authors with philosophical credentials:

The people most interested in writing about intelligence seem to be psychologists, among whom there are two prominent theories that might be seen as representative of two approaches to characterizing intelligence: one devoted to rigorous psychometrics, the other more pluralistic and devoted to cataloging domains of human expertise. The former is the Cattell–Horn–Carroll theory; the latter is the theory of multiple intelligences. The former is based on the study of the g factor, which might be understood as a variable characterizing general intelligence and which has since been broken into a number of subdomains. The latter is the latest incarnation of seeing human general intelligence as built up from subdomains. The former is much more scientific in terms of reproducibility and grounding; the latter is far more popular among educators, and its author, Howard Gardner, concedes that its scientific rigor is less pronounced.

Philosophically, the tendency is to characterize intelligence as dispositions (SEP) of the mind, and modern approaches to the mind by AI researchers and professional philosophers openly accept the modularity of the mind (SEP). I would say that those who study intelligence tend to be reductive materialists who don't stake out claims about mind-body dualism (SEP), which is perhaps one of the central preoccupations of those involved in philosophy of mind, and instead accept the dual basis of mind and body for the purposes of exploring and measuring intelligence.

As such, I would say that the philosophy of intelligence is in its infancy: the current research interests of philosophers of mind and of scientists such as psychologists aren't strongly aligned, much as AGI is often seen as a fringe position within AI, and their topics of discussion fail to overlap.

J D
  • "cataloging domains of human expertise" and "now been broken into a number of subdomains" - you found it yourself in your research. – Frank Apr 16 '23 at 15:19
  • 2
    Let no one say that "intelligence" (in general) is well defined :-) – Frank Apr 16 '23 at 15:38
  • 1
    "The folks most interested in writing about intelligence seems to be psychologists... The former is much more scientific in terms of reproducibility and grounding" Agreed. I wish philosophers would quit acting as if there were something problematic with the concept of intelligence simply because their discipline is ill equipped to deal with it. (It's that much worse [when they bring politics into the mix.](https://philosophy.stackexchange.com/questions/6764/do-iq-tests-measure-intelligence/85941)) It is very irritating to have a perfectly scientific concept objected to for political reasons. – Karl Knechtel Apr 17 '23 at 00:24
  • 2
    @StevanV.Saban Maybe Mjolnir. – J D Apr 17 '23 at 01:43
  • 2
    @KarlKnechtel "I wish philosophers would quit acting as if there were something problematic with the concept of intelligence". Clearly a doomed aspiration. The primary job of the philosopher is to claim something unproblematic is problematic and then appeal to generations of other arguments that claim something else that is unproblematic is problematic. ; ) – J D Apr 17 '23 at 07:45
5

You are puzzled that intelligence is mostly discussed in connection with artificial intelligence. But this is telling: before the advent of AI, or at least of clockworks, intelligence and consciousness, or "mind" in general, were inextricably intertwined. Intelligence was clearly present when self-awareness was present, and could not be conceived without it.

Only advances in mechanization, in particular in "mechanizing" information, made a mental distinction between the two possible. The intricate clockworks developed since the 17th century inspired the idea of "something" (a mechanism, an automaton, a puppet) that had the outward appearance of being conscious, intelligent or generally human, but possessed none of the intrinsic properties. Julien Offray de La Mettrie's essay L'homme machine put forth a mechanistic world view, and mechanical dolls appeared in literature, like Olimpia in E.T.A. Hoffmann's Der Sandmann.

The culmination of this distinction is the "Chinese Room" thought experiment proposed by John Searle. It imagines a mechanism or, more broadly, a "system" which is able to converse in Chinese (which implies some problem-solving capacity), while internally it is just a vast but essentially trivial lookup mechanism.

This demonstrates that the two concepts — consciousness/mind versus intelligence — are categorically different. Consciousness is an intrinsic property, an internal state, while intelligence could be called a "performative" property: The ability to solve problems. It is hard to detect, let alone quantify consciousness; it is comparatively easy to measure at least one definition of intelligence, namely the ability to solve intelligence tests.
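Searle's lookup mechanism can be sketched in a few lines. The rule book below is a hypothetical miniature (a real room would need an astronomically larger table, and the entries here are invented for illustration); the point is that the system produces conversational behavior while containing no understanding anywhere:

```python
# A toy "Chinese Room": input symbols are mapped to output symbols by a
# rule book, with no understanding of Chinese anywhere in the system.
# The entries are a hypothetical miniature; Searle's point is that
# scaling the table up adds capability, not comprehension.

RULE_BOOK = {
    "你好": "你好！",          # a greeting is answered with a greeting
    "你会说中文吗？": "会。",   # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(symbols: str) -> str:
    """Return whatever response the rule book dictates, or a stock deflection."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你会说中文吗？"))  # looks like conversation; is pure lookup
```

From the outside the room "converses"; the question the thought experiment raises is whether any amount of growth in such a table could ever amount to understanding.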

Since the concept of intelligence as distinct from mind or consciousness was only made possible by automation, it is also connected to the advances of AI, which is the continuation of mechanization by other means. As Ray Kurzweil observed, skills that AIs master are silently dropped from the criteria list of "true" intelligence: in an instant, with a degree of amnesia that borders on bigotry, people claim they had never really been on it. (The comment thread to this answer in chat is a good self-referential example.) Here is a list of capabilities which well into the 20th century would have been considered unmistakable proof of the highest intelligence:

  • Being an excellent chess player: "It's just simple rules and brute force; you should try a game that cannot be penetrated that way, like Go!"
  • Being an excellent Go player.
  • Reading.
  • Speaking dozens of languages fluently, and translating between them.
  • Driving a car in unknown places.
  • Predicting how proteins fold.

The first thing that has thrown this strategy of implicit retreat off the rails is the advent of ChatGPT. The reason is probably not so much the strength of its "core engine": many of the achievements above are just as remarkable. The main reason is that ChatGPT can sit down and successfully take the Turing test, because it has a great language interface. You don't need to program it; it can readily, as-is, participate in a lot of human activities. It can pass exams of all sorts, write love letters and come up with cooking recipes.

It is chilling to realize that much of our own behavior is not "truly" intelligent but instead relies on internalized patterns. Most of the time, average lawyers, programmers, cooks, drivers, parents, lovers etc. essentially apply patterns to standard situations, much like ChatGPT. In general, ChatGPT shows us that much of what we do in life is rule-based; sparks of inspiration are few and far between.

I think that we will have to abandon this strategy of implicit retreat and stand our ground: Above a certain complexity even rule and pattern based behavior is "intelligent". But then we'll have to accept that we are creating "truly" intelligent machines.

  • **Comments have been [moved to chat](https://chat.stackexchange.com/rooms/145403/discussion-on-answer-by-peter-reinstate-monica-what-philosophers-understand-fo); please do not continue the discussion here.** Before posting a comment below this one, please review the [purposes of comments](/help/privileges/comment). Comments that do not request clarification or suggest improvements usually belong as an [answer](/help/how-to-answer), on [meta], or in [chat]. Comments continuing discussion may be removed. – Philip Klöcking Apr 17 '23 at 17:29
3

Intelligence is studied by the field of Cognitive Science, if you want a label for it: a more or less loose interdisciplinary conglomerate of several sciences "plus" philosophy. Check through their sources; maybe you will find something to your liking.

As the other answer and you yourself found out, intelligence in itself does not seem to lend itself well to purely philosophical discourse; at least I do not remember hearing or reading much, if anything, about it in the classic or even modern philosophers. It almost looks as if previous generations thought of intelligence as just a relatively normal feature of humans. Philosophy does not explain other biological human features either (e.g., our capability to move or to eat).

AI is a bit more of a philosophically interesting topic. Obviously, most if not all of what we have as A"I" today is anything but; what we are experiencing today is that algorithms are ever more able to seem intelligent. One could say that ChatGPT and other tools pass the Turing test: they can fool many people into believing that their output was created by a human, and thus created by an intelligent actor. As you are familiar with the AI world yourself, you know how far that is from the truth. But this is a new phenomenon for philosophy, and it makes it interesting to discuss, for example, how to detect whether some agent (or seeming agent) has "real" intelligence, or whether there actually are aspects of intelligence which are absolutely congruent between humans and other systems (not only AI, but also, for example, large masses of people, cities, nations, and so on; "unfortunately" it is again more in the realm of other sciences, not philosophy, to think about this; as an example, see "Life 3.0" by Max Tegmark).

On the other hand, philosophers and scientists subscribing to computationalism arrive from the other side of the fence and try to explain ever more aspects of the brain and mind using the same concepts we know from theoretical computer science (in general, from computational systems, not necessarily physical computers).

AnoE
  • How can it be far from the truth that an actor who passes the Turing test is an intelligent actor!? – Peter - Reinstate Monica Apr 17 '23 at 10:09
  • I don't think it's all that reasonable to say ChatGPT being intelligent is "far from the truth", that it is "anything but" intelligent, and it merely "seems intelligent". ChatGPT displaying a reasonable degree of what we'd recognise as intelligence, and having an architecture based on human brains, seems to make it very non-trivial to figure out whether they *are* intelligent, and what it means to be intelligent. – NotThatGuy Apr 17 '23 at 10:57
  • 1
    @Peter-ReinstateMonica imagine a Turing-like test involving computations with large numbers. Only intelligent actors can pass it, yes? Until the invention of the calculator? Invent this test several years before the calculator and then later give the test to a human and a calculator. Calculators proven to be intelligent actors. – user253751 Apr 17 '23 at 14:26
  • an interesting thing about "congruence" - many somewhat different AI models trained in different ways on similar data make similar mistakes - even mistakes that are completely obvious to humans - look up "adversarial examples". It is possible that from more advanced AI models that are similar to humans, we can generate adversarial examples for these models and they will also apply to humans. (Don't look up "chihuahua or muffin") – user253751 Apr 17 '23 at 14:27
  • @user253751 Re calculator example: In my answer above I examine the fact that the proponents of the "true intelligence" idea (which machines, they contend, cannot and do not possess), shift the goal posts with the advances of AI, in order to dismiss everything machines do as unintelligent. That is obviously flawed, so yes: Solving increasingly complex math problems is increasingly intelligent behavior. We'd be hard pressed to call Charles Babbage's difference engine intelligent in any meaningful sense of the word -- but what about Wolfram Alpha? – Peter - Reinstate Monica Apr 17 '23 at 14:35
  • I think everyone would agree that we are making more and more intelligent machines; the controversy is how much intelligence counts as intelligent (as a binary yes/no state). – user253751 Apr 17 '23 at 14:36
  • @Peter-ReinstateMonica: It's not that the goalposts shift; rather, it's that societies relearn the same lesson over and over. I couldn't pass a Turing test in e.g. ancient societies; I just don't speak their languages! The lesson is that the Turing test is examining whether an agent is a member of a certain culture, rather than whether the agent is intelligent, conscious, self-aware, etc. A corollary is that the typical member of society need not be intelligent, conscious, self-aware, etc. – Corbin Apr 17 '23 at 15:17
  • @user253751 What on Earth makes you think that it is not a continuum, like almost every other trait? There are actually good parallels: Somebody astonishes you by doing something really mean; that incident makes you reevaluate their past behavior and, looking back, you detect instances of mean behavior that you overlooked before, and you come to the conclusion: Yes, they have exposed mean behavior earlier, it is clearly one of their traits. I suspect we'll look back at AIs that way. – Peter - Reinstate Monica Apr 17 '23 at 15:23
  • @Peter-ReinstateMonica when we ask "is Peter intelligent?" the answer is "yes" or "no". It can't be "mu" or "half" – user253751 Apr 17 '23 at 15:25
  • @user253751 All to the contrary, it's certainly "sometimes", or "so-so", or "not sure". And what about my 18 months old?? – Peter - Reinstate Monica Apr 17 '23 at 15:26
  • @Corbin Hm-hm. I think the Turing test relies on language just because you need a sufficiently wide channel of communication which is still not betraying the physical makeup of the correspondent. (I think I could smooch with somebody and tell whether they are brain dead or not without words (hence asserting intelligence), but that way I could trivially tell a machine from a human, so the channel has too much "side-channel" leaking.) – Peter - Reinstate Monica Apr 17 '23 at 15:31
  • @Corbin To pass a Turing test, you need to be able to read and write symbols from the given language, yes, but the test is more about *understanding* those symbols (by some definition of "understand"), in order to give coherent responses. This understanding is specific to the language, but also transcends the language: if you ask someone e.g. "where do you live", "dónde vive" or "wo wohnst du", you'd expect them to respond with some type of location. Different symbols are linked to the same underlying concepts, and the actual "understanding" process is very similar. – NotThatGuy Apr 17 '23 at 15:33
  • 2
    "How can it be far from the truth that an actor who passes the Turing test is an intelligent actor!?" Among other possibilities, Turing could have been wrong; he may have failed to imagine an unintelligent manner of generating Turing-test-passing prose. The thing that ChatGPT has really driven home for me personally, is just how much flowery language used in business emails etc. doesn't actually reflect any *thought* or *insight*. – Karl Knechtel Apr 17 '23 at 17:35
  • "ChatGPT displaying a reasonable degree of what we'd recognise as intelligence, and having an architecture based on human brains" I disagree that ChatGPT displays any such thing, and to describe its architecture like that demonstrates an extremely surface-level understanding of the technology. – Karl Knechtel Apr 17 '23 at 17:37
  • @Peter-ReinstateMonica ""true intelligence" idea (which machines, they contend, cannot and do not possess), shift the goal posts with the advances of AI, in order to dismiss everything machines do as unintelligent." No, we don't. You fundamentally misunderstand (intentionally, I conjecture) how the evidentiary standard works. People do not pre-commit to "okay but a machine that could do X *would* be intelligent" and then walk that back when a machine does X; that's a strawman. – Karl Knechtel Apr 17 '23 at 17:39
  • What actually happens is that some people (not necessarily advocates of the same position) conjecture that "true intelligence" (as you want to call it) would be necessary to solve a particular problem efficiently; then they are proven wrong as the problem is solved efficiently (although, not necessarily efficiently enough to be viable on hardware that existed when the claim was made!) without actual intelligence (but just a more sophisticated algorithm). – Karl Knechtel Apr 17 '23 at 17:41
  • "Solving increasingly complex math problems is increasingly intelligent behavior." The issue is that you conflate "solve" with "compute the answer to". Gauss demonstrated intelligence by determining a formula for arithmetic sequences (without being prompted to do so) rather than actually performing a summation by hand. If he were simply able to perform the summation faster than his classmates, that would not translate into being more intelligent than them. The hallmarks of intelligence here are the analysis and insight, not the computation. – Karl Knechtel Apr 17 '23 at 17:46
  • @user253751 "I think everyone would agree that we are making more and more intelligent machines; the controversy is how much intelligence counts as intelligent (as a binary yes/no state)." no; I would disagree that we make intelligent machines at all, and I remain skeptical that such a thing is possible in principle. This is not a new position; see e.g. Roger Penrose's "The Emperor's New Mind". – Karl Knechtel Apr 17 '23 at 17:48
  • @KarlKnechtel You disagree that ChatGPT displays some reasonable degree of what we'd recognise as intelligence? If I ask someone to write some original code for me, and then they write it, and it generally works, and when there's an error, I give them the error message, and they fix the error, this seems like a pretty good display of intelligence (and ChatGPT does exactly that). If you have a better idea of what intelligence is, I'm sure countless philosophers and AI researchers would love to hear it. And I have a relatively deep understanding of neural networks, but thanks for asking. – NotThatGuy Apr 18 '23 at 12:39
  • @KarlKnechtel "some people ... conjecture that "true intelligence" ... would be necessary to solve a particular problem efficiently; then they are proven wrong as the problem is solved efficiently ... without actual intelligence" - and there you've just summarised the moving of the goal posts that happens. People say being able to solve this would demonstrate intelligence, we solve it, then they say that isn't intelligence, and solving that thing would demonstrate intelligence, while most definitions of "actual intelligence" either arguably include modern-day AI, or are too vague to be useful. – NotThatGuy Apr 18 '23 at 12:47
2

According to Aristotle, every physical entity has two aspects, matter and form, which relate to potency and act. He described the 'passive intellect' as the faculty that receives the forms, and the 'active intellect' as the faculty that acts on the received forms to make inferences. I suggest this 'active intellect' (in Greek, nous) is the first definition of intelligence. He also described humans as having three natures, vegetative, sensitive, and intellective, with the first two shared by animals, and only the first shared by plants.

Aristotle's picture was very influential, and the Scholastic philosophers developed from it the model of humans having Five Wits, or cognitive faculties. It's worth mentioning, I think, the parallel in Buddhist thought, where all schools hold the idea of 'sense gates' with associated cognitive realms, then a sixth mental faculty, 'ideation', and a seventh, 'reactivity'. In Mahayana (e.g. Zen, Tibetan) thought they add an Eighth Consciousness, the 'storehouse consciousness', and I'd make a strong case that this is analogous to the Noosphere, or memesphere: the domain of information with substrate-independence.

If you go to the etymology of our modern English word 'intelligence', it comes from inter "between" + legere "choose, pick out, read", via the Latin intelligentia, which had come to mean "understanding, knowledge, power of discerning; art, skill, taste". The sense of information received especially from spies dates from the late 1500s.

The intelligence quotient (IQ) is an estimate by tests of a person's g factor, a hypothesised variable that summarizes the positive correlations among different cognitive tasks, reflecting the fact that an individual's performance on one type of cognitive task tends to be comparable to that person's performance on other kinds of cognitive tasks. It first came into use in education, to identify expected cognitive development for a given age.
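The "positive correlations among different cognitive tasks" can be illustrated with a minimal sketch. The task scores below are made up for illustration; one classical way of estimating a general factor is to take the first principal component of the correlation matrix between tasks:

```python
import numpy as np

# Hypothetical scores of 5 people on 3 cognitive tasks (rows = people,
# columns = tasks). All numbers are invented; real psychometric work
# uses large test batteries and samples.
scores = np.array([
    [12, 14, 11],
    [18, 17, 16],
    [ 9, 10,  8],
    [15, 16, 14],
    [20, 19, 18],
], dtype=float)

# The "positive manifold": performance on different tasks correlates.
corr = np.corrcoef(scores, rowvar=False)

# One classical estimate of a general factor: the first principal
# component (eigenvector of the largest eigenvalue) of that matrix.
eigvals, eigvecs = np.linalg.eigh(corr)   # eigh sorts eigenvalues ascending
g_loading = eigvecs[:, -1]                # each task's loading on "g"
explained = eigvals[-1] / eigvals.sum()   # variance share of one factor

print(corr.round(2))
print(round(explained, 2))
```

With strongly correlated columns like these, a single factor accounts for most of the variance, which is the statistical observation behind the g hypothesis; whether that factor reflects one underlying ability is exactly what the theories above dispute.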

Research on normal cognitive development shows us milestones, like the development of Theory of Mind in children, as evidenced by their developing around age three the capacity to lie about their intentions. That may be impaired for people on the autistic spectrum, while cognitive capacity with mathematics or computer coding is often higher, which is interesting. We can look at Theory of Mind in animals, like squirrels and jays, which are known to use deception when stashing acorns if they know they are being watched. The Mirror Test is thought to indicate the capacity for self-recognition, which may be crucial to intelligence (e.g. in the Strange Loops model), but the failure of pigs to pass it, despite their being among the small group of species shown to spontaneously use tools in the wild, indicates it may have limitations.

Our capacity to 'see into' the minds of others is foundational to our capacity to mimic and learn visually, to moral reasoning (see Is the Categorical Imperative Simply Bad Math? :)), and to conceptual thought and language (see The Private Language Argument, and According to the major theories of concepts, where do meanings come from?). The enormous usefulness of this mode of cognition biases us towards it, and towards making models of the world which over-rely on it. See Is the idea of a causal chain physical (or even scientific)?

So, intelligence has come to mean different things in different contexts: our active discriminating intellect; our capacity to absorb information, plus assessment, from spies; a cognitive faculty with correlating benefits across different mental tasks; and something relating to how humans develop, especially as they are educated, and which contrasts with animals. And then computing turns up, and trying to develop artificial intelligences becomes a new context, pushing how we think about the topic. I strongly recommend this lecture by physicist Richard Feynman, which talks in very clear terms about what computers are capable of: Hardware, Software and Heuristics.

There are many ways of refining or defining what intelligence is, but it's key to recognise that which one we pick will depend on context. You picked the context of philosophy, so let's look at some examples there that bear on the topic.

"I seem, then, in just this little thing to be wiser than this man at any rate, that what I do not know I do not think I know either." -Socrates, which I'd relate to his view that higher intelligence requires entering into inquiry and discourse, not relying on verities

"ipsa scientia potestas est (knowledge itself is power)" -Francis Bacon, relates knowing capacity to scope of action (see his use of 'power' elsewhere)

“Intelligence is a fixed goal with variable means of achieving it.” - William James, in 'The Principles of Psychology' (a paraphrase I think, though it's widely quoted and referenced in discussions of the topic)

"Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents." -Nick Bostrom's 'instrumental convergence thesis' which is core to how he defines Superintelligence in his book of that name, ie the transferable increase in being able to attain goals requiring intellectual work

CriglCragl
1

Let me tackle that issue from another angle, because reading a bit between the lines, I think you're asking about sentience rather than intelligence.

From "What is IQ and does it matter?", an article on the One Central Health website:

IQ stands for intelligence quotient and, in short, it is a measure of a person’s reasoning ability.

In other words, an IQ test is supposed to gauge how well someone can use information and logic to answer questions or make predictions.

From an article in Philosophy Now magazine 'Information, Knowledge & Intelligence':

"Unlike belief and knowledge, intelligence is not information: it is a process, or an innate capacity to use information in order to respond to ever-changing requirements. It is a capacity to acquire, adapt, modify, extend and use information in order to solve problems. Therefore, intelligence is the ability to cope with unpredictable circumstances." -Alistair MacFarlane

Thus we can reasonably state that the intelligence of an "AI" and the intelligence of a living being can be considered very similar, so much so that your question is more or less pointless, going by the general definition above.

However - it's "just" similar, because the devil, as usual, is in the details:

"The differences, both qualitative and quantitative, between human and machine agency can be summarised in terms of three gaps corresponding to different levels of agency: a skills gap, a knowledge gap, and a personhood gap. We cannot hope to match machines in terms of the range and accuracy of their perceptions, the speed and power of their calculations, or the delicacy and precision of their manipulations.

(...)

Nor can we hope to match machines in handling intractable masses of data, or in applying processing power to complex formal systems such as mathematics. Computers are better at storing and retrieving knowledge, and at manipulating formal, symbol-based systems like mathematics. There will be an ever-increasing knowledge gap between human and machine.

(...)

However, there are immensely complex information-processing systems that have evolved in the human brain that cannot be replicated in any machine by any process of formal design or experiment, certainly not for decades to come, perhaps not for centuries. The complexity of our brains is vast.

(...)

So there will remain a personhood gap between human and machine that will continue to make human levels of intelligence, emotional insight and ability to handle uncertainty, unavailable to machines. Within any currently conceivable future horizon of prediction, human and machine agency will remain complementary. We will have to learn how to live with them, but they cannot replace us."

-also from Philosophy Now article 'Information, Knowledge & Intelligence', by Alistair MacFarlane

So, to answer your question: yes, philosophers do have "a definition" of intelligence. It's nice and simple. However, nice and simple answers are rarely good ones, so philosophers still struggle mightily to produce a good definition of intelligence. That said, the emergence of "AI" has given the question a new boost and extra focus all around, so hopefully, as the second linked article suggests, something truly usable will emerge sooner rather than later.

I know, Sir Alistair George James MacFarlane CBE FRS FRSE is not someone I would ordinarily call a philosopher, but then again, aren't all scientists first and foremost philosophers (even if many of them have forgotten it)?

AcePL
  • 111
  • 2
  • Alistair MacFarlane was professor of Information Engineering at Cambridge, & this article he wrote is about the philosophical side of his specialist subject, in a magazine of note for philosophers. "intelligence of an 'AI' and an intelligence of a living being are essentially the same" He does not in that quote, & you have not, made that case. That both involve processing of information by no means implies they do it the same way. Consider the Chinese Room argument, & its implications, & the syntax/semantics gap. – CriglCragl Apr 19 '23 at 12:24
  • 1
    @CriglCragl - And I believe I wasn't making that equation, either. I'll see if I can rewrite the answer to make it better. What I was pointing at is that the most basic definition of intelligence is that it is a process. Of course in a living being and in an artificial instance those are different processes - and MacFarlane shows that - but not hugely different ones. My main point is that the OP may be doing himself a disservice by excluding AI-related replies, because thanks to the emergence of AI we can better understand both artificial and natural (that is: electron-based and carbon-based) intelligence. – AcePL Apr 21 '23 at 07:52
0

Most things have a history, which means they have at least two states: before and after.

Intelligence is the art of finding differences between before and after (the dialectic διχοτομία), a "power of discerning", a taste for the times, an ability to see both the temporary and the eternal.

AI - artificial intelligence - of course comes only after the art of intelligence; it is the product of someone's intelligence: a realised copy, an image, a likeness, but not intelligence itself.

Intelligence exists in a pair with logic, but while logic concerns analysis and the brain, intelligence belongs more to the structure of the psyche - or rather, you could call that structure of distinctions a psyche: the principles of the complex by which experience is gained. Intelligence is for gaining experience; logic is for analysis, classification and the like. Logic is based on knowledge; intelligence is based on unknowing.

The main difference between AI and intelligence is that AI cannot know that it does not know something. This means AI cannot truly know something either, and all its answers are either copies of someone's intelligence (very rarely) or, more often, simple compilations of garbage "opinions". AI also needs human intelligence (souls) to separate the wheat from the chaff. So it is simply one more external tool for human needs; a smart one, but no more.

And lastly, about the tendency to say that "bakers don't know what intelligence is": there are two reasons for this. First, they skip over the question of how they acquired knowledge, because they really don't know how they came by certain fundamental knowledge, such as 2+2=4; intelligence is precisely what they don't know, something that comes before knowing. Second, they try to forget something in order to re-know it, which is why they like to say they don't know something; it is a trick to cheat their unconscious. AI cannot "forget" anything.

0

It is simple. Intelligence is the ability to solve problems. "Wisdom" is the ability to solve them well, whereas being "perceptive" is the ability to discern the solution within a problem. Perceptiveness is a type of "perceptual intelligence", but should be treated separately (as a kind of pre-made intelligence inherited from prior ages).

That's it -- there's no need to compare to animals or computers.

Marxos
  • 735
  • 3
  • 12
  • So the question is: 'How do *philosophers* understand intelligence?' Which philosophers have you given an account of...? – CriglCragl Apr 21 '23 at 01:39