9

I was reading an article by J. Mark Bishop, "The danger of artificial stupidity", on Scientia Salon, where he cites his own research, John Searle and Hilary Putnam, among others, as proof of the impossibility of strong AI. I've always felt that strong AI deniers were closeted substance dualists: people who believe in souls, but are unwilling to come clean about their religious/metaphysical beliefs for fear of being ridiculed. So instead they come up with all sorts of pragmatic arguments against strong AI, like qualia or computers' lack of insight, which don't really hold.

My reasoning for why denying the possibility of strong AI implies substance dualism is the following:

  1. Any finite-sized physical phenomenon can be reproduced given sufficient technological means and a sufficient understanding of the underlying physical processes.

  2. Denying the possibility of strong AI means that no matter how advanced our technology becomes and how comprehensive our knowledge of neuroscience and psychology grows, we will never be able to reproduce the functionality of the human mind.

  3. Per 1), the only reason we would not be able to reproduce the mind's functionality is if there is something non-physical about how the mind works.

  4. Saying there is something non-physical about how the mind works is the same as substance dualism.

My question is the following: Is this indeed the case, that denying the possibility of strong AI implies substance dualism?

Alexander S King
  • what's *"AI"*? Artificial Intelligence? if so, is "AI" meant in some different sense than what Computer Scientists mean (like "Expert Systems" or "Machine Cognition" or similar)? or "AI" in some deeper metaphysical sense? and what is *"strong AI"*? (or *"weak AI"*?) – robert bristow-johnson Mar 12 '15 at 01:43
  • I mean strong Artificial Intelligence. – Alexander S King Mar 12 '15 at 02:31
  • By strong AI, I mean whatever combination of expert systems, machine learning algorithms, fuzzy logic, genetic algorithms, support vector machines... you name it, necessary to simulate all of the functions of a normal educated adult mind. – Alexander S King Mar 12 '15 at 04:00
  • okay, so "AI" means "Artificial Intelligence". dunno if it's the "AI" like computer geeks think about it or if it's more of a Ray Kurzweil thing about one's sense of consciousness existing among silicon-based technology rather than carbon-based. and then i still dunno what is meant by *"**strong** Artificial Intelligence"*. what's with the *"strong"*? – robert bristow-johnson Mar 12 '15 at 04:01
  • You said it yourself: Weak AI is basically a set of human-behavior-inspired programming methods (like computer geeks think about it). Strong AI seeks to have a computer be 'as intelligent as' a normal educated adult human, in the same way that an adult human is more intelligent than a dog or an amoeba. Although not unanimous, most people agree that such a level of intelligence implies consciousness, or at least self-awareness, which is more of a Ray Kurzweil thing. – Alexander S King Mar 12 '15 at 04:09
  • okay, my question is this: is this "*normal educated adult mind*" a functional thing (like the computer responds with answers or responses to stimuli in the manner we would expect from a normal educated adult mind), or is it about the computer taking on qualia or consciousness itself (where we might have to think about the ethics of pulling the plug on this computer)? – robert bristow-johnson Mar 12 '15 at 04:10
  • *"You said it yourself: Weak AI is basically a set of human behavior inspired programming methods (like computer geeks think about it)."* --- i didn't call that "Weak AI" or anything. --- *"Although not unanimous, most people agree that such a level of intelligence implies consciousness, or at least self awareness, which more of a Ray Kurzweil thing."* --- yes, that's far from unanimous. – robert bristow-johnson Mar 12 '15 at 04:12
  • so, if the silicon-based technology develops to a sophistication comparable to nature's carbon-based technology, are you asking if that means that the silicon-based technology has qualia or an emerged consciousness? – robert bristow-johnson Mar 12 '15 at 04:17
  • @AlexanderSKing In claims 2 and 4 you seem to be using the wrong word. The word you should be using there is "mind" -- not "brain." To use the word brain is to be confused as to what exactly is at stake in the argument. More generally, you seem to conflate belief in souls with substance dualism. In doing so, you're skipping over hylomorphism and a wealth of similar views (possibly because you can't tell the difference???). – virmaior Mar 12 '15 at 04:27
  • @virmaior I changed the question according to your suggestion. I am conflating souls and substance dualism, but not out of ignorance. That is the exact gist of my question: the way I see it, a property dualist account of the soul is perfectly compatible with strong AI, since a property dualist (hylomorphic or other) soul is to the brain what software is to a silicon-based computer. Am I missing any other possibilities? (oh and thanks for the condescension - classy) – Alexander S King Mar 12 '15 at 04:51
  • What you're talking about is the perfection of human cloning, not of development of strong A.I. Sure, we can create something identical to a brain (i.e. create a brain), but what about that is artificial? – Scott Mar 12 '15 at 05:03
  • I don't know what you mean by calling a hylomorphist a property dualist. You're going to have to connect some dots for me. The software/hardware analogy doesn't seem to capture either the traditional Cartesian dualist's view or the hylomorphist's view. – virmaior Mar 12 '15 at 05:03
  • i'm still wondering if what you're inquiring about is whether silicon-based hardware supporting Intelligence has the properties of qualia or consciousness? and if that might lead to a consideration of the natural rights of that AI, as to whether or not it would be ethical to literally pull the plug on the hardware conducting that AI? – robert bristow-johnson Mar 12 '15 at 05:09
  • I'm not sure about "substance dualism", which appears to be a pretty vague notion introduced by Descartes (who was one of those I-proved-my-god-mathematically people, he had a bunch of proofs). But a denial of the possibility of machine intelligences based on digital computers is a belief that minds are not possible with just known physics (which *can* be simulated to any desired accuracy). I.e. they necessarily believe in something supernatural, or, like Penrose, that the brains of human mathematicians (!) support gravitic quantum function collapsing or something like that. ;-) – Cheers and hth. - Alf May 24 '15 at 05:35

8 Answers

8

I can think of a few alternatives:

  • One could argue for a case where human-mind-grade AI is theoretically producible, but the universe lacks sufficient resources to do so. This would be a practicality argument, not a theoretical-possibility argument.
  • Idealism can claim strong AI is impossible without being dualistic.
  • Not all finite-sized physical phenomena can be reproduced. You have to be able to measure a phenomenon first, and there may be unmeasurable values in the universe (QM has shown that presumably unmeasurable values exist).

There is also the cheating argument, to claim that "strong AI" is not defined sufficiently to allow us to accomplish it, but I don't believe that is what you are looking for.

Cort Ammon
  • I have thought of each of those. In reverse order: 1) I don't buy the brain as a quantum computer. The mind is a macroscopic phenomenon, and quantum decoherence indicates a strong likelihood that the brain operates on a classical (non-quantum) level. Plus that would contradict the Church–Turing–Deutsch principle. – Alexander S King Mar 12 '15 at 03:46
  • 2) Idealism is logically possible, but I will follow previous examples and refute it by kicking my foot against a rock. I should have added that as a caveat in my original question, [disregarding idealism]. And wouldn't idealism produce a mirror image of materialism, where the mind and the body would still follow the same set of rules, meaning strong AI is possible? – Alexander S King Mar 12 '15 at 03:52
  • 3) The mind being so complex that accurately simulating it is intractable is the most interesting retort of all 3. I guess I wouldn't dismiss it entirely, but I have to note that it puts serious strains on the holographic principle, that is, the principle that the amount of information in a system is upper-bounded by the surface area of the volume containing it. Maybe the mind is too complicated to be simulated by a laptop or 10 laptops. But is it truly so complex that a machine with equivalent power - say, all of Google's hardware - can't simulate it? That would be far-fetched. – Alexander S King Mar 12 '15 at 03:57
  • @AlexanderSKing in your order: 1) QM is one source of unmeasurables, it is simply the most accessible. Look into simulated automata and the idea of nonquiescent entities for a less accessible but less handwavey unmeasurable; also look into Gardens of Eden, which are not exactly in the direction you are looking, but are related enough to be of interest. 2) There is no guarantee that the idealistic form of materialism will choose (free will) to free itself from whatever limitations material provided. However, I am fine to accept the caveat. I just wanted to point out that the decision is not binary – Cort Ammon Mar 12 '15 at 15:36
  • As for 3, it depends on how high a fidelity you have to model the human brain at. If it turns out it can be broken down into components (ALU, memory, cache, etc...) then it may be trivial to model. If the tiniest quirks have to be modeled for it to actually function, then this gets harder. As for the computing power, consider protein folding. Consider that proteins fold over a time on the order of a millisecond or less, and the body is constantly folding literally millions of proteins at any time. **One** protein fold on Folding@Home took 10 million hours of CPU time. – Cort Ammon Mar 12 '15 at 15:43
  • Another one to consider, after reading quen_tin, is to look at Chaos Theory and what happens if you try to model a continuous function using discrete values (like floating point numbers). Any argument for computing strong AI will have to argue that the chaotic portions of the human brain can be captured in statistically representable forms. – Cort Ammon Mar 12 '15 at 15:47
  • If arguments about feasibility are valid here, is the fact that human brains do not routinely tumble into massively chaotic states good empirical evidence that they are resilient, and not hypersensitive to initial conditions, and would this in turn be good evidence that approximate modeling would be good enough? Questions of feasibility may be moot in this issue, however, as the people mentioned in the question seem to be claiming that modeling is ruled out in principle, not merely infeasible. – sdenham Apr 06 '18 at 15:09
  • @sdenham That gets into an interesting corner case which is the idea that systems appear chaotic only if you measure that which is chaotic. For example, if we want to know if it will rain or shine in New York in a month, the variable we use to describe that is *highly* chaotic. However, if we are interested in the average rainfall in NY over the course of 10 years, that is a variable which is currently "resilient". Likewise, much of what we care about in these discussions regarding the human brain are chaotic, but as you point out, if you just look at "does the brain keep us alive," ... – Cort Ammon Apr 06 '18 at 15:17
  • ... then it does not look very chaotic at all. However, I do believe that if you start from the assumption that these approximate modelings are good enough because human minds are not sufficiently sensitive to initial conditions, then I think you also very quickly arrive at the conclusion that there is no need to treat different humans as individual entities. Indeed, some feel that corporate business treats us all as "cogs," in that that which makes us unique can be easily replaced because the aspects of humanity they care about are, indeed, resilient, and thus replaceable. – Cort Ammon Apr 06 '18 at 15:19
  • As no-one is suggesting that the weather cannot be modeled in principle, this example is consistent with what I am saying: despite a degree of underlying chaos, approximate modeling works in weather forecasting. Furthermore, the sensitivity to chaos necessary to rule out strong AI would not be a corner case, it would be pervasive at all levels, and we would routinely see people suddenly, and for no particular reason, become uncommunicative, unresponsive, and maybe even dropping dead, even though their brains remained physically undamaged and metabolically functional. – sdenham Apr 06 '18 at 16:17
  • Strictly speaking, a hypothesis about the brain is not refuted by it having implications that we dislike, but I do not think that is an issue here: the individuality of people is an empirical fact, and I am not sure that ethical treatment is predicated on that fact anyway. In addition, I see a broad continuum here, not the dichotomy that you present, which seems to me to be equivalent to "either the brain is too chaotic to be modeled (even in principle), or we can treat people without regard to their individuality". Why would the latter have anything to do with the former? – sdenham Apr 06 '18 at 16:31
  • @sdenham Is the individuality of a person an empirical fact? To answer that we have to define what we are using to measure individuality. As a straw man, if the only measure I use is whether they live or not, 100% of people die eventually, suggesting no need to treat people as individuals at all. The point of this is to beg the question of what makes an individual an individual, and whether that thing is measurable and quantifiable or not. – Cort Ammon Apr 06 '18 at 22:31
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/75636/discussion-between-sdenham-and-cort-ammon). – sdenham Apr 06 '18 at 23:11
5

To complete Cort Ammon's answer, I would say that there is a difference between reproducing physical phenomena and computing them.

Even if all physical systems can be reproduced using some physical material (say you reproduce the structure of a living cell from similar molecular components), that doesn't mean you can compute them; you only make physical copies of physical systems. Imagine that the only way to reproduce consciousness is to create a physical brain made out of similar organic components (neurons, ...) rather than to implement software on a Turing machine. Then strong AI would be impossible: computers are made out of transistors, not neurons, and you won't ever get the same result on a silicon computer, or any Turing machine, even though dualism is false.

Note that contemporary physics is not computable, or only with strong approximations (and even then you only get probabilities). Also note that cloning a physical system is physically impossible (cf. the no-cloning theorem), so even reproduction could be impossible...

Edit: here are some reasons why contemporary physics is not computable:

  • space-time is continuous, not discrete, and you'd need infinite resources to represent even a bounded system. The idea that ultimate physics would be discrete is speculative.
  • physical quantities are irrational, not rational, numbers, and again you'd need infinite resources
  • natural constants and measured quantities are too, and we only have finite measurement precision
  • a small imprecision can expand exponentially if the system is chaotic (see the sketch after this list)
  • in standard quantum mechanics, there are infinite series that we can only approximate through perturbation theory. In general, there is no analytic solution to the equations of the theory. Physicists often use classical approximations for interactions with external systems.
  • an open system might not be separable from its environment. We need idealisation even before we talk of "a system" (reduced density matrix).
  • in quantum field theories, we need further idealisations because of infinite divergences (renormalisation). (Although this could be solved by a future theory of quantum gravity, so if you consider an ideal physical theory in your argument, this one is not necessarily an issue.)
  • even when all these problems are addressed, the computation time for quantum mechanics grows exponentially. Say it takes 1 ms to compute the first 1 ms; it could take 10 ms to compute the next 1 ms, 3 hours to compute the next 6 ms, and 12 days to compute only 10 ms in total...
  • as mentioned before, at this point you only get the probabilities...
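
To illustrate the chaos point above: a minimal Python sketch (the logistic map standing in for any chaotic physical system; the perturbation size is an arbitrary choice) shows how quickly a tiny imprecision swamps the computation.

```python
# Minimal sketch: sensitive dependence on initial conditions in the
# logistic map x -> r*x*(1-x), a standard chaotic system. A perturbation
# of 1e-12 grows until the two trajectories are unrelated.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12  # two initial states differing at the 12th decimal
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")

# The gap roughly doubles per step, so after ~40 steps the 1e-12
# imprecision has grown to order 1 and the trajectories are uncorrelated.
```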

Contemporary physicists cannot calculate the structure of complex molecules from first principles (beyond a few atoms), even with huge computational capacities: they ask chemists to do this.

In sum, the claim that ultimate physics will be computable is today only a conjecture, and an unlikely one. Actual physics is not computable, for many foundational reasons.

You could argue that it's not necessary to have all the physical information: the substrate does not matter, only the "software" (the higher level) matters... But this is question-begging. Why wouldn't the substrate matter for consciousness? Only if strong AI succeeds will we know that this is the case.

Quentin Ruyant
  • "Note that contemporary physics is not computable": I would disagree with this statement, or at least say that it is a conjecture, not a fact. The Church–Turing–Deutsch principle (Deutsch, 1985) states the opposite. It is an essentially physical extension of the Church–Turing thesis, stating that any physical process can be simulated (computed) by a Turing machine. Additionally, there are examples in nature of processes that follow intractable (NP-complete) computations - namely spin glasses and protein folding - which end up in non-optimal configurations exactly for that reason. – Alexander S King Mar 12 '15 at 03:40
  • (continued) - meaning nature is bound by the same limits that Turing machines are bound by. Moreover, if nature had physical processes that were non-computable, we would simply turn that principle on its head and use those processes to solve undecidable/intractable problems. – Alexander S King Mar 12 '15 at 03:43
  • I edited my answer. I don't understand how we could turn the principle on its head... Perhaps look for resources on the relation between quantum computing and NP-hard problems - although it's tempting at first sight to think that quantum computers could solve NP-hard problems (one reason the field is being developed), there are physical limitations that prevent it practically, I think (it's a complex issue and I'm not an expert). – Quentin Ruyant Mar 12 '15 at 10:01
  • A nit: people do a lot more than "a few atoms" nowadays: http://wiki.simtk.org/openmm/BenchmarkOpenMMDHFR. – Dave Mar 12 '15 at 11:58
  • Is this a reasonable summary of your first point: "Given that we don't really understand the relation between mind and brain, it is possible that the following is true: The presence of human-like qualia *requires* that the mind exist in the context of biological neurons. A silicon brain would (necessarily) result in a mind with significantly different qualia." – Dave Mar 12 '15 at 12:02
  • @Dave thank you for the link. Do you know if the simulations use only pure QM or integrate knowledge/assumptions from chemistry? Your summary is reasonable (I personally endorse this view) but my comment addresses strong AI more specifically: putting aside the question of qualia, physicalism does not entail that strong AI is possible (even from a purely functionalist perspective on consciousness with no qualia it could be impossible). – Quentin Ruyant Mar 12 '15 at 21:06
  • @quen_tin honestly that was the first link that gave me a scale of contemporary simulations (all I was sure of was that in the mid-90's people were doing QM simulations of O(100) atoms at a time). From my perspective, I don't see a clear line of demarcation between physics and chemistry for much of this research (Hey you got physics in my chemistry!). – Dave Mar 13 '15 at 12:49
  • @quen_tin agreed. Use of qualia in my previous comment is probably bad. Should have said something more like "(some of the) essential features of human intelligence require embodiment in the context of biology (i.e. neurons et al.)." – Dave Mar 13 '15 at 12:52
  • @Dave Regarding chemistry, the point is that usually, the structure of complex molecules is known from observations in chemistry rather than pure physical computation (e.g. angles of different chemical bonds, etc). Now of course this is still knowledge of some sort. – Quentin Ruyant Mar 13 '15 at 17:33
  • You have highlighted "there is no analytic solution to the equations of the theory" (of standard quantum mechanics), but the lack of analytical solutions to even something as simple as the three-body problem in classical mechanics is not an obstacle to calculating a great many things to a sufficient degree of precision. That does not prove that the mind is computable, but basing a claim that it is not on this lack of analytical solutions is just an intuition pump, compounded by mentioning quantum mechanics (and thus invoking 'quantum woo' intuitions) when it is not specifically a QM issue. – sdenham Apr 26 '18 at 02:53
  • @sdenham right, I could have used classical physics to argue that physics is not computable. It's only a bit less relevant because gravitational forces become important only at large distances (the 3 body problem also affects electromagnetism but QM is taken to account for EM phenomena). And the problem of lack of analytic solution is much more dramatic in QM: it already occurs for two bodies, and in the classical case, we usually have convergent series as solutions, not always so in QM. But strictly speaking, yes, classical gravitation isn't computable either. – Quentin Ruyant Apr 30 '18 at 16:56
3

There are other alternatives; for example:

Spinoza's system, which encompasses mental and physical phenomena as modes of an absolute, self-subsisting, simple substance.

In Leibniz's Monadology there are many kinds of monads; a soul is a monad, as is God; they don't interact; each reflects in itself all the others, harmonised independently.

In Epicurus's system all is atoms; and the soul is made of soul atoms; as such they can affect material atoms.

In Hegel, the world is the progression of the world-Geist from non-being and being, which in a sense are the same; thus human beings are an expression of the Geist.

Kant identifies a noumenal world behind the phenomenal world; one could take this inexpressible and indescribable dimension as one where mental and physical phenomena are 'one'; though he is quiet on how the noumenal world causes the phenomenal world.

Schopenhauer identifies the noumenon as an impersonal will; and it is the force of this will that causes the phenomenal world.

I would hazard a guess that it was the admixture of Greek and Christian philosophy that identified the absolute substance as God; and although Descartes is usually counted as the originator of the dual-substance thesis, in fact he is not, being too careful and cautious a thinker; one supposes it was the outcome of later cogitations by Cartesians - but this removal of the metaphysical scaffolding left the two substances hanging in the void, unable to interact with each other by definition, as substances are causally closed.

Mozibur Ullah
  • I would argue that Leibniz and Epicurus are describing de facto substance dualism: saying there are special soul monads or soul atoms - or soul superstrings or soul quarks, for that matter - amounts to saying that the soul is of a type of substance different from the rest of material objects. – Alexander S King Mar 12 '15 at 15:05
  • @king: good point - but substances are causally closed; atoms aren't; monads are - but Leibniz uses the notion of a prearranged harmony to mimic interaction or cause. – Mozibur Ullah Mar 12 '15 at 15:11
  • Epicurus uses the existence of the will, for example, to deduce that atoms must display a certain 'willfulness' - i.e. the *clinamen*. – Mozibur Ullah Mar 12 '15 at 15:13
  • I think the list of notions in this answer is fascinating. Starting with Spinoza's "absolute self-subsisting simple substance", what are some of the scientific experiments that imply that it exists, or experiments that show it doesn't exist? Also the "soul atoms" sound as something that must surely have interested experimenters, yes? – Cheers and hth. - Alf May 24 '15 at 05:58
  • @Cheersandhth.-Alf: can you show me an experiment that gives me an actual number - that I can hold in my hand? I mean your agenda appears to be to reduce everything to an experiment - if I can test for it, it must exist; in which case why not hang out in physics.SE; this is after all a site devoted to philosophy; and the empirical angle is just one small fragment of the edifice; plus it has a history - which is in part what I'm referring to. – Mozibur Ullah May 24 '15 at 12:34
  • You personally might not be happy with the notion of 'soul atoms'; but I suppose you think the word means an atom as it's conceived today; the ancient notion refers to something that can't be separated from itself, i.e. divided; so an atom of oxygen is not an atom - the quark is; or, if one believes in string theory, the string is; or, on some very speculative ideas of physics, noticing that one is tending to greater and greater unification, an atom is that which composes spacetime as well as the matter and force content. – Mozibur Ullah May 24 '15 at 12:42
  • This is what Descartes, for example, was getting at when he identified everything as extension; and following him Spinoza; but this is already far beyond anything that can be experimentally done now. As for the 'necessary substance', it has a long history going back to Aristotle, and through Plotinus's neo-Platonism, to Avicenna and then Spinoza; are you saying that one ought to write off this history - because we now know everything? And we now know how to know - the experiment? – Mozibur Ullah May 24 '15 at 12:47
  • Despite the historical record showing that certain ideas were experimentally verifiable? If I were to whisk you back in time to 500 BC, to Thrace, what would you say to him? Perhaps 'I can't see how you're ever going to test for atoms' (remembering that they did have some engineering science, i.e. Hero's *Pneumatica* and *Mechanica*)? – Mozibur Ullah May 24 '15 at 12:53
  • But to go back to atoms: a mind is uncuttable - is it possible to have half a mind? I mean to separate one's own consciousness, so that a part of it is here, and a part of it there? That idea of a soul atom or a mind atom is in part what Leibniz means by a soul being a simple substance, i.e. an atom; and this is different from Democritus's conception, where it is emergent; one need not accept necessary substance - for example the Buddhist monk Nagarjuna didn't - and one can see what follows from that, or see how one can justify that position. – Mozibur Ullah May 24 '15 at 12:58
  • In contemporary materialism, if the universe is eternal one can ask: is it necessary? I don't think any serious thinker has thought this; it's always been seen as contingent, and it's the question of what is behind this contingency that the question of necessary substances (or not) contemplates. – Mozibur Ullah May 24 '15 at 13:02
  • To be honest I find your comments tend to snarkiness, for want of a better word; like your comment to an answer I made on Kant: when I answered your objection, you found it 'too difficult to digest'... is that true here? – Mozibur Ullah May 24 '15 at 13:07
  • Thanks for the attempts at explanation. It's not that it's too difficult to understand, although there is much complexity. It's more like I'm an atheist discussing voodoo with some true believers, or e.g. discussing evolution with creationists. Indeed it's difficult to appear to hold a proper respectful tone in such discussion. I recently watched an interview performed by Richard Dawkins, of a creationist woman. She was very polite, and he, at times, appeared aggressive, simply by questioning her (insane) beliefs. Thinking how she influences the learning of children in the US, I almost cried. – Cheers and hth. - Alf May 24 '15 at 15:13
  • For reference, the Richard Dawkins interview: (https://www.youtube.com/watch?v=-AS6rQtiEh8). Note 1: it can be painful to watch. Note 2: the English subtitles appear to have been generated automatically by a decidedly less-than-intelligent Artificial Intelligence technique. Google is just taking AI to the limit in every context where they can. – Cheers and hth. - Alf May 24 '15 at 15:21
  • @Cheersandhth.-Alf: I'm quite certain that I know as much science as you do; evolutionary theory, genetics, physics, i.e. obtaining the field equations of GR by varying the Hilbert action; so there is no need to consider yourself as an apostle of science and condescend to me on that account; and nor should you suppose that because I like thinking on something that means I must be a 'believer', whatever this means here - when 'creationism' is a social phenomenon in the States; and I don't see how we got here from a discussion of a range of philosophical positions on mind-body dualism. – Mozibur Ullah May 24 '15 at 15:45
  • I read several books of Dawkins 20 years ago; I doubt he's saying anything new that he hadn't said in his books; but thanks for the link. – Mozibur Ullah May 24 '15 at 15:48
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/24084/discussion-between-mozibur-ullah-and-cheers-and-hth-alf). – Mozibur Ullah May 24 '15 at 16:38
2

I think the ability to clearly define the 'strong' in 'strong AI' is equivalent to dualism.

If you do not think there is some inscrutable aspect of the human mind, then all AI is continuous with the behavior of young or impaired humans, and there is not some magical point where it becomes 'strong AI', equivalent to humans. In some aspects it is already superior, and in others it could compensate given adequate power or time.

If you consider genetic recombination as a form of computation, it has the power to create human intelligence, having done so. Given a different environment, it could create a different form. We also see it solve problems that we cannot solve, and we have to simply steal its solutions until we understand them. So that is an equally strong or better problem-solving system than human intelligence. Is it 'artificial' enough? It appears to be digital, and we think of it as quite mechanical -- the kind of thing we can emulate in machines.
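
The mechanical character of that process is easy to exhibit. A toy genetic algorithm, sketched below with arbitrary illustrative parameters (target string, population size, mutation rate), already "solves" a problem through nothing but recombination, mutation and selection:

```python
import random

# Toy sketch of recombination-as-computation: evolve bit-strings toward
# an arbitrary target using only crossover, mutation and selection.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))    # single-point recombination
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"target reached at generation {generation}")
        break
    parents = population[:10]            # selection: keep only the fittest
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(30)]

best = max(population, key=fitness)
print(f"best genome matches the target in {fitness(best)}/{len(TARGET)} positions")
```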

If you do think there is this inscrutable aspect, only then can you really state any definition of 'strong AI' with meaning that truly separates it from 'a whole lot of weak AI'.

So the question ends up being baseless or circular. If you accept a functional (performative, Turing-esque) definition of strong AI, you do so only because you are not a dualist, and if you resist, you do so only because you are one.

1

I think that people who claim strong AI is impossible do tend (often unwittingly) to commit themselves to some manner of dualism. But there is an argument to be made that the process is inscrutable:

We have no general theory for approximating arbitrary recurrent functions. Indeed, there are all sorts of recurrent functions with profoundly frustrating processes such as chaotic functions where long-term values cannot be predicted without infinitely accurate measurements of current states even when you know the form of the function.

Furthermore, our brains are heavily activity-dependent (synaptic plasticity, adaptation, etc. etc.). Therefore it plausibly could make a difference to the qualitative operation of the system what the long-term chaotic behavior is. (It is not guaranteed that it will, but it might.)

Thus, it is at least somewhat plausible that although our brains are in a universe which is nominally computable, in practice there is no way to find the computation that needs to be done. At every reduced level you run into exponentially hard problems (computational chemistry is too hard, protein folding is too hard, etc.), and at the highest level of abstraction there is no way to discover or verify the algorithm (and algorithmic space is enormous; you can't brute force the search).
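
To put rough numbers on "algorithmic space is enormous", here is a back-of-envelope sketch (the throughput figure is an arbitrary, deliberately generous assumption):

```python
# Back-of-envelope sketch: counting candidate programs shows why
# brute-forcing algorithm space is hopeless. The 1e18 tests/second
# throughput is a made-up, generous figure.

SECONDS_PER_YEAR = 3.15e7
CANDIDATES_PER_SECOND = 1e18

for bits in (100, 200, 300):       # even tiny programs, measured in bits
    programs = 2 ** bits           # distinct bit-strings of that length
    years = programs / CANDIDATES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit programs: {programs:.2e} candidates, "
          f"~{years:.2e} years to enumerate")

# Even at 10^18 candidates per second, enumerating all 300-bit programs
# takes ~6e64 years; real programs are millions of bits long.
```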

This would, nonetheless, allow us to monitor and replay the entire activity of someone's brain (given almost unimaginable heroics of instrumentation and genetic engineering), but that is not strong AI. (It is very weird, especially from a philosophical standpoint: here you have a consciousness-over-time arrayed as patterns in space instead.)

Personally, I don't think this type of inscrutability is very likely, but the reasons for thinking so are more of a vague hunch about the evolvability of systems that rely upon details of chaotic behavior of self-recurrent systems than anything truly sound.

Rex Kerr
  • For neither your answer nor the OP do I grasp what you mean by "dualism". I can conceive of it as either a trivial proof where dualism is synonymous with the denial of reductionism OR the imputation that to deny mind-brain reductivism is to commit yourself to a Cartesian *res cogitans* / *res extensa* dichotomy... – virmaior Mar 12 '15 at 09:22
  • @virmaior - I was assuming that the OP meant the latter (and that the denial is a denial that even _in principle_ it is impossible, not merely that you couldn't be sure, couldn't do it practically, etc.). – Rex Kerr Mar 12 '15 at 09:39
  • I'm not seeing the butterfly effect as a serious constraint. Strong AI does not require that the AI can reproduce the exact mental state of any particular individual over arbitrary time intervals; it is just that the AI is effectively the same as a human intelligence. The fact that the AI's "mental trajectory" will diverge from the mind on which its initial state was based (assuming that's how it's constructed) is no different from the fact that the trajectory of your mind is (almost certainly) radically different from mine. – Dave Mar 12 '15 at 12:20
  • @Dave - I don't think your comment takes my third paragraph into account. – Rex Kerr Mar 12 '15 at 12:26
  • WRT your third paragraph and @Dave's post, consider the state of someone's brain at this point in time. If it is so exquisitely sensitive to its initial state and/or its 'long term chaotic behavior' (if that is something different from the former) that it cannot be successfully modeled, then it is extremely unlikely to continue working for any appreciable time; yet brains are routinely created in large numbers and work for extended periods. Therefore, the idea that there is some unobserved dependency on chaos here seems to be the sort of postulate that Occam's razor rejects. – sdenham Apr 06 '18 at 14:33
  • @sdenham - The premise that chaotic systems cannot continue working is false. Also, my point isn't that it is in principle impossible to model the algorithm, just that in practice you might not be able to find it (or know that you've found it). – Rex Kerr Apr 15 '18 at 22:57
  • My comment is not predicated on any such blanket assumption; it is an observation that the empirical evidence is against it actually being the case that brains (or any other mind-making thing) are too complex or fine-tuned to be understandable. Anyone proposing such a theory bears the burden of explaining why minds do not, in practice, rapidly diverge into (pseudo-)random high-entropy states, and without evidence for it being both feasible and necessary, we can dismiss it with Occam's razor. – sdenham Apr 16 '18 at 13:58
  • Weather is chaotic (in the technical sense of the term) and doesn't rapidly diverge into pseudorandom high-entropy states. Your comments are predicated on incorrect assumptions. – Rex Kerr Apr 18 '18 at 15:41
  • But weather is understood. It is the premise that minds are so chaotic that they cannot be understood that can be dismissed by Occam's razor. – sdenham Apr 19 '18 at 12:30
  • Occam's razor is a guideline, not an inviolable rule. (It fails all the time in biology anyway.) Presumably a non-dualist who thinks strong AI is impossible would have ancillary reasons to doubt the simpler hypothesis. – Rex Kerr Apr 19 '18 at 17:22
  • True, Occam's razor is a heuristic, but I think there is a touch of the motte-and-bailey in your non-dualist AI-denier’s position. His bailey is that he has an apparently scientific explanation for his denial, but when pressed in detail on the scientific plausibility of his position, he retreats to the motte of his position being logically indefeasible. This may be moot anyway, because if he is an AI denier, rather than a doubter, his position is not that things merely _might_ be so. – sdenham Apr 20 '18 at 00:33
  • @sdenham - You could be right. I don't have an argument for denying strong AI, only a sketch of how it might be done. It would hinge on having very good arguments against strong AI; what I present above is _only_ to avoid the charge of dualism, _not_ an entire argument. – Rex Kerr Apr 25 '18 at 17:11
1

There are two reasons why it might be impossible to create an Artificial Intelligence with a mind equal to or better than that of a human: One reason could be that the building blocks of the brain are in some way superior to silicon chips or anything else that humans could create as the substrate for intelligence. The other reason could be that creating Artificial Intelligence is just very difficult, and human scientists and software developers are not clever enough to do it.

Substance dualism would be a subcategory of the first reason. But you could argue that the brain has about 10^11 neurons, each with on average 7,000 connections, and that is just a big number that beats current computer hardware (on the other hand, computer hardware runs a lot faster than the human brain's). That is an argument for human-grade AI not being possible today, but not for its impossibility in the future; substance dualism would be an argument for "impossible in principle".
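
A back-of-envelope version of those numbers (the 4-bytes-per-synapse figure below is an arbitrary assumption, only to give a sense of scale):

```python
# Back-of-envelope arithmetic for the figures quoted above.
neurons = 1e11               # ~10^11 neurons in a human brain
synapses_per_neuron = 7e3    # ~7,000 connections each, on average
total_synapses = neurons * synapses_per_neuron
print(f"total synapses: {total_synapses:.0e}")            # ~7e14

# If storing one synapse's state took 4 bytes (an arbitrary guess), a
# static snapshot alone would need ~2.8 petabytes: large, but data-centre
# sized, which is why this argues "not possible today" rather than
# "impossible in principle".
print(f"snapshot at 4 bytes/synapse: {total_synapses * 4 / 1e15:.1f} PB")
```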

The other argument, that creating an AI is just very difficult, also doesn't speak for "impossible in principle". It might be that creating an AI requires a computer that spends 18 years learning like a human. That would mean "not impossible". (Note that once such a computer had learned for 18 years, it could easily be duplicated, unlike humans.)

So if there are claims that AI is in principle not possible, then attributing some mysterious quality to the human brain that humans can't reproduce would be one argument for such a claim.

Just a note: it is well known that there are certain mathematical problems that computers cannot solve. But humans can't solve them either. Humans can choose to ignore such problems; an artificial intelligence would need the ability to ignore such problems as well.
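
Turing's halting problem is the standard example. Here is the usual diagonal argument sketched in Python; the halts() oracle is hypothetical, and the point of the proof is precisely that it cannot exist:

```python
# Sketch of Turing's diagonal argument. The oracle below cannot actually
# be written; that is what the proof establishes.

def halts(program, argument):
    """Hypothetical oracle: return True iff program(argument) halts."""
    raise NotImplementedError("no such total function can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about self-application.
    if halts(program, program):
        while True:    # loop forever if predicted to halt
            pass
    # otherwise halt immediately

# If halts() existed, paradox(paradox) would halt exactly when the oracle
# says it doesn't: a contradiction. And the argument binds human reasoners
# armed with pencil and paper just as much as it binds machines.
```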

gnasher729
  • You keep implying there should be other reasons... But can you give any? If not, the OP is correct. –  Mar 24 '15 at 21:58
  • Why would I need to give other reasons? Someone who claims AI is impossible may have other reasons. He may be right or wrong, it doesn't matter; he can have other reasons. Therefore claiming that AI is impossible doesn't imply belief in substance dualism. – gnasher729 Mar 24 '15 at 23:41
  • That fails to be an argument, so this fails to be an answer to the question. You have given no evidence that he is right or wrong. Anyone might have a reason to doubt any fact. But that does not mean that every fact is equally open to question. –  Mar 25 '15 at 03:10
1

With regard to your points 1 through 4, you have to be a bit more careful. Sure, we could probably artificially produce something modelling a human thinking brain, substituting something else for biological tissue; I doubt any scientist would deny this possibility. But this is likely not what you, or anyone, really understands by hard AI. Really we want a computational model of intelligence, a Turing machine running a very complicated program. It is difficult to "disprove" the possibility of this, but there is very good reason to doubt it because of Gödel's incompleteness theorem and Turing's halting problem. It is bizarre that this is not mentioned above. It would be difficult to do these things justice here, but some people have thought long and hard about the question; see for instance the books of Roger Penrose.

Yasha
  • The Church–Turing–Deutsch principle, an extension of the original Church–Turing thesis published in the 80s, states that any physical process can be simulated by a Turing machine. Presumably, unless you're a substance dualist, you consider the brain to be a physical process. Also keep in mind that Gödel's and Turing's results were originally about computations performed by humans - based on the idea that an algorithm is any calculation that can be performed in a finite number of steps. It is only later that they came to be interpreted as mainly being about artificial machines. – Alexander S King May 22 '15 at 13:27
  • Maybe you should read up: http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis. The point is, if the brain is falling back on a physical process, formalized by, say, a function from A to B, such that this function is not algorithmically computable, then we are not a Turing machine, end of story. What could be the process? That is very interesting; once again I refer to Penrose for some nice speculations. – Yasha May 22 '15 at 13:38
  • I am aware of Penrose's arguments and I disagree with them, see comments above. Again, please refer to the Church-Turing-Deutsch thesis http://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch_principle – Alexander S King May 22 '15 at 13:48
  • First of all, if you are aware of Penrose's arguments then I don't understand your original question. Certainly the views of Penrose have nothing to do with dualism. He is following concrete mathematical and physical ideas to arrive at his conclusions. Unless you are claiming that anyone who has doubts on the computability of physical processes is somehow a dualist. Second, please don't put Church and Turing next to Deutsch. – Yasha May 22 '15 at 16:12
  • Also there are some simple models of quantum gravity that are not computable, as that would require a classification of smooth 4-manifolds. Your faith in the computability of physics is then just that: faith, and a very strange one I must say. – Yasha May 22 '15 at 16:20
  • "Second, please don't put Church and Turing next to Deutsch" - what's that supposed to mean? – Alexander S King May 22 '15 at 16:49
  • It is a radical departure/generalization from the essence of Church and Turing, and I somehow doubt they would approve. But let's not get sidetracked. In mathematics/science one never uses A-B-C for authors unless all authors asserted essentially the same statements. – Yasha May 22 '15 at 16:57
  • Err, unless A-B-C are asserting a knowingly true fact. :) – Yasha May 22 '15 at 18:02
  • "It is a radical departure/generalization from the essence of Church and Turing" - it is not that radical if you think about it. Deutsch simply noticed that all computations are physical processes and, conversely, that all physical processes with a numerical outcome can be considered as computations. You might want to read his original paper on the topic, I find it very clear http://old.ceid.upatras.gr/tech_news/papers/quantum_theory.pdf – Alexander S King May 22 '15 at 20:46
  • "Can be considered computations" - sure, and so can we be considered computers of a kind, but a priori not Turing machines! I am going to add another post as this side discussion is getting too long. – Yasha May 22 '15 at 22:55
0

The scientific consensus is almost overwhelming that strong AI is possible and substance dualism is inconsistent (unless you add a bunch of qualifiers to "impossible", which goes against Searle's original argument against strong AI). Once we get that out of the way, in response to the OP's actual question: two incorrect beliefs do not have to imply each other. You have identified a set of assumptions that would equate the two incorrect beliefs, but that does not mean a belief in one statement implies a belief in the other, as it is entirely possible that the "believer" has not taken into consideration the comprehensive implications of his belief. If that were the case all the time, we wouldn't need philosophical debates at all.

If your goal is to demonstrate that these statements are wrong, then you can simply attack each statement individually, since the equivalence you're drawing about beliefs in these statements adds no epistemological value to your argument.

Yang