
I am struggling to understand the meaning of some of the terminology John Searle uses in "Minds, Brains, and Programs." For example, right before "IV. The combination reply," he writes that

The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states.

What does "causal properties" really mean? I don't think it has to do with causation in the traditional sense (or if it does, I can't exactly see what he's going for here).

J D
Vasting
  • Intentional states are those mental states, such as beliefs, desires, and thoughts, which have some sort of representational content - that is, they are about, directed at, mean, or represent something. So here, Searle is expressing the view that the simple formal simulation of a bunch of neurons firing cannot cause intentional mental states to be simulated. He is using *causal* in the traditional sense. – nwr Dec 14 '21 at 01:37
    "Causal properties" are those that allow the brain to interact with its environment, and hence establish connections between its structures ("intentional states") and what they are about (what's "intended"). Formal properties, on the other hand, are causally indifferent (which is why they can be detached and abstractly represented), and hence irrelevant to establishing intentional connections. – Conifold Dec 14 '21 at 04:08
  • @Conifold So when I see a tree and associate it with the word "tree," is that what the causal property is? What differentiates it from semantics? – Vasting Dec 14 '21 at 04:14
  • The association mechanism involves causal properties (of the brain and body). "Semantics" is a very ambiguous word, *formal* semantics is of a kind with formal syntax, at best, it "relates" one abstraction to another, not to anything real. – Conifold Dec 14 '21 at 04:55

3 Answers


Short Answer

John Searle accepts that the human brain is a computer of sorts, but rejects the claim that it is like current digital computers. He believes there is something inherently different between the biological causality connecting the brain to the universe and the causality connecting a digital computer to it. So while brains and computers have their similarities, only brains can manifest "aboutness" (intentionality) toward states of affairs: physical causality, on his view, is larger than and different from the digital models (read: Turing machines) we build of it. That difference is best understood as consciousness. He believes computers merely simulate consciousness, which is the default position of many philosophers and can be traced back to Descartes and his views on the exceptional nature of the human mind; one such view is that only humans have a soul and that not even animals truly reason. (See "Quotations from Descartes on Animals as Automata" (PhilSE).) Philosophical debate over Cartesian dualism is fundamental to an understanding of contemporary philosophy of mind.

Long Answer

Alan Turing once proposed his Turing Test, along with a timeline on which computers would quickly come to mimic the capacities of the brain. He would have been profoundly disappointed that Hubert Dreyfus, in his What Computers Can't Do (1972), convincingly argued that Turing was wrong. Searle is a relatively orthodox philosophical thinker who is skeptical that digital computers can ever fully represent the human brain and manifest consciousness, which he takes to be a necessary condition of intentionality. He is famous for his thought experiment, the Chinese Room, which he uses to advance this skepticism.

In his book The Mystery of Consciousness, he says on p. 294:

[T]he essence of consciousness is that it consists in inner qualitative, subjective mental processes. You don't guarantee the duplication of those processes by duplicating the observable external behavioral effects of those processes.

This notion of computers possessing intentionality faces an even steeper challenge than overcoming radical solipsism. Philosophers like David Chalmers have come up with ideas such as philosophical zombies and the hard problem of consciousness, continuing the Ancient Greek tradition of making the fallibility of knowledge evident through skeptical argumentation.

On the other end of the spectrum are more progressive thinkers like Alan Turing, whose belief in broad notions of artificial intelligence might qualify him as a believer in artificial general intelligence (AGI), the body of thinking that holds computers can be every bit as sentient as people and possess intentionality just as they do, broadly in the spirit of functionalism in the philosophy of mind. (Warning: my biases are with the AGI crowd; see my response to Computers, Artificial Intelligence, and Epistemology.) As a matter of fact, a small but active community of cognitive scientists and philosophers is trying to advance a philosophical thesis to prove Searle wrong by showing how consciousness can be simulated.

As such, John Searle is largely in line with the orthodoxy in the broader analytic philosophical community expressing skepticism about the possibility that digital computers have, or will ever have, intentionality. Some interesting proposals for supplying the missing ingredient have been advanced by thinkers who agree with Searle, Roger Penrose and his The Emperor's New Mind being an excellent example. On Penrose's thesis, neurons have quantum properties that are not captured by deterministic state machines, which merely manifest computability.
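To make the contrast concrete, here is a minimal sketch of a deterministic state machine in Python (my own illustration, not Penrose's; the states and inputs are made up). Given the current state and an input symbol, the next state is fixed by a lookup table, which is the sense in which such a machine merely manifests computability:

```python
# A minimal deterministic finite-state machine (DFA). Each step is fully
# determined by the (state, input) pair -- nothing beyond rule-following.
# This toy DFA accepts binary strings containing an even number of 1s.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def run_dfa(bits: str, start: str = "even") -> bool:
    state = start
    for symbol in bits:
        state = TRANSITIONS[(state, symbol)]  # every transition is fixed
    return state == "even"

print(run_dfa("1001"))  # True: two 1s
print(run_dfa("1011"))  # False: three 1s
```

Penrose's claim, on this reading, is that whatever neurons do is not exhausted by a transition table of this kind, however large.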

Part of whether one believes Searle is wrong has to do with the metaphysical presuppositions and first principles involved in the philosophy one does. Since computers are essentially composed of microprocessors (ALUs, control units, and MMUs) that compute formal systems, one's philosophy of mathematics can be used as a bellwether of sorts. For instance:

The association mechanism involves causal properties (of the brain and body). "Semantics" is a very ambiguous word, formal semantics is of a kind with formal syntax, at best, it "relates" one abstraction to another, not to anything real. - Conifold

It is arguable, however, that formal semantics does relate to "real things": every model-theoretic construction using the formalism of truth-conditional semantics can be thought of as an abstract mathematical object used, in the spirit of applied mathematics, to model something real in the physical universe. Philosophically, mathematical constructivists see all formal systems, including formal semantic models, as rooted in the psychological experiences of the mind.
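To illustrate what a model-theoretic construction amounts to, here is a toy sketch in Python (my own illustration; the domain and predicates are invented for the example). A "model" is a domain plus an interpretation of predicates, and satisfaction relates one abstraction (a formula) to another (the model); whether the model in turn depicts anything physical is exactly the applied-mathematics question at issue:

```python
# A toy model in the sense of truth-conditional semantics: a domain of
# individuals and an interpretation assigning each predicate an extension.
domain = {"oak", "rose", "granite"}
interpretation = {
    "Tree": {"oak"},
    "Plant": {"oak", "rose"},
}

def satisfies(predicate: str, individual: str) -> bool:
    """Model |= Predicate(individual) iff the individual is in the extension."""
    return individual in interpretation.get(predicate, set())

def every(antecedent: str, consequent: str) -> bool:
    """Model |= forall x (Antecedent(x) -> Consequent(x))."""
    return all(satisfies(consequent, x) for x in domain if satisfies(antecedent, x))

print(satisfies("Tree", "oak"))   # True
print(every("Tree", "Plant"))     # True: every tree in this model is a plant
```

On the constructivist reading above, even this little structure is ultimately rooted in the mental acts of whoever defines and runs it.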

Whether it is possible to simulate a human brain is controversial in artificial intelligence, with one crowd believing that AI will never consist of anything more than symbolic systems and machine learning that twiddle bits, and the AGI crowd believing that once cognitive science provides the right insights, machines will approach, attain, or even surpass human intelligence. The latter position was made most famous by the futurist and computer scientist Ray Kurzweil, who argues for transhumanism and the singularity in his The Singularity Is Near.

J D
  • "As such, John Searle is largely in line with the orthodoxy in the broader analytic philosophical community expressing skepticism of the possibility that digital computers have or will ever have intentionality" What's the basis for saying this is position is the orthodox one? The SEP article on the Chinese Room lists many analytic philosophers who advocate the [systems reply](https://plato.stanford.edu/entries/chinese-room/#SystRepl) which says there would be consciousness/intentionality in the system as a whole, among them David Chalmers who is very influential in analytic philosophy of mind. – Hypnosifl Dec 14 '21 at 19:54
  • @Hypnosifl Well, the question is an empirical one, so the anecdote of a list doesn't in any way imply anything about a measure of central tendency pro or con. I certainly concede that within the philosophy of mind, there's strong support for the view that Searle's experiment is a failure. I have a whole book devoted to replies, of which my favorite is Haugeland's. Still, if you take a broad cross-section of all self-described analytic philosophers, the predominant support for the physicalist paradigm prejudices them toward skepticism, I suspect. But I concede my claim is speculative and anecdotal... – J D Dec 14 '21 at 20:10
  • Of course I'm open to any evidence to support a claim definitively one way or another. I live in a world of cyberpunk dreams and science fiction generally, but I don't know that the sort of optimism espoused by Ben Goertzel is supported by academia generally. – J D Dec 14 '21 at 20:13
  • I suppose I should do my homework on the question. – J D Dec 14 '21 at 20:13
  • *the predominant support for the physicalist paradigm prejudices them toward skepticism, I suspect* By physicalist paradigm do you mean eliminative materialism about consciousness/intentionality (in which case the point would be somewhat trivial, since these philosophers wouldn't believe either computer programs *or* human brains have the type of special 'intentionality' advocated by non-eliminative materialists, i.e. they wouldn't say AI must lack something we have, like Searle does), or something weaker like belief in the causal closure of the physical world (which Chalmers advocates as well)? – Hypnosifl Dec 14 '21 at 20:51
  • @Hypnosifl No, I'm not calling out any specific flavor of physicalism so much as intuiting that the skepticism concomitant with naturalism, or maybe better the proclivity of physicalists to rely on fallibilistic epistemology, dictates a certain attitude towards claims about the mental. If claims are not grounded in some sort of emergence or supervenience and can't be shown to be theoretically reductive in some sense, they border on woo. I think PoM thinkers are more of the exception because the dialog builds from putative solutions to dualism in a way an ethicist or philosopher... – J D Dec 14 '21 at 22:44
  • of physics or math might be familiar with. Like I said, it's my intuition about a general skepticism towards such claims. Certainly refuting the CRA is different from endorsing claims about replicating human intentionality, and not even ML has offset the feeling that both symbolic and connectionist approaches are logicomathematical parlor tricks. – J D Dec 14 '21 at 22:47
  • Are you in possession of evidence or argumentation to the contrary? – J D Dec 14 '21 at 22:48
  • I wouldn't say w/ any confidence that *most* analytic philosophers of mind who reject eliminative materialism about consciousness/intentionality would also accept consciousness/intentionality in AI if behaviorally identical to humans--I just don't see any good reason to think most would reject it, given that it's easy to name plenty of prominent ones who'd accept it. So that's why I was asking what your basis for asserting that was. The [PhilPapers 2020 survey](https://survey2020.philpeople.org/survey/results/5010) does indicate more favor functionalism than dualism or mind/brain identity. – Hypnosifl Dec 14 '21 at 23:04
  • Also, the "Other minds" question asked "for which groups are some members conscious?" and the [results](https://survey2020.philpeople.org/survey/results/5106) showed that when asked about "future AI systems", 39.19% chose "accept or lean towards", a bit more than the 26.83% who chose "reject or lean against" (presumably the other 33.98% were undecided). But the PhilPapers survey goes to a broad group of mostly analytic philosophers, would be interesting to see a more narrow survey focused on those in analytic philosophy of mind. – Hypnosifl Dec 14 '21 at 23:11
  • @Hypnosifl I'm running on fumes, but I think you're right to call out the claim as unfounded. I'll revise when I get the chance. I was being a bit ungenerous, I suspect. Thx for the counterweight. – J D Dec 14 '21 at 23:33
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/132346/discussion-between-j-d-and-hypnosifl). – J D Dec 15 '21 at 07:14
  • I do not believe Turing suggested or believed that computers could be sentient or possess intentionality; rather, he believed that such questions were meaningless, since we can never know anyone's or anything's internal mental states. I believe, though I did not go re-read his 1950 paper to check, that he explicitly made this point. He said (my paraphrase) that if something acts intelligent, then it's intelligent. He did not say it's sentient or self-aware or has intentional mental states. As far as my understanding goes. – user4894 Jan 27 '22 at 00:13
  • @user4894 First, the paper has a response to The Objection from Consciousness. Clearly, he recognized that it was not shown that machines are not conscious, but that's not the same as rejecting consciousness. "Animals minds seem to be very definitely sub-critical. Adhering to this analogy we ask, ‘Can a machine be made to be super-critical?’" So, he is clearly open to the idea that machines can be "super-critical" which might be read as human-level intelligence... – J D Jan 27 '22 at 08:15
  • Then he goes on to say... "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain." So, he clearly believes that computers can learn like people given adequate programming... he uses the term machine learning, in fact... – J D Jan 27 '22 at 08:16
  • He concludes, "We may hope that machines will eventually compete with men in all purely intellectual fields." Obviously, he believes the potential is there for a machine to rival a human, and this necessitates awareness. Now, he makes no claims about intentionality explicitly, this is true... but I think you are refuting a point I did not make... I said... – J D Jan 27 '22 at 08:24
  • "Alan Turing who believed in broad notions of artificial intelligence that might qualify him as a believer in artificial general intelligence which is a body of thinking that computers can be every bit as sentient and possess intentionality as people". I simply no claims that he did believe such things as you deny; I only claimed that his optimism about the machines capacity for mimicking general intelligence might place him in the realm of an early proponent of AGI. Remember, during the 50's, behaviorism was the norm, so it is certainly true that his emphasis was on imitation and not on... – J D Jan 27 '22 at 08:28
  • proving an equivalence. In fact, extreme behaviorists rejected notions of sentience, consciousness, and intentionality even in humans. I'd wager that if Turing had lived to see the rejection of behaviorism, he would have been as optimistic as the GOFAI crowd that human-level intelligence and awareness were possible with mere symbols, and if he were alive today, he'd embrace an embodied notion of cognition. So, more plainly, I never claimed that he suggested or believed such things, only that he was optimistic of parity, a fact made obvious by his closing... – J D Jan 27 '22 at 08:32
  • "We may hope that machines will eventually compete with men in all purely intellectual fields." – J D Jan 27 '22 at 08:32
  • @JD Thanks for the detailed response, I will go back and reread the paper. Perhaps I misremembered it through my own anti-AGI biases. – user4894 Jan 27 '22 at 09:01
  • @user4894 I believe the important thing to consider is that the differentiation between AI and AGI in the contemporary sense didn't exist as a broader issue until at least Dreyfus published his paper at RAND and then followed up with his book; it would be my contention that the lack of successes following the positing of the physical symbol system hypothesis by Newell and Simon likely served as a catalyst to drive connectionist approaches, and that the symbolic-connectionist debate, which I think has somewhat dissolved itself, traces an arc of optimism from Turing... – J D Jan 27 '22 at 19:23
  • today, embodied cognition seems to be the best contender for reconciling human-level intelligence with machine intelligence, and I would argue that machine learning strategies, which have produced some serious successes, bolster that hypothesis. I really do believe that there's a pro-symbolic bias in philosophy which, because of the primacy of logic and formal systems, tends to bias thinkers against AGI and lead AI thinkers in the wrong direction... but that's because I come from CS, where all symbols inhere in computational systems, and it's simply not possible to follow... – J D Jan 27 '22 at 19:26
  • solipsism into extremes, because computers simply can't assert for themselves that they have intentionality. If you have any other objections, please lodge them! I'm not smart enough to know everything. :D – J D Jan 27 '22 at 19:27
  • I should also like to have the chance to persuade you here: https://philosophy.stackexchange.com/questions/68915/computers-artificial-intelligence-and-epistemology/68956#68956 – J D Jan 27 '22 at 19:30

I think that with "causal properties" Searle is referring to the brain's ability to cause physical actions by the muscles. Producing intentional states means configuring the motor cortex neurons in such a way that they will send the intended control signals to the muscles.

This is actually something that a simulation cannot do. Simulations have no intentions; they don't intend to do anything. They only do what they are programmed to do; they only follow the programmer's intentions.
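A small sketch may make this concrete (my own illustration, with made-up weights and thresholds). Simulating "the formal structure of the sequence of neuron firings" amounts to updating numbers by a fixed rule; nothing in the program connects those numbers to muscles or to the world unless the programmer wires such a connection in:

```python
# A bare-bones simulation of threshold neurons: each step applies a fixed
# update rule to an array of numbers. This captures only the formal
# structure of a firing sequence -- no signal ever reaches a muscle unless
# the programmer attaches one.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(5, 5))             # made-up synaptic weights
state = (rng.random(5) > 0.5).astype(float)   # which neurons fire at t=0

for t in range(3):
    # A neuron fires at t+1 iff its weighted input exceeds a threshold.
    state = (weights @ state > 0.5).astype(float)
    print(f"t={t + 1}: firing pattern {state}")
```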

A computer program with intentions would no longer be a simulation. It would be a digital life-form.

Pertti Ruismäki

He is using “causal” in a conventional way, and “intentional” to refer to intentionality (https://plato.stanford.edu/entries/intentionality). He’s claiming that our primary interest in mental activity is not on the level of the brain’s formal structure or the behavior of its neurons and synapses, and that simulating these things doesn’t teach us anything about the brain’s ability to produce (i.e., cause) intentional states.