
Take the SEP definition of functionalism:

> Functionalism in the philosophy of mind is the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part.

According to this view, to identify the same mental state in two different physical systems, we only need to check that a part of each system plays the same role in its corresponding system. This is the notion I want to ground mathematically: what does it mean, mathematically, for a part of one system to play the same role as a part of a second system?

I'll explain below what I would consider an adequate answer to this question. There's a lot of theory in functionalism that attempts to answer it, but it all seems a bit informal and high-level; I have not encountered any that is mathematically grounded. Ramsey sentences do not seem to meet my criteria, due to vagueness in the phrases "tends to be caused by" and "tends to produce," unless I am missing something about them.

An adequate answer to the question would be a function f:

f(A, a, B, b) = true or false, depending on whether part a serves the same role in system A as part b serves in system B

I do not know how to write this function f, and if I did my question would be answered.
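
To pin down the shape of what's being asked, here is the signature of f as a stub; a minimal Python sketch with placeholder types (System and Part are hypothetical names, since what they should be is itself part of the question), where the body is exactly what is missing:

```python
from typing import Any

System = Any  # placeholder: what counts as a "system" is part of the question
Part = Any    # placeholder: likewise for "part of a system"

def f(A: System, a: Part, B: System, b: Part) -> bool:
    """True iff part a serves the same role in system A
    as part b serves in system B."""
    raise NotImplementedError  # writing this body is the open problem
```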

One difficulty of writing f is the domain of discourse. What exactly is a system, and what is a part of a system? To make things simpler and more concrete, and to filter out non-rigorous handwaving explanations, let us restrict ourselves to patterns in Conway's Game of Life. A pattern in Conway's Game of Life, evolving over time, is an example of a system. So, what does it mean (mathematically) for one part of a pattern in Conway's Game of Life to serve the same role as a part of a different pattern in Conway's Game of Life? In other words, can you mathematically define f for the special case of Conway's Game of Life?
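
For concreteness, a Game of Life system can be represented directly; a minimal sketch, using the standard B3/S23 rules with a pattern encoded as a set of live-cell coordinates (this encoding is one choice among many):

```python
from collections import Counter

def step(cells):
    """One generation of Conway's Game of Life (B3/S23 rules).
    cells is a frozenset of (x, y) coordinates of live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return frozenset(
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    )

def orbit(cells, generations):
    """A 'system' in the question's sense: a pattern plus its time evolution."""
    history = [cells]
    for _ in range(generations):
        cells = step(cells)
        history.append(cells)
    return history
```

What a "part" of such a system is (say, a subset of live cells tracked across generations), and what f should do with two such parts, remain exactly the open questions.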

Or is it impossible to mathematically define f? Perhaps f needs a couple of additional parameters to tell it an interpretation of each system. Though what such an interpretation would be, as a mathematical object, is also a mystery to me.

  • By the way - to give an idea of how I'm thinking about f, consider the related question of when an element e1 of a mathematical group G1 has the same role as an element e2 of a group G2. For this, the answer is easy: is there an isomorphism phi:G1 -> G2 where phi(e1) = e2? If so, they have the same role. But it is not so easy to answer the question when it comes to time-evolving systems, instead of groups. (A brute-force version of this check is sketched after this thread.) – causative Jun 15 '23 at 19:48
  • Not sure if this reasoning is mathematically rigorous enough for you, but in your life example, if `A` and `B` are the same pattern and `a` and `b` are the same pattern, then `f` is true. "Same pattern" would allow translation and rotation (90 degrees anyway). But system and part would need to have the same relative position. This doesn't conclusively say other cases won't have the same role, just that these cases are certain to. – candied_orange Jun 15 '23 at 20:35
  • @candied_orange That's certainly part of an answer. Suppose A and B are each a simulation of a Turing machine within Conway's game of life, but implemented using different game of life patterns, and suppose a is a representation of a particular Turing tape cell within A, and b is a representation of the corresponding tape cell within B. We'd like to say that f(A, a, B, b) = true, but how? – causative Jun 15 '23 at 20:41
  • Could it be something like how a subroutine in one programming language can be used to trigger the same hardware effects as a different subroutine in a different programming language? Like if you could code an audiovisually equivalent version of a game in two separate game engines? – Kristian Berry Jun 15 '23 at 20:45
  • 1+1 = 2. 3 - 1 = 2. There are many ways to do the same thing. Making these into Turing Machines just makes the issue sound complicated. All Turing Machines satisfy your function if they're complete. They might use game of life 1's and 0's. +5v and 0v. Or even [dominos](https://en.wikipedia.org/wiki/Domino_computer). But given A and B are both correctly working Turing Machine simulations then f is true. – candied_orange Jun 15 '23 at 20:58
  • Re life patterns: reflections would also work. – candied_orange Jun 15 '23 at 21:02
  • A pattern-dependent case would be if the part were reflected and the system wasn't. If the system happens to be symmetrical, the role played remains the same. – candied_orange Jun 15 '23 at 21:10
  • I can't get my head around how to ever establish that the role can't be the same other than "run em and see what they do". But I think this establishes that what things do can be identical even when how they do it isn't. – candied_orange Jun 15 '23 at 21:17
  • The problem of deciding whether two functional roles are the same is closely linked to the problem of identifying whether something is a functional role. The notion of a functional role is essentially teleological, and teleology is not reducible to physics or to anything else. You have to start with teleological notions. – David Gudeman Jun 15 '23 at 21:52
  • Interpret the systems as relational structures. For example, a set of states with transition maps, as in models of computation, or with some relations on it. Then part A in X "plays the same functional role" as part B in Y if, under a structural isomorphism between X and Y, B is the image of A. This sidesteps the teleology of functional roles, but structural isomorphism might be too weak a condition, as observed in the theory of computational implementation, see [Sprevak](https://marksprevak.com/publications/triviality-arguments-about-computational-implementation-2018/). – Conifold Jun 15 '23 at 22:49
  • @Conifold I think this is along the right lines, but what, exactly, is a "structural isomorphism" between two systems A and B? Your Sprevak link proposes model "M" which is incomplete, not even dealing with input to the machine. Furthermore, a deterministic finite automaton involves a potentially infinite sequence of perfect state transitions, and the article does not describe how such an infinite, perfect process would correspond to a physical object that exists for only a finite duration. – causative Jun 15 '23 at 23:45
  • @Conifold Another thing M fails to explain is how you get the states of the physical system to begin with. You need some mathematical description of extracting the system state from the universe state, despite the system moving from place to place and perhaps changing its shape or size. Having tried to do this myself I can tell you there are even more "triviality" pitfalls in that. The article alludes to more sophisticated models than M; what's the best you've heard of? – causative Jun 16 '23 at 00:23
  • All models are wrong, but some of them are useful. K=JTB does not quite work, but it is a fruitful starting point for more elaborate formalizations of knowledge. Triviality arguments are like the Gettier problem for model M. Sprevak does discuss its extensions at the end, but none would deliver all that is asked for. "Infinite and perfect" idealizations are probably not a critical issue, but ultimately, I think, one cannot sidestep teleology entirely. – Conifold Jun 16 '23 at 00:31
  • @Conifold Every issue is a critical issue, when we're talking math. If you don't have a way to deal with the fact that physical systems are both time-limited and error-prone, then your correspondence of the physical system with the DFA simply fails; there is no correspondence. There may be ways to deal with it - for example, we might look ahead only a certain number of state transitions before we admit to a correspondence - but you have to be explicit about that, or your correspondence fails. (Also how many transitions do we look ahead? That's a very arbitrary and unsatisfying decision.) – causative Jun 16 '23 at 00:35
  • @Conifold A better way to deal with finite time is to treat the end of the system as a special inescapable terminal state of the DFA. But, then we find that physical systems can only correspond to DFAs that have that terminal state (with transitions to the terminal state set up in the right way). With this interpretation, physical systems can't implement arbitrary DFAs. – causative Jun 16 '23 at 00:48
  • I agree. I just have a sense that all of these issues are more technical and less problematic than the principal problem of defining functionality without either trivializing it, or invoking some sort of user-relative pragmatics. That looks like the "hard problem" of functionalism. But it doesn't mean that "easy problems" are literally easy. – Conifold Jun 16 '23 at 06:17
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/146702/discussion-between-causative-and-conifold). – causative Jun 16 '23 at 07:39
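
As a coda to the thread above: the group criterion causative describes can be implemented directly, if only by brute force for small finite groups; a minimal sketch, assuming each group is encoded as a Cayley table (a dict mapping element pairs to their product):

```python
from itertools import permutations

def same_role(G1, e1, G2, e2):
    """Brute-force version of causative's criterion (small groups only):
    is there an isomorphism phi: G1 -> G2 with phi(e1) == e2?
    Each group is a Cayley table: a dict mapping (a, b) -> a*b."""
    elems1 = sorted({a for (a, _) in G1})
    elems2 = sorted({a for (a, _) in G2})
    if len(elems1) != len(elems2):
        return False
    for image in permutations(elems2):
        phi = dict(zip(elems1, image))
        if phi[e1] != e2:
            continue
        if all(phi[G1[(a, b)]] == G2[(phi[a], phi[b])]
               for a in elems1 for b in elems1):
            return True  # phi is an isomorphism carrying e1 to e2
    return False

# Example: Z4 under addition, and the same group with relabeled elements.
Z4 = {(a, b): (a + b) % 4 for a in range(4) for b in range(4)}
names = "pqrs"  # p=0, q=1, r=2, s=3
H = {(names[a], names[b]): names[(a + b) % 4]
     for a in range(4) for b in range(4)}
print(same_role(Z4, 1, H, "q"))  # True: both are generators
print(same_role(Z4, 1, H, "r"))  # False: 1 has order 4, but r has order 2
```

The question's difficulty is that no comparably crisp, agreed-upon notion of role-preserving isomorphism is on offer for time-evolving systems such as Game of Life patterns.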

1 Answer


The TL;DR is yes: there are mathematical models of minds that use functions. The SEP article covers them under the heading of machine state functionalism, and systems like the Turing machine were modeled on the mind of a mathematician; the TM uses mathematical functions to change state. But let's exercise caution.

It's easy to confuse the notion of a mathematical function, a specialized type of relation that guarantees a unique output for each input, with the expression 'serves a function', which is a notion rooted in agency (SEP) and activity, particularly one that is literally intended and not metaphorical in usage, as with the teleological language used in biology (SEP) despite evolution having no literal intention. It is fair to note that a Turing machine, which was considered a functional equivalent of a mathematician and her paper-based algorithms (at the time), certainly invokes both senses, leading to some confusion. Thus, according to the article, machine state functionalism uses both ideas, which is particularly handy when a philosopher of mind advocates a computational theory of mind (SEP).

After psychological behaviorism reached a fever pitch in academia, there was a reaction in the philosophy of psychology called cognitive psychology, which, instead of dismissing mental processes as unscientific, treated them as modelable and measurable in the same way behavior is. From WP:

> Cognitive psychology originated in the 1960s in a break from behaviorism, which held from the 1920s to 1950s that unobservable mental processes were outside the realm of empirical science. This break came as researchers in linguistics and cybernetics, as well as applied psychology, used models of mental processing to explain human behavior.

So, if you ask why Noam Chomsky's work on generative grammar was considered revolutionary, it was because it attacked the behaviorist program associated with Watson, Skinner, and others. Using a grammar as an example: according to functionalism, one doesn't have to have the same nuts and bolts, so to speak, in order to process a grammar. A contemporary example would be an LLM like ChatGPT. Today, one might argue that functionalism is an observation of commonalities among different sorts of physical computation (SEP) that achieve the same results.

ChatGPT produces relatively sophisticated natural-language passages (which of course suffer from semantic hallucinations) despite the fact that computers don't have literal biological brains. That is, a passage from ChatGPT might be the functional equivalent of a passage from a university student (it's all the rage these days to have an LLM just do the work), despite the fact that one is produced by software and hardware and the other by wetware. Much as Turing himself anticipated, if computational strategies become sophisticated enough, and people are isolated from the production over a communication relay, it becomes difficult to decide whether a paragraph was written by a computer.

> The Turing Test, proposed by the British mathematician and computer scientist Alan Turing in 1950, is a test designed to evaluate a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. The test involves a human judge engaging in a conversation with both a machine and a human, without being aware of which is which. If the judge cannot consistently distinguish between the two based on their responses, the machine is considered to have passed the Turing Test. The test aims to explore the concept of machine intelligence and the potential for machines to achieve human-like thinking and communication abilities. It continues to be a significant benchmark in the field of artificial intelligence and raises fundamental questions about the nature of consciousness, cognition, and what it means to be human.

So, ChatGPT is an example of a functional equivalent, because passages of text generated by electromechanical brains and those of organic brains are becoming increasingly indistinguishable. (Disclaimer: the previous paragraph was written by ChatGPT with the prompt "write a brief paragraph on the Turing Test". If you didn't notice, you should probably take note of how sophisticated these tools have become.)

So, functionalism in philosophy of mind is, according to SEP:

> the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part.

Does it make a difference that transistors produced the paragraph above instead of my neurons? Yes and no. But what is on the table with this philosophical precept is that the notion of mentality itself goes beyond the human brain (which itself exhibits neurodiversity, demonstrating that two brains don't have to be identical to have the same abilities).

Can you create function-based models of the mind? Sure; they're called automata, and they include Turing machines. In fact, GOFAI itself is nothing but a series of models of the mind built of procedures that often have a mathematical functional description. Do they fall short of actual brains? Absolutely. But functionalism argues that whatever a mind is, it doesn't have to be a typical human brain, and might include links to computers, a notion captured by the terms transhumanism and extended intelligence, where a person's information resources might be considered part of the mind.
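
To illustrate the "function-based models" point, here is a minimal automaton whose state changes are driven entirely by a mathematical transition function; an illustrative sketch, not a model drawn from any particular paper:

```python
def run_dfa(transition, start, accepting, inputs):
    """A deterministic finite automaton: each state change is the value
    of a mathematical function (state, symbol) -> state."""
    state = start
    for symbol in inputs:
        state = transition[(state, symbol)]
    return state in accepting

# Example: accept binary strings containing an even number of 1s.
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd', '0'): 'odd', ('odd', '1'): 'even'}
print(run_dfa(delta, 'even', {'even'}, '1101'))  # False: three 1s
```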

What is true is that functionalism doesn't allege the mind is a mathematical function. That would just be confusing two senses of 'function'.

  • Machine state functionalism doesn't satisfy me, because it takes the mental state to be the *entire* tape state of the Turing machine. According to that, two different Turing machines in different states are always going to result in two different mental states. But, what if the Turing machines are simulating the same thing, just with different implementation details? (Think of running two different, but functionally equivalent, programs on your computer, such as one written in C++ and the other written in Haskell.) Thus, machine state functionalism does not give me my f(A, a, B, b). – causative Jun 15 '23 at 23:22
  • TMs can be simulated by TMs. Therefore, go second order. If a [TM is M(Q,G,b,S,d,q0,F)](https://en.wikipedia.org/wiki/Turing_machine), then f(M1, M2) = T iff M1 ~ M2, where ~ is defined as needed, comparing various parameters of the TMs according to whatever schema you want. – J D Jun 16 '23 at 00:40
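
One way to cash out the ~ in that last comment, hedged heavily because full behavioral equivalence of Turing machines is undecidable in general, is a finite comparison of observable behavior; a sketch assuming each machine is wrapped as a callable (m1, m2, and the timeout convention are hypothetical):

```python
def behaviorally_similar(m1, m2, test_inputs):
    """Partial check of M1 ~ M2: compare outputs on a finite sample.
    m1 and m2 are hypothetical callables wrapping simulated machines,
    e.g. returning None if a step budget is exceeded."""
    return all(m1(x) == m2(x) for x in test_inputs)
```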