
The Stanford Encyclopedia of Philosophy states:

"Because abstract objects are wholly non-spatiotemporal, it follows that they are also entirely non-physical (they do not exist in the physical world and are not made of physical stuff) and non-mental (they are not minds or ideas in minds; they are not disembodied souls, or Gods, or anything else along these lines). In addition, they are unchanging and entirely causally inert — that is, they cannot be involved in cause-and-effect relationships with other objects"

But according to proper Platonism, instantiations of any form belong to that form, so why aren't the "non-physical" forms considered causally effective by modern Platonists?

J D
    Hello: by 'modern Platonists' do you mean (a) present-day or recent scholars of Plato's philosophy or is your reference to (b) Mathematical Platonists, for example, who adopt or adapt only a portion of Plato's work? – Geoffrey Thomas Nov 24 '19 at 13:44
    Because abstract objects are not platonic forms, and modern platonists are not Platonists. Plato's animated forms are deemed too magical and metaphorical to be taken at face value today. This said, Plato did not operate with the modern notion of causation, so ascribing "causal powers" even to them is a stretch. – Conifold Nov 24 '19 at 13:46
  • @Conifold Abstract objects are platonic forms according to at least some mathematical platonists, if I am not wrong? – ramseysdream111 Nov 24 '19 at 14:47
  • @GeoffreyThomas I meant mathematical platonists like Gödel and so forth – ramseysdream111 Nov 24 '19 at 14:47
    If there are any, Gödel is not one of them; his philosophical source was Husserl. And "mathematical platonism" generally is a very generic idea that there is something objectively existent about mathematical entities, which takes little specific from Plato, even when the expression "platonic form" is used. – Conifold Nov 24 '19 at 14:59
  • @Conifold I simply read it on Stanfords Encyclopedia - " In his philosophical work Gödel formulated and defended mathematical Platonism, the view that mathematics is a descriptive science, or alternatively the view that the concept of mathematical truth is objective. On the basis of that viewpoint he laid the foundation for the program of conceptual analysis within set theory (see below)." https://plato.stanford.edu/entries/goedel/ – ramseysdream111 Nov 24 '19 at 15:09
  • @Conifold I agree with the latter part of your comment though, Plato himself had a lot of other stuff. – ramseysdream111 Nov 24 '19 at 15:10
  • Many modern philosophers of science would simply reject any metaphysical notion of "causality" beyond the notion that the universe obeys mathematical laws, such that the physical state of some region at one time (or a probability distribution on possible states) can be derived mathematically from the physical state of a region at another time--see the discussion of [this question](https://philosophy.stackexchange.com/questions/70930/is-the-idea-of-a-causal-chain-physical-or-even-scientific) along with my answer to another question [here](https://philosophy.stackexchange.com/a/65046/10780). – Hypnosifl May 29 '20 at 23:56
  • I had no idea that the term {abstract object} had a term-of-art meaning that drastically diverged from its compositional meaning. According to the account implicit in Frege's writings, an object is abstract if and only if it is both non-mental and non-sensible. https://plato.stanford.edu/entries/abstract-objects/ I would postulate that this definition is absurd. Everything that does not exist physically exists mentally, and the set of things that exists neither physically nor mentally is the empty set. – polcott May 30 '20 at 21:23
  • Causality is deeply suspect https://philosophy.stackexchange.com/questions/70930/is-the-idea-of-a-causal-chain-physical-or-even-scientific/72055#72055 I draw your attention to a modern version of Plato's forms https://en.m.wikipedia.org/wiki/An_Exceptionally_Simple_Theory_of_Everything A substrate-independent pattern can be both an identity, and represent a tendency, without being strictly causal. I suggest this is to do with complexity, like making predicting other minds tractable. – CriglCragl Jun 05 '20 at 22:39

2 Answers


The relationship between abstract objects and platonic forms is obviously difficult, as is the relationship between modern (specifically mathematical) platonists and adherents of Plato's philosophy of forms. The latter may seem to have the bigger problem denying the causal efficacy of forms (per your argument). But I am not sure whether the mathematical platonists (or philosophical platonists modelling their platonism on the mathematical one) cannot also be maneuvered into problems concerning their claim of the causal inefficacy of abstract objects.

https://plato.stanford.edu/entries/abstract-objects/ points out the problem in claiming "that abstract objects are distinctively neither causes nor effects", simply because abstract objects like novels "come into being as a result of human activity." So there seems to be a clear one-way causal efficacy that the mind has in creating abstract objects/artefacts. If you now consider what happens if you read such a work, then (at least from a platonist perspective) the abstract object that is Dante’s Inferno causally interacts with your mind via its instantiation (your copy). So in that sense the causal efficacy of abstract objects (or forms) could be argued for.
I am sure that specifically non-platonists would want to deny that this kind of "interaction" can be considered causal in nature, but I am not quite sure what their main argument would be.

Julius Leist

As I understand it, this question: a) pertains to modern Platonists' thinking, and b) has a false assumption behind it.

I'm going to take a chance and make an assumption of my own: if I succeed in explaining (b), then (a) becomes a moot point. If, however, you want to know (a) even if it's hypothetical -- you don't need to read any further.

The false assumption (mentioned earlier in the comment by polcott) is that abstract objects exist in their own realm as non-physical, timeless forms. In Plato's day it was perfectly reasonable to envision this kind of magical cloud storage for concepts; in purely design terms it is far more efficient than the actual implementation. Unfortunately, we can no longer ignore the fact that it needs magic to work.

The recent advances in Artificial Intelligence have accidentally shed light on the most mysterious aspects of the human psyche -- emotions, intuition, consciousness, qualia -- things that we perceive as magical or supernatural. Now we have a model that explains away the magical stuff.¹

For abstract objects specifically (just as for the rest of the concepts/ideas we develop), it means each of them is but a piece of information:

[image: screenshot of a definition -- "that was Google.. last time I checked"]

We can always think of it as a binary string -- because no matter how it is actually encoded, it can always be translated into the famous sequence of ones and zeros. And those strings are very much real, existing as records on some physical medium. In this particular case, that physical medium is our brains. Every person who has acquired an idea of a square has their very own copy.
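A minimal sketch of that claim in Python (the definition string here is just an illustrative stand-in for however the concept of a square is actually encoded):

```python
# Any encoding of a concept can be rendered as ones and zeros.
# Here we serialize a (hypothetical) verbal encoding of "square"
# to UTF-8 bytes, then to a binary string.
concept = "square: a plane figure with four equal sides and four right angles"

bits = "".join(format(byte, "08b") for byte in concept.encode("utf-8"))

print(bits[:24])  # the first three characters ('s', 'q', 'u') as bits
```

The particular representation is arbitrary; the point is only that some physical record of the information always exists.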

And they are copies -- very often they are exact replicas, as is the case with the square. Other times each person comes up with their own ideas, as Mary did after seeing red for the first time. And, given the (very observable) fact that we all share, and are part of, the same objective reality, we end up sharing many identical, similar, or, at times, even predominantly wrong² ideas about it.

Incidentally, this also creates the illusion that every concept exists as a singleton we all share.
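The copies-versus-singleton distinction can be illustrated in Python; the dictionaries below are, of course, only a toy stand-in for two people's mental encodings of "square":

```python
# Two "minds" each hold their own encoding of the same concept.
alices_square = {"sides": 4, "equal_sides": True, "right_angles": True}
bobs_square = {"sides": 4, "equal_sides": True, "right_angles": True}

print(alices_square == bobs_square)  # True: identical content
print(alices_square is bobs_square)  # False: two distinct copies
```

Equal content, distinct objects -- nothing requires a single shared form for the copies to agree.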

¹ available upon request

² like "c'mon, what do you mean that wasn't funny?!"

Yuri Zavorotny
    "we have a model that explains away the magical stuff." is not true. Whether you want to belittle qualia as "magical" or not, the hard problem of our experiencing them remains. My guess is that you are thinking of Integrated Information Theory, which one of its founders described as based on the proposition that "consciousness is what information feels like when it reaches a certain level of complexity." That "magic" feeling is known as the Hard Problem. You have not explained it away, merely brought it into sharper focus. – Guy Inchbald Jun 05 '20 at 10:48
  • What is this model that explains away the "magical" stuff? – ramseysdream111 Jun 05 '20 at 14:50
  • @GuyInchbald -- I never heard of Integrated Information Theory and by the sound of it, it takes the opposite approach to explain our minds. In my model, we have two minds and their designs and principles of operation are nothing alike. One is the rational/conscious self.¹ It allows us to understand things by creating and running a c̸o̸m̸p̸u̸t̸e̸r̸ simulation of the world around in our heads. We understand something when we acquire a mental model for it. We become self-aware when we model ourselves as part of that simulation. ...end of part 1 – Yuri Zavorotny Jun 06 '20 at 00:23
  • ramseysdream111 -- it was too long, so here is a [link to a Google doc](https://docs.google.com/document/d/1uwZ9dbejbwtsDC0MhoVgeRLZisjtHTF2RZ6m49IcQVc/edit?usp=sharing) – Yuri Zavorotny Jun 06 '20 at 00:36
  • @YuriAlexandrovich That is very old news. You do not address the subjective qualities of experience, of what it feels like to run a self-model within a world model. That, specifically, is known as the Hard Problem. It is far older and more well established than IIT. Where Platonism proposes an Ideal realm, IIT proposes that we consider thoughts as information and you simply ignore the Hard Problem as if it did not exist. – Guy Inchbald Jun 06 '20 at 07:06
  • @GuyInchbald -- did you read the document that I linked? All it does is explaining the hard problem. And I did mention it: >> So that's your hard problem. Mary can understand color vision by having a mental model for it in her conscious mind. And she will have the concept of "red" derived from that model, in its "abstract form", if you want. But she will never experience the colors until she sees them with her own eyes. And only when she does, she will integrate it into the qualia concept of .< – Yuri Zavorotny Jun 06 '20 at 10:57
  • @GuyInchbald -- I explain qualia as native capability of neural networks: >>Now how exactly does neural net processes and stores new experiences? Simply memorizing individual images and sifting through them (looking for similarities with the next one) would be terribly inefficient. So, instead, a neural net is designed to integrate its new experiences into past ones, thus developing general ideas or concepts. Like the idea of how a dog looks like (and how it does not). Or, specifically, what patterns would make it likely an image of a dog, and what patterns would suggest that it isn't.< – Yuri Zavorotny Jun 06 '20 at 11:20
  • Perhaps you did not mean "explains away" but merely "explains"? Otherwise, your answer and your comments appear inconsistent. – Guy Inchbald Jun 06 '20 at 12:26
  • @GuyInchbald "what it feels like to run a self-model within a world model?" -- it feels like a dream because it means you think for yourself. No, actually -- thinking process really means using imagination to envision the outcomes of our actions or a model of how things [might] work in reality. If you have ever daydreamed, you know how it feels (and very different from the internal monologue of your "thoughts" -- the latter are not even yours). – Yuri Zavorotny Jun 06 '20 at 14:26
  • @GuyInchbald -- my model explains away the magical cloud storage where Plato's forms would live. In reality, we keep our own copies of every concept, be it a square, or a dog, or color red. – Yuri Zavorotny Jun 06 '20 at 14:35
  • @Yuri- Just a minor point. One of the pioneers of AI recently pointed out in an Economist Science and Technology article that once the hype (or all your claims for the wonders of AI), are scraped away, all it really is in Pattern Recognition. He went on to say that the entire 'self-conscious' aspect of human experience is nowhere in reach for AI now and probably never will be! –  Nov 04 '20 at 04:39
  • Having patterns stored locally and being able to recognise them (as modern AI systems do) is not the same as having a conscious experience of them and feeling them. – Philip Klöcking Nov 04 '20 at 08:34
  • @CharlesMSaunders > "He went on to say that the entire 'self-conscious' aspect of human experience is nowhere in reach for AI" -- would you say it's a good news, or bad? – Yuri Zavorotny Nov 04 '20 at 14:00
  • @YuriAlexandrovich -- "It's information" does not solve the materialists' problem with abstract objects, as information has neither mass nor volume, and is not plausibly material. How to address a world in which abstractions are causally relevant with an ontology which denies this possibility remains a problem for materialists, whether one tries to translate all abstractions into information or not. Meanwhile, the "consciousness is identical to processing" Identity Theory of AI is tested and refuted because abaci and lines of code are clearly not conscious. Same for 99% of brain processing. – Dcleve Nov 05 '20 at 15:59
  • @dcleve -- consciousness IS brain processing, one specific kind of it. Trouble is, that's not what most people do. Not being conscious themselves, how are they supposed to understand anything at all, much less the process that creates it? – Yuri Zavorotny Nov 05 '20 at 16:47