
The Turing Test is, roughly, the idea that when a computer and a human can't be differentiated by an interrogator, that computer counts as an AI.

The Identity of Indiscernibles boils down to this: if A is indistinguishable from B, then A = B.

Suppose now that such an AI is realized/actualized. Since this AI can't be told apart from humans, it follows by the Identity of Indiscernibles that AI = humans.

If so, then per the Indiscernibility of Identicals, we have a dilemma on our hands (a formal sketch follows the list):

  1. AI is sentient because we're sentient.
  2. We're not sentient because AI is not sentient.
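
Schematically, in a standard symbolization of Leibniz's two principles (my own notation, quantifying over properties F, with a = the AI, h = a human, and S = "is sentient"):

    % Identity of Indiscernibles
    \forall F\,(Fa \leftrightarrow Fh) \;\rightarrow\; a = h
    % Indiscernibility of Identicals (Leibniz's Law)
    a = h \;\rightarrow\; \forall F\,(Fa \leftrightarrow Fh)
    % The argument:
    % 1. \forall F\,(Fa \leftrightarrow Fh)   (Turing-indistinguishability, assumed)
    % 2. a = h                                (1, Identity of Indiscernibles)
    % 3. Sa \leftrightarrow Sh                (2, Indiscernibility of Identicals)

Line 3 is the dilemma: either Sa and Sh are both true (horn 1) or both false (horn 2).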

What are the implications of The Leibniz-Turing Dilemma?

Agent Smith
  • Maybe that we can use it as a negative criterion for discernibility: if a specific behavior can be replicated by a computer, then it is not an "essential" trait of human intelligence. – Mauro ALLEGRANZA Feb 05 '23 at 08:18
  • The Turing test is just a minimal standard that a "human-like" AI should pass. And even if the AI passes something much stronger the dilemma will much depend on how exactly it was "realized". If we grow humans in a Petri dish few will have a problem with ascribing "sentience" to them. If the "realizing" of the AI is similarly opaque (it already is opaque even for commercial ANN today), there will be no dilemma either. The dilemma will only emerge if the realization is such that it gives us strong reasons to disbelieve the AI's "sentience". I have no idea why that would be the case. – Conifold Feb 05 '23 at 08:52
  • @MauroALLEGRANZA, is not *thinking* an *essential feature* of being human? – Agent Smith Feb 05 '23 at 09:07
  • @Conifold, the *dilemma* arises if we give our nod of approval to *the identity of indiscernibles* and subsequently *the indiscernibility of identicals*. – Agent Smith Feb 05 '23 at 09:09
  • The Turing Test involves a test to see if you can tell a human from a computer only by interacting through an interface. The identity of indiscernibles doesn't allow for such limited testing. If there is any possible way of detecting a difference, then it doesn't claim identity. – David Gudeman Feb 05 '23 at 09:21
  • Obviously, Leibniz's rule does not apply: an AI is not human, because many of its properties differ from those of a human being, despite intelligence. – Mauro ALLEGRANZA Feb 05 '23 at 13:18
  • It's funny how "AI" has become a name of some entity like "human", when originally it was just the name of a sub-field of computer science. – Frank Feb 05 '23 at 16:42
  • @DavidGudeman, interesting point. We could say that Leibniz's rule doesn't apply in the limited sense in which it figures in the Turing Test. – Agent Smith Feb 06 '23 at 02:17
  • @MauroALLEGRANZA, I concur, but if we were to allow a more restricted application of Leibniz's rule, we do have a dilemma. – Agent Smith Feb 06 '23 at 02:18

2 Answers


The comment you appended to the excellent answer by Paul Ross shows that you are missing an important point. You can make almost any pair of objects indistinguishable by imposing constraints on one's ability to distinguish them. If I expose only a square inch of my old Isuzu pick-up truck, you might find it indistinguishable from a corresponding glimpse of a Rolls Royce. Does that mean my truck is a Rolls Royce? Clearly not. The example might appear trivial, but it illustrates the principle: if AI seems indistinguishable from humans only in a limited sense, then you cannot assume that AI has all of the properties of humans. You therefore need to rephrase the question as follows: given that AI has a given subset of capabilities that are observably indistinguishable from those of humans, does that mean AI is sentient? To answer that question with any degree of confidence you would need to have an accepted theory of consciousness.
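
To put the point in symbols (my own sketch, not part of any standard formulation): the Turing Test delivers indiscernibility only over the restricted set T of properties probeable through the conversational interface, whereas the Leibniz principle needs unrestricted quantification:

    % Identity of Indiscernibles quantifies over ALL properties:
    \forall F\,(Fa \leftrightarrow Fh) \;\rightarrow\; a = h
    % The Turing Test establishes at most agreement on a proper subset T:
    \forall F \in T\,(Fa \leftrightarrow Fh), \qquad T \subsetneq \{\text{all properties}\}
    % The restricted premise does not entail a = h, so sentience
    % (a property possibly outside T) need not transfer from h to a.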

Marco Ocram
  • That's a good analogy. The question is what is the correct yardstick for AI-human comparison? – Agent Smith Feb 05 '23 at 09:29
  • @AgentSmith Marco is right: the equality does not hold, because only one little property of both was considered. There is no generalization that holds from that. But why even try to compare a piece of software to humans? – Frank Feb 05 '23 at 16:21

The limitation here is that the identity of indiscernibles is a metaphysical principle, not a principle of any form of distinguishability. A machine constructed from fabricated silicon wafers is distinct from a human who has been born and grown organically, and while you can construct a linguistic communication framework in which the two can produce identical behaviours, that doesn't mean you can apply the Leibniz principle to it.

Now, if you're asking about the question of building an ontology, e.g. for machine learning, then there is something interesting to ask about whether you might treat a person as essentially a very sophisticated form of AI agent. But that works only in a particular context of sense: when they report a malfunction with their internal process, you send a doctor rather than an electrical engineer (at least, if you wanted to help!)

Paul Ross
  • What *distinction* we zero in on matters, then. However, Alan Turing's genius shines through ... we're to ensure that the playing field has been leveled, so to speak: no *physical* features will be revealed; only *mental processes* are *accessible* to the tester/examiner. – Agent Smith Feb 05 '23 at 09:06
  • Mental processes are notoriously *not* accessible to external examination - we read them indirectly through their supervenience on the physical. That's exactly why a system of physical symbol exchange is appealed to as an intermediary over which to conduct the test, and this invites the problem of engineering the context to favour a certain model of distinguishability. – Paul Ross Feb 05 '23 at 09:13
  • True and hence the dilemma, oui? – Agent Smith Feb 05 '23 at 09:34
  • But it doesn't seem to be a well structured "metaphysical dilemma" (in the force of the Leibniz principles) if the distinction at work can be pulled apart in multiple directions. You don't have to say "humans aren't sentient" in order to think that there might be a plurality of competing, disagreeing sentience concepts under evaluation, and that no one of them takes metaphysical priority over the others. And just as there might be a plurality of concepts, so too could feature a plurality of Turing tests emphasising the respective sentience notions. – Paul Ross Feb 05 '23 at 10:03
  • I intelligo. I'm, let's just say, working within the *metaphysical limitations/boundaries* Alan Turing and Leibniz were. – Agent Smith Feb 05 '23 at 12:37