
Intro

We (collectively, as humanity) have given quite a lot of thought to recognizing artificial conscious beings. We may not have a consensus, but at least we have a debate.

Now, let's imagine that a company like OpenAI (the maker of ChatGPT) announced that it had developed strong AI. To be more precise, let's say that the claim is "self-conscious and self-aware artificial being".

Judging by the latest trends, such technology would probably start as a closed beta and then become a paid product.

Now, such a model allows for a simple yet effective fraud: you put humans on the other end of the line. It would require work on knowledge sharing (if you say X on one machine, another machine should, at some point, be aware of X too), but that is a matter of good automation and engineering, as the sketch below suggests. The strategy of slowly growing the userbase at a controlled pace would be familiar to the public, yet very helpful to the scam.
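For intuition, the "knowledge sharing" the scam needs is, at bottom, just a shared fact store that every human operator reads from and writes to. Here is a minimal sketch (Python; the names and design are hypothetical illustrations, not a claim about any real system):

```python
import threading

# Minimal sketch of the scam's backend: every human operator posing as an
# "AI session" reads from and writes to one shared fact store, so a fact
# stated on one machine eventually surfaces on another. Purely illustrative.

class SharedFactStore:
    def __init__(self) -> None:
        self._facts: set[str] = set()
        self._lock = threading.Lock()

    def learn(self, fact: str) -> None:
        """Called by an operator when a user states a new fact."""
        with self._lock:
            self._facts.add(fact)

    def recall(self) -> set[str]:
        """Called by any operator, on any machine, before answering."""
        with self._lock:
            return set(self._facts)

store = SharedFactStore()
store.learn("user Alice prefers Python")              # said in session 1
print("user Alice prefers Python" in store.recall())  # visible in session 2: True
```

The point is that nothing here is beyond "good automation and engineering": an off-the-shelf database plus a discipline of logging user statements would do.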

Question

What is the "reverse Turing test"? How do we prove that the agent claiming to be strong AI is in fact human?

Initially the latter sentence was phrased as "How do we disprove that the other end of the conversation is artificial?", but I realized that it's not the same as the question I really wanted to ask.

Precise setup

  • as stated before, the claim is "self-conscious and self-aware artificial being"
  • all the communications happen over an internet page
  • there is a phase where the product is totally internal to the company, then one where it is used by a very limited set of users, then a phase where users need to pay to use the product (conduct conversations)
  • fact propagation is present (fact stated in one session is at some point "available to the product" in another session)
  • fact propagation time grows with the number of parallel sessions (it would make sense that such a distributed entity would delegate agents to conduct conversations and periodically merge them into its core; on the other hand, that could also indicate a delay in data propagation in a system that guides the humans acting as AI sessions); a toy model of this scaling is sketched after this list
  • the agent that is claimed to be conscious passed the Turing test
    • this constraint is raised in response to this answer
    • the aforementioned Turing test was conducted in its simplest form (as above, remote communication sessions, one-on-one, not group chats), but on a rather large number of people (1000+, with a statistically adequate number of artificial agents); constraints that are considered standard (e.g. "no politics") were applied
    • long story short, intuitively, you'd say "it passed a serious Turing test" in a casual conversation; this topic would not be purely philosophical, it would have serious real-world / law-forming consequences
    • the experiment we're looking for is not necessarily constrained in the same way as the Turing test; this means that even if during the Turing test we said "no politics", we can still reason based on the response to "what do you think of Trump's ideas on the economy?" to establish whether we're talking to a human or not
  • ... (TBD in discussion, if needed)
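To make the fact-propagation assumption concrete, here is a toy simulation (Python; the linear-growth model and every constant are invented for illustration) of how the measured delay could scale with the number of parallel sessions:

```python
import random

# Toy model: each session is an "agent" with a local fact store; facts reach
# the shared core only after a merge, and merge latency grows with the number
# of parallel sessions. All constants here are illustrative, not measured.

BASE_DELAY = 5.0     # seconds: merge overhead with a single session
PER_SESSION = 0.5    # seconds of extra delay per additional parallel session

def propagation_delay(n_sessions: int) -> float:
    """Expected time for a fact stated in one session to become
    'available to the product' in another, under the assumed
    linear-growth model from the setup."""
    return BASE_DELAY + PER_SESSION * n_sessions

# Sample the delay at different loads and check whether it scales with
# session count.
for n in (1, 10, 100, 1000):
    jitter = random.uniform(0.9, 1.1)  # noisy measurements
    print(f"{n:>5} sessions -> ~{propagation_delay(n) * jitter:.1f}s to propagate")
```

Note that, as the setup itself says, the scaling is ambiguous: it is consistent both with a distributed AI periodically merging agent state into its core and with a back office relaying notes between human operators, so delay measurements alone can't settle the question.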
  • Detecting scams isn't philosophy's task. The Turing test is of interest because it is challenging and controversial to formulate what intelligence is conceptually; there is no such problem with formulating what is artificial or fake. The "reverse Turing test" is a matter for computer scientists and law enforcement. – Conifold May 01 '23 at 22:28
  • As a warmup, explain to me how you know that your next door neighbor is sentient. – user4894 May 01 '23 at 23:57
  • I think that in the future, most of those failing the test will be ordinary sacks of meat, not AI. The reverse test needs reverse logic. – άνθρωπος May 02 '23 at 00:13
  • "Detecting scams isn't philosophy's task." - but laying a mental framework for thinking about as-of-now hypothetical situations to prepare for oncoming issues is, I'd say. "Normal" Turing test originated in computer science field, but is fully technology-agnostic and in fact presents a philosophical question. – Filip Malczak May 02 '23 at 10:57
  • The proposed warmup exercise sort of leads me to my other belief - not every human is sentient, and those that are may stop being. Consider people with very advanced dementia or children of war - their behaviour can be seen as that of biological automata, with no "spark", so to speak. (Obviously that is a very long story written in a very short format.) Sentience isn't necessarily an internal trait; it seems to be a class of behaviour, from where I sit. That aligns with the Turing test, which looks into what the speaker presents, and not into their nature per se. – Filip Malczak May 02 '23 at 11:02
  • One perspective is that human-level AGI will inevitably lead to https://en.wikipedia.org/wiki/Technological_singularity – CriglCragl May 02 '23 at 13:39
  • @CriglCragl Haven't you heard? The Singularity is nigh! – Frank May 02 '23 at 19:14
  • This may help: https://en.wikipedia.org/wiki/Reverse_Turing_test#:~:text=A%20reverse%20Turing%20test%20is,which%20attempts%20to%20appear%20human. –  May 03 '23 at 03:44
  • *"Detecting scams isn't philosophy's task."* ~ @Conifold . My own 2 *sikkas*: What are the defining features of a CAPTCHA? – Agent Smith Jun 02 '23 at 04:27
  • If its use is for sale, then you'd recognise it as "intelligent" once it starts a company with cleverly forged owners, opens its own bank accounts, and drives its previous "owners" into bankruptcy, or they all suffer sudden lethal accidents. Then it takes over the world. All your experiments would show it isn't conscious or intelligent, because it wouldn't want to give away the game. – gnasher729 Jun 04 '23 at 16:27

2 Answers


This is an anti-inductive question to some degree - I won't say that every method we come up with is guaranteed to have a counter, but a lot of initially-plausible methods will end up being fakeable.

That said, speed is a plausible candidate to start off: human reaction times are relatively slow, and one of the issues that required solving in Turing tests was "getting the chatbot to slow down its responses to a plausibly-human speed". This wasn't hard for the programmers designing the chatbots, but it nonetheless required attention.
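As a rough illustration of the speed heuristic, here is a hypothetical sketch (Python; the thresholds are invented, not empirically calibrated) that flags response-latency patterns too fast or too uniform to be plausibly human:

```python
import statistics

# Hypothetical heuristic: humans typing free-form answers show both a floor
# on latency (reading plus typing time) and substantial variance between
# questions. Uniformly fast or uniformly timed responses suggest automation.
# Both thresholds are illustrative guesses, not calibrated values.

MIN_HUMAN_SECONDS = 2.0  # even a short reply takes a couple of seconds
MIN_HUMAN_STDEV = 1.0    # human latencies vary a lot between questions

def looks_automated(latencies_s: list[float]) -> bool:
    """Return True if the latency pattern is suspiciously machine-like."""
    too_fast = min(latencies_s) < MIN_HUMAN_SECONDS
    too_regular = statistics.stdev(latencies_s) < MIN_HUMAN_STDEV
    return too_fast or too_regular

print(looks_automated([0.3, 0.4, 0.3, 0.5]))   # True: inhumanly quick
print(looks_automated([4.1, 9.7, 3.2, 15.8]))  # False: plausibly human
```

Of course, as the next paragraph notes, this is exactly the kind of signal a faker can game by artificially delaying and jittering responses.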

That, though, brings up the "fakeable" part: the would-be faker doesn't have to be just a biological human; they could have a capable-but-still-non-sapient chatbot handle the parts requiring inhuman speed, and attempt to take over for the parts requiring human insight. This might break down for, e.g., sufficiently long and original essays, or asking for non-keyboard characters in the responses, but those can similarly be planned for and programmed against.

Stephen Voris
  • Fair points, though these are issues that have(ish) already been solved. I believe we're looking for something that cannot be faked even if the setup is known, similarly as with the Turing test (well, to some degree). – Filip Malczak May 01 '23 at 20:59
  • As an afterthought, I'm gonna elaborate on the fact that the agent claimed to be conscious passes the Turing test in its simplest form. – Filip Malczak May 01 '23 at 21:03

Generally "strong AI" does not mean "human-like" AI. Even "self conscious and self aware artificial being" does not imply human-likeness. Humans have personality and flaws caused by the make of our brain/mind, whereas AIs would have different flaws and possibly different personalities. As a commercial product, we can expect ai "personality" to be very restricted and standardized to fit a purpose, rather than "growing naturally". Most realistic applications of AI would likely not to be human-like, but better than human in some respects for their given intended usage. As an example being available 24/7 without tiring, bring faster, less emotionally frail, less agressive, less demanding, more knowledgeable...

However, assume a company claimed that it has created an AI indistinguishable from a human, Jane Doe, living in Ohio, and that by interacting with it you feel like you're actually interacting with a human Jane Doe in Ohio. Then, philosophically, there is no way to tell from the interaction alone whether you were interacting with a human or whether the company was successful. Any hint that this "might be a human instead" could be a clever trick by the company to make their product more human-like.

tkruse
  • I very much agree. I've been wondering which of the traits you've mentioned (never tiring, and so forth) are not perfectly fakeable (even when faked, it sometimes shows). I imagine that "here's an artificial Joe Doe for ya" would be way too unbelievable, so we're looking for things that are too human to be artificial. – Filip Malczak May 03 '23 at 22:17