
A user on Reddit was told by the artificial intelligence ChatGPT that solipsism is true. Why did it say that?

Is there any evidence of solipsism that ChatGPT knows about?

Should ChatGPT be trusted or is it wrong?

  • [WSJ, ChatGPT Needs Some Help With Math Assignments](https://www.wsj.com/articles/ai-bot-chatgpt-needs-some-help-with-math-assignments-11675390552): "*While the bot gets many basic arithmetic questions correct, it stumbles when those questions are written in natural language. For example, ask ChatGPT “if a banana weighs 0.5 lbs and I have 7 lbs of bananas and nine oranges, how many pieces of fruit do I have?” The bot’s quick reply: “You have 16 pieces of fruit, seven bananas and nine oranges”*". I wouldn't pay much attention to what ChatGPT has to say on more complex matters. – Conifold Mar 03 '23 at 14:46
  • The first thing to do would be to not listen to ChatGPT - in general, it is as likely to generate misinformation as accurate information, due to the way it operates. Also check out: https://www.bigmessowires.com/2023/03/01/oceania-has-always-been-at-war-with-eastasia-dangers-of-generative-ai-and-knowledge-pollution/ – Frank Mar 03 '23 at 14:55
  • If you think solipsism is true, who do you think is going to answer this question? Russell once said that a woman wrote to him to say that she was a solipsist and she was surprised there weren't more of them. – Bumble Mar 03 '23 at 15:03
  • If ChatGPT were recognized as an authoritative source, would we have to start believing in solipsism? – Robert Antoni Mar 03 '23 at 16:06
  • Does this answer your question? [Are there any philosophical arguments to disprove or weaken solipsism?](https://philosophy.stackexchange.com/questions/260/are-there-any-philosophical-arguments-to-disprove-or-weaken-solipsism) – David Gudeman Mar 03 '23 at 16:35
  • First, the question of whether ChatGPT should be trusted is better suited to Cross Validated or related Stack Exchanges. Second, you can ask ChatGPT for its sources, as @Frank points out in the answer, and, as with any other state-of-the-art artificial intelligence system, its outputs should be treated with extreme caution. We are still far away from completely error-free AI. –  Mar 03 '23 at 16:50
  • @eirene infinitely far away, most likely. – Scott Rowe Mar 03 '23 at 21:00
  • That is, we should not believe ChatGPT who says that solipsism is true? – Robert Antoni Mar 03 '23 at 21:20
  • I think that the process of asking a math question to ChatGPT is about as trustworthy as typing the question into a web search engine, copying the numbers from the first 100 answers, and returning the most common number. – Stef Mar 04 '23 at 15:54
  • Solipsism means never having to give your sources. – Boba Fit Mar 05 '23 at 16:53
  • Is your question about whether ChatGPT can be considered evidence of *any* claim it makes, or of solipsism specifically? Because if the latter, the question is essentially "Which of the following is true: 1) I imagined that a bunch of people (who don't exist) wrote text (that I haven't read and therefore doesn't exist), and some other people (who don't exist) trained a statistical model on it (which sort of exists, since I imagined using it), that correctly claims that solipsism is real, or 2) Real people generated real data and a model trained on it falsely claimed that solipsism is true" – Ray Jun 26 '23 at 17:57
  • Whereas the question of whether ChatGPT claiming some statement P to be true is evidence of P is much more easily answerable (It technically is, but only because people are slightly more likely to make a claim if it's true than if it's false, and therefore a model trained on text people write will be slightly more likely to generate a claim in a world in which it's true, all other things being equal. It's really *really* weak evidence, though.) – Ray Jun 26 '23 at 18:01
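The arithmetic in the WSJ example quoted above is easy to verify: at 0.5 lbs per banana, 7 lbs is 14 bananas, plus 9 oranges makes 23 pieces of fruit, not the 16 ChatGPT claimed. A quick check:

```python
# Verifying the WSJ banana question that ChatGPT got wrong.
lbs_of_bananas = 7
lbs_per_banana = 0.5
oranges = 9

bananas = int(lbs_of_bananas / lbs_per_banana)  # 7 / 0.5 = 14 bananas
total_fruit = bananas + oranges                 # 14 + 9 = 23 pieces of fruit

print(bananas, total_fruit)  # 14 23
```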

1 Answer


Unequivocally, due to the way it currently operates, ChatGPT should not be trusted at the moment. It is about as likely to produce misinformation as accurate information. In fact, it currently has no sense of what is true and what is false, something I have experienced personally, and which is surfacing more and more.

In my personal experience, I've seen ChatGPT produce:

  • Incorrect computer code, with, e.g., a loop variable name changed midway through the loop in a nonsensical way (just one example)
  • Incoherent mathematical proofs, where the result to be proved was used in the body of the proof itself
  • Philosophical verbiage that looked good on the surface but was a barely logical collage (with a patronizing tinge), including some slight lapses in logical reasoning

and more ...

Here is a list of references about "solipsism" generated by ChatGPT just now: [screenshot of the generated reference list]

I was unable to find some of the books mentioned in that list on Amazon.

In the end, what ChatGPT does is only a collage of what it has seen in its training data, with no verification of whether the result is accurate, coherent, consistent, logical, meaningful or trustworthy. Check the article I linked in the comments; it's illuminating: ChatGPT will generate any scientific paper you want, complete with an extensive list of ... entirely fake references. That should give pause to anybody who wants to use ChatGPT as an authoritative source.
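A toy sketch of the underlying idea (vastly simplified; real models use large neural networks over tokens, not word-pair counts, and the training text below is made up for illustration): the model only ever emits a statistically likely continuation of its training text, and no step anywhere checks whether that continuation is true.

```python
from collections import defaultdict

# Toy bigram "language model": count which word follows which in the
# training text, then always generate the most frequent continuation.
# Nothing in this process checks whether the output is true.
training_text = "solipsism is true . solipsism is false . solipsism is true"

counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for w1, w2 in zip(words, words[1:]):
    counts[w1][w2] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word` in the training text."""
    followers = counts[word]
    return max(followers, key=followers.get)

print(most_likely_next("solipsism"))  # "is"  (its only follower)
print(most_likely_next("is"))         # "true" (seen twice, vs. "false" once)
```

If the training data had said "solipsism is false" more often, the same code would emit "false" instead; the output tracks frequency in the data, not truth.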

It's possible those problems will all be overcome in the future, but right now they are glaring issues that can't be avoided.

Frank
  • That is, when the ChatGPT artificial intelligence says that solipsism is true, we should not pay attention to it, because it is an incorrect statement. But in the future, if artificial intelligence becomes more reliable, will we have to believe in solipsism? I can't understand it. That is, artificial intelligence will be able to find some evidence of solipsism in the future and we will have to accept solipsism? – Robert Antoni Mar 03 '23 at 17:49
  • Only if AI "finds some evidence"; but currently, AI is not really set up to do that, except maybe in some domains where it is used to sift through data. It is certainly not the case with ChatGPT. ChatGPT would not be able to find anything that has not been fed into it previously, essentially. ChatGPT is not making any discovery. It's only regurgitating a patchwork of tidbits we have fed into it. These systems don't have any creativity or originality, except in the way they string together things we have fed into them. Well, even there, they just follow the most likely statistics, that's all. – Frank Mar 03 '23 at 17:52
  • Thank you, I understand now. Tell me, could artificial intelligence in the future convince us that solipsism is true? As far as I understand, it is impossible to find evidence for solipsism, so all an artificial intelligence can do is generate some kind of argument in favor of solipsism. In that case, should we listen to this argument and accept solipsism? – Robert Antoni Mar 03 '23 at 18:22
  • I think that whether there is evidence for solipsism or not in the future is independent from who/what will find that evidence. If anybody or any AI system makes an argument, it will have the same value as any other argument, the fact that it would be generated by AI would not confer it any special authoritative value. – Frank Mar 03 '23 at 18:36
  • That is, there is no difference whether artificial intelligence will create an argument or some philosopher will create an argument? That is, they will have the same value? – Robert Antoni Mar 03 '23 at 18:59
  • Yes - the argument's merits have to stand on their own, irrespective of who or what made the claim. Otherwise, that would be committing the fallacy of "appeal to authority". So, the argument and evidence should be examined regardless of who or what produced them. – Frank Mar 03 '23 at 19:46
  • So when an artificial intelligence (ChatGPT or otherwise) says that solipsism is true, we should not believe it until it provides us with evidence and compelling arguments in favor of solipsism? – Robert Antoni Mar 03 '23 at 20:26
  • That seems right to me. – Frank Mar 03 '23 at 20:36
  • I'm guessing that ChatGPT is actually a "Chinese Room" full of college students. – Scott Rowe Mar 03 '23 at 21:04