3

I have a difficult question to ask.

Let's imagine that in the distant future of our universe, atoms moving through empty space occasionally arrange themselves into objects. Any object can form; the important thing is that it does not violate the laws of physics (for the details, see my previous post: A terrifying variant of Boltzmann's brain).

Obviously, objects that are less massive and less complex will form more often, because these are random arrangements. Now, this possibility opens up a dangerous scenario: if any object can form, then a human brain can form too. Despite the tiny probability of a brain assembling itself, given enough time (something like 10^10^105 years, while our universe is "only" about 10^10 years old; the amount of time is frighteningly large), nothing prevents this from happening, and happening many times, more times than all the humans who have ever lived on Earth. One might then wonder whether I am such a brain, born eons after the birth of the universe and experiencing false sensations. This problem is called the Boltzmann brain (in detail in my previous post: A terrifying variant of Boltzmann's brain).
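
To give a sense of how lopsided these numbers are, here is a rough comparison (the exact exponents are not the point, and the 10^10^105 figure is only the order of magnitude quoted above):

```latex
\frac{10^{10^{105}}}{10^{10}} = 10^{10^{105} - 10} \approx 10^{10^{105}}
```

That is, dividing that waiting time by the entire current age of the universe barely changes the exponent at all.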

The problem, however, seems easy enough to overcome: a brain born at random would most likely also have random, incoherent memories (a few such brains would have coherent memories, but most would not), and since the world we perceive is coherent, the hypothesis can be rejected a priori, since it leads to a logical contradiction. Such a brain also would not survive long in empty space.

The problem is that just as brains can form, so can computers, which could somehow create human brains and make them believe they are living in a world that is actually simulated. A small digression: one way to refute the Boltzmann brain hypothesis a priori is to show that the most likely scenario in which a brain would find itself in the distant future is inconsistent with my (and, I hope, our) observations.

Going back: a lone brain is, in my opinion, far more unlikely than a computer of the same size, because I believe that in order to function, even for a very short time, it must have cells with the same DNA. It is already very unlikely that the human cells of a brain would form at random; if they must all carry the same DNA, it becomes much more unlikely still. A computer of the same mass, I think, has no comparable restriction: its fundamental units just have to work, no matter what they are made of or how different they are from one another (I think).

So a computer is the most likely thing to form. The most likely scenario a human brain will find itself in will therefore depend on the most likely type of computer (since there will be proportionally many more of that type). Now comes the real question of the post:

Forming at random, is it more likely to get a computer, and therefore an AI, that has innate information and instructions on how to create a human brain? Or is it more likely to get an AI with an innate component, an algorithm capable of understanding (not in a conscious way; a weak AI, in short) and perhaps even of self-improvement, which manages to form a human brain: for example, by forming unicellular organisms by arranging atoms at random, then starting a simulation of an environment in which these beings could live, reaching humans as a species within the simulation, and deciding to simulate one and make it believe it lives in an external world? Most environments would not lead to humans as we know them, but some would.

Sorry for the confusion; it's a complex idea. To contextualize where these AIs are: they exist in a cold universe where almost nothing is left except black holes, iron stars (https://en.m.wikipedia.org/wiki/Iron_star), and neutron stars. They therefore have no possibility of learning from direct experience of the terrestrial environment or, more generally, of a "dead" universe.

However, in simpler words: which is more likely, an AI that already has the innate information and instructions to create a human brain, or an AI that starts with a very powerful learning algorithm, which maybe (I don't know if it is feasible) is able to improve itself a lot and understand the universe very, very well, despite the universe being very dead?

Mark Andrews
Zeruel017
  • Sure, that’s physically possible. Just as, in universe v1.0, dinosaur scientist Dr. Bubblegum built a human simulator for their dinosaur buddies to play in. The question is whether you can leverage this kind of thought experiment into a useful scientific or philosophical action (present post notwithstanding). I’m dubious. – J Kusin Jun 22 '22 at 20:02
  • I wrote this because I am concerned about this hypothesis, however, for you, what is the more likely hypothesis of the two I have described? – Zeruel017 Jun 22 '22 at 21:18
  • > *which is more likely, an AI that already has the innate information and instructions to create a human brain, or an AI that starts with a very powerful learning algorithm, which maybe...is able to improve itself a lot and understand the universe very, very well, despite the universe being very dead?* I'm just saying that weighing the odds of something we don't know for sure is possible or how likely it is against another thing that may or may not be possible (understanding while dead) makes any answer to your question as good as any other. – J Kusin Jun 22 '22 at 21:43
  • I think we can at least discuss it, even if we are dealing with possibilities – Zeruel017 Jun 22 '22 at 22:57
  • The question is, why care at all? So you might be a Boltzmann brain, or a brain simulated in a Boltzmann computer. So what? What are you going to do to make sure of it (you can't, because you can trust none of your memories or sensations), and what are you going to do about it (nothing you can do, really)? Your life span might not exceed a few milliseconds, or you could be a real person who has to plan on sustaining yourself for the next 70 years. In the end, you have to go through life providing for the decades to come while handling the possibility that you might die tomorrow, like every single real person. – armand Jun 23 '22 at 03:11
  • I am concerned that if I am a Boltzmann Brain, others do not really exist, they are not really conscious (solipsism) – Zeruel017 Jun 23 '22 at 05:02
  • I hear the sense you want to express; perhaps, more *mysteriously*, even *appearances* and the seemingly innately intuitive *consciousness* do not exist in *any* sense, as claimed in today's other [post](https://philosophy.stackexchange.com/questions/91819/how-to-correctly-understand-the-positions-of-ontological-nihilism) regarding *ontological nihilism*, which hopefully could inspire you further... – Double Knot Jun 23 '22 at 05:59
  • All the objections I have expressed about the boltzmann hypothesis apply to solipsism. If you think we are not real, why do you come to this website asking for our opinion? Are you going to put your money where your mouth is and stop interacting with all the "unreal" people around you? No, you're going to live like the world is real, because otherwise your life is going to be miserable. Any brain time invested on solipsism, Boltzman brain, simulation etc is wasted (save for funny musing). Stop worrying. There are real, impacting problems out there that require your attention. – armand Jun 23 '22 at 06:10
  • I come to ask others because I remain in doubt about whether others really exist and whether I can broadly refute the Boltzmann brain theory; I no longer question reality. – Zeruel017 Jun 23 '22 at 15:14
  • @Zeruel017 we can't refute even basic things. Your question is simply orders of magnitude too speculative. I'm sure you can find an antidote without a direct answer. – J Kusin Jun 24 '22 at 16:41
  • I hope so; I have suffered because of this concept for 6 months, but there never seems to be a definitive answer – Zeruel017 Jun 24 '22 at 22:35
  • Someone I know calls this a "brain loop": a thought that you can't get away from. Put your mind on something else. – Scott Rowe Jul 29 '22 at 02:22

2 Answers

1

Tangents

There are a couple of mistakes in your premise which aren't pertinent to the question you're asking; I feel the need to address them to begin with.

> in order to function, even for a very short time, it must have cells with the same DNA

Cells can function, for a short time, without any DNA in them at all! To give an oversimplified explanation, DNA is used for construction and upkeep (making stuff) but is not required for existing proteins and chemicals to continue to function (doing stuff).

> So a computer is the most likely thing to form. The most likely scenario a human brain will find itself in will therefore depend on the most likely type of computer

You've taken a computationalist view of people, so you should model human brains as single-purpose computation devices that do only one thing: simulate the mind of whoever's brain it is.

Now, onto your regularly scheduled answer.

Computationalism

In computationalist theories of mind, people are algorithms. Really, really complicated algorithms, yes, but still algorithms. This means that there are several different possible substrates that a given person could exist on:

  • A brain.
  • A brain with extra caffeine.

And already you start to run into interesting problems with the idea of personal identity. That's complicated, so I'm going to go with a paraphrased Heraclitus:

> No man can step in the same river twice, for it is not the same river, and he is not the same man.

This is not something I quite agree with – not in this oft-quoted form, anyway. But it's easier to reason about this if we define personal identity thus:

If the same computations are performed, with the same input, to give the same output: we can say that, if a person is being simulated by the computations, it is the same person in each.

So, our substrates begin to look more like:

  • A brain, given a very specific environment.
  • A computer simulating that mind with that same environment, running at 2× speed.
  • A computer simulating another computer simulating that mind with that environment, running at 0.0000001× speed.

Identical simulations

Imagine you are running a computer program. You pause the computer's state, and save it to disk. You copy the disk to an identical computer, and boot them both up: there are now two identical copies of the program running. You run them for a while – maybe a trillion cycles – and pause-save their states once that time has elapsed.

You could compare the copies of the program state, bit by bit, and write each bit to a new hard drive, progressively erasing the originals, until you'd "merged" the two identical copies of the program state "back together". Or you could just chuck one of the drives into a furnace, since they're identical; that "merging" procedure didn't do anything different, other than move the data to a different drive.
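
A minimal sketch of this pause/copy/compare procedure, in Python; the deterministic update rule below is a made-up stand-in for "a computer program", not any program in particular:

```python
import hashlib

# Toy stand-in for a computer program: a deterministic state machine.
# Any deterministic update rule would do; this is an arbitrary 64-bit
# linear congruential step.
def step(state: int) -> int:
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def run(state: int, cycles: int) -> int:
    for _ in range(cycles):
        state = step(state)
    return state

snapshot = 42                        # pause the program; save its state to "disk"
copy_a, copy_b = snapshot, snapshot  # copy the disk to an identical computer

# Boot both up and run them for a while (a million cycles here,
# standing in for "a trillion").
state_a = run(copy_a, 1_000_000)
state_b = run(copy_b, 1_000_000)

# Compare the saved states bit by bit: they are identical, so "merging"
# them changes nothing, and discarding either copy loses nothing.
assert state_a == state_b
print(hashlib.sha256(state_a.to_bytes(8, "little")).hexdigest())
```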

Imagine that this computation represented a person.

Is there a difference, if you run the second copy only to destroy it later? What if you destroy the first copy? What if you never destroy either? What if you record the state of the computation and some inputs after that point, then, years later, re-simulate that period of the person's life?

Boltzmann brains

> Going back: a lone brain is, in my opinion, far more unlikely than a computer of the same size...

The computer of the same size is more likely to form, but not for the reason you gave. Here's my proof (a toy counting sketch follows the list):

  • Suppose all human brains are computers.
  • There are other computers of the same size that are not human brains.
  • Therefore, more arrangements of atoms of that size are computers than are human brains.
  • Therefore, a random arrangement is more likely to be a computer than it is to be a human brain.
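
The same argument as a toy counting model; the "computer" and "brain" criteria below are entirely invented, and only the subset relation between them matters:

```python
import itertools

# Enumerate every possible 12-bit "arrangement of atoms".
arrangements = list(itertools.product([0, 1], repeat=12))

def is_computer(arrangement):
    # Invented criterion: at least four 1-bits ("enough working parts").
    return sum(arrangement) >= 4

def is_brain(arrangement):
    # Brains are a strict subset of computers: a computer whose first
    # four bits are all 1 ("one very specific wiring").
    return is_computer(arrangement) and arrangement[:4] == (1, 1, 1, 1)

computers = sum(map(is_computer, arrangements))
brains = sum(map(is_brain, arrangements))
print(brains, computers)  # brains <= computers, by construction
```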

This is a trivial argument. But, keep in mind: these Boltzmann brains are still subject to ordinary laws of physics once they've formed. The brain will boil, then freeze. The computer will run out of battery power – if it even had a battery; a computer without a battery is more likely to form than a computer with one. The universe is, by this point, cold and dead; no external energy source will appear.

or it is more likely an AI that has an innate component, an algorithm capable of understanding (not in a conscious way, a weak AI in short) and maybe even self-improve, and get to form a human brain, for example, forming unicellular organisms, arranging atoms randomly, and then starting a simulation of the environment in which these beings could live and then in the simulation, get to humans as a species and decide to simulate one and make them believe they live in an external world.

There is nowhere to self-improve into. If you drop the most powerful AI into a cold, dead universe with no battery power, it won't be able to do very much of anything before it stops thinking. (Likewise, with the Boltzmann brains.)

No, the only really spooky part of Boltzmann brains is that they can potentially perform small fragments of computation. Each additional second of thought is (figuratively) exponentially less likely than the previous, so under the Boltzmann brain model, the vast majority of thought occurs in those fleeting instants before the brains boil, freeze, run out of power, get exploded by a rocket-propelled grenade, melt, turn into a bowl of petunias, or whatever else. Most fragments of thought occur in the smallest possible meaningful units that they could, because they're unlikely to have any more energy than needed. (The fact they exist in the first place is already unlikely enough!)
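
One schematic way to make "each additional second is less likely" quantitative is the standard Boltzmann suppression factor for thermal fluctuations; treating the energy cost as roughly linear in thinking time is a simplifying assumption:

```latex
P(E) \propto e^{-E / k_B T}
\quad\Longrightarrow\quad
P(n\ \text{seconds of thought}) \propto e^{-n E_1 / k_B T}
  = \bigl(e^{-E_1 / k_B T}\bigr)^{n}
```

Each extra second multiplies the probability by the same astronomically small factor, which is why the shortest meaningful fragments of thought dominate.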

The only reason we might want to consider the Boltzmann brain thought experiment is that (if it's correct), most thoughts are thought by Boltzmann brains. Considering atypical cases like "it also has a battery that allows it to simulate an entire lifetime" is about as absurd as considering "maybe you're not a Boltzmann brain at all". (If we posit a brain with a life-support system, why not posit an entire solar system?)

Tegmark's Computational Universe Hypothesis (CUH)

If we consider all "Boltzmann computers", without restricting ourselves to the ones that are brains, then the distant, distant future of our universe is made up of fragments of the Tegmark Level 4 Multiverse.

Causal relationships

Most Boltzmann brains have not ever existed within this early universe.

If you track a complex thought in a single mind through all the Boltzmann brains in the universe, you'll find no causal relationship between the minds that make it up. Some of them will probably occur out of sequence, even! Without the omniscience our thought experiment gives us, identifying which arrangement of matter is the future state of the mind requires considerable energy.

Given that all computable minds are possible, and we're selecting specific possibilities to consider: how is that different from choosing which bits to write on a hard drive? This aspect of the thought experiment is where the complex minds get plucked out of the space of all possibilities; if selection is creation, that's where the minds were created.

In Boltzmann's actual reality, the same mind will have the same thoughts a trillion, trillion times before it ever has that thought and then the next. This is not conventional existence. Your intuitions don't apply – but other than that, I have very few answers.

Where are minds plucked from?

If you're familiar with Plato's Theory of Forms, this might seem similar. While Plato didn't claim that this was an actual, real place – and, indeed, no Ideal Circles can be found here – Boltzmann's distant future contains all possible finite arrangements of matter. That which could potentially exist, will exist with a probability of 1.

I'm not aware of anybody getting existential dread from contemplating Plato's Forms.

What is the most probable AI?

To circle back to your titular question:

> What is the most probable AI?

The answer's quite boring. It depends on how you define "AI". Whatever your definition of AI, the most probable one to occur as a "Boltzmann AI" is the one made of the fewest particles in the lowest-energy state. The more you constrain your definition, the less likely those AIs are.

The probability of all such AIs is 1.
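
To see why the probability saturates at 1, idealize the far future as endlessly many independent "epochs", each with some fixed nonzero chance p of producing such an AI (the independence and the fixed p are simplifying assumptions):

```latex
\Pr(\text{at least one occurrence in } n \text{ epochs})
  = 1 - (1 - p)^{n} \xrightarrow{\;n \to \infty\;} 1
\qquad \text{for any } p > 0
```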

Further reading

You might find Greg Egan's Permutation City an interesting read; it explores some of these ideas, and many others besides. Failing that, Eliezer Yudkowsky's The Finale of the Ultimate Meta Mega Crossover is probably worth a shot.

wizzwizz4
  • thanks for the answer, and in any case I have to make two clarifications: 1) I do not necessarily support the computationalist vision; on the contrary, I am assuming that a human consciousness cannot be simulated, and therefore the only way is through a biological basis (a brain), which would lead to the brain-in-a-vat + Boltzmann AI scenario (so the AI should also keep the brain or body alive with nourishment). 2) If the proton does not decay, black holes will still remain at that time, emitting Hawking radiation from which energy can be extracted, as well as neutron or iron stars, from which energy could be extracted. – Zeruel017 Jun 27 '22 at 10:17
  • @Zeruel017 Black holes and iron stars only exist for a short period of time – a blink of an eye, really – and the existence of such a human-creating AI in our future light cone during this period is astronomically less likely than 1 in 8 billion; you're more likely to exist on real-life Earth. (If some Boltzmann AI could deconstruct iron stars for energy, some _other_ Boltzmann AI is likely to have _already_ deconstructed them for energy, so you're really left with only black holes as an energy source.) – wizzwizz4 Jun 27 '22 at 11:09
  • Why is it limited to our light cone? And what do you mean by a blink of an eye? – Zeruel017 Jun 27 '22 at 11:56
  • @Zeruel017 Limited to our future light cone, to the observable universe, to a region of space a billion trillion trillion quadrillion trillion trillion times the size of the observable universe… Doesn't really matter. The distinction is a rounding error. By "a blink of an eye", I mean [10^1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000](https://en.wikipedia.org/wiki/Timeline_of_the_far_future) is barely the start of the timescales at which the Boltzmann brain thought experiment operates. – wizzwizz4 Jun 27 '22 at 12:03
  • I suppose "a human-creating AI" would be likely to occur, since that only requires creating at least one human, but the kind of human-creating AI that you describe (one that exists in the vicinity of a black hole, and has the means to harvest significant amounts of energy from that black hole, and creates humans) would not be likely to occur. – wizzwizz4 Jun 27 '22 at 12:05
  • And why should it be limited? Second, even if black holes and the rest perish in a short time compared to the timescales of a Boltzmann "thing", they will persist until around 10^10^105 years (https://en.m.wikipedia.org/wiki/Future_of_an_expanding_universe); obviously they will follow one another, and a supermassive black hole decays in about 10^106 years, so it is possible that a Boltzmann AI encounters a black hole to sustain itself. Third, even if such an AI is less likely to be born near a black hole, those are the only ones that survive, so there is a selection effect. CONTINUED – Zeruel017 Jun 27 '22 at 13:37
  • We only select those that become the most likely. – Zeruel017 Jun 27 '22 at 13:37
  • @Zeruel017 The existence of a Boltzmann thing bigger than a molecule is _incredibly_ unlikely. The only reason it's something we're even talking about is because there's (perhaps) _eternity_ for them to potentially form in. You shouldn't assume that a Boltzmann thing has access to any source of external power, because that isn't the premise of the thought experiment. "Even unlikely things will happen, given eternity" is one thing: "even unlikely things will happen, given 10^120 years" is a completely different thing. – wizzwizz4 Jun 27 '22 at 13:56
  • The probabilities are compatible with estimates of the lifetime of the universe in the most probable case known today: a Boltzmann brain forms after 10^10^68 years – Zeruel017 Jun 27 '22 at 15:48
  • The probability of life existing in the universe is 1, because it already exists. – wizzwizz4 Jun 27 '22 at 15:53
  • OK, but what does that have to do with it? – Zeruel017 Jun 27 '22 at 17:10
  • I did not understand what you meant by the comment I replied to. – wizzwizz4 Jun 27 '22 at 18:27
  • OK, I'll rewrite it: according to estimates, the universe will last about 10^10^120 years, and Boltzmann brains will statistically appear after 10^10^68 years; that is a very long time, but not infinite – Zeruel017 Jun 27 '22 at 23:12
  • @Zeruel017 No known law of physics says the universe will _stop_ at 10^10^120. If the Boltzmann brain thought experiment is valid, those Boltzmann brains will continue to appear after that point. – wizzwizz4 Jun 28 '22 at 22:49
1

Humans will try to create artificial intelligence. Whether that is wise is an open question, but they will. The problem is that we can create quite powerful computers and powerful software, but our software is nowhere near capable of producing AI, and we don't know how to do it.

The most likely successful approach is to heap together an awful lot of hardware, then add the best software we can think of, and let it loose. We can create software whose results are not predictable. A simple example is a chess computer: we have no idea what moves it will make, except that we expect them to be good moves. Likewise, we can build a machine that observes the world, figures out on its own which actions produce which effects, and learns to perform the actions that produce the effects it wants (a toy sketch of this loop follows below). Once it is good enough at this, we will call it intelligent.
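
A toy sketch of that observe-act-learn loop, written as a minimal Q-learning agent; the two-state "world", the rewards, and all the constants here are invented purely for illustration:

```python
import random
from collections import defaultdict

# A hypothetical two-state, two-action world. The agent is never told
# these rewards; it must discover them by acting and observing.
REWARDS = {(0, 1): 1.0, (1, 0): 1.0}  # (state, action) -> reward
Q = defaultdict(float)                # the agent's learned action values

def act(state, epsilon=0.1):
    # Mostly exploit what has worked so far; occasionally explore at random.
    if random.random() < epsilon:
        return random.choice([0, 1])
    return max([0, 1], key=lambda a: Q[(state, a)])

state = 0
for _ in range(10_000):
    action = act(state)
    reward = REWARDS.get((state, action), 0.0)  # observe the effect
    next_state = action                         # toy dynamics: the action picks the next state
    # Q-learning update (learning rate 0.1, discount 0.9): adjust the
    # estimate of how good this action was in this state.
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += 0.1 * (reward + 0.9 * best_next - Q[(state, action)])
    state = next_state

# The learned values favour the rewarded action in each state.
print({k: round(v, 2) for k, v in Q.items()})
```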

That's where things become dangerous. We have just enough brains to support our intelligence, because intelligence and brain capacity developed at the same time. By throwing together lots of hardware, we may have created an AI with a "brain" that is much more powerful than a human brain, and the AI may be able to develop quickly to take advantage of that extra power, making it substantially more intelligent than a human and making it think substantially faster. It may replace our primitive programming with something more advanced that lets it make better use of the hardware. It may learn how to hack into other computers and use their capacity as well. And suddenly it's out of control.

gnasher729