30

I've seen a few different formulations of this, but the most famous is "monkeys on a typewriter" - that if you sit a team of monkeys at typewriters, given infinite time, they will eventually produce the works of Shakespeare, and indeed every text ever written or even conceivable. (Other arguments I've seen include: if the Universe is infinite, there must be a planet exactly like ours somewhere. I'll stick to the monkeys for the sake of this argument.)

I've always been sceptical of this, but it's just occurred to me why - I wanted to ask if my thinking stands up to scrutiny, or if there is a counterargument.

If you have a sequence of coin flips, the probability of heads or tails is always 50:50, no matter the previous sequence. Even if we get a sequence of 10 heads in a row, the probability of the 11th coin flip is still 50:50. Believing otherwise is to engage in the gambler's fallacy - the belief that if a particular event happens more frequently than normal, it's less likely to happen in the future.

So here's my thinking. Say that a decision can have one of two outcomes, A or B, and they're equally likely, 50:50 chance of occurring. Then based on outcome A or B, there are a further two outcomes that could happen - if it's outcome A, you could have outcome C or D (both equally likely), or if it's outcome B, you could have outcome E or F (both equally likely). So the probability of arriving at outcome C, D, E or F is 25%, after taking just two decisions.

If you made 1,000,000 similar decisions, the probability of that final outcome being reached at any one moment is 1 in a million. The larger the number of decisions, the closer the probability gets to zero - if there were infinite decisions, the probability of any one outcome would converge on zero.

Now to me, the "monkeys on a typewriter" genre of arguments seems to be saying that if you take that decision tree and stretch it over an infinite timeline, eventually you will reach all the outcomes on the decision tree. But to me, that doesn't add up. If there are 1 trillion possible outcomes, the probability of one particular outcome is always 1 in a trillion - it will never become more likely simply due to the passage of time. And so you might literally never reach one of Shakespeare's plays by simply hoping that random keypresses will converge on that 1 in a trillion outcome. Because it's just so unlikely.

Is this a fair criticism, or is there more to the "every outcome will happen in an infinite timeline" argument that I hadn't considered? Note that I'm not very mathematically numerate or logically literate - I'd be extremely grateful if complex formulae were either kept to a minimum or explained in layman's terms.

Lou
  • 411
  • 4
  • 7
  • 10
    Are you familiar with the concept of limits in calculus? The probability that a sequence of N coin flips will fail to contain a single tail approaches 0 in the limit as N approaches infinity (for 2 flips, prior to either flip having been done, the probability of getting no tails is 1/4; for 3 flips, it's 1/8; for 4 flips, it's 1/16 etc.), so if someone bases their argument on that I don't see how it resembles the gambler's fallacy. – Hypnosifl Jan 30 '20 at 16:04
  • 25
    Your confusion here may arise because we as humans find it impossible to comprehend "an infinite amount of time" as a completed whole. – nwr Jan 30 '20 at 16:59
  • 5
    Are you considering the probability of full sequences? The kind of argument you mention is about the probability of sequences containing finite strings of characters. The probability of getting exactly one of Shakespeare's plays from the start is indeed very low, but the probability of a random sequence *containing* one of Shakespeare's plays at any position becomes closer to one as the size of sequences increases. – Quentin Ruyant Jan 30 '20 at 18:09
  • 1
    It *could* happen and it *would* happen. But infinity *will* not occur because [∞ is not a number.](https://math.stackexchange.com/a/710385/135092) – Mazura Jan 30 '20 at 23:25
  • Your question and post don't quite line up. Infinite monkeys will generate Shakespeare, because the events are independent. But that doesn't mean that *any* event can happen. Mutually exclusive events, for example, still cannot both occur. For that, you need the infinite worlds – Mars Jan 31 '20 at 05:50
  • @Mars I didn't formulate the argument very neatly at all, that's for sure. I hope the premise of the argument I was trying to refute was clear though. – Lou Jan 31 '20 at 13:52
  • I think you've hit upon a separate fallacy. The monkeys and typewriter argument assumes that monkeys type randomly, which is not the case. They will jump up and down on the keyboard, hit keys repeatedly, develop favorite keys and sequences, and clumsily hit multiple adjacent keys simultaneously. At first, it might seem that given infinite time, they will generate all possible works of art. But in reality, the conditions under which those sequences are generated preclude it. – Peter Rankin Jan 31 '20 at 15:42
  • I'm sure you're right about how the practical monkeys on a typewriter situation would play out, but the monkeys on a typewriter scenario is neither practical nor possible - I'm happy to interpret it as a thought experiment where the monkeys do type randomly. The point isn't the monkeys, it's that a random sequence of inputs (say, caused by a computer alternating between all characters on a keyboard and producing a string of characters,) must eventually produce Hamlet, which conclusion I found hard to accept initially. Hence this post. – Lou Jan 31 '20 at 15:47
  • That's a good point; with truly random input, the argument is theoretically true. Some have used the monkeys/typewriters argument to say that, e.g., a naturalistic formation of life is viable given enough time. Yet aside from the extremely underappreciated mathematical improbability, I believe a distinct fallacy is an ignoring of physical laws and rules. They assume that the input and conditions are truly random when they are not (as with monkeys and typewriters). In such cases, not all outcomes are guaranteed or possible, even if given infinite time. – Peter Rankin Jan 31 '20 at 16:10
  • Possibly so. I mean, infinity breaks pretty much all of the physical laws and rules, so we are definitely discussing a fictional scenario here. I'm not even sure myself what insight the monkeys on a typewriter brings given its inapplicability to the real world. – Lou Jan 31 '20 at 16:17
  • Plus, ["All models are wrong, but some are useful."](https://en.wikipedia.org/wiki/All_models_are_wrong) The map isn't the territory; it's a model which imperfectly describes reality - so perhaps what you're asking is what degree of imperfection in a model is acceptable? Really outside the scope of this question, but definitely interesting also. – Lou Jan 31 '20 at 16:18
  • 1
    To add to @Lou’s comments, the “laws of probability” seem to match experience in things we can measure. So we have “faith” that they can predict things we can’t measure. And we can’t really disprove them empirically, because they actually predict the possibility of an unexpected result. – WGroleau Jan 31 '20 at 17:26
  • Given an infinite amount of time, wouldn't the monkeys eventually produce an infinite amount of Shakespeare plays? As a matter of fact(?), wouldn't you need only one monkey for this? – Jpe61 Jan 31 '20 at 18:43
  • I think most people have answered well the difference between random events and probability of sequential events, I just want to note that I've nearly always seen this argument used in the realm of evolution where 'random' events are used to show how life evolved. But that's a false premise. *Natural selection* means that when those monkeys type a word that *fits* the play being written, it is **selected**, and then the monkeys continue typing until they randomly type the next word that fits the script. There is a HUGE difference in the two arguments. – CramerTV Jan 31 '20 at 19:26
  • The main arguments against evolution that I've heard, from a probability point of view, relate to the formation of the first self-replicating cell. By definition, natural selection cannot apply at that point. But yes, assuming a first cell, the argument shifts from (more or less) mere probability to include other arguments, although I believe mutation and natural selection have their own issues (including with probability). – Peter Rankin Jan 31 '20 at 20:00
  • @PeterRankin The monkey input style part is wrong... how the monkeys tend to type affects the probability, but as long as that probability isn't zero, with infinite monkeys, you'll still get Shakespeare. The only way you wouldn't is if there is a 0% chance of a monkey ever pressing a single button (for any of the buttons, assuming that button is used in a particular Shakespeare piece). The fact that monkeys have a disposition to press multiple keys only increases the expected number of trials needed to have X% chance of producing Shakespeare – Mars Feb 01 '20 at 07:09
  • 1
    Something to consider is: how useful would an infinitely long passage of text that contains at least one perfect copy of _Hamlet_ be? You would need an infinite amount of effort to find that copy and differentiate it from all the copies where a single letter was wrong, only two letters wrong, etc. – CJ Dennis Feb 01 '20 at 10:02
  • @CJDennis I mean, nobody said the scenario was a practical one. – Lou Feb 01 '20 at 10:32
  • 1
    Exactly, the infinite monkey typewriter Shakespeare -scenario actually has no utility at all... the universe has not existed, nor will it exist infinitely :) – Jpe61 Feb 02 '20 at 20:57
  • @Mars--Weighted probability is a good point, and I might agree if each keystroke were an isolated event, but I believe monkeys act more like state machines. I.e., as another example, each step of a tightrope walker is partly a function of probability. So the chance he or she goes 300 yards successfully might be 98%. But the chance becomes absolute 0% (not almost) for a million straight miles due to natural laws (e.g., fatigue). Monkeys and typewriters are much more complex, of course. But not all output is possible in many condition/state machine combos, even if given infinite time. – Peter Rankin Feb 03 '20 at 16:20
  • (Also, "typewriter" implies one monkey per workstation/output stream, rather than a system where many keyboards are simultaneously pooled into a single output. Real-time pooling will increase randomness and diminish the state machine effect.) – Peter Rankin Feb 03 '20 at 16:25
  • But each keystroke is an isolated event? Bashing the A key has no effect on the likelihood of bashing the B key in the future. Sure you can model it as an FSM, but then each state simply links to 44 other states based on the typewriter keypress. I don't see how it changes the calculus we've been discussing. – Lou Feb 04 '20 at 08:54
  • For real monkeys, I think each keystroke is far less isolated than we might imagine. E.g., after a while of "pecking," he will get tired of that and switch to banging on the keys, then go back to pecking (if he's an especially good typist monkey). Or every so often he might "take a break" and mash his favorite key over and over. Every evening, he might leap off the typewriter keys to get onto a low-hanging vine, or try to dig out part of a mashed banana from the keys. And there are thousands of other subtle patterns and states that affect his decisions. – Peter Rankin Feb 04 '20 at 12:18
  • Even we as people are notoriously bad at generating our own "random" passwords. We might intentionally avoid patterns that would naturally occur, and we have all sorts of "states" and subconscious patterns to our actions. A monkey might type a word or even a sentence randomly given enough time, but all these multi-faceted short-term and long-term states and patterns will make it impossible (not just highly improbable) that he type a large work of art consecutively. – Peter Rankin Feb 04 '20 at 12:18
  • @PeterRankin I understand what you're saying, but I don't think the practical nature of the thought experiment bears at all on the conclusion. Yes, real monkeys would type preferentially, but real monkeys don't live forever. Because the situation is impossible by nature, it doesn't make sense to consider the simian logistics. – Lou Feb 08 '20 at 09:28
  • Why don't we instead assume an infinite computer outputting a continuous string of random Latin letters (with punctuation, spaces and anything else needed to type out Hamlet,) where each symbol output is equally random - the Library of Babel, in short? Then the conclusion that every finite text will eventually occur becomes intuitive. – Lou Feb 08 '20 at 09:28

10 Answers

42

It looks like you've hit upon the concept of almost surely in probability theory. Something occurs "almost surely" if it happens with probability 1, but there still exist situations where that thing does not occur. The infinite coin flips problem is a great example - with infinite coin flips, you will almost surely see at least one result of heads, that is, the probability that you get at least one heads is 1. There is, however, the possible situation where you get an infinite sequence of tails - it's not explicitly impossible for this to happen. But the probability of seeing no heads in the first N flips is (1/2)^N, which shrinks toward 0 as N grows without bound, so the probability of an infinite sequence of nothing but tails is 0.

Similarly, with the infinite monkeys, there is some finite number of texts that can be written with normal punctuation and lettering that have the same length as Hamlet, about 130,000 characters. Now the probability of failure is much, much higher than the coin flip, but that doesn't matter with infinite tries. As you try more and more times, the likelihood that you fail every single time gets smaller and smaller, falling to 0 as you try an infinite number of times. It is possible that you never type out Hamlet even if you type forever, but you will almost surely type it at some point with probability 1. Note that this isn't unique to the text of Hamlet - in any infinite sequence of characters, you will almost surely see every finite sequence of characters. An infinite number of monkeys will almost surely type out Hamlet, but they'll also almost surely type out Hamlet with the protagonist's name replaced with "butthead", and a version of Hamlet where he gets into a rocket ship at the end, and every other variation you can imagine.
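
To give a feel for the scale involved, here is a minimal Python sketch; the 41-symbol alphabet and the 130,000-character length are illustrative assumptions rather than exact figures for Hamlet.

```python
import math

# Coin-flip version of "almost surely": probability of seeing NO heads in n fair flips.
for n in (10, 100, 1000):
    print(n, "flips, P(no heads at all) =", 0.5 ** n)

# Hamlet version, with an assumed 41-symbol alphabet (26 letters, space, punctuation)
# and an assumed length of 130,000 characters.
alphabet_size = 41
length = 130_000
log10_p = -length * math.log10(alphabet_size)  # log10 of P(one random block is exactly Hamlet)
print("P(one random block of text is Hamlet) is about 10^%d" % round(log10_p))

# Blocks needed for a 50% chance of at least one exact copy: solve (1 - p)^N = 1/2,
# i.e. N = ln 2 / (-ln(1 - p)), roughly ln 2 / p for tiny p. Work in logs to avoid underflow.
log10_N = math.log10(math.log(2)) - log10_p
print("Blocks needed for a 50%% chance of at least one Hamlet: about 10^%d" % round(log10_N))
```

The failure probability (1 - p)^N still falls toward 0 as N grows; it just only becomes small for an N of that astronomical size, which is no obstacle when the number of tries is infinite.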

This isn't the gambler's fallacy, which assumes that past outcomes can influence future ones for independent events. In an infinite sequence of events, the likelihood of an event at any point in the sequence never changes. We know that for each sequence of 130,000 random characters, the odds that it spells out Hamlet are exceedingly small. The fact that we don't see it many times doesn't make it any more likely that we'll see it the next time. It's simply the case that with enough tries, you will eventually, almost surely, write out Hamlet - no matter how biased your coin is (as long as heads has some nonzero chance), it's almost sure that you will not see tails every single time if you keep flipping it forever.

This isn't the Gambler's Fallacy, but does lead to something called the Gambler's Ruin. Any player with finite wealth playing a fair game will eventually go bankrupt when playing against someone with infinite wealth (effectively the casino), because in an infinite sequence of games, it is almost sure that at some point, the gambler will encounter a series of losses that will be sufficient to bankrupt him.
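
A rough Monte Carlo sketch of the ruin effect, assuming Python, a fair even-money bet, an illustrative starting bankroll of 20, and a finite cap on rounds standing in for "forever":

```python
import random

def rounds_until_ruin(bankroll=20, max_rounds=1_000_000):
    """Fair even-money bets against an opponent with effectively unlimited funds."""
    for t in range(1, max_rounds + 1):
        bankroll += 1 if random.random() < 0.5 else -1
        if bankroll == 0:
            return t  # ruined on round t
    return None  # survived the finite cap; with truly unlimited play, ruin still has probability 1

random.seed(0)
runs = [rounds_until_ruin() for _ in range(20)]
ruined = [t for t in runs if t is not None]
print(f"ruined in {len(ruined)} of {len(runs)} capped runs; earliest ruin times: {sorted(ruined)[:5]}")
```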

Nuclear Hoagie
  • 1,160
  • 1
  • 7
  • 10
  • 20
    Side note: "Almost surely" is not _just_ a matter of finite vs infinite. For example, if you pick a random real number between (say) 0 and 1, it will "almost surely" be irrational, even though there are infinitely many rationals to choose from. The intricacies of infinities like this are usually introduced in undergraduate Real Analysis courses. – BlueRaja - Danny Pflughoeft Jan 31 '20 at 01:46
  • @BlueRaja-DannyPflughoeft, do you happen to have a link to further reading on why it will "almost surely" be irrational? I haven't heard of that before and would love to know more. – 3ocene Jan 31 '20 at 02:21
  • @BlueRaja-DannyPflughoeft Good point. There can still be an infinite number of outcomes that would satisfy an "almost never" event. In the end, it comes down to the ratio between the size of the "almost never" set of outcomes and the size of the "almost surely" set of outcomes, which must be zero. There are infinitely many rationals, but infinitely more irrationals. – Nuclear Hoagie Jan 31 '20 at 02:27
  • 4
    @3ocene There's a concept in real analysis called "measure" that is rather too complicated to get into in a comment, but given the standard measure, one way to show that the number is "almost surely irrational" is to note that rational numbers are enumerable (it's possible to assign integers to each rational number so that each rational number gets a different integer). – Acccumulation Jan 31 '20 at 07:15
  • So you can take an interval of size x/2 around the "first" rational number, then x/4 around the "second", and so on. If you take the sum x/2+x/4+x/8 ... you get x. You therefore can surround all of the rational numbers with a collection of intervals whose size is no more than x (if they overlap, their total size is less than x). You can then take the limit as x goes to 0, and that shows that the "size" of the set of rational numbers is 0. – Acccumulation Jan 31 '20 at 07:15
  • 6
    With the monkeys example, library of babel comes to mind (parentheses mine): `If completed, it would contain every possible combination of 1,312,000 characters, including lower case letters, space, comma, and period. Thus, it would contain every book that ever has been written, and every book that ever could be - including every play, every song, every scientific paper, every legal decision, every constitution, every piece of scripture, and so on. At present it contains all possible pages of 3200 characters, about [10 to the power of 4677] books.` here's a link: https://libraryofbabel.info – John Hamilton Jan 31 '20 at 08:19
  • "The fact that we don't see it many times doesn't make it any more likely that we'll see it the next time. It's simply the case that with enough tries, you will eventually, almost surely, write out Hamlet" - So basically what you're saying is, in an infinite timeline the chances of NEVER writing out Hamlet are about the same as the chances of infinite coin flips all turning up tails? Or, if not the same, that Hamlet **not** "cropping up" eventually is just so extremely unlikely that it almost surely will not happen? – Lou Jan 31 '20 at 14:01
  • 3
    @Lou That's pretty much correct. The likelihood of an infinite typist never typing Hamlet is *exactly* the same as the likelihood of an infinite flipper never seeing heads - both are 0. As long as your event (typing Hamlet or seeing heads) is possible (having non-zero probability), no matter how unlikely, if you try an infinite number of times, it is almost sure that you will not fail every single time. If your likelihood of success on one trial is p (0 < p ≤ 1), the chance of failing N independent trials in a row is (1 - p)^N, which goes to 0 as N grows. – Nuclear Hoagie Jan 31 '20 at 14:25
  • 2
    @Accumulation That there are more irrationals than rationals is more directly related to cardinality than to measure. But this is beside the point: infinities get complicated and often behave in unintuitive ways (Hilbert's Hotel, Banach-Tarski paradox, etc.) – aschepler Jan 31 '20 at 20:21
16

Here, I think, is a more succinct answer:

Let's say we have a die with 1 trillion sides. Then, the probability of a given outcome on the next roll of the die is one in a trillion.

On the other hand, the probability of getting a given outcome, at least once, given infinite dice rolls approaches 1.

Given enough time, monkeys banging randomly at a typewriter will produce the works of Shakespeare

This is not an instance of the gambler's fallacy—the likelihood of this happening at least once, given infinite dice rolls, does not increase or decrease based on what happened before it. The likelihood of it happening at least once increases based on the amount of time you give it (which is not what the gambler's fallacy is about!)

Similarly: The chances of getting tails on the next coin flip is always 50%. But, given enough coin flips, someone flipping a coin will get tails.
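
A small Python sketch of the same point with the hypothetical trillion-sided die; the figures follow from 1 - (1 - 1/N)^k.

```python
import math

N = 10 ** 12          # faces on the hypothetical trillion-sided die
p = 1 / N             # chance of one specific outcome on a single roll

def seen_at_least_once(k):
    # 1 - (1 - p)^k, via log1p/expm1 so that 1 - p doesn't round to exactly 1
    return -math.expm1(k * math.log1p(-p))

for k in (10 ** 6, 10 ** 12, 10 ** 13):
    print(f"{k:>14,} rolls: P(seen at least once) = {seen_at_least_once(k):.6f}")

# Rolls needed for a 99.9% chance of having seen that one specific face at least once:
target = 0.999
k_needed = math.log(1 - target) / math.log1p(-p)
print(f"about {k_needed:.2e} rolls for a {target:.1%} chance")
```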

AmagicalFishy
  • 316
  • 1
  • 3
  • 2
    Ah okay, I think this explanation actually helped me to understand the most. In particular this: "This is not an instance of the gambler's fallacy—the likelihood of this happening at least once, given infinite dice rolls, does not increase or decrease based on what happened before it. The likelihood of it happening at least once increases based on the amount of time you give it" So probability will naturally increase over time, but not as a result of events that came before it? – Lou Jan 31 '20 at 14:15
  • 1
    @Lou Correct! The events that came before it don't actually matter. There are other examples of this too: given enough time, life is bound to *eventually* pop up on some planet. ;) – AmagicalFishy Jan 31 '20 at 16:31
  • 2
    @Lou To clarify "probability will naturally increase over time" - let's say that something has a 25% chance of happening within 5 minutes, and a 50% chance of happening within 10 minutes. If 5 minutes pass and the thing has not happened, the chance of it happening in the next 5 minutes has not become 50% (this would be gambler's fallacy). We must ignore past events, so it still has only a 25% chance of happening given only 5 more minutes. I think you got it, but wanted to clarify that probability doesn't increase as time passes, it just increases based on how much time _will_ pass. – charmingToad Jan 31 '20 at 22:01
  • 1
    I've read all the answers and they all ultimately helped me to understand the concepts that I was confusing, but I accepted this one because it was the most clear and concise, and helped me to understand the most. Thanks to all the other contributors! – Lou Feb 01 '20 at 10:34
  • 1
    @charmingToad: Note that if something has a 25% chance of happening in any given five minutes, then it has only a 44% chance of happening in any given ten minutes, rather than a 50% chance. (This is because (1 − 0.25)² ≈ 1 − 0.44.) But other than that -- good comment, +1. :-) – ruakh Feb 01 '20 at 23:40
  • @ruakh Ah you're right thank you! – charmingToad Feb 03 '20 at 19:04
11

"If you made 1,000,000 similar decisions, the probability of that final outcome being reached at any one moment is 1 in a million."

That quote represents the root of your misconception. If a coin is tossed 1 million times, the likelihood of any specific sequence of 1 million tosses is 1 in 2^1000000. However, the chances of tossing heads 10 times in a row anywhere in that million are much, much better; much better than tossing heads 10 times in a row in 10 tosses. The chance of 10 heads not happening in the first 10 tosses is 1023/1024, the chance of it happening in neither the first ten tosses nor the second is (1023/1024)^2, and the chance of it not happening in any of the sequential groups of 10 (tosses 1-10, 11-20, 21-30, and so on) across 1 million tosses is less than 4*10^-43... and that is ignoring that 10 heads in a row could also happen on tosses 2-11, 3-12, and so on.
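
A Python sketch that checks these figures; the small dynamic program below tracks the current streak of heads, which is one standard way to compute the exact probability of a run appearing anywhere (the disjoint-blocks figure from the paragraph is printed alongside).

```python
def prob_run_of_heads(run_len, n_tosses):
    """Exact chance that a run of `run_len` consecutive heads appears somewhere
    in `n_tosses` fair coin tosses, via a dynamic program over the current streak."""
    state = [0.0] * run_len   # state[i] = P(current streak is i heads and the run hasn't appeared yet)
    state[0] = 1.0
    hit = 0.0                 # P(the run has already appeared)
    for _ in range(n_tosses):
        new = [0.0] * run_len
        for streak, pr in enumerate(state):
            new[0] += pr * 0.5                 # tails resets the streak
            if streak + 1 == run_len:
                hit += pr * 0.5                # heads completes the run
            else:
                new[streak + 1] += pr * 0.5    # heads extends the streak
        state = new
    return hit

print(prob_run_of_heads(10, 10))        # 0.0009765625, i.e. 1/1024
print((1023 / 1024) ** 100_000)         # chance of no run in any of the 100,000 disjoint blocks, about 4e-43
print(prob_run_of_heads(10, 1_000_000)) # exact value for a million tosses; prints as 1.0 at double precision (takes a few seconds)
```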

"Monkeys typing Shakespeare" is simply an expansion of this observation. It would use a die with enough sides to include every character, space, punctuation mark, and any other typographic symbols used in those works; and much much more than 1 million tosses.

However, there is another fallacy in play, though I am not sure there is a name for it: the assumption that a random sequence is necessarily capable of representing every combination. If the monkeys' typewriters have had their vowels removed, or if the vowel keys always type doubled, not even a sentence of Shakespeare could be produced.

Uueerdo
  • 267
  • 1
  • 7
  • +1 but I don't follow what point you are making in the final paragraph? It seems to just confirm that the event needs to have a probability greater than zero for it to occur in an infinite number of attempts. – JBentley Jan 31 '20 at 01:19
  • 6
    "If the monkeys' typewriters have had their vowels removed, or the vowel keys always double typed, not even a sentence of Shakespeare could be produced." However, there may be a mapping from a sequence of generated letters to the works of Shakespeare, such as a binary or other similar encoding. – JAB Jan 31 '20 at 01:25
  • Thanks for reminding me about the probability of the coin toss. So basically what you're saying is, as the "sample size" increases (number of coin tosses, number of monkey keypresses, not sure what the probabilistic term for this is,) the likelihood of a given sequence occurring does increase? I think I'm still struggling to understand why this notion isn't the same as the gambler's fallacy. – Lou Jan 31 '20 at 14:04
  • 4
    @JAB: Give me any arbitrary dataset (A) and ask for whatever you want (B) and I will give you a list of mapping rules that returns B with A as its input (disclaimer: unexpected behavior for any other input value). Injecting an arbitrary mapping into the monkey-typewriter situation nullifies the importance of the thought exercise. – Flater Jan 31 '20 at 15:48
  • Agreed about the mapping, "monkey typewriter" works with a 2 key typewriter encoding Shakespeare in binary (as ascii for example). I was (clumsily) trying to point out that it still cannot generate truly impossible things like a square circle. Of course, if one fully qualifies the thought experiment as "everything _possible_ will occur", that clarification is redundant. – Uueerdo Jan 31 '20 at 17:10
  • 1
    The last paragraph makes more sense in a mathematical context: we have no guarantee that every finite sequence appears in (for instance) pi. It's entirely possible that we could put odds on a given sequence appearing and find that it's less than one. We can't at the moment, because we can't say enough about the digits of pi. – Spitemaster Jan 31 '20 at 17:14
  • @Spitemaster yes, exactly, my original thought was irrational numbers, but my higher level math is too rusty to confidently say something like pi will never contain ten 3's in a row... but then again, pi is not random and I think the thought exercise assumes randomness. – Uueerdo Jan 31 '20 at 17:20
  • T b r nt t b, tht s th qustn. – Barmar Jan 31 '20 at 19:40
  • The fallacy mentioned at the end is what I'd call an "alphabet error". If there is so much as a single missing letter (from the alphabet used in the target), or a letter that can only be produced in doublet (i.e. the element of the typewriter alphabet, e.g. "PP", does not match the element of the target alphabet, "P"), the probability drops to zero. The thought experiment excludes any such error, of course. – Jeff Y Jan 31 '20 at 19:58
9

You're right about the gambler's fallacy, but you're missing something essential about infinity. Infinity doesn't stop.

So, you've got your immortal monkey and his endless reams of typewriter supplies, and a typewriter with 40 keys. He endlessly hammers on the keys perfectly randomly.

The probability that he types a "T" on the first try is 1/40.
The probability that he types a "T" in the first 2 tries is 1-(39/40)^2, or about 1/20.
The probability that he types a "T" in the first 40 tries is 1-(39/40)^40, or about 63%.
It keeps growing. The probability that he gets it in the first 400 tries gets as high as 99.996%.
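
A quick check of those numbers in Python:

```python
# Chance of at least one "T" in n random presses on a 40-key typewriter: 1 - (39/40)^n
for n in (1, 2, 40, 400):
    print(f"{n:>3} presses: {1 - (39 / 40) ** n:.5f}")
# 1 press     -> 0.02500 (1/40)
# 2 presses   -> 0.04938 (about 1/20)
# 40 presses  -> 0.63677 (about 63%)
# 400 presses -> 0.99996
```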

You're right that the gambler's fallacy is to be avoided, and what that means is that if he doesn't hit a "T" in the first, let's say, 10 attempts, then his chance of hitting it on press 11, or between 11 and 12, or between 11 and 50, or between 11 and 410, is still 1/40, 1/20, 63%, and 99.996% respectively.

Now, when we say the probability of hitting a "T" if he hits the typewriter randomly infinitely many times is 1, we're not denying that the gambler's fallacy is indeed a fallacy. We agree that with independent random events, what has happened before does not change the probabilities of what will happen next. It's just that in the same way that, after missing "T" 10 times, the odds of getting it in the next 1, 2, 40, or 400 presses don't change, so likewise after missing 10 times the odds of getting it in the next infinitely many presses don't change.

The probability that he types out a "T" followed by a "H" is one in 1600. The probability that he does so at some point in the first 3200 taps is about 63%.

The probability that he types out just the phrase "Two households, both alike in dignity" is one in 40^37, which is heading into the territory of things so vanishingly unlikely that the universe wears out before you can expect to see them. The chance of typing all of Shakespeare is unfathomably smaller still, and the expected waiting time dwarfs the lifetime of the universe. But if you have genuinely infinite chances, that doesn't matter. If it doesn't happen in the first lifetime of the universe, who cares? You still have infinitely many universe lifetimes to go!

Josiah
  • 1,553
  • 7
  • 10
  • 1
    Well written, but a small clarification may be in order: "we're not denying the gambler's fallacy is true". I understand that you mean to say that we're not denying that the outcome of an event is independent of the outcome of previous events, but it can sound like you're saying that a fallacy is true. – Bjonnfesk Jan 31 '20 at 07:16
  • Thanks. Updated – Josiah Jan 31 '20 at 08:16
  • @Josiah I understand your argument, but there's one thing bugging me. The works of Shakespeare do not contain a truly random assortment of characters; vowels occur much more frequently than exclamation marks. When extending the random character generation to infinity, would the text be generated in "periods of apparent non-randomness"? If so, could the algorithm behind the generation still be considered random? – MarcioB Jan 31 '20 at 11:12
  • 4
    I *think* I understand now. Are you saying that, the odds of getting sequence X will always increase as the numbers of attempts (keypresses, coin flips etc.) increases, but that this increase in probability has nothing to do with the outcome of previous attempts? – Lou Jan 31 '20 at 14:13
  • 1
    Basically, a gambler with *infinite funds* never runs out of funds and can continue trying to beat the bank. – ceejayoz Jan 31 '20 at 14:40
  • 1
    Precisely. Infinity is a bigger thing than most people imagine. (Are there some infinite sequences that don't include Hamlet as a substring? Yes. But only if they are non-random.) – Michael Kay Jan 31 '20 at 17:49
  • @MarcioB an infinite sequence will indeed contain subsequences that, to a casual observer, look non-random. This is the fallacy of coincidence: the fact that two people have the same birthday doesn't imply that the people weren't chosen at random. – Michael Kay Jan 31 '20 at 17:54
  • @Lou if by "the chance [...] will always increase as the number of attempts increases", you mean "the chance of any arbitrary but specific string appearing in a random string will always increase as the length of the random string approaches Infinity", then yes. Consider Pi. If you take only the first 10 digits of Pi, you probably won't see your birthday in there. However, if you take the first billion digits of Pi, it's extremely likely that your birthday is in there. – Bjonnfesk Jan 31 '20 at 18:29
  • @MarcioB yes, I believe so - in fact, all random algorithms will yield apparently non-random sequences from time to time. The coin flip algorithm, for instance, will eventually yield `HTHTHTHTHTHT`, which doesn't look random at all. Although Pi is in a sense deterministic, the sequence is also indistinguishable from randomness, yet somewhere in Pi is the sequence `123456`. – Bjonnfesk Jan 31 '20 at 18:38
  • @MichaelKay I understand the logic behind that. Extending random generation to infinity would cause any arbitrary collection of characters to be contained within the output. But somehow that still doesn't "feel right" to me, and I can't exactly articulate why. I guess that's a reflection of my inability to fully internalize the concept of infinity. – MarcioB Feb 03 '20 at 11:02
2

This isn't a full answer, but I'd like to point out that you've formulated an alternate version of Zeno's Paradox. As the amount of time increases, the probability that some rare event does not occur becomes smaller and smaller but is never exactly zero. This is similar to how Zeno moves ever closer to but never reaches the target destination. Nonetheless, once you sum the infinite number of movements in the sequence, the destination is reached. Likewise, over an infinite amount of time, the rare event must occur.
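
A small numerical sketch of the analogy, in Python; the per-trial probability of 10^-6 is an arbitrary illustrative choice.

```python
# Zeno-style partial sums: each step covers half the remaining distance, so after n steps
# the runner has covered 1 - (1/2)^n of the way: never exactly 1 for any finite n, yet 1 in
# the limit. The chance that a rare event (probability p per independent trial) has happened
# by trial n, which is 1 - (1 - p)^n, climbs toward 1 in exactly the same way.
p = 1e-6  # illustrative per-trial probability
for n in (10, 1_000, 10_000_000):
    distance_covered = 1 - 0.5 ** n            # rounds to 1.0 at double precision for large n
    event_by_now = 1 - (1 - p) ** n
    print(f"n = {n:>10,}   Zeno: {distance_covered:.10f}   P(event by trial n): {event_by_now:.6f}")
```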

Xerxes
  • 126
  • 4
  • Yeah, that definitely helps me to understand the apparent paradox. What I wasn't understanding - and others have clarified - is the idea that the probability of an event occurring increases over time, but not based on the outcome of a previous event. – Lou Jan 31 '20 at 16:04
  • Even the infinite sum is not equal to the target. It is infinitesimally smaller than the target. The probability of missing a given state, while infinitesimal, is still nonzero, so the "must occur" argument is not quite accurate, since the event is missing some of its probability. – pygosceles Jan 31 '20 at 18:52
  • 1
    @pygosceles That's not quite correct. Zeno actually does reach the tree. – Jeff Y Jan 31 '20 at 19:08
  • Yes, but only because he is in a physical environment where inexactness is tolerated. Mathematically, he never reaches it. For practical purposes in a physical world, where there are limits on precision of observation, he does. – pygosceles Jan 31 '20 at 19:35
  • 1
    @pygosceles Again we'll have to agree to disagree. Mathematically he does reach it "in the limit". In the limit, he reaches **all the way** to the tree; he doesn't come up an infinitesimal distance short. – Jeff Y Jan 31 '20 at 20:44
  • That's the fundamental approximation of calculus, but it must be admitted to be an approximation. Exactly speaking, such a function never actually arrives at the target value, hence the innovation of the "in the limit" language. – pygosceles Jan 31 '20 at 20:50
  • Yes it's important to understand the "mechanics" of the math, but the whole motivation and main purpose of Calculus is to model actual events. For that such "innovations" are necessary. It's inadequate to leave off with "this technique doesn't really model reality at all, it's just useful for getting correct answers" or the like. Calculus does actually **resolve** the paradox. – Jeff Y Jan 31 '20 at 21:20
1

One fallacy that is evident in your question but has not been addressed by the other answers is:

everything will occur in an infinite timeline

And you said something that is an instance of the fallacy:

if the Universe is infinite, there must be a planet exactly like ours somewhere

Both of these are completely fallacious. Nothing about an infinite process implies that it 'goes through' every possible situation. Nothing about an infinite world implies that it must have everything possible. In general, you need many more assumptions than just infinitude to conclude anything like that. Just to give you easy concrete mathematical examples to demonstrate the fallacies:

Not every positive integer occurs in the infinite sequence of odd numbers: 1, 3, 5, ...

There are infinitely many primes, but no two distinct primes have a common prime factor.

In mathematics we have a 100% precise notion of probability, and under that definition we can construct a (mathematical) probabilistic process (such as an infinite sequence of fair coin flips) in which some outcome (all heads) is possible but has zero probability. Be aware that this may not have anything to do with reality whatsoever. You need to separately think about or investigate whether some mathematical theorem can be used to deduce something about the real world. In the case of infinitely many coin flips, it says essentially nothing, because you can never in the first place flip a coin infinitely many times! If you flip a coin k times, the probability of getting all heads is 1/2^k, which is not zero. In other words, the mathematical notion of an infinite sequence of coin flips is simply impossible in reality, and the zero probability of the all-heads outcome in the mathematical notion has zero relevance to reality.

For another example, we can construct a mathematical object corresponding to the notion of choosing a random real number uniformly from the interval [0,1]. Now consider any particular real number that is chosen in this manner. Its probability of being chosen is actually zero. Again, this is irrelevant to the real world, and does not imply that mathematics made an error ("something got chosen even though the probability of choosing it is zero"). In fact, there is no way at all in the real world to choose a real number uniformly from [0,1]! In practical applications, we can for example choose a rational number of the form k/2^32 where k seems for all practical purposes (i.e. passes all statistical tests) to be chosen uniformly at random from the integers 0, 1, ..., 2^32−1. Each of these rationals would be chosen with probability 1/2^32, which is nonzero.
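
An illustrative Python sketch of that practical substitute (the helper name is made up for the example):

```python
import secrets

# Draw an integer k uniformly from 0 .. 2**32 - 1 and use the rational k / 2**32.
# Every one of these 2**32 values has probability exactly 1/2**32, which is nonzero,
# unlike any single point of a genuinely uniform real number in [0,1].
def approx_uniform_01():
    k = secrets.randbelow(2 ** 32)  # uniform integer in 0 .. 2**32 - 1
    return k / 2 ** 32

print([approx_uniform_01() for _ in range(5)])
```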

So be very careful in randomly interpreting very different kinds of infinite mathematical objects as saying anything about the real world.

user21820
  • 623
  • 1
  • 7
  • 17
  • "no way at all in the real world to choose a real number uniformly from [0,1]" - I can think of a few ways, but only if it's fed into a process that only needs to use finitely many digits from it... (e.g. lazy-evaluation of less significant digits, with lazy-evaluated basic mathematical operations that output an object of the same kind, etc...) – Steve Jan 31 '20 at 14:44
  • @Steve: That's precisely the point, isn't it? When you do what you describe, you are **not** choosing a random real number. Rather, you are creating a generator (in Python terminology) that outputs random digits one at a time. Nothing infinite at all. – user21820 Jan 31 '20 at 15:41
  • Also, note that an analog random process does not give you a random real either, because you cannot extract infinite information from it. – user21820 Jan 31 '20 at 15:42
1

If the probability of a head is 0.5 it is always 0.5, and however many times we toss a coin, there is a chance the next toss will not be a head. This holds even if we keep tossing until we get heads. It holds for as long as we are still tossing coins, even if that's forever.

So I would then agree that the probability is not 1, and thinking otherwise is an example of the gambler's fallacy, warped through our ideas of 'infinity'.

Alternatively, an infinite number of coin tosses, or monkeys on typewriters, is not a "potential" infinity, coin tossing that does not stop, but an actual one:

Aristotle postulated that an actual infinity was impossible, because if it were possible, then something would have attained infinite magnitude, and would be "bigger than the heavens." However, he said, mathematics relating to infinity was not deprived of its applicability by this impossibility, because mathematicians did not need the infinite for their theorems, just a finite, arbitrarily large magnitude.

But I'm not sure I see how time can be an actual rather than potential infinity, in Aristotle's sense:

The actual infinite is not a process in time; it is an infinity that exists wholly at one time.

0

The people who are pointing out that you've stumbled upon the concept of "almost sure event" in probability theory are correct, but this is rather beside the point.

The fact is that "almost sure events" (that is, events having probability 1) fail to happen all the time. Any experiment where a fair coin is tossed countably many times and a specific sequence of heads and tails is observed will have an outcome having zero probability. In other words, here is a case where we know, in advance, that there will be some event that occurs by the end of the experiment that has zero probability.

Alternatively, for instance, we can think of measuring the position of an electron occupying some energy eigenstate in a hydrogen atom. Any measurement we will make of the position of the electron has zero probability of occurring, and yet if we have a mythical apparatus capable of measuring the position exactly, then we must indeed measure some position. Again, we see clearly that an event having 0 probability doesn't mean it won't occur.

These pathologies of probability theory related to almost sure events arise from the fact that we define probability values to take real number values, and the real numbers are an Archimedean field, i.e. they don't admit any infinitesimal elements. There have been some attempts of generalizing the concept to other number systems, but none of these technical projects have any bearing on the fundamental disconnect between probabilistic claims and factual ones.

The fact is that, by design, no probabilistic claim can ever imply any claim that's not probabilistic. It's impossible, in theory, to perform a probabilistic computation (such as determining that a monkey on a typewriter will almost surely type out Shakespeare's Hamlet) and infer from this computation a fact about the world (that the monkey will indeed type out Shakespeare's Hamlet). The first is a probabilistic claim, while the second is not, and therefore it's impossible to deduce the second from the first. To perform such a deduction is indeed to fall into the gambler's fallacy, despite what some of the other answers claim. The gambler's fallacy is properly understood as the delusion that a probabilistic claim can imply a factual one, as this is the real content of a belief that "the odds will even out in the end".

The fact that, nevertheless, we seem to be able to explain some regularities in nature by using methods that are in some sense probabilistic (like using statistical mechanics to derive Planck's law of blackbody radiation, for instance) is a real conundrum that's not often appreciated. One has to think very carefully about what it is that's being done when the fundamental epiphenomenality of probability theory is somehow swept aside in what is best described as a sleight of hand. This answer is already getting rather long, however, so I will refrain from discussing this subject further.

Ege Erdil
  • 101
  • 1
  • You wrote: "Any measurement we will make of the position of the electron has zero probability of occuring, and yet if we have a mythical apparatus capable of measuring the position exactly, then we must indeed measure some position." I don't agree with the first part, for much the same reason that you state "mythical apparatus" in the second part. Any measurement made by anything only gives you some numeric value with finite precision. Moreover, it doesn't even make sense to say that there is a 100% precise 'actual value' of a measurement, based on what we believe about quantum mechanics. Yes? – user21820 Jan 31 '20 at 13:36
  • @user21820 The issue is more complicated for several reasons. Because the pictures in momentum space and position space are dual to one another under the Fourier transform (or just a unitary change of basis), it seems strange to claim that one set of eigenstates (the momentum ones) are "physical", whereas another set of eigenstates (the position ones) are not. There are also some problems with what is meant by the precision of a measurement depending on the exact nature of the apparatus used. In any event, you can get around this problem by iterating the measurement infinitely many times. – Ege Erdil Jan 31 '20 at 14:46
  • We can't iterate a measurement infinitely many times. We can only perform each measurement once. If we 'repeat' the experiment, it is not the same experiment. About measurement, I'm just saying that any measurement device is a part of the setup, and the more precise a numeric value we can read off from it, the less accurate it can be of the underlying state in the case that the device was not there. So it's not even really sensible to talk about the probability of a measurement 'occurring'. – user21820 Jan 31 '20 at 15:49
  • Sorry, but I think you lost me after "measuring the position of an electron" - I'm not scientifically literate enough to make sense of the analogy you've presented. – Lou Jan 31 '20 at 15:52
  • @Lou the argument doesn't really have anything to do with electrons, just generally with measurements of continuously variable quantities. It would just as well apply to you measuring, say, a temperature of 20°. The probability of the temperature “actually” being 20° is zero, if you think of the “real” temperature as being a real number with some random fluctuations on it, because _almost all_ of the values it could take are irrational. – leftaroundabout Jan 31 '20 at 15:56
  • 1
    However I agree with user21820: that issue is mostly due to what we think of as being a _measurement_. And related, what we think of as being distributions. "...arise from the fact that we define probability values to take real number values" is missing the point: it's that distributions _are not functions_, but rather dual vectors to the space of continuous functions. In that light, the paradox can't arise, because you _can't evaluate_ such a distribution for _whether the temperature is exactly 20°_. IOW, all physical measurements are inexact. – leftaroundabout Jan 31 '20 at 16:00
0

Endless Possibilities

You are skeptical of the claim that "everything will occur, given an infinite number of opportunities." Other answers have given a good explanation of when this claim is true and when it is false. However, I would like to assemble the various ideas into a single answer.

Probability problems are often formulated in terms of choosing marbles from an opaque jar, which is valuable because it appeals to our intuition, to the extent that it can. The marbles represent the space of all possible outcomes (or: all possible values for the random variable). Picking a marble corresponds to sampling the space.

Now, there are two ways to conduct a sample: with replacement, and without replacement. After you pull out a marble, do you keep it, or do you put it back before pulling out another marble? The Gambler's Fallacy is nothing more than the mistaken idea that all probabilities (or, at least the ones of interest) entail sampling without replacement. Or, to illustrate more clearly, that all games of chance are equivalent to counting down a finite blackjack deck. If roulette involved taking each number off the wheel as it occurs, then the Gambler's Fallacy would actually be true for roulette. And if the dealer always replaced played cards into the shoe (randomly!) after every hand, it would be impossible to usefully count down a blackjack deck (it would become a circular, or "infinite" shoe, although an 8-deck shoe with a deep cut makes for a useful approximation).
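
A small Python sketch of the replacement distinction, using an illustrative jar of 10 red and 10 blue marbles:

```python
from fractions import Fraction

# Probability that the NEXT draw is red, given that the last three draws were all red.
red, blue, reds_already_drawn = 10, 10, 3

# With replacement: the jar is unchanged, so past draws tell you nothing,
# which is exactly why the gambler's fallacy is a fallacy here.
with_replacement = Fraction(red, red + blue)

# Without replacement: three red marbles really are gone, so the odds genuinely shift.
without_replacement = Fraction(red - reds_already_drawn, red + blue - reds_already_drawn)

print("with replacement:   ", with_replacement)    # 1/2
print("without replacement:", without_replacement) # 7/17
```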

Shakespearean Monkeys

When it comes to monkeys on typewriters, we have an additional complication: time. We can view the probabilistic event as a monkey striking a key, or as a monkey producing an entire sequence of keystrokes. In fact, the latter is a far more useful way to view the situation. So instead of putting a marble for each letter of the alphabet into our bag, and trying to keep track of what texts are produced by pulling out thousands of marbles, we can instead inscribe the texts which are produced by all the monkeys after 1 keystroke, after 2 keystrokes, etc. up to the limit of what monkeys are willing or able to type. So one marble will have the text "q" on it, while another will have the text "mxlplx", and yet another will have: "To be or not to be".

Since we are trying to avoid the Gambler's Fallacy, we must sample the bag with replacement. After all, there's nothing stopping a monkey from typing "MonkeyButt" 23 times in a row. So we must be able to draw this marble from the bag at least 23 times, and we can only do that if we put it back. Now, the original question becomes: "Given an unlimited number of draws, are we guaranteed that we will draw a marble with the entire text of Hamlet carefully inscribed upon its surface?" And the answer is: "It depends."

You see, we made a subtle but important leap when we switched the random variable from keys typed to texts typed. We sort of hand-waved away how long the texts could be. In fact, even if we have an infinite number of monkeys, nobody has suggested that the monkeys themselves are immortal, or have infinite patience. It could turn out that no monkey is willing to type more than 10,000 keystrokes, under any circumstances. If that is the case, then we have no chance of drawing Hamlet, no matter how lucky those keystrokes are (unless you are willing to assemble works from multiple monkeys, but that ruins the claim in other ways).

The Outer Limits

All of this is a fancy way to point out what is hopefully by now an obvious fact: you can only draw a marble from the bag, if the marble is already in the bag. If we have theoretically tireless monkeys which are highly motivated to type and physically capable of typing at least as many characters as can be found in Shakespeare, and there are no constraints on the sequences of characters typed (perhaps monkeys don't like to type 'p' after 'a' because they are on the opposite sides of a QWERTY keyboard), then, given an infinite number of "monkey texts", the probability that one of them corresponds to Hamlet is 1.

Now, let's talk about planets. If the forces which affect planet formation have a finite range, and the universe has infinite size, and the universe has infinite matter, and the universe has mostly uniform density (consistent with the observable universe, at least), and the laws of physics are the same everywhere in the universe, then we basically have the physical conditions necessary to create any kind of planet which can be formed under conditions similar to earth. Under those conditions, I would tend to agree that the probability of another earth-like planet existing is 1.

In fact, I would agree that the probability of TEN other earth-like planets is 1. I would go so far as to claim that there are an infinite number of earth-like planets in such a universe. This is due to the simple fact that we as humans can only distinguish a finite number of planets as "different", due to the limitations of physics. Therefore, we can put every "possible-planet marble" into our bag, but our bag will only contain a finite number of marbles, including our "pale blue dot". And since we will draw from the bag an infinite number of times with replacement, it follows that earth and every other kind of planet we have or will observe must occur an infinite number of times.

However, there are several things that we won't see: we won't see a cube-shaped planet, or a donut-shaped planet, or a planet that looks like a Sierpinski triangle. That's because physics does not allow the construction of such planet shapes. So an infinite number of draws does not allow anything at all to happen. It only allows any event which is individually possible to happen, possibly an infinite number of times. You can only draw a marble from the bag if the marble can exist and you put it in the bag.

Lawnmower Man
  • 519
  • 2
  • 3
-1

You are right that infinite trials do not imply that a given state will ever be reached. The independence assumption of letters, coin tosses, and die rolls means that we are sampling with replacement. The probability that a certain sequence will never occur over infinite independent samples, while infinitesimally small, is nonzero. Thus the gambler's fallacy applies--there is actually no guarantee that a given sequence will ever occur; the present and future are not hostages to the past. Failure is an inexhaustible quantity. In sampling batches of 100 random elements each (with replacement) from a set of one million unique objects, you can be tens of thousands of batches into the sampling and it is still likely that some elements will never have been seen, despite the total number of elements seen (including repetitions) running into the millions. The combinatorics of sequences make the numbers involved exponentially large as the complexity of the sought sequence increases. In general there is no finite or even infinite number of trials large enough to make a guarantee that a given sequence will occur. This is a strong statement and informs our perception of what "almost surely", given mythic infinite resources, actually means in practice.

Nuclear Hoagie's argument about the gambler's ruin is another important caveat: We cannot assume we have an infinite amount of any kind of resource, and some states are not recoverable, no matter how much of time and eternity we try to burn. This should color our choices with greatly warranted caution.

There are other concerns when we try to liken the monkeys-with-typewriters thought experiment to real life. For one, temporal independence is not a valid assumption in dynamical, physical systems: The present state always depends on prior states, and certain hypothetical transitions between states would be so drastic as to be impossible according to natural law. When was the last time anyone randomly teleported to the Moon, or will it ever happen? Never, and it won't. In reality, there are good reasons to believe that the probability that an organism would spontaneously generate from inorganic matter is exactly zero. Constraints such as energy wells, conservation of matter, momentum and energy, the inevitability of attraction and repulsion etc. necessarily destroy the "anything can happen" genre of fantasy. When we try to apply our discrete independent probabilistic assumptions to nature, then the natural laws, the continuity of physical reality, the smoothness of time and space and myriad other inviolable constraints all throw monkey wrenches into the monkeys-with-typewriters fiction. While a person or an animal with a well-maintained typewriter can choose to press any key at little cost and have letters appear on paper in arbitrary order, some things are truly impossible. Someone could make an interesting thought experiment by postulating a long list of things that are likely to be impossible.

I trow that no electron ever tunneled across energy barriers above a determinable finite threshold, and no electron ever will. In general, anyone who believes that any state is possible falls for the fatal trap of believing that he will be king someday, that he will escape death, that he can keep spending money he doesn't have, and that there cannot be any absolute morality because in the end, "all things are equal" and everything will happen eventually. All things are not equal. Different distributions, manifolds and the laws of nature all mandate that different choices can have permanently different outcomes. Entropy is real and active. Laws are laws, not whims. Gravity comes to work all day, every day, no exceptions. Debt and interest do not rest. Choices matter, and they determine our ultimate destination.

pygosceles
  • 297
  • 1
  • 8
  • This is not accurate. The point of infinity as a concept is intentionally to "break" the real world (or "break away from" it). The argument above is "to" the real world, and thus is inapt. – Jeff Y Jan 31 '20 at 19:38
  • Infinity does not break any real-world concepts, and infinity of one resource does not alter the availability of other resources, therefore the "anything is possible" argument fails even in the presence of some infinite quantity. Mathematically, there is no hard guarantee that a given state will ever be reached even given infinite time. Epsilon and zero are not the same number. The physical limitations argument is entirely valid, and not overridden by infinite time. – pygosceles Jan 31 '20 at 19:47
  • I guess we'll have to agree to disagree. Infinity is a **very** useful intellectual tool, but I'm safe in declaring that there is no such thing in the physical world, anywhere. And mathematically there **is** a hard guarantee that monkeys typing infinitely **will** produce Shakespeare. But no physical infinities means it is **strictly** a thought experiment **only**. Here is an interesting link: https://plus.maths.org/content/do-infinities-exist-nature-0 – Jeff Y Jan 31 '20 at 20:17
  • @Jeff No physical infinities seems more of a conjecture than an established reality. What evidence do we have that space is not infinite in its extent? Or that time will end? Or that it had a beginning? – pygosceles Jan 31 '20 at 20:39
  • The Big Bang is strongly evidenced. No? I mean, we should not close ourselves out from seeking new contradicting evidence (micro black holes?). But neither should we speculate merely on the basis of no evidence (e.g. about something being "outside" or "before" our universe), **especially** if the speculation includes "and we never can nor will know one way or the other". – Jeff Y Jan 31 '20 at 21:31
  • The Big Bang is a conjecture--one which contradicts all of the laws of physics. Time and space are philosophically and concretely simple. Following Occam's Razor, space is intrinsically unbounded, as is time. – pygosceles Jan 31 '20 at 22:01
  • 1
    *In general there is no finite **or even infinite** number of trials large enough to make a guarantee that a given sequence will occur.* That's mathematically false. Finite, sure. But if you go infinite, then the maths proves that if there is a non-zero probability of an event occurring, however small the probability, it will occur. And not only that, it will happen infinitely many times! This is a "smaller infinity" than the total space available, but it's still infinite. It's a bit mindbending, but that's what Georg Cantor proved. In a non-infinite universe of course this isn't the case. – Graham Feb 01 '20 at 08:38
  • @Graham It is not false. The probability of an event occurring even in an infinite timeline is not equal to one, therefore it is not guaranteed. It might increase over time, but it never equals one. Proving that it does occur reduces to the halting problem, which is undecidable. Also, the independent sampling assumption is not true in the real world, which is dynamic, causal, strongly dependent on prior states, and bound by the limitations of immutable natural laws. Said another way, some things never have happened and never will. – pygosceles Feb 05 '20 at 17:14
  • 1
    @pygosceles No, a non-zero probability over a finite time always tends to 1 as time increases. If the probability in any finite period is M (where 0 < M < 1), then the probability of it never occurring across n periods is (1−M)^n, which tends to 0 as n grows without bound. – Graham Feb 05 '20 at 17:53
  • @Graham "tends to" is not the same as "equals". That's my point. From a purely mathematical perspective, there never arrives an actual guarantee that a given event will be observed. This is the math itself. The halting problem is germane; the question of whether the program may ever terminate is a question of patience, not of certainty. As the probability tends toward (but is not equal to) zero, the ability for humans or computers to differentiate between zero and nonzero probability degrades. A program that halts in some cases (revealing a nonzero probability event) can still be undecidable. – pygosceles Feb 05 '20 at 18:01
  • 1
    @pygosceles No, you're still thinking in terms of finite values. Until you can get past that, you're not going to grasp the concept. – Graham Feb 05 '20 at 18:11
  • 1
    @pygosceles re the halting problem, it has nothing to do with "patience" - if you think it does then you haven't understood it. The point of the halting problem is purely that it's possible to construct a program such that the probability of it halting is *exactly zero* and that analysis of the program cannot distinguish that it is exactly zero. If the probability is exactly zero, it will never end ***even in infinite time***, which is why I said 0 < M. – Graham Feb 05 '20 at 18:13
  • 1
    @pygosceles ... This doesn't mean that there are not programs where it *can* be proven that the program's probability of halting is non-zero, or proving that the program's probability of halting is zero. It only means that programs exist where this can't be proven. But this has no relevance to anything here. The probabilities *can* be stated, so the halting problem is a complete red herring. – Graham Feb 05 '20 at 18:17
  • @Graham No, I am definitely thinking in terms of infinite values. The probability as one moves toward infinity of samples of a finite-probability event moves *toward* zero but never actually reaches zero. This is according to vanilla mathematics. Calculus was invented to compensate for deficiencies such as these. The fact that Liebniz and others had to formalize this using additions to the framework proves that we are not working with a guarantee, but rather with a practical approximation. Once the approximation is admitted, the guarantee vanishes. Hence "almost surely", **not** "surely". – pygosceles Feb 05 '20 at 18:21
  • @Graham No. That is not what the halting problem is. The halting problem is that the question of determining whether a given program will halt on a given input is *uncertain*. That is why we have a class of semi-decidable problems that are considered generally undecidable but which actually do halt on some inputs. If one wishes to **guarantee** that an event **will** happen, one has to guarantee that a program simulating the sampling procedure **will** halt, which cannot be done in this case. If any simulation does halt, we can say that the outcome is possible, but we cannot say it is certain. – pygosceles Feb 05 '20 at 18:25