I suspect there are two problems with your understanding here:

1. It is difficult to grasp just how improbable some things are.
2. Alternative explanations for your experiment change how the math behaves in unexpected ways.
Addressing the first:
One problem with Bayesian credence is humans are bad at understanding what 0.00000000001% means.
Imagine you had a hypothesis that an invisible dragon was making your coins always land heads. This hypothesis is very specific, and it is hard to work out what credence you should assign to it. So let's start with a slightly easier one.
The hypothesis that your coin -- the one you just pulled out of your pocket, with no special history -- will almost always land heads. The invisible dragon hypothesis implies this one, so this is a weaker hypothesis.
What is the credence you have for that hypothesis? Well, we can run Bayesian analysis backwards. How many heads in a row would it take for you to say "well, that actually seems likely"? 10? 100? 1000?
Suppose you are pretty sure this coin is utterly normal. So it would take 1000 heads in a row for you to be 50-50 that this coin will almost always land heads.
P(A|B)=P(B|A)P(A)/P(B)
Here P(A) is "coin will almost always land heads", P(B) is "1000 heads in a row".
0.5 = P(Coin will almost always land heads|1000 heads)=P(1000 heads|coin will almost always land heads)P(coin will almost always land heads)/P(1000 heads)
P(1000 heads) is 2^-1000, or roughly 10^-301.
P(1000 heads | coin will almost always land heads) is ~1.
0.5 = P(coin will almost always land heads|1000 heads) =~ 1 * P(coin will almost always land heads)/10^-301.
So P(coin will almost always land heads) =~ 0.5 in 10^301.
Or, 0.(place 301 0s here)5
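If you want to check that arithmetic, here is a minimal Python sketch of the same backwards Bayes calculation (the variable names are just mine; exact fractions are used so the tiny numbers don't underflow):

```python
from fractions import Fraction
import math

# Run Bayes backwards: what prior P(A) would make the posterior P(A|B) = 0.5,
# where A = "coin almost always lands heads" and B = "1000 heads in a row"?
#   P(A|B) = P(B|A) * P(A) / P(B)   =>   P(A) = P(A|B) * P(B) / P(B|A)
posterior   = Fraction(1, 2)          # "50-50 after 1000 heads"
p_b_given_a = Fraction(1, 1)          # a heads-only coin makes 1000 heads ~certain
p_b         = Fraction(1, 2) ** 1000  # 1000 heads from a fair coin: 2^-1000

prior = posterior * p_b / p_b_given_a
print(prior == Fraction(1, 2) ** 1001)  # True: the prior works out to 2^-1001
print(math.log10(float(prior)))         # about -301.3, i.e. roughly 0.5 in 10^301
```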
Now we can chain it again. Given that you have a coin that (for no reason you yet know; there is no special history of this coin) almost always lands heads, what would be the probability it is an invisible dragon that is doing it telekinetically?
This requires that there be an invisible creature, that this creature be using telekinesis, and that the invisible creature be a dragon. Each of these is going to be less likely than the above.
So the "proper" initial credence value is going to be extremely hard to express, even exponentially, because you have to chain together a bunch of requirements each of which is exceedingly unlikely.
To work out how unlikely, you can again use inverse Bayes. You can work out the kind of evidence that would be required to make it reasonably credible, and how unlikely that evidence would happen if it wasn't true.
I suspect you'll need fancy notation in the end. Like maybe Knuth's up-arrow notation.
A bonus effect is that Bayes' theorem lets you take even extremely unlikely things -- things with a roughly 1 in 10^301 chance of being true -- and in a relatively short experiment (1000 coin flips) make them "ok, that is a reasonable explanation".
The real weakness is that knowing P(thing you don't believe in) is not easy.
When dealing with such extremely unlikely things, it might help to think logarithmically.
lg(P(A|B)) = lg(P(B|A)) + lg(P(A)) - lg(P(B))
lg(P(unlikely|evidence)) = lg(P(evidence|unlikely)) + lg(P(unlikely)) - lg(P(evidence))
lg of a probability is always negative (or zero). So let's define a new term -- E. E(X) = -lg(P(X)). E is positive.
E(unlikely|evidence) = E(evidence|unlikely) + E(unlikely) - E(evidence)
Using base 2 for logs, 0.5 probability corresponds to E of 1.
1 = E(fixed coin|1000 heads)
0 = E(1000 heads|fixed coin)
? = E(fixed coin)
1000 = E(1000 heads)
1 = 0 + E(fixed coin) - 1000
E(fixed coin) = 1001 (i.e. a prior of 2^-1001, matching the number from before)
0 in this scale is certain -- the bigger the number is, the more surprising it is.
To convert back to probability, just take 0.5^E.
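Here is the same calculation done with the E bookkeeping, as a small Python sketch (the E function is just this post's notation, nothing standard):

```python
import math

def E(p):
    """Surprise in bits: E(X) = -lg(P(X)), using base-2 logs."""
    return -math.log2(p)

# E(fixed coin | 1000 heads) = E(1000 heads | fixed coin) + E(fixed coin) - E(1000 heads)
e_posterior  = E(0.5)  # 1 bit: "50-50 after seeing the evidence"
e_likelihood = 0       # P(1000 heads | fixed coin) ~ 1, so ~0 bits of surprise
e_evidence   = 1000    # 1000 fair flips all landing heads = 1000 bits

# Rearranged: E(fixed coin) = E(posterior) - E(likelihood) + E(evidence)
e_prior = e_posterior - e_likelihood + e_evidence
print(e_prior)         # 1001.0 bits
print(0.5 ** e_prior)  # back to a probability: 2^-1001, matching the earlier number
```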
This exponential scale might give you a better way to think about unlikely events: how many otherwise even-odds events would have to come out in sync with your unlikely hypothesis before you would say "well, that is now actually likely"?
That is the "E" level of your unlikely hypothesis.
And the E level of something like "invisible dragon telekinetically making this coin land heads" might be a googol.
Addressing the second:
The next problem is with the naive application of this theory.
The problem is that a pile of heads wouldn't ever cause you to actually conclude something as specific as a telekinetic invisible dragon. There are plenty of other reasons why the coin might always land heads (it has heads on both sides, there is a magnet in it that lets it be remote controlled, whatever).
This impacts P(1000 heads), because it isn't actually 1 in 2^1000. In fact, the more heads you have seen, the more likely you are to conclude the coin is fixed, and the less surprising each new head becomes!
When we are talking about a fixed coin, the alternative is the coin isn't fixed. So
P(fixed coin|1000 heads) = P(1000 heads|fixed coin) * P(fixed coin) / P(1000 heads)
but P(1000 heads) = P(1000 heads | fixed coin) * P(fixed coin) + P(1000 heads | fair coin) * P(fair coin).
Our assumption that P(1000 heads) = 1/2^1000 relied on the fact that P(1000 heads | fair coin) * P(fair coin) is 1/2^1000 and P(1000 heads | fixed coin) * P(fixed coin) is small.
However, once enough heads have come up that the posterior P(fixed coin | heads so far) approaches 0.5, this assumption is no longer safe.
If there were a small chance of a coin being fixed (in any way) to always land heads, say 1 in 100, then P(1000 heads) is actually about 1%! It doesn't generate boundless amounts of evidence.
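A quick Python sketch of that two-hypothesis mixture (the 1-in-100 prior is just the made-up number from above):

```python
from fractions import Fraction

p_fixed = Fraction(1, 100)  # made-up prior: 1 in 100 coins are fixed to land heads
p_fair  = 1 - p_fixed

def p_heads(n):
    """P(n heads in a row) under the mixture of 'fixed' and 'fair'."""
    return 1 * p_fixed + Fraction(1, 2) ** n * p_fair

def p_fixed_given_heads(n):
    """Posterior P(fixed coin | n heads) via Bayes' theorem."""
    return 1 * p_fixed / p_heads(n)

print(float(p_heads(1000)))  # ~0.01, nowhere near 2^-1000
for n in (5, 10, 20, 1000):
    print(n, float(p_fixed_given_heads(n)))
# 5 -> ~0.24, 10 -> ~0.91, 20 -> ~0.9999, 1000 -> ~1.0: the evidence saturates
```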
Applied to the invisible dragon case: because there are many other ways to fix a coin besides an invisible TK dragon, a boundless run of heads stops giving evidence for the dragon once it has made "the coin is fixed somehow" nearly certain. At that point, to get more evidence for the dragon you need a test that distinguishes the invisible TK dragon from the other ways the coin could be fixed.
In log space, E(fixed coin) might be 100, and E(fixed by TK) might be 10^100, and E(dragon is doing it) might be 10^10^100, and E(dragon is invisible) might be 10^10^10^100.
And the probabilities might roughly multiply, which means the E (evidence required) roughly adds up.
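As a sanity check on that multiply-becomes-add claim, here's a tiny Python sketch (the individual probabilities are made up, purely to exercise the bookkeeping):

```python
import math

def E(p):
    return -math.log2(p)

# If the layers of the hypothesis multiply as probabilities...
p_fixed           = 2 ** -10   # made-up numbers, only for illustration
p_tk_given_fixed  = 2 ** -20
p_dragon_given_tk = 2 ** -30

p_whole_story = p_fixed * p_tk_given_fixed * p_dragon_given_tk
# ...then their E values add:
print(E(p_whole_story))                                         # 60.0
print(E(p_fixed) + E(p_tk_given_fixed) + E(p_dragon_given_tk))  # 60.0
```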
P(unlikely event) = Sum P(unlikely event | S_i) * P(S_i)
You have to, in a sense, consider every situation when doing Bayesian analysis.
If you arrange your situations in a certain way, this is easy. Imagine your hypotheses were "the coin flips are fair" and "the coin is not fair". This covers the entire universe of possibilities, so there are only 2.
But if you measure the chance of 1000 heads in a row as 1 in 2^1000, you are implicitly splitting the universe into "there is an invisible dragon using TK to control the coin" and "the coin flips are fair", with the probability of the dragon being insanely small.
What really needs to be done is "fair coin", "invisible dragon", "alternative explanation for unfair coin".
With that model, a large number of heads in a row adds evidence to the union of "invisible dragon" and "alternative explanation". And because the alternative explanations for an unfair coin are insanely more likely than the dragon, more and more heads don't actually move the dragon out of infinitesimal territory.
To use Bayes to pull the dragon out of an infinitesimal chance of being true, you are going to have to provide more and more observations that split the invisible dragon from the alternatives. And the harder the dragon is to detect, the more extreme that evidence is going to have to be, mathematically.
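To see that numerically, here is a rough three-hypothesis sketch in Python (all the priors are made-up illustrative numbers, and the dragon's is absurdly generous compared to the E levels above):

```python
# Three hypotheses instead of two; priors are purely illustrative.
priors = {
    "fair coin":        0.99,
    "ordinary fix":     0.01,    # double-headed, weighted, magnetic, ...
    "invisible dragon": 1e-100,  # wildly generous compared to the real prior
}
# Both kinds of fixed coin predict heads every time; a fair coin does not.
p_heads_per_flip = {"fair coin": 0.5, "ordinary fix": 1.0, "invisible dragon": 1.0}

def posteriors_after(n_heads):
    joint = {h: priors[h] * p_heads_per_flip[h] ** n_heads for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

for n in (10, 1000):
    print(n, {h: f"{p:.2e}" for h, p in posteriors_after(n).items()})
# Heads alone drive "ordinary fix" toward ~1, but the dragon only climbs from
# 1e-100 to ~1e-98 and then stops: heads cannot tell the dragon apart from the
# other fixes, so only dragon-specific observations can move it further.
```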