
Under uncertainty, a precise probability cannot be assigned; see my other question: How valid is assignment of probabilities when evidence is totally lacking, as in Pascal's Wager? In that situation, either no probability can be assigned, or a range of probabilities is assigned (in the case of complete uncertainty, the whole interval [0, 1]).

How does this change when there is some evidence, but not full evidence? For example, many cosmologists posit an infinite universe under the assumption that the universe is flat. Infinitude does not follow directly from flatness, yet many physicists say that it is "likely."

In general, how can evidence that slightly favors one hypothesis over another be used to support that hypothesis if the prior probability of the hypothesis is not known? Indeed, if the probability ranges over [0, 1] and any value in that interval can rationally be chosen, we could choose 1. By Bayes' theorem, that choice makes evidence useless for either supporting or refuting the hypothesis.
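To see why a degenerate prior freezes updating, here is a minimal sketch. The likelihood values P(E | H) = 0.8 and P(E | ¬H) = 0.4 are invented purely for illustration (evidence twice as likely if the hypothesis is true): any prior strictly inside (0, 1) is moved by the evidence, while priors of exactly 0 or 1 are fixed points of Bayes' theorem.

```python
def bayes_update(prior, p_e_given_h=0.8, p_e_given_not_h=0.4):
    """Posterior P(H | E) by Bayes' theorem, with illustrative likelihoods."""
    # Total probability of the evidence under both hypotheses.
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    if evidence == 0:  # prior = 0 combined with evidence impossible under ¬H
        return prior
    return p_e_given_h * prior / evidence

for prior in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(prior, round(bayes_update(prior), 3))
```

Every interior prior is pulled toward H by this evidence, but a prior of 0 stays at 0 and a prior of 1 stays at 1; that is the precise sense in which choosing an extreme prior makes evidence inert.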

Question: In what sense can we use evidence to change our probability for a given proposition if the prior probability of that proposition cannot be known (or ranges over an interval)?

Josh
  • Given that probabilities lie in the range [0, 1], we can apply some math to them; in particular, we can often compute maximum likelihoods. With enough repetition, the original distribution and prior probabilities become highly unlikely to matter, and by the Law of Large Numbers we can actually bound the likelihood that they do. The whole mechanism used by most scientists is modern Normal Theory statistics, which relies on the normality of measures on large enough data sets to give answers with less than a fixed probability of lying outside a given range. –  Aug 22 '19 at 19:56
  • @jobermark First, my question isn't really aimed at cases where evidence is abundant, but at cases where evidence is scarce. Second, I'm not sure it matters how much evidence you have if the probability is either 0 or 1 (Bayes' theorem). But yes, I think the probability does converge in (0, 1), though not in [0, 1]. Perhaps (0, 1) should be used in scientific questions? – Josh Aug 22 '19 at 20:15
  • It is not that prior probabilities cannot be known; it is that there is nothing to know. In many cases, [Bayesian priors](https://en.wikipedia.org/wiki/Prior_probability) are an artifice assigned more or less based on technical convenience (Wikipedia lists some schemes). It is how they are updated that really matters. – Conifold Aug 22 '19 at 20:44
  • @Conifold So the ability to incorporate evidence, and how much pull that evidence has, is dependent on the prior you choose? Is there, perhaps, another way to evaluate evidence besides having to update your prior? Besides, of course, frequentist probability. – Josh Aug 22 '19 at 22:20
  • Sure, Bayesian methodology has a lot of critics, starting with Popper. But applying probabilities to assessing the quality of evidence is a stretch to begin with, so it is not surprising that it leads to more stretches, and results of questionable meaningfulness. I think tracking the relative change of Bayesian probabilities has value, but the absolute numbers are often meaningless (and can be manipulated by shifting the prior). – Conifold Aug 22 '19 at 22:34
  • @Josh But then the answer is obviously "It can't; that is why we verify theories with more than one test." People don't judge theories on little bits of data; they judge them on subjective criteria, or on adequate data. –  Aug 22 '19 at 22:53
  • @Conifold What about probabilities that are generated "internally"? For example, a math problem where you have a best guess, but are not completely sure that that is, in fact, the correct answer. Can we meaningfully assign a probability here? It seems not, because there would be no evidence to persuade you that your answer is wrong, and evidence does exist to persuade you that it is correct. Then again, you could be wrong (perhaps previous experience can tell you you might be wrong?). If you can assign probability, how could you determine a specific, non-arbitrary value? – Josh Aug 24 '19 at 03:26
  • These are questions to those who do assign them, see [SEP Subjective Probability Theory](https://plato.stanford.edu/entries/formal-belief/#SubProThe). – Conifold Aug 24 '19 at 03:32
  • @Conifold Yes, but is it *rational* to assign in this case? It is not a case of complete ignorance, like what I've asked about in [my other question](https://philosophy.stackexchange.com/questions/64884/how-valid-is-assignment-of-probabilites-when-evidence-is-totally-lacking-as-in?noredirect=1&lq=1) (which you gave a very good answer to :)). There is knowledge to sway you one way or the other, but it is not clear how much one should be swayed. Is it just as rational to assign a 100% chance of being correct as a 50%, as the evidence is not objective (and, I suppose, fairly unhelpful)? – Josh Aug 24 '19 at 03:45
  • I do not subscribe to such assignments, so can't help you there. – Conifold Aug 24 '19 at 03:46
  • @Conifold So this is a case of "uncertain subjective probabilities," as you say in your other answer? Sorry for the confusion, but I'm not entirely sure when something passes from being "uncertain" to "certain." – Josh Aug 24 '19 at 04:14

0 Answers