4

Or, does it just sidestep it? Or, is it just completely unrelated to it? I’m having trouble seeing what the connection might be. It seems to me that Bayesians solve it by saying, take:

  1. Your prior credence in a hypothesis, H.

  2. Your prior credence in observing some evidence, E. (Sometimes P(E) is calculated by the law of total probability, P(E) = P(E|H)P(H) + P(E|not-H)P(not-H), and sometimes by breaking not-H down into a set of competing hypotheses.)

  3. Your conditional credence of E given H.

If you observe that E happens, then if your posterior credence in H is anything other than what Bayes’ theorem says it should be, you’ve reasoned incorrectly. You’re as objectively wrong as the guy who accepts ‘P’ and ‘if P then Q’ but does not accept ‘Q’. If you’re a sane reasoner who assigned anything like sensible credences to 1., 2., and 3. for the common “Humean examples” of induction (this A is a B, that A is a B, ..., therefore, probably all As are Bs), your credence in H will surely go up.
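To make the bookkeeping concrete, here is a minimal sketch in Python of the update described in 1.–3.; the particular credences are illustrative assumptions, not part of the argument:

```python
# Minimal Bayesian update, following steps 1-3 above.
# All of the numbers are illustrative assumptions.

p_h = 0.5               # 1. prior credence in hypothesis H
p_e_given_h = 0.9       # 3. conditional credence of E given H
p_e_given_not_h = 0.3   # conditional credence of E given not-H

# 2. prior credence in E, via the law of total probability
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem: posterior credence in H after observing E
p_h_given_e = p_e_given_h * p_h / p_e
print(p_h_given_e)      # 0.75 > 0.5, so observing E raised the credence in H
```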

Granted, Bayesian epistemology might have its own problems, such as the problem of the priors (where do our starting credences ultimately come from, and are there objectively correct priors or does anything go?), but this is not the same issue as Hume’s problem of induction, is it? I mean, deductive logic is concerned with validity, with getting from true premises to true conclusions, but it says nothing about the actual truth values of the premises. Likewise, inductive logic should be about getting from prior credences to posterior credences, but surely it’s no fault of an inductive logic (any more than in the case of deductive logic) that it doesn’t tell us what the prior credences are supposed to be?

Hopefully this all makes sense. What do epistemologists and philosophers of science think about the relation between Bayesian epistemology and Hume’s problem of induction?

Adam Sharpe
  • No. It does not even attempt to solve it; it only refines the mechanics of making inductive inferences and quantifies their reliability. But to the extent that the epistemological basis of induction is in doubt, since *a priori* justification is unavailable and an inductive one is circular, putting numbers on it does no more to remove the doubt than quantifying utility and regimenting its calculation does to remove the moral problems of utilitarianism. So Popper et al. apply their critique of induction directly to Bayesianism. Philosophical problems have no mathematical solutions, as someone quipped. – Conifold Feb 08 '21 at 23:03
  • At the base, we accept inductive arguments because our brain is structured in such a way that it accepts them. We're fortunate to have a brain like that, because it helps us find things out and achieve goals we could not otherwise achieve. Other agents might not have brains structured to accept such arguments, and they would be at a disadvantage as a result. The ultimate epistemological basis of induction is "that's how our brains happen to work." In fact, we accept deductive arguments for the same reason; our brains are just structured to do so. – causative Feb 08 '21 at 23:27
  • @causative Thanks for your answer! I liked and upvoted it. I'm waiting for some other answers though, because, while your answer helps explain how we use induction and how we might implement inductive reasoning on a computer or in our brains (especially the bolded part of your answer onwards), I'm mainly interested in *justifying* the correctness of induction (in light of Hume's problems, etc.). – Adam Sharpe Feb 09 '21 at 01:19
  • "the problem of induction just becomes the problem of the priors" -- But consider something like the [grue paradox](http://people.loyno.edu/~folse/grue.html)--until the year 2100, new observations of emeralds remaining green would give just as much support to the hypothesis "emeralds are grue" to the hypothesis "emeralds are green", so until you wake up and look at an emerald on Jan. 1 2100, if you're a Bayesian there can be *nothing but* your priors to justify the belief that emeralds will still be green as opposed to the belief that they will suddenly be blue. – Hypnosifl Feb 09 '21 at 01:24
  • @Conifold I'm not too sure Bayesians don't attempt a solution or that a priori justification isn't available though (i.e. using probability theory plus the assumption that credences must obey probability theorems). So, I've been reading a bit more about it and the SEP article on the problem of induction discusses a possible Bayesian solution (plato.stanford.edu/entries/induction-problem/#BayeSolu). It seems to me that if probability theory gives us a priori correct machinery for updating credences, the problem of induction just becomes the problem of the priors. – Adam Sharpe Feb 09 '21 at 01:31
  • @Hypnosifl Oops, sorry, I just deleted the comment you responded to, because I made a mistake but it was too late to edit it (I re-added it directly above though). In response: yeah, I think that makes sense (for reasons that I believe go beyond what I wrote in my OP)... I think any theory that makes reference to grue objects is inherently less simple (involves more complexity), involves less symmetry, or has other a priori "vices" than a theory with green objects, and so would have a lower prior probability of being true. – Adam Sharpe Feb 09 '21 at 01:35
  • Does this answer your question? [From "what-is" to "why-is" to "ought" -- can "why-is" close the "is-ought" gap?](https://philosophy.stackexchange.com/questions/74266/from-what-is-to-why-is-to-ought-can-why-is-close-the-is-ought-gap) – Yuri Zavorotny Feb 09 '21 at 07:58
  • I think "Bayesian solution" so taken misses Hume's point, as mathematical "solutions" usually do (think of Zeno's paradoxes). It is not that we lack justification for inferring something in the future from something in the past, or something broad from something narrow, by *assuming* some fixed/general setup (with urns in SEP example). It is that we lack justification for assuming that any such setup is fixed/general, for Mill's "uniformity of nature". SEP politely admits as much by calling it of "relatively limited scope" when talking about the binomial parameter. Priors are a side issue. – Conifold Feb 09 '21 at 12:53
  • @Conifold I guess I think of it a bit differently. The important insight of Bayesian epistemology isn't the fancy mathematics of probability (which I think is every bit as necessarily true or a priori as arithmetic is), it's the insight that rational credences are probability measures. The premise is so general, but interesting stuff follows. For example, we know that P(H|E) > P(H) iff P(E|H) > P(E|not-H). Letting H be the hypothesis “all ravens are black”, and E be “this particular raven is black”, then that P(E|H) > P(E|not-H) seems so obvious to me that I want to call it a basic belief... – Adam Sharpe Feb 09 '21 at 17:45
  • ...In possible worlds where H is true, every single instance of a raven will be black, but in worlds where there are ravens but H isn’t true, there will be a mixture of black and non-black ravens. A priori, we're more likely to find black ravens in worlds where all ravens are black, so in the raven case if someone's credences violate 'P(E|H) > P(E|not-H)' I think the burden is on them to explain why. Otherwise, regardless of the exact amount of confirmation, I'm reasoning perfectly by saying each observation of a black raven confirms that all ravens are black *at least a little bit*. – Adam Sharpe Feb 09 '21 at 17:45
  • "I think any theory that makes reference to grue objects is inherently less simple" But isn't the classic Humean point about induction that we have no inductive justification for the idea that the future will resemble the past, which is an aspect of talking about complexity vs. simplicity in natural laws? It's more of a philosophical presupposition (and/or something ingrained in our biology). And a Bayesian approach to the grue example supports this by showing we're stuck relying 100% on priors to justify "emeralds are green" as opposed to "emeralds are grue". – Hypnosifl Feb 09 '21 at 21:50

2 Answers

3

First, it should be recognized that probability is only one possible formalism for modeling uncertainty; Dempster-Shafer theory is another. Bayesian inference cannot be applied to Dempster-Shafer theory without modification, so we cannot say that Bayesian inference is completely universal. It only applies if we model uncertainty using probabilities, which we don't have to do.
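For contrast, here is a rough sketch in Python of Dempster's rule of combination over a tiny two-element frame; the frame and the mass values are illustrative assumptions. Unlike a single probability assignment, a mass function can reserve belief for the whole frame, i.e. for "don't know":

```python
from itertools import product

# Dempster's rule of combination over a tiny frame of discernment.
# The frame ({'black', 'nonblack'}) and the mass values are illustrative.
def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y              # mass landing on contradictory subsets
    # Normalize by the non-conflicting mass (Dempster's rule)
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

theta = frozenset({'black', 'nonblack'})   # the whole frame = "don't know"

# Two sources of evidence about whether a given raven is black.
m1 = {frozenset({'black'}): 0.6, theta: 0.4}
m2 = {frozenset({'black'}): 0.5, frozenset({'nonblack'}): 0.2, theta: 0.3}

print(combine(m1, m2))   # some belief stays on the whole frame, i.e. uncommitted
```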

The next caveat is that full Bayesian inference is too computationally difficult to apply in real life. It can be used on toy domains, and it can be applied very approximately to real-life propositions. Full Bayesian inference requires assigning a probability to every single possible world, and updating these probabilities as observations eliminate possible worlds, which is very costly. AIXI is an imaginary agent that can do full Bayesian inference, and it is not even computable.
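As a caricature of that, here is a toy sketch in Python in which the "possible worlds" are just five candidate coin biases; the point is only that the bookkeeping scales with the number of worlds, which is what makes the full version intractable when the worlds are entire world-histories:

```python
# A toy version of "assign a probability to every possible world and update".
# Here a "world" is just a candidate bias for one coin; in a real domain the
# worlds would be entire world-histories, which is what makes this intractable.
worlds = [0.1, 0.3, 0.5, 0.7, 0.9]             # candidate coin biases
prior = {w: 1 / len(worlds) for w in worlds}   # uniform prior over worlds

def update(belief, saw_heads):
    """Reweight every world by how likely it makes the observation."""
    unnorm = {w: p * (w if saw_heads else 1 - w) for w, p in belief.items()}
    total = sum(unnorm.values())
    return {w: v / total for w, v in unnorm.items()}

posterior = prior
for obs in [True, True, False, True]:          # heads, heads, tails, heads
    posterior = update(posterior, obs)
print(posterior)                               # mass shifts toward heads-biased worlds
```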

Current intelligent agents often use Maximum A Posteriori (MAP) or Maximum Likelihood estimates, or other approximate methods, instead of full Bayesian inference, because these estimates are more computationally tractable.
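A small sketch of the difference, assuming a Beta-Binomial coin model with made-up counts: the full Bayesian treatment keeps a whole posterior distribution, while MAP and maximum likelihood collapse it to a single point:

```python
# Full posterior vs. point estimates for a coin's bias, using a Beta-Binomial
# model purely because it has closed forms; the prior and counts are made up.
heads, tails = 7, 3
alpha, beta = 2, 2                 # Beta(2, 2) prior over the bias

# Posterior is Beta(alpha + heads, beta + tails); summarize it three ways.
posterior_mean = (alpha + heads) / (alpha + beta + heads + tails)          # full Bayesian summary
map_estimate = (alpha + heads - 1) / (alpha + beta + heads + tails - 2)    # posterior mode (MAP)
ml_estimate = heads / (heads + tails)                                      # ignores the prior entirely

print(posterior_mean, map_estimate, ml_estimate)   # ~0.643, ~0.667, 0.700
```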

Essentially, the purpose of induction is to enable an agent to effectively discover things about the world, and perhaps to use those discoveries to act effectively in service of some goal. Any method whatsoever that achieves this end can be called a solution to the problem of induction, whether it's full Bayesian inference, MAP, Dempster-Shafer, or something else. Note that any solution is constrained by the no free lunch theorem: no agent can perform induction unless it already has encoded within itself prior information about the universe in which it finds itself. Fortunately, that prior information can be very general; e.g., AIXI's prior simply says that the world is generated by a computable program, with a probability distribution over such programs, and that's enough.
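To gesture at what such a general prior might look like, here is a toy sketch of a simplicity-weighted prior in the spirit of Solomonoff induction; the "description lengths" are invented numbers, not lengths of real programs:

```python
# Toy simplicity prior: weight each hypothesis by 2 ** -(description length).
# The lengths below are invented stand-ins, not real program lengths.
hypotheses = {
    'all emeralds are green': 10,                           # assumed: 10 "bits"
    'emeralds are green until 2100, then blue (grue)': 25,  # assumed: 25 "bits"
}
unnorm = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(unnorm.values())
prior = {h: w / total for h, w in unnorm.items()}
print(prior)   # the shorter description gets almost all of the prior mass
```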

The most impressive agents in machine learning don't even use any of those rigorous mathematical methods. Agents like AlphaStar have learned through trial and error to use and update a vector representation of the world, through techniques that do not exactly correspond to full Bayesian, MAP, Dempster-Shafer, or any other exact human-invented formalism. (Some of these formalisms were used in the creation of AlphaStar, but that's different from how AlphaStar works internally).

And what about humans? We don't exactly know how human neurons manage to organize themselves to effectively perform induction, but they do it somehow. We can be reasonably confident that we don't naturally perform full Bayesian inference, nor exact MAP inference, nor any precise formal inference. The human brain's version of induction seems a lot more similar to AlphaStar than to full Bayesian inference; in other words, it's a collection of learned techniques that somehow work without a strong formal basis.

causative
2

Bayesian epistemology does not solve or sidestep the problem of induction. Concrete Bayesian computations give you a way to update beliefs, modeled as probabilities, on the basis of observations, given a particular probabilistic model of the world. If that model is wrong, or if the world changes in such a way that the model becomes wrong, the computed probabilities will not correspond to accurate beliefs about how the world behaves.
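A toy illustration of that point, with a deliberately misspecified model (every detail here is invented for the example): the updater's hypothesis space contains only fixed coin biases, so when the world changes it ends up confident in a hypothesis that was never true at any time:

```python
import random

# The updater's model assumes one fixed coin bias; the "world" actually
# switches bias halfway through. Everything here is invented for illustration.
random.seed(0)
worlds = [0.1, 0.5, 0.9]                         # model: candidate fixed biases
posterior = {w: 1 / len(worlds) for w in worlds}

def update(belief, saw_heads):
    unnorm = {w: p * (w if saw_heads else 1 - w) for w, p in belief.items()}
    total = sum(unnorm.values())
    return {w: v / total for w, v in unnorm.items()}

for t in range(200):
    true_bias = 0.9 if t < 100 else 0.1          # the world changes at t = 100
    posterior = update(posterior, random.random() < true_bias)

# The posterior ends up almost entirely on bias 0.5, a hypothesis that was
# never true of the world at any point in time.
print(posterior)
```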

Dave