The skeptical argument against the real world hypothesis (RWH) is this:
1. If I'm not justified in believing I'm not a brain in a vat, then I'm not justified in believing I have hands.
2. I'm not justified in believing I'm not a brain in a vat.
3. Therefore, I'm not justified in believing I have hands.
Premise (1) follows from the undeniable closure of justification under known entailment: if I were justified in believing I have hands, then, since having hands entails not being a brain in a vat, I would be justified in believing I'm not a brain in a vat; premise (1) is just the contrapositive. So you might instead try objecting to premise (2), insisting that you are in fact justified in rejecting the possibility of being a brain in a vat. You might say: look, here I have hands! Here's one, and here's the other. I can feel them, I can touch them: is this not good enough reason to believe they are real? But then again, if all this were an illusion and you were just a brain in a vat, what would you expect to be any different? Every experience of having hands supports the claim that you have hands just as well as it supports the claim that you're a brain in a vat being tricked into thinking you have hands.
By the very construction of a skeptical hypothesis (SKH), it has the consequence that, for all possible experiences e, e is just as much expected on SKH as it is on RWH: Pr(e|SKH) = Pr(e|RWH). This is a plausibility judgement rather than a probability judgement, in that the Pr function outputs the level of expectation or surprise we have (or should have) in its arguments.
Bayesians update their beliefs in accordance with Bayes' theorem upon being presented with evidence. So, given that the two hypotheses are empirically equivalent, as long as the priors are also equal it is impossible for the Bayesian to confirm RWH over SKH on evidential grounds. But now consider how the disparity between two hypotheses grows exponentially as evidence is accumulated, and how much evidence we really have (every experience of every moment of our lives). Even if two hypotheses are empirically equivalent, the same evidence could support one dramatically more than the other if the priors weren't equal. And, given that the two hypotheses are incompatible, we are then forced to reject SKH in favour of RWH. We can then happily say that we know SKH to be false. Notice that I am not simply Moore shifting: I am not saying I know SKH is false because I know RWH is true. Even under the assumption that I don't have good enough reason to believe RWH, we still have good enough reason to reject SKH. The skeptical hypothesis is defeated even if the real world hypothesis cannot be established. And so, if we could justifiably say that Pr(RWH) is even a little bit greater than Pr(SKH), we would have a solid answer to external world skepticism.
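The point can be made precise with the odds form of Bayes' theorem. What follows is only a sketch of the arithmetic, treating a run of experiences e1, ..., en as conditionally independent given each hypothesis for simplicity:

    Pr(RWH|e) / Pr(SKH|e) = [Pr(e|RWH) / Pr(e|SKH)] × [Pr(RWH) / Pr(SKH)]

    Pr(RWH|e1,...,en) / Pr(SKH|e1,...,en)
        = [Pr(e1|RWH) / Pr(e1|SKH)] × ... × [Pr(en|RWH) / Pr(en|SKH)] × [Pr(RWH) / Pr(SKH)]

When two hypotheses genuinely disagree about what to expect, each likelihood ratio differs from 1 and the product typically grows (or shrinks) exponentially with n; that is the sense in which accumulating evidence normally swamps the priors. But when the hypotheses are empirically equivalent, every likelihood ratio is exactly 1, so the posterior odds just are the prior odds: equal priors leave the two hypotheses forever tied, while any prior advantage for RWH is carried through every update.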
It all comes down, then, to establishing these priors. Traditionally, Bayesians address the priors of two competing hypotheses by simply ignoring them. They would say: set the priors to whatever you like, and given the accumulation of enough evidence, one hypothesis will eventually overwhelm the other. If we see a dramatic tendency for one hypothesis to be better evidenced than another, we can inductively infer that this pattern will continue and that future findings will keep favouring the one over the other.
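As a toy illustration of this "washing out" of the priors (the numbers are invented purely for the sake of the example): suppose each observation is only slightly better expected on a hypothesis H1 than on a rival H2, with a likelihood ratio of 1.01 per observation. After a thousand observations the posterior odds have shifted in favour of H1 by a factor of roughly

    1.01^1000 ≈ 2.1 × 10^4,

so any non-extreme choice of priors is quickly swamped. This is why Bayesians can usually afford to shrug at the question of where the priors come from.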
But of course this strategy fails here, since RWH and SKH are empirically equivalent: gather as much evidence as you like, it will never favour one over the other so long as the priors are equal. The Bayesian is then left with two options. On the one hand, he can arbitrarily set the prior probabilities to favour RWH, and in doing so give a rather unsatisfying answer to the skeptic; if all we can muster is an arbitrary rejection of skepticism, then it seems we have failed to offer a genuine response. On the other hand, the Bayesian can seek out a priori principles to guide our judgement of priors. There is already one such principle at play, the principle of indifference: in the complete absence of probabilistic information, we should treat the priors as if they are equal. And it is exactly this, or at least something very much like it, that fortifies the skeptic's argument.
Notice that this principle is cashed out in terms of what we should do, or what our expectations should be. But what sorts of principles could these be? It seems futile to come up with principles that track the objective probability of a hypothesis, so we should understand these principles as guiding our expectations. But then there is something inherently normative about them: they tell us which expectations are appropriate to have. An epistemic realist (a term coined by analogy with moral realism) might say that there are simply normative facts governing how we should compare hypotheses. They might say that someone who accepts ad hoc explanations is doing something they simply shouldn't be doing. And it's not because ad hoc explanations are more likely to be false (indeed, it's unclear how such a claim could be defended), but because an explanation's being ad hoc constitutes reason in and of itself to reject it or, at least, to be more suspicious of it.
Indeed, there are probably many more intrinsically good or bad features for a hypothesis to have: parsimony, explanatory fit, falsifiability, perhaps even aesthetic appeal.
What we need, then, are a priori principles by which we can favour one hypothesis over another. And this we have: it is natural to think that parsimony and explanatory power are intrinsically good features for a hypothesis to have, and that being ad hoc or unnecessarily complicated is an intrinsically bad one.