Hawthorne's *Knowledge and Lotteries* starts (more or less) with an account of why we are reluctant to say we know our lottery ticket won't win. (I don't think he needs to start with this--given what he does, he's not obligated to explain our intuitions--but he does.) His explanation is that we use "parity reasoning"--we think of "This ticket won't win" as a member of an epistemic space divided into subcases p_{1}...p_{n} such that we have about the same reason to believe that each of p_{1}...p_{n} will not obtain. In this case p_{1}...p_{n} will be the propositions that the different tickets win. If it's absurd to think that we can know that all of p_{1}...p_{n} will not obtain, we reckon ourselves unable to know, of our favorite member of p_{1}...p_{n}, that it won't obtain.

The "about the same" qualification takes care of minor variations among the probabilities--the lottery tickets needn't be evenly weighted for the paradox to go through. And the schema is flexible enough to take care of some cases of radically different weightings--on p. 15 Hawthorne discusses a case in which 5 tickets have 10% chance each and the rest have a tiny chance, and points out that we'll divide that up as [one of the biggies wins/one of the rest wins], with parity reasoning operating within [one of the rest wins].
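The grouping in that p. 15 case is easy to make concrete. Hawthorne fixes only the five 10% tickets; the remainder below (5,000 tickets splitting the leftover 50% evenly) is my own illustrative assumption, not his:

```python
from fractions import Fraction

# Five "biggie" tickets at 10% each ...
big = [Fraction(1, 10)] * 5
# ... and, as an illustrative assumption, 5,000 more tickets
# splitting the remaining 50% evenly.
rest = [Fraction(1, 2) / 5000] * 5000

# The two groups exhaust the space.
assert sum(big) + sum(rest) == 1

# Parity reasoning operates within each group, not across the split:
# the "rest" tickets are exactly on a par with one another, while any
# biggie is 1,000 times as likely to win as any one of them.
assert rest[0] == rest[-1]
assert big[0] / rest[0] == 1000
```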

But I'm not sure it can account for coin flips; on the other hand, I'm not sure it has to.

Suppose someone's going to flip a coin till it comes up tails; is there some n such that you know it will come up tails within n flips?

It's not obvious that parity reasoning prevents us from answering "Yes." Take the proposition "The coin will come up tails within 1000 flips." Let p_{i} be "Tails comes up first on the ith flip"--we're interested in the proposition that none of p_{1001} and higher obtains.

But the members of this space aren't roughly equal. Each p_{i} is twice as likely to obtain as the next. So I have more reason to think that the 1001st subcase won't obtain than that any of the previous 1000 won't. Thus, though it's absurd to think that we can know that all of p_{1}...p_{n}... will not obtain, this doesn't imply that it's absurd to think that we can know that p_{1001} won't obtain, or similarly for any of the higher ones.
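The halving structure is easy to verify numerically; a quick sketch (just the probabilities of the subcases, nothing from Hawthorne):

```python
from fractions import Fraction

# P(tails first comes up on the i-th flip of a fair coin):
# i - 1 heads followed by one tails, so p_i = (1/2)^i.
def p(i):
    return Fraction(1, 2) ** i

# Each subcase is exactly twice as likely to obtain as the next ...
assert p(1) == 2 * p(2)
assert p(999) == 2 * p(1000)

# ... so the space is nowhere near parity: the first subcase is
# astronomically more likely than the thousandth.
assert p(1) / p(1000) == 2 ** 999
```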

You might say that "about the same" should be measured in terms of absolute probability--that p_{1000} and p_{999} are about equally likely because the one is only 2^{-1000} more likely than the other. But if there's a threshold for what differences count as "about the same" here, that threshold should be pretty low. So this cutoff still won't get you from the absurdity of knowing that no p_{i} will obtain to the absurdity of knowing that some particular p_{i} won't obtain, if i is high enough.

Say the cutoff is 1/1000: if the probability of q is within 1/1000 of the probability of r, we have about the same reason to believe that q and r will obtain. Then we have about the same reason to believe that any of the p_{i} will obtain for any i higher than 9. But that doesn't mean that it's absurd to think that we can know that p_{i} won't obtain, i > 9, unless you think it's absurd to think that we can know that *every* p_{i} won't obtain for i > 9. And that's not obviously absurd--the chance that some such p_{i} obtains is (1/2)^9 = 1/512, which is less than double the threshold for epistemic insignificance.
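The arithmetic here checks out; a minimal verification, assuming the 1/1000 cutoff stipulated above:

```python
from fractions import Fraction

p = lambda i: Fraction(1, 2) ** i   # P(tails first on flip i)
threshold = Fraction(1, 1000)

# For i > 9 every p_i is already below 1/1000 (p_10 = 1/1024), so any
# two of them differ by less than the cutoff: "about the same".
assert all(p(i) < threshold for i in range(10, 1001))

# The chance that *some* p_i with i > 9 obtains is the chance that the
# first nine flips are all heads: (1/2)^9 = 1/512 ...
no_early_tails = Fraction(1, 2) ** 9
assert no_early_tails == Fraction(1, 512)

# ... which is indeed less than double the 1/1000 threshold.
assert no_early_tails < 2 * threshold
```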

(Anyway, the absolute probability standard for "about the same" is obviously wrong. Probability q is not about the same as probability 0, no matter how low q is. Sorry I ascribed this to you.)

Perhaps Hawthorne would be OK saying that we do think we know that the coin will come up tails within 1000 flips. I'm more or less OK with saying that--but I'm also OK with saying that we know a lottery ticket will lose in many circumstances (as Hawthorne acknowledges). I'm not sure you can get one without the other.

Rather than depend on parity reasoning, I think we're better off saying that we're unwilling to ascribe knowledge as soon as we conceptualize the space in probabilistic terms. The cases in which we do ascribe knowledge (Hawthorne's example: Of course all 60 golfers won't get a hole in one!) are ones in which probability isn't front and center.

The better part of valor, however, may be to go ask the psychologists. Why we don't ascribe knowledge is an empirical question. And it's not the question that Hawthorne winds up concerning himself with; that's when we *should* ascribe knowledge. (I think philosophers should give up that question, too, but that's another post.)
