June 26, 2004

Can Justification Just Fall Short of Knowledge?

(Cross-posted to Certain Doubts)

We all know that justified true belief can fail to be knowledge when funny stuff happens (or at least most of us think this). What I want to ask is whether a JTB can fail to be knowledge for a more mundane reason--because the belief is justified, but it isn't justified enough to count as knowledge.

Another way, perhaps, to put this is to question a line from section 6 of Ralph Wedgwood's paper "The Aim of Belief": "[T]here is no way for a rational thinker to pursue the truth except in a way that, if it succeeds, will result in knowledge." Is this so?

Here's a case I'd like to survey you on. Charlie Brown, a baseball general manager, is trying to decide who to pick in the amateur draft. He looks at the prospects and comes to believe, based on his high school performance, that Joe Shlabotnik will be a good major league player someday. Indeed, Joe does turn out to be a good major leaguer. So Charlie had a true belief; it also seems as though it may have been justified, because it was based on performance. Yet I would think that it falls short of knowledge, because predicting someone's eventual major league performance on the basis of his high school performance is too uncertain.

(Apologies to non-baseball fans; the argument probably transfers to any sport, though baseball performance is notoriously difficult to predict.)

Indeed, I'd argue that Charlie is much better off knowing that his pursuit of the truth about Joe's future performance will not result in knowledge. I'm convinced by Tim Williamson's argument that one of the advantages of knowledge over JTB is that it is less likely to be abandoned in the face of counterevidence. Yet Charlie should be ready to abandon his belief in Joe's future in the face of counterevidence. Given the chancy nature of baseball prospects, a general manager has to be prepared to abandon someone who looked promising but who isn't panning out, or he may damage his team by keeping on an underperforming player. Players who you know to be good will be kept in the lineup after a poor start (I remember Barry Bonds batting under .200 one May when he was in Pittsburgh and going on to win the MVP--er, sorry again to non-baseball fans); players who you think to be good won't.

Does this case convince you? Do you think Charlie is only justified in believing that Joe will probably be good? Do you think it casts any sort of light on the kind of justification that's necessary for knowledge?

Posted by Matt Weiner at June 26, 2004 12:53 PM

Matt, one way the lottery paradox is standardly set up when "justified belief" is the notion in play is that one is justified in believing that a lottery ticket will lose. Supposing that it does lose, then we have a JTB that the ticket will lose, and yet most would deny one knows that the ticket will lose. There are all kinds of other diagnoses of lottery situations, of course (and I agree with the view (of, e.g., Dana Nelkin) that one is *not* justified in believing that the ticket will lose), but this is a fairly standard way of thinking that is an affirmative answer to your question without any new cases demanding consideration.

Posted by: Jonathan Sutton at June 26, 2004 01:46 PM

Jonathan, that's a good point. Perhaps I should have been more explicit--I'm trying to come up with an example of a JTB that's not knowledge and that's not a Gettier example or a lottery example.

One thing is that in the lottery example it seems much easier to take the route of denying that you're justified in believing that your ticket won't win. (In fact, if I read Hawthorne right, on his view whenever you're justified in believing that the ticket will lose you do know that the ticket will lose.*) That denial doesn't seem quite as intuitive to me in the general manager case--Charlie Brown may be justified in acting as though Joe will be a good major leaguer rather than as though he probably will, although there may be no difference in acting on those two beliefs! Still, it is always open to say that Charlie isn't justified in believing that Joe will (tout court) be a good major leaguer.

*As ever, I am using "Hawthorne's view" as an abbreviation for "the view Hawthorne says he will opt for if you put a gun to his head."

Posted by: Matt Weiner at June 28, 2004 11:00 AM

I place the manager case with the lottery case insofar as they are both cases in which (1) it is acceptable to assert that S has a justified belief that p and (2) S does not know that p and, unlike Gettier cases, (3) S knows (or is in a position to know with ease) that he does not know that p. There are indefinitely many such cases, I think -- when I see the ominous clouds in the sky, it is quite natural to say that I believe it is going to rain, and am justified in so believing, although I do not know that it is going to rain, and I know that I do not know that.

I state (1) in a rather contrived way because I think it is false that I am justified in believing that my ticket will lose/it is going to rain (and likewise for Charlie Brown unless he is irrationally confident in his belief about Shlabotnik). And that is because it is, literally speaking, false that I *believe* that my ticket will lose/it is going to rain. What I actually believe is that the ticket will probably lose/that it will probably rain. However, in normal conversational contexts, we do not distinguish between belief that p and belief that probably p; it would be outright pedantic to do so. Someone who strictly speaking believes merely that the ticket is extremely likely to lose can be said to believe that it will lose, without probabilistic qualification, if we are speaking loosely in the manner that we do outside of philosophy (and that can be a self-ascription of belief).

Posted by: Jonathan Sutton at June 28, 2004 03:14 PM

Addendum: what I meant by "irrationally confident" was that Brown will not, strictly speaking, *believe* that Shlabotnik will be a good player unless he is irrationally confident; he will believe that Sh. will *probably* be a good player. And he is justified in believing that Sh. will probably be a good player; however, Brown also *knows* that Shlabotnik will *probably* be a good player. So we do not have justification that falls short of knowledge after all.

Posted by: Jonathan Sutton at June 28, 2004 03:21 PM

(I'm cross-posting this to Certain Doubts as well.)

I'm always surprised and delighted when it turns out that people have actually read some of my stuff!!!

Anyway, my answer to Matt's challenge is (no doubt predictably, since we philosophers are obstinate creatures) not to abandon my principle linking knowledge and justification, but to appeal to a different aspect of my view -- viz. my endorsement of a modest form of *contextualism*. (The sort of contextualism that I like is closest to the version that has been defended by Stewart Cohen. But I am very doubtful whether the standards for "knowledge" or "justification" can ever rise so high that sceptical claims like 'Moore doesn't know that he has hands' are true; so unlike Cohen, I don't think that contextualism is much use for answering sceptical arguments.)

Thus, I say that in some contexts, it is perfectly true to say that Charlie Brown "knows" that Joe will be a good major leaguer; in other contexts, it is not true to say that he is "justified" in believing that Joe will be a good major leaguer (in those contexts, it is at most true to say that Charlie is justified in believing that Joe will probably be a good major leaguer). What I deny is that the terms 'justified belief' and 'knowledge' have context-invariant meanings that make "justified belief" require a lesser degree of justification than "knowledge".

In short, it is the same kind of justification that is required both for justified belief and for knowledge. The precise degree of justification that is required for the belief to count either as "justified" or as "knowledge" varies with the context; but any degree of justification that can suffice to make it true that a given belief is "justified" can also suffice (if the other success conditions are met) to make it true that the belief in question counts as "knowledge".

Posted by: Ralph Wedgwood at June 28, 2004 06:11 PM

Ralph, in a sense I entirely agree with you. Certainly, I agree that there are no contexts in which "justified belief" and "knowledge" differ in their extension. I also agree that there are contexts in which Brown can be appropriately said to know that Joe will be a good major leaguer--but only because there are contexts in which it is appropriate to exaggerate; I think what is said is literally false.

"Justified belief" and "knowledge," the phrases, differ in the respect that mere knowledge that probably p cannot *truly* be said to amount to categorical knowledge that p. Mere belief that probably p (and hence mere justified belief that probably p) can be truly (speaking loosely) said to amount to belief that p (and hence justified belief that p). If mere knowledge that probably p could be truly said to be knowledge that p then:

(1) Gettier cases would seem to be intuitively knowledge after all. The boy in the land of fake barns does know that it is probably a barn before him, given that he is unaware that he is in the land of fakes.

(2) Knowledge would not seem to entail truth, since one can know that probably p when it is false that p.

Posted by: Jonathan Sutton at June 28, 2004 06:48 PM

Jonathan, the distinction between believing p and believing probably p is definitely one of the weak spots in the argument, or one of the places you can make a stand, or one of the bullets available for biting, or something like that.

One of the issues here is: do we actually believe that p, for many p? If so, why? My view (this week) is that we have lots of categorical beliefs that p, but that this is due to our cognitive limitations. If we could process arbitrary amounts of information, we would keep track of the probabilities of all our beliefs (a task that may well involve NP-complete problems). Since we can't, we just plain believe that p for many p. But I don't think that this necessarily makes these beliefs that p irrational--there's still a lot of work to do on what it takes to make a belief rational or not.

Posted by: Matt Weiner at June 30, 2004 11:03 AM

Matt, I think we should distinguish between tracking the (relatively precise) probabilities we might want to assign to the huge number of propositions we have an interest in if our minds were not so puny -- for which the problems you mention do arise -- and simply classifying a proposition as categorically true as opposed to probably true (or as extremely likely to be true as opposed to somewhat likely to be true, if we want a bit more fineness of grain). The latter, much coarser-grained task is far more manageable -- and actually managed by us all the time for those propositions we explicitly consider, I think.

That having been said, I imagine there is a lot of vagueness here for propositions that we do not explicitly consider and devote some reflection to. I imagine it is often indeterminate whether we believe categorically that p or merely that probably p, and consequently often indeterminate whether our attitude to p is justified. Further, I imagine that it is often indeterminate whether we really believe that p as opposed to *suspecting* that p, or some such.

Posted by: Jonathan Sutton at June 30, 2004 03:20 PM