March 16, 2004

My APA Paper

[UPDATE III: A draft of my response is now added in the comments; I guess I'll field my own questions.]

[UPDATE II: Added a bunch of paragraph breaks to the paper for ease of reading; also, check out Jonathan Sutton's response in the comments.]

Here's my APA paper, "Deductive Closure and the Sorites." I also have some very nice comments by Jonathan Sutton, most of which I agree with. (If he'd like to post them in comments that would be great. [UPDATE: He has, thanks!] Then I can post my reply, and everyone else can post questions. We might not even have to go to Pasadena.)

The basic idea is that on certain kinds of fallibilism (see note 2 for exactly which kinds), one must take Deductive Closure to be a Sorites premise. Though the principle of deductive closure seems perfectly harmless--if you know A and B, and you properly deduce C from A and B, you know C--repeated applications take us from knowledge to not-knowledge.

The comparison to the sorites is supposed to give aid and comfort to these fallibilists. The argument by John Hawthorne that I discuss seems to show that these fallibilists must reject Deductive Closure, and that seems outrageous. I think that you can mostly accept Deductive Closure in the way that you mostly accept that two visually indistinguishable hues are the same color. (Whatever that way is--I don't have a dog in that fight.) And the fact that Deductive Closure can't be accepted without qualification is no more outrageous than the fact that the indistinguishability principle can't be accepted without qualification.

A word on how this paper fits in: It's all part of a project to disparage knowledge. (Jonathan will not approve.) My view is that knowledge is a useful folk concept that doesn't capture anything useful for serious epistemological investigation. Whatever knowledge gets the epistemologist can be got better by considering degrees of justification.

To argue this I'd like to show that knowledge doesn't have the roles you might think it has. Peter Graham and Jennifer Lackey have argued that knowledge isn't what's transmitted in testimony; I'm arguing that knowledge isn't what's preserved in deduction. In fact, if deduction yields a sorites with respect to knowledge, knowledge behaves exactly the way you'd expect being justified tout court to behave: there's no clear line between how justified a belief must be to be justified tout court, and as you add more and more justified premises, the justification of the conclusion slowly leaks away if you're not careful (in the way I describe at the end of the paper).

Herewith, the paper:

It is, to put it mildly, intuitively appealing to think that knowledge is deductively closed:
(DC) If S knows A and B, and C follows deductively from A and B, then S is in a position to know C.[1]
Part of the appeal of DC is that it captures one of the most common ways in which we expand our knowledge. If we know premises A and B, and we deduce their logical consequence C, then it seems that we know the conclusion we have properly deduced. No matter how I came to know A and B, they can be used to generate further knowledge.

I will argue that, in spite of DC's intuitive appeal, it should not be accepted without qualification. It is a sorites premise; like "Two hues are the same color if they are visually indistinguishable," it seems intuitively obvious, but it leads to absurd conclusions when applied indiscriminately. Furthermore, treating DC as a sorites premise yields a better account of the generation of new knowledge than does unqualified acceptance of DC. Accepting DC without qualification would be proper if a piece of knowledge could always be put in the bank for further use, with no regard for how that knowledge was attained. Knowledge, however, cannot be put in the bank; sometimes, if rarely, we must reexamine the origins of our knowledge.


DC presents a particular problem for a fallibilist who believes that we may know that p even when we have not ruled out certain improbable ways in which p can fail to obtain. For instance, the fallibilist might hold that I can know that
(1) My feckless friend Bill will never be rich
even if I have not established that
(2) Bill's ticket will not win the lottery tomorrow.[2]
The problem is that, if I have not ruled out a possible alternative to p, then I do not seem to know that that alternative does not obtain; I do not know (2), that Bill's ticket will not win the lottery tomorrow. (2), however, follows from (1), so if I know (1) but not (2), DC is violated. On the other hand, if knowledge that p requires ruling out every alternative to p, no matter how improbable, knowledge will be extraordinarily hard to obtain.[3]

Contextualism holds that more than one standard for knowledge is in play here; DC holds within each standard for knowledge. To know that p, we must rule out all the alternatives to p that are relevant according to the standard of knowledge in effect. By the standard we apply when considering (1), the possibility that Bill's ticket wins is not relevant, so we know both (1) and (2). By another standard, which we must apply when considering (2), the possibility that Bill's ticket wins is relevant, and we know neither (1) nor (2).[4] On either standard, if we know (1), we know its consequence (2).

Contextualism thus accounts for why we may be willing to ascribe knowledge of (1) but not of (2). (2) is a consequence of (1), but when we explicitly consider (2) rather than (1), we shift the standard so that knowledge disappears. Note that this account respects the way we use DC to gain new knowledge from old knowledge. We might say, because of Bill's fecklessness, that we know he will never be rich; but we would not go on, "If Bill won the lottery, he would be rich; by modus tollens, we know he won't win the lottery." Bill's fecklessness provides no grounds for belief that his ticket won't win.

Hawthorne (2002), however, has shown that a contextualist who wishes to preserve DC must do considerable violence to our epistemic practices. Consider the following situation: Alice has 5000 feckless friends, each of whom holds one ticket in tomorrow's lottery. The only way any of Alice's friends will become rich this year is to win that lottery. The lottery has 5001 tickets, one held by Dr. Evil, who is not Alice's friend. Sarah asks Alice in turn, of each of her friends, "Will Bill be rich this year? Will Harry be rich this year?" etc. Alice replies, in each case,
(3Bill[/Harry/etc.]) Bill[/Harry/etc.] will not be rich this year.
In fact, Dr. Evil's ticket wins, so none of Alice's friends is rich this year. Each of her statements (3) turns out to be true. Looking back at year's end, should we say that Alice knew that Bill would not be rich, that Harry would not be rich, etc.?

If contextualism is to support fallibilism about lottery cases, the contextualist must say that there is a standard for knowledge by which, when we consider whether Bill will be rich this year, we may ignore the possibility that Bill's ticket wins. Alice, however, seems to stick to one standard as she considers whether Bill will be rich, whether Harry will be rich, etc. If, by a single standard, Alice knows (3Bill) and (3Harry) and the rest, then by DC within that standard she knows
(4) None of the 5000 friends will be rich by year's end.
On Alice's evidence, however, (4) has only a 1 in 5001 chance; (4) will not be true unless Dr. Evil's ticket wins. Though, looking back, we know that Dr. Evil's ticket did win, it is outrageous to say that Alice was in a position to know (4).
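The probabilities driving this step can be checked with a short sketch. This is illustrative only; the figures (5001 tickets, 5000 friends) come straight from the example, and the crucial point is that the 5000 statements are dependent, since exactly one ticket wins:

```python
from fractions import Fraction

# Figures from the example: a 5001-ticket lottery with exactly one
# winner; Alice's 5000 friends hold one ticket each, Dr. Evil the last.
TICKETS = 5001
FRIENDS = 5000

# Probability that any single claim (3X), "X will not be rich this
# year," is true: X's one ticket loses.
p_single = Fraction(TICKETS - 1, TICKETS)   # 5000/5001

# Probability of the conjunction (4), "none of the 5000 friends will
# be rich": since exactly one ticket wins, (4) holds just in case
# Dr. Evil's ticket wins. The claims are dependent, so this is exactly
# 1/5001, not (5000/5001)**5000.
p_conjunction = Fraction(1, TICKETS)

print(p_single)       # 5000/5001: the chance of error in each premise is negligible
print(p_conjunction)  # 1/5001: (4) is almost certainly false
```

Each premise is as secure as a premise can be on fallibilist standards, yet their conjunction is nearly certain to be false.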

Hawthorne points out that the contextualist can wriggle out of this problem by positing that Alice does shift standards. One could say: On the Bill-standard for knowledge, one may properly ignore the possibility that Bill's ticket will win, but not that Harry's ticket will win, or Jerry's, etc. On the Harry-standard for knowledge, one may properly ignore the possibility that Harry's ticket will win, but not Bill's, or Jerry's, etc. When evaluating (3Bill), the Bill-standard is appropriate, so it is proper to say that Alice knows (3Bill). When evaluating (3Harry), the Harry-standard is appropriate, so it is proper to say that Alice knows (3Harry). But DC only governs premises that are all known by the same standard. (4) is the conjunction of one premise that is known by the Bill-standard, one that is known by the Harry-standard, one that is known by the Jerry-standard, etc.; Alice's knowledge by each of these standards may be deductively closed without her knowing (4) by any standard.

This multi-standard solution preserves fallibilism and unqualified DC, but it has little else to recommend it. As Hawthorne points out, this "solution to our lottery puzzle require[s] rapid context shifting where, initially, context shift was far from noticeable" (Hawthorne 2002, p. 251). Worse yet, it wreaks havoc on the role of deduction in our epistemic practices. Suppose that Bill and Harry are going on vacation together, and Alice is wondering whether they will be able to afford a certain hotel. She reasons:
(3Bill) Bill will not be rich this year.
(3Harry) Harry will not be rich this year.
(5) Therefore neither Bill nor Harry will be rich this year.
(6) Therefore they will not be able to afford the hotel.
On the multi-standard solution, Alice knows (3Bill) by the Bill-standard and (3Harry) by the Harry-standard. Since these are different standards, it does not follow by DC that she knows (5), even though (5) is a deductive consequence of (3Bill) and (3Harry). If Alice is to come to know (5) and (6) by deduction from (3Bill) and (3Harry), she must first rederive her premises under a single standard.

It would be nightmarish to constantly recheck the foundations of our knowledge in this way. The deduction from (3Bill) and (3Harry) to (5), in particular, is unlike the deduction from (1) to (2), which may reasonably be taken to require rechecking our reasons for believing the premise. When we reason from (1), that Bill will never be rich, to (2), that Bill's ticket will not win, we realize that our acceptance of (1) required ignoring the possibility that (2) might be false; ignoring that possibility might have been reasonable when considering (1), but it is not reasonable when considering (2). Focusing on (2) raises the new doubt, "What if Bill's ticket does win?" No such new doubt is raised in the deduction from (3Bill) and (3Harry) to (5). To get to (3Bill) and (3Harry), Alice must have ignored the possibility that their respective tickets win; if this was proper, it is proper to ignore these possibilities when considering (5). For Alice has not refocused her attention on the tickets; she is still considering whether her friends will be rich. So there can be no new need for Alice to recheck the foundations of her knowledge; at least so it seems.


Let us suppose that there is no context-shifting, so that all our discussion takes place with respect to a single fallibilist standard of knowledge. (My argument will thus also defend non-contextualist fallibilism against the lottery paradox.) On this standard, Alice can properly ignore the 1 in 5001 chance that one particular lottery ticket wins; she thus can know each of (3Bill), (3Harry), etc. We must suppose that she and we never slip into explicit consideration of whether a certain lottery ticket will win; the question at issue is always, "Will Bill ever be rich?", "Will Harry ever be rich?", "Will Bill or Harry ever be rich?", etc.

The fallibilist picture, then, is that Alice knows that Bill will not be rich this year. The chance that he will become rich by winning the lottery is negligible, and in ascribing this knowledge we properly neglect it.[5] Indeed, Alice knows that neither Bill nor Harry will be rich this year. If Bill will not be rich (a negligible chance), the only way that either Bill or Harry will be rich is if Harry wins the lottery. The chance that Harry wins the lottery is negligible; indeed, properly neglecting this chance, Alice knows that Harry will not be rich. (I remind you that, in ascribing this knowledge, we are looking back from the time when Dr. Evil has won the lottery; so none of Alice's 5000 friends become rich this year. It would not be proper to neglect a possibility that actually came to pass.) If Alice knows (3Bill), and on her evidence there is a negligible chance that (3Harry) is false, then that negligible chance will not prevent her from knowing the conjunction of (3Bill) and (3Harry). Well then, does Alice know that neither Bill nor Harry nor Jerry will be rich this year? There is a negligible chance that Jerry will win the lottery and become rich; indeed, we properly neglect this in saying that Alice knows that Jerry will not be rich. So this negligible chance that (3Jerry) is false will not turn Alice's knowledge that (3Bill) and (3Harry) into not-knowledge that (3Bill) and (3Harry) and (3Jerry)…

You can tell where this is going. Eventually we reach the conclusion that Alice knows (4), that none of her 5000 friends will be rich. Though (4) turns out to be true, on Alice's evidence it has only a 1 in 5001 probability, so by any standard it is absurd to say that she knows it. In fact, we leave knowledge behind well before we reach (4). Yet it is impossible to say exactly when we leave knowledge behind. Each step seems innocuous. At each step, Alice's conclusion becomes vulnerable to a new doubt, that the friend at issue might win the lottery. But this doubt is ex hypothesi negligible. It seems implausible that adding a new conjunct, and a new 1 in 5001 chance of error, could take Alice from knowledge to not-knowledge. Still, when we heap all these conjuncts together, we have a sorites paradox.

Each step of the sorites involves an application of DC. Alice knows (3Bill) and (3Harry), so by DC she knows their conjunction; she knows (3Jerry), so by DC with the previous step she knows its conjunction with (3Bill) and (3Harry), etc. The problem is that our fallibilist picture allows that Alice may know that p even if she has not foreclosed every possible alternative to p, so long as the unforeclosed alternatives have negligible probability (and are not otherwise relevant). If Alice's evidence puts her in a position to know that p, it may still be compatible with certain ~p-possibilities of negligible probability; if it puts her in a position to know that q, it may still be compatible with certain ~q-possibilities of negligible probability. Alice's evidence is then compatible with certain ~(p & q)-possibilities—the union of the ~p-possibilities and the ~q-possibilities—whose probability is at most the sum of two negligible probabilities. Adding together enough of these negligible probabilities yields a non-negligible probability that destroys knowledge.
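The arithmetic behind this compounding is just the union bound. A minimal Python sketch (the 1-in-5001 figure is taken from the lottery example; the bound itself holds whether or not the premises are probabilistically independent):

```python
# Union bound: P(not(p1 & ... & pn)) <= P(not p1) + ... + P(not pn).
# Each ignored alternative is negligible on its own, but the bound
# grows linearly in the number of conjuncts, and in the lottery case
# (exactly one winning ticket) the bound is attained exactly.
eps = 1 / 5001  # chance that any one premise (3X) is false

def error_bound(n, eps):
    """Upper bound on the chance that an n-fold conjunction is false."""
    return min(1.0, n * eps)

for n in (1, 10, 500, 5000):
    print(n, error_bound(n, eps))
# By n = 5000 the bound is 5000/5001, roughly 0.9998: thousands of
# individually negligible chances of error sum to near-certainty.
```

No single application of conjunction introduction adds more than a negligible 1/5001 to the chance of error; it is only the accumulation over thousands of applications that destroys knowledge, which is exactly the sorites structure.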

I will not attempt to solve the sorites paradox. Let us simply say that what makes it paradoxical is the intuitive appeal of general sorites premises such as "Two hues are the same color if they are visually indistinguishable," which lead to obvious falsehoods when applied repeatedly. The intuitively appealing general premise cannot be accepted without qualification, though without a solution to the sorites paradox I cannot say exactly what qualification is required. So DC's intuitive appeal does not entail that we should accept it without whatever qualification is required for general sorites premises.

Of course, DC's intuitive appeal does not entail that it is a sorites premise, either. DC's intuitive appeal, I argued, rests on the way it captures our epistemic practices. The hyperactive context-shifting necessary to reconcile fallibilism with unqualified DC was undesirable because it threatened those epistemic practices. To save fallibilism, I must show that treating DC as a sorites premise does not threaten those practices.


DC unqualified allows us to take our knowledge as given, without worrying about how we attained it. So long as we know p and q, we know any logical consequence of p and q that we can deduce; we can accept that consequence without reconsidering how we came to know p and q. This, I claim, is not much more realistic than the idea that we must recheck our knowledge every time we try to deduce a new consequence.

On fallibilism, we may know that p if we are properly ignoring an improbable alternative to p. This leads to a sorites paradox when, as in the lottery case, the ignored alternatives build up over the course of a multi-premise deduction. Many improbable alternatives, compounded, can add up to a probable alternative that cannot properly be ignored. If we are fallibilists about knowledge, we must avoid this compounding of ignored alternatives.

This does not mean that we must constantly recheck the foundations of our knowledge. Ignored alternatives compound over many-premise deductions. If we know p and q, and they were not themselves obtained by repeated applications of DC, then we may deduce their joint consequences. A single application of DC will not generate a sorites paradox. The known propositions p and q cannot be simply put in the bank without regard for how they were attained; we must remember that p and q were not attained by repeated applications of DC. Still, we do not have to recheck their foundations.

It is even possible to construct a many-premise deduction that is immune from the sorites paradox. This can be done by building in redundancy, in the engineer's sense: supporting various intermediate steps in multiple ways. A giant conjunction of all the known premises will not have this redundancy; the ignored alternatives will compound, and it will not be safe to apply DC repeatedly. Frequently, however, we can structure our arguments so that the falsity of a single premise would not undermine the deduction of any of the intermediate steps. Then it will be safe to apply DC.

Suppose, for instance, that Alice parks cars on-street for customers, noting their locations. Suppose that an average of one car a day is stolen in the neighborhood, though in fact none of Alice's customers' cars has been stolen. For any one of her thousand customers, Alice can reasonably say that she knows where that customer's car is parked; the risk of theft is negligible in the individual case. If asked to sign off on a map of the locations of all her customers' cars, Alice may reasonably say that she does not know that all thousand cars are where she parked them; the risk that one of the thousand has been stolen is non-negligible. The map can be obtained by repeated applications of DC, conjoining the known propositions about where the individual cars are; but because this is a giant conjunction, the possibility that a car has been stolen compounds with each application of DC, and Alice does not know the conclusion. Yet Alice may reasonably say that she knows that most of the cars are where she parked them, because the argument for that statement has the proper redundancy. The conclusion "Most of the cars are where I parked them" is entailed by any of the many different conjunctions of 980 of the known statements "Car X is where I parked it," and the possibilities that individual cars have been stolen do not compound when Alice deduces the conclusion from each of those different 980-strong conjunctions.
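The contrast between the giant conjunction and the redundant "most" claim can be made vivid with a little binomial arithmetic. This is a sketch under an assumed figure: the example never fixes a per-car theft probability, so the 0.001 used below is hypothetical, chosen only so that the individual risk counts as negligible:

```python
from math import comb

N = 1000   # cars Alice has parked, as in the example
p = 0.001  # ASSUMED per-car theft probability (hypothetical; the text
           # says only that the risk is negligible in the individual case)

def tail(n, p, k):
    """P(more than k of n independent events occur)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

# The giant conjunction "all 1000 cars are where I parked them" fails
# if even one car is stolen:
p_map_wrong = 1 - (1 - p)**N
print(round(p_map_wrong, 3))  # about 0.632: non-negligible

# The redundant claim "at least 980 cars are where I parked them" fails
# only if more than 20 cars are stolen:
p_most_wrong = tail(N, p, 20)
print(p_most_wrong)  # astronomically small: this claim survives
```

On this (hypothetical) model each individual premise is 99.9% secure, the 1000-fold conjunction is more likely false than true, yet the "most" conclusion is effectively certain, because no single premise's failure can undermine it.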

This picture of how deduction yields knowledge is more realistic than the picture that any known premise may be put in the bank for further deduction. On this picture, we must retain enough information about how knowledge was attained to make sure that we are not compounding the fallibilities of our individual premises. We must not engage in lengthy chains of deduction where each link is fallible and the failure of one link destroys the whole chain. Still, so long as our arguments have the proper redundancy and no new pertinent doubt has been raised, we need not constantly recheck the foundations of our knowledge. Treating DC as a sorites principle means that new knowledge may be deduced from old knowledge almost always, but not quite always; and that is just what the fallibilist needs.

[1] Also S should be in a position to work out the entailment. This formulation is almost exactly that given by Cohen (1999), p. 62, with the qualification Cohen adds in his n. 14. Cohen's formulation, however, involves one premise, while mine involves two; as we will see, this is important.
[2] The example of feckless Bill is due to Lewis (1996, p. 565). It is possible to be a fallibilist about brain in vat cases without being a fallibilist about lottery cases. There is a definite, though low, probability, that Bill's ticket will win, but there is no obvious probability to assign to the possibility that I am a brain in a vat. So Hawthorne's argument, discussed below, will not apply to a position that is fallibilist concerning brains in vats but not lotteries. In this paper, I will only consider fallibilism concerning lottery cases.
[3] See Vogel (1990) on how the lottery case generalizes; see also Cohen (1998) on how infallibilism can lead to skepticism.
[4] Such contextualist accounts have been developed by Cohen (1988, 1998), DeRose (1995), and Lewis (1996). The treatment of relevant alternatives in the text is based on Lewis's account.
[5] In fact, it is debatable whether this chance is negligible, given the low number of tickets in the lottery. Even a fallibilist might not allow that we know that Bill will never become rich, given the high expected value of a 1 in 5001 chance of becoming rich. If this causes concern, substitute a more tractable example.

Works Cited
Cohen, Stewart (1988). "How to Be a Fallibilist." Philosophical Perspectives 2, 91-123.
Cohen, Stewart (1998). "Contextualist Solutions to Epistemological Problems: Scepticism, Gettier, and the Lottery." Australasian Journal of Philosophy 76, 289-306.
Cohen, Stewart (1999). "Contextualism, Skepticism, and the Structure of Reasons." Philosophical Perspectives 13, 57-89.
DeRose, Keith (1995). "Solving the Skeptical Problem." Philosophical Review 104, 1-52.
Hawthorne, John (2002). "Lewis, the Lottery, and the Preface." Analysis 62, 242-251.
Lewis, David (1996). "Elusive Knowledge." Australasian Journal of Philosophy 74, 549-567.
Vogel, Jonathan (1990). "Are There Counterexamples to the Closure Principle?" In Doubting: Contemporary Perspectives on Skepticism, ed. M. Roth and G. Ross. Kluwer, Dordrecht.

Posted by Matt Weiner at March 16, 2004 10:03 AM
Comments

Here are the comments.

Whether Matt presents a persuasive argument against deductive closure depends heavily on one’s prior commitments. We should consider four positions. Matt’s argument might well persuade many proponents of three of those positions of something, although just what varies from case to case. A proponent of the fourth will be unmoved by what Matt says.

The first position is the primary target of Matt’s paper: any form of contextualism about knowledge (or ‘knowledge ascriptions’, if we want to be picky) that endorses deductive closure in any context of knowledge ascription. The second position is that of the contextualist who already embraces at least some failures of deductive closure. The third position is that of the invariantist who embraces at least some failures of closure (for the record, this position is my own). The fourth is that of the invariantist who endorses deductive closure.

Matt primarily argues that a contextualist should accept that knowledge, whatever the context of ascription and whatever knowledge amounts to in a given context, is not closed under the logical rule of Conjunction Introduction. It is not without exception true that if one knows A and one knows B, then one knows A and B. If we assume Conjunction Introduction holds in full generality, then we face something like a Sorites paradox which concludes that we know something that we clearly do not know—a giant conjunction which entails that all of the holders of lottery tickets but for the actual winner will never be rich. (If closure under CI fails at all, then it will presumably fail for mere pairs of propositions, provided that each member of the pair is itself a "small enough" conjunction that it can be known, but whose conjunction is "too big" to know.)

If the contextualist gives up on deductive closure for CI, there is little reason for him to resist giving up deductive closure for Modus Ponens. He should accept that if one knows P, and one knows that if P, then Q, it does not follow that one knows Q, however aware one is that Q is entailed by what one does know. In particular, he should accept that in a single context of knowledge ascription, one can know that one’s friend will never be rich and fail to know that he will lose the lottery for which he has a ticket, although that he will lose the lottery follows from the proposition that he will never be rich. Whatever reasons there are for thinking knowledge closed under MP in all contexts are equally reasons for thinking knowledge closed under CI. Since those reasons cannot be decisive in the case of CI (if Matt’s argument is persuasive), they cannot be decisive for MP. Since knowledge appears not to be closed under MP, and only contextualist fancy footwork saves those appearances in the case of MP but not CI, the contextualist should surrender and accept that deductive closure fails in full generality—for at least those two logical rules, in at least the cases in which it intuitively does so fail.

So I reconstruct Matt’s argument, anyway. And I have no objection to make to it. It is easily extended to an argument that says something to a contextualist who already accepts that knowledge is not closed under MP (there are such—I work with one). He should simply say “Interesting! I guess knowledge is not closed under CI either.”

What should an invariantist’s reaction be to Matt’s considerations, however? An invariantist who already holds that deductive closure for MP fails will, I suggest, have the same reaction as the contextualist who holds that closure fails for MP. “Interesting! I guess knowledge is not closed under CI either.” That’s my reaction, anyway. But an invariantist who endorses deductive closure in full generality, and in particular for MP, will find Matt’s argument utterly unpersuasive. Such an invariantist will not claim that I know that my lottery ticket will lose. He will claim that I do not know that I will never be rich, precisely because I do not know that my ticket will lose. He will not claim that I know that my car has not been stolen. He will claim that I do not know where it is parked—although I do, perhaps, know where it is probably parked, and that I probably will never be rich. Consequently, such an invariantist will deny that I know that my feckless friend Bill will never be rich, and likewise for all similar propositions about my friends as discussed by Matt. Matt’s argument will not get off the ground since its initial premises will be flatly denied by an invariantist committed to DC. Matt has not presented a persuasive argument against deductive closure in general, only one that is at best persuasive to a contextualist who antecedently denied closure.

OK, so how might those of us persuaded by Matt’s considerations that knowledge is not closed under CI proceed? Matt disavows offering any solution to Sorites paradoxes. However, I suggest that we can draw some conclusions about what anyone who holds any popular theory of vagueness should say.

Firstly, let us note that there is an important respect in which the case Matt presents is not analogous to a standard Sorites case. There is a natural continuum from (or to) definite heaps to (or from) definite non-heaps. The same natural continua obtain for heads of hair with respect to baldness and hirsuteness, color patches with respect to being red or being orange, and so on in indefinitely many instances where vagueness arises. What is to be said about the Sorites continuum (and all counterpart continua) is widely agreed upon at the most general level by many views about vagueness: in between those entities which are definitely heaps and definitely non-heaps are the borderline cases which are neither definitely heaps nor definitely non-heaps. What differs radically from view to view is the proper interpretation of these claims. Epistemicists will claim that there is a fact of the matter about whether anything is a heap or not; it is just that we cannot know whether the borderline cases are or are not heaps. Supervaluationists and others disagree; ‘definitely’ has metaphysical bite on non-epistemicist views. There is some sense in which there is no fact of the matter about whether borderline cases of heaps are heaps or are not heaps.

There is no natural continuum of lottery tickets; that they might have numerals assigned to them (hey, they could be imprinted with convoluted shapes) is neither here nor there. Consequently, there is no natural continuum of Alice’s friends who hold lottery tickets and end up losing. She (definitely) knows of any one of her friends, pick one at random and let him be A1, that he will never be rich. And she knows the same of any other, pick one and call him A2; she definitely knows that A2 will never be rich. She also definitely knows that A1 will never be rich and A2 will never be rich. She definitely does not know that A1 will never be rich and . . . A5000 will never be rich. There are numerous conjunctions concerning not too few and not too many of her friends which Alice neither definitely knows nor definitely does not know. Those are borderline cases for extending knowledge by Conjunction Introduction. And that they are borderline cases has nothing to do with which of Alice’s friends they concern and nothing to do with which lottery tickets they happen to hold. It has only to do with the number of conjuncts. The natural continuum here is to be found in the structure of the propositions knowledge of which Matt’s argument concerns, not in the objects or properties the argument’s premises concern, as is the case with standard Sorites arguments.

How does this bear on one of Matt’s concluding paragraphs?

"This picture of how deduction yields knowledge is more realistic than the picture that any known premise may be put in the bank for further deduction. On this picture, we must retain enough information about how knowledge was attained to make sure that we are not compounding the fallibilities of our individual premises. We must not engage in lengthy chains of deduction where each link is fallible and the failure of one link destroys the whole chain. Still, so long as our arguments have the proper redundancy and no new pertinent doubt has been raised, we need not constantly recheck the foundations of our knowledge."

I am not entirely sure what Matt means by these statements concerning what we must and must not do, and so I am not sure whether what I have to say will conflict with his views—what I wish to do is forestall any reading of them that is too internalist. Small enough conjunctions, in the relevant kinds of case, can be known if we arrive at them by a small enough number of applications of CI. This is so whether or not the knower realizes that too many applications of CI will lead to the knower believing conjunctions that are neither definitely known nor definitely not known, and that even more applications of CI will lead to belief in conjunctions that are definitely not known. If one arrives at belief in a small enough conjunction through a small enough number of applications of CI, there is no need to "retain information about how [one's] knowledge was attained"; one simply (and definitely) knows the conjunction. If one arrives at belief in a borderline-case conjunction through CI, one will neither definitely know nor definitely fail to know that conjunction, whether or not one retains such information.

Perhaps we should say that known premises can most certainly be put in the bank for further deduction—but one can still get overdrawn. Keeping a meticulous eye on one's balance will be of little use here, too, because of higher-order vagueness considerations. There is no size at which a conjunction becomes definitely too big to be definitely known; the size at which that occurs is itself an indeterminate matter. One's balance has to be big enough—the conjunction believed small enough—for one definitely to know that conjunction. What one believes about that balance—what information one retains about it—is irrelevant to whether one has the money or not. Luckily, as with more familiar (alleged) cases of the failure of deductive closure, actual knowers spend within their means. Just as they do not come to believe categorically that their car has not been stolen simply because they believe categorically that it is parked at such-and-such a location, they do not come to believe that all but one (or, more generally, too many) of the lottery ticket holders will never be rich simply because they believe of each one that he will never be rich. The failure of deductive closure is not in conflict with how believers actually extend their knowledge. In order to remain knowers in good standing, there are inferences they should not make—but they are not remotely tempted to make them, and hence do not need explicitly to check whether they have entered the realm of borderline cases in order to remain knowers in good standing.

Posted by: Jonathan Sutton at March 16, 2004 10:39 AM

And here's a draft of my response--it's too long, I may decide to just say "excellent comments" and sit back down.

This response may be a bit dull--I agree with almost everything Jonathan has said. Before moving on to some small areas of disagreement, I would like to express especial agreement with the second half of his comments, in particular his closing paragraphs. What determines whether Deductive Closure is in danger of failing--whether a believer is in danger of losing knowledge by deducing from known premises--is not a matter of whether she is aware that there is a danger. It is a question of the structure of the argument itself. An argument that has built-in self-reinforcement will not lead to a known conclusion if the premises are not known; large enough plain conjunctions will not be known even if the premises are known.

Furthermore, as Jonathan points out, actual knowers will not generally be tempted into the arguments that lead to violations of Deductive Closure. This is the paradox of the Preface: Even if we wholeheartedly believe every single statement in a book, we will not claim, "Every sentence in this book is true." These are important points, and I thank Jonathan for making them.

Next, I'd like to explain a bit more who my argument is meant to reach. Jonathan claims that the invariantist who accepts closure will be unmoved by my argument. In this too, he is correct. The invariantist who accepts closure will deny knowledge of the premises of the arguments I have cited. And Jonathan brings out why I find that unappealing--if I don't know that my car hasn't been stolen, I don't know where it's parked, and I don't know that I will be home within an hour of arriving at Salt Lake City airport. More disturbingly, it seems to me that if I do not know that the stranger giving me directions is not lying or mistaken, I do not know where the Salt Palace is. And if I can't gain knowledge through testimony, I don't know much. So my arguments won't provide the invariantist reason to deny closure, but such an invariantist may face much worse problems.

What my arguments are meant to do is to soften the sting of denying closure. Deductive closure seems so obvious that denying it seems like a great cost. But many other sorites premises seem obvious, and are usually reliable, yet cannot be accepted without reservation. My proposal is that denying unqualified deductive closure is no more costly than denying any other unqualified sorites premise. This is a bullet we have already bitten. So those who would like to abandon closure--perhaps out of fear of skepticism--may find my way of denying it congenial.

Finally, I'd like to say a bit about the circumstances in which deductive closure fails. It is true that we can expect failures to occur both with repeated uses of CI and with repeated uses of MP. But that doesn't mean that all applications of CI and of MP are equally impugned. In particular, consider a view on which knowledge is simply justified true belief, where a belief is justified if it is likely enough on the information the believer has, where "likely enough" is a vague standard. Suppose that it is determinate that 99% probability is likely enough, and that any probability over 75% is determinately not determinately likely enough. (I know, this is dreadfully oversimplified, and knowledge isn't justified true belief anyway--but the argument will carry over to at least some other accounts).

Note first that a single application of CI is safe in the following way: It won't take you from two premises that are determinately known to a conclusion that is determinately not known. Hence closure of CI does not fail completely for pairs of propositions, if it makes sense to speak of incomplete failure.
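On the toy model just described, this safety claim (and the slow leak under repeated CI) can be checked with simple arithmetic. The sketch below is my own illustration, not part of the paper; it uses the worst-case lower bound P(A and B) >= P(A) + P(B) - 1, and it assumes the reading on which probabilities below 75% are determinately not likely enough:

```python
# Sketch of the probability leak under repeated conjunction introduction (CI),
# using the worst-case (Frechet) lower bound P(A & B) >= P(A) + P(B) - 1.
# The 0.99 / 0.75 thresholds are the toy standards from the text; the reading
# of the 75% bound as "below this, determinately not likely enough" is assumed.

DETERMINATELY_ENOUGH = 0.99
BORDERLINE_FLOOR = 0.75

def conjoin(p, q):
    """Worst-case probability of a conjunction of events with probabilities p, q."""
    return max(0.0, p + q - 1.0)

# Start from a premise at the 0.99 level and keep conjoining further premises,
# each itself at the 0.99 level.
p = 0.99
for n in range(2, 30):
    p = conjoin(p, 0.99)
    status = ("determinately known" if p >= DETERMINATELY_ENOUGH
              else "borderline" if p >= BORDERLINE_FLOOR
              else "determinately not known")
    print(f"{n} conjuncts: worst-case probability {p:.2f} -> {status}")
```

One application takes two 0.99 premises to a 0.98 conclusion, which is borderline but not determinately unknown; only after roughly twenty-five further conjuncts does the worst case sink below 0.75. That is the sorites shape: no single step is disastrous, but the steps accumulate.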

More important, applications of MP will be completely safe, so long as one of the premises is known with complete certainty. This is most relevant for the apparent failures of MP that motivate contextualism.

Take the lottery case. Suppose that I am 99.98% sure that Bill will never be rich. I am 100% sure that, if Bill wins the lottery and collects on his ticket, then he will be rich. So I am just as sure that Bill will not win the lottery and collect on his ticket as that he will never be rich. I could perform as many MPs as I like without my certitude dipping below 99%, so long as every conditional was absolutely certain on my information.
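The arithmetic here is that P(B) >= P(A) + P(A -> B) - 1, so when the conditional is certain the conclusion of an MP is at least as probable as its antecedent, and nothing is lost no matter how often the step is repeated. A minimal sketch (my own illustration, with the toy numbers from the text):

```python
# Modus ponens with a conditional that is certain on one's information:
# P(B) >= P(A) + P(A -> B) - 1, so when P(A -> B) = 1.0 the consequent
# is at least as probable as the antecedent.

def mp_lower_bound(p_antecedent, p_conditional):
    """Worst-case probability of the consequent after one application of MP."""
    return max(0.0, p_antecedent + p_conditional - 1.0)

p = 0.9998  # "Bill will never be rich"
for _ in range(1000):          # as many MPs as we like...
    p = mp_lower_bound(p, 1.0)  # ...each via an absolutely certain conditional
print(f"{p:.4f}")  # -> 0.9998: certitude never dips below 99%
```

Contrast this with the CI case: there, each step with a merely 99%-probable premise costs up to a percentage point, while here a certain conditional costs nothing.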

But most of us have the intuition that this will not do--that, even if I know that Bill will never be rich, I do not know that he will not win the lottery and collect on his winnings. Hence the motivation for contextualism. The contextualist can say that, when we are considering the prospect of the lottery, the standards for being likely enough go way up, or can provide another account of how the possibility of Bill's ticket winning is relevant when considering the lottery and not when considering his future riches. Viewing deductive closure as a sorites may not solve this particular problem; it may only solve the problem of how knowledge can leak away when the context clearly does not shift.

Posted by: Matt Weiner at March 17, 2004 04:57 PM


In the age of the Internet, one gets the rare opportunity to respond to the response to one's comments a little more frequently than was previously the case! A number of times in the past, I would have relished that opportunity in order to say that the speaker had not understood my comments *at all.* That is most definitely *not* true here. And I am very happy that you have found my comments "excellent," Matt; I found your paper to be excellent, too, and it will influence my own work substantially.

As far as the Paradox of the Preface goes, I am not entirely sure that your model case for denying deductive closure under CI will generalize. That is, if I really know the proposition expressed by each sentence in my book, it is not clear to me that I will not know their conjunction. I have nothing to say about *why* that is not clear to me; I just mean that my intuitions go in neither direction even given that I am convinced that the kind of case you describe is an instance of the failure of DC under CI.

My intuitions are similarly neutral with respect to your claim that:

"a single application of CI is safe in the following way: It won't take you from two premises that are determinately known to a conclusion that is determinately not known. Hence closure of CI does not fail completely for pairs of propositions, if it makes sense to speak of incomplete failure."

if the known premises are both big enough conjunctions. (When I wrote the comments, I simply assumed the negation of the quoted proposition for big enough conjunctions -- but I should have been neutral, I think.)

Well, the above is all little more than psychobiography and incidental comments. So let me try something a little more substantive.

I think that the reason that the denial of DC seems so outrageous to many is a background assumption that deductive logic provides examples of the very best kinds of inference there are -- those that are necessarily truth-preserving. But when it comes to the evaluation of inferences construed as what thinkers actually engage in, rather than abstract arguments, I think that a good inference is one that takes one from premises that one justifiably believes to a conclusion that one justifiably believes in virtue of performing the inference. And one is not justified in believing (1) that one's car has not been stolen -- just that it probably has not (in normal situations) -- although one is justified in believing (2) that it is parked at location X, and one is justified in believing (3) that if it is parked at X, then it has not been stolen (one knows those propositions, after all), and one cannot become justified in believing (1) by an inference from (2) and (3). Luckily, people are not remotely tempted to make that inference -- we do not have to say that reasoning that is actually performed on a regular basis is no good if we take this line on DC and what makes for a "good" inference. The same goes for the instances that you describe in which DC under CI fails.

I hope the general idea there is clear -- there is more that can be said (and I have said it in work that is as yet unpublished), but I have gone on here for long enough already.

Posted by: Jonathan Sutton at March 17, 2004 08:05 PM

The view here is kinda neat, and it gives us logic-types something to think about. Instead of thinking of what we know as what is true in all of the relevant alternatives (which will give us full deductive closure), we've got to think of it in another way if we take these purported counterexamples seriously.

So how do we do it?

Posted by: Greg Restall at March 18, 2004 08:21 PM

[Oops! I hit the post button instead of the preview button. Oh well.]

Here's another way to think about it: Think of all of the propositions as ordered by entailment. So A <= B iff A entails B, and this ordering is reflexive and transitive. None of the objections in Matt's paper lead us to think that we could ever have A <= B where we're in a position to know A, but not in a position to know B. So, think of the things we're in a position to know at some point. This collection of propositions must be closed upwards under the ordering <=. But it is not necessarily closed under conjunction, because of the failure of CI to preserve knowledge.
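One concrete way to picture this (a toy model of my own construction, not anything Greg commits to): let propositions be sets of possible worlds, let entailment be subset inclusion, and count a proposition as known when its probability on one's evidence clears a threshold. The resulting collection is closed upwards under <= but not closed under conjunction:

```python
from fractions import Fraction

# Toy model: 100 equiprobable worlds; a proposition is a frozenset of worlds.
WORLDS = frozenset(range(100))

def prob(prop):
    """Probability of a proposition on (uniform) evidence."""
    return Fraction(len(prop), len(WORLDS))

def entails(a, b):
    """A <= B iff A entails B; here, subset inclusion on sets of worlds."""
    return a <= b

THRESHOLD = Fraction(99, 100)  # hypothetical "likely enough" standard

def known(prop):
    return prob(prop) >= THRESHOLD

# Two propositions, each true in 99 of the 100 worlds:
a = WORLDS - {0}
b = WORLDS - {1}

# Closed upwards: whatever a known proposition entails is at least as
# probable, hence also clears the threshold.
assert entails(a, a | b) and known(a | b)

print(known(a), known(b), known(a & b))  # -> True True False
# The conjunction is true in only 98 worlds, so it falls below the
# threshold even though each conjunct is known: no closure under CI.
```

On this picture Greg's later constraint on degrees holds automatically: probability is monotone under subset inclusion, so whatever A entails is justified at least as highly as A, yet the degree can still drop at each conjunction.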

Or if you, like Matt, prefer talking of degrees of justification, then don't think of an epistemic state as a class of propositions, but rather, as a ranking of propositions with degrees, and the ordering on degrees must cohere with the entailment ordering on propositions. If A entails B, then if A is justified to degree x, B is justified to degree at least x.

That seems like an interesting way to think about it, and you can then go on and ask further questions: what are these degrees? Are all epistemic rankings (or sets of propositions, if you think about it in the "flat" manner) formally equal, or are there more constraints you can put on these classes?

Posted by: Greg Restall at March 18, 2004 08:30 PM

Thanks for the kind words again, Jonathan. I'm blushing.

I wanted to pick up on this:

"But when it comes to the evaluation of inferences construed as what thinkers actually engage in, rather than abstract arguments, I think that a good inference is one that takes one from premises that one justifiably believes to a conclusion that one justifiably believes in virtue of performing the inference."

That seems exactly right to me--and it's helpful to my larger project of replacing knowledge with justification as the property that epistemologists should study. The best inferences aren't those that preserve knowledge but those that preserve justification.

I'm here inching toward the Harman line that implication and inference are completely different things (if I'm quoting that right); that's not somewhere I've liked to be in the past, but maybe I'll have to swallow it.

An interesting question is what is meant exactly by "what thinkers engage in." If we mean the voluntary actions by which thinkers affect their beliefs, those may not look much like arguments at all--I take it most inferences just happen. Or you might take it that the best inferences are as Harman describes in Change in View--briefly, you reconsider your beliefs when the cost of doing so is low enough, and you only carry out low-cost kinds of revision. I hope I don't have to adopt that Harmanian view, but I'll have to think about it.

I like Greg's suggestion a lot, and maybe it provides a way of avoiding the Harman view. My current view is that purely epistemic justification is a matter of how much the available information supports a belief, without regard to whether your cognitive limitations let you figure out whether the available information supports the belief. (You can be blameless in a non-epistemic way for unavoidable cognitive error, but that doesn't mean the belief itself is justified.) That obeys Greg's constraint--if A entails B, you must be at least as justified in believing B as in believing A. But your level of justification can drop when you use CI.

Posted by: Matt Weiner at March 19, 2004 08:48 AM

Right, Matt. There's something nice in obeying just a little of the connection between entailment and justification (or degree of justification), because then you're free from the worry that justification doesn't measure the content of the belief, but inheres in something else. (If you reject deductive closure, the issue comes up: just how much deductive closure do you reject?) It seems like distinguishing multiple premise arguments from single premise arguments might give you enough wiggle room. The nice thing then is that CI is all you need to add to single premise arguments to define all multiple premise arguments, so your diagnosis of the problems with CI is really characteristic of all multiple premise arguments.

(Well, those with finitely many premises anyway)

Posted by: Greg Restall at March 19, 2004 08:59 PM

"But when it comes to the evaluation of inferences construed as what thinkers actually engage in, rather than abstract arguments, I think that a good inference is one that takes one from premises that one justifiably believes to a conclusion that one justifiably believes in virtue of performing the inference."

"That seems exactly right to me--and it's helpful to my larger project of replacing knowledge with justification as the property that epistemologists should study. The best inferences aren't those that preserve knowledge but those that preserve justification."

As I think you know, for me justified belief and knowledge are the same thing, so either characterization of what is preserved in the best inferences is fine with me! But that is a long story...

"An interesting question is what is meant exactly by 'what thinkers engage in.' If we mean the voluntary actions by which thinkers affect their beliefs, those may not look much like arguments at all--I take it most inferences just happen."

I mean a psychological event or process culminating in belief formation as opposed to something purely formal, and, whatever inferences in this sense are, they had better be happening all the time -- and often pretty unreflectively. Beyond that, I have no particular views on the precise (or vague) extension of 'inference', or on whether there are sufficiently similar processes culminating in states other than belief that we might also profitably term 'inferences'.

Posted by: Jonathan Sutton at March 20, 2004 01:00 PM