April 13, 2004
Hedge of Reason
The latest club to beat contextualism with is Subject-Sensitive Invariantism (SSI), in which the standards for knowledge depend on the situation of the person to whom knowledge is being ascribed rather than the person who's doing the ascribing. Take Stewart Cohen's airline case: suppose John, Mary, and Smith are all looking at a printed itinerary that shows (accurately) that a flight lays over in Chicago. Whether the flight lays over is vitally important to John and Mary but not to Smith; so Smith has no reason to worry about whether the itinerary has a typo, but John and Mary do. On SSI, Smith knows that the flight lays over, but John and Mary do not. On contextualism [UPDATE: Approximately, note Richard Chappell's first comment], John and Mary can say truly both
(1) Smith does not know the flight lays over
(2) John and Mary do not know the flight lays over,
while Smith can say truly both
(3) Smith knows the flight lays over
(4) John and Mary know the flight lays over.
(Assuming they all like to talk about themselves in third person! What I really mean is that the propositions (1)-(4) are true in the respective contexts, or something like that.)
In Knowledge and Lotteries, Hawthorne defends SSI against the following objection (from Cohen): Suppose John and Mary know that Smith has looked at the same itinerary and that he is not in a high-stakes situation. Since SSI predicts that Smith just plain knows that the flight lays over, shouldn't Mary be able to say (3) truly to John? Worse yet, can't she then truly say this:
(5) Smith knows that the flight lays over, but we don't.
Hawthorne argues (p. 160) that Mary can't say (3) because assertion requires knowledge [MW: boo!], and Mary doesn't know (3); she doesn't know that Smith knows that the flight lays over. On SSI, for S to know that p, S must have eliminated all doubts concerning p that are relevant for someone in S's position--and p must be true. Mary knows (by her standards) that Smith has eliminated all doubts about the layover that are relevant for him; but she doesn't know (by her standards) that the flight lays over. So she doesn't know (by her standards) that Smith knows (by his standards) that the flight lays over; (3) encompasses two conditions, and Mary only knows one of them.
A problem for this account is that it predicts that Mary should withhold judgment on whether Smith knows. She doesn't know (by her standards) that Smith doesn't know (by his standards) that the flight will lay over, so she can't assert (1) either. But Mary should assert that; if John asks, "Does Smith know?" Mary should say "No, he's only read the same itinerary we have." Hawthorne acknowledges this cost for a basically similar case on p. 163, suggesting that maybe the ascriber projects her own ignorance onto the subject. Keith DeRose, in section 6 of his new paper, argues that this is a major cost for SSI and that projectivism won't help. But I have another objection to pursue.
Namely: On Hawthorne's account, Mary doesn't know (3) because Smith's knowledge requires both that Smith be in a certain epistemic position and that what Smith believes be true; while Mary knows that Smith is in that epistemic position, she doesn't know that what Smith believes is true. But then doesn't Mary know the following conditional? And shouldn't she be able to say it?
(6) If the flight lays over, then Smith knows it.
Conditional knowledge ascriptions like (6), which I'll call "hedged knowledge ascriptions," seem weird, at least in this situation. (6) is perfectly fine in a case in which the speaker doesn't know what Smith's epistemic status is. If John and Mary know that Smith has checked with the airline, but they don't know what the airline has said, then it makes sense for Mary to say (6) to John. They're uncertain about whether Smith knows because they're uncertain about what evidence he has. But in the original airline case, John and Mary know exactly what evidence Smith has. They know he looked at the itinerary, and they know what the itinerary says. In that situation, (6) sounds plain weird.
What makes it weird, I think, is that we take it that someone who knows p has evidence that's sufficient to establish p. If the speaker knows what Smith's evidence is, then (6) amounts to a confession that Smith's evidence isn't sufficient to establish that the flight lays over; the speaker herself is familiar with Smith's evidence and leaves that question open. So we ought not to be saying that Smith might know that the flight lays over; it's established that his evidence isn't sufficient for that. In this way (6) resembles concessive knowledge attributions such as
(7) I know that the plane lays over, but there might be a typo in the itinerary.
(8) Smith knows that the plane lays over, but there might be a typo in the itinerary.
(7) is absolutely unacceptable, (8) pretty dubious. In each, the speaker ascribes knowledge while casting doubt on the evidence that supports the knowledge.
If my diagnosis is correct, SSI has unavoidable costs. SSI rests on the idea that the exact same evidence can establish knowledge for A but not for B. This raises a prima facie problem: Can't B correctly ascribe knowledge to A in full awareness that A's evidence is not conclusive--since it doesn't remove some doubts that B cares about? Hawthorne's argument was that the knowledge requirement on assertion blocks this ascription, but devices such as the hedge let B concentrate on the question of whether A's evidence is sufficient to establish the conclusion in question. And then SSI seems to yield counterintuitive results.
(BTW, conditionals like (6) sometimes sound funny even when they're true, because the antecedent is known false or the consequent is known true or some such. That's not happening here; on SSI, the antecedent really should be relevant to the consequent.)
Posted by Matt Weiner at April 13, 2004 06:12 PM
Have you read Keith DeRose's latest paper on Contextualism? He tackles SSI, and says that it's just plain wrong to suggest that Contextualism denies the possibility that the speaker's context may be influenced by the concerns of the subject. Indeed, I believe he also discusses the airline case (and reaches significantly different conclusions from those laid out in your post).
oops, my mistake... that'll teach me to rush ahead instead of reading the whole post! :(
Were I interested in defending SSI, I would deny that Mary knows (6). If Mary knows (6), then (I think) she knows (6.1):
(6.1) If there is a layover, then there is someone with Smith's actual evidence who knows it.
And if she knows (6.1), then she has sufficient evidence to rule out this possibility:
(6.2) There is a layover and there is someone with Smith's actual evidence who does not know it.
But given the high stakes for Mary, (6.2) is precisely what she can't rule out. And if she can't rule out (6.2), it seems to me anyway that she doesn't know (6).
Richard--You're right, though, that my account of the contextualist verdicts on (1)-(4) is oversimplified; I've updated (a little). (Possibly it would've been smarter to assume that everyone who might make it through this whole post knows what contextualism is anyway--it would've made it shorter!)
Geoff--I agree that if Mary can know (6) she can know (6.1), but not that if she can know (6.1) she can rule out (6.2). What she can rule out, I think, is (6.3):
(6.3) There is a layover and no one with Smith's actual evidence knows it.
This, not (6.2), is the negation of (6.1): the negation of [p -> (Ex)(Sx & Kx)] is [p & (x)(Sx -> ~Kx)].
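(For the pedantically inclined, here is that equivalence spelled out step by step -- just standard classical first-order logic, reading p as "there is a layover", Sx as "x has Smith's actual evidence", and Kx as "x knows there is a layover":

```latex
\begin{align*}
\neg\bigl(p \to (\exists x)(Sx \land Kx)\bigr)
  &\equiv p \land \neg(\exists x)(Sx \land Kx)
    && \text{negated material conditional}\\
  &\equiv p \land (\forall x)\,\neg(Sx \land Kx)
    && \text{quantifier duality}\\
  &\equiv p \land (\forall x)(Sx \to \neg Kx)
    && \text{De Morgan, then conditional}
\end{align*}
```

So ruling out (6.3) is exactly what knowing (6.1) requires; ruling out (6.2) would be ruling out something stronger than the negation of (6.1).)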
The reason it makes a difference is that SSI affirms that the same evidence can be good enough for knowledge for Smith but not for Mary, and presumably that Mary can know that this evidence would be good enough modulo truth. So Mary ought to be able to know that Smith's evidence is good enough for Smith to know, if there is a layover, but not good enough for her to know, even if there is a layover. Smith witnesses the falsity of (6.3) and the truth of (6.2), but Mary doesn't.
(Maybe more precisely, Mary can rule out (6.4):
(6.4) There is a layover and there is no one who would know it if they had Smith's actual evidence;
but since she knows that Smith is a counterexample to (6.4), and that Smith does have Smith's actual evidence, (6.3) follows.)
Oops; moronic error. If she knows (6.1) then Mary has sufficient evidence to rule out not (6.2) but rather
(6.3) There is a layover and there is no one with Smith's actual evidence who knows it.
And it seems like Mary's high stakes don't affect her ability to rule out (6.3), at least not obviously. So never mind.
Drat, I was hoping to get that correction up before you noticed the error.
My (invariantist) diagnosis of the situation:
Smith does have grounds sufficient to know that the flight lays over. And so does Mary, but her doubt that those grounds are sufficient to know, and her belief that she needs further grounds, are what prevent her knowing that the flight lays over, and also prevent her from knowing that Smith knows. In all likelihood, in a real case, they will prevent her believing that the flight lays over on Smith's grounds, and, even if she does believe it, she would do so with insufficient confidence (corresponding simply to how willingly she would give up that belief) for her belief to constitute knowledge. Once the epistemic panic is over, Mary might well recognize that she was irrationally sceptical, and should have known all along that the flight lays over, since she had sufficient grounds for sufficiently confident belief that it did. Such things happen to all of us all the time. The contextualist takes what are in fact irrational judgments -- and more importantly, irrational failures to judge -- on Mary's part to be rational, and, given the rationality of Smith's apparently contrary judgments, a contextualist solution appears to be called for.
So, is there anything to SSI? I am inclined to think that what grounds are sufficient to know will depend on the *intersubjective* (loosely speaking, "objective") importance of the truth of the proposition whose epistemic status is in question. It is much harder to know that a plane is in working order than to know that a car or a bicycle is -- the grounds needed to know one's plane is in working order are much more extensive than those needed for a car or bicycle, and *not* just because of the relative complexity of planes. They are also more extensive because of how important it is that the plane is in working order. This could, of course, be ascribed to what "in working order" amounts to for planes -- a difference in the proposition expressed, or the truth-conditions of instances of the schema "X is in working order" depending on what plays the role of "X", and what it refers to. But I am inclined to think such semantic (or at least, linguistic) explanations will only go so far in such cases, and will themselves have an entirely non-linguistic explanation in any case ("it has those truth-conditions because of facts about planes and the humans who use them"). These are the fundamentals that SSI perhaps exaggerates.
Jonathan, I like the diagnosis you give in the first paragraph of your post. In fact, I suggest much the same thing in section 5 of my paper, "The Emperor's New 'Knows'," which is posted on my website. The basic idea is that the practical stakes for Mary give her reason to double-check, which raises her confidence threshold and keeps her from believing in the way necessary for knowing and from attributing knowledge to anyone, such as Smith, in the same evidential position.
What if Mary's panic isn't irrational? That is, what if it's a situation where the stakes are so high for Mary that it does make sense for her to check over and above looking at the itinerary? Is that to be solved by the second problem--if it's that important for someone, then no one can know that the flight lays over simply by looking at the itinerary?
Kent--does that avoid the problem of the hedged knowledge attribution? I guess it does, because it sounds as though you're saying that Mary will wrongly refuse to attribute knowledge to Smith on that evidence. It sounds as though you're saying that here Mary will assert (1) and (2), as the contextualist predicts, but that this time she's wrong.
I would say that I could not *blame* Mary for rechecking however many times she might want -- and the higher the stakes, the more I am willing to say that what she does is both blameless and, increasingly, psychologically impossible to avoid. But still irrational in any important sense of the term. As I would hope that she will recognize at a later, calmer, time.
Also, it is not just *someone* -- any old rational or irrational agent -- whose concerns can raise the standards required for knowledge. It is... enough of the right people at the right time, whose concerns are governed by what is (loosely speaking) objectively the case... by what knowledge really requires. (I understand if this statement causes you to throw up your arms in exasperation!)
Doesn't the contextualist face a similar bad consequence? Suppose that Smith (for whatever reason) declares:
(1) If that plane has a layover, then I know that it does.
The contextualist predicts that Mary can (truly and felicitously) assert:
(2) What Smith said is true, though if that plane has a layover, he doesn't know that it does.
Jason Stanley's right that (2) is true on the contextualist account, but isn't this just a special case of the general problem contextualists have with 'what he said' assertions? There's nothing about the conditional that's essential here; provided the standards shift from the context of (3) to (4), the same problem arises:
(3) Smith: I know the plane has a layover.
Time passes, standards go up...
(4) Mary: What Smith said is true, though he doesn't know that the plane has a layover.
Contextualists have to deal with this somehow anyway, with or without the conditional. I think Keith DeRose would claim that (4) and (2) are unassertible without using a clarifying device. Maybe that's not convincing, but in any event it doesn't pose a new problem. But it seems that Matt's problem is a new one for SSI, since the 'only assert what you know' explanation for the unassertability of 'Smith knows but I don't' isn't available for Matt's conditional.
That's not quite right. The contextualist exploits the knowledge account of assertion at exactly the point at which Hawthorne exploits the knowledge account of assertion. The reason, the contextualist would say, that (4) is unassertible, is that Mary doesn't know (by the new standards) that the plane has a layover, so she can't say that what Smith said was true, because that entails that the plane does have a layover, which, by assumption, she doesn't know.
So you do need to conditionalize Smith's assertion, and what this shows is that contextualist and the SSI theorist face equally problematic consequences.
Matt, I'd like to respond to your question from yesterday, but I'm not sure what you're asking.
In my non-SSI invariantist view, one's interests and stakes do not affect one's epistemic position. (Roughly,) two people with the same evidence are in the same epistemic position, even if the stakes are higher for one. But the higher stakes can justify double-checking, even if one is already in a position to know. If one hasn't double-checked, one's residual doubts keep one from meeting the doxastic condition for knowing; and if one has residual doubts, one can't coherently attribute knowledge to someone else who is similarly situated epistemically.
So (if this is what you're asking), what would be weird about Mary assertively uttering (6)?
(6) If the flight lays over, then Smith knows it.
She would not be assertively uttering (6) as she would if she believed that Smith knows whether or not the flight lays over (in that case, she could also say "If the flight doesn't lay over, then Smith knows that"). She doesn't assertively utter (6) because she believes that Smith, with the same evidence that she has, is not in a good enough epistemic position to know that the flight lays over (even assuming that it does).
I hope I got your question right.
One other thing: Epistemic panic is the extreme case. Even if one is in a good enough epistemic position, high stakes can give one good practical reason to double-check. Things get crazy when there's no end to the double-checking, i.e., when double-checking doesn't quell residual doubts.
It seems to me that an oddity of Jason's (2) is that Mary is in a position to assert outright that Smith doesn't know that the plane has a layover--the fact that (1) is conditionalized (as Jason said) is what makes it possible for Mary to assert with knowledge. It reads a bit more smoothly to me if Mary says
(2') What Smith said is true, though even if the plane has a layover, he doesn't know it.
In my example, Mary's knowledge ascription (6) had to be hedged; (2') doesn't seem to be quite like a hedged ascription.
That's not to say it's not a problem to contextualists! (There may be some rather twee considerations here about whether these are the same problem....) I'm in favor of a scorched-earth approach to knowledge, on which all accounts are seen to fail, because I think epistemologists should be talking about justification instead.
I guess I don't see that it is irrational-in-any-important-sense for Mary to do more spadework if the stakes are high enough. (I've managed to miss a flight because the itinerary I had was wrong--that was my fault through sloppy cut-and-pasting though!) If her evidence doesn't eliminate some remote possibility, but the stakes are high enough and the investigation cost low enough that she risks more by not investigating than by investigating, then by all means it is rational for her to investigate. That's why I think that the important factor turns out to be degree of justification rather than knowledge.--But I think you'd be externalist about evidence and also about rationality, so that you'd say her evidence did eliminate the remote possibility; is that right?
Kent, I think this ties in with your answer too--I agree that the high stakes can justify Mary in double-checking, even if she's in a position to know (as you have it); your response, which had slipped my mind, is to attack the doxastic condition (where Hawthorne exploited knowledge of the truth condition, and where the contextualist exploits the justification condition). There's something appealing there; to some extent, full belief means that you store something away as immune from further double-checking (I'll source this to Harman, Change in View).
That seems to have the oddish consequence that in this case Mary shouldn't know before double-checking, because she shouldn't allow herself full belief before double-checking. But if she just blows off the doubt, then she does know, even though it's imprudent for her to jump to the conclusion.
One thing that worries me here is that I think this full belief is an artifact of our limited reasoning capacities. If epistemology is going to be illuminating about what the Ideal Reasoner would think, then will it allow that the Ideal Reasoner ever abandons these residual doubts in favor of full belief? So is knowledge something that is important for the Ideal Reasoner? Maybe this is just evidence that epistemology shouldn't concern itself with Ideal Reasoners.
(Oh, and don't worry about the question--I think the only question I asked you was one I tentatively answered, and you gave a more detailed answer to it.)
Matt, I think:
1) If the stakes are high enough, then Mary and anyone in her situation will have to rule out remote possibilities of certain kinds in order to know. No irrationality, then, in ruling them out. (That is the idea behind my talk of what it takes to know that a plane is in working order above -- if one is, say, an aviation mechanic; car mechanics have to rule out fewer remote possibilities in order to know that a car is in working order.)
2) If Mary believes that certain remote possibilities have to be ruled out in order to know that the plane lays over, then she will, in virtue of that belief, have to rule them out in order to know. (Smith, OTOH, who has no such belief, knows that the plane lays over on the same grounds that Mary possesses.) That she does rule them out is not irrational. What is irrational is her believing that she had to rule them out in the first place since, in fact, she did not. She dug herself into a hole, irrationally, and it is rational for her to get herself out having done so.
3) I agree with the standard line (deriving perhaps from Goldman's "Discrimination and Perceptual Knowledge" originally -- for many of us, at least?) that remote enough possibilities in which not-p do not have to be ruled out in order to know that p. So I would not say that Mary's evidence rules out remote possibilities, but that it would not have had to in order for her to know that the plane lays over *if only she had not believed that she did have to rule them out*. As rational Smith does not.
I hope that is clearer... please let me know if it is not.
Your clause 1 sounds a lot like SSI; but do I take it from your previous couple of comments that you take it that "high stakes" won't be high stakes for an individual (maybe *I'm* not flying on the plane I'm checking) but high societal stakes? Society-sensitive invariantism, as it were? That would be an intriguing idea, I think, and might avoid the hedging objection. Hedges would never undesirably come out true because the standards for knowledge are always the same for the subject and the ascriber--they're both parts of the same society that determines the standards.
To be clearer, maybe: The worry about your #1 is that on its face the stakes can be different for two different people--Mary has a lot to lose if the flight doesn't lay over, but Smith doesn't. The societal element I'm suggesting would make that go away, because it's not what the individual has to lose that determines the standards. Sound congenial?
Yes, quite congenial, with the following qualifications:
The "society" will, in many, if not most, if not *all* cases, be all human beings, I imagine. At least, all human beings if they are members of a society that has, or should have, the relevant concerns. The standards are, in the relevant sense, objective -- but also conditional, in a thoroughly objective way. (Knowing that the planes in your aviation museum are in working order... much easier than if they are in the airport waiting to depart *if* those planes are to be used in the way we would assume from those descriptions, etc.)
In some cases, depending upon how we spell out the details of what the stakes are, Smith will know, and Mary is just being irrational in checking further. In other cases, Mary is being rational, and Smith does not know. And in yet other cases, the concept of knowledge being vague like almost every other concept... we will have a borderline case.
Jonathan: Regarding your last paragraph and some earlier comments, it seems that you're denying that one can know and yet be rational in thinking one needs to check further. (I imagine you'd say that a borderline case of knowing is also a borderline case of being rational in thinking this.) But it's not clear to me why you deny this.
Yesterday, in response to Matt, you wrote, "If the stakes are high enough, then Mary and anyone in her situation will have to rule out remote possibilities of certain kinds in order to know. No irrationality, then, in ruling them out." But you then wrote, "If Mary believes that certain remote possibilities have to be ruled out in order to know that the plane lays over, then she will, in virtue of that belief, have to rule them out in order to know." But her reason for checking out these possibilities is not that this is needed for knowing but the practical reason that if the plane doesn't lay over she's in trouble. This keeps her from believing confidently, and that's what keeps her from knowing. She knows once she rules them out, but not because she thought knowing required that. So when you say, "What is irrational is her believing that she had to rule them out in the first place," it seems you're assuming that her reason for believing this was that ruling them out was needed in order to know.
I appreciate that you distinguish believing that remote possibilities have to be ruled out from, once believing that, proceeding to rule them out, and that the latter could be rational even if the former isn't. But I don't see why the former couldn't be rational too, for practical reasons. Mary may have dug herself into a bit of an epistemic hole, but she had good reason to avoid falling into any bigger practical hole.
Kent, what I think is irrational in particular is:
Failing to form a belief that would amount to knowledge if one did form it on the grounds one possesses, at least if one is aware that those grounds are relevant to the very proposition one has entertained.
(And any belief that she needs to do the checking in order to know, if she has such a belief.)
I do not want to say that Mary's *actions* are irrational -- so what she does might well be practically rational, and its practical rationality might well, all things considered, be worth paying the price of doxastic irrationality if that is required. OTOH, perhaps it can come for free -- if she can somehow believe that the plane lays over with sufficient confidence to know that, and still proceed with checking solely for practical reasons. I am not sure whether that is psychologically possible or not.
I also want to say that Mary might be a rational person, both practically and doxastically speaking. And that what she does is what a rational person might well do in her circumstances, and so, in virtue of that, might well be termed rational in a secondary sense.
OK, Jonathan, I get it. That leads me to raise two questions, the first of which is really your question about what is psychologically possible.
(1) Is it psychologically possible to feel the need to double-check while not having residual doubts? Perhaps so. Here's a familiar sort of case: you feel compelled to go back to check that you locked the front door even though you're convinced that you did, solely on the grounds that the thought that possibly you didn't will persist. It's rational to go back and check, just to make that thought go away. On the other hand, in Mary's case I was supposing that she really did have residual doubts.
(2) Is it really epistemically irrational not to form the belief and to make sure beyond what is needed for knowing, i.e., not to form a belief that would amount to knowledge if you did form it on the grounds you presently possess? I'm not so sure that it is.
1) Yes, I agree, but I take it to be a matter of it being practically rational to make a doxastically irrational thought go away. And it might be practically rational to make the doxastically irrational thought go away in order that it not lead to further doxastically irrational thoughts, and prevent the formation of all kinds of rational thoughts, desirable for all kinds of practical reasons. Together with the irrational actions that the irrational thought might lead to, or rational actions, desirable for whatever practical reason, that the irrational thought might impede. Perhaps we should draw a distinction between rational thoughts and rational *thinking*, conceived of as a process whose standards are closer to those of rational action than those of rational belief conceived of as a state. (I hope that makes some kind of sense! In any case, it is rational belief -- the state -- that I have the firmest views about.)
2) My concern is: if knowledge is not the end of rational enquiry, then we do not know what is -- what the goal of enquiry is, and when that goal has been reached. I am not sure that we could coherently conceive of an end of enquiry that exceeds knowledge, except in an entirely ad hoc (indeed, irrational) manner.
Well said, Jonathan, but still, it seems to me that there is room for a rational enquirer to err on the side of caution without being deemed slightly irrational for so erring -- even if he has no practical stake in the matter. One reason I suppose this is that it is often not evident to one that one's epistemic position is adequate, even if it is.
This is related to my worry about your suggestion that if knowledge is not the end of rational enquiry, then we do not know what is. Even if that's true (and even if its antecedent is true), we often don't know that we have knowledge even if we do. Moreover, we're generally interested not in whether or not we know that p but in whether or not p.
OK, I confess: I'm a reliabilist about justified belief and a deflationist about truth.
I would say that we are indeed generally interested in whether or not p -- but we have attained the object of our interest precisely when we know p or know not-p.
I am quite skeptical about the importance of the falsehood of the KK thesis; most counterexamples offered to the KK thesis (e.g., Williamson's) are quite marginal. I am inclined to think that when we *first* acquire knowledge that p, we almost always know that we know that p.
OTOH, most of what we know, we remember -- it was acquired in the past, and we may remember little of how we came to know it. When we remember that p, perhaps, then, we often know that p without knowing that we know that p. (That is not obvious, but it is not obviously false either.) But if we are actually engaged in an enquiry into whether or not p, and we come to know p, I am strongly inclined to think we will know that we know that p in the vast majority of cases -- that is, it will be evident to us that our epistemic position is adequate. (And if KK is not enough for that, then my concern about the end of enquiry re-emerges.)
So, Jonathan, it seems that to the extent that we disagree it comes down to the KK thesis. You accept at least a hedged version of it and I don't. I hope your book in progress, Without Justification, which I look forward to in any case, will spell out just what version of KK you accept and why.
Several comments above, Jason writes:
That's not quite right. The contextualist exploits the knowledge account of assertion at exactly the point at which Hawthorne exploits the knowledge account of assertion. The reason, the contextualist would say, that (4) is unassertible, is that Mary doesn't know (by the new standards) that the plane has a layover, so she can't say that what Smith said was true, because that entails that the plane does have a layover...
Here, historically, but more importantly, logically, is how the "now you know it, now you don't" problem has developed, and where it stands, at least to my thinking. (Much of this from unpublished material, so no reason why anyone should know it, unless they happened to be at a relevant talk.)
So, there's the original problem -- that the contextualist would have us say in the relevant circumstances (where it would be crazy!), "I used to know, but now I don't" -- and my reply. I won't go into that. It's all in part III of "Contextualism and Knowledge Attributions" (PPR, 1992, but had been in my dissertation & earlier graduate work). I think that reply is an important part of the ultimate solution, but...
The original reply doesn't solve everything here, as an anonymous referee of "CKA" pointed out. Thus was born the "fortified" version of the problem: that in the relevant circumstances (where it would be crazy), the contextualist would have us say, "What I said before -- 'I know the bank will be open' -- is true, but I didn't know the bank would be open." (Incidentally, if that anonymous referee is reading this, I'd love to know who you are. I've always wondered...)
In the mid 90s, I gave talks at which, among other things, I replied to this fortified objection in just the way Jason describes -- by appeal to the knowledge account of assertion plus the factivity of knowledge. If this is where the story stopped, Jason would be right. It's this maneuver I made on behalf of the contextualist that's just like John's dealing with a similar problem on behalf of SSI. Other parts of these talks became, much later, the basis for "Assertion, Knowledge, and Context," but this reply to the fortified objection never got published because...
In 1997, when I gave such a talk at Yale (I was not at the time working at Yale), I encountered the "Kagan Problem": Shelly Kagan objected that the reversed fortified conjunction sounds just about as bad (I think he actually claimed just as bad, which I'm not sure of, but it's at least just about as bad) as the non-reversed form. Here, you move from HIGH to LOW, rather than from LOW to HIGH, and the speaker says, "What I said -- 'I don't know the bank will be open' -- was true, but I knew the bank would be open." The old maneuver, based on the knowledge account of assertion plus the factivity of knowledge, won't help here, but we seem to need help here just about as badly as we do with the non-reversed form. SSI faces a Kagan-reversed form of the fortified objection as well.
It's in response to this reversed fortified problem that I'm inclined to make moves like those Geoff started to describe -- moves, incidentally, that I don't think will be available to the invariantist. I won't give a long account of my current thinking, but I think you can get the basic idea from just this: I look at similar problems that develop with other, clearly context-sensitive terms ("What I said -- 'Mary is tall' -- was true, but Mary wasn't tall"), which are far from smooth [even if, at least arguably, they can be true; but they sound almost as bad as the relevant conjunctions with 'knows'] to motivate injunctions against standards-switching (at least when not employing appropriate clarifying devices) even when half of the conjunction is up a level, if one wants to be talking in an appropriate way while using context-sensitive terms....