I'm off for the weekend to the Inland Northwest Philosophy Conference--hope I haven't blown my intellectual effort for the weekend by thinking of a pun that didn't involve "Moscow" (explanation).
Posting will resume when I get back to the office--the term is over but I imagine I'll be coming in often enough to keep the blog going until I leave for Milwaukee.
Recently I posted on the oddity of hedged knowledge ascriptions, such as "If P is true, then S knows that P," when the ascriber knows what evidence S has and when the ascriber is not willing to assert outright that S knows that P. This raised some problems for subject-sensitive invariantism, which predicts that sometimes it should be natural to assert these hedges (see the previous post for the details).
I should record this, though: There's an anonyblogger concerning whose identity I have some evidence that seems reasonably strong to me, but that is somewhat indirect. I have thought to myself, "X is Y." I have also thought, "If X is Y, then I know it."
I wouldn't be quite willing to say outright that X is Y--even if I were willing to talk about X's identity at all (which I'm not; X has a perfect right to remain anonymous). Yet if X were publicly unmasked as Y, I would say "I knew that X was Y all along--on these grounds."
So here I am tempted to hedge, even though I know what my evidence is (but you don't). Perhaps it's because I'm not quite willing to say outright that I know, but I have some evidence that would, under some circumstances, suffice for knowledge.
Note that on a "practical environment" view of knowledge (as endorsed by Hawthorne if you put a gun to his head in Knowledge and Lotteries), the question of whether I know that X is Y is moot--no matter how strong my evidence is, I'm not going to do anything about it. But there is a practical question that would arise if X's identity were revealed; I would want to rub it in that I was right all along. Hmm.
(BTW, I need to come up with some better titles for these posts. The last one was based on the sequel to Bridget Jones, and I haven't even read the first one--well, not more than the first fifty pages--and this one is based on one of the most embarrassing enthusiasms I had when I was young and dumb. Suggestions?)
Seen, approximately, on CNN Headline News (in the ZCMI mall, where I went to hide from the snow and get a hot cinnamon bun):
$3 a year What US is paying Russia for Embassy, due to low ruble
The question is, is this false or just silly?
First thought: It's false. It's not because the ruble is low that the US is paying $3 a year, it's because Russia is charging a nominal cost. ("If there were a strong ruble it would be a whopping $15 a year!")
Second thought: No, it's just silly. The low ruble is the cause of the fact that the cost is $3 a year. If the ruble were strong the charge would be just as negligible, but it wouldn't be exactly $3.
Third thought: But is it true then that the US is paying $3 a year because of the low ruble? The fact that the ruble is low isn't enough (given the background conditions) to ensure that the rent is $3 a year; that requires the fact that the ruble is trading at 28.89 to the dollar (that's not approximate, I just looked it up). So, if the fact to be explained is that the US is paying $3 rather than any other amount, and the background that we're allowed to assume is that the rent is set at 87 rubles/year, the only way to make a true statement is to say "$3 a year... due to ruble trading at 29 to the dollar."
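The arithmetic behind the third thought can be checked with a quick sketch; the 87 rubles/year figure is the one stipulated in the post, and the 28.89 rate is the one I looked up.

```python
# Rough check: with the rent fixed in rubles, the exact dollar figure
# depends on the exact exchange rate, not merely on the ruble being "low."
RENT_RUBLES = 87          # annual rent, per the post
RUBLES_PER_DOLLAR = 28.89 # exchange rate, per the post

dollars = RENT_RUBLES / RUBLES_PER_DOLLAR
print(round(dollars, 2))  # about 3.01, i.e. "$3 a year"
```

So "$3 rather than, say, $4" really does require the specific rate, not just the background fact that the ruble is weak.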
I didn't have a fourth thought.
De La Soul seems to have taken the nice weather with them, as this afternoon I got caught in the snow in a T-shirt for the first time in a while. That's what'll happen when the temperature drops 30 degrees from 9 am to 2:30 pm. Conditions in Moscow are and are predicted to be tolerable, though the other one is a bit cooler (it's night there).
De La Soul seems to be playing just outside my office window, or maybe it's the opening band. It's a nice day, too. Conclusion: I should be there, rather than here.
...or, when a quotation is more credible than what was originally said.
Let's say that a former national security official, with high security clearance, writes a book. The first draft of this book contains a story (never mind what) that is deemed to reveal classified information, so White House lawyers make him excise it. However, the story has already appeared in an alternative publication, so the official just substitutes a quote from that article. You can't knock someone for quoting something that's already public, can you?
I'm not so sure. Look at it this way: Alternative publications aren't always very credible. But when a high-level official quotes the alternative publication--if he's better placed with respect to classified information than the original reporter might be--he's more credible than the original publication was. The official, we know, knows what he's talking about; we didn't know that about the original reporter. So the mere act of quotation effectively puts out new information.
And note that the official doesn't have to say anything that's not completely public knowledge in order to affect his readers' epistemic situation. Suppose the reader knows that the reporter wrote that p. The official then writes: "The reporter wrote that: p." The reader already knew what the official literally says. Now she comes to know that the official also thinks that p; if the official didn't think that p was true, then it would be misleading to quote it. Effectively, the official implicates: "The reporter wrote truly that p." If it would harm national security interests to have it known that this official thinks that p, there would be a legitimate reason to excise the quotation from the reporter.
So here, repeating a quotation makes it more credible; and it might be perfectly proper to prevent the official from saying something that in itself is public knowledge: that the reporter wrote that p.
(All this is an abstract discussion; in the case in point, I agree with Josh Marshall: It's difficult to see why this needs to be kept secret at all. Just thought that there were semi-interesting points about epistemology of testimony and speech acts to be made here--and it's been a while since I posted anything substantive!)
Did I just hear J-Live say ceteris paribus? I don't want to hear any more complaints about how much I overintellectualize.
Some days you just don't feel like blogging any philosophical arguments, and the only thing to do is link to rathergood.com's kitten videos.
Bill Tozier is selling the number 5--Erdos number that is. If you win his eBay auction he will attempt to coauthor a paper with you--his number is 4, so yours will be 5.
I had the same reaction that Eszter Hargittai did, in Henry Farrell's comments (which via): Ha ha, mine is lower than his, but
The bummer part is that his number is low enough that I can’t try to sell him my co-authorship for an improved number, because by co-authoring with me, he’d be where he is already.
(I'm not sure if forthcoming papers count, but when my paper with Nuel Belnap on probability in branching space-time comes out, my number will be 3. I think I have a finite but high number without that paper, but I'm not sure how to check.)
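One way to check, in principle: an Erdos number is just the shortest-path distance to Erdos in the coauthorship graph, so it can be computed by breadth-first search. Here's a sketch over a toy graph; the intermediate name "A" is a placeholder, not real coauthorship data, and a real check would need the actual collaboration records.

```python
from collections import deque

# Toy collaboration graph: an edge means two people have coauthored.
# "A" is a hypothetical intermediate coauthor, not a real datum.
coauthors = {
    "Erdos": {"A"},
    "A": {"Erdos", "Belnap"},
    "Belnap": {"A", "Weiner"},
    "Weiner": {"Belnap"},
}

def erdos_number(graph, person, root="Erdos"):
    """Shortest coauthorship distance from root to person, via BFS.

    Returns None if there is no chain of coauthorships (infinite number).
    """
    if person == root:
        return 0
    seen = {root}
    queue = deque([(root, 0)])
    while queue:
        name, dist = queue.popleft()
        for neighbor in graph.get(name, ()):
            if neighbor == person:
                return dist + 1
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

print(erdos_number(coauthors, "Weiner"))  # 3 in this toy graph
```

In this toy setup Belnap sits at distance 2, so coauthoring with him yields a 3, matching the post; the hard part in practice is getting the graph, not running the search.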
I'd like to add a bit to my entry on ecumenicism in epistemology and the metatheory/normative theory distinction. Hopefully this will be a bit clearer (to those who made it through the previous sentence).
I argued that Newcomb's problem and the problem of what justification is are different enough that ecumenicism makes sense for justification but not for Newcomb. Newcomb's problem is a normative problem; it asks the question "What to do in this case?" The answer is an action--the subject has to take one or both boxes. The definition of justification is metatheoretical; it asks the question "What property should we use 'justification' for?" The answer is a conceptual definition, and we can simply decide to define two senses of 'justification', an internalist and an externalist one.
But here's an objection: Newcomb's problem is meant to illuminate theories of rationality. We're supposed to conclude that it's rational to maximize expected utility, or that it's rational to act according to causal decision theory. Isn't this a metatheoretical question about the use of 'rationality'? Shouldn't it be analogous to the metatheoretical question about 'justification'?
One answer may be--to say "Action A is rational" is to say "Do A." So the metatheoretical question turns into a normative one. But then you can (so the objection goes) raise the same question about belief--to say "Belief P is justified" is to say "Believe P." If that's true, then you can't be ecumenical about justification any more than about rationality--you'll be counselling someone both to believe and not believe something.
Two responses to the objection:
(1) [This depends on a difference between theoretical and practical reasoning.] "Believe P" isn't actually advice, because we can't believe or not believe at will. This means that any counsel about what to believe is moot. We can, on the one hand, advise people about what actions to perform to get their beliefs in working order, and, on the other hand, evaluate the beliefs they do have according to various good-making properties.
Epistemic justification addresses the other hand only; the one hand is a kind of practical rationality about achieving epistemic ends. So, anyway, I claim--there's a kind of purely epistemic justification that doesn't allow that beliefs can be justified because the believer is incapable of believing otherwise. Then the believer may be free from blame, but the belief can be criticized. There's nothing to stop us from using two different criteria to evaluate the belief. In contrast, in Newcomb's problem you're actually looking for practical advice, and there's no room for criticizing the action without criticizing the actor who chose it.
(2) In Newcomb's problem the actor has full information (well, except about what's in the opaque box--in most versions). There's a debate over rationality that's more analogous to the internalism/externalism debate: Should you adopt the course of action that actually will have the best consequences, or the course of action that you expect to have the best consequences? (That question, I admit, carries a presupposition that not everyone will grant.)
There are those--I think Parfit does this--who say that you have reason to do something if it would have a good consequence, even if you have no way of knowing that that good consequence will come to pass. But I don't think most people will say that and deny that there is some sense of rationality in which it is practically rational to do what you have epistemic reasons to think you have practical reason to do. For instance, Keith DeRose says that we don't blame someone for violating the knowledge norm of assertion when they have reason to think that they knew what they said, calling this a secondary norm that derives from more general norms.
Let's transfer this to the epistemic case, supposing that you could choose your beliefs at will. Which beliefs should you choose? On externalism, those beliefs that are justified by the externalist criterion of justification. What if you choose a belief that is internalism-justified but not externalism-justified? Well, then you haven't believed what you have most reason to believe. But you had no way of telling what you did have most reason to believe. In fact, on the information you had, it could not but seem that what you had most reason to believe was what you did believe. So it seems as though the externalist ought to say, along with the externalist about practical reason, that there's a sense in which you were rational to believe what you did, even though you didn't believe what was justified.
In short: In some sense, the externalist should admit that it's OK to believe what's internal-justified. The advice "Believe what's external-justified!" is not always one that can be followed, no matter how smart you are, if you don't have enough information. Then the best you can do toward following that advice will be to believe what's internal-justified.
This argument puts pressure on the externalist to be ecumenical, but I don't think there's any similar pressure on the internalist. The internalist should acknowledge that there is more than one way to evaluate beliefs--internalist justification is the best attempt at a goal that's partly determined by externalities. But I think that goal is truth. Externalist justification is an uneasy compromise between truth, which we actually want, and internalist justification, which is the best we can do.
As Brian points out, the Inland Northwest Philosophy Conference looks like a can't-miss. I'm thinking of flying into Spokane the morning of April 30, renting a car, driving to Moscow, and driving back to Spokane May 2 (leaving after the Cohen/Elgin/Sosa closing workshop). If you might be interested in splitting a ride, let me know. I may also be looking for a hotel room to share.
(That's Moscow, Idaho.)
Over in Fake Barn Country Allan Hazlett attacks the Ecumenical Solution to the epistemological internalism/externalism controversy:
(ES) There are two concepts of justification, one internalist, the other externalist. They're both useful, they're both concepts of justification, and each serves to capture some (but not all) of our intuitions about justification. Epistemologists should proceed to examine both, and end this bickering over the "real, correct" account of justification.
Asks Allan, why should we find ecumenicism any more convincing here than for Newcomb's problem? (Or ontology, but I find it convincing for ontology, so I'll pass.) We can say: "One-boxerism accounts for some of our intuitions about rationality (the "if you're so smart how come you're not rich?" intuition); two-boxerism accounts for some others (rational actions cause their intended outcomes)." But nobody's going to find that satisfying.
"Why Does Justification Matter?"* [SEE UPDATE IN EXTENDED ENTRY] is meant to give a framework for ecumenicism that doesn't simply depend on the fact that different notions of justification each imperfectly capture our intuitions. --In fact, I think intuitions about justification are cheap, because "justification" is a philosopher's term. (On the FBC thread Jamie Dreier cites Stew Cohen making this point, so I can cite a lot of authority here!) The argument concerns not so much what "justification" should mean as what epistemological properties we should care about. There's more of it below the fold.
But Allan effectively raises the question: how can I even say that there are two different properties you can care about? Mightn't the person who cares about one property be wrong; the way one-boxers are convinced two-boxers are wrong?
My first thought was that the hypothetical person in Newcomb's Problem is faced with an exclusive decision: Take one box or two. That's not true with respect to justification; you can have your cake and eat it too, by stipulating that "justification" has two different senses. Subscript them if you like.
This looked to me like a distinction between practical and theoretical reasoning. But I don't think it is; I think it's a distinction between normative theory and metatheory. Normative practical reasoning issues in commands "Do X" or "Do Y"; normative epistemology issues in commands "Believe P" or "Believe Q." Meta-practical rationality tells us "It is good for actions to have property D"; meta-epistemology tells us "It is good for beliefs to have property G."
In metatheory, there's no reason an action/belief that has one good-making property has to have them all, so we can just say that there is more than one good-making property. In normative theory, you have to fish or cut bait: do it or don't do it, believe it or don't believe it. You can't be ecumenical about it.
Now, the Newcomb problem may seem as though it's metatheoretical. We're discussing what conception of rationality to use--does rationality depend on your actions' expected utility or their expected effects? That sounds as though it's ripe for ecumenicism. But it's not--because the point of saying that a certain action is rational is to say that you should perform that action--period. It's not to say that it has one good-making property but perhaps not another.
Perhaps you want your concept of justification to work the same way--it tells you what to believe, period. No room for saying "this belief is good in this way but not good in this way." Then, I think, there's substantial pressure toward internalism. Because if you ask yourself the question "What should I believe?" you have to be able in principle to figure out the answer. That won't happen if the answer depends on things outside your experience. The commands issued by your conception of justification will be commands you can't understand yourself.
(This doesn't happen for one- versus two-boxing. There the subject has enough information to make her choice, whichever criterion you use.)
So my attempt at ecumenicism is just an attempt to sneak internalism in the back door. Well, my internalism (such as it is) stems from the conviction that a criterion on belief isn't much use if you can't use it at least as a regulative ideal for inquiry. Many will say that knowledge is such a regulative ideal, but I think striving to make our beliefs true is as good as striving to make them knowledge.
Here's the argument of "Why Does Justification Matter?" in brief: The idea is that, if you say property P is epistemologically important, you're endorsing P as a property that people's beliefs should have; and you're effectively advising everyone, "Make sure that all your beliefs are P, and that you have all P beliefs." Advice has two dimensions of felicity; it should guarantee achievement of your goal, but it should also be something you can do. So there are two dimensions to evaluating P's epistemological importance: (1) How likely is it that P beliefs will be true? (2) To what extent does P depend on factors independent of the subject's experience (the less, the better)?
Truth is the best property with respect to factor (1). An internalist notion of justification is the best property with respect to factor (2) (actually, I call it "experience-dependent" to avoid some baggage of internalism; I don't want to commit myself to the idea that experience is internal in any way, let alone that justifications have to be available to conscious processing). And though I don't stress this in the paper, externalist notions of justification will be intermediate with respect to (1) and (2).
*The link is to the penultimate version; I haven't put the final version online yet, partly for copyright purposes. Also, the PDF I linked doesn't work on all computers (I think not in Mac OS X); here's a link to the Word version. If you can't read the Word, either, let me know and I'll e-mail you the text!
In this thread, Kent Bach finds himself having trouble answering a question from me, because I hadn't actually succeeded in asking one. This reminds me of what happened when John MacFarlane gave a talk at the University of Utah on epistemic modals. (I shouldn't cite his paper, but surely I can link his papers page.) A couple of us were chatting in the break before the discussion period; I raised some point (which I've forgotten) and John asked me if I was planning to ask it as a question. I said, "No, I'm planning to say some long thing with no question marks in it at all."
When I was called on in the question period, I said:
Take this example: Batman and Robin are chasing the Joker around an abandoned warehouse. The Joker fires two shots, which miss. Then Spiderman shows up and starts chasing the Joker too. The Joker fires four more shots, which also miss. Spiderman then advances on the Joker and subdues him. Batman says to Robin, "That was very foolhardy, because the Joker might have had more bullets left." That seems to me true, and the theory on which "might" depends on the knowledge of some group G can account for it, as long as it's OK to have the group exclude the speaker and hearer. And I think your proposal has trouble with that.
After I finished speaking, Matt Shockey whispered to me, "You should have had them chase the Riddler. That way what you said would've had question marks in it."
(My example was inspired by Brian's second example here.)
Nothing philosophical to put up today, but I see in my referrer logs that I got a hit from www.iaea.org. Interesting.
The latest club to beat contextualism with is Subject-Sensitive Invariantism (SSI), in which the standards for knowledge depend on the situation of the person to whom knowledge is being ascribed rather than the person who's doing the ascribing. Take Stewart Cohen's airline case: suppose John, Mary, and Smith are all looking at a printed itinerary that shows (accurately) that a flight lays over in Chicago. Whether the flight lays over is vitally important to John and Mary but not to Smith; so Smith has no reason to worry about whether the itinerary has a typo, but John and Mary do. On SSI, Smith knows that the flight lays over, but John and Mary do not. On contextualism [UPDATE: Approximately, note Richard Chappell's first comment], John and Mary can say truly both
(1) Smith does not know the flight lays over
and
(2) John and Mary do not know the flight lays over,
while Smith can say truly both
(3) Smith knows the flight lays over
and
(4) John and Mary know the flight lays over.
(Assuming they all like to talk about themselves in third person! What I really mean is that the propositions (1)-(4) are true in the respective contexts, or something like that.)
In Knowledge and Lotteries, Hawthorne defends SSI against the following objection (from Cohen): Suppose John and Mary know that Smith has looked at the same itinerary and that he is not in a high-stakes situation. Since SSI predicts that Smith just plain knows that the flight lays over, shouldn't Mary be able to say (3) truly to John? Worse yet, can't she then truly say this:
(5) Smith knows that the flight lays over, but we don't.
Hawthorne argues (p. 160) that Mary can't say (3) because assertion requires knowledge [MW: boo!], and Mary doesn't know (3); she doesn't know that Smith knows that the flight lays over. On SSI, for S to know that p, S must have eliminated all doubts concerning p that are relevant for someone in S's position--and p must be true. Mary knows (by her standards) that Smith has eliminated all doubts about the layover that are relevant for him; but she doesn't know (by her standards) that the flight lays over. So she doesn't know (by her standards) that Smith knows (by his standards) that the flight lays over; (3) encompasses two conditions, and Mary only knows one of them.
A problem for this account is that it predicts that Mary should withhold judgment on whether Smith knows. She doesn't know (by her standards) that Smith doesn't know (by his standards) that the flight will lay over, so she can't assert (1) either. But Mary should assert that; if John asks, "Does Smith know?" Mary should say "No, he's only read the same itinerary we have." Hawthorne acknowledges this cost for a basically similar case on p. 163, suggesting that maybe the ascriber projects her own ignorance onto the subject. Keith DeRose, in section 6 of his new paper, argues that this is a major cost for SSI and that projectivism won't help. But I have another objection to pursue.
Namely: On Hawthorne's account, Mary doesn't know (3) because Smith's knowledge requires both that Smith be in a certain epistemic position and that what Smith believes be true; while Mary knows that Smith is in that epistemic position, she doesn't know that what Smith believes is true. But then doesn't Mary know the following conditional? And shouldn't she be able to say it?
(6) If the flight lays over, then Smith knows it.
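The step from Hawthorne's account to (6) can be put schematically; the notation here is mine, not Hawthorne's. Write $p$ for "the flight lays over," $E$ for "Smith has eliminated all doubts about $p$ relevant for someone in his position," $K_S$ for "Smith knows," and $K_M$ for "Mary knows":

```latex
K_S\,p \leftrightarrow (E \wedge p)
    % SSI: knowing = relevant doubts eliminated, plus truth
K_M\,E
    % Mary knows Smith's epistemic position
\neg K_M\,p
    % Mary doesn't know the flight lays over (her stakes are high)
\therefore\ \neg K_M\,(K_S\,p)
    % so Mary can't assert (3)
\therefore\ K_M\,(p \rightarrow K_S\,p)
    % yet Mary does know the hedged conditional (6)
```

The last line assumes Mary's knowledge is closed under obvious logical consequence: from $K_M\,E$ and the tautology $E \rightarrow (p \rightarrow (E \wedge p))$, Mary knows $p \rightarrow K_S\,p$.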
Conditional knowledge ascriptions like (6), which I'll call "hedged knowledge ascriptions," seem weird, at least in this situation. (6) is perfectly fine in a case in which the speaker doesn't know what Smith's epistemic status is. If John and Mary know that Smith has checked with the airline, but they don't know what the airline has said, then it makes sense for Mary to say (6) to John. They're uncertain about whether Smith knows because they're uncertain about what evidence he has. But in the original airline case, John and Mary know exactly what evidence Smith has. They know he looked at the itinerary, and they know what the itinerary says. In that situation, (6) sounds plain weird.
What makes it weird, I think, is that we take it that someone who knows p has evidence that's sufficient to establish p. If the speaker knows what Smith's evidence is, then (6) amounts to a confession that Smith's evidence isn't sufficient to establish that the flight lays over; the speaker herself is familiar with Smith's evidence and leaves that question open. So we ought not to be saying that Smith might know that the flight lays over; it's established that his evidence isn't sufficient for that. In this way (6) resembles concessive knowledge attributions such as
(7) I know that the plane lays over, but there might be a typo in the itinerary.
(8) Smith knows that the plane lays over, but there might be a typo in the itinerary.
(7) is absolutely unacceptable, (8) pretty dubious. In each, the speaker ascribes knowledge while casting doubt on the evidence that supports the knowledge.
If my diagnosis is correct, SSI has unavoidable costs. SSI rests on the idea that the exact same evidence can establish knowledge for A but not for B. This raises a prima facie problem: Can't B correctly ascribe knowledge to A in full awareness that A's evidence is not conclusive--since it doesn't remove some doubts that B cares about? Hawthorne's argument was that the knowledge requirement on assertion blocks this ascription, but devices such as the hedge let B concentrate on the question of whether A's evidence is sufficient to establish the conclusion in question. And then SSI seems to yield counterintuitive results.
(BTW, conditionals like (6) sometimes sound funny even when they're true, because the antecedent is known false or the consequent is known true or some such. That's not happening here; on SSI, the antecedent really should be relevant to the consequent.)
I hope nobody expects me to believe that the Pirates scored 13 runs off Greg Maddux (well, Maddux only gave up six of 'em). Ridiculous.
Back when I was planning to write a dissertation on imperatives, I spent a while thinking about R.M. Hare's putative example of an imperative inference. Hare turned his conclusion into something like the anankastic conditionals that Kai von Fintel and Sabine Iatridou discuss and that I've been discussing; and the conclusion I reached wound up in "Why Does Justification Matter?", so it seems timely to blog it.
(How did I get from a dissertation on imperatives to a dissertation on testimony? Once I decided there were no imperative inferences, the most interesting thing about imperatives was the norms they were subject to. I thought that imperatives were subject to many different normative considerations, whereas assertions were clearly subject to only one fundamental normative consideration, that of truth. In order to explain how assertions were only subject to that consideration, I prepared a short account of how truth was the norm for testimony. After I'd written about 180 pages on the epistemology and norms of testimony, I realized I wasn't going to get to imperatives in the dissertation.)
Hare, in The Language of Morals (p. 35), argues that the following is a good imperative inference:
(1) Go to the largest grocer in Oxford.
Grimbly Hughes is the largest grocer in Oxford.
[Therefore:] Go to Grimbly Hughes.
and that (1) can be conditionalized:
(2) Grimbly Hughes is the largest grocer in Oxford.
[Therefore:] If go to the largest grocer in Oxford, go to Grimbly Hughes.
saying that in English we write the conclusion of (2):
(3) If you want to go to the largest grocer in Oxford, go to Grimbly Hughes.
There are a few odd things about this--aside from the dubious claim about the logical form of (3).
First is that (3) isn't a conditional where you can detach the consequent given the antecedent; it's an anankastic conditional like "You must take the A train if you want to get to Harlem." Hence I thought that a better way to English the conclusion of (2) might be:
(4) In order to go to the largest grocer in Oxford, go to Grimbly Hughes.
Second is that it makes a big difference what kind of imperative (3) is meant to be. It simply makes no sense to think of (2) yielding (3) or (4) if the imperative is taken as what C.L. Hamblin (Imperatives) calls a willful imperative, such as an order or a request. Even if I have authority to issue commands to you, there is nothing in the fact that Grimbly Hughes is the largest grocer that requires me to issue a command conditional on your wants. The imperative has to be an accountable imperative, such as advice, where the oomph behind the imperative comes from goals the addressee already has or should have, not from the imperative itself. (This means just that anankastic conditionals can't have willful imperatives as consequents.)
Third, and what really got me going, is that exactly similar reasoning should lead from (2) to:
(5) If you want to go to Grimbly Hughes, go to the largest grocer in Oxford.
And (5) seems unfelicitous. Whether this infelicity is along the lines of a true-but-misleading assertion or a false one, I can't say (and I'm not sure there's an answer), but it's definitely odd. Note that its oddity can't be accounted for by temporal factors--your going to Grimbly Hughes just is your going to the largest grocer in Oxford. Nor can it be accounted for by considerations about whether going to Grimbly Hughes constitutes going to the largest grocer in Oxford, or vice versa. Consider:
(6) If you want to go to the largest building on this block, go to Grimbly Hughes.
(7) If you want to go to Grimbly Hughes, go to the largest building on this block.
The solution, I think, has to do with what the addressee knows or can find out how to do--or can do.* When considering (3) and (5), it's most likely that the addressee does not know which grocer is largest but can find Grimbly Hughes in the A-Z. Hence, when she hears "Go to Grimbly Hughes" she can carry it out; when she hears "Go to the largest grocer in Oxford" she can't. With (6) and (7), it's most likely that she can tell which is the largest building on this block but doesn't know where Grimbly Hughes is. Hence she can carry out "Go to the largest building on this block" but can't carry out "Go to Grimbly Hughes" without further help.
So there seem to be two dimensions of felicity for advice. The first is how effective following the advice would be toward achieving the goal in mind. (3)-(7) are all perfectly effective; going to the largest grocer in Oxford is sufficient for going to Grimbly Hughes is sufficient for going to the largest building on this block, and vice versa. The second is how easy it is for the addressee to follow the advice. Here there's a big difference between (3) and (5), and (6) and (7). (The example I use in "Why Does Justification Matter?" is that "In order to bowl a strike, knock down all the pins" isn't very useful advice, unless the addressee can do whatever she wants with the ball but doesn't know what a strike is.)
I'm not sure whether this poses a problem for Kratzerian semantics for the anankastic conditional (see von Fintel and Iatridou's paper). Sometimes it'll be felicitous for the speaker to say "If you want to bowl a strike, hit the 1-3 pocket" or "If you want to bowl a strike, you ought to hit the 1-3 pocket," even though the addressee does not hit the 1-3 pocket in all the worlds in which she achieves the designated goal of bowling a strike. Hitting the 1-3 pocket is just the most doable advice available that is reasonably effective with respect to bowling a strike. But it may be that these cases can be accommodated as false but helpful utterances, as in cases where telling the truth would be misleading.
*Rereading the old old draft in which I discussed Grimbly Hughes, I found this sentence: "It is also clear that knowledge how to do something is not just knowledge that something is true." So there!
If you're in Pittsburgh and you like the kind of jazz I like, you should definitely check out these two shows (descriptions lifted from Manny Theiner's e-mail alert):
Tonight! SATOKO FUJII (piano) and NATSUKI TAMURA (trumpet) from Japan
often compared to Cecil Taylor, Fujii has stormed out of the gate with
two dozen releases in the past decade including a quartet with drummer
Tatsuya Yoshida of Ruins, and CDs on Victo, Tzadik, Enja, and others.
with local improv ensemble Terpsichorribles
8 pm $10 all ages Frick Fine Arts Auditorium, Oakland
(the building with the fountain, between Carnegie & Hillman Libraries)
[My 2 cents: Fujii isn't that much like Taylor, and may be more accessible. WPTS had a big-band CD of hers that I liked to play. I haven't heard as much of her music as I'd like, but she's clearly become one of the musicians to watch--this'll be a rare chance to see her, so jump at it. Also, I think the Terpsichorribles is the improvising ensemble I played with in Pittsburgh.]
Fri Apr 16 avantjazz/improv
DAVID BOYKIN TRIO AACM-style free jazz from Chicago, on Thrill Jockey
and Boxmedia Records
featuring: David Boykin (sax), Karl Siegfried (bass) and Mike Reed (drums)
very much in the tradition of the Chicago radical black avantgarde, Boykin
cut his teeth in jam sessions at Fred Anderson's jazz lounge in the 90s.
with solo performance by Ben Opie (sax, from Opek and Thoth Trio)
8 pm $10 all ages Garfield Artworks
[My 2 cents: My other Pittsburgh improvising band, the one that consisted mostly of philosophers, opened for Boykin once. He's fantastic. Exploratory but with a great grasp of the tradition. Don't miss it.]
Also look for the Fall Apr. 22 and Erik Friedlander Apr. 25.
Archeological evidence of 9,500-year-old pet cat on Cyprus. The last two sentences of the article are very sad.
For a little while I've been using "Potrzebie bounces" as my in-class example of a nonsense sentence. I got it out of Mad Magazine, and I just realized it's a good idea to check Polish words you get from Mad Magazine to make sure that they're not nasty obscenities that could get you ad-boarded.
It's a Polish word--it means something along the lines of "necessary" (needed/required/wanted), modulo losses in translation.... Mad publisher (in both senses) William Gaines encountered the word "potrzebie" on the label of a bottle of Polish aspirin. Most Americans pronounce it potter-zeebie, but the Polish pronunciation is nearer poh-CHEB-yeh. If I recall correctly--from a conversation with a Defense Language Institute translator some thirty-eight years ago--it's the genitive form of a noun meaning need.
Who knew I was saying something of such profound philosophical import?
Speaking of wikis, this page still seems to be the only place on the Web where you can find any appreciable chunk of the lyrics of Culture's fantastic song "See Dem A Come." The poster seems to be Croatian. You figure it out.
Do check out this post from Sappho's Breathing about women in philosophy, with a very interesting discussion. The profession needs to do more to make women feel at home, though I hope that Jason Stanley is right that things may be beginning to change.
I'm pleased to announce that I've accepted a visiting position for next year at the University of Wisconsin-Milwaukee. UWM is a great department with a great Master's program and faculty; I've got several friends there already and, having met a lot of the faculty, expect to make more. There's also at least one former blogger on the faculty, but if I told you who, he'd have to kill me.
Milwaukee also seems like a great town, and one of the only cities where you can expect the Pirates to play and win--though as I type the Brewers have the best record in the NL, so maybe I shouldn't boast.
Speaking of parity, as of this morning no team in the NL had played more than three games, and there were no winless or undefeated teams. That seems as though it must be pretty well unprecedented; anyone have statistics? Detroit's 4-0 start seems nearly unprecedented as well.
(Milwaukee link via Mom.)
Hawthorne's Knowledge and Lotteries starts (more or less) with an account of why we are reluctant to say we know our lottery ticket won't win. (I don't think he needs to start with this--given what he does, he's not obligated to explain our intuitions--but he does.) His explanation is that we use "parity reasoning"--we think of "This ticket won't win" as a member of an epistemic space divided into subcases p1...pn such that we have about the same reason to believe that each of p1...pn will not obtain. In this case p1...pn will be the propositions that the different tickets win. If it's absurd to think that we can know that all of p1...pn will not obtain, we reckon ourselves unable to know our favorite member of p1...pn.
The "about the same" qualification takes care of minor variations among the probabilities--the lottery tickets needn't be evenly weighted for the paradox to go through. And the schema is flexible enough to take care of some cases of radically different weightings--on p. 15 Hawthorne discusses a case in which 5 tickets have 10% chance each and the rest have a tiny chance, and points out that we'll divide that up as [one of the biggies wins/one of the rest wins], with parity reasoning operating within [one of the rest wins].
But I'm not sure it can account for coin flips; on the other hand, I'm not sure it has to.
Suppose someone's going to flip a coin till it comes up tails; is there some n such that you know it will come up tails within n flips?
It's not obvious that parity reasoning prevents us from answering "Yes." Take the proposition "The coin will come up tails within 1000 flips." Let pi be "Tails comes up first on the ith flip"--we're interested in the proposition that p1000 and higher don't obtain.
But the members of this space aren't roughly equal. Each pi is twice as likely to obtain as the next. So I have more reason to think that the thousandth subcase won't obtain than that any of the previous 999 won't. Thus, though it's absurd to think that we can know that all of p1...pn... will not obtain, this doesn't imply it's absurd to think that we can know that p1000 won't obtain, or similarly for any of the higher ones.
You might say that "about the same" should be measured in terms of absolute probability--that p1000 and p999 are about equally likely because the one is only 2^-1000 more likely than the other. But if there's a threshold for what differences count as "about the same" here, that threshold should be pretty low. So this cutoff still won't get you from the absurdity of knowing that no pi will obtain to the absurdity of knowing that some particular pi won't obtain, if i is high enough.
Say the cutoff is 1/1000: if the probability of q is within 1/1000 of the probability of r, we have about the same reason to believe that q and r will obtain. Then we have about the same reason to believe that any of the pi will obtain for any i higher than 9. But that doesn't mean that it's absurd to think that we can know that pi won't obtain, i > 9, unless you think it's absurd to think that we can know that every pi won't obtain for i > 9. And that's not obviously absurd--it's a 1/512 chance, which is less than double the threshold for epistemic insignificance.
(Anyway, the absolute probability standard for "about the same" is obviously wrong. Probability q is not about the same as probability 0, no matter how low q is. Sorry I ascribed this to you.)
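The arithmetic behind the last few paragraphs is easy to verify; here's a quick sketch (Python is my own choice, and the 1/1000 cutoff is just the illustrative figure from above), taking the probability of pi to be 2^-i for a fair coin:

```python
# Check the coin-flip arithmetic, assuming a fair coin:
# p(i) = probability that tails first comes up on the i-th flip = 2^-i.

def p(i):
    return 2.0 ** -i

# Each p_i is twice as likely to obtain as the next.
assert p(999) == 2 * p(1000)

# With a 1/1000 cutoff, every p_i for i > 9 is within 1/1000 of the others:
# p_10 = 1/1024 is already below the cutoff, and the p_i only shrink from there.
assert p(10) < 1 / 1000

# The chance that *some* p_i with i > 9 obtains -- i.e., that tails doesn't
# come up in the first 9 flips -- is the geometric tail 2^-9 = 1/512.
tail = 2.0 ** -9
assert tail == sum(p(i) for i in range(10, 60)) + 2.0 ** -59

# And 1/512 is less than double the 1/1000 threshold.
assert tail < 2 / 1000
print("all checks pass")
```

(The sum only runs to 60 flips because the remainder, 2^-59, is added back explicitly; every quantity here is a power of two, so the floating-point comparisons are exact.)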
Perhaps Hawthorne would be OK saying that we do think we know that the coin will come up tails within 1000 flips. I'm more or less OK with saying that--but I'm also OK with saying that we know a lottery ticket will lose in many circumstances (as Hawthorne acknowledges). I'm not sure you can get one without the other.
Rather than depend on parity reasoning, I think we're better off saying that we're unwilling to ascribe knowledge as soon as we conceptualize the space in probabilistic terms. The cases in which we do ascribe knowledge (Hawthorne's example: Of course all 60 golfers won't get a hole in one!) are ones in which probability isn't front and center.
The better part of valor, however, may be to go ask the psychologists. Why we don't ascribe knowledge is an empirical question. And it's not the question that Hawthorne winds up concerning himself with; that's when we should ascribe knowledge. (I think philosophers should give up that question, too, but that's another post.)
From the NY Times:
Experts on compensation say that the illegal doctoring of hourly employees' time records is far more prevalent than most Americans believe. The practice, commonly called shaving time, is easily done and hard to detect — a simple matter of computer keystrokes — and has spurred a growing number of lawsuits and settlements against a wide range of businesses.
It may be commonly called "shaving time," but it should be commonly called "theft" or "fraud." If a worker, with a few keystrokes, transferred money from her employer's account to her own, she would be prosecuted for embezzlement. There's no moral difference when the employer does the same--except that the workers who lose their pay are probably less able to afford it.
It's worth reading this to look at the institutional structure. Various company spokespeople claim they're shocked, shocked to find that rogue managers are shaving time. I'm not qualified to discuss the law here--except for the Wal-Mart one-minute clockouts, I'd be surprised if the companies didn't have plausible deniability--but I think there are actually interesting moral questions involved.
Namely: How are we to allocate the blame for this? The managers may not be under orders to cheat their underlings, and I think the morally required action is to quit (and blow the whistle) rather than do so, but the blame certainly doesn't stop with them. Upper-level management who put pressure on lower-level management to achieve impossible results surely bear some blame, but they didn't order the theft.
What's going on here is an institutional evil, and I think that's a category that's much underdiscussed in philosophical ethics.* The institutions seem to be set up to put pressure on underpaid district managers, to make cheating easy, and to make it easy for the corporations to turn a blind eye to what's going on. The culpability of the whole is greater than the sum of the culpabilities of the parts. It's worth noting here that institutional practices can make a difference; note the contrast between Wal-Mart and McDonald's, which gives employees printouts of hours worked and doesn't have time shaving problems.
*I'm sure there's a lot of work of which I'm ignorant here, but it's still underdiscussed--it should be intro-level material.
It just occurred to me that the prevalence of the Atkins diet is probably a good thing for a sloppy, not-really-observant Jew who wants to avoid obvious bread products during Passover. There are at least a few restaurants around with protein menus, and a carb-less dish won't have leavened bread products in it either.
(Avoiding leavened bread products is about all I try to do during Passover--I don't try to keep Kosher in any other way.)
See here for why you should put the word "Jew" on your website with the link "http://en.wikipedia.org/wiki/Jew".
How do these two [UPDATE: um, four] sentences strike you?
(1) There are cookies in the pantry, if you want any and my roommate hasn't eaten them all.
(2) There are cookies in the pantry, if you want any and if my roommate hasn't eaten them all.
(3) If you want cookies and my roommate hasn't eaten them all, there'll be some in the pantry.
(4) If you want cookies, and if my roommate went shopping, then there'll be some in the pantry.
My thoughts below the fold.
None of them strike me as decisively right or decisively wrong. (1) seems weakest, (3) and (4) pretty strong. I wouldn't want to bet the farm either way on any of these--I'll have to get together with a linguist sometime to see if there's a good way to test them.
The point is that these sentences coordinate a biscuit condition ("if you want any") and an ordinary condition ("if my roommate hasn't eaten them all"). When you say "There are cookies in the pantry if you want any," you convey that there are cookies in the pantry whether or not the audience wants any; when you say "There are cookies in the pantry if my roommate hasn't eaten them all," you convey that there are cookies in the pantry unless your roommate has eaten them all.
If it is grammatical to coordinate these two kinds of condition, then that provides some evidence that the same sense of "if" is at issue. And (1)-(4) don't repulse me the way (5)-(6) do:
(5) Jordan knows where the all-night diner is and Morgan, who is working there.
(6) Valerie and Pierpont both went to banks--Valerie to the Left Bank and Pierpont to the First National Bank.
But maybe (1)-(4) seem awful to people who aren't predisposed toward them.
(As I type this, I'm listening to Digital Underground, and "Both how I'm livin' and my nose is large" sounds fine....)
John Hawthorne, Knowledge and Lotteries, p. 168n19:
Suppose... there were an oracle who could resolve skeptical doubts. The philosopher goes running to the oracle to find out if the world was created, complete with pseudo-memories, five minutes earlier. Is it really so strange to suppose that the philosopher, before arriving at the oracle, does not know he has been around for a while even though the dullard does know?
The sensitive invariantist should answer, "I say no, you say yes, and you will change your mind."