I just twitted Brian for advertising a hiatus with rare TAR updates, followed by substantial posts the next two days (to be fair, I bet he's keeping his promise not to update the papers blog). But I found a better example of this on Mark Bechtel's The Daily Blog at Sports Illustrated Online:
Mark Bechtel is the Scorecard editor for Sports Illustrated and writes a Daily Blog every Wednesday for SI.com.
It's not a blog either. I will grant 'the'.
Anyway, I have some post ideas in the backlog, but I also have several things to do between now and the New Year. So I expect not to update this site for a little while. Happy holidays, all. A couple of thoughts below the fold, one of them mildly philosophical.
Those of us who have driven back and forth between Pittsburgh and points west may sometimes have been heard to wonder whether Ohio is really necessary. All in good fun, of course--keep exporting stuff like this and no one will ever question your worth (the key word is exporting). But--when it snows, could you try to keep I-80 clear? Thank you.
And, since I was somewhat trying to outrun a storm*, I was reminded of Jack L. Chalker's "The Stormsong Runner," which I'm going to spoil. (It's in Great American Ghost Stories, the anthology that needs to be read only for the top-nine "One of the Dead" by William Wood.) Anyway, "The Stormsong Runner" is narrated by a teacher in the mountains of West Virginia. One girl in his district is marked and outcast because, as she and everyone else thinks, she is called by the dead to bring down storms. The climax comes when a storm threatens to burst the dam; the girl is heard arguing with her dead father (or talking to herself in a deep voice? there are some things not dreamt of in the narrator's philosophy), protesting that she can't do it, while the father says even worse things will happen if the storm isn't called. Somehow the dam holds, or the valley is evacuated--I forget--and the girl explains that she didn't have to do it, because the dead got some woman in Kentucky to call it.
"Well," I thought. "That's too cheap. Ghosts just don't act that way." Isn't that an odd thing to think? I don't believe in ghosts. And I can't say, within the story, ghosts don't act that way--obviously they do. Maybe in ghost stories as a genre ghosts don't act that way, but it's Chalker's prerogative to break conventions, isn't it? So whence my imaginative resistance?
*Though what turned out to be key was outrunning the cold front--when the snow turned into freezing rain driving became much easier. And don't you think, if I were posting about philosophy, it would be worth wondering what "the snow turned into freezing rain" means?
Mark A.R. Kleiman asks, on a reader's behalf, "Are there any modern (say post 1700) novels of high literary merit that can reasonably be characterized as pro-war? Or, at least as pro-war as the Iliad?"
Well, "high literary merit" is hard to define, and as Holbo points out, "something that an MLA member wouldn't be ashamed to be seen reading" won't work at all. MLA members wouldn't be ashamed to be seen reading Star Wars novels, let alone Starship Troopers. But we know what he means, maybe.
So: How about Patrick O'Brian's Jack Aubrey series? Those don't seem to question the war in any way--the people fighting the war are seen as noble, brave, etc. In fact, in the ones I've read, it's the civilian overlords who come out worst, for playing politics and favorites rather than focusing on who's the best fighter. And O'Brian has much literary cred.
Kingsley Amis's The Anti-Death League is kind of an odd case--there's no war in it per se, but it's absolutely in favor of an aggressive military posture that... well, I can't really describe it without spoiling it. It's oddly sweet for Amis. Though the plot does somewhat turn on the idea that the UK is a Force To Be Reckoned With, which I thought had gone out with Suez.
I had been thinking that Yukio Mishima might have something, but Brad Plumer "can't think of any specifically pro-war novels he wrote," and he's surely read more than I have. (Runaway Horses is somewhat militaristic but it's got no war.)
The Red Badge of Courage depicts war as awful--but I imagine any novel of merit that depicts war would have to depict it as awful. It's been a long time since I've read it but I do think that it doesn't undercut the virtue of the war or the merit of the soldier's courage; any thoughts?
And a left-field suggestion: it's been a while since I read this too, but in One Hundred Years of Solitude Colonel Aureliano Buendía is, I think, a noble figure. The wars he fights don't turn out so well, but I think he's not condemned for fighting them. Would that count?
via unfogged, I got a lousy 46. But my starting point is obviously much different from the author's--no one ever gets 55? That's a total classic. And of course I wasn't guessing on 97. And "what a stupid song" is an extraordinarily inappropriate comment to make about 9. OK, go take it, and come back to brag about how much you beat me by.
An interesting use of "becomes" in this Clark Judge article:
Remember, when the Chargers chose Manning they held him until the Giants made them an offer they couldn't refuse: their first-round pick, quarterback Philip Rivers; a third-rounder, who became kicker Nate Kaeding; and next year's first-and fifth-round choices.
I believe the background is that the trade took place after the Chargers and Giants had chosen Manning and Rivers with their respective first-round picks, but before the Chargers picked Kaeding with the pick they got from the Giants. At least that's what this conveys to me.
A ways back, I posted on "become," and Kent Bach remarked in comments that
'becomes' is almost as big a "disgrace to the human race" (in Russell's famous phrase) as 'is' is. It can mean, roughly, come to have a certain property or come to be a certain other thing. Few metaphysicians think that the latter is possible....
(and you'll have to click the link to see his qualifiers. It's at the bottom of the page.)
In the Nate Kaeding example, compare the following:
(1) Nate Kaeding became a third-round pick.
My naive analysis of "becomes" is as follows:
"X becomes F" is true (evaluated at time t) iff in the times immediately before t, X was not F, and after t, X is F.
This analysis will not yield any trouble if F is a predicate and the 'is' involved is the 'is' of predication. If F is a term and the 'is' is the 'is' of identity, then we have the disgrace Kent mentioned.
It seems to me that (1) need not be disgraceful. We can interpret 'being a third-round pick' as involving the 'is' of predication, so that we say Kaeding did not have the property of being a third-round pick until the moment he was picked in the third round. (Take someone who doesn't sign with the team that drafts him and then reenters the draft later--I think this can happen in baseball. We might say "Shlabotnik was a fifth-round pick in 2002 and a third-round pick in 2004.")
But that move isn't available for (2), which is approximately what Judge wrote:
(2) The third-round pick who was traded became Nate Kaeding
because "third-round pick" is in the subject position. So "the third-round pick," naively, has to be treated as a definite description; and "Nate Kaedig" certainly seems to be a name that refers to Nate Kaedig.
Incidentally, this example doesn't look amenable to the metalinguistic solution that Will Davies and Jamie Dreier suggested in the earlier thread to deal with "Leningrad became St. Petersburg." The metalinguistic interpretation here would be that the third-round pick, whoever he was, started calling himself "Nate Kaeding." That's not what's meant.
Of course we might be happy here saying that Judge isn't using words literally. Certainly, if he'd written "which the Chargers used on kicker Nate Kaedig," he would've spared the world a long post.
Bonus question: Should Judge have written "which became kicker Nate Kaedig"? Does it make a difference that he said "third-rounder" instead of "third-round pick"?
via Brian Weatherson I discovered Dennis Des Chene's two weblogs. (User tip 1: On my browser, the title bar of Philosophical Fortnights shoves all the text to the right--you have to scroll to the right to find the actual blog.)
Des Chene comes down on the right side of the most crucial issue of our day, and has great taste in blog titles and URLs. He calls this blog "Etymologically dubious, but epistemologically sound." As a specialist in early modern philosophy, he knows better than me about the first part; so much better than me that I hereby ask him to let me know. (My explanation for the name of the blog is here and should really be in the sidebar; I am also trying to set a record for the blog name with the most variant spellings.)
He also gives us his interesting impressions of Pittsburgh (user tip 2: For the life of me I can't figure out the permalink; scroll down to "Impressions of Pittsburgh.") I have one tiny caveat to add to this:
The area around the Universities (Pitt and Carnegie-Mellon) is called Oakland. It was once the cultural center, built away from the congested, dirty downtown area in the twenties and thirties. Since then the symphony and the opera have moved downtown, but the buildings remain (the link is to an informative essay by Walter Kidney)
which is that the art and natural history museums remain in Oakland, along with the main library; the art museum is currently hosting the Carnegie International, one of the big international contemporary art exhibitions; the curator's introduction quotes Kant (and doesn't seem to me to say very much), while the list of artists is shockingly skewed toward the beginning of the alphabet. I hate that. (I'll be seeing it in January, and I probably will not review the whole thing.)
Also, though Pitt and CMU are both generally referred to as being in Oakland, for the life of me I've never been able to figure out why they're thought to be in the same neighborhood. Their respective areas look very different (CMU is much greener, Pitt much more built up) and are separated by natural barriers (various hollows and parks)--to me they look like a paradigm case of contiguous areas in different neighborhoods. The territory across Forbes Ave. from CMU is called "Bellefield" on several maps that I've seen (look dead center here), but I've literally never heard anyone refer to that area as Bellefield. Yet it is completely unclear what that neighborhood should be--it's not clearly Shadyside, Squirrel Hill, or Oakland either. I think people say "by CMU."
While we're on Pittsburgh: This ESPN page contains an example of the positive 'anymore' that so perplexed Brian: "I stopped reading at 'white-collar employees.' Anymore, those words just make me giggle." That translates into Standard American English as "Nowadays, those words just make me giggle." I suspect that the positive 'anymore' comes from the dialect of the anonymous Page 2 staffer who wrote that rather than Ken Jennings, into whose mouth the words are being put; I don't remember hearing that formulation in Utah. But maybe I'm just forgetting. I also hope that the question (sorry, 'answer') ESPN quotes is not the one Jeopardy actually asked Jennings, because to my eyes it's so ungrammatical its meaning isn't clear (I think "season" should be "seasoned"). And I take no responsibility for that photo caption.
Des Chene--remember him?--also makes an interesting point about this passage from Timothy Williamson:
“Would it be a good bargain to sacrifice depth for rigour? That bargain is not on offer in philosophy, any more than it is in mathematics.” (p. 15 of "Must Do Better")
which is (roughly) that it is not obviously not on offer in mathematics. This reminded me of Mark Wilson's "Can We Trust Logical Form?" (JSTOR), which includes the story of Oliver Heaviside's operational calculus (p. 528ff.), which involved manipulating expressions in differential equations using techniques from numerical algebra, producing expressions such as '1/d/dt'. The operational calculus seems not to have been rigorous. Was it deep? Well, apparently it was fruitful. [Disclaimer: I claim no authority on the history of mathematics here.]
Of course Williamson's point about the methodology of philosophy doesn't stand or fall on the success of the analogy with mathematics. And it's the point about the methodology of philosophy that I should really be concerned with. But that's something to address another day, if at all. (I believe that ending lives up to the post's title.)
Friday, December 10th, 3:30
The paper is currently available for review in the Philosophy Office,
The Colloquium is scheduled to take place in Curtin Hall 124 at 3:30pm.
For additional information, please call the Philosophy Department at
I have just been informed that some campus computers block my blog as "hate speech." I wonder why. ("Booger"?)
The Brian Murphy column noted below raises a question about the proper morphology* of Football Team Nations. Murphy begins by saying
Looks like we might have to hold off on the founding of "Eagles Nation." And, for that matter, "Patriots Nation"
In sum: Yes to Raider Nation and Steeler Nation. No to 49er Nation (uni change) and Ram Nation.
Well, should "nation" be preceded by the singular or the plural?
Data and a hypothesis below.
Google overwhelmingly says singular in most cases:
Thrown out for way too many false positives: 'Eagle Nation' (none of the top 10 hits have to do with Philly football)
It seems to me as though English often calls for singular nouns in adjectival uses: If you sell cars, you're a car salesperson, not a cars salesperson. On the other hand, here the idea is that you root for the Steelers, not for individual Steelers, so that might provide a reason to use the plural. Still, the singular seems to be in the ascendancy on this extremely limited sample.
Looking at origins won't tell us much, because probably the original of all Sports Team Nations is Red Sox Nation with 164,000 hits. And 'Red Sox' is a morphological mystery, to me anyway. Is it singular? Plural? A mass noun? Who knows? Anyone?
Clarification on the "Singular/Plural Nation" phenomenon would also be welcome and I think only fair.
*At least, I think it's morphology. I'm not sure.
Or, if you've got nothing else to say, nitpick.
I have personally attended games on the West Coast where Steeler fans were cheering louder than the fans at the following stadiums: Candlestick, San Diego, Arizona and, the latest crusher, Dallas. We went into Dallas and took over the stadium.
This is in the service of a good point--Steeler Nation exists if any Football Team Nation does--so I guess we'll count it as a useful falsehood.
Also, on this morning's bus ride, Transit Television Network offered the following cooking tip (approximately):
To avoid tears when cutting onions, pass it under running cold water several times while cutting.
Is this an example of booger anaphora, or do I get to count it as a mistake about agreement?
I've just put up a political post, so why not another one. First a caveat: Orin Kerr has an explanation of why our government's recent attempt to claim that evidence obtained by torture can be used against Guantanamo detainees might not be meant to make torture official U.S. policy. (Short version, as far as I can tell: The idea is that the government is "not inclined to say that a detainee has a particular right unless a court affirmatively rules that this is so"; that includes the right to challenge evidence, even--perhaps hypothetically--should it be based on torture.)
Nevertheless, the evidence seems to me clear that in fact the U.S. government is at the very best countenancing torture and using "evidence" based on it. (See Katherine's first comment to this post, which she backs up here.)
Over to Ogged:
Now we know how it happens. I remember, as a kid, seeing news footage of people on the streets in Moscow, wondering what was wrong with them, why they were willing to live under a repressive regime, what about the Russian (or East German, or Romanian...) character allowed them to become repressors and repressed. But, of course, there was nothing special about them at all. In "response" to whatever threat, they and their government allowed some curtailing of freedom, and the logic of that move (threat necessitates greater control and less liberty) is inexorable. Most people, because they're not directly affected, don't think about their liberty at all; some people (like me), are upset, complain, but do nothing substantive; and a few people (always too few), try to make a difference.
I had to restrain myself from copying the whole post. Read it.
I (and many other blind cc's) just received an e-mail from David Velleman entitled "New Blog -- Please Post, Please Link." I hear and obey--here's the post and the link (http://left2right.typepad.com/main/). (I'm blogrolling it under "Philosophy" because many of the posts are philosophically informed.)
The blog is called "Left2Right" and, to quote the initial post,
In the aftermath of the 2004 Presidential election, many of us have come to believe that the Left must learn how to speak more effectively to ears attuned to the Right. How can we better express our values? Can we learn from conservative critiques of those values? Are there conservative values that we should be more forthright about sharing? "Left2Right" will be a discussion of these and related questions.
Although we have chosen the subtitle "How can the Left get through to the Right?", our view is that the way to get through to people is to listen to them and be willing to learn from them. Many of us identify ourselves with the Left, but others are moderates or independents. What we share is an interest in exploring how American political discourse can get beyond the usual talking points.
A list of the bloggers below the jump. I'm particularly happy that I get to blogroll someone whose work I've taught this term.
philosophy: U Vermont
environmental studies, philosophy: NYU
philosophy: Ohio State
philosophy, economics, U Arizona
law, political science: U Michigan
philosophy, women's studies: U Michigan
law, economics: USC and RAND Corp.
philosophy: U California, Davis
J. DAVID VELLEMAN
philosophy: U Michigan
philosophy, law: U Texas, Austin
political science, philosophy : MIT
philosophy, Inst. for Phil. & Pub. Policy: U Maryland
KWAME ANTHONY APPIAH
philosophy, Center for Human Values: Princeton
political science: U Virginia
philosophy, Prog. on Ethics & Pub. Life: Cornell
PAUL F. VELLEMAN
statistics, ILR: Cornell
philosophy: U Michigan
philosophy, comparative literature: Stanford
philosophy, law: UCLA
philosophy: U Michigan
A couple of interesting dialogues about knowledge out of Ross Macdonald. (Mild spoilers for The Doomsters and The Zebra-Striped Hearse ahead.) The first is an example of assertion-without-knowledge that's much better than the last one I got out of a detective story. The first speaker is Slovekin, a reporter; the second is the detective, Lew Archer.
"So if you know something that will let Hallman off the hook, you better spill it.  I can have it on the radio in ten minutes."
" He didn't do it."
" Do you know he didn't for certain?"
" Not quite.  I'd stake my reputation on it, but I have to do better than that.  Hallman's being used as a patsy, and a lot of planning went into it." (Ross MacDonald, Archer in Jeopardy, p. 178; my numbering)
Here Archer explicitly acknowledges  that he doesn't know what he originally asserted . Slovekin's asking if Archer knows for certain  might be taken to shift the standards for knowledge up, but Archer goes on to assert something  that pretty much entails ; so he evidently doesn't know  by the new standard either. Not only that, Archer makes clear  that he'd stake his reputation on the truth of his assertions, which is eerily like my account of assertion on which the speaker stakes her credibility on the truth (or at least justification) of what she says. I could hardly have stacked the deck better myself.
There's an interesting cross-play of purposes in this dialogue. A Hawthornean practical-environment view of knowledge has a natural connection to the knowledge account of assertion, or rather an assertability account of knowledge. Suppose your evidence that p is good enough for knowledge only if it's good enough to make it practically rational to act on p. Suppose also in your practical environment you're faced with this choice: assert p, or don't. Then you know that p only if your evidence is good enough to make p assertable.
But in this case there are two different purposes to which the assertion (2), that Hallman didn't do it, can be put. One question is whether Slovekin is to believe that Hallman didn't do it (which will affect the information he gives Archer, and how he directs his further investigations); another question is whether Slovekin will broadcast a report that Hallman didn't do it. Archer's evidence is good enough for him to try to get Slovekin to believe, but not good enough for him to try to get Slovekin to broadcast the report. Hence it's OK for Archer to assert that Hallman didn't do it, but it's not OK for Archer to claim knowledge. Slovekin can't broadcast anything unless he knows, and Archer can't sign off on that higher epistemic standard.
The second dialogue is about weak standards for knowledge, and doesn't really require scene-setting.
 "But why would anybody want to kill Ralph?"
 "There's one obvious possibility. He may have known who murdered Dolly."
 "Why didn't he say so, then?"
 "Perhaps he wasn't sure. I believe he was trying to investigate Dolly's murder...." (Archer in Jeopardy, p. 447, my numbering)
The purpose of the modals in  and  clearly isn't to establish that in one epistemically accessible world Ralph knows who murdered Dolly and in a different epistemically accessible world he isn't sure.  and  pick out one and the same possibility. So in  we have a knowledge ascription that's compatible with the subject's not being sure of what he knows. (It's a knowledge-who ascription rather than a knowledge-that ascription, but I don't think that makes much difference.) Nor do I think there's much plausibility to the idea that standards for knowledge have shifted between  and , though I don't have a knockdown argument against it either. Perhaps Ralph's doubts, whatever they may be, become salient.
Anyway, the standards for the knowledge-ascription in  seem to be determined by the purpose of the action it is meant to explain, as specified in . And this explanation requires very permissive standards for knowledge--even more permissive than saying that knowledge is true belief. If you're thinking of rubbing out someone who knows too much, you're not going to worry about whether he is Gettiered* or has a true but unjustified belief, or even whether he has full-fledged belief. Even someone who has a little evidence that you murdered Dolly and is thinking about the possibility that you did could be dangerous enough.
Then  calls for the explanation of a different action, and that explanation raises different concerns. Ralph wouldn't have told people who killed Dolly unless he was quite sure about it. A half-hearted true belief with some reservations wouldn't have been enough for him to tell people. But it would be enough to get him killed. That's why, under the standards for knowledge set by the inquiry in , Ralph can have known who killed Dolly, without being sure of it (as  brings out).
Haven't I just provided an explanation for how the standards shift between  and ? Maybe so. But it doesn't seem as though any of the obvious standard-shifting devices come into play. I don't hear Archer as emphasizing "sure" or any other part of his sentence, and no specific doubt has been made salient. I'm more inclined to say that this indicates that "knowledge" is a somewhat slippery word that can be used to pick out many different kinds of epistemic valuation while seeming to mean the same thing all along. But that's exactly the sort of thing I would say--people who take knowledge seriously will differ.
*Title for a philosophers' thriller: The Man Who Had Too Many Gettiered Beliefs.
In my previous post on causal and evidential decision theory I mentioned my suspicion that Newcomb's problem rests on some assumptions about free will that might make us uncomfortable when looked at closely. Herewith a stab at explaining some of what I mean.
All the way back in March I mentioned an inverted Newcomb problem; it's just like the regular Newcomb problem, except you get to decide whether to take the box with the $1000 after you see whether there's $1 million in the other box. Brian spiked my guns in that post by pointing out that the evidential decision theorist would say you should take the $1000 in that case. But I still think I can draw a moral from it (without relying on saying "evidential decision theory involves managing the evidence in a goofy way, so it can't provide a reason to two-box in this case and one-box in the straight Newcomb.")
Because the point I'd been hoping to make about the inverted Newcomb is this: We should be suspicious that it's possible. In particular, we should be suspicious of the stipulation that, in lots and lots of past observed trials, the being has left the opaque box empty every time the subject has gone on to take the $1000, and has put $1 million in the opaque box every time the subject has gone on to leave the $1000 on the table. That's logically possible, and it's metaphysically possible, and maybe even physically possible, but it's not going to happen--if the subjects have anything resembling free will.
In particular--why would the subjects leave the $1000 on the table if they already have the $1 million? There might be answers involving superstition and the like, but if the subjects don't have superstitious beliefs I can't see any plausible story about why they wouldn't take the extra $1000. Unless the being exerts some spooky compulsive force that leads them not to take an extra $1000 when they've just picked up a million under these strange circumstances.* If that kind of spooky force is operating, it seems as though the subjects must be seriously lacking in free will; and the whole problem of decision theory depends on the assumption that you're deciding freely. (So I claimed with respect to the Sonny example, anyway.)
I carry this response back over to the original Newcomb's paradox, to some extent. We have to ask: How does the being manage to predict right every time? Unless some spooky backward causation is happening (in which case all bets are off), it must be latching onto something in the past that enables it to predict for sure whether the subject will one-box.
What might that thing be? Well, in my case, it might do a bit of Googling and discover all the times I say I'm a one-boxer. So I've already formed an intention to one-box, in the unlikely event that I find myself in this situation. And so for me the problem reduces (as has oft been observed) to whether it is rational to stick to a plan when making the plan was advantageous, and following through on the plan is disadvantageous. I think it is rational; but the point is, at this point it's no harder to argue for one-boxing than it is to argue for follow-through on this sort of plan. (I think this may be analogous to follow-through when throwing a baseball, but that'd take more work to explain.)
This is basically a variant on what I imagine is a ploy that's been tried before: "There ain't no such thing as a perfect predictor, so why should I worry about cases that depend on one?" But I hope giving a bit more detail makes that ploy a bit more respectable.
As Bob Stalnaker points out in comments here, Newcomb-style problems usually don't require perfect predictors. Raw evidential decision theory says that I should one-box even if the predictor is right only 75% of the time, if that's all the evidence I have. And that's not nearly as implausible as a perfect predictor, let alone a perfect predictor in the meta-Newcomb case.
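To see the arithmetic, here's a minimal sketch of the evidential calculation (the payoffs and the 75% figure are the standard stipulations; the function name, and the simplifying assumption that the predictor's accuracy is the same for one-boxers and two-boxers, are mine):

```python
# Raw EDT for Newcomb's problem: condition on your own act as
# evidence about what the predictor did, then compare expectations.

def edt_expected_value(action, accuracy=0.75,
                       big_prize=1_000_000, small_prize=1_000):
    """Expected payoff of an action, conditioning on that action as
    evidence about whether the predictor filled the opaque box."""
    if action == "one-box":
        # Conditional on one-boxing, the predictor most likely
        # foresaw it and put the million in.
        return accuracy * big_prize + (1 - accuracy) * 0
    elif action == "two-box":
        # Conditional on two-boxing, the predictor most likely
        # foresaw that and left the opaque box empty.
        return (accuracy * small_prize
                + (1 - accuracy) * (big_prize + small_prize))
    raise ValueError(action)

for act in ("one-box", "two-box"):
    print(act, edt_expected_value(act))
# One-boxing comes out ahead: 750,000 vs. 251,000.
```

Even at 75%, the gap is enormous; the accuracy would have to fall below about 50.05% before raw EDT switched to two-boxing with these payoffs.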
Yet the fact that the predictor is right 75% of the time in all cases does not necessarily mean, as you deliberate over your decision, that you should think that the probability of there being $1 million in the box, conditional on your one-boxing, is 75%. The predictor might have told you the following (recycled from Brian's comments):
“I’m really good at predicting whether someone is going to hesitate about whether to 1-box or 2-box, or whether they’re just going to go for one choice or the other without even considering the other alternative. If I think they’re going to go straight for one choice, I act appropriately. If I think they’re going to hesitate, I never put the million dollars in.”
This is compatible with the Predictor being right 75% of the time, if most people just go straight for one box or for both. But in this case, EDT says that if you’re wondering what to do you should one-box, since you’ve already most likely lost the million no matter what. (You'll be best advised to decide in advance that you'll go straight for the one box.) And, I think, some such story is the most plausible story on which the predictor can be right most of the time even though the subjects do make their decisions freely.
(Alternatively, the predictor may put the $1 million in for declared one-boxers, as well--in which case my declaring as a one-boxer is still a wise strategy. You don't think this post is meant as a philosophical argument, do you?)
All this is not pointing to one theory of decision or another. Nor would I wish to argue that if we're really free our decisions can't be predicted in the ways that are called for in these situations; I think that's clearly false. (Give me a choice between listening to Duke Ellington or Kenny G and you can predict that I will freely choose to listen to Duke every time.) But I do think that there may be issues here concerning whether we can simultaneously see ourselves as making free choices and as influenced by certain sorts of factors. When we flesh out Newcomb-style stories so that we can see ourselves as both free and predictable, they may look less paradoxical.
(And conversely, there's something funny going on in lesion cases, but I'm not quite sure what it is or what to do about it. Lame, isn't that?)
*Another story that would be consistent with the description is that the being never puts the $1 million in the opaque box, and the subject always takes the $1000. But that won't generate any paradoxes.
This is totally irrelevant, but also not worth its own post: Natalie Portman's Fame Audit gives her current fame as Sarah Michelle Gellar and her deserved fame as Sarah Jessica Parker. But Katie Holmes's Fame Audit gives her current fame as Sarah Michelle Gellar and her deserved fame as Natalie Portman! From step 1, Portman = Gellar, so from step 2 ought(Holmes) = Portman = Gellar = Holmes. But I think the Fametrackers are using an unacceptably roundabout way of saying that Katie Holmes should remain exactly as famous as she is. Haven't they heard of the maxim of manner?
It is not acceptable to explain this away by relativizing to time and arguing that Gellar(12/2/04) does not necessarily equal Gellar(11/7/03), or that Portman(12/2/04) does not necessarily equal Portman(11/7/03). That would prevent the argument from going through, but it would require that Fame can vary appreciably with time, which we already know to be false.
It's been kicking around the back of my mind for a while that the Newcomb Paradox sometimes embodies some uncomfortable assumptions about free will. Not that that's necessarily a problem, but I think it's worth getting those assumptions (if they exist) out in the open.
That was brought to mind by Andy Egan's paper on some counterexamples to causal decision theory (via Brian). To make his counterexamples go through, Andy has to make some stipulations about the credence an agent has in the proposition that she will perform a certain action--the very action she is deliberating about. In one way, this seems unproblematic; we can predict what we might do.
But in another way, it seems odd. We see what we will do as a product of our deliberations. Should we take the probability that we will do it as fixed even as we're deliberating about whether to do it? That can seem like an abdication of free will. Libertarians may have a particular problem with this. Nuel Belnap (my advisor) has at least floated the idea that no probabilities can be assigned to agent-caused indeterministic transitions.
So I wonder if looking at free will issues will affect some of the examples in this vicinity. The following, I think, is a limiting case, though of what, I don't know:
Sonny is riding in a handcart that in fifteen seconds will stop at the lip of a mineshaft. Smith is stuck at the bottom of the mineshaft and will drown if not rescued. When the handcart reaches the lip of the shaft, Sonny will either jump into the mineshaft to attempt to rescue Smith, or will not do so.
Sonny knows this: His society is filled with robots. Robots are conscious, and indeed their conscious lives are indistinguishable from those of people. So Sonny doesn't know whether he's a robot or a person. However, while people really have free will, robots' actions are really determined by their programming. It only feels (to them) as though they're exerting control over their actions. [I'm assuming this is coherent--some agent-causationists might think there's a special volitional feeling, but I'm assuming that even if there is one it can be faked.]
Robots are, in fact, programmed to rescue people whenever they can; so if Sonny is a robot he is inevitably going to jump into the mineshaft. Furthermore, all robots, and only robots, are equipped with special rockets in their heels that go off only when necessary to perform a rescue. If Sonny has these rockets, he will succeed in rescuing Smith. If Sonny does not have these rockets and he jumps into the mineshaft, both he and Smith will drown. If Sonny stays out of the mineshaft, only Smith will drown. Whether or not Sonny is a robot, he ranks the outcomes as follows: Rescuing Smith > Smith drowning > both drowning.
It seems plausible that if Sonny jumps into the mineshaft it provides some evidence that he is a robot (at least, it provides evidence to bystanders), and thus that he will succeed in rescuing Smith. Nevertheless, it seems clear to me that Sonny should decide not to jump into the mineshaft. That is: If Sonny is a robot and jumps into the mineshaft, then he is exempt from rational criticism, because he hasn't made a choice at all--he's just followed his programming. If Sonny isn't a robot, then jumping into the mineshaft will lead to the worst possible outcome. So in any case in which Sonny is making a choice, the right choice is to stay out of the mineshaft.
How this plays out with evidential and causal decision theory, I'm not sure. But my argument goes through (if it does) even if you assume that Sonny cannot assign credences to the possibilities that he is a robot, or that he will jump into the mineshaft, or any combination thereof. All that is required is the assumption that Sonny lacks the power to rescue Smith iff he has free will. And maybe this is the limiting case of examples like the ones Andy cites, in which people have reason to believe that the people who will choose a certain course of action are those who suffer from a condition that will make them less likely to succeed at it.
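For what it's worth, here is a minimal numerical sketch of how the two decision theories might score the case. The utilities and probabilities are purely my own illustrative assumptions (the post assigns none, and indeed questions whether Sonny can assign credences at all); the point is just the structural contrast between conditioning the robot-hypothesis on the act (evidential) and holding it fixed (causal).

```python
# Illustrative sketch of the Sonny/mineshaft case. All numbers are
# assumptions for illustration, not from the original post.

# Utilities reflecting Sonny's ranking: rescue > Smith drowns > both drown.
U_RESCUE, U_SMITH_DROWNS, U_BOTH_DROWN = 10, 0, -10

def outcome_utility(robot: bool, jump: bool) -> float:
    """Robots have heel rockets and always succeed; humans who jump drown with Smith."""
    if not jump:
        return U_SMITH_DROWNS  # staying out: Smith drowns either way
    return U_RESCUE if robot else U_BOTH_DROWN

def edt_value(p_robot_given_jump: float, p_robot_given_stay: float, jump: bool) -> float:
    """Evidential expected utility: P(robot) is conditioned on the act itself."""
    p = p_robot_given_jump if jump else p_robot_given_stay
    return p * outcome_utility(True, jump) + (1 - p) * outcome_utility(False, jump)

def cdt_value(p_robot: float, jump: bool) -> float:
    """Causal expected utility: jumping can't change whether Sonny is a robot."""
    return p_robot * outcome_utility(True, jump) + (1 - p_robot) * outcome_utility(False, jump)

# Since robots always jump, jumping is strong evidence of robothood,
# so EDT can favor jumping...
print(edt_value(0.9, 0.0, jump=True), edt_value(0.9, 0.0, jump=False))
# ...while CDT, with a fixed moderate prior that Sonny is a robot,
# favors staying out.
print(cdt_value(0.3, jump=True), cdt_value(0.3, jump=False))
```

On these (made-up) numbers, evidential reasoning endorses jumping and causal reasoning endorses staying out, which at least locates the case in the familiar Newcomb-style territory; whether either calculation is even well posed when the agent can't self-assign credences is exactly the question the post raises.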
I'm going to try to illustrate that point a little bit with the "psychopath button" case, which Andy says was suggested by David Braddon-Mitchell. And the reason I especially like this case is that it's just like Price Day's story "Four O'Clock," which also became a Twilight Zone episode. My discussion of this case may spoil the story, so go read it.
OK--so there's a button that will kill all psychopaths. Paul would like to live in a world with no psychopaths. However, Paul is also confident that only a psychopath would push the button, and Paul would rather live in a world with psychopaths than die.
If Paul is confident that only a psychopath would think about pushing the button, then there's no paradox; Paul can be confident that he's a psychopath, and that his plan will lead to the worst of all outcomes. (This I think is a "tickle defense.")
So it must be that Paul is confident that only a psychopath would actually push the button. Does this mean that psychopathy impels someone to push the button, in an unfree way? In this case, it's like the Sonny case in reverse--Paul might as well push the button, since if he's a psychopath he's doomed anyway. Conversely, if non-psychopaths would find the option of pushing the button so repellent that they wouldn't be able to bring themselves to do it, then Paul had better not press the button; it's exactly like the Sonny case.
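The button case can be given the same sort of numerical sketch as the mineshaft case. Again the utilities and probabilities are my own illustrative assumptions, chosen only to show why the case is standardly taken as a counterexample to CDT: pressing is strong evidence of psychopathy without being a cause of it.

```python
# Illustrative sketch of the psychopath-button case. All numbers are
# assumptions for illustration, not from the post or from Egan's paper.

U_NO_PSYCHOS = 10    # press and survive: a world with no psychopaths
U_STATUS_QUO = 0     # don't press: psychopaths remain, Paul lives
U_DEAD = -100        # press while being a psychopath: Paul dies too

def utility(psycho: bool, press: bool) -> float:
    if not press:
        return U_STATUS_QUO
    return U_DEAD if psycho else U_NO_PSYCHOS

def edt_value(p_psycho_given_press: float, p_psycho_given_not: float, press: bool) -> float:
    """Evidential: pressing itself is evidence that Paul is a psychopath."""
    p = p_psycho_given_press if press else p_psycho_given_not
    return p * utility(True, press) + (1 - p) * utility(False, press)

def cdt_value(p_psycho: float, press: bool) -> float:
    """Causal: pressing cannot make Paul a psychopath, so his prior stays fixed."""
    return p_psycho * utility(True, press) + (1 - p_psycho) * utility(False, press)

# If only a psychopath would actually press, EDT says: don't press...
print(edt_value(0.95, 0.01, press=True) < edt_value(0.95, 0.01, press=False))
# ...while CDT, holding Paul's low prior of psychopathy fixed, says: press.
print(cdt_value(0.05, press=True) > cdt_value(0.05, press=False))
```

On these numbers CDT recommends pressing, which is the intuitively wrong verdict; the tickle-defense move discussed next amounts to denying that Paul's credence in his own psychopathy can stay at the low unconditional prior once he notices what he's inclined to do.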
I'm pretty sure that there's some middle ground here; but I'm not quite sure what it is. If, for instance, not being a psychopath is a question of being moved by certain reasons, then when you're deliberating "only psychopaths would push this button" you already know whether you're being moved by those reasons, if the deliberation is effective. So we're back to the tickle defense--if Paul's failure to be moved by the value of (psychopaths') lives gives him evidence that he's a psychopath, then the causal decision theorist can explain why he doesn't want to push the button.
Andy makes the point that evidential decision theorists can also advert to tickle defenses; I don't know whether these points would help EDT against its counterexamples.