December 03, 2004

Newcomb's Problem: The Ain't-No-Such-Thing Response

In my previous post on causal and evidential decision theory I mentioned my suspicion that Newcomb's problem rests on some assumptions about free will that might make us uncomfortable when looked at closely. Herewith a stab at explaining some of what I mean.

All the way back in March I mentioned an inverted Newcomb problem; it's just like the regular Newcomb problem, except you get to decide whether to take the box with the $1000 after you see whether there's $1 million in the other box. Brian spiked my guns in that post by pointing out that the evidential decision theorist would say you should take the $1000 in that case. But I still think I can draw a moral from it (without relying on saying "evidential decision theory involves managing the evidence in a goofy way, so it can't provide a reason to two-box in this case and one-box in the straight Newcomb.")

Because the point I'd been hoping to make about the inverted Newcomb is this: We should be suspicious that it's possible. In particular, we should be suspicious of the stipulation that, in lots and lots of past observed trials, the being has left the opaque box empty every time the subject has gone on to take the $1000, and has put $1 million in the opaque box every time the subject has gone on to leave the $1000 on the table. That's logically possible, and it's metaphysically possible, and maybe even physically possible, but it's not going to happen--if the subjects have anything resembling free will.


In particular--why would the subjects leave the $1000 on the table if they already have the $1 million? There might be answers involving superstition and the like, but if the subjects don't have superstitious beliefs I can't see any plausible story about why they wouldn't take the extra $1000. Unless the being exerts or detects some spooky compulsive force that leads them not to take an extra $1000 when they've just picked up a million under these strange circumstances.* If that kind of spooky force is operating, it seems as though the subjects must be seriously lacking in free will; and the whole problem of decision theory depends on the assumption that you're deciding freely. (So I claimed with respect to the Sonny example, anyway.)

I carry this response back over to the original Newcomb's paradox, to some extent. We have to ask: How does the being manage to predict right every time? Unless some spooky backward causation is happening (in which case all bets are off), it must be latching onto something in the past that enables it to predict for sure whether the subject will one-box.

What might that thing be? Well, in my case, it might do a bit of Googling and discover all the times I've said I'm a one-boxer. So I've already formed an intention to one-box, in the unlikely event that I find myself in this situation. And so for me the problem reduces (as has oft been observed) to whether it is rational to stick to a plan when making the plan was advantageous but following through on it is disadvantageous. I think it is rational; but the point is that, at that stage, arguing for one-boxing is no harder than arguing for follow-through on this sort of plan. (I think this may be analogous to follow-through when throwing a baseball, but that'd take more work to explain.)

This is basically a variant on what I imagine is a ploy that's been tried before: "There ain't no such thing as a perfect predictor, so why should I worry about cases that depend on one?" But I hope giving a bit more detail makes that ploy a bit more respectable.

As Bob Stalnaker points out in comments here, Newcomb-style problems usually don't require perfect predictors. Raw evidential decision theory says that I should one-box even if the predictor is right only 75% of the time, if that's all the evidence I have. And a 75% predictor is not nearly as implausible as a perfect predictor, let alone a perfect predictor in the inverted Newcomb case.
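
To make the arithmetic behind that claim explicit, here's a minimal sketch; the payoffs are the standard Newcomb amounts, and the 75% figure is read, as raw EDT would read it, as the probability that the prediction matches my choice whichever choice I make:

```python
# A minimal sketch of the raw-EDT calculation above. The payoffs are the
# standard Newcomb amounts, and the 75% accuracy is read as
# P(prediction matches my choice | my choice), for either choice.
MILLION = 1_000_000
THOUSAND = 1_000
p_correct = 0.75

# If I one-box, the predictor most likely foresaw it and filled the opaque box.
edt_one_box = p_correct * MILLION                     # 750,000

# If I two-box, the predictor most likely foresaw that too and left the
# opaque box empty, but I keep the visible $1000 either way.
edt_two_box = (1 - p_correct) * MILLION + THOUSAND    # 251,000

print(edt_one_box > edt_two_box)  # True: raw EDT says one-box at 75% accuracy
```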

Yet the fact that the predictor is right 75% of the time in all cases does not necessarily mean, as you deliberate over your decision, that you should think that the probability of there being $1 million in the box, conditional on your one-boxing, is 75%. The predictor might have told you the following (recycled from Brian's comments):

“I’m really good at predicting whether someone is going to hesitate about whether to 1-box or 2-box, or whether they’re just going to go for one choice or the other without even considering the other alternative. If I think they’re going to go straight for one choice, I act appropriately. If I think they’re going to hesitate, I never put the million dollars in.”

This is compatible with the predictor being right 75% of the time, if most people just go straight for one box or for both. But in this case, EDT says that if you're wondering what to do you should two-box, since you've already most likely lost the million no matter what. (You'll be best advised to decide in advance that you'll go straight for the one box.) And, I think, some such story is the most plausible one on which the predictor can be right most of the time even though the subjects do make their decisions freely.
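
To see how that story hangs together, here's a sketch with invented numbers; the proportions of each type of subject and the predictor's reliability are all made up for illustration, and only the structure comes from the quoted story:

```python
# A sketch of the "hesitation predictor" story, with invented numbers.
# Suppose 90% of subjects go straight for one choice or the other and the
# predictor spots them with 80% reliability; the remaining 10% hesitate,
# the predictor never fills the opaque box for them, and (say) half of
# them end up choosing as it expected.
straight_choosers, hesitators = 0.9, 0.1
spot_reliability = 0.8

overall_accuracy = straight_choosers * spot_reliability + hesitators * 0.5
print(overall_accuracy)  # 0.77 -- right most of the time, as in the story

# But for someone already deliberating, the million is (almost certainly)
# absent whichever box they take, so the EDT expected payoffs are:
MILLION, THOUSAND = 1_000_000, 1_000
p_million_if_hesitating = 0.0   # the predictor "never" fills it for hesitators
edt_one_box = p_million_if_hesitating * MILLION              # 0
edt_two_box = p_million_if_hesitating * MILLION + THOUSAND   # 1,000
print(edt_two_box > edt_one_box)  # True: the deliberator does better taking both
```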

(Alternatively, the predictor may put the $1 million in for declared one-boxers, as well--in which case my declaring as a one-boxer is still a wise strategy. You don't think this post is meant as a philosophical argument, do you?)

All this is not pointing to one theory of decision or another. Nor would I wish to argue that if we're really free our decisions can't be predicted in the ways that are called for in these situations; I think that's clearly false. (Give me a choice between listening to Duke Ellington or Kenny G and you can predict that I will freely choose to listen to Duke every time.) But I do think that there may be issues here concerning whether we can simultaneously see ourselves as making free choices and as influenced by certain sorts of factors. When we flesh out Newcomb-style stories so that we can see ourselves as both free and predictable, they may look less paradoxical.

(And conversely, there's something funny going on in lesion cases, but I'm not quite sure what it is or what to do about it. Lame, isn't it?)

*Another story that would be consistent with the description is that the being never puts the $1 million in the opaque box, and the subject always takes the $1000. But that won't generate any paradoxes.

Posted by Matt Weiner at December 3, 2004 01:16 PM