December 02, 2004

A Limiting Case for Causal/Evidential Decision Theory

It's been kicking around the back of my mind for a while that discussions of the Newcomb Paradox sometimes embody some uncomfortable assumptions about free will. Not that that's necessarily a problem, but I think it's worth getting those assumptions (if they exist) out in the open.

That was brought to mind by Andy Egan's paper on some counterexamples to causal decision theory (via Brian). To make his counterexamples go through, Andy has to make some stipulations about the credence an agent has in the proposition that she will perform a certain action--the very action she is deliberating about. In one way, this seems unproblematic; we can predict what we might do.

But in another way, it seems odd. We see what we will do as a product of our deliberations. Should we take the probability that we will do it as fixed even as we're deliberating about whether to do it? That can seem like an abdication of free will. Libertarians about free will may have a particular problem with this. Nuel Belnap (my advisor) has at least floated the idea that no probabilities can be assigned to agent-caused indeterministic transitions.

So I wonder if looking at free will issues will affect some of the examples in this vicinity. The following, I think, is a limiting case, though a limiting case of exactly what, I don't know:

Sonny is riding in a handcart that in fifteen seconds will stop at the lip of a mineshaft. Smith is stuck at the bottom of the mineshaft and will drown if not rescued. When the handcart reaches the lip of the shaft, Sonny will either jump into the mineshaft to attempt to rescue Smith, or will not do so.

Sonny knows this: His society is filled with robots. Robots are conscious, and indeed their conscious lives are indistinguishable from those of people. So Sonny doesn't know whether he's a robot or a person. However, while people really have free will, robots' actions are really determined by their programming. It only feels (to them) as though they're exerting control over their actions. [I'm assuming this is coherent--some agent-causationists might think there's a special volitional feeling, but even if there is one, I'm assuming it can be faked.]

Robots are, in fact, programmed to rescue people whenever they can; so if Sonny is a robot he is inevitably going to jump into the mineshaft. Furthermore, all robots, and only robots, are equipped with special rockets in their heels that go off only when necessary to perform a rescue. If Sonny has these rockets, he will succeed in rescuing Smith. If Sonny does not have these rockets and he jumps into the mineshaft, both he and Smith will drown. If Sonny stays out of the mineshaft, only Smith will drown. Whether or not Sonny is a robot, he ranks the outcomes as follows: rescuing Smith > Smith drowning > both drowning.

It seems plausible that if Sonny jumps into the mineshaft it provides some evidence that he is a robot (at least, it provides evidence to bystanders), and thus that he will succeed in rescuing Smith. Nevertheless, it seems clear to me that Sonny should decide not to jump into the mineshaft. That is: If Sonny is a robot and jumps into the mineshaft, then he is exempt from rational criticism, because he hasn't made a choice at all--he's just followed his programming. If Sonny isn't a robot, then jumping into the mineshaft will lead to the worst possible outcome. So in any case in which Sonny is making a choice, the right choice is to stay out of the mineshaft.
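
To make the evidential pull vivid, here's a toy calculation. The utilities (1 for a rescue, 0 for Smith drowning alone, -1 for both drowning) and the credences are stipulations of mine--nothing in the story fixes them, and as I say below, the argument is supposed to go through even if Sonny can't assign credences at all.

    # Toy evidential calculation for the Sonny case. All numbers are
    # hypothetical stipulations, not fixed by the story.

    def edt_values(p_robot_given_jump,
                   u_rescue=1.0, u_smith_drowns=0.0, u_both_drown=-1.0):
        """Evidential expected values of jumping and of staying out.

        Robots always jump, so staying out is conclusive evidence that
        Sonny is a person: staying yields u_smith_drowns for certain.
        Jumping mixes rescue (if robot) with both drowning (if person).
        """
        v_jump = (p_robot_given_jump * u_rescue
                  + (1 - p_robot_given_jump) * u_both_drown)
        v_stay = u_smith_drowns  # P(robot | stay) = 0
        return v_jump, v_stay

    for p in (0.3, 0.5, 0.8):
        v_jump, v_stay = edt_values(p)
        verdict = "jump" if v_jump > v_stay else "stay out"
        print(f"P(robot | jump) = {p}: V(jump) = {v_jump:+.2f}, "
              f"V(stay) = {v_stay:+.2f} -> {verdict}")

With these numbers, the evidential calculation says to jump whenever P(robot | jump) exceeds 1/2--exactly the verdict the free-will argument above is meant to resist.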

How this plays out with evidential and causal decision theory, I'm not sure. But my argument goes through (if it does) even if you assume that Sonny cannot assign credences to the possibilities that he is a robot, or that he will jump into the mineshaft, or any combination thereof. All that is required is the assumption that Sonny lacks the power to rescue Smith iff he has free will. And maybe this is the limiting case of examples like the ones Andy cites, in which people have reason to believe that the people who will choose a certain course of action are those who suffer from a condition that will make them less likely to succeed at it.

I'm going to try to illustrate that point a little bit with the "psychopath button" case, which Andy says was suggested by David Braddon-Mitchell. The reason I especially like this case is that it's just like Price Day's story "Four O'Clock," which was also a Twilight Zone episode. My discussion of this case may spoil the story, so go read it first.

OK--so there's a button that will kill all psychopaths. Paul would like to live in a world with no psychopaths. However, Paul is also confident that only a psychopath would push the button, and Paul would rather live in a world with psychopaths than die.
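
For concreteness, here's how the two theories might be thought to come apart on this case, with toy numbers of my own--the case itself doesn't fix the utilities or the credences, and the whole question below is whether the conditional credence is even stable under deliberation.

    # Toy numbers for the psychopath-button case; the utilities and
    # credences are my stipulations, not part of the case.

    U_NO_PSYCHOPATHS = 1.0   # Paul pushes and isn't a psychopath
    U_STATUS_QUO = 0.0       # Paul doesn't push
    U_DEAD = -10.0           # Paul pushes and is a psychopath

    p_psycho = 0.05          # Paul's unconditional credence that he's one
    p_psycho_if_push = 0.9   # "only a psychopath would push the button"

    # Causal decision theory: pushing doesn't *make* Paul a psychopath,
    # so it weights the outcomes by the unconditional credence.
    cdt_push = p_psycho * U_DEAD + (1 - p_psycho) * U_NO_PSYCHOPATHS

    # Evidential decision theory: pushing is *evidence* of psychopathy,
    # so it weights the outcomes by the conditional credence.
    edt_push = (p_psycho_if_push * U_DEAD
                + (1 - p_psycho_if_push) * U_NO_PSYCHOPATHS)

    print(f"CDT: push = {cdt_push:+.2f}, don't push = {U_STATUS_QUO:+.2f}")
    print(f"EDT: push = {edt_push:+.2f}, don't push = {U_STATUS_QUO:+.2f}")

With these numbers the causal theory says push (+0.45 vs. 0) and the evidential theory says don't (-8.90 vs. 0)--which is why the case is offered as a counterexample to causal decision theory.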

If Paul is confident that only a psychopath would think about pushing the button, then there's no paradox; Paul can be confident that he's a psychopath, and that his plan will lead to the worst of all outcomes. (This I think is a "tickle defense.")

So it must be that Paul is confident that only a psychopath would actually push the button. Does this mean that psychopathy impels someone to push the button, in an unfree way? If so, it's like the Sonny case in reverse--Paul might as well push the button, since if he's a psychopath he's doomed anyway. Conversely, if non-psychopaths would find the option of pushing the button so repellent that they wouldn't be able to bring themselves to do it, then Paul had better not push the button; it's exactly like the Sonny case.

I'm pretty sure that there's some middle ground here, but I'm not quite sure what it is. If, for instance, not being a psychopath is a question of being moved by certain reasons, then when you're deliberating "only psychopaths would push this button," you already know whether you're being moved by those reasons--at least if the deliberation is effective. So we're back to the tickle defense: if Paul's failure to be moved by the value of (psychopaths') lives gives him evidence that he's a psychopath, then the causal decision theorist can explain why he doesn't want to push the button.
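
In the same toy terms as above (again, all numbers made up): if Paul's own deliberation has already given him the evidence, his unconditional credence that he's a psychopath is high before he acts, and the causal calculation falls in line with the evidential one.

    # The tickle defense in the same toy terms (all numbers made up):
    # deliberation has already told Paul he's probably a psychopath, so
    # his unconditional credence is high by the time he acts.
    U_NO_PSYCHOPATHS, U_STATUS_QUO, U_DEAD = 1.0, 0.0, -10.0
    p_psycho = 0.9  # updated by the "tickle" of his own deliberation

    cdt_push = p_psycho * U_DEAD + (1 - p_psycho) * U_NO_PSYCHOPATHS
    print(f"CDT with updated credence: push = {cdt_push:+.2f}, "
          f"don't push = {U_STATUS_QUO:+.2f}")
    # -8.90 vs. +0.00: the causal theory now also says don't push.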

Andy makes the point that evidential decision theorists can also advert to tickle defenses; I don't know whether such defenses would help EDT against its own counterexamples.

Posted by Matt Weiner at December 2, 2004 10:53 AM
Comments

I like the Sonny case. I'm a bit nervous about the "if he's making a real choice, the thing to do is not jump in the mineshaft" line, though. Have to think a bit about just *why* I'm nervous...

Posted by: Andy at December 2, 2004 03:45 PM

Well, I'm not as confident about it as I sound. But if I don't defend it who will?

Posted by: Matt Weiner at December 3, 2004 01:09 PM