December 15, 2007

One Box, Two Box, Red Box, Blue Box

Dan Korman posted on the Newcomb paradox a few days ago, and I just left approximately the following comment there:

At time t0 [now] it's rational to form the intention to one-box if you ever find yourself in this situation, because that can affect what's in the boxes later; at time t3 [when you're faced with the decision problem] it's rational to follow through on your previous intention, because it's rational to follow through on your previous intentions, unless you have a reason not to, and "I knew when I formed the intention that it would be rational to form it even though it would be to my advantage to break it later" isn't a reason not to. (Every part of the last sentence is controversial, I expect, especially the last.) But that reasoning may not help Sally. [Kenny Easwaran, who presented basically the same argument, was asked to consider the Newcomb problem from a third-person perspective: to think about what Sally should do if she's faced with the Newcomb problem right now.]

I also have a persistent little gripe about these kinds of problems, which is that I think they're underspecified: How does the predictor work? If the predictor is able to figure out what Sally will do because it's determined by her brain state at t1, then:

If you're a hard determinist, it doesn't really make any sense to ask what Sally should do. She has no choices.

If you're a compatibilist, then it's not utterly obvious to me that we have to say that Sally can only consider things that her action will affect. The compatibilist is happy to praise Sally for doing the right thing, even though it was determined before she was born that she would do the right thing, because it's good that Sally is the sort of person who is determined to do the right thing. Why can't the compatibilist praise Sally for one-boxing because being the sort of person who is determined to one-box gets her more money?

But there are ways to set up the problem so that it's rational to two-box, though we have to tweak the problem so that the predictor has a lower accuracy. Suppose that there are three kinds of people in the world: People who one-box without thinking hard about it, people who two-box without thinking hard about it, and people who think through these arguments as in the post. The predictor might be very good at guessing which kind of person you are -- say 95% accuracy -- but unable to figure out what reflective people do. Suppose 80% of people fall into the first two camps, and the predictor just flips a coin for the reflective 20%. Then the predictor is right around 86% of the time, which is enough to get the paradox going.
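[For concreteness, a rough back-of-the-envelope version of that arithmetic. The $1,000,000 / $1,000 payoffs are the standard Newcomb amounts, which aren't stated in the post -- they're assumed here just for illustration.]

    # Predictor's overall accuracy in the tweaked scenario, plus the usual
    # evidential expected-value comparison (payoff amounts assumed as above).
    unreflective = 0.80   # one-box or two-box without thinking hard about it
    reflective = 0.20     # think the arguments through; predictor flips a coin
    kind_accuracy = 0.95  # predictor's accuracy on the unreflective kinds

    overall = unreflective * kind_accuracy + reflective * 0.5
    print(overall)  # ~0.86

    # Evidentially, one-boxing beats two-boxing whenever accuracy p satisfies
    # p * 1_000_000 > (1 - p) * 1_000_000 + 1_000, i.e. p > 0.5005, so an
    # 86% predictor is more than enough to get the paradox going.
    p = overall
    print(p * 1_000_000)                # ~860,000 if you one-box
    print((1 - p) * 1_000_000 + 1_000)  # ~141,000 if you two-box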

But if you are reflecting on the problem, you already know that the predictor has flipped a coin for you (or guessed wrong about what you are, which amounts to the same thing). So even an evidential decision theorist would say you should two-box. (And, under these circumstances, my initial argument doesn't go through either -- once you start reflecting on the problem, you're already in the coin-flip category, and forming the intention to one-box won't help.)
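[And, under the same assumed payoffs, the flip side for someone who knows they're in the coin-flip category:]

    # Once the prediction amounts to a coin flip, your choice carries no
    # evidence about the opaque box, so the conditional expected values are:
    p_million = 0.5  # chance the opaque box holds the $1,000,000 either way
    print(p_million * 1_000_000)          # 500000.0 if you one-box
    print(p_million * 1_000_000 + 1_000)  # 501000.0 -- two-boxing wins, even evidentially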

And, I claim, the scenario I've described is less unrealistic than the perfect predictor scenario by several orders of magnitude. (Which still leaves it pretty unrealistic.)

[Previous related posts here and here. The link to Stalnaker in the second post now should go here.]

Posted by Matt Weiner at December 15, 2007 09:55 AM
Comments

Maybe all this shows is that I don't know what you mean by the word "rational", but why does the rationality of being-a-one-boxer imply the rationality of taking one box? If you believe in the awesomeness of expected value, you should *be* a one-boxer if you can, then take two boxes if you can manage to change your mind when you're in the room.

The tweaked scenario is the same deal. It still maximizes expected value to *be* an unreflective one-boxer, and to actually take both boxes.

Posted by: dr. zeuss at December 17, 2007 03:31 PM

Well, that's the burden that's meant to be carried by this:

it's rational to follow through on your previous intentions, unless you have a reason not to

and

"I knew when I formed the intention that it would be rational to form it even though it would be to my advantage to break it later" isn't a reason not to

which together imply that it's not always rational to maximize expected value, since you can rationally form an intention to do something that won't maximize your EV when you do it. (It's more like Gauthier's constrained maximization, except I'm not saying you have to be any kind of maximizer.) And I didn't argue for that.

Partly because I think it just has to be the case that it's rational to follow through on your intentions, and in particular that it can be rational to follow through on your promises even if it's not to your advantage at the time. If a theory says it isn't rational, I say so much the worse for that theory. I feel the same way about skepticism: where the skeptic says, "If you can't prove that the outside world exists by these constraints, then you're not justified in believing in the outside world," I respond, "I'm definitely justified in believing in the outside world, so your constraints can't be the constraints on justification." The EV theorist may say, "If following through on your intentions doesn't maximize EV, then it isn't rational," and I reply "Following through on your intentions is definitely rational, so rationality isn't defined by maximizing your EV."

But as I said this is controversial.

It might help, though, to think of it this way: If what you describe is what you should rationally do, then being a one-boxer won't get you the million, because the predictor will see that you'll change your mind once you're in the room. I guess that's what you were getting at by saying "if you can manage to change your mind when you're in the room"; it'd be hard to get the benefits of being a one-boxer without one-boxing.

Posted by: Matt Weiner at December 17, 2007 06:50 PM

Determinism, it seems to me, doesn't affect much. It's kind of like saying that everything is a hallucination -- okay, so all is a hallucination, but even if we accept that, the peach we eat still has that peach taste and we still have to go to work in the morning.

Likewise, if all is deterministic, then even if there is an individual entity we call a person, there is no individual entity which actually makes decisions -- it is an illusion (so which entity makes decisions then, the universe as a whole?). Fine, it's an illusion and all is determined ahead of time. Decisions just appear and all that happens just happens automatically -- but then the sense of personhood just happens and the sense of making a decision as a person just happens. I choose to eat the peach and I still get up and go to work in the morning. Not much difference.

One thing, though: on what basis does the determinist claim that her point of view is the correct one? It seems all she can say is that it is one among many points of view -- and correctness or incorrectness is moot. If knowledge and confirmation of knowledge is significantly based on deciding on one point of view versus another -- well, no such decision by any individual entity is made, in the determinist view. If knowledge is confirmed via experiment or other experience -- no such decision about correctness or incorrectness has actually been made, despite appearances. So how does the determinist show she is correct? Determinism seems to end in relativism. So what is knowledge to a determinist? What is it to know, then, and to be correct?

But ultimately, as I said before, it doesn't seem to matter much -- some of what happens in this world will continue to be characterized as freely decided, even if to a determinist it is a sort of illusion.

Posted by: Alexander Bezang at December 20, 2007 03:18 PM