March 16, 2004
Leaving Money on the Table Is Dumb, Dumb, Dumb
[Title courtesy of Brian; this post was promised about a month and a half ago.]
Allan Hazlett talks about Newcomb's problem. He's a one-boxer, as am I, and he thinks that one-boxing is associated with compatibilism and with virtue theories--both true in my case.
Here's a version of Newcomb's problem that might seem to create lots of trouble for one-boxers. I don't think it does, but I won't say why just yet.
A highly superior being presents you with two boxes on a table. One box is opaque; the other is clear, and can be seen to have $1000 in it. The being says that it can predict human behavior, and it has put $1 million in the opaque box if and only if it predicts that you will not take the clear box. You have observed millions of similar trials, and the being has predicted accurately every time.
So far that's just the classic Newcomb problem. Here's the twist: You get to open up the opaque box first, and then decide whether to take the $1000 in the clear box.
It seems that there is no justification whatsoever for not taking the $1000. You already have the contents of the opaque box, if any; why leave the $1000 on the table? But ex hypothesi the people who leave the money on the table get rich, and the ones who don't, don't, just like in the original Newcomb problem. All the arguments for one-boxing carry over.
So why do I remain a one-boxer? That's for me to know and you to find out, possibly by asking me or waiting.
Posted by Matt Weiner at March 16, 2004 10:31 AM
Well presumably in this case you will take both boxes, right? After all, the visual evidence trumps the inductive evidence that the demon got it right, so there's a higher expected utility from taking both boxes. So if you're a one-boxer because you're an evidential decision theorist, you should in this case take both boxes.
So I don't understand why you say "all the arguments for one-boxing carry over". The argument from maximising expected utility doesn't.
By the way, someone who reasons this way should be prepared to pay a *lot* of money to *not* see what's in that box. Some people think paying for ignorance is weird. Some days I'm one of them.
The reason I said all the arguments carry over is that I hadn't thought of that.... I could try to set it up so that your senses are less reliable than the induction from the past trials (I'm envisioning a tiny chance of hallucination and a LOT of past trials), but I suspect that would wreck the example.
So the evidential decision theorist will say you ought to do what, according to your evidence, will have the highest EU--even if part of the evidence is the choice you're going to make? That's not my reason for one-boxing, but perhaps I'll have to say what my reasoning is if I'm to salvage any reputation here.
Why is it that you should pay not to see what's in the box, on that reasoning? Is it that you know that if you do so, you'll take the $1000, and so you won't get the million? This reminds me of Broome's infallible Dutch Book, and I guess it should.
If you take the $1000, then the superior being had predicted you would take both boxes, and the opaque box doesn't contain $1 million. If you take only the opaque box after having seen the $1 million, then the superior being had predicted you would take just that one box. I think that's why you remain a one-boxer, but I could be gravely mistaken.
The real deal for Newcomb's problem is that the superior being is a dialetheist. Accordingly, the money exists and does not exist; moreover, he can predict what my action will be and he will not predict what my action will be. Newcomb's problem is nothing other than the problem of Schroedinger's Cat. Until it's observed, the money doesn't take on existence (or lack thereof). So, I am a one-boxer and it is not the case that I am one-boxer -- this way I defeat the dialetheist at his own game.
I'm assuming the second paragraph is a joke--it's so hard to apply the Gricean maxims around dialetheists....
Your first paragraph says what I was thinking about why the arguments should carry over. But it seems weird to one-box even after you know what the being predicted. (As Brian pointed out, if you're an evidential decision theorist you don't have that problem.) The reason I remain a one-boxer has to do with the tensions I think this scenario exposes in the original problem. More anon.
And I've also realized that my argument for one-boxing may only apply to people like me, who've been arguing for one-boxing for a while. ("But I don't know what those arguments are yet!" Exactly.)
This is barely related (though I do want to hear your one-box argument), but I had never realized how useless the word anon was. According to m-w, anon means any of "now," "soon," or "later." Great - you've eliminated "before."
Admittedly, "now" is an archaic definition, but I usually think of anon as a hoary word anyway.
Off-topic rant ended.
So Matt, why are you a one-boxer?
Note to self: never promise to explain yourself later.
The short answer is that I think it's part of rationality to be able to bind yourself to a future course of action and to follow through on that, treating your past intention as creating a reason to act. (Ted Hinchman has some work on that last part, concerning self-trust.)
I've publicly been espousing one-boxing, and have formed the intention to one-box in the unlikely event that it comes up. So, since I believe all this stuff, the alien will give me the million; and then I will rationally choose to one-box based on the intention that I have already formed for good reasons that have not changed.
For those of you who haven't already formed the intention to one-box by the time you're faced with the problem, there may be nothing for you to do.
This doesn't explain how I deal with the case I presented. Anon... (carefully ambiguous between "now" "soon" "later" and "when I feel like it if ever").