January 25, 2004

Two Envelopes: The Set-Up

This post won't say anything new; it'll just explain how the two-envelope problem works and how the paradox arises. The basic two-envelope problem is explained here. The St. Petersburg variant is here. Here is Brian's rant about how it's well-known what the faulty step is. Or, if you don't want to click all those links, read on....

The basic two-envelope problem is as follows:

God writes a rational number on a slip of paper, and writes twice that number on another slip of paper.* Each slip goes in an envelope. He sends an angel down to you, who gives you one of the envelopes at random. The angel says: "You can open that envelope and read the number, and God will give you that many units of utility.** Or you can exchange that envelope for the other envelope, and God will give you as many units of utility as the number in the other envelope." Should you switch?

Argument that you should: Suppose the number written in your envelope is x. There's a 1/2 chance that the number written in the other envelope is x/2, and a 1/2 chance that it's 2x. The expected utility [EU] of a 1/2 chance of x/2 and a 1/2 chance of 2x is (1/2)(x/2) + (1/2)(2x) = 5x/4, which is greater than x. So you should switch.

Paradoxical consequence of this argument: A symmetrical argument, taking x to be the number in the other envelope, tells you that switching will make you worse off. Both arguments can't be right, but they're exactly parallel (so both must be wrong).
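To see the symmetry concretely, here's a minimal Python sketch (my addition, using a hypothetical value of 4 in your envelope) that runs the naive EU calculation from each envelope's point of view. Both calculations say "switch":

```python
def naive_switch_eu(x):
    """Naive EU of switching when your envelope shows x: assumes the other
    envelope holds x/2 or 2x with probability 1/2 each."""
    return 0.5 * (x / 2) + 0.5 * (2 * x)

x = 4.0                      # hypothetical value in your envelope
print(naive_switch_eu(x))    # 5.0 > 4, so "switch"

y = 8.0                      # the same reasoning, run from the other side
print(naive_switch_eu(y))    # 10.0 > 8, so "switch back"--both can't be right
```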

*On the original formulation, each rational number is equally likely--but Brian pointed out that there's no way to define a probability distribution on which that's true. Eventually I hope to say something about what this means for decisions under risk vs. decisions under uncertainty.
**I'm not happy with the idea of utility units, and I'm really not happy with the idea of unbounded utilities--but let that pass. It may come up again in a later post.

The St. Petersburg variant, due to David Chalmers, gives us a situation where we actually can define a probability distribution, but the same problem arises. A St. Petersburg bet works as follows: A coin is flipped until it comes up heads. If it comes up heads on the nth flip, you get 2^n utility units. The EU of the St. Pete is: a 1/2 chance of 2 + a 1/4 chance of 4 + a 1/8 chance of 8 + ... = 1 + 1 + 1 + ..., i.e., infinite.
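As a sanity check on that series (a sketch of mine, not Chalmers's presentation): each term contributes exactly 1, so the partial sums grow without bound.

```python
# Partial sums of the St. Petersburg expectation:
# each term (probability 2^-n) * (payoff 2^n) equals 1, so N terms sum to N.
def st_pete_partial_eu(max_flips):
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

for n_flips in (10, 100, 1000):
    print(n_flips, st_pete_partial_eu(n_flips))   # 10.0, 100.0, 1000.0
```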

So the St. Petersburg two-envelope problem is as follows:

God runs two St. Petersburg bets and writes one result in each envelope. The angel randomly gives you one envelope, as above, and offers to let you switch, as above. Should you?

Argument that you should: Suppose the number written in your envelope is 2^n. No matter what that value is, the EU of the other envelope is higher, since it's infinite and 2^n is finite. So no matter what you have, you can expect to gain by switching. So (if you're an EU maximizer) you should switch.
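Here's a sketch of that reasoning (assuming, as I read the setup, that the two runs are independent, and using a hypothetical value of 2^5 in your envelope): the other envelope's EU series diverges, so its partial sums eventually beat any fixed 2^n.

```python
def other_partial_eu(max_flips):
    # Truncated EU of the other, independent St. Petersburg run:
    # each term (2^-k) * (2^k) contributes exactly 1, so the sum equals max_flips.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))

n = 5                  # hypothetical: your envelope shows 2^5 = 32
mine = 2 ** n
print(other_partial_eu(mine + 1) > mine)   # True--and this holds for any n
```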

Paradox: Exactly as in the original two-envelope case. An exactly symmetrical account tells you that you'd be better off with the envelope you've already got.

What goes wrong? Chalmers and Brian point out that each argument seems to use the following principle (my formulation):

Take a partition P of the probability space. Suppose that, for every cell C of the partition, strategy S yields a greater EU than strategy T given that you're in cell C. Then strategy S is preferred over strategy T.

(Here P is the partition yielded by dividing probability space up according to the number in your envelope; S is switching envelopes; T is standing pat.)

This principle is guaranteed to work when P is a finite partition. But when P is an infinite partition, and in particular when strategies S and T have an infinite EU, the principle won't always work.
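One way to see the failure (a sketch under an assumption I'm adding: truncate each St. Petersburg run at N flips, paying 0 if no head appears): in the truncated game the EU of either envelope is N, so the per-cell gain from switching is N - 2^n--positive in the low cells, negative in the top ones--and the probability-weighted gains cancel exactly. The infinite game's "gain in every cell" is what remains when the compensating top cells get pushed off to infinity.

```python
N = 6   # truncation point (my assumption, for illustration): at most N flips, payoff 0 if no head

# Truncated St. Petersburg distribution: payoff 2^n with probability 2^-n, else 0.
cells = [(2 ** n, 0.5 ** n) for n in range(1, N + 1)] + [(0, 0.5 ** N)]
eu = sum(prob * value for value, prob in cells)   # = N = 6.0

print("EU of either envelope:", eu)
for value, prob in cells:
    # The other envelope is independent, so its EU given this cell is just eu.
    print(f"your envelope = {value:2d}: switching gains {eu - value:+.2f}")

# The probability-weighted gains from switching cancel exactly:
print(sum(prob * (eu - value) for value, prob in cells))   # 0.0
```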

So it might seem that we should throw out our decision principles whenever we're faced with infinite partitions of probability space. But there are cases in which it seems like the decision should be obvious, even though there's an infinite partition. I'll talk about those in the next post.

Posted by Matt Weiner at January 25, 2004 03:21 PM
Comments

Hey, can you explain why people think the partition-dominance principle is so crucial here? Suppose in either case I open the envelope and see the contents. Then, doesn't the reasoning go through just as before? It seems to me that any mistake introduced by the partition-dominance assumption can't be the root of the matter.

Posted by: Mark at January 26, 2004 01:00 PM

I'm going to have to think about that, especially for the traditional two-envelope case. In fact, I'm going to have to think about it for so long that I won't be able to say anything cogent right now.... If someone comes along with something reasonable to say I may steal it. (I think the fact that it's impossible to define a distribution in which every possible rational is equally likely may be causing some problems here.)

For the St. Petersburg case, I think EU maximization does yield the paradoxical result that it's irrational to switch before you open the envelopes, but it's rational to switch after you open the first envelope, no matter what is in the first envelope. The EU of the second St. P is infinite, so it's rational to give up the first St. P for the second, no matter what the result of the first St. P is.

(I think that's what's driving the end of this very long year-old post by Brian, which I just stumbled across.)

I'd like to blog about this paradox sometime--it seems to me related to the paradox of the bet that increases in EU the longer you wait--so that it's rational never to take it, and you never collect (blogged by Chris Bertram here). But first I have to come up with something sensible to say....

Posted by: Matt Weiner at January 26, 2004 06:52 PM

I think if you can have an infinite number of trials (coin tosses), which is required for this paradox to work, then you could just as easily achieve a truly random distribution of values across the rational numbers. So the fact that the latter is impossible makes me suspect that the former is as well. But maybe the infinities have different cardinality, and maybe it matters...
Also, utility units are an economic model construct whose reality can easily be called into question. Introducing them makes the whole thing look shaky (but I see you addressed this in a later post...)

Posted by: bbartlog at January 27, 2004 07:34 AM

bb--
It's mathematically possible to define the probabilities that result from an unlimited number of coin tosses--a 1/2 chance you get heads first, a 1/4 chance you get heads second, etc. But, as probability theorists define it, it's not even mathematically possible to define a probability distribution on which each rational number is equally likely. That's because probabilities are supposed to add up across countably infinite sets--there's no non-zero probability you can assign each rational number, and assigning zero probability obviously doesn't work either.
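A quick numerical contrast of those two points (my sketch): the coin-toss probabilities sum to 1, while a constant probability over countably many outcomes sums to 0 or grows without bound.

```python
# The coin-toss probabilities form a legitimate distribution:
print(sum(0.5 ** n for n in range(1, 60)))   # ~1.0: the series converges to 1

# But a "uniform" distribution over countably many outcomes is impossible:
p = 1e-9                                     # any constant positive probability
print(sum(p for _ in range(10 ** 7)))        # ~0.01, and it keeps growing with more outcomes
# while p = 0 makes every total 0. Neither way can the total be 1.
```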

This leads me to think that Mark's point makes the two-envelope case look much trickier. The way the argument is usually phrased, you say "Suppose there's 4 utils in your envelope. It's just as likely that the other one has 2 as that it has 8." But, if you can't actually define a probability distribution for the choice of the pair [x, 2x] that gets written in the envelopes, then I don't think you can actually say that.

To spell it out, here's one cell of the possibility space, comprising two cases: the case in which I have 4 and the angel has 8, and the case in which I have 4 and the angel has 2. The paradoxical argument assumes that, conditional on your being in this cell, each case is equally likely--so you can calculate the EU of switching (conditional on your being in this cell) as (1/2)(8) + (1/2)(2) = 5.
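Granting that 50/50 assumption for a moment, the arithmetic is just this one-line sketch:

```python
eu_switch = 0.5 * 8 + 0.5 * 2   # = 5.0, if each case in the cell really gets probability 1/2
eu_stay = 4
print(eu_switch > eu_stay)       # True--but only if that 50/50 split is legitimate
```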

Now, I don't think that the conditional probability [the angel has 8 / we're in this cell] actually can be defined. Which means you can't define the EU of switching, given that you're in this cell.

What seems to be going on here is an illegitimate use of the Indifference Principle--which says, "If there are n possibilities, and you have no idea which obtains, assign each a probability of 1/n." Brian has made some criticisms of the Indifference Principle (which I don't understand right now). But this might be a new one.

Now, I don't think this harms my ultimate point--the Chronological Ordering Principle still works as a guide to when you can calculate those conditional probabilities. But these are tricky waters.

(I'm going to post Mark's question and this comment separately, since I've gone on for so long.)

Posted by: Matt Weiner at January 27, 2004 06:40 PM