Unfortunately, they're already asking me questions I can't answer. In the comments to my first two-envelope post, Mark asks:

*Hey, can you explain why people think the partition-dominance principle is so crucial here? Suppose in either case I open the envelope and see the contents. Then, doesn't the reasoning go through just as before? It seems to me that any mistake introduced by the partition-dominance assumption can't be the root of the matter.*

Actually, I can answer his first question--I probably can't explain this. But here's my response (patched together from those comments).

First, for the St. Petersburg case, I think EU maximization *does* yield the paradoxical result that it's irrational to switch before you open the envelopes, but it's rational to switch after you open the first envelope, no matter what is in the first envelope. The EU of the second St. P is infinite, so it's rational to give up the first St. P for the second, no matter what the result of the first St. P is. [This is a paradox I hope to say more about eventually.]
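As a quick check on the divergence claim, here's a minimal sketch (assuming the formulation in which the payoff is 2^k utils with probability 1/2^(k+1), which matches the table in the comments below): every outcome contributes exactly 1/2 to the expectation, so the partial sums grow without bound.

```python
# St. Petersburg gamble: payoff 2**k with probability 1/2**(k+1).
# Each term of the expectation is (2**k) * (1/2**(k+1)) = 1/2 exactly,
# so truncated expected values grow linearly and never converge.

def st_petersburg_partial_ev(n_terms):
    """Expected value truncated to the first n_terms outcomes."""
    return sum((2 ** k) * (1 / 2 ** (k + 1)) for k in range(n_terms))

for n in (10, 100, 1000):
    print(n, st_petersburg_partial_ev(n))  # n/2 -- grows without bound
```

Since the truncated expectation is n/2, no finite amount already in hand can beat the EU of a fresh St. Petersburg gamble, which is what makes switching look good no matter what you see.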

(I think that's what's driving the end of this very long year-old post by Brian, which I just stumbled across.)

But for the traditional two-envelope problem, I think we have a problem. I don't think we want it to be the case that, once you open the first envelope, you have a reason to switch. Now, one issue here is the mathematical impossibility of defining a probability distribution on which every rational number is equally likely--probabilities are supposed to be countably additive, which makes it impossible to assign the same probability to infinitely many different values. [Afterthought: Anyone know if non-standard analysis can do any work here?]
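A trivial sketch of the countable-additivity point: if every one of infinitely many values got the same probability p, the total mass would be either 0 or unbounded, never 1.

```python
# A "flat" distribution on a countably infinite set would give each value
# the same probability p. Summing over the first n values gives n * p:
# with p = 0 the total mass never reaches 1, and with any p > 0 the mass
# eventually exceeds 1 and keeps growing. Either way, no flat distribution.

def mass_of_first_n(p, n):
    """Total probability mass over the first n equally weighted values."""
    return p * n

print(mass_of_first_n(0.0, 10 ** 9))   # 0.0 -- total mass stays at zero
print(mass_of_first_n(1e-6, 10 ** 7))  # already past 1, and still growing
```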

This leads me to think that Mark's point makes the two-envelope case look much trickier. The way the argument is usually phrased, you say "Suppose there's 4 utils in your envelope. It's just as likely that the other one has 2 or that it has 8." But, if you can't actually define a probability distribution for the choice of the pair [x, 2x] that gets written in the envelope, then I don't think you can actually say that.

To spell it out: Here's one cell of the possibility space, comprising two cases: the case in which I have 4 and the angel has 8, and the case in which I have 4 and the angel has 2. The paradoxical argument assumes that, conditional on your being in this cell, each case is equally likely--so you can calculate the EU of switching (conditional on your being in this cell) at 5.
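The arithmetic inside that cell, under the contested equiprobability assumption, is just:

```python
# EU of switching, conditional on the cell where I hold 4, under the
# (contested) assumption that the other envelope is equally likely to
# hold 2 or 8.

mine = 4
eu_of_switching = 0.5 * 2 + 0.5 * 8
print(eu_of_switching)         # 5.0
print(eu_of_switching > mine)  # True -- hence the pull toward switching
```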

Now, I don't think that the conditional probability [the angel has 8/we're in this cell] actually can be defined. Which means you can't define the EU of switching, given that you're in this cell.

What seems to be going on here is an illegitimate use of the Indifference Principle--which says, "If there are n possibilities, and you have no idea which obtains, assign each a probability of 1/n." Brian has made some criticisms of the Indifference Principle (which I don't understand right now). But this might be a new one.

Now, I don't think this harms my ultimate point--the Chronological Ordering Principle still works as a guide to when you can calculate those conditional probabilities. But these are tricky waters.

Comments

Thanks, Matt. I think you're right about the St. Petersburg case. There is no well-defined expectation of the profit in switching without opening the envelope (which is already odd; intuitively shouldn't it be 0?). After opening the envelope, no matter what you see the expected profit is infinite (which again can seem troubling).

Your point about the two-envelope case is very much what I was wondering about: the suggestion that the key problem isn't in the dominance principle but in the assumption that each cell has the feature the dominance principle relies on---something like: equiprobability of x/2 and 2x. Apparently there is no distribution over the possible contents of the envelopes which isn't biased towards the lower values, so this equiprobability assumption can look questionable. (There are, however, related gambles in which the expectation of trading is always positive despite a decreasing distribution---I take it that Broome's and Chalmers' cases are like this. Perhaps these are indeed best treated like the original St. Petersburg problem above.)
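For concreteness, here is one reconstruction of a Broome-style case (a sketch with an assumed prior--I haven't checked it against Broome's or Chalmers' exact formulations): put probability 2^n/3^(n+1) on the pair (2^n, 2^(n+1)), for n = 0, 1, 2, .... The prior decreases in n, yet the expected gain of switching is positive at every observed amount.

```python
from fractions import Fraction

# Assumed Broome-style prior: the envelopes contain (2**n, 2**(n+1))
# with probability 2**n / 3**(n+1), n = 0, 1, 2, ...  (sums to 1).

def pair_prob(n):
    return Fraction(2 ** n, 3 ** (n + 1))

def expected_gain_given(n):
    """Expected gain of switching after seeing 2**n in your envelope (n >= 1).

    Seeing 2**n is consistent with pair n (you hold the smaller amount)
    or pair n - 1 (you hold the larger); each pair puts 2**n in your
    envelope with probability 1/2, so those halves cancel in the odds.
    """
    w_small = pair_prob(n)       # you hold 2**n, the other is 2**(n+1)
    w_large = pair_prob(n - 1)   # you hold 2**n, the other is 2**(n-1)
    gain = (w_small * (2 ** (n + 1) - 2 ** n)
            + w_large * (2 ** (n - 1) - 2 ** n))
    return gain / (w_small + w_large)

for n in range(1, 6):
    print(n, expected_gain_given(n))  # positive at every observed value
```

Working the algebra, the expected gain given an observed 2^n comes to 2^(n-1)/5: always positive, even though the prior on the pairs is strictly decreasing.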

But the lack of a flat distribution seems already very striking, if our credences are supposed to be probabilities. Surely there is such a thing as knowing only about a certain value that it is, say, a rational number, and having no bias as to its value. Is this an illusion, or are probability distributions unable to model certain cases of uncertainty?

If such uncertainty is possible, then it seems natural, on finding $4 in your envelope, to assign 50/50 odds to $2 and $8 in the other. Is this a mistake? Does it require sneaking in assumptions about an underlying distribution? Maybe.

But supposing it's not a mistake, should one trade based on the $5 expectation? That seems reasonable. But then should one switch before opening the envelope, in the knowledge that this reasoning would be available were one to open it? Surely not, on pain of inconsistency (because we can also imagine opening just the second envelope), but why not? Perhaps here too we need to point to the ill-definedness of the expectation of profit.

Posted by: Mark at January 28, 2004 03:32 PM

The point about the flat distribution is true, and relates to something that's been at the back of my mind with respect to all this. That is--What we're trying to get at when we say "Every rational number is equally likely" is really that we're in a situation of uncertainty rather than risk. There it seems (to me!) like your second alternative is right--we don't know how to model certain kinds of uncertainty using probability distributions.

I'm inclined to say that that means that, after you find $4 in the first envelope, you still shouldn't assign 50% credence to $8 in the other envelope and 50% to $2 in the other envelope. It's just uncertain. This means that you've gone from 50% credence that you got the big envelope to uncertainty about the chance you got the big envelope, but that seems like the least paradoxical consequence to me.

(Sidebar 1: Part of what I'm trying to get at in working on the Two-Envelope problem is to rescue some decision principles in the face of uncertainty. The results may wind up as obvious things that people already know--I don't know.)

(Sidebar 2: Brian has had a lot to say about the difference between risk and uncertainty here. I more or less stole the point about going from 50% credence to uncertainty from his point about the roulette player, if I'm understanding it aright.)

Posted by: Matt Weiner at January 28, 2004 06:42 PM

Excellent point about the EU of switching in the St. P case being undefined. I was just about to say that it should be, but I realized that it looks like a series where the negative and positive terms both diverge (as Chalmers discusses). That is, you have:

{1, 1} (1/4 chance); {1,2} (1/8); {1,4} (1/16); etc.

{2, 1} (1/8 chance); {2, 2} (1/16); {2, 4} (1/32); etc.

{4, 1} (1/16 chance); {4, 2} (1/32); {4, 4} (1/64); etc.

etc. etc. etc. etc.

The curly brackets are meant to be ordered pairs: {first envelope, second envelope}.

If you add up the rows first you get an EU of positive infinity for switching; if you add up the columns first you get an EU of negative infinity; and (this is a theorem, I think) you can group them in such a way as to make the sum come out any way you like.
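The order-dependence can be made concrete with a sketch of the table above: with each envelope independently worth 2^k with probability 1/2^(k+1), the term for cell (i, j) is the profit of switching times the cell's probability. Truncated row sums grow positively without bound, column sums negatively.

```python
# Expected-profit terms for switching: first envelope 2**i, second 2**j,
# cell probability 1/2**(i+1) * 1/2**(j+1), profit 2**j - 2**i, so the
# term for cell (i, j) is (2**j - 2**i) / 2**(i + j + 2).

def term(i, j):
    return (2 ** j - 2 ** i) / 2 ** (i + j + 2)

def row_sum(i, n):
    """Row i summed across the first n columns."""
    return sum(term(i, j) for j in range(n))

def col_sum(j, n):
    """Column j summed down the first n rows."""
    return sum(term(i, j) for i in range(n))

for n in (10, 40, 160):
    print(n, row_sum(0, n), col_sum(0, n))
# Row sums head toward +infinity, column sums toward -infinity; summing
# the full n-by-n square instead cancels by antisymmetry. Order matters.
```

So "the EU of switching" depends entirely on the order in which you add up the cells, which is exactly the sense in which it's undefined.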

Well, I believe I've come up with an argument (under "My first shot") that says you should be indifferent about switching even though the EU of switching is undefined. That's an interesting result.

Posted by: Matt Weiner at January 28, 2004 06:53 PM