I started this blog in part to work through some stuff about the two-envelope problem (my stab at a theory, some more thoughts, some later stuff, and the links will give you more posts). That trailed off eventually, but the problem has popped up again on Crooked Timber and TAR--John Quiggin, Brian W., Brian on TAR (same post, different comments). John says that the problem goes away if you create a probability distribution with a finite mean, Brian agrees but points out that we can create a well-defined distribution without a finite mean.

In the comments Bill Carone takes the line that we're impermissibly messing with infinity, and that if we would just realize this, the problem would go away. I disagree; Bill's got infinity wrong.

In his 5:18 comment here Bill observes that one step in the argument regroups the terms in a divergent series, and that this yields paradox because there's no guarantee that you'll get the same sum when you change your groupings. (Say the series a_{1} - b_{1} + a_{2} - b_{2} ... diverges. It may be, for instance, that a_{1} + the sum of all (a_{i + 1} - b_{i}) converges to a different limit than does the sum of all (a_{i} - b_{i}).)

That's true, an accurate diagnosis of the problem, and exactly what Dave Chalmers said.
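A minimal numeric sketch of the regrouping point (mine, not Bill's or Chalmers's), taking a_{i} = b_{i} = 1 so the ungrouped series is Grandi's 1 - 1 + 1 - 1 ...: the same terms, grouped the two ways described above, give partial sums that settle on different values.

```python
# Terms of the divergent series a_1 - b_1 + a_2 - b_2 + ... with a_i = b_i = 1,
# i.e. Grandi's series 1 - 1 + 1 - 1 + ...

def grouped(n):
    # partial sums of (a_1 - b_1) + (a_2 - b_2) + ...: each group is 1 - 1 = 0
    total, sums = 0, []
    for _ in range(n):
        total += 1 - 1
        sums.append(total)
    return sums

def regrouped(n):
    # partial sums of a_1 + (a_2 - b_1) + (a_3 - b_2) + ...:
    # a leading 1, then groups of -1 + 1 = 0
    total, sums = 1, [1]
    for _ in range(n - 1):
        total += -1 + 1
        sums.append(total)
    return sums

assert grouped(5) == [0, 0, 0, 0, 0]      # this grouping heads to 0
assert regrouped(5) == [1, 1, 1, 1, 1]    # same terms, heads to 1
```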

Bill continues:

take the standard numbered ball and urn example:

1. Put balls labelled 1 through 10 in, take the ball labelled 1 out.

2. Put balls labelled 11 through 20 in, take the ball labelled 2 out.

…

Some will say that, in the limit, there are zero balls in the urn, since each ball is taken out. This is incorrect.

Clearly, if you take the limit, you see that it diverges, as the number of balls increases without limit.

If you use hyperreals, you get the same (correct) answer; if you do this N times, where N is an infinite integer, you still have 9N balls left in the urn. All the finite numbered balls are out, but 9N infinite numbered balls are in. Again, no paradox.

I'm afraid I can't see what Bill is talking about here. Let's think of it synchronically instead of diachronically. Suppose that you define two sets of sets of integers as follows (i ranging over positive integers):

A_{i}= {10i - 9, ..., 10i} for all i.

B_{i}= {i} for all i.

Let D_{i} be the number of members of A_{1} U A_{2} U ... U A_{i} - (B_{1} U ... U B_{i}). For any N, D_{N} = 9N, so as N goes to infinity so does D_{N}.

How many integers are there in the union of all A_{i} but not in the union of all B_{i}? Zero. Both unions comprise all positive integers.
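Here is a quick finite-stage check of both readings (a Python sketch of my own): counting the difference at each finite stage gives 9N, while the full unions coincide on any initial segment you care to examine.

```python
def D(N):
    # members of A_1 U ... U A_N not in B_1 U ... U B_N, counted directly
    A = set()
    for i in range(1, N + 1):
        A |= set(range(10 * i - 9, 10 * i + 1))   # A_i = {10i - 9, ..., 10i}
    B = set(range(1, N + 1))                      # B_1 U ... U B_N = {1, ..., N}
    return len(A - B)

# Difference first, limit second: the count is 9N at every stage, so it diverges.
assert all(D(N) == 9 * N for N in (1, 10, 100))

# Limit first, difference second: every element of A_1 U ... U A_N is covered
# once we take 10N of the B_i, so in the limit the two unions coincide.
N = 50
A_union = set(range(1, 10 * N + 1))   # A_1 U ... U A_N
B_union = set(range(1, 10 * N + 1))   # B_1 U ... U B_{10N}
assert A_union - B_union == set()
```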

Nor does taking the union of all A_{i} require us to throw in any hyperreal integers. We have taken the union of an infinite number of sets, but it would be very hard to do mathematics without such operations. And I defined an infinite number of sets above, but it would be completely impossible to do mathematics without doing that.

This reminds me of Bill's comment here, about a problem that's similar to the ball-and-urn problem (Bill Gates is presented with an infinite number of bets, each of which has positive utility but that when summed together guarantee him a loss of $1):

The last step is invalid, when they say “Bill Gates accepts [all the bets].” This assumes that he has an actual infinite number of deals before him. Mathematics should not use infinities as actual quantities.

But it's no more impossible to present an infinite number of bets than it is to define an infinite number of sets, the way I just did. You can accept an infinite number of bets, too, by saying "I'll take 'em all." In fact, I hereby make an infinite number of conditional commitments: If someone leaves an integer as the first comment to this thread, I will leave twice that integer as my first comment to that thread. I have committed myself to all of the following: If you write i, I will write 2i. Is there a problem with that?

Now, Bill may wish to lean on the synchronic vs. diachronic distinction. As far as the ball-urn paradox goes, you might say that it doesn't make sense to say, "Put in balls 10i - 9 through 10i and take out ball i. Do this for all i. How many balls are left when you're done?" Because you might wonder whether it makes sense to talk about when you're done with an infinite number of tasks.

I don't think this will help, though. Say that you stipulate that the first insert ten-remove one takes place at t=1/2, the second at t=3/4, the third at t=7/8... For every t, it's well defined exactly which balls are in the urn. As t approaches 1, the number of balls in the urn increases without limit. And at t=1, there are no balls in the urn.
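That timeline can be read off directly. The sketch below is my toy model of the stipulation above, with step n (put in balls 10n-9 through 10n, take out ball n) happening at t = 1 - 2^{-n}:

```python
import math

def steps_done(t):
    # step n (put in balls 10n-9..10n, take out ball n) happens at t = 1 - 2**-n
    if t >= 1:
        return math.inf          # all infinitely many steps are complete
    n = 0
    while 1 - 2**-(n + 1) <= t:
        n += 1
    return n

def urn(t):
    # after n steps the urn holds exactly balls n+1 .. 10n
    n = steps_done(t)
    if math.isinf(n):
        return set()             # ball j left the urn at t = 1 - 2**-j < 1
    return set(range(n + 1, 10 * n + 1))

assert urn(0.25) == set()                  # nothing has happened yet
assert len(urn(0.8)) == 18                 # two steps done: balls 3..20
assert len(urn(1 - 2**-20)) == 9 * 20      # the count grows without limit
assert urn(1) == set()                     # ... yet at t = 1 the urn is empty
```

For every t < 1 the state is finite and well defined; the emptiness at t = 1 comes from each ball's removal time, not from any limit of the counts.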

Bill complains here that Brian is assuming that an infinite number of coin flips has been completed and then using that infinity in his calculations. But Brian isn't actually using any infinite numbers. He's using a well-defined positive number obtained by a certain procedure--flip a coin until it comes up tails--which intuitively yields a certain probability distribution. That procedure isn't even a supertask except in the probability 0 case in which the coin never comes up tails. And the objection that it's physically impossible to flip a coin that fast seems as much beside the point as John Quiggin's comment that the problem is theologically inaccurate.

Bill may have a point that you just can't work with probability distributions defined over infinite sets, except as limits of finite sets. Part of the point of working through these paradoxes is to see whether and how we can extend our intuitive conceptions of probability theory. The problem I discuss here is meant to put some pressure on those intuitions--we have some potential global weirdnesses created by the two-envelope problem, but we also have a local question (should I take another one-flip bet?) that seems easy to resolve. It may be that whenever we have infinite probability distributions things will break down. But we won't be able to show that just by making the sign of the cross whenever the notion of infinity comes around.

(Why am I ranting on so? Basically, because Bill's comments assume that every philosopher who's ever worked on this is an idiot who doesn't understand how to take a limit to infinity, and because he won't stop saying this. I used to be a bit of a math jock myself, Brian knows a bit about probability theory himself, and the collective wattage of the philosophy profession is higher than you might think.)

Posted by Matt Weiner at May 20, 2004 01:38 PM

Comments

Matt, hope I haven't insulted you or anything; if so, I apologize.

Short answer: infinite sets are fine, as long as you treat them as particular well-behaved limits of finite sets. Gauss agrees, Cantor disagrees (I think). I am simply showing that Gauss's way avoids paradoxes.

If the problem doesn't specify how to take the limits, and different limits lead to different results, then the problem is ill-posed. As modelling advice, take every limit as late as possible to avoid counterintuitive results.

"Bill's comments assume that every philosopher who's ever worked on this is an idiot who doesn't understand how to take a limit to infinity, "

Every philosopher is not an idiot. I may be.

I am also not saying that you don't know how to take a limit; I am disagreeing with your methods. For example:

"Let Di be the number of members of A1 U A2 U ... U Ai - (B1 U ... U Bi). For any N, DN = 9N, so as N goes to infinity so does DN."

Here, you take the difference before the limit (lim (UAi - UBi))

"How many integers are there in the union of all Ai but not in the union of all Bi? Zero. Both unions comprise all positive integers."

Here you take the limit before the difference (lim (UAi) - lim (UBi))

This is the source of this paradox. In your second case, if you take the limit after, by asking how many integers are there in the union of Ai but not Bi as i goes to infinity, the number of integers diverges.

The limit of a function isn't necessarily equal to a function of the limit. This isn't a paradox, is it? They are different mathematical models.

You might ask, then how should we model this problem? I always take any limit as late as possible (this is my take on Gauss's dictum, from Edwin Jaynes). However, in general, if a problem can be accurately modelled in two ways that give two answers, the problem is ill-posed; you must define any infinite set as a limit of finite sets.

But what if we actually did the experiment? Then we could figure it out, right? And how we modelled it wouldn't matter. Well, kind of.

"Say that you stipulate that the first insert ten-remove one takes place at t=1/2, the second at t=3/4, the third at t=7/8... For every t, it's well defined exactly which balls are in the urn. As t approaches 1, the number of balls in the urn increases without limit. And at t=1, there are no balls in the urn."

I disagree.

Your "infinite number of steps" issue is clouding the issue of how to take the limit; there are two ways, and each gives different answers. That is why I made the move to hyperreals, which shine some light on this particular problem.

Here, we go ahead and do an infinite number of steps. Call that infinite integer N. Now, in the urn, no ball has a finite integer on it. However, 9N balls have infinite integers on them. So the urn is not empty. This supports the idea that the correct way of taking the limit is to take it at the end.

If you have a problem visualizing an infinite integer, do this easy experiment: in a 1/2 second, write the number 1; in a 1/4 second, write the number 2; in a 1/8 second, write the number 3, ... In a mere second, you will be looking at the infinite integer N. You can see that at least the infinite integer N+1 will still be in the urn, right?

Do I think this experiment works? Of course not, and it never could in the real world. But if it did, then this would be the result.

"We have taken the union of an infinite number of sets, but it would be very hard to do mathematics without such operations."

Hmmm... my understanding of Gauss's dictum is that when we take an infinite union, we should treat it as taking N unions and seeing what happens when N increases without bound. This is how an infinite sum is defined, as a limit of partial sums. Is an infinite union defined as a limit of "partial unions"? I can't lay my hands on a source. Do you have counterexamples?

"If someone leaves an integer as the first comment to this thread, I will leave twice that integer as my first comment to that thread. I have committed myself to all of the following: If you write i, I will write 2i. Is there a problem with that?"

I don't think that this is a problem, since you can safely treat your "infinite number of conditional commitments" as a limit of a finite number of conditional commitments "If you put any i from -M to N, I will write 2i" and increase M and N without bound. Since any way you take the limit you get the same f(i) for any i (right?), it doesn't matter if you take the shortcut and say "any i from minus infinity to plus infinity."

With the "accepting an infinite number of bets" problem, taking the limit gave a non-paradoxical result (What is the certain equivalent of taking N bets, as N increases indefinitely?)

Problems enter when different methods of taking the M,N limit give different results (say, summing i where i goes from minus infinity to plus infinity: is it zero? or does it diverge?). This is the typical way to produce "infinity" paradoxes:

1) Start with an infinite set, without specifying the limiting procedure, then

2) ask a question that depends on the limiting procedure.

Note that, in the math texts I've seen, the definitions bend over backwards to avoid this problem. For example, they define "summing from minus infinity to plus infinity" as "sum from minus infinity to zero, then add the sum from zero to plus infinity." They have specifically stated where and how to take the limits.
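To illustrate with the sum-of-all-integers example just mentioned, here is a small Python check (mine) that different ways of coupling the two limits give different answers: symmetric partial sums all vanish, while skewed partial sums grow without bound.

```python
def sym_sum(N):
    # sum of i from -N to N: every i cancels against -i
    return sum(range(-N, N + 1))

def skew_sum(N):
    # sum of i from -N to 2N: the tail N+1 .. 2N is left uncancelled
    return sum(range(-N, 2 * N + 1))

assert all(sym_sum(N) == 0 for N in (10, 100, 1000))   # this route gives 0
assert skew_sum(10) == 155                             # = 11 + 12 + ... + 20
assert skew_sum(100) == 15050                          # grows without bound
```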

The infinite set masks the fact that the problem is ill-posed; you must specify the limiting procedure used to create the infinite set.

"Bill may have a point that you just can't work with probability distributions defined over infinite sets, except as limits of finite sets."

Again, problems only occur when the infinite sets aren't treated as specified limits, and even then they don't occur all the time. There are many times that "plugging in infinity" works out fine, and it is useful to show that in these cases it works, and in these other cases it doesn't (if nothing else, plugging in infinity is often much easier to do than taking limits).

However, when plugging in infinity gives a paradox, we should redo the problem using limits and see if the paradox disappears (which is what has happened in all cases I have seen).

The problem I have been having with Brian's posts and papers is that he is claiming

- there is one answer to the problem and

- it is paradoxical.

He often then goes on to say something like "Therefore, you can't use the principle of indifference because it leads to this paradox."

My position is that, as long as you specify the limiting procedure used to create the infinite set, there will be no paradoxes, and I have had to show this over and over simply because he keeps repeating it over and over, claiming to have overcome my objection (if you look at his Dr. Evil paper, or the Elga et al. paper he posted, or these St. Petersburg-type paradoxes, you will see the same issue over and over).

I guess I shouldn't care, except that he is claiming things about decision theory that, if true, worry me. I therefore want to figure out if he has found a real problem or just an illusory one based on infinite sets. Up til now, he hasn't been able to produce a paradox that hasn't disappeared when I have modelled it as a limit.

Bill, sorry for the intemperate nature of this post--we were, as you mentioned elsewhere, talking past each other, and I was finding it frustrating.

Last thing, then first thing, then maybe the middle.

I'm not sure about Brian's Dr. Evil paper, and about the results for decision theory; I don't have much to say about the Indifference principle. I don't think that Brian does claim there is one paradoxical answer to St. Petersburg-type problems; my involvement in this issue came about because of the post where he claimed that there was a definite non-paradoxical answer to the two-envelope problem, and I think you agree with that analysis (it's basically that contained in the Chalmers paper, where he talks about reordering the terms in a divergent series, which you've also cited). Usually there's no question that the finite cases are non-paradoxical. The problem is that there are times when it seems as though the answer in the infinite case is obvious, and we want to have principles that generate the right answer without generating inconsistencies.

First thing: I think there's a philosophical difference here between Gauss and Cantor. Finitism is an extremely controversial position in philosophy of math; and it does seem to me as though mathematics has to deal with infinite sets a lot. Perhaps you can use well-behavedness to treat them all, but it seems to me that you'll have to rule out some functions that can be well-defined.

Let's take the ball and urn problem, for instance. I define this function from the real line to sets of integers:

If t is outside [-1, 0), f(t) = {},

If t is in [-2^{-i+1}, -2^{-i}), f(t) = {i, ..., 10i}.

That's a perfectly well-defined function, and it is also the only function (I think) that meets the following criteria:

For any t outside [-1, 0), f(t) = {}; otherwise, j is in f(t) iff j is in {10k-9, ..., 10k} and t is in [-2^{-k+1}, -2^{-j}).

Which intuitively should be the same as saying that at geometrically decreasing intervals you put in ten balls at the end and take out one from the beginning. I just don't see why it'd be necessary to take a limit or to invoke hyperreals to explain this.

In the example you gave about writing down the integer, it seems to me that the answer "what's on the page after a second?" just isn't defined; every number that has been written down has been erased later. The ball-and-urn problem is actually more tractable, since the question is whether any of the balls are left in the urn; if every ball that gets put in gets taken out later, then there are no balls left in.

I don't know hyperreals, and they sound interesting; are they like limit ordinals? It seems possible that they might provide a tool for dealing with the infinite decision-theory problems, and I'd be interested in that. (Though I must admit that this stuff is just a sideline with me.)

Posted by: Matt Weiner at May 21, 2004 03:05 PM

Is the 2-envelopes paradox really anything to do with infinities? I'm not so sure. (Rather, I thought it was more to do with misusing variables in such a way as to imply a sort of '3-envelope' scenario.) I blogged about this a while ago... is there a flaw in my analysis there? Or are we talking about different problems?

Posted by: Richard Chappell at May 22, 2004 12:46 AM

"Is the 2-envelopes paradox really anything to do with infinities?"

Some forms of it are.

One form is where you are assigning a prior probability to the amount of money in the "small" envelope. Some people will argue that, since you don't know anything about its contents, you should put a uniform probability from 0 to infinity.

Another is in the St. Petersburg form of the paradox; it is in the Chalmers paper that Matt cited above (a page or two down).

I disagree with your analysis when you say that there is a 50% chance ...

(from my Crooked Timber comment)

Define x as the “small” amount in the envelopes (so the two envelopes have x and 2x in them). You don’t know what x is, so you assign a probability distribution to describe your information about x.

For example, my distribution for x would be different if I were playing with Bill Gates than if I were playing with my professor.

You now take one envelope and look inside. Define y as the amount you see.

Now, either y=x or y=2x. The probability isn’t necessarily 50%; you need to calculate it using your initial distribution for x.

For example, if you know that I have decided to limit my losses to $100, your probability for x will be zero for any x>$50. If you observe y=$75, then you won’t switch, since seeing the $75 has told you that you have the higher envelope for sure (since I wouldn’t risk $150, the other must have $37.50).

After you see y, you can use standard probability calculations to find the probability that y=2x (call it p1). It turns out that you should switch only when p1 is less than 2/3.

In practice, here is how it works: if I open the envelope and see $100, I think “Before I saw this, what were my probabilities for x=$50 and for x=$100? If the former was less than twice the latter, I should switch.”

So, depending on your initial distribution for x, you might want to switch or you might not, depending on what you see in your envelope.

(end quote)
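Bill's recipe is easy to mechanize. The Python sketch below is mine, with a made-up toy prior (the dollar values are hypothetical); the rule is the one stated above: having seen y, switch iff the prior weight on x = y/2 is less than twice the prior weight on x = y, equivalently iff p1 < 2/3.

```python
from fractions import Fraction

def should_switch(y, prior):
    # Bill's rule: having seen y, switch iff P(y = 2x | y) < 2/3,
    # equivalently iff prior(x = y/2) < 2 * prior(x = y).
    p_hold_larger = prior.get(Fraction(y, 2), Fraction(0))   # weight on x = y/2
    p_hold_smaller = prior.get(Fraction(y), Fraction(0))     # weight on x = y
    total = p_hold_larger + p_hold_smaller
    if total == 0:
        raise ValueError("seeing y was impossible under this prior")
    p1 = p_hold_larger / total          # P(your envelope holds 2x)
    return p1 < Fraction(2, 3)

# A made-up toy prior on the smaller amount x.
prior = {50: Fraction(1, 4), 100: Fraction(3, 4)}

assert should_switch(100, prior)       # prior(50) < 2 * prior(100): switch
assert not should_switch(200, prior)   # y = 200 means you hold 2x for sure: keep
```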

The infinity issues come in in the first step, assigning the distribution to x. For example, Chalmers assigns a distribution that mimics the St. Pete's paradox (one envelope is the St. Pete's paradox, the other is double it).

I disagree with Chalmers's analysis of this when he says:

"So (2) for all x, if I knew that A contained x, I would have an expected gain in switching to B."

This is the "taking limits too soon"/"calculating directly on an infinite set" issue I have been discussing. If we fix the number of possible flips at N, (2) is false (as N gets large, I think that most possible values of x will be better than switching). So, if we take the limit as N goes to infinity, it will still be false.

So two limit methods give two possible answers. The problem may not tell us which limit method is correct, but I have never found a paradox that didn't disappear if you take the limits as late as possible. (This may change soon, as I am having a discussion with someone over on the CT thread about this sort of thing).

Matt,

"If t is outside [-1, 0), f(t) = {},

If t is in [-2-i+1, -2-i), f(t)={i ... 10i}."

I don't understand: if t = -2, then it is outside [-1,0) and it also is in [-2-i+1, -2-i) when i=1 i.e. [-2,-3). What is f(-2)?

Am I just completely confused? It happens a lot :-)

Also, I'm confused about the following:

"For any t otherwise, j is in f(t) iff j is in {10k-9,...10k} and t is in [-2-k+1, -2-j )."

Was there an HTML issue? Or am I, again, just confused?

Damn it, I thought that these comments supported HTML. Sorry, there were supposed to be a lot of exponents in there. Pretty much everything that comes after a 2 is supposed to be an exponent. I'm on a public computer so I don't have enough time to say anything substantive, but when I have time maybe I'll correct the original (and respond to you guys).

Posted by: Matt Weiner at May 22, 2004 02:15 PM

Ah, cheers Bill. The crucial difference is that in the version of the paradox I'm interested in, you don't get to open the first envelope before swapping, so you don't get to revise the probabilities to anything other than 50/50.

Given that we still face the apparent paradox (that, when viewed in terms of "doubling or halving" your money, it seems better to swap to the other random envelope) in this simpler version, is there any advantage to be gained from adding further complications?

Or would you agree with my solution to the simple paradox, but go on to say that the "infinity" versions of the paradox are more interesting, because my solution no longer applies? (That would make more sense, I guess.)

Posted by: Richard Chappell at May 22, 2004 07:00 PM

Richard,

Oh dear; I guess I fail reading comprehension as well as logic :-)

You have it exactly right if you don't look at the money inside. There is no need to introduce y at all, and the e-value is 1.5 times the e-value of x, where x is the (unknown) amount in the smaller envelope.

You do say the following:

"After all, however much money you have at the moment, the other envelope either has half of that, or else double."

You have introduced the idea here that is similar to looking inside the envelope. You are saying, "Even though I'm not going to look inside, what if I did and saw y dollars?"

The paradox is that, even though it is clear from your analysis that switching is silly, it seems that, no matter what is in your envelope, you would be better off switching.

That move is the one my analysis tries to fend off; in fact, the probabilities might change from 50-50 when you observe y, and there must be some values of y where keeping what you have is the better choice.

So I do think you need to explicitly address the idea of looking inside the envelope to defeat the paradox.

You do this by reminding us that:

"The problem is that, within the E(z) calculation, the y in "2y" is different from the y in "y/2". That is, the variable y is being used to simultaneously represent two different values (x in one place, and 2x in the other)."

However, although I understand your point, I don't think I am entirely convinced. If I look in my envelope and see y=$100, then the other envelope will contain either $50 or $200. That is, it will contain either 2y or y/2, right? So I'm not sure how your analysis can argue with this.

My analysis allows for this; it argues that, although those are the two possibilities, the probability 50-50 might change when you observe y.

Matt,

"In the example you gave about writing down the integer, it seems to me that the answer "what's on the page after a second?" just isn't defined; every number that has been written down has been erased later."

How about if I never erase any numbers? I can fit them all on one sheet of paper, right? (1/2, 1/4, 1/8 ...)

The idea I was trying to get at is that, if you can finish an infinite number of coin tosses in a finite time, then I can write down an infinite integer, and then go on to write a different infinite integer that is one more than the first. Once you have these ideas, then the paradoxes disappear; the urn is filled with an immense number of balls with these infinite integers on them, even after we have removed all the finite integers.

"it does seem to me as though mathematics has to deal with infinite sets a lot."

I can deal with them.

Example: I integrate continuous (gasp!) functions of real numbers (aack!) from minus infinity (shock!) to plus infinity (horror!) all the time. I model all of these infinities as limits, though: the continuous function as a limit of discrete functions, the real numbers as a limit of rational numbers with finite decimal expansions, the minus and plus infinities as a limit of finite numbers.

Example: I don't think that there is a highest positive integer. In any finite set of positive integers, there would be a highest, right? So I must have some concept that the set of all positive integers isn't finite; mine is just that the highest positive integer in the set {1..N} diverges as N increases indefinitely.

Example: Gauss proved things about all integers, all real numbers, etc. so he definitely could deal with these infinite sets; I believe he simply modelled them as limits.

So I have nothing against infinite sets; they are simply shorthand for limits.

My position is that all infinities can and should be modelled as limits (Anarch, on the CT thread, has told me he has counterexamples, so I may change this position soon).

Among other reasons, one thing that makes me believe this is the following. If we define "infinite setters" as people who use infinite sets directly, then

1) when infinite setters don't produce paradoxes, they and I get the same answers, and

2) when infinite setters do produce paradoxes, I do not. I either refuse to answer (by showing that the problem is ill-posed), or produce a non-paradoxical answer.

Counterexamples to any of the above would be welcome. I would really like to see examples of infinities that either can't be modelled as limits or where such modelling would lead to gross misunderstandings of them.