(1) The maintenance man came by to fix the radiator and with three quick turns of his wrench... seems to have accomplished nothing. Developing...
(2) Employing my radical interpretation skills, it seems as though my cat has one basic, other-regarding preference: She wants to sit in the chair I want to sit in. Sometimes this preference is so strong that she will sit in it regardless of whether I am already sitting in it. This way of looking at things might console one of my commenters...
(3) I reconfirmed this morning that, of all the irritating music I listen to, there are only three things that bother my cat. (Actually, I just reconfirmed one. To reconfirm all three in one day would be sadistic.) Perhaps I will describe them on Monday; I haven't been doing enough [any] music blogging.
The title's supposed to be something of a play on words, but it didn't really come off. Sorry.
Anyway, this will be one (possibly not the first) of an ongoing series of crotchets, in which I take aim at things most other philosophers seem to believe, but I don't. Today's is the idea of a de re attitude, as opposed to a de re ascription of an attitude. I'm not saying I can prove there's no such thing as an attitude that itself is de re; but it seems to me that you don't need the idea of a de re attitude to do the work that people usually want done.
The target is (naturally) a remark of Brian Weatherson's--in very short, he considers a case in which he's running for office, he doesn't want to win, but he sees himself on TV dressed up in a funny costume and forms "the desire that that guy wins the election." In this context, which of (1) and (2) are true?
(1) Brian wants Brian to win the election. (2) Brian wants to win the election.
Brian says: (2) false, (1) true, and I agree at least about (2). But then he proposes:
De Se Hypothesis
(2) is only true if Brian has a de se desire, a desire that is essentially self-directed. It's false in the case described because he has a de re desire that that guy wins, a desire that is directed at the guy on TV, which just happens to be him.
But it seems to me that both the desires can be described perfectly well de dicto. Brian has desires that he could express as follows:
(A) I desire that I not win the election.
(B) I desire that that guy [on TV] win the election.
In fact, let's add another case: Brian looks at a wall that happens to be a mirror, sees a perfectly staid respectable guy [aside--I'm going to catch it now], and forms the desire:
(C) I desire that that guy [in the mirror] not win the election.
(We can have it that Brian is watching TV in a bar with a mirrored wall, so he forms the desires simultaneously.)
(B) can be described de re as the desire that Brian win the election; just as (A) and (C) can be described de re as the desire that Brian not win. But I just don't see what we gain by saying that the desire in (A) or (C) is de re--that Brian is involved in it, without the mediation of a description. For one thing, it sticks us with saying that Brian both wants and doesn't want Brian to win, while obscuring how this conflict comes about.
But thinking of an attitude as de re is always going to lose us information. Objects just don't fit into desires (or other attitudes) unless they're thought about. And then why not include the way in which they're thought about as part of the desire?
So I think (2) is false, and the de se hypothesis is good: "W desires to phi" means that W has a desire that could be expressed "I want to phi." (Or "I wanna," I guess.) As for (1)... it seems as though it's true if Brian has a desire "I want X to win," where X refers to Brian--for some range of possible descriptions of Brian, set by the context. I think the restricted range is necessary to avoid ascriptions such as "Alfred wants the most evil candidate to win," in cases where Alfred wants Senator Palpatine to win but doesn't know that Palpatine is the most evil.
(I might entertain views on which semantically, the range is unrestricted, but "Alfred wants the most evil candidate to win" is pragmatically disallowed. But I should warn you that they can lead to a paradox. More on this later, maybe.)
I should say that I think a lot of this stuff is cribbed from Bob Brandom's Making It Explicit; possibly more than I know, since I absorbed a lot of the book by osmosis while I was at Pitt.
I forgot to blog the release party for the Salt robots issue, at 9 pm EST at Games 'N At in Pittsburgh--though if you're reading this blog and you're in Pittsburgh, you undoubtedly already know about it.
For the rest of ya, "Salt is a journal [edited by my friends Ellen and Berry] that modernizes and reinterprets archetypes. Each issue explores a theme aesthetically, literally, and metaphorically." I have an article in the robots issue about robots and hip-hop, focusing on electro and Kraftwerk. It's not that scholarly, because I don't know that much about electro and Kraftwerk.... I also apparently played a small part in helping the editors realize that robots were an archetype.
[The set-up for the Two-Envelope Problem is here.]
The Chronological Ordering Principle relied on the idea that, if a process with a finite number of possible outcomes takes place after a process with an infinite number of possible outcomes, the outcome of the finite process can't affect the outcome of the infinite process. So, if a strategy yields the best expected utility (EU) no matter what happened in the finite process, that strategy should be followed. The conditional EUs will depend on the probability distribution of the finite process, but there's no way that can cause trouble.
The COP can't handle the Advance Coin Flip, because in the Advance Coin Flip the finite process (the coin flip) takes place before the infinite process (the St. Petersburg bet). But it seems as though the finite process doesn't affect the outcome of the infinite process anyway. The real point is whether the infinite process is independent of the finite process, not whether it takes place first.
So let's try modifying the Chronological Ordering Principle: instead of requiring that the finite process come second, require that the outcome of the infinite process be independent of the outcome of the finite process, no matter the order in which they occur.
(What makes processes independent? I'm not sure, really. In fact, I'm not even sure how to define processes--you might be able to gerrymander them in all sorts of tricksy ways. At the end of the post we'll see how this might cause trouble.)
(Independent Process Principle) Suppose that probability space is partitioned by performing process Q, which has an infinite number of outcomes, and process R, which has a finite number of outcomes, and that the outcomes of Q are independent of the outcomes of R. Let PQ be the partition induced by the outcomes of Q alone--that is, two ultimate outcomes are in the same cell of PQ iff they result from the same outcome of Q. Suppose that, in every cell C of PQ, strategy S yields a higher EU than strategy T given that you're in C. Then strategy S is preferred over strategy T.
The IPP yields the right answer for the Advance Coin Flip; take R to be the advance coin flip, and Q to be the St. Petersburg; no matter how Q comes out, your EU is higher (probabilizing over R) if you take the coin flip.
In the Traditional Two-Envelope Problem, can you take Q to be the process of writing down the number that you actually get and R to be the process of deciding whether you get the bigger number or the smaller number (x as opposed to 2x)? I think it's pretty clear that you can't. The way the problem is set up, the pair [x, 2x] is chosen first, and then you're given one of the two envelopes. So, if you try to set up Q as an infinitary process such that the outcome of Q is the number you get in your envelope, it's pretty clear that the outcomes of Q are not independent of the finite process of deciding which envelope you get. So the IPP is not applicable.
(We can reframe the traditional two-envelope problem as follows:
(R) one angel flips a coin to decide whether you get the bigger or the smaller envelope;
(Q) then another angel, not knowing the outcome of (R), randomly picks the x such that x is in the smaller envelope and 2x is in the bigger;
and then a third angel hands you the envelope and gives you the option of switching. In this framing Q is independent of R, so the IPP applies: within each cell fixed by x, the EU of switching and the EU of standing pat are both 3x/2, so it doesn't matter whether you switch.)
But when we try to apply the IPP to the St. Petersburg two-envelope problem, we run up against what might be a gerrymandered process. The St. P two-envelope problem can be framed in either of the following ways:
(1) First (R) they decide whether to give you the red envelope or the blue envelope (1/2 chance of either). Then (Q) they run a St. Petersburg for the red envelope and a St. Petersburg for the blue envelope.
(2) First (R) they decide whether to give you the red envelope or the blue envelope (1/2 chance of either). Then (Q) they run a St. Petersburg for the envelope they're going to give you and a St. Petersburg for the one they're not going to give you.
Either (1) or (2) could be used to describe the same process. If (1) is the description at issue, then the IPP tells you that it doesn't matter whether you switch. The cells of PQ are of the form [outcome of red St. P, outcome of blue St. P], and in each such cell the EU of switching and the EU of standing pat (probabilized over R) are the same.
But if (2) is the description at issue, the IPP yields no verdict. The cells of PQ are of the form [outcome of the St. P you have, outcome of the St. P you don't]. Obviously, in some of those cells the EU of switching is positive, and in some the EU is negative.
Nor do I think that it's obvious that, in (2), Q is not independent of R.
Now, I'm not sure this is fatal. The IPP might best be phrased existentially--"If there is some way of partitioning probability space such that it is partitioned by process Q and process R" etc. "and that in every cell C of PQ, strategy S yields a higher EU than strategy T given that you're in C. Then choose S over T." Then the possibility of framing the St. P two-envelope problem as in (1) would mean that the IPP says it doesn't matter whether you switch--and (2) doesn't yield a conflicting verdict. One problem is that I can't rule out a problem that can be framed in two ways that yield conflicting verdicts. (Any proof might depend on a rigorous definition of "independent process.")
In fact, there's something even more worrying here. Take again the case in which they run a St. P for the one you're going to get, and another for the one you're not going to get. There's just no coin flip. It seems as though you have to describe this as in (2). And then, as above, the IPP doesn't yield an answer, though it should tell you (I think) that it doesn't matter whether you switch.
[AFTERTHOUGHT: Actually, I'm leaning towards the view that the utility of switching is undefined rather than zero. I'm certainly leaning toward that thought in the Traditional Two-Envelope problem. Maybe Brian's new problem will shed some light.]
Now, you could try to decompose this into two concurrent processes: one process that yields an unordered pair [outcome of the two St Ps], and one process on which, given that, you have a 50/50 chance of getting the bigger or smaller (if there's a difference). But it seems to me that the first process really--really really--wouldn't be independent of the second.
One of the radiators in my apartment has decided to crank itself up full-time, without regard to the thermostat. (Temperatures in Salt Lake have got high enough that this is a bad thing--sorry, East Coasters.) Yesterday I called my landlord agency to complain and had this conversation:
Woman in Office: If it's not responding to the thermostat, you can shut off the valve manually.
Me: OK; where's the control?
WiO: It's on the radiator; there's a brown knob to the right, marked with the directions for "open" and "close."
Me: I don't see it.
WiO: Look around. It's a brown knob on the bottom.
Me: [burns hand] Ow! That's not it. Ok, I'm looking at the right, and there's a white plastic thing saying "umpty-ump valve company; default setting: closed."
WiO: It's definitely brown.
Later I got my neighbor to come in and confirm that the valve is, indeed, white. (And that there isn't any way for me to turn it off. The maintenance man responded admirably quickly and is coming this afternoon.)
Spent most of yesterday (well, besides grocery shopping) worrying over a single paragraph in my paper on testimony and agency--and, like most of the paragraphs that take that long, it's not that great and will probably have to go in the revision. Also, Brian Weatherson has begun to swarm all over the two-envelope problem, which looks to give me a lot more stuff to think about--including some things I may never figure out. But I did write another two-envelope post, which will be up soon.
The Powell's synopsis for the Third Policeman starts like this:
"One of the most elegant and inventive contemporary writers, Harry Mathews has created an accomplished and diverse body of work"--true but irrelevant, as they say.
But I did pick up Mathews' Tlooth at Sam Weller's yesterday (along with Les Murray's Fredy Neptune, Derek Walcott's Omeros, and Junichiro Tanizaki's Some Prefer Nettles--bets that I actually read all these books will not be accepted). The first short chapter--an attempt at an assassination with an exploding baseball--had me hooked. The whole book seems (so far) like a series of nutty episodes, full of crossword puzzle words ("urubu" for buzzard)--as if it were being generated by some hidden scheme, rather than the necessities of plot and character. Given Mathews' membership in the Oulipo, that's not unlikely.
All those things I said about the book were meant as compliments, by the way. I warned you I was pretentious.
Nothing new about the two-envelope paradox today, but I'd like to bring up another one: The Surprise Examination Paradox (aka the Unexpected Egg, Senior Sneak Week, the Surprise Hanging, the Class A Inspection--does anyone have a centralized list of all the variants?)
The basic set-up is as follows: I have a class that meets Monday through Friday. One Friday I tell the students, "There will be a pop quiz next week. The day it is given, five minutes before class, you will not know that the quiz will be that day."
The students say: The quiz can't be Friday. If they haven't had the quiz by Thursday, they'll know the quiz has to be Friday--and then it won't be a surprise, contradicting what I said.
What if I haven't given the quiz before Thursday? Well, Thursday morning, they reason as follows: "The quiz can't be Friday, as above. So it has to be today (that's the only day left)." But that means that, Thursday morning, they know the quiz will be Thursday--and that can't happen.
But on Wednesday, they can go through the same reasoning... the quiz can't be on Thursday or Friday, so it has to be today, so we know it'll be today, so it can't be today. And so on back through the week, no matter how many days it is.
Yet on Wednesday, when they get the quiz, it's a surprise. So I was right after all.
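The students' elimination argument is mechanical enough to sketch in code. This is only an illustration of their (paradoxical) backward reasoning, not an endorsement of it; the day labels and set-based bookkeeping are mine:

```python
# The students' backward-elimination reasoning, sketched.
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
candidates = set(days)  # days on which the quiz could still be a surprise

# Work backward from Friday: if a day is the only candidate left from
# that day on, a quiz then would be predictable, so strike it.
for day in reversed(days):
    later_candidates = {d for d in candidates if days.index(d) >= days.index(day)}
    if later_candidates == {day}:
        candidates.discard(day)

print(sorted(candidates, key=days.index))  # [] -- every day gets eliminated
```

Run the loop with any number of days and the result is the same empty set, which is just the students' conclusion that no surprise quiz is possible--the conclusion the Wednesday quiz refutes.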
I'm going to argue that this is still paradoxical, even if the class only meets once a week.
Suppose I say:
[S] Class, you'll have a test next week. And you won't know that you have the test until you get it.
Seems obviously self-contradictory, right? I just told them that they would have the test.
Except... just because I told them they have the test, does that mean they know it? That assumes that they can gain knowledge by employing this rule:
[M] Accept what Matt tells them.
If Rule [M] yields knowledge, then the students know the truth of [S]:
(1) the students know that they will have a test next week, and
(2) the students know that they don't know that they have a test next week.
By the factivity of knowledge, (2) yields
(3) the students don't know that they will have a test next week
which is a flat contradiction of (1). So we have reduced to absurdity the premise that rule [M] yields knowledge.
(If you're familiar with Timothy Williamson's analysis of the surprise examination, you're looking around for an illicit use of the KK principle--that if you know p, you're in a position to know that you know p. I don't think there is one, but let me know if you find it.)
So, rule [M] does not yield knowledge. So the students have no way of coming to know that they have an examination next week. Which means that, when they get the test next week, they won't know about it until they get it. So my original statement [S] turned out to be true after all.
Williamson uses this paradox to make some deep points about margin-of-error requirements for knowledge. I've used it to make some cheap points about the epistemology of testimony (although not as cheap as in this post). It seems to me that it might be possible to turn my argument into something deeper about the nature of knowledge--in multi-day inspection paradoxes, the students can start the week knowing that what I said was true, and then lose that knowledge before the end of the week, even though it remains true all along.
But I'm not sure that works. As a great man once said, more on this later, maybe.
From Flann O'Brien, The Third Policeman, p. 94:
Standing at a point on the postulated spherical earth, [de Selby] says, one appears to have four main directions in which to move, viz., north, south, east and west. But it does not take much thought to see that there really appear to be only two since north and south are meaningless terms in relation to a spheroid and can connote motion in only one direction; so also with west and east. One can reach any point on the north-south band by travelling in either 'direction', the only apparent difference in the two 'routes' being extraneous considerations of time and distance, both already shown to be illusory. Instead of the four directions there are only two. It can be safely inferred,5 de Selby says, that there is a further similar fallacy inherent here and that there is in fact only one possible direction properly so-called, because if one leaves any point on the globe, moving and continuing to move in any 'direction', one ultimately reaches the point of departure again.
5 Possibly the one weak spot in the argument.
Most of tonight's blogging time has been spent playing defense (see previous post), so I'd like to quickly sketch another problem for the Chronological Ordering Principle.
It concerns a case that Brian e-mailed me. Call it the Advance Coin Flip.
(Advance Coin Flip) The angel offers you the St. Petersburg in a sealed envelope, as in the Extra Coin Flip. He also remarks, "God flipped a fair coin this morning, before S/He ran the St. Petersburg, but S/He didn't tell me what it was. Anyway, you can take this deal: If the coin came up heads, you gain 3 utils, but if it came up tails, you lose 1--in either case, over and above what's in the envelope." Do you take the extra deal?
It seems obvious that you should take it--at least as obvious as that you should take the Extra Coin Flip. It doesn't matter that the coin was flipped before the St. Petersburg was run. Except--for the Chronological Ordering Principle, it matters a great deal, since the COP only works when the finite process takes place after the infinite process.
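The cell-by-cell arithmetic behind "obvious" can be sketched quickly. A minimal check, where the sample payouts 2^n stand in for whatever the St. Petersburg envelope holds:

```python
# EU of taking the Advance Coin Flip side deal (+3 utils on heads,
# -1 on tails), conditional on the envelope's payout v.
def deal_eu(v):
    return 0.5 * (v + 3) + 0.5 * (v - 1)

# Whatever v is, the deal adds a flat +1 util in expectation,
# so you should take it no matter how the St. Petersburg came out.
for n in range(1, 5):
    v = 2 ** n
    print(v, deal_eu(v) - v)  # always 1.0
```

The chronology of the flip never enters the calculation, which is the point: only independence matters.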
Obviously, what's going on here is that the two processes are independent, so it doesn't matter which is done when. If the infinite process is run first, it has to be independent of the outcome of the finite process--that's how the COP works. But it would be nice to have a characterization of this independence that didn't rest on quirks of chronology.
Maybe I'll ask for one for my birthday.
Unfortunately, they're already asking me questions I can't answer. In the comments to my first two-envelope post, Mark asks:
Hey, can you explain why people think the partition-dominance principle is so crucial here? Suppose in either case I open the envelope and see the contents. Then, doesn't the reasoning go through just as before? It seems to me that any mistake introduced by the partition-dominance assumption can't be the root of the matter.
Actually, I can answer his first question--I probably can't explain this. But here's my response (patched together from those comments).
First, for the St. Petersburg case, I think EU maximization does yield the paradoxical result that it's irrational to switch before you open the envelopes, but it's rational to switch after you open the first envelope, no matter what is in the first envelope. The EU of the second St. P is infinite, so it's rational to give up the first St. P for the second, no matter what the result of the first St. P is. [This is a paradox I hope to say more about eventually.]
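Why the St. Petersburg EU is infinite can be seen from the truncated sums. This assumes the usual formulation, where the first heads on flip n pays 2^n utils with probability 1/2^n (the post doesn't restate the payoff schedule):

```python
# Truncated EU of the St. Petersburg bet: first heads on flip n pays
# 2^n, with probability 1/2^n, so each term contributes exactly 1 util.
def truncated_eu(max_flips):
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

print(truncated_eu(10))   # 10.0
print(truncated_eu(100))  # 100.0 -- the partial sums grow without bound
```

Since the partial sums grow without limit, the full EU diverges, which is why no finite payout in the first envelope can beat switching.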
(I think that's what's driving the end of this very long year-old post by Brian, which I just stumbled across.)
But for the traditional two-envelope problem, I think we have a problem. I don't think we want it to be the case that, once you open the first envelope, you have a reason to switch. Now, one issue here is the mathematical impossibility of defining a probability distribution on which every rational number is equally likely--probabilities are supposed to be countably additive, which makes it impossible to assign the same probability to each of countably many disjoint outcomes (all zeroes sum to zero, and any positive value sums to infinity). [Afterthought: Anyone know if non-standard analysis can do any work here?]
This leads me to think that Mark's point makes the two-envelope case look much trickier. The way the argument is usually phrased, you say "Suppose there's 4 utils in your envelope. It's just as likely that the other one has 2 or that it has 8." But, if you can't actually define a probability distribution for the choice of the pair [x, 2x] that gets written in the envelope, then I don't think you can actually say that.
To spell it out: Here's one cell of the possibility space: The case in which I have 4 and the angel has 8, and the case in which I have 4 and the angel has 2. The paradoxical argument assumes that, conditional on your being in this cell, each possibility is equally likely--so you can calculate the EU of switching (conditional on your being in this cell) at 5.
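Spelled out as arithmetic, on the equal-likelihood assumption the paradoxical argument needs:

```python
# The within-a-cell calculation, for the cell where my envelope holds
# 4 utils: the angel has 8 or 2, assumed equally likely.
mine = 4
angel_options = [2 * mine, mine // 2]  # [8, 2]
eu_switch = sum(angel_options) / len(angel_options)
print(eu_switch)  # 5.0 -- greater than 4, so switching looks profitable
```

The arithmetic is trivial; everything turns on whether the equal-likelihood assumption in the second line is legitimate.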
Now, I don't think that the conditional probability [the angel has 8/we're in this cell] actually can be defined. Which means you can't define the EU of switching, given that you're in this cell.
What seems to be going on here is an illegitimate use of the Indifference Principle--which says, "If there are n possibilities, and you have no idea which obtains, assign each a probability of 1/n." Brian has made some criticisms of the Indifference Principle (which I don't understand right now). But this might be a new one.
Now, I don't think this harms my ultimate point--the Chronological Ordering Principle still works as a guide to when you can calculate those conditional probabilities. But these are tricky waters.
Geoff Pullum, a couple weeks back, asked whether there's a hint of the liar paradox in this New Yorker caption:
There's something you need to know about me, Donna. I don't like people knowing things about me.
His philosopher partner is surely back from Ohio, but I'll try anyway.
(1) First of all, we need to assume that the speaker is reliable enough that Donna can gain knowledge by believing what he says. Let's do that.
(2) If this were said in real life, the implicit quantifier in the second sentence would probably be restricted--he doesn't like people knowing certain sorts of things about him. But that reading wouldn't be very funny, would it? So take the quantifier as unrestricted.
(3) OK, then, the speaker is causing Donna to know something about him, even though he doesn't like anyone to know anything about him. Geoff asks if this is coherent. Surely it is--we deliberately cause things we don't like all the time. For instance, I don't like to wake up at 7 in the morning, but I set my alarm for that time anyway, because I also don't like having to rush to school or to be late to my classes.
In this case, the speaker is letting Donna know one thing about him, probably to forestall her learning anything else about him (say, by asking). This is the sort of tradeoff we make all the time--it doesn't even require weakness of the will or any of that other fun stuff.
So the self-referentiality makes the sentence funny. (Well, it probably does not seem funny after you've waded through my pedantic analysis, but trust me, it's funny.) But it doesn't make it incoherent, or truth-valueless, or even self-contradictory. It just reflects a hard truth--sometimes you have to do things you don't like.
(BTW, Geoff presupposes that "This sentence makes a false claim" is truth-valueless. Them's fighting words! But not when addressed to me.)
Teaching the allegory of the cave this morning. I used the laugh line about how it's an excuse for the general spaciness of philosophers. It seems appropriate that, as I was doing this, I also managed to forget that class ends at 10:30 rather than 10:40.
Concerning the theory discussed here, on which "tall" is not context-sensitive:
Consider the following sentence:
My cat has trouble sitting on my lap, because, though I'm big, my cat is bigger.
On at least one view (I think it's the one Jason Stanley calls "pretty hopeless"), this should come out true: I'm big for a person, but my cat is even bigger for a cat.
I have conflicting observations about this:
(1) It seems completely ridiculous. I'm a lot bigger than my cat.
(2) I had this thought, in almost exactly those words, the other day when my cat was drooping off my lap.
The traditional two-envelope problem can be described as follows:
You have an envelope with x in it. The angel has an envelope with either x/2 or 2x in it; each is equally likely. If you want to increase the expected value of the number in your envelope, should you switch envelopes with the angel?
[UPDATE 1/27: I don't think this is an accurate description anymore. See here.]
I'm going to try to show that the answer is "Depends on how things got that way."
In the traditional two-envelope problem, God randomly picks a number, S/He writes that number and its double in two envelopes, and then the angel randomly gives you one of the two envelopes. If you open your envelope and see that it has x in it, it's meant to be equally likely that it's the smaller envelope (and hence the angel has 2x) and that it's the larger envelope (and hence the angel has x/2).
(Brian pointed out to me in D.C. that it's actually impossible to define a probability distribution such that each rational number is equally likely, and that makes it hard to say that it really is equally likely, given that you have x, that it's the smaller or the larger number. But let me bracket that--which, as Jerry Fodor says, means "try not to think about it"--it shouldn't matter for the larger point.)
(Reversed Two-Envelope Problem) God randomly picks a rational number x. He writes down x in a blue envelope. The angel then flips a fair coin: heads, it writes down 2x in a red envelope; tails, it writes down x/2 in the red envelope. The angel hands you the blue envelope, and then offers to switch envelopes with you. Should you switch to the red envelope?
The answer, I think, is yes. The case is no different than if you had opened the envelope, and the angel had offered a fair coin flip that would double or halve your stake. The EU of taking the coin flip is 5x/4, and that's better than the x you have.
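The 5x/4 figure is just the fair-coin average of doubling and halving. A minimal check (the sample value of x is arbitrary):

```python
# EU of the red envelope, given that the blue envelope holds x:
# heads (prob 1/2) -> 2x; tails (prob 1/2) -> x/2.
def red_eu(x):
    return 0.5 * (2 * x) + 0.5 * (x / 2)

x = 8.0
print(red_eu(x))      # 10.0, i.e. 5x/4
print(red_eu(x) > x)  # True: switching to red beats keeping x
```

Since the comparison holds for every value of x, it holds however God's pick came out, which is what the cell-by-cell argument needs.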
Note also that this argument doesn't work in the other direction (and a good thing, too). If you already have the red envelope, your EU is 5x/4, and you don't want to exchange that for the x in the blue envelope.
What does my Chronological Ordering Principle say? Well, the infinite process Q consists of God's putting x in the blue envelope. The finite process R consists of the angel's flipping the coin. So the partition PQ is determined by the value x. Each cell of PQ consists of two equally likely outcomes, one with x in the blue envelope and x/2 in the red, the other with x in the blue and 2x in the red. In each cell, the EU of the blue envelope is x, and the EU of the red envelope is 5x/4. So in every cell you're better off taking the red envelope. According to the Chronological Ordering Principle, you should take the red envelope.
Note that the description at the top of this entry applies equally well to the Reversed Two-Envelope as to the traditional two-envelope problem. In each case, one envelope contains twice the other, and it's equally likely that your x is the smaller number as it is the bigger number. But it matters whether they started by giving you x or by determining what the two numbers in the envelopes were going to be.
I think that this is related to the notorious Monty Hall problem, but I'm sticking my chin out when I say that. More on this later, maybe.
Perhaps this is more like my second shot, as it's paraphrased from my comment on this thread of Brian's. Anyway:
We know that the Dominance Principle cannot be applied without restriction to processes that have an infinite number of possible outcomes. But we would also like to apply it in the presence of some such processes.
In the previous post, discussing the Extra Coin Flip, I argued that it was possible to disentangle the outcomes of the coin flip (two possibilities) from the outcomes of the St. Petersburg (infinitely many possibilities). So it should be OK to argue from "No matter what the St. Petersburg gives me, I expect to gain by taking the coin flip" to "I should take the coin flip" in this case. (All this is vague talk, and I reserve the right to take it back at any time.)
Here's one way in which we can be sure that the outcomes of the coin flip don't interfere with the outcomes of the St. Petersburg: The coin flip comes later. My first attempt at formulating a decision principle relies on that:
(Chronological Ordering Principle) Suppose that probability space is partitioned by performing process Q, which has an infinite number of outcomes, and then process R, which has a finite number of outcomes. Let PQ be the partition induced by the outcomes of Q alone--that is, two ultimate outcomes are in the same cell of PQ iff they result from the same outcome of Q. Suppose that, in every cell C of PQ, strategy S yields a higher EU than strategy T given that you're in C. Then strategy S is preferred over strategy T.
In the case of the Extra Coin Flip [Sherlock Holmes, call your office!], Q is the St. Petersburg and R is the coin flip. No matter what the outcome of the St. Petersburg, you've got a higher EU from taking the coin flip than not. Say the St. Petersburg gave you 4 utils; if the flip comes up heads, you get 8 utils; tails, you get 3 utils; the EU is 5.5 utils, better than the 4 utils you get from standing pat. Obviously it's the same no matter what the St. Petersburg. So the Chronological Ordering Principle says you should take the flip; good.
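The cell-by-cell check generalizes beyond the 4-util example. One reading of the 4 → 8 / 4 → 3 figures (heads doubles the payout, tails costs 1 util; the Extra Coin Flip's general rule isn't restated in this post, so this is a guess):

```python
# Guessed Extra Coin Flip rule, from the 4 -> 8 / 4 -> 3 example:
# heads doubles the St. Petersburg payout v, tails costs 1 util.
def flip_eu(v):
    return 0.5 * (2 * v) + 0.5 * (v - 1)

# Cell-by-cell check over possible payouts 2^n: the flip's EU beats
# standing pat in every cell, so the COP says take the flip.
for n in range(1, 6):
    v = 2 ** n
    print(v, flip_eu(v), flip_eu(v) > v)
```

On this rule the flip's edge in each cell is (v - 1)/2, which is positive for every possible payout, so the COP's verdict doesn't depend on which cell you're in.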
In the first two-envelope case, Q is God's choosing a rational number x and writing down x and 2x in the envelopes, and R is the angel's randomly giving you one. So the cells of PQ are determined by the values of x. Each cell consists of two equally likely outcomes. In one of those outcomes you have x and the angel has 2x, in the other you have 2x and the angel has x. In none of these cells do you expect to gain by switching envelopes. No matter what cell you're in, the EU of switching is 3x/2, and the EU of standing pat is 3x/2. So the Chronological Ordering Principle says it doesn't matter what you do; good.
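The within-cell comparison for the first two-envelope case can be sketched the same way (the sample x is arbitrary):

```python
# Within the cell fixed by God's choice of x, you were dealt x or 2x
# with equal probability; compare standing pat against switching.
def eu_stand(x):
    return 0.5 * x + 0.5 * (2 * x)      # keep whichever you were dealt

def eu_switch(x):
    return 0.5 * (2 * x) + 0.5 * x      # take the other envelope instead

x = 4
print(eu_stand(x), eu_switch(x))  # 6.0 6.0 -- both 3x/2, so it's a wash
```

The two expectations are the same two terms in the opposite order, which is why every cell delivers the same verdict of indifference.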
In the St. Petersburg two-envelope case, Q is God's running the two St. Petersburgs and writing down the results in the two envelopes, and R is the angel's randomly giving you one. The cells of PQ are determined by the outcomes of the St. Petersburgs, say 2^m and 2^n. Each cell consists of two equally likely outcomes; in one you have 2^m and the angel has 2^n, in the other it's the other way around. In none of these cells do you expect to gain by switching envelopes. The EU of switching is (2^m + 2^n)/2, and so is the EU of standing pat. So the Chronological Ordering Principle says it doesn't matter what you do; good.
The Chronological Ordering Principle is three for three so far. Some selection bias is at work here, natch, since I designed it to deal with these three cases. Later* I'll talk about some cases it doesn't deal with so well, but first I think I'll post a bit about another variant of the two-envelope problem that it does handle.
*Even Josh Marshall doesn't say "More on this later" as much as I do!

Last night I went to a movie with a few grad students and faculty here. Afterwards we went to a nice crepe shop called the Greenhouse Effect. They had an article about their shop from the Salt Lake Trib on the wall, containing this sentence (approximately; the article isn't online):
"Being committed environmentalists, the owners named their cafe the Greenhouse Effect, after the phenomenon that sustains life on earth [emphasis mine]."
[UPDATE: A correspondent e-mails a link to this page, which says that the "greenhouse effect" refers to the atmosphere radiating heat to the earth after absorbing radiation from the earth, which is different from the expected global warming due to an increase in the greenhouse effect. Well, all right. My apologies to the Trib.]
I was dumbfounded.
In the previous post, we saw that the two-envelope paradox arose from unrestricted application of this principle:
Take a partition P of the probability space. Suppose that, for every cell C of the partition, strategy S yields a greater EU [expected utility] than strategy T given that you're in cell C. Then strategy S is preferred over strategy T.
The principle can be proved to maximize EU whenever P is a finite partition, but it just can't be applied to every case in which P is an infinite partition.
But it seems like there are some cases in which you do want to apply the principle over an infinite partition. The problem is to come up with a principle that gets the right answer in these cases without yielding paradoxical or wrong results.
Take what I'll call the Extra Coin-Flip (the case Brian mentions in his first comment here):
God runs a St. Petersburg bet and puts the result in an envelope. An angel comes down and hands you the envelope and says, "You can take this envelope, and you'll get the number of utility units there are in the envelope. Or you do the following: Give me a util, and I'll flip a coin. If it comes up heads, you get what's in the envelope plus four utils; tails, you just get what's in the envelope, and lose your util."
Here, you get what's in the envelope no matter what happens. The only question is whether you're also going to spend a util to get a 1/2 chance of four utils. If you're an EU-maximizer, it seems obvious that you should, no matter what number is in the envelope.
But when I said "No matter what number is in the envelope," I introduced an infinite partition. And the moral of the two-envelope paradox was supposed to be that the decision principle breaks down when you're faced with infinite partitions. Except in the Extra Coin-Flip it seems as though the principle shouldn't break down--because the outcome of the St. Petersburg is just irrelevant to the gains or losses you can expect from taking the coin flip.
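The irrelevance claim shows up clearly in a quick Monte Carlo sketch (mine, purely for illustration): whatever the St. Petersburg pays, taking the flip nets you +4 or -1 relative to standing pat, so the envelope value cancels out of the comparison.

```python
import random

random.seed(0)

def st_pete():
    """One St. Petersburg bet: flip until heads; payout 2**n on flip n."""
    n = 1
    while random.random() < 0.5:
        n += 1
    return 2 ** n

trials = 100_000
diff = 0.0
for _ in range(trials):
    v = st_pete()                              # envelope value
    gain = 4 if random.random() < 0.5 else -1  # net effect of taking the flip
    stand, flip = v, v + gain
    diff += flip - stand                       # v cancels: this is just gain

print(diff / trials)  # close to the expected gain of 1.5
```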
This case, like the original two-envelope and the St. Petersburg two-envelope, involves more than one indeterministic process:
In the original two-envelope problem, you have, first, the process of God choosing the number x such that x and 2x go into the envelopes, and second, the process of the angel choosing one of the envelopes to give to you.
In the St. Pete's two-envelope problem, you have, first and second, the process of God running the two St. Pete's and putting the results in the envelopes, and third, the process of the angel choosing one of the envelopes to give to you.
In the Extra Coin-Flip, you have first, the process of God running the St. Pete's, and second, the process of the angel flipping the coin.
In the Extra Coin-Flip, it seems possible to disentangle the payoffs of the coin-flip from the payoffs of the St. Petersburg. So you can evaluate the coin-flip independently, without worrying which of the infinitely many possible payoffs the St. P had. The challenge is to characterize this intuitive judgment somehow.
Next, my first stab at a solution.
This post won't say anything new, it'll just explain how the two-envelope problem works and how the paradox arises. The basic two-envelope problem is explained here. The St. Petersburg variant is here. Here is Brian's rant about how it's well-known what the faulty step is. Or, if you don't want to click all those links, read on....
The basic two-envelope problem is as follows:
God writes a rational number on a slip of paper, and writes twice that number on another slip of paper.* Each slip goes in an envelope. He sends an angel down to you who gives you one of the envelopes randomly. The angel says: "You can open that envelope and read the number, and God will give you that many units of utility.** Or you can exchange that envelope for the other envelope, and God will give you as many units of utility as the number in the other envelope." Should you switch?
Argument that you should: Suppose the number written in your envelope is x. There's a 1/2 chance that the number written in the other envelope is x/2, and a 1/2 chance that it's 2x. The expected utility [EU] of a 1/2 chance of x/2 and a 1/2 chance of 2x is 5x/4, which is greater than x. So you should switch.
Paradoxical consequence of this argument: A symmetrical argument, taking x to be the number in the other envelope, tells you that switching will make you worse off. Both conclusions can't be right, but the two arguments are exactly parallel (so both are wrong).
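The conditional arithmetic behind the switching argument, in a short Python sketch (mine, for illustration):

```python
# The (fallacious) conditional computation: given that your envelope
# shows x, treat the other as x/2 or 2x with probability 1/2 each.
def eu_other_given_mine(x):
    return 0.5 * (x / 2) + 0.5 * (2 * x)

for x in (4, 10, 100):
    assert eu_other_given_mine(x) == 1.25 * x  # 5x/4 > x: "so switch"
```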
*On the original formulation, each rational number is equally likely--but Brian pointed out that there's no way to define a probability distribution on which that's true. Eventually I hope to say something about what this means for decisions under risk vs. decisions under uncertainty.
**I'm not happy with the idea of utility units, and I'm really not happy with the idea of unbounded utilities--but let that pass. It may come up again in a later post.
The St. Petersburg variant, due to David Chalmers, gives us a situation where we actually can define a probability distribution, but the same problem arises. A St. Petersburg bet works as follows: A coin is flipped until it comes up heads. If it comes up heads on the nth flip, you get 2^n utility units. The EU of the St. Pete is: 1/2 chance of 2 + 1/4 chance of 4 + 1/8 chance of 8 + ..., i.e., 1 + 1 + 1 + ..., which is infinite.
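Each term of that series contributes exactly one util, which a few lines of Python make vivid (a sketch, not from the original post):

```python
# Partial sums of the St. Petersburg expectation: the nth term is
# (1/2**n) * 2**n = 1, so the sum after N terms is N -- it diverges.
def st_pete_partial_eu(n_terms):
    return sum((0.5 ** n) * (2 ** n) for n in range(1, n_terms + 1))

print(st_pete_partial_eu(10))   # 10.0
print(st_pete_partial_eu(100))  # 100.0
```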
So the St. Petersburg two-envelope problem is as follows:
God runs two St. Petersburgs and writes the result in each envelope. The angel randomly gives you one envelope, as above, and offers to let you switch, as above. Should you?
Argument that you should: Suppose the number written in your envelope is 2^n. No matter what that value is, the EU of the number in the other envelope is higher--since the EU of the other envelope is infinite. So no matter what you have, you could expect to gain by switching. So (if you're an EU maximizer), you should switch.
Paradox: Exactly as in the original two-envelope case. An exactly symmetrical account tells you that you'd be better off with the envelope you've already got.
What goes wrong? Chalmers and Brian point out that each argument seems to use the following principle (my formulation):
Take a partition P of the probability space. Suppose that, for every cell C of the partition, strategy S yields a greater EU than strategy T given that you're in cell C. Then strategy S is preferred over strategy T.
(Here P is the partition yielded by dividing probability space up according to the number in your envelope; S is switching envelopes; T is standing pat.)
This principle is guaranteed to work when P is a finite partition. But when P is an infinite partition, and in particular when strategies S and T have an infinite EU, the principle won't always work.
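For a finite partition the guarantee is just the law of total expectation: the unconditional EU is a probability-weighted average of the conditional EUs, so cell-by-cell dominance carries over. A toy Python check, with made-up numbers:

```python
# Three-cell partition with made-up probabilities and conditional EUs.
# S beats T in every cell, so it beats T overall.
probs = [0.2, 0.3, 0.5]
eu_S = [5.0, 2.0, 7.0]   # EU of S given each cell
eu_T = [4.0, 1.0, 6.5]   # EU of T given each cell (lower in every cell)

total_S = sum(p * e for p, e in zip(probs, eu_S))
total_T = sum(p * e for p, e in zip(probs, eu_T))
assert total_S > total_T
print(total_S, total_T)
```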
So it might seem that we should throw out our decision principles whenever we're faced with infinite partitions of probability space. But there are cases in which it seems like the decision should be obvious, even though there's an infinite partition. I'll talk about those in the next post.
The name of this blog comes from Locke:
"So much as we ourselves consider and comprehend of truth and reason, so much we possess of real and true knowledge. The floating of other men's opinions in our brains, makes us not one jot the more knowing, though they happen to be true. What in them was science, is in us but opiniatrety; whilst we give up our assent only to reverend names, and do not, as they did, employ our own reason to understand those truths which gave them reputation."
I chose the name because:
(1) My main field is the epistemology of testimony (you can download my whole dissertation from this page, or find some other papers here)
(2) As a position in the epistemology of testimony, the view in Locke's quote is a complete non-starter; frequently you have to take other people's word for something, or you'll never be able to confirm anything for yourself
(3) Given the view Locke is expressing, it's kind of appropriate that (2) is true
(4) The only reason Locke used the word "opiniatrety" is that the word "blog" hadn't been invented.
Brian Weatherson has been after me to start a blog for a little while, and here it is--I am nothing if not a slave to my fans. My plan is to focus on analytic philosophy-type stuff--some coherent arguments, some bitty puzzles, some links to my papers, some cheap shots. Also I'll probably talk about music and books some (my tastes skew esoteric and pretentious), and there will be the occasional random nonsense. Actually, those categories may overlap. You'll notice I didn't mention politics, though--you can probably guess my politics from my blogroll, but I'd like to keep this site free of them (it?).
Thanks to the King of Fools for getting this site running.