March 09, 2004

If You're So Smart, Why Ain'tcha Rich?

[Title from David Lewis, of course.]

In this TAR thread, Brian says he doesn't understand why Nash equilibria* are especially interesting, or why conforming to a Nash equilibrium should be taken to be the essence of rationality:

In the most famous game of all, Prisoner's Dilemma, we know that the best strategy in repeated games is not to choose the equilibrium option, but instead to uphold mutual cooperation for as long as possible.

This annoyed economist Kevin Quinn:

We play Nash when we are rational and respect one another's rationality; playing anything other than the Nash means we think our opponent is either irrational or is mistaken about what we will do.

Not too surprisingly, I'm with the philosopher over the economist. In fact, I think that this whole debate reveals a problem with economists' conception of rationality, and shows how they would be better off paying more attention to philosophers.

*When none of the players in a game can make herself better off by unilaterally changing her strategy, we have a Nash equilibrium. There's always at least one Nash equilibrium, at least in games with finitely many players and strategies, if mixed strategies are allowed (that's Nash's theorem). In the Prisoner's Dilemma, the only Nash equilibrium is for both players to defect. I'm told that the situation that inspires Nash's big breakthrough in the movie A Beautiful Mind wasn't even a Nash equilibrium.

The Prisoner's Dilemma is supposed to model many cooperative situations, where everyone will be better off if everyone cooperates, but each individual has something to gain by not cooperating. According to the Nash equilibrium strategy, a rational person should realize that, no matter what everyone else does, she will be better off if she does not cooperate. So, if everyone is rational, no one will cooperate, and everyone will be worse off than if they were all irrational and cooperated.
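To make that reasoning concrete, here's a minimal sketch in Python. The payoff numbers are the conventional illustrative ones; nothing hangs on them beyond their ordering:

```python
# Prisoner's Dilemma payoffs for the row player, written as negated
# years in prison so that bigger is better. The exact numbers are
# illustrative; any payoffs with the same ordering work.
PAYOFF = {
    ("cooperate", "cooperate"): -1,   # both keep silent: 1 year each
    ("cooperate", "defect"):    -10,  # I keep silent, my partner snitches
    ("defect",    "cooperate"):  0,   # I snitch, my partner keeps silent
    ("defect",    "defect"):    -5,   # both snitch: 5 years each
}

OTHER = {"cooperate": "defect", "defect": "cooperate"}

# Dominance: whatever the other player does, I do strictly better
# by defecting.
for their_move in ("cooperate", "defect"):
    assert PAYOFF[("defect", their_move)] > PAYOFF[("cooperate", their_move)]

def is_nash(mine, theirs):
    # Symmetric game: an outcome is a Nash equilibrium if neither
    # player gains by unilaterally switching moves.
    return (PAYOFF[(mine, theirs)] >= PAYOFF[(OTHER[mine], theirs)]
            and PAYOFF[(theirs, mine)] >= PAYOFF[(OTHER[theirs], mine)])

assert is_nash("defect", "defect")            # the only equilibrium here
assert not is_nash("cooperate", "cooperate")
# ...and yet both players do worse at the equilibrium (-5 each) than
# they would by jointly cooperating (-1 each). That's the point.
```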

I think this is a reductio ad absurdum of the economic conception of rationality. The whole point of economic rationality is to achieve what's best for you. If a group of allegedly rational people all achieve what's worst for them by acting rationally, that shows that they've got the wrong conception of rationality.

My attitude toward this question is somewhat like my attitude toward foundationalism in epistemology. I think that I am justified in believing that the earth goes round the sun. There are various foundationalist views on which justification is defined in terms of what can be inferred from various starting points according to various rules. On some of those views, I may not be justified in thinking the earth goes round the sun--I may not be justified in believing in the earth, or in believing the testimony of others about astronomy. In my opinion, that shows that those views get justification wrong, not that I should restrict my beliefs to what's justified by those views.

Similarly, I am more certain that it is rational to cooperate in (some) Prisoner's Dilemma situations than I am of any particular conception of rational decision-making. If the economic conception of rationality says that we should always defect, then the economic conception just doesn't capture what it is to decide rationally. Period.

Here we're in the vicinity of Gibbard and Harper's famous remark about Newcomb's problem: Approximately, if someone decides to reward irrationality, then those who behave irrationally will be rewarded. Arntzenius, Hawthorne, and Elga take a similar line about what happens to agents who can't bind themselves to strategies when faced with one of their infinite decision problems. To which I respond: If you're so smart, why ain'tcha rich? If someone decides to go around giving $50 to all irrational people, then irrationality will be rewarded. But if you know in advance that the decisions you make will leave yourself worse off, then those decisions just aren't rational. And if you know in advance that the decisions you and everyone else will make will leave you all worse off, then those decisions just aren't perfectly rational.

Newcomb's problem and AH&E's infinite problems are sufficiently outré that we might just say: Here, rationality hits its limits. But the Prisoner's Dilemma is not like that. The Dilemma is meant to encapsulate many of the situations we face every day, and a theory of rationality on which it's not rational to cooperate has serious problems.

Frequently economists can get past their theory of rationality. I remember a Krugman column in which he urged everyone to vote, even though it wasn't rational (since the chances that a single vote will determine the outcome are minuscule), and a James Surowiecki column in which he discussed how we were all better off for the enforcement of certain norms even though it wasn't rational to enforce them (I think it was about the disgrace of the NYSE chairman--on the economic view, ex post facto punishment is cutting off your nose to spite your face). They came to the right conclusions--I just wish they wouldn't make the traditional obeisance to economic conceptions of rationality. There is more to rationality than economists dream of. Economists would be better off if they read what philosophers had to say about it.

Posted by Matt Weiner at March 9, 2004 03:20 PM
Comments

"The whole point of economic rationality is to achieve what's best for you. If a group of allegedly rational people all achieve what's worst for them by acting rationally, that shows that they've got the wrong conception of rationality."

Not necessarily. The issue, I think, is whether one member of the group has any control over how the other members of the group will choose.

Consider the one-shot Prisoner's Dilemma. The assumption must be that each prisoner is only looking out for himself; that is, he only cares how many years of prison he has to serve, not how many his partner in crime has to serve. Given that assumption, and given the fact that the prisoners are forbidden to communicate with one another, the best strategy is to snitch. Whatever your partner in crime does, you're better off snitching.

The conclusion is the same for an iterated Prisoner's Dilemma, as long as one prisoner cannot do anything to change the other prisoner's strategy. If, however, you have reason to believe that your refusing to snitch will make it more likely that your partner in crime will also refuse to snitch, then you might have reason not to snitch, depending on how the probabilities and sentence lengths turn out.
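A quick simulation makes both halves of that point concrete. Tit-for-tat below is just a stand-in for a strategy that responds to the other prisoner's moves, and the payoff numbers are illustrative:

```python
# Minimal iterated Prisoner's Dilemma. Payoffs are negated years in
# prison per round (illustrative numbers); C = keep silent, D = snitch.
PAYOFF = {("C", "C"): (-1, -1), ("C", "D"): (-10, 0),
          ("D", "C"): (0, -10), ("D", "D"): (-5, -5)}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Keep silent at first, then copy the other prisoner's last move.
    return their_history[-1] if their_history else "C"

def play(strategy1, strategy2, rounds=100):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1 = strategy1(h1, h2)
        m2 = strategy2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1)
        h2.append(m2)
        total1 += p1
        total2 += p2
    return total1, total2

print(play(always_defect, always_defect))  # (-500, -500): equilibrium play
print(play(tit_for_tat, tit_for_tat))      # (-100, -100): sustained cooperation
print(play(tit_for_tat, always_defect))    # (-505, -495): burned once, then defects
```

Two unresponsive defectors do exactly what the one-shot analysis predicts; two responsive players sustain cooperation; and a responsive player paired with a defector gets burned once and then defects for the rest of the game.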

Posted by: Jeff at March 9, 2004 06:33 PM

I guess I should say that I don't think it's always rational to cooperate--only sometimes. Whether it's rational can depend on the likelihood that the other person is cooperating. If you know that you're playing Prisoner's Dilemma with a lot of economically rational people, you'd best defect.

Given that, it's still not clear to me that it's rational for someone who wants what's best for herself to snitch, even if she doesn't think her strategy affects what the other person will do. Again--if everyone defects, everyone will be worse off. Why must we hold the actions of everyone else fixed when we calculate the consequences of our actions? If we are in a cooperative situation, why can't we reason as though we were acting collectively--"If we all do this, we will all be better off"? Here you have reason to worry about whether everyone will independently come to this conclusion, but it's not clear to me that that shouldn't be chalked up to a failure of rationality on the part of the defectors.

The economic conception of rationality--which you're expressing, I take it--assumes that the rational action is the one that has the best consequences for you, holding everyone else's choices fixed. I'm tempted to say--this is the right way to handle cases in which everyone else is of dubious rationality, but if you're allowed to assume that everyone else is rational, then part of what's at issue is the very idea of how rational people choose. Hence, if people (as a whole) get better outcomes by seeing themselves as collective agents, and doing whatever they would do as part of the best collective agent, then that's an argument for that conception of rationality.

But in doing so I would beg the question against economic rationality. So I think I should stick with the epistemic analogy--if a conception of rationality tells you to defect in all PD situations, it's failing one of the criteria for a good conception of rationality. A good conception of rationality isn't one that is almost guaranteed to leave everyone worse off if it's universally followed.

[My real target isn't philosophers who argue for economic rationality--it's economists who refuse to believe that there could be any other conception of rationality. Unfortunately, it's philosophers I'm going to get here, if anyone. See you in Pasadena!]

Posted by: Matt Weiner at March 11, 2004 06:12 PM

I'm not sure what you mean by holding everyone else's choices fixed. As long as you grant my assumption that in the Prisoner's Dilemma there's no reason to think that one prisoner's strategy affects the other prisoner's strategy, we don't need to make any assumptions about the rationality or irrationality of the prisoners. The point is that it doesn't matter how the other prisoner chooses: no matter what the other prisoner chooses, it is always better to snitch.

What does it mean to say that if everyone defects, everyone will be worse off? In a two-person Prisoner's Dilemma, there are four possibilities: (1) both prisoners snitch, (2) neither snitches, (3) only the first prisoner snitches, or (4) only the second prisoner snitches. It's true that if both prisoners snitch, the total number of years served by the prisoners is greater than for any other possibility. However, we are assuming, I take it, that each prisoner is only looking out for himself. Now, it's true that if both prisoners snitch, each prisoner individually is worse off than if neither prisoner snitches, but how can we rule out a priori the possibilities in which one prisoner makes a different choice than the other?

Here's a reason not to think of defection as a failure of rationality. Suppose you have two people in a Prisoner's Dilemma, one who reasons collectively and one who reasons non-collectively. The person who reasons collectively (doesn't snitch) will serve more years in prison than the person who reasons non-collectively (snitches). Use the word 'rational' however you please; I'd rather be the person who serves less prison time.

I admit that your way of thinking is very tempting. Indeed, for a number of years I argued vehemently that cooperation must be rational in the Prisoner's Dilemma. I finally came to realize, though, that my qualms about the problem of collective action were more about ethics than logic. One could argue for some kind of Kantianism, according to which rationality automatically includes morality, but as long as rationality does not require looking out for the interests of the group, it seems to me highly questionable to assume that individuals acting rationally would produce an outcome that is best for the group as a whole.

Posted by: Jeff at March 12, 2004 12:35 AM

What I meant by "holding the other prisoner's choices fixed" is more or less what you say: Arguing that, no matter which particular thing the other prisoner chooses, you will be better off if you snitch; as opposed to arguing that you yourself will be better off if the two of you collectively keep silent than if the two of you collectively snitch. I do mean to grant that there's no reason to think one choice affects the other, but to argue that that doesn't settle the question of what it is rational for the prisoner to do.

I am actually of at least two minds on this. One is something like David Gauthier's argument (as I'm reconstructing it at the moment): What's at issue is a choice between competing conceptions of rationality, and people who choose a conception of rationality on which one cooperates with people who share one's conception of rationality are better off--so one should choose that conception of rationality, and thus one has reason to cooperate with others who share that conception. This does seem to presuppose that one can tell what conceptions other people have, which is a dubious assumption--but the economic conception won't allow us to cooperate even on the assumption that we can tell that the other party has the Gauthierian conception.

[BTW, I think this argument presupposes that it's OK to one-box in the Newcomb problem. I don't mind that, but lots of people do.]

But I'm at least as much tempted by something Gauthier says in an early paper, "Morality and Advantage" IIRC: We should not expect to have non-moral reasons for moral actions (here, cooperating). It's just moral to cooperate in a wide range of situations, and that's the reason to do it. BUT--that doesn't make it irrational to do so. Maybe this is what you're saying when you say your qualms are more about ethics than logic, but economists' use of "rational" seems to bar anyone from having moral reasons to act.

(Which, again, is why I'm not bugged by philosophers who uphold this conception of rationality so much as by economists who stare blankly if you suggest there could be another.)

Posted by: Matt Weiner at March 13, 2004 11:22 AM

Well, I guess I'm not sure what the economic conception of rationality is supposed to be. If it required that I only look out for myself and ignore everyone else, then obviously it should be rejected. That seems like a straw man, though.

The Prisoner's Dilemma is designed so that moral issues don't really come up. Being criminals, the prisoners are 'ethically challenged' in the first place. And I don't think it could be argued that there is a moral obligation not to snitch. If anything, there's a moral obligation to snitch, admitting your wrongdoing to the authorities. When I was talking about ethical qualms, I was of course not thinking of the Prisoner's Dilemma literally but rather of problems of collective action (e.g., the tragedy of the commons) that may resemble the Prisoner's Dilemma. I still think that on any good conception of rationality, snitching is the rational action in the Prisoner's Dilemma, taken literally.

Suppose we vary the Prisoner's Dilemma so that at least one of the prisoners cares about how much prison time the other one gets. Now it's not so clear what would be rational for our 'caring' prisoner. We could rank the prisoner's preferences about the various possible outcomes, but that by itself wouldn't necessarily determine whether an action is rational. In the original Prisoner's Dilemma it was a lot simpler, because the preference was always for the outcome resulting from snitching, no matter what the other prisoner did. If we have to worry about what the other prisoner does, then we have to assign probabilities to his possible actions. We could further rank the 'caring' prisoner's preferences with regard to his attitude toward risk, and then we might be able to decide the question of rationality relative to a given attitude toward risk. In my view, though, it's not clear either that we ought to accept any given attitude toward risk as rational or that there is a uniquely rational attitude toward risk.
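Here's a rough sketch of the 'caring' prisoner in the expected-utility style. The 'care' weight and the probabilities are invented for illustration, and plain expected utility is risk-neutral, so this flattens out the attitude-toward-risk question I just raised:

```python
# Years in prison for (my move, partner's move); C = keep silent,
# D = snitch. Illustrative numbers.
YEARS = {("C", "C"): (1, 1), ("C", "D"): (10, 0),
         ("D", "C"): (0, 10), ("D", "D"): (5, 5)}

def expected_utility(my_move, p_partner_snitches, care):
    # Utility = -(my years) - care * (partner's years). A 'care' of 0
    # is the purely selfish prisoner; 1 weighs the partner's time
    # equally with one's own. Both parameters are made up here.
    eu = 0.0
    for their_move, prob in (("D", p_partner_snitches),
                             ("C", 1 - p_partner_snitches)):
        mine, theirs = YEARS[(my_move, their_move)]
        eu += prob * (-mine - care * theirs)
    return eu

for care in (0.0, 0.5, 1.0):
    for p in (0.1, 0.9):
        best = max(("C", "D"), key=lambda m: expected_utility(m, p, care))
        print(f"care={care}, P(partner snitches)={p}: choose {best}")
```

With these particular numbers, the purely selfish prisoner snitches no matter what, the fully caring one keeps silent, and in between the answer flips with the probability--which is just the dependence on probabilities I was pointing to.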

Posted by: Jeff at March 13, 2004 02:38 PM

I actually think you're probably right about the Prisoner's Dilemma as such. If it's given that the prisoners are positively unethical (of course we don't know what they're being charged with!) then, in Gauthierian terms, there's at least some reason to think the other guy's conception of rationality is such that he will defect. Then, even on Gauthier's favored conception of rationality, you should defect. Alternatively, if you simply take the line that it's not irrational to behave morally, then there aren't any moral reasons to cooperate here.

Note that morality, in cases where cooperating is moral, doesn't so much require that you care about what happens to the other person as that you take morality into account--perhaps that you care about being moral. (Revealed preferences may provide a way for economists to address these cases anyway, but IMO a weak way.)

As for the economic conception of rationality: Economists employ a particular conception of rationality, as can be seen in the first three sentences of this link:

It is a fundamental principle in economics that individuals, acting independently, in a decentralized manner, and according to their own self-interest, will reach a "desirable" state on their own accord, without the help of government or other intervention. Rationality is a key assumption made on human behavior that leads to this nice result. Rationality is well-defined in economics, being characterized by a handful of axioms.

I haven't been able to Google up the list, but I'm pretty sure it includes not choosing a dominated strategy; and as we've seen, self-interest already is assumed. And Paul Krugman did say that voting is irrational--apparently on the assumption that rationality is determined by a narrow calculation of self-interest, and that it's in your self-interest to free ride. (First three and last paragraphs only are relevant here.) So I fear that lots of economists really do hold the straw man conception of rationality.

I like your ultimate conclusion--it's not clear that there is a uniquely acceptable rational attitude towards risk. I think in PD-type cases (and in many others) there are reasons to do both things, and that it's unclear whether one reason should override the other. That's currently scheduled to be my fifth or sixth book....

Posted by: Matt Weiner at March 15, 2004 08:42 AM