April 19, 2004

More On Ecumenicism

I'd like to add a bit to my entry on ecumenicism in epistemology and the metatheory/normative theory distinction. Hopefully this will be a bit clearer (to those who made it through the previous sentence).

I argued that Newcomb's problem and the problem of what justification is are different enough that ecumenicism makes sense for justification but not for Newcomb. Newcomb's problem is a normative problem; it asks the question "What should I do in this case?" The answer is an action--the subject has to take one or both boxes. The definition of justification is metatheoretical; it asks the question "What property should we use the word 'justification' for?" The answer is a conceptual definition, and we can simply decide to define two senses of 'justification', an internalist and an externalist one.

But here's an objection: Newcomb's problem is meant to illuminate theories of rationality. We're supposed to conclude that it's rational to maximize expected utility, or that it's rational to act according to causal decision theory. Isn't this a metatheoretical question about the use of 'rationality'? Shouldn't it be analogous to the metatheoretical question about 'justification'?
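(An aside to make that concrete: the two decision theories really do come apart on the standard version of the case. Here's a rough Python sketch, using the usual illustrative numbers rather than anything from this post--$1,000,000 in the opaque box just in case the predictor foresaw you taking only that box, $1,000 always in the transparent box, and a predictor who's right 99% of the time.

    # Rough sketch (illustrative numbers, not from the post) of how
    # evidential and causal decision theory diverge on Newcomb's problem.
    ACCURACY = 0.99                      # P(prediction matches your choice)
    MILLION, THOUSAND = 1_000_000, 1_000

    # Evidential decision theory: your choice is evidence about the
    # prediction, so condition the opaque box's contents on what you do.
    edt_one_box = ACCURACY * MILLION
    edt_two_box = ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

    # Causal decision theory: the prediction is already fixed, so use an
    # unconditional credence p that the opaque box is full. Two-boxing
    # then comes out $1,000 ahead no matter what p is.
    p = 0.5                              # arbitrary prior credence
    cdt_one_box = p * MILLION
    cdt_two_box = p * (MILLION + THOUSAND) + (1 - p) * THOUSAND

    print(f"EDT: one-box {edt_one_box:,.0f} vs. two-box {edt_two_box:,.0f}")
    print(f"CDT: one-box {cdt_one_box:,.0f} vs. two-box {cdt_two_box:,.0f}")

On those numbers, evidential reasoning favors one-boxing ($990,000 vs. $11,000) and causal reasoning favors two-boxing, which is why the case is supposed to teach us something about 'rationality'.)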

One answer may be--to say "Action A is rational" is to say "Do A." So the metatheoretical question turns into a normative one. But then you can (so the objection goes) raise the same question about belief--to say "Belief P is justified" is to say "Believe P." If that's true, then you can't be ecumenical about justification any more than about rationality--you'll be counselling someone both to believe and not to believe something.

Two responses to the objection:

(1) [This depends on a difference between theoretical and practical reasoning.] "Believe P" isn't actually advice, because we can't believe or not believe at will. This means that any counsel about what to believe is moot. We can, on the one hand, advise people about what actions to perform to get their beliefs in working order, and, on the other hand, evaluate the beliefs they do have according to various good-making properties.

Epistemic justification addresses the other hand only; the one hand is a kind of practical rationality about achieving epistemic ends. So, anyway, I claim--there's a kind of purely epistemic justification that doesn't allow that beliefs can be justified merely because the believer is incapable of believing otherwise. Then the believer may be free from blame, but the belief can still be criticized. There's nothing to stop us from using two different criteria to evaluate the belief. In contrast, in Newcomb's problem you're actually looking for practical advice, and there's no room for criticizing the action without criticizing the actor who chose it.

(2) In Newcomb's problem the actor has full information (well, except about what's in the opaque box--in most versions). There's a debate over rationality that's more analogous to the internalism/externalism debate: Should you adopt the course of action that actually will have the best consequences, or the course of action that you expect to have the best consequences? (That question, I admit, carries a presupposition that not everyone will grant.)
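(To put a toy number on that contrast--my example, nothing from the debate itself: suppose your evidence strongly favors a bet that, unbeknownst to you, is rigged against you.

    # Hypothetical toy case: a bet your evidence favors but that in fact
    # loses. The 'expected' standard goes by the expected value on your
    # evidence; the 'objective' standard goes by the actual consequences.
    p_win_on_evidence = 0.9       # your credence in winning, on your evidence
    payout, stake = 10.0, 5.0

    expected_value = p_win_on_evidence * payout - (1 - p_win_on_evidence) * stake
    actual_value = -stake         # in fact the game was rigged; you lose

    print(f"Expected value on your evidence: {expected_value:+.2f}")   # +8.50
    print(f"Actual result of taking the bet: {actual_value:+.2f}")

By the expected-consequences standard, taking the bet was the thing to do; by the actual-consequences standard, it wasn't.)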

There are those--I think Parfit does this--who say that you have reason to do something if it would have a good consequence, even if you have no way of knowing that that good consequence will come to pass. But I don't think many people who say that will also deny that there is some sense of rationality in which it is practically rational to do what you have epistemic reasons to think you have practical reason to do. For instance, Keith DeRose says that we don't blame someone for violating the knowledge norm of assertion when they have reason to think that they knew what they said, calling this a secondary norm that derives from more general norms.

Let's transfer this to the epistemic case, supposing that you could choose your beliefs at will. Which beliefs should you choose? On externalism, those beliefs that are justified by the externalist criterion of justification. What if you choose a belief that is internalist-justified but not externalist-justified? Well, then you haven't believed what you have most reason to believe. But you had no way of telling what you did have most reason to believe. In fact, given the information you had, it could only seem that what you had most reason to believe was exactly what you did believe. So it seems as though the externalist ought to say, along with the externalist about practical reason, that there's a sense in which you were rational to believe what you did, even though you didn't believe what was justified.

In short: In some sense, the externalist should admit that it's OK to believe what's internalist-justified. The advice "Believe what's externalist-justified!" is not always advice that can be followed, no matter how smart you are, if you don't have enough information. Then the best you can do toward following that advice will be to believe what's internalist-justified.

This argument puts pressure on the externalist to be ecumenical, but I don't think there's any similar pressure on the internalist. The internalist should acknowledge that there is more than one way to evaluate beliefs--internalist justification is the best attempt at a goal that's partly determined by externalities. But I think that goal is truth. Externalist justification is an uneasy compromise between truth, which we actually want, and internalist justification, which is the best we can do.

Posted by Matt Weiner at April 19, 2004 01:24 PM
Comments

I'm not exactly sure how to tie this into the big picture here, but I'm not sure that externalists should admit, in quite the sense you want them to, that there is always a sense in which someone is rational just in case they are internally justified (in actions or beliefs). The quote from DeRose has it that we wouldn't blame someone for failing to live up to an external norm if he complied with a secondary norm. But to say we wouldn't blame someone for X-ing is in no way to say that X-ing was thereby what they ought to have done. It's in fact to say that they failed to do what they ought to have done - what there was most justification for doing - but that their failure was understandable, or even inevitable in the circumstances.

Also - and again, I'm not sure how relevant this is - I'm a bit unclear about the contrast you want to draw between practical and theoretical reason when you suggest that "believe p" isn't advice because we can't believe at will. That's because it doesn't seem that we can just do things at will either - as doing things requires certain desires, on most stories, and we can't desire at will. Given that "do X" clearly counts as advice despite this, why shouldn't "believe X"?

Posted by: jonathan way at April 19, 2004 02:14 PM

Jonathan, in re the second paragraph: You're right that there are lots of things you can't do at will--in an obvious sense, you can't hit the head pin at will when bowling (I can't, anyway), but "Hit the head pin" counts as advice. What I mean is that you can't even try to believe--you can try to hit the head pin, and you may or may not succeed. At best you can try to bring about a belief.

As for action requiring desire, I'm not sure I buy the belief-desire psychology behind this. But if I do, I'll say--the desire would come before the will, so the fact that you can't desire at will won't matter here. The point, maybe, is that it doesn't even make sense to think of someone willing to believe that p (in most cases), but it does make sense to think of someone willing to do A, even though thinking of that involves thinking of them as having a desire to do A, which they can't adopt at will.

(I'll try to get to the first paragraph later.)

Posted by: Matt Weiner at April 19, 2004 06:49 PM

[T]o say "Action A is rational" is to say "Do A."

Response (3) is: No it isn't. "Do A" is sometimes implicated by "Action A is rational," but those implicatures are cancelled in this case (because we all know we're talking about a hypothetical Predictor, or because we're doing philosophy, or because we're aware that the two notions of rationality diverge on this question, or whatever). "Do A" isn't part of the content of "Action A is rational," so the meta question isn't a normative one.

(Compare "Actions that maximize overall happiness are morally right" with "Do actions that maximize overall happiness". The first doesn't imply (whatever that would mean) the second, nor does it implicate it - at least when we cancel it by realizing that utilitarianism qua theory won't always help us decide what we ought to do. The second sentence is bad advice, and it's not advice that one commits oneself to giving when one adopts utilitarianism.)

Posted by: Allan Hazlett at April 20, 2004 09:49 AM

I've got no problem with saying that--I think it prevents the objection from getting off the ground more than it undermines my objection. If you don't think that the meta question is normative, then there's no reason to reject ecumenicism about practical rationality. You can say: in one respect one-boxing is rational, and in another respect two-boxing is rational.

Of course these notions of rationality won't do you a damn bit of good when you're trying to figure out what actions to carry out, but we already agreed that that's not what a notion of rationality is meant to do.

I actually lean toward thinking this is more or less right. There's no univocal conception of rationality to which we can appeal when we're deciding what to do. We just have to do things, and those things can be evaluated by various standards, none of which is the standard of rationality. This is something I've been chewing over for a while (and will probably continue to chew over a while longer).

[As for the point about utilitarianism--it's bad advice because it's not doable, but it is effective with respect to the goal we should have. So I think you do commit yourself to it; you just don't commit yourself to "Try to do actions that maximize overall happiness."]

Posted by: Matt Weiner at April 20, 2004 03:54 PM