January 16, 2007

Why Truth Is the Norm of Credibility

The positive part of my account of the norms of assertion is the argument that testimony, at least, is subject to a norm of truth: if you assert something false, you should lose credibility. That argument is discussed in a long blog post here and at even greater length here.

One of the obvious questions is "Why should the truth of your assertions be what counts; isn't the important thing for future credibility whether your assertion was justified?" That's a good question; but I still think truth should be seen as the primary norm of assertion, because it's usually much easier to judge truth than justification, and so judging credibility by justification (among other things) will lead you to give too much credence to smooth talkers who can come up with plausible-sounding explanations of why their past false assertions were justified.

The comments about 20/20 hindsight here and here support that view, I think. (via Daniel Davies.) That's actually a serious philosophical claim; in most cases trying to judge justification rather than truth will send you wrong, because it's easier to fudge judgments of justification the way you'd like. As Mark Thoma says (in the second link):

maybe we don't drum you out of the [pundit] profession -- there aren't simply two extremes where we listen fully or don't listen at all -- but we are going to pay less attention to what you have to say. That's how it goes when you are wrong about important things.

The mildly philosophical point made, some petty political point-scoring below the fold.

Especially rich is Jane Galt's disquisition on hindsight bias, following her claim that "precisely none of the ones that I argued with predicted that things would go wrong in the way they did" -- a claim that I suspect rests on a bit of cherry-picking of dovish arguments. (It's proverbial that J.G.'s liberal friends have convenient opinions.) Also rich is this:

I don't see any way that I could have known, without actually checking, that he didn't have at least an advanced [WMD] programme.

I seem to remember one Hans Blix doing some actual checking on Saddam's WMD programs, and being utterly reviled for it. Also, I believe many war opponents argued that the existence of chemical and biological weapons programs wasn't a serious threat. Etc.

For what it's worth, my position (and I don't think I was in print on this) was that war is almost guaranteed to have some bad consequences and is very likely to have unpredictable results, some of which could be quite bad. (Cf. Quiggin: few wars go well for those who start them.) As such, we're obliged not to start one without a very good reason. None of the reasons that were presented were good. For about five minutes I thought that war might be a good idea because Saddam might develop nuclear weapons and Very Bad Things could ensue, but then I decided that for the Very Bad Things to happen absent war, about three unpredictable things had to happen, whereas war itself is only one or two unpredictable things away from Very Bad. In hindsight, I feel pretty good about this.

One last cheap shot; I believe this:

As I see it, doves have, in effect, benefitted from winning a random game. Not that the result was random--obviously, there was only one true state of the world. But at the time of making the decision, the game was random to the observer, with no way to know the true state until you open the box and poke the cat

makes a hash of the Schrödinger's Cat example; the point of that example is that there isn't one true state of the world until an observer opens the box. (I could be wrong, though.)

Posted by Matt Weiner at January 16, 2007 07:37 PM

I don't think you even need to bring up the specter of smooth talkers. If we're going to use witnesses' past records to judge credibility, it's perfectly reasonable to use their past records of correctness as opposed to justification. Suppose there is an ensemble of pundits ...

Now that you've finished gouging your eyes out at that idea, suppose that set A of them have a record of making predictions that are well-justified on the evidence accessible to them but turn out wrong, while set B's arguments maybe seem a little less well-justified, or use different packets of evidence, but turn out right more often. Based on past records, are you going to go with the well-justified but possibly sophistical As, or the Bs? Are the Bs lucky, or do they have a more reliable nose for good evidence, or maybe just a better feeling for picking horses? The Bayesian doesn't know the mechanism, but she knows which gang of idiots to bet the retirement money on.
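That Bayesian point can be made concrete with a toy simulation. Everything here is made up for illustration: two hypothetical pundit sets with hidden hit rates, an observer who tracks only correctness (truth) with a uniform Beta(1, 1) prior on each hit rate, and no model of any mechanism.

```python
import random

random.seed(0)

def track_record(hit_rate, n=100):
    """Simulate n predictions; True means the prediction came out right."""
    return [random.random() < hit_rate for _ in range(n)]

def posterior_mean(record):
    """Posterior mean of the hit rate under a uniform Beta(1, 1) prior."""
    hits = sum(record)
    return (hits + 1) / (len(record) + 2)

# Set A: well-argued but right only 40% of the time.
# Set B: less polished but right 70% of the time.
a_mean = posterior_mean(track_record(0.40))
b_mean = posterior_mean(track_record(0.70))

print(f"estimated hit rates -- A: {a_mean:.2f}, B: {b_mean:.2f}")
print("bet the retirement money on:", "B" if b_mean > a_mean else "A")
```

The observer never inspects anyone's arguments; the track record of correctness alone sorts the two sets.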

As for net experts on the war, if it was "random" before the box was opened, why were they so certain at the time? "In matters of experiment, chance favors the prepared mind," Pasteur said.

Posted by: Ben at January 16, 2007 09:29 PM

Yeah, that's another argument for the position. In a single case, justification may be a better indication than truth, but in the long run, truth is a better predictor of future credibility. Or rather, in the long run, they should converge, and if they don't that's an indication that your judgments of justification are going wrong. (I would tend to say that the reliable nose and the better feeling actually count as increasing their justification.)

Posted by: Matt Weiner at January 16, 2007 10:05 PM

I remember thinking the same thing about her cat example. But I was happy with it, because she was making the larger point that it's really only the subjective credence that counts, not the objective probabilities (regardless of whether or not there are such things in the relevant circumstances).

Posted by: Kenny Easwaran at January 16, 2007 11:51 PM

Yes, that's what makes my cheap shot particularly ill-mannered. Though I might want to quibble about whether it's subjective credence per se as opposed to the credence that's justified by the relevant evidence.

(I think that credence was very different from what JG is saying it was, but that probably doesn't need repeating.) [UPDATE: That sentence edited for clarity.]

Posted by: Matt Weiner at January 17, 2007 09:31 AM

Hi Matt --

A few comments:
1. I think, as a descriptive claim, you're probably right that truth, not justification, is the norm of credibility. Since we're in the middle of the NFL playoffs, the cases I'm thinking about are those where a coach makes an irrational gamble (in the sense that the expected value of the play is less than the expected value of some other available play), but the team gets lucky and succeeds. Just about everyone in the commentariat then says what a great call it was -- and I gnash my teeth, because it was irrational. (If you don't like that example, take a lottery winner, where it really is clearly a chance outcome.)
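The expected-value comparison behind that complaint is simple arithmetic. The probabilities and point values below are invented for illustration; the point is only that a lucky success doesn't change which call had the higher expected value.

```python
def expected_points(p_success, points):
    """Expected value of a play: success probability times the payoff."""
    return p_success * points

# Hypothetical 4th-down call: go for the touchdown (20% chance of ~7
# points) or kick the field goal (85% chance of 3 points).
gamble = expected_points(0.20, 7)
kick = expected_points(0.85, 3)

print(f"gamble EV: {gamble:.2f}, kick EV: {kick:.2f}")
print("rational call:", "kick" if kick > gamble else "gamble")
```

If the gamble happens to succeed, the outcome was better, but the call was still the one with the lower expected value.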

2. Since I am (nominally) a philosopher of science, I wonder whether you really want _truth_ as the norm of credibility, as opposed to _empirical adequacy_ (= truth about observable properties, processes, etc.). In the case of scientific theories, it seems to me much easier to judge whether a theory is justified given the present evidence -- because we never know whether any theory is true.

3. You write: "the Schrödinger's Cat example, the point of this is that there isn't one true state of the world until an observer opens the box. (I could be wrong, though.)"

I think (at least on what's probably the most common way of interpreting the QM formalism) that's not quite right. There is 'one true state of the world' when the cat is in the box; it's just that this state is a superposition of |alive> and |dead> (or whatever quantities you're looking at). When you open the box and look at the cat, the 'one true state' changes to one of either alive or dead. But there is an increasing tendency, I think, to 'take superpositions seriously.' (I heard David Albert, in a session at the 05 Eastern APA, say: "It's here, it's clear, get used to it.")
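In symbols (a standard textbook rendering of the thought experiment, nothing specific to this thread):

```latex
% Before the box is opened, the cat-plus-apparatus system is in one
% definite quantum state -- a superposition:
\[
  |\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr)
\]
% On the simple collapse reading, opening the box changes this one
% true state into either |alive> or |dead>, each with probability 1/2.
```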

There is a 'many worlds' interpretation of QM, and it would be closer to what you've said here, but I'm pretty sure that that is very much a minority view.

Posted by: Greg F-A at January 17, 2007 10:55 AM

Greg, thanks!

1. I do not, not, not want this to be a descriptive rather than a normative claim. I don't think the descriptive claim, "People who speak the truth are trusted in the future," would ground the claim that truth is a norm of assertion. (See the post referenced in the first link above.) I also don't think it's true. (See that post again, or the 2004 election.)

2. "[W]e never know whether any theory is true" -- sounds controversial, but I don't have a developed enough view of the epistemology of science to contradict it. In that case I'd probably go for empirical adequacy. One thing here is that it's easier to judge whether scientist A's past prediction is justified by the current evidence than whether it was justified by A's past evidence, so there's still a divide between "the norm on A's prediction was whether it was justified" and "the norm on A's belief is whether it was true/adequate."

3. Yeah, I got too fancy there -- I was right to say that JG messed up the analogy but wrong about exactly how. My credibility is dinged a little.

Posted by: Matt Weiner at January 17, 2007 11:48 AM

JG's original statement is mixing up an analogy about determinism and an analogy about quantum mechanics. She wants to say that the result wasn't random, it was predetermined, but that observers had zero knowledge of what it would turn out to be. (This is transparently self-absolving; if one is wrong and others are right, calling the others lucky guessers is the pundit's equivalent of the fox and the sour grapes. But I digress.)

In the QM analogy of Schrödinger's cat, the lethal apparatus is triggered by a process - say a radioactive decay - that is truly random, not deterministic. Even given perfect knowledge of the apparatus you can't predict its state, insofar as by "predict its state" we mean "Is the neoconservative ideological project^W^W^W cat alive or dead?"

Whether there is a "true state of the world" before the box is opened depends on what the meaning of the word "true" is. Yes, there is a state that the apparatus is in - a superposition of two states, or an uncollapsed wavefunction - but this state is not an eigenstate of the cat observable, whose two orthogonal eigenstates are (pretty much) |alive> and |dead>. When I said "you can't predict its state" above, I was referring to predicting the state projected onto the eigenspaces of live and dead kitties. You can predict that the apparatus is in a superposition, but that isn't what most people call a prediction in this case (though to a physicist it is a prediction of the time evolution of the wavefunction).
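The distinction between the state itself and its projection onto the live/dead eigenspaces can be sketched the same way (again a textbook formula, not anything specific to this thread):

```latex
% The pre-measurement state is perfectly definite, but it is not an
% eigenstate of the cat observable. Predicting "alive or dead" means
% projecting onto the eigenspaces, with Born-rule probabilities:
\[
  |\psi\rangle = \alpha|\text{alive}\rangle + \beta|\text{dead}\rangle,
  \qquad
  P(\text{alive}) = |\langle \text{alive}|\psi\rangle|^2 = |\alpha|^2 .
\]
% Knowing |psi> exactly still leaves the alive-or-dead outcome random
% whenever both alpha and beta are nonzero.
```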

At this point it might be useful to remember that Schrödinger's cat was not a prescription but an analogy designed to point out the absurdity of the simplest varieties of the Copenhagen interpretation of quantum mechanics and the way they describe how mixtures of quantum states become classically describable. Superpositions should be taken seriously, but believing that the cat's expiration date is the moment you open the box is being unserious. Cats are macroscopic entities. So is Iraq.

Posted by: Ben at January 17, 2007 07:02 PM