Mark Kleiman sort of disagrees with me about truth and credibility. (Not that he has me in mind.) He writes:
On any yes-or-no question, the prior probability of being right by making a random guess is 0.5. So merely having reached the right conclusion once is no great sign of wisdom. The more you know and the smarter and more thoughtful you are, the more you can bias the odds in your favor. So having reached the wrong conclusion once is some evidence against one's smarts, knowledge, thoughtfulness, or all three. But it's not perfectly conclusive evidence. If you want to know whether Person X is likely to make correct guesses in the future based on X's guessing record in the past, you need to review X's approach to those previous questions, not just tot up right and wrong guesses.
I argued in the past that we should generally take truth of assertions as the yardstick for judging credibility, because (a) "it's usually much easier to judge truth than justification, and so judging credibility by justification (among other things) will lead you to give too much credence to smooth talkers who can come up with plausible-sounding explanations of why their past false assertions were justified," and (b) "[i]n a single case, justification may be a better indication than truth, but in the long run, truth is a better predictor of future credibility. Or rather, in the long run, they should converge, and if they don't that's an indication that your judgments of justification are going wrong." (That point was in comments, in response to my brother.) [More background here.]
Actually Kleiman and I agree partially. He's right that having reached the wrong conclusion once is some evidence against your credibility, but not conclusive evidence. Where I disagree is with his last sentence. If you've got an extensive record of someone's past predictions, and all you want to do is know whether their future predictions are likely to be true, you're probably better off totting up their successes and failures than trying to evaluate their methods. If their methods look good but their predictions always turn out poorly, that likely means you need a different way of evaluating methods -- and the more of a track record you have, the more likely that is. You should then look at methods to see why one person's methods work and another's don't. (See Jonathan Kulick.) But if you're evaluating credibility, a long track record comes first.
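The arithmetic behind this can be sketched with a toy Bayesian model (my illustration, not Kleiman's, with made-up numbers): suppose a pundit is either "skilled" and right 80% of the time, or just guessing and right 50% of the time, with even prior odds. One correct call barely moves the needle, one wrong call counts against them but isn't conclusive, and a long record swamps everything else:

```python
def posterior_skilled(record, p_skilled=0.8, p_guess=0.5, prior=0.5):
    """P(skilled | record), where record is a list of booleans (True = correct call).

    Toy model: the pundit is "skilled" (right with probability p_skilled)
    or a coin-flipper (right with probability p_guess), with a 50/50 prior.
    """
    like_skilled = prior
    like_guess = 1 - prior
    for correct in record:
        like_skilled *= p_skilled if correct else (1 - p_skilled)
        like_guess *= p_guess if correct else (1 - p_guess)
    return like_skilled / (like_skilled + like_guess)

# One right answer is weak evidence of skill...
print(round(posterior_skilled([True]), 3))    # 0.615
# ...one wrong answer is evidence against it, but far from conclusive...
print(round(posterior_skilled([False]), 3))   # 0.286
# ...while a long track record (16 right, 4 wrong) is strong evidence.
print(round(posterior_skilled([True] * 16 + [False] * 4), 3))  # 0.979
```

The numbers (80%, 50%, the 50/50 prior) are assumptions chosen for illustration; the qualitative point -- a single hit or miss shifts your estimate only a little, while a long record dominates -- holds for any reasonable choices.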
There's another interesting issue here. Kleiman's correspondent casts aspersions on public policy schools in general. I don't know if most people in these schools got it wrong (I seem to remember Henry Farrell arguing that most political scientists got it right), but if they did, does that give us enough of a track record to say that public policy people are less credible?
Not necessarily, I think. Even if you have a lot of predictions about one event, there could still be something about that event that causes a lot of people who will be credible in the future to get it wrong this time. That is, what actually happens this time could be surprising. Most Oscar prognosticators didn't pick Marion Cotillard to win Best Actress, and most sports prognosticators didn't pick the Giants to win the Super Bowl. That's because there was good reason to believe Julie Christie/the Patriots would win. The fact that one expert got this event wrong suggests that other experts were likely to get this event wrong, not that experts are likely to get future events wrong. So I don't think that looking at lots of predictions by one class of person about one event will always give you enough data to draw meaningful conclusions about the credibility of that class.
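The point that many predictions about a single event aren't many independent data points can be shown with a small simulation (my sketch, with invented numbers): if experts' errors share a common cause -- the event itself being surprising -- then unanimous error on one event is far more common than independent error rates would suggest, and correspondingly less informative about any individual expert:

```python
import random

random.seed(0)

# Toy model (my illustration): each expert is right about 70% of the time
# overall, but on "surprising" events (20% of events) everyone's chance of
# being right drops to 20%, because their errors share a common cause.
def simulate(n_events=100_000, n_experts=10):
    all_wrong = 0
    for _ in range(n_events):
        surprising = random.random() < 0.2
        p_right = 0.2 if surprising else 0.825  # overall accuracy ~= 0.70
        wrong = sum(random.random() >= p_right for _ in range(n_experts))
        if wrong == n_experts:
            all_wrong += 1
    return all_wrong / n_events

p_correlated = simulate()
p_independent = 0.3 ** 10  # if errors were independent at 70% accuracy

print(f"P(all 10 experts wrong), correlated errors:  {p_correlated:.4f}")
print(f"P(all 10 experts wrong), independent errors: {p_independent:.6f}")
# Unanimous error is orders of magnitude more likely under the correlated
# model, so ten wrong calls on one event tell you much less about the
# experts than ten wrong calls on ten separate events would.
```

Again, the specific rates are assumptions; the structure of the argument is just that correlated evidence double-counts.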
On the other hand, I do think that getting the Iraq war wrong hurts your credibility, because I don't think there was a justification for the war available if you got the fundamental principle of war right: In Jim Henley's words, "War is a big deal. It isn't normal. It's not something to take up casually." There was never a reasonable case for war that addressed the fact that war is a terrible thing that is overwhelmingly likely to cause lots of death and suffering, and so you need a real cause, not just some speculation about the good effects it might have. Part of this is to say that when you're making a prediction, you have to be alive not only to the probability that the facts will be as you predict them to be but also to the costs if you get it wrong. War supporters didn't realize how bad it would be if the war didn't work out (and in many cases, how unlikely the worst-case non-war scenario was, especially given the Administration's provable nonsense on nuclear weapons).
More mostly irrelevant Kleiman-specific stuff below the fold.
In Kleiman's case, we are looking at only one bad prediction (though see below). And there are other factors that mitigate the effect on his credibility. His area isn't foreign policy, it's drug policy. Just because he makes a mistake outside his area of expertise doesn't mean that he's going to make mistakes in his area of expertise. Also, he didn't commit himself overwhelmingly to the war; it matters how much credibility you stake. [Neither of these applies to, say, Ken Pollack, who got everything wrong about something he makes himself out to be an expert in. As a friend of mine says, he should have retired to run a vegetable farm by now.]
And he's acknowledged his error, which is important; making a false prediction is less pernicious than being unable to recognize an obvious disaster. (Compare Paul Berman, who I was complaining about the other day; in January 2004 Berman was still frothing about what a great blow had been struck against tyranny. It doesn't help that Berman's justification was transparently idiotic; he's utterly committed to seeing radical Islam and Baathism as two wings of the same movement, which conveniently ignores the fact that they were mortal enemies. And Poland! Gah.)
And in Kleiman's case, these posts about Iran don't look good; it's not just that they turned out wrong about what Iran was up to, but that they were wrong in pretty much the same way as the Iraq stuff. The same people were relaying false information about the same things. But I think here we can identify the method that went wrong and calibrate a new one for the future: don't trust anything a Republican government official says.
The worst thing about this is not that Bill Kristol printed an outright lie in the pages of the New York Times. It's that he's using the Times as a conduit for NewsMax, a right-wing lunatic site that frequently makes things up. Kristol attributes his smear to "a journalist" rather than naming the website, so that casual readers have no way of judging its credibility. So NewsMax's nonexistent credibility is laundered through the Times's presumably high credibility; and casual readers who haven't been keeping track of Kristol's track record have no way of knowing not to trust him.
The Times should fire Kristol. Aside from his failure to check NewsMax's allegation, either he knows how unreliable they are or he doesn't. If he knows (more likely), then he's dishonest in reprinting their smears. If he doesn't, then he's too gullible to be given a platform. Either way, reading his column makes you less well informed.
But -- and this goes back to some pessimistic grumblings I had about epistemic responsibility -- they probably won't fire him over this. And this means that a normal reader of the Times just doesn't have access to the information that would allow them to judge the credibility of the things they read in it. Which is not the reader's fault, I think; it's too time-consuming to check who's right or wrong about what all the time. The newspaper should be doing the job for you, by hiring people with a track record of getting things right. But they don't.
So there's no epistemic penalty for the most brazen falsehoods; you can pass on any kind of discredited B.S. that helps your political side, and the evidence that you shouldn't be believed will remain well hidden. Most of your audience will have no more reason to doubt you than to doubt the people telling the truth. And this breeds more falsehoods, and more confusion, and in the end a government that carries out horribly misguided policies.
Don't even get me started on the Op-Ed retrospective on Iraq; they didn't solicit opinions from anyone who'd been right about Iraq,* but there was room for three separate people from the American Enterprise Institute! Who told us that the major surprise was either that those backward Iraqis weren't ready for the wonderful democracy we gave them, or that the war had turned out to be much more awesome than they'd expected.
*Slaughter may have been a sort of war opponent, I'm not sure; but this was pretty weak tea.
[UPDATE 3/23: New York Times Op-Ed Page, why are you printing Paul Berman's thoughts on radical Islam? WHY? WHY? WHY? WHY? WHY? WHY? You are literally killing people.]