March 25, 2008

Credibility and Truth

Mark Kleiman sort of disagrees with me about truth and credibility. (Not that he has me in mind.) He writes:

On any yes-or-no question, the prior probability of being right by making a random guess is 0.5. So merely having reached the right conclusion once is no great sign of wisdom. The more you know and the smarter and more thoughtful you are, the more you can bias the odds in your favor. So having reached the wrong conclusion once is some evidence against one's smarts, knowledge, thoughtfulness, or all three. But it's not perfectly conclusive evidence. If you want to know whether Person X is likely to make correct guesses in the future based on X's guessing record in the past, you need to review X's approach to those previous questions, not just tot up right and wrong guesses.

I argued in the past that we should generally take truth of assertions as the yardstick for judging credibility, because (a) "it's usually much easier to judge truth than justification, and so judging credibility by justification (among other things) will lead you to give too much credence to smooth talkers who can come up with plausible-sounding explanations of why their past false assertions were justified," and (b) "[i]n a single case, justification may be a better indication than truth, but in the long run, truth is a better predictor of future credibility. Or rather, in the long run, they should converge, and if they don't that's an indication that your judgments of justification are going wrong." (That point was in comments, in response to my brother.) [More background here.]

Actually Kleiman and I agree partially. He's right that getting it wrong once is some evidence against your credibility, but not conclusive evidence. But I disagree with his last sentence, about reviewing X's approach rather than totting up right and wrong guesses. If you've got an extensive record of someone's past predictions, and all you want to know is whether their future predictions are likely to be true, you're probably better off totting up their successes and failures than trying to evaluate their methods. If their methods look good but their predictions always turn out poorly, that probably means you need a different way of evaluating methods -- and the more of a track record you have, the more likely that is. You should then look at the methods to see why one person's methods work and another's don't. (See Jonathan Kulick.) But if you're evaluating credibility, a long track record comes first.

There's another interesting issue here. Kleiman's correspondent casts aspersions on public policy schools in general. I don't know if most people in these schools got the war wrong (I seem to remember Henry Farrell arguing that most political scientists got it right), but if they did, does that give us enough of a track record to say that public policy people are less credible?

Not necessarily, I think. Even if you have a lot of predictions about one event, there could still be something about that event that causes a lot of people who will be credible in the future to get it wrong this time. That is, what actually happens this time could be surprising. Most Oscar prognosticators didn't pick Marion Cotillard to win Best Actress, and most sports prognosticators didn't pick the Giants to win the Super Bowl. That's because there was good reason to believe Julie Christie/the Patriots would win. The fact that one expert gets this event wrong suggests only that other experts are likely to get this event wrong too, not that experts are likely to get future events wrong. So I don't think that looking at lots of predictions by one class of person about one event will always give you enough data to draw meaningful conclusions about the credibility of that class.

On the other hand, I do think that getting the Iraq war wrong hurts your credibility, because I don't think there was a justification for the war available if you got the fundamental principle of war right: In Jim Henley's words, "War is a big deal. It isn’t normal. It’s not something to take up casually." There was never a reasonable case for war that addressed the fact that war is a terrible thing that is overwhelmingly likely to cause lots of death and suffering, and so you need a real cause, not just some speculation about the good effects it might have. Part of this is to say that when you're making a prediction, you have to be alive not only to the probability that the facts will be as you predict them to be but also to the costs if you get it wrong. War supporters didn't realize how bad it would be if the war didn't work out (and, in many cases, how unlikely the worst-case non-war scenario was, especially given the Administration's provable nonsense on nuclear weapons).

More mostly irrelevant Kleiman-specific stuff below the fold.

In Kleiman's case, we are looking at only one bad prediction (though see below). And there are other factors that mitigate the effect on his credibility. His area isn't foreign policy, it's drug policy. Just because he makes a mistake outside his area of expertise doesn't mean that he's going to make mistakes in his area of expertise. Also, he didn't commit himself overwhelmingly to the war; it matters how much credibility you stake. [Neither of these applies to, say, Ken Pollack, who got everything wrong about something he makes himself out to be an expert in. As a friend of mine says, he should have retired to run a vegetable farm by now.]

And he's acknowledged his error, which is important; making a false prediction is less pernicious than being unable to recognize an obvious disaster. (Compare Paul Berman, who I was complaining about the other day; in January 2004 Berman was still frothing about what a great blow had been struck against tyranny. It doesn't help that Berman's justification was transparently idiotic; he's utterly committed to seeing radical Islam and Baathism as two wings of the same movement, which conveniently ignores the fact that they were mortal enemies. And Poland! Gah.)

And in Kleiman's case, these posts about Iran don't look good; it's not just that they turned out wrong about what Iran was up to, but that they were wrong in pretty much the same way as the Iraq stuff. The same people were relaying false information about the same things. But I think here we can identify the method that went wrong and calibrate a new one for the future: don't trust anything a Republican government official says.

Posted by Matt Weiner at March 25, 2008 08:21 AM
Comments

Does he mean his claims about having reached the right conclusion once and having reached the wrong conclusion once to be in tension here? Or does he intend "no great sign of wisdom" and "some evidence" to be compatible? They certainly are compatible, but have opposite implicatures.

Posted by: Kenny Easwaran at March 25, 2008 10:55 AM

Well, we could drop him a line and see if he responds, but maybe he means that getting it right helps your credibility a bit, but not much, while getting it wrong damages your credibility some, but not conclusively.

Actually, there's a probabilistic reading of the passage. Kleiman says that smart and knowledgeable people will bias the odds in their favor. Let's assume that the number of smart and knowledgeable people is a small percentage of the total population, and everyone else is just flipping a coin. Then it could still be the case that getting it right isn't much evidence that you're smart and knowledgeable -- you're probably just a coin-flipper -- but getting it wrong is evidence that you aren't.

Let's say we have 390 coin-flippers (97.5%) and 10 people (2.5%) who have a 9/10 chance of getting things right. So we wind up with 195 coin-flippers who get it right, 9 knowledgeable people who get it right, 195 coin-flippers who get it wrong, and 1 knowledgeable person who gets it wrong. Then the probability of being knowledgeable given that you get it right is about 4.4%, and the probability of being knowledgeable given that you get it wrong is about 0.5%. Ehh, I actually have no idea what metric I should use to judge whether this means that getting it wrong is stronger evidence that you're not knowledgeable than getting it right is evidence that you are. But, in either case, odds are you ain't.
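
In case anyone wants to check the arithmetic, here's a quick sketch of it in Python, using exactly the made-up numbers above; the likelihood ratios at the end are one standard yardstick for comparing how strong the two pieces of evidence are (that gloss is mine, not Kleiman's).

    # The made-up population above: 390 coin-flippers and 10 knowledgeable
    # people who get yes-or-no questions right 9 times out of 10.
    n_flippers, n_smart = 390, 10
    p_right_flipper, p_right_smart = 0.5, 0.9

    # Expected numbers of right and wrong answers in each group.
    right_flippers = n_flippers * p_right_flipper        # 195
    wrong_flippers = n_flippers * (1 - p_right_flipper)  # 195
    right_smart = n_smart * p_right_smart                # 9
    wrong_smart = n_smart * (1 - p_right_smart)          # ~1

    # Probability of being knowledgeable, given a right or a wrong answer.
    print(right_smart / (right_smart + right_flippers))  # ~0.044, i.e. 4.4%
    print(wrong_smart / (wrong_smart + wrong_flippers))  # ~0.005, i.e. 0.5%

    # Likelihood ratios: a right answer multiplies the odds that you're
    # knowledgeable by 0.9/0.5 = 1.8, while a wrong answer multiplies them
    # by 0.1/0.5 = 0.2 -- that is, cuts them by a factor of 5.
    print(p_right_smart / p_right_flipper)               # 1.8
    print((1 - p_right_smart) / (1 - p_right_flipper))   # ~0.2

On that yardstick, getting it wrong is the stronger evidence (a factor of 5 against, versus 1.8 in favor), though either way the posteriors say you're probably a coin-flipper.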

Also, his correspondent sent him a flyer they passed out at the time. Perhaps Kleiman's view is that this flyer gives bad arguments, so it doesn't help your credibility to have subscribed to it. I can't read the flyer from his link, so I can't say for sure. I hope that isn't his view -- Yglesias basically punched a hole in this sort of argument in 2004; even if you think that the anti-war movement's policies were wrong (and I'm not sure what his objections were), the hippies were lots more right than anyone supporting the war.

Posted by: Matt Weiner at March 25, 2008 01:47 PM

The real question is what leg the people who are saying "yes, you guys opposed the war, but that doesn't prove you were right, because you might have had the wrong reasons" are standing on. The old guard anti-war folks might have had the wrong reasons, but the likelihood that they were righter than the people who turned out to be DISASTROUSLY, CATASTROPHICALLY WRONG is fairly substantial.

This is not a knock on Kleiman so much as a whole lot of other people, like Paul Berman (who used to be a garden variety liberal before he became a member of the Monster Raving Loony Party, although to be honest I thought his recent op-ed about radical Islam was only moderately evasive, which is less than usual).

By the way, Jonathan Kulick was music director of my college's radio station the year before me. He is a pretty sharp cat.

Posted by: Ben at March 25, 2008 11:43 PM

I wonder to what extent the separation between judging justification and judging truth will hold up under pressure.

First, a simpleminded argument for some partial convergence between the two notions. Won't it be true that credible testimony about a complex subject, like the Iraq war, will depend on providing supporting evidence? And if someone is wrong about the war, then they're typically wrong about some of those supporting statements. I'm not sure what to think about the cases where that fails: someone who says we should go to war, and only provides the true evidence that "Saddam is a terrible guy." Maybe in that case we can just look at the one false claim and update how credible we view them as being on that basis.

Second, isn't there a problem about determining which pieces of testimony are relevant that smuggles in questions of justification? Compare someone who predicted the war in Iraq would be a fiasco, but thinks the Pittsburgh Pirates are the best baseball team out there with someone who thinks the opposite. I don't treat the guy who gets the Pirates right as credible on Iraq, even though in some sense, you have to be a lot more confused to think the Pirates are the best baseball team than to think (in 2003) that the Iraq war might work (let's assume that no one has been delivering doctored newspapers to our erstwhile Pirates fan).

Posted by: Justin at March 26, 2008 07:34 AM

That's funny about Kulick. Actually, I had meant to point out that he made a mistake -- he ascribes to Rumsfeld the Laingian thesis

If I don't know I don't know
I think I know
If I don't know I know
I think I don't know

But Rumsfeld talked only about known knowns, known unknowns, and unknown unknowns. He didn't talk about unknown knowns -- "if I don't know I know." By omission, he seems to be implicating the KK thesis, that if you know then you know you know. Which is pretty dubious, epistemologically.
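
Just to make the bookkeeping explicit, here's one way to line up the four categories in standard epistemic-logic notation (the formalization is my gloss, not Rumsfeld's or Laing's), writing Kp for "I know that p":

    \begin{align*}
    \text{known known}     &:\quad Kp \wedge KKp \\
    \text{known unknown}   &:\quad K\neg Kp \\
    \text{unknown unknown} &:\quad \neg Kp \wedge \neg K\neg Kp \\
    \text{unknown known}   &:\quad Kp \wedge \neg KKp
    \end{align*}

The KK thesis, Kp implies KKp, is exactly what rules out the last line, which is why leaving it off the list implicates KK.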

Of course the problem with Rumsfeld wasn't that he was being unclear or talking dubious epistemological theories, it's that he was talking epistemology at all while being absolutely full of shit. He didn't know any of the things he thought he knew, and he should've known he didn't know them.

Posted by: Matt Weiner at March 26, 2008 07:35 AM

Suppose there were a machine that made predictions, and had a good success rate, but offered no justifications--it's a black box. Or a gypsy fortuneteller.

There was a talented materials scientist who just seemed to be able to guess which compounds would work out for the kind of effect he was looking for (you'd have to ask Dad what these were; superconductivity, maybe). I think this was unimpartable tacit knowledge. If you had to decide whether to fund his lab or invest in his company, you might go with "truth" even though he couldn't tell a good story.

OTOH, I believe that in the 1980s someone developed a medical diagnostic AI program, but that people didn't accept it well until explanations (justifications?) were added. (A caveat: my knowledge of it at the time wasn't direct and my memory may have distorted the story.)

Posted by: Matt's mom at March 26, 2008 12:46 PM

The program I was thinking of is MYCIN, actually done in the mid-1970s. Wikipedia describes it. It says that it was 69% successful (in diagnosing bacterial infections), greater than the percentage for experts. It was William Clancey, I think, who said explanations were needed.

Posted by: Matt's mom at March 26, 2008 12:59 PM

This follow-up from Kulick sheds more light on the all-important WSRN issue. ("Love Radio--the Sound of the Suburbs!")

Justin, we cross-posted. The point that testimony about a complex subject will come with supporting reasons is a good one. I think that I might respond that even the people who are right on the big issue will be wrong on some of the supporting reasons. (For instance, unlike Daniel Davies I would've predicted pretty confidently that Saddam had some banned chemical weapons stocks, and would've argued [and still would argue] that it didn't matter.) If each person has a mix of truth and falsehood among their justifications, then you might say that the ultimate truth or falsehood of their predictions gives you a better idea of who weights their inputs right.

About which pieces of testimony are relevant, I think this is partly a question of area, and partly of stakes. To some extent judgments about sports and about Iraq just don't transfer to each other -- similar to what I said about Kleiman's judgments about Iraq and drug policy. Also, I think someone who lets himself be guided by his rooting interests in sports really impeaches his credibility less than someone who gets war wrong, because sports is the sort of area where it's not so pernicious to be guided by your emotions.

And someone who thinks the Pirates are the best team in baseball (and isn't paid to do so) probably does blow their credibility on everything else. Damn.

Mom, tacit knowledge like that of the materials scientist definitely is one factor that makes me think that truth rather than justification is important. And the MYCIN story sounds interesting; it reminds me of the Michael Bishop/J.D. Trout discussion of experiments that seem to show that simple numerical prediction strategies often do better than expert judgment. Which cuts against tacit knowledge as well as against the need for explanation.

Posted by: Matt Weiner at March 26, 2008 07:54 PM

I think you'd be interested in the Wikipedia article.
http://en.wikipedia.org/wiki/MYCIN
It refers to a book (1984) discussing MYCIN's heuristics and issues around it.

Re the Bishop/Trout discussion (I haven't followed the link): there are experiments (I think with betting on cards or the like) that show people making good statistical judgments before they can state a rule or are aware of their judgments. So that could save tacit knowledge, in a way--it could BE a numerical rule.

Posted by: Matt's mom at March 27, 2008 12:31 PM

OK, I think Bishop and Trout probably take in the experiments I was thinking of.

Posted by: Matt's mom at March 27, 2008 12:40 PM