January 25, 2007

Found While Searching for Something Else

An article comparing free jazz and scholarly law blogs.

Posted by Matt Weiner at 09:09 AM | Comments (2)

January 20, 2007

Presentation of Intelligible Content

Recently I was getting spam whose subject lines were real news headlines, such as "Iranian officials detained in Iraq, U.S. officials say" and "Madonna Defends Rosie." Though these subject lines were by no means intended to convey information, I nevertheless believed them. I'm not sure whether this was because of an inclination to think that the spammers would use real headlines or because I induced from the ones I already knew had happened. (Guesses as to which it was will be ignored.)

In the last couple of days, though, the spammers have started making the headlines more alarmist by Madlibbing key elements: "Russian missle shot down USA satellite," "Russian missle shot down Chinese satellite," "U.S. Southwest braces for another winter blast. More then 1000 people are dead." I don't know whether this is supposed to help get around the spam filters (it's not working) or just to make sure that I get no utility whatsoever out of the spam. In any case, because of the reliability of the past spam, I was mildly concerned by some of this until I figured out what was going on. (The winter blast is in fact happening, but I didn't need spam to tell me.)

About an hour and a half ago I received spam with the subject line "Third World War have been declared." I'm glad that I stopped trusting the spam before then.

Posted by Matt Weiner at 01:31 PM | Comments (4)

January 18, 2007

A Bit More on Welfare, Dignity, and Rawls

This comment by Bruce Baugh on U.S. social services seems like it's relevant to the recent exchange with Sigrid Fry-Revere on dignity, charity, and Rawls. Baugh says:

What happens to someone in the US seeking social service help is this: Sooner or later, you run into someone interpreting the criteria for eligibility in the most constrictive reading possible. And when you run into those people, you have a choice between being honest and losing the help you need, or lying.

(And more. Read the whole thing.)

This jibes with Fry-Revere's point that the way the welfare state hands out services is degrading to the recipient. And, I think, it can be degrading to the carer as well; people who entered the caring professions (one hopes) didn't go in with the intention of being the person who denies care to all the borderline cases. (Which I'd bet happens partly because of budget incentives.)

But all this seems exactly like the critique that Rawls would make of the welfare state. The welfare state is unjust because it singles out and stigmatizes the recipients of welfare. A universal system of transfer payments, like a guaranteed income, would not stigmatize people in any such way, and it wouldn't have the perverse effects that Baugh describes. It may be no coincidence that the most popular welfare-state entitlement, Social Security payments for the elderly, is universal and not means-tested.

(And part of the obstacle to universal welfare payments in the U.S. may be that Americans are reluctant to risk that payments go to those they see as undeserving, often tied in with racial attitudes. I haven't read that book, so emphasize the "may be.")

Fry-Revere also argues that anonymous caring harms the dignity of carer and caree; I endorse the arguments against that made in the previous thread and the others linked there, especially my mom's comment.

Posted by Matt Weiner at 08:34 AM | Comments (22)

January 17, 2007

An Assertion that Is Probably False, But Is Much Cooler than Mundane Reality So I Don't Care

Regina Spektor's "Fidelity" video was obviously inspired by Jackson's Knowledge Argument.

Posted by Matt Weiner at 08:04 PM | Comments (6)

January 16, 2007

Why Truth Is the Norm of Credibility

The positive part of my account of the norms of assertion is to argue that testimony, at least, is subject to a norm of truth insofar as if you assert something false you should lose credibility. That argument is discussed in a long blog post here and at even greater length here.

One of the obvious questions is "Why should the truth of your assertions be what counts; isn't the important thing for future credibility whether your assertion was justified?" That's a good question; but I still think truth should be seen as the primary norm of assertion, because it's usually much easier to judge truth than justification, and so judging credibility by justification (among other things) will lead you to give too much credence to smooth talkers who can come up with plausible-sounding explanations of why their past false assertions were justified.

The comments about 20/20 hindsight here and here support that view, I think. (via Daniel Davies.) That's actually a serious philosophical claim; in most cases trying to judge justification rather than truth will send you wrong, because it's easier to fudge judgments of justification the way you'd like. As Mark Thoma says (in the second link):

maybe we don't drum you out of the [pundit] profession -- there aren't simply two extremes where we listen fully or don't listen at all -- but we are going to pay less attention to what you have to say. That's how it goes when you are wrong about important things.

The mildly philosophical point made, some petty political point-scoring below the fold.

Especially rich is Jane Galt's disquisition on hindsight bias, following her claim that "precisely none of the ones that I argued with predicted that things would go wrong in the way they did," a claim that I suspect rests on a bit of cherrypicking of dovish arguments. (It's proverbial that J.G.'s liberal friends have convenient opinions.) Also rich is this:

I don't see any way that I could have known, without actually checking, that he didn't have at least an advanced [WMD] programme.

I seem to remember one Hans Blix doing some actual checking on Saddam's WMD programs, and being utterly reviled for it. Also, I believe many war opponents argued that the existence of chemical and biological weapons programs wasn't a serious threat, etc.

For what it's worth, my position (and I don't think I was in print on this) was that war is almost guaranteed to have some bad consequences and is very likely to have unpredictable results, some of which could be quite bad. (cf. Quiggin: Few wars go well for those who start them.) As such, we're obliged not to start one without a very good reason. None of the reasons that were presented were good. For about five minutes I thought that war might be a good idea because Saddam might develop nuclear weapons and Very Bad Things could ensue, but then I decided that for the Very Bad Things to happen without war, about three unpredictable things had to happen, whereas war is only one or two unpredictable things away from Very Bad. In hindsight, I feel pretty good about this.

One last cheap shot; I believe this:

As I see it, doves have, in effect, benefitted from winning a random game. Not that the result was random--obviously, there was only one true state of the world. But at the time of making the decision, the game was random to the observer, with no way to know the true state until you open the box and poke the cat

makes a hash of the Schrödinger's Cat example; the point of that example is that there isn't one true state of the world until an observer opens the box. (I could be wrong, though.)

Posted by Matt Weiner at 07:37 PM | Comments (7)

January 15, 2007

Clayton Littlejohn on Lotteries and Assertion

Clayton Littlejohn has a very interesting post up on lotteries, assertion, and belief, including some criticisms of my paper "Must We Know What We Say?" (Penultimatish version; Philosophical Review link if you have access.)

Clayton says:

It’s one of the vices of Weiner’s treatment of the lottery that it tells us what we should think about the assertion that (1) ["Your lottery ticket didn't win"-mw] is true, but not the belief that (1) is true. So far as I can tell, he thinks there is nothing wrong with believing (1) [indeed I don't--mw], but those who think that there is something wrong with believing (1) will be disappointed by the partial solution to the lottery.

He goes on to present a problem for me given that there's nothing wrong with believing (1), and to present an argument for the impermissibility of believing on purely probabilistic grounds, and an account of what belief is.

My scattered thoughts are in comments there. [UPDATE: Well, not AOTW, but they will be when he approves my comment.] Basically, I'm skeptical that the way to account for the norms of assertion is to think about categorical belief, because I think that the accounts of categorical belief on offer aren't messy enough. Do I categorically believe what I predict to be true? On the stricter accounts of categorical belief (including Clayton's, I think), I don't; which makes belief too strong as a norm of assertion. On weaker accounts of categorical belief, I may, but then some of the things I believe will be things I shouldn't assert.

Posted by Matt Weiner at 10:23 PM | Comments (2)

January 14, 2007

The Eleatic Analysis Debate

Kieran Healy points to, and reproduces in full, a paper by G.E.M. Anscombe that he claims may have been the shortest Analysis paper ever. I'm not sure; I think there's a non-constructive proof of the infinity of Analysis papers, namely that for any paper there's a shorter one.

And I think Anscombe's paper may have been part of the Eleatic Analysis debate, which proceeds as follows: The last 8 pages of a given issue of Analysis are given to a debate on the doctrine of double effect [or something]. The first 4 pages consist of a paper arguing that there is no such thing; the next 2 pages offer a convincing refutation of the first paper; the next 1 page offers a convincing counterargument; the next 1/2 page offers a convincing counterargument to that; etc. When you finish the issue, what should you believe?

[NB I should probably mention that this is just the lamp puzzle (James Thomson's, I believe), rephrased as a joke about Analysis.]

Posted by Matt Weiner at 08:03 AM | Comments (10)

January 09, 2007

More On the Self-Undermining Argument

At the end of the last post I performed a slightly fanciful calculation to determine what should happen if there are two epistemic peers who disagree on the epistemology of disagreement: One thinks that you should not adjust your credences because of your epistemic peers' views; the other thinks that you should adjust your credence so that it's between your original credence and your peer's. (Call the second view the E view, for Elga, as in the previous post. Christensen and Feldman hold related views.)

My calculation was that the second expert should wind up with a credence of 0.586 in the E view, but I'm thinking that I made a mistake; it should be 2/3.

The question is this: According to the E view, how far do you need to go in adjusting your credences? After you've adjusted your credence to reflect your peer's view, must you then look around again, note that your peer still has a different credence, and adjust again? Or is the ideal credence an average of your peers' credences and the credence that you would have if not for the adjustment you'd performed?

I think it has to be the latter, for two reasons. The first is an absurdity result; if you have to keep adjusting your credence to reflect your peer's, then a stubborn peer could force everyone else's credence toward their own. If you start out with a credence of 1 in p, and my credence is 0, after the first adjustment yours is 0.5, after the second it's 0.25, and you get closer and closer to my credence of 0.
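
Just to make the convergence vivid, here's a toy sketch of that sequence (illustrative only; it's not meant as part of the argument):

    # Toy illustration of the "keep re-adjusting" reading with one stubborn peer.
    # My credence stays fixed at 0; yours starts at 1 and moves to the midpoint
    # of our credences on every round.
    stubborn = 0.0
    yours = 1.0
    for step in range(1, 11):
        yours = (yours + stubborn) / 2
        print(step, yours)   # 0.5, 0.25, 0.125, ... converging to my credence of 0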

The second reason is more straightforward. After you've adjusted your credence and I haven't adjusted mine, you're basing your credence on more evidence than I am. Your credence is based on the opinions of all epistemic peers, while mine is only based on my own opinion. Since I'm ignoring some relevant evidence, I'm no longer your peer, and you don't have to adjust your view to account for mine. (But if you went back to your original credence, you'd be ignoring the same evidence, and I'd be your peer again, so you'd have to adjust back.) So, more precisely, your credence should be an average of the credences that all the peers would have without adjusting their credences.

[It's entirely possible that this is all addressed in the literature somewhere.]

In the previous calculation I said the following: Suppose that the original believer in E winds up with credence c in E. According to the methods endorsed by the E view, their credence in E should be c/2: midway between their credence and the credence (0) of the other peer. But now it seems that, according to the E view, their credence should still be 0.5: Midway between the other peer's credence and the credence (1) that they would have in E without the adjustment for the other peer's credence.

The weighted sum calculation relies on the idea that someone with a credence of c in E should give a credence to another proposition, P, as follows: If according to the methods of E they should give credence x to P, and according to the methods of not-E they should give credence y to P, then their credence in P should be c * x + (1 - c) * y. Now, according to the methods of not-E, they should trust their own judgment of the arguments for E and give credence 1 to E; according to the methods of E they should give credence 0.5 to E. Note that these are not conditional probabilities (thanks to David Christensen here). According to the weighted sum, the credence in E, c, should be c * 0.5 + (1 - c) * 1. Solving c = c/2 + 1 - c, we get c = 2/3.

Incidentally, no matter how many experts there are opposing E this calculation can never push credence in E below 0.5. Suppose there are infinitely many peers opposed to E, so that the methods of E dictate that your credence in E should be 0. Since the methods of not-E dictate that your credence in E should be 1, the weighted sum equation is c = c * 0 + (1 - c) * 1; solving we get c = 1/2.
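
Both of those numbers fall out of the same little fixed-point equation: if e is the credence that the methods of E dictate for E itself (0.5 with one dissenting peer, 0 with infinitely many), then c = c * e + (1 - c) * 1, which, if I've done the algebra right, solves to c = 1/(2 - e). A toy check (the helper function is nothing but that algebra):

    # Toy check of the weighted-sum fixed point c = c*e + (1 - c)*1, where e is
    # the credence that the methods of E dictate for E itself.
    def weighted_sum_credence(e):
        return 1.0 / (2.0 - e)   # algebraic solution of c = c*e + (1 - c)

    print(weighted_sum_credence(0.5))   # 0.666..., the 2/3 above (one dissenting peer)
    print(weighted_sum_credence(0.0))   # 0.5, the infinitely-many-peers case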

Thanks also to Mike Almeida in Brian Weatherson's comments; I still owe him an answer to his last comment. [UPDATE: Not anymore.]

UPDATE: This answer should generalize pretty easily; the other answer should generalize less easily.

Suppose that there are two experts, A and B; A begins with a credence of x in E, B begins with a credence of y in E. (That is, these are the credences each has when they don't take the other's credence into account.) Then according to the methods of E, each should have a credence in E of (x + y)/2; according to the methods of not-E, each should have their original credence in E. So if B's credence in E is c, the weighted sum for B's credence in E is c * (x + y)/2 + (1 - c) * y. Solving c = c * (x + y)/2 + (1 - c) * y, we get y = c + cy - c * (x + y)/2, or c = 2y/(2 + 2y - (x + y)) = 2y/(y - x + 2). Say x = 0, y = 1/2; then c = 2/5. Say x = 1/4, y = 3/4; then c = 3/5. If x = 3/4, y = 1/4, then c = 1/3. Of course if y = 0 then c = 0.
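
For whatever it's worth, a quick numerical check of those figures (purely illustrative; the function is just the formula above):

    # Toy check that c = 2y/(y - x + 2) solves c = c*(x + y)/2 + (1 - c)*y
    # for the sample starting credences above.
    def generalized_credence(x, y):
        return 2 * y / (y - x + 2)

    for x, y in [(0, 0.5), (0.25, 0.75), (0.75, 0.25), (0.5, 0)]:
        # the x = 0.5 in the last pair is arbitrary; any x gives c = 0 when y = 0
        c = generalized_credence(x, y)
        assert abs(c - (c * (x + y) / 2 + (1 - c) * y)) < 1e-12
        print(x, y, round(c, 4))   # c comes out 0.4, 0.6, 0.3333, and 0.0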

Working out the other answer seems like it'll involve simultaneous linear equations, which I'm not up for at the moment.

Posted by Matt Weiner at 08:49 PM | Comments (0)

January 07, 2007

Are Self-Undermining Arguments Self-Undermining?

[The title of the post should be parsed so that it does not ask whether a tautology is true.]

In a post about the epistemology of disagreement, Brian Weatherson argues that the Christensen-Elga-Feldman position is self-undermining:

Roughly, the idea is that if you believe p, and someone as smart as you and as well informed as you believes ~p, then you should replace your belief in p with either a suspension of judgment (in Feldman’s view), or a probability of p between their probability and your old probability (in Elga’s view).

This position is self-undermining because many smart, well-informed people deny it, and so anyone who comes to believe it should suspend judgment about its truth. Brian says, "I think no one should accept a view that will be unacceptable to them if they come to accept it," so he thinks no one should accept the CEF principle.

The principle (S):

(S) No one should accept a view that will be unacceptable to them if they come to accept it

is very appealing, but I'm not sure it can ever provide a reason to reject or suspend judgment on a view that we would otherwise accept. Suppose p is such a view; so long as you don't believe p, the arguments for p are compelling, but if you come to believe p, then belief in p is unacceptable. Consider belief in not-p. Since the arguments for p are compelling so long as we do not believe p, belief in not-p (assuming we are consistent) will be unacceptable to us if we come to accept not-p. So not-p also falls victim to principle S.

Suspending judgment doesn't quite fall victim to principle S, but it seems to face a similar problem. As I framed the question, the arguments for p are compelling so long as we do not believe p; we don't even have to believe not-p for them to be compelling. Then, once we've suspended judgment on p, we've got no reason to reject p and a good reason to accept it. So we shouldn't suspend judgment. Suspending judgment isn't accepting a view, which is why principle S doesn't apply, but nevertheless it seems reasonable to extend principle S to "No one should suspend judgment on a question if suspending judgment will be unacceptable to them so long as they suspend judgment."

So what should you do when you are convinced by the arguments for a view that will come to seem unacceptable to you when you accept it? I'm not sure. It certainly seems that in such a situation your cognitive limits have become manifest, and you should do what you should do when that happens (which may not always be to suspend judgment). Or perhaps you should oscillate back and forth among the self-undermining views.*

Best of all would be to find a reason why the argument for the self-undermining view is wrong, so that principle S no longer provides your sole reason for refusing to believe in p. On the epistemology of disagreement, that's something I haven't done yet.

*Here's how things might work for the Elga view, that you should adjust your probability to be intermediate between your original probability and your peer's probability. If I only considered my own view of the arguments, I would give a credence of 1 to the Elga view, but there is one other expert who gives a credence of 0 to the Elga view. I now suspend judgment, giving a credence of 0.5 to the Elga view. But it seems that my reasons for giving a credence of 0.5 rather than 1 to the Elga view depend on the Elga view itself, which I am now only giving a credence of 0.5 to.

An equilibrium point may be reached when I assign a credence c to the Elga view that satisfies this constraint: c = (1 - c) + c²/2. Reasoning: insofar as I reject the Elga view, I should trust my own arguments, which convince me of the view; so this component of my credence is (1 - c) (degree of credence in not-Elga) * 1 (degree of my confidence in Elga, given not-Elga). Insofar as I accept the Elga view, my credence in it should be midway between my credence in it (c) and my peer's credence (0); so this component of my credence is c (degree of credence in Elga) * c/2 (degree of my confidence in Elga, given Elga). Solving, we get c²/2 - 2c + 1 = 0, or c = 2 - √2 ≈ 0.586.

Giving a credence of 0.586 to the Elga view if you're convinced by the arguments seems consistent, but very weird. I make no particular claims that the previous paragraph gave the right way of calculating c.
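
For the record, the algebra behind the footnote: c²/2 - 2c + 1 = 0 becomes c² - 4c + 2 = 0 after multiplying through by 2, the quadratic formula gives c = 2 ± √2, and only the root 2 - √2 lies between 0 and 1. A throwaway numerical check:

    # Throwaway check that c = 2 - sqrt(2) satisfies c = (1 - c) + c**2/2.
    import math
    c = 2 - math.sqrt(2)
    print(round(c, 3))                              # 0.586
    print(abs(c - ((1 - c) + c**2 / 2)) < 1e-12)    # True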

Posted by Matt Weiner at 04:11 PM | Comments (8)

January 01, 2007

Happy New Year

Happy New Year to anyone who still reads the blog -- just reminding you that it does still exist.

Token content: Today I saw Children of Men (capsule: not that good, and, like The Good Shepherd, which I preferred, a movie that's worse because it presents itself as though it's going to be THE GREATEST; there were some very effective things about the scenes in the refugee camp, but if you want to see a post-apocalyptic movie with insight into the social dynamics of a refugee camp, see Time of the Wolf, which really is that great; make sure you get the Michael Haneke movie with Isabelle Huppert and not the Burt Reynolds movie that I found while searching for that IMDb entry and which, I believe, does not involve a refugee camp), and four of the six previews I saw were for movies called "The Something": The Kingdom, The Hitcher, The Shooter, The Invisible, along with Amazing Grace and Breach.

I'm trying to convince myself that Breach really will be the movie about real spying that The Good Shepherd was supposed to be; it's based on a focused but fascinating real episode (the Robert Hanssen case), and Chris Cooper is awesome without dragging the accumulated weight of his prestige into any movie he appears in, as Robert DeNiro perhaps does.

[Though that DeNiro theory may have trouble accounting for Analyze This/That, and Meet the Parents/Fockers too. Maybe The Bad Shepherd will be more antic.]

[UPDATE: I now have the little flute-thingy part from the end of "In the Court of the Crimson King" going through my head even though that was not the part of "In the Court of the Crimson King" that appeared on the soundtrack, and I'm pretty sure I haven't heard the little flute-thingy part for over ten years.]

Posted by Matt Weiner at 11:10 PM | Comments (14)