The best kittens, technology, and video games blog in the world.

Sunday, March 15, 2009

How Robin Hanson increased my conviction that healthcare works

[Image: Badass Hana by Chrysophylax from flickr (CC-NC-SA)]

Divide asked in comments to my post "Which crackpot cult to join" why I didn't make any snarky remarks about Eliezer and the rest of the Overcoming Bias crew. Well, here they come.

One of the most peculiar beliefs shared by the Overcoming Bias crew and most of its readers (judging by the comments), but hardly anyone else, is "Aumann's agreement theorem", which informally says:

If two people are perfect Bayesian rationalists, they cannot agree to disagree.
Like most Bayesian theorems it's mathematically flawless; the problems start when you try to apply it to the actual world. The bastardized version which is really popular on Overcoming Bias is:
After discussion between two non-perfect Bayesian rationalists, their views should be expected to be closer to convergence than before.
It sounds quite reasonable - the pro-X person shares some pro-X evidence, the anti-X person shares some anti-X evidence, and how could learning some extra pro-X evidence possibly make you more anti-X than before, or vice versa? At worst it will make no difference, unless someone is really terrible at discussing.

Still with me? Great! Now let's apply that to Robin Hanson and the effectiveness of healthcare. As you might already know if you're a regular OB reader, Robin Hanson believes that healthcare spending is largely wasted and ineffective; he also blogs about it a lot. So according to the bastardized Aumann's theorem, after reading some of his posts I should be less convinced about healthcare than before, right? Yet it had the opposite effect on me, and I believe my reaction is rationally Bayesian.

Discussion is not random evidence seeking

The problem with bastardized Aumann is that it models discussion as evidence discovery. If I did random research and found a result more consistent with healthcare not working, this would indeed make me less convinced about the effectiveness of healthcare. But discussion is not unbiased evidence discovery! Here's a model I like a lot better:
  • P(healthcare works) = 50% - or any other number; all that matters is which direction it moves
  • P(Robin skillful at discussing) = 90% - I have little reason to suspect lack of skill on Robin's part
  • P(good evidence against|NOT healthcare works) = 90% - if it doesn't work, there should be some good evidence against it
  • P(good evidence against|healthcare works) = 10% - if it works, there will probably be no good evidence against it working, though weak evidence will almost certainly exist
  • P(Robin's posts convincing|Robin skillful at discussing AND good evidence against) = 90% - if good evidence exists, and Robin is skillful enough, he will most likely use it correctly and his blog posts will most likely be convincing
  • P(Robin's posts convincing|NOT Robin skillful at discussing OR NOT good evidence against) = 10% - if good evidence doesn't exist, or alternatively Robin fails at arguing, his blog posts will most likely not be convincing
It's just a simple Bayesian network. The only direct observation I have is that P(Robin's posts convincing) is false: I find his arguments extremely weak - the kind I might care a tiny bit about if I ran into them randomly, but also the kind you could assemble by nit-picking against pretty much anything if you look long enough. Now let's see how this observation updates my posterior probabilities.
  • P(Robin skillful at discussing|NOT Robin's posts convincing) = 83.3% - down from 90%
  • P(good evidence against|NOT Robin's posts convincing) = 16.7% - very significantly down from 50%
  • P(healthcare works|NOT Robin's posts convincing) = 76.7% - significantly up from 50%
So after reading failed arguments against healthcare, I believe in healthcare more, contrary to Robin's intentions. If I didn't really believe in Robin's discussion skills (10% instead of 90%), the updates would be very small, but still in the same direction (the code sketch after this list reproduces both sets of numbers):
  • P(Robin skillful at discussing|NOT Bad Robin's posts convincing) = 5.8% - down from 10%
  • P(good evidence against|NOT Bad Robin's posts convincing) = 47.7% - slightly down from 50%
  • P(healthcare works|NOT Bad Robin's posts convincing) = 51.9% - very slightly up from 50%
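If you want to check the arithmetic, here's a minimal sketch that reproduces both sets of posteriors by brute-force enumeration of the network's eight joint states. The probabilities are exactly the ones assumed above; the function and variable names (posteriors, p_skill, and so on) are mine, purely for illustration.

```python
# A minimal sketch, assuming exactly the probabilities from the post.
from itertools import product

def posteriors(p_skill, p_works=0.5):
    # P(good evidence against | healthcare works?), as assumed above
    p_evidence = {True: 0.10, False: 0.90}
    # P(posts convincing | skillful AND good evidence) = 90%, else 10%
    def p_convincing(skill, evidence):
        return 0.90 if (skill and evidence) else 0.10

    # Enumerate all 8 joint states, weighting each one by the probability
    # of the observation "posts NOT convincing" in that state
    joint = {}
    for works, skill, evidence in product([True, False], repeat=3):
        joint[(works, skill, evidence)] = (
            (p_works if works else 1 - p_works)
            * (p_skill if skill else 1 - p_skill)
            * (p_evidence[works] if evidence else 1 - p_evidence[works])
            * (1 - p_convincing(skill, evidence)))
    total = sum(joint.values())

    def marginal(i):
        # posterior marginal of variable i given "NOT convincing"
        return sum(p for state, p in joint.items() if state[i]) / total

    return marginal(1), marginal(2), marginal(0)  # skill, evidence, works

for p_skill in (0.90, 0.10):  # skillful Robin, then "Bad Robin"
    skill, evidence, works = posteriors(p_skill)
    print("prior P(skill)=%.0f%%: P(skill|not conv)=%.1f%%, "
          "P(evidence|not conv)=%.1f%%, P(works|not conv)=%.1f%%"
          % (100 * p_skill, 100 * skill, 100 * evidence, 100 * works))
```

Running it prints 83.3% / 16.7% / 76.7% for the 90% prior and 5.8% / 47.7% / 51.9% for the 10% prior, matching the lists above.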
I'm not really going to discuss particulars of the healthcare case, but the general heuristic is:
If someone who's good at convincing tries to convince you about X and fails, it is good evidence against X.
This effect is only strong when you have reasons to believe convincing arguments would be used. If you expect appeal to emotion or other low value arguments (like when a politician speaks to uneducated voters), and get it, it's really weak evidence (but still against). If you expect to get the strongest arguments available, but get something very weak, that's some very good evidence against.

8 comments:

Anonymous said...

A way to disclose a hidden motive: insisting on making a weak argument despite ample evidence to the contrary reveals an agenda, whether to proselytize or garner votes. Appealing to emotions rather than facts would be a dead giveaway.
Pity that it works so well.

Mark Dominus said...

I made a similar point on my blog a few years ago at http://blog.plover.com/lang/etym/Arabic.html : "Naomi Wolf is very smart, and has studied this closely and thought about it for a long time. If that is the best example that she can come up with, then perhaps I'm wrong, and there really aren't as many examples as I thought there would be."

This was about three years ago; since then I have revised my assessment. I now think that Naomi Wolf is not actually very smart. But the point stands nevertheless.

taw said...

Mark Dominus: Yes, your post is talking about the same effect. And there are always two possible conclusions: either the weak evidence is evidence against the claim, or it's evidence against the persuasion skills of the person presenting it. With Robin Hanson we can be pretty sure he's an extremely smart guy with pretty good persuasion skills, so it's most likely a problem with the subject matter.

Anonymous said...

Hmmm, so if I get this right... the basic argument comes down to saying that his facts must be flawed? I never thought I'd see such a complex way to say that, though. Still, I suppose that's because you're trying to mathematically prove it. Which makes me curious whether someone like Hanson would have a counter to this at all...

Tanner L. Swett said...

Where on Overcoming Bias does it say (or imply) something like "after discussion between two non-perfect Bayesian rationalists, their views should be expected to be closer to convergence than before"?

taw said...

Warrigal: It's mentioned over and over again. This thread is only the latest example - http://lesswrong.com/lw/ee/the_mindkiller/b11

TGGP said...

You should not be expected to agree with him more after reading him. That would be irrational. If you expect to update your beliefs in a specific direction, you should update right now rather than waiting. Two Bayesian wannabes should become closer after being exposed to more mutual information, but who moves in what direction cannot be predicted beforehand. The manner in which they should converge is that of a random walk.

Anonymous said...

There's an important hidden step right before the start.

At first, you didn't know whether his argument would be against, or in favor. The moment you learned Robin thinks healthcare doesn't work, and that he had some unknown argument, P(healthcare works) would have been downshifted in anticipation of the argument. When Robin failed to meet your expectations, it shifted back up, yes, but that doesn't ensure it moves above its original position.

I think for it to do so would require you to think Robin would be more convincing if he argued in favor, despite finding his own arguments in favor less convincing than his arguments against.