
Psychologists suckered by Monty Hall?

Author: nick @ April 11th, 2008

An article from the New York Times:

http://www.nytimes.com/2008/04/08/science/08tier.html?_r=2&oref=slogin&oref=slogin

Please refer to the article if my explanation is unclear.

The basic idea of the article is that “cognitive dissonance” methodology is flawed in that it ignores the Monty Hall effect, calling people “irrational” when they are really exhibiting Monty-Hall-congruent preferences.

Let’s say you have three choices that you should value equally. Presented with choices red and blue, you choose red arbitrarily. Now I give you a choice between blue and green, and you choose green about 2/3 of the time. Assuming that people are following a probability matching strategy (and there are realistic conditions under which it is an ideal strategy), this means you think green has a 2/3 chance of being better than blue, whereas before you thought each was equally likely to be better than the other. Psychologists call this “rationalizing your initial (arbitrary) rejection.”
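To make the probability matching assumption concrete, here is a minimal sketch (my own illustration, not anything from the article or the underlying study): a subject who believes green is better with probability p picks green with probability p.

```python
import random

def probability_match(p_green_better):
    """Probability matching: pick green with probability equal to the
    believed chance that green is the better option."""
    return "green" if random.random() < p_green_better else "blue"

# A subject who believes green beats blue 2/3 of the time will pick
# green on roughly 2/3 of trials.
picks = [probability_match(2 / 3) for _ in range(10_000)]
print(picks.count("green") / len(picks))  # ~0.67
```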

In contrast, the new argument goes “If blue was worse than red, there’s a 2/3 chance that it’s worse than green, because green could be better than red and blue, or worse than red but better than blue. In 1/3 of outcomes blue is worse than red but better than green”. This can be seen as a variant of the Monty Hall problem, where the odds change conditioned on the first choice.
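You can check that 2/3 by brute force over the six possible orderings of the three options. This enumeration is my own sketch of the argument, not code from the article:

```python
from itertools import permutations

total = favorable = 0
for order in permutations(["red", "green", "blue"]):
    rank = {color: i for i, color in enumerate(order)}  # 0 = best
    if rank["blue"] > rank["red"]:          # given: blue worse than red
        total += 1
        if rank["green"] < rank["blue"]:    # green better than blue
            favorable += 1
print(f"{favorable}/{total}")  # 2/3
```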

I am not sure whether to be convinced by this new argument, much as I enjoy seeing savvy statistical methods evangelized. It seems to me that if you carried around full Bayesian posteriors, the effect would vanish.

Dr. Chen’s analysis as it’s presented in the New York Times article seems flawed in a way that unfortunately too many analyses are flawed: it takes the mode of a probability distribution to be representative of the distribution itself. That is, it assumes that the probability at the mode is 1 and 0 everywhere else. In the literature this is called a “maximum a posteriori” (MAP) strategy.
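Notice that collapsing the 50/50 posterior onto its (arbitrarily chosen) mode reproduces Dr. Chen’s 2/3 exactly; this little sketch reflects my reading of the argument, not his actual calculation:

```python
from fractions import Fraction

# Conditional probabilities that green beats blue, from the enumeration above.
p_green_beats_blue = {"red_better": Fraction(2, 3),
                      "blue_better": Fraction(1, 3)}

# MAP strategy: put all posterior mass on the arbitrary first pick.
map_posterior = {"red_better": Fraction(1), "blue_better": Fraction(0)}

print(sum(map_posterior[c] * p_green_beats_blue[c]
          for c in map_posterior))  # 2/3, i.e. Dr. Chen's figure
```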

If people were proper statisticians, they would carry around with them a notion of uncertainty. They would say to themselves on picking red over blue, “I think red is better, but I’m completely unsure about that”. Then when faced with the choice of blue and green they would say, “I thought red was better than blue, but I was completely unsure about it, so maybe blue really is better than red”. If red was better than blue, then 2/3 of the time green will be better than blue. If blue was better than red, then only 1/3 of the time will green be better than blue. Weighing these outcomes by their uncertainty (50/50) gives that green is just as likely as not to be better than blue, not 2/3 as Dr. Chen suggests.
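The same arithmetic written out, again as my own sketch of the Bayesian bookkeeping:

```python
from fractions import Fraction

# If red really was better than blue, green beats blue 2/3 of the time.
p_case_red_better = Fraction(2, 3)
# If blue really was better than red, green beats blue only 1/3 of the time.
p_case_blue_better = Fraction(1, 3)

# With no feedback the two cases stay equally likely, so average them.
print(Fraction(1, 2) * p_case_red_better
      + Fraction(1, 2) * p_case_blue_better)  # 1/2
```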

[The math is the same for preferences, so Dr. Chen’s argument for “slight preferences” is not sufficient.]

Dr. Chen’s argument depends on people *forgetting* information, specifically about the certainty of their previous choices. While this is a good candidate explanation for what’s going on, it’s not evidence for people doing “the right thing.”

You can verify this yourself in simulations. Randomly assign red, green, and blue the values 1, 2, and 3. Have the simulation ask you whether red is better than blue, and pick whichever you like. Then have the simulation give you a choice between the other two. 50% of the time, you will pick the worse one. This is in stark contrast to the real Monty Hall problem, where in simulations you will find yourself happily winning 67% of the time.
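Here is one way to run that simulation in Python; the always-pick-green rule stands in for the claimed 2/3 preference:

```python
import random

trials, wins = 10_000, 0
for _ in range(trials):
    # Randomly assign the values 1 (worst) to 3 (best) to the three colors.
    ranks = dict(zip(["red", "green", "blue"], random.sample([1, 2, 3], 3)))
    # You picked red over blue arbitrarily, and no feedback arrives.
    # Now always "switch" to green in the blue-vs-green choice, as
    # Dr. Chen's 2/3 figure would recommend.
    if ranks["green"] > ranks["blue"]:
        wins += 1
print(wins / trials)  # ~0.50, not 0.67
```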

Critically, the Monty Hall problem relies on the revelation of new information (“The car is not behind door number 2”) in the context of the previous choice. The choice itself is not sufficient, and gives no new information. If Monty Hall didn’t show you the goat, you’d be at 50/50. Ask yourself, in this case of cognitive dissonance, what new information is being revealed.

Note that if after you chose red over blue I told you “You were right, red was better than blue”, then the probability of green being better than blue goes up to 2/3. This “You were right” information that is conditioned on your previous choice is the key to making this a Monty Hall style problem. Since people get no such feedback, they don’t know if they were right to choose red or wrong, and so the analysis shouldn’t apply.
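And the feedback variant, to round out the comparison: conditioning on the “You were right” message recovers the 2/3, as a small extension of the simulation above shows.

```python
import random

kept, wins = 0, 0
while kept < 10_000:
    ranks = dict(zip(["red", "green", "blue"], random.sample([1, 2, 3], 3)))
    # Keep only the trials where the feedback "You were right, red was
    # better than blue" would actually have been given.
    if ranks["red"] > ranks["blue"]:
        kept += 1
        if ranks["green"] > ranks["blue"]:
            wins += 1
print(wins / kept)  # ~0.67 once that feedback is conditioned on
```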