Tuesday, January 19, 2016

Fisking the Affect Heuristic

Misleading.
I'm doing this to measure how misleading it is. You can too, but it's done by comparing the volume of edits to the original, which means reading the original.

Pleasant and unpleasant feelings are central to human reasoning, and the affect heuristic comes with lovely biases—some of my favorites.
Start looking for evidence that Yudkowsky distinguishes between tuned and untuned heuristics.

This may sound rational—why not pay more to protect the more valuable object?—until you realize that the insurance doesn't protect the clock, it just pays if the clock is lost, and pays exactly the same amount for either clock.
...which is how insurance protects things. Further, if the clock is lost, no insurance can rediscover it. For a visual: I can't unsmash Grandfather's clock, I can only buy a new one that looks similar.
And yes, it was stated that the insurance was with an outside company, so it gives no special motive to the movers.
Consider: you'd be quite upset if the first clock were lost, and not particularly upset about the second. Would a cool hundred make you feel better? Remember, when stressed, most will take that stress out on others, leading to a cycle of violence. (Partly because cities have high baseline stress - there's a threshold.)

Not that I'm necessarily defending the wisdom of this particular decision; I'm attacking this dismissal of it.
Maybe you could get away with claiming the subjects were insuring affective outcomes, not financial outcomes—purchase of consolation.
Yes, maybe you could get away with claiming to have foreseen my objection.

Yamagishi (1997) showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal.  Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who's more likely to survive than not.  
About that.
Only 52% could do item AB30901, which is to look at a table on page 118 of the 1980 World Almanac and answer:
According to the chart, did U.S. exports of oil (petroleum) increase or decrease between 1976 and 1978?
Are we sure subjects understand percentages? Given literacy isn't binary, did you check whether subjects who do understand percentages can be bothered to work it out?
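
Not that the division is hard, but here, concretely, is the step subjects are being trusted to perform. A throwaway Python sketch, using only the figures quoted above:

    # Put both disease framings on the same scale.
    print(1286 / 10000)   # 0.1286 -> 12.86% fatal
    print(24.14 / 100)    # 0.2414 -> 24.14% fatal

The thousand-dead-bodies disease is the less fatal of the two. You only notice if you bother to do the division, which is exactly what's in question.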

The hypothesis motivating the experiment was that saving 150 lives sounds vaguely good—is that a lot? a little?—while saving 98% of something is clearly very good because 98% is so close to the upper bound of the percentage scale.  Lo and behold, saving 150 lives had mean support of 10.4, while saving 98% of 150 lives had mean support of 13.6.
Yes, I would predict the same.
I would predict that subjects map subjective appreciation onto the objective scale: more intense appreciation results in higher scores. It gets them out of the test the quickest.
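
Worth spelling out the arithmetic the framing buries. A quick sketch, figures straight from the passage above:

    # "98% of 150 lives" is fewer lives than "150 lives."
    print(0.98 * 150)   # 147.0 lives saved
    # Mean support: 150 lives -> 10.4; 147 lives, framed as 98% -> 13.6.

Saving strictly fewer lives scored higher support, because 98% reads as "very good" on the subjective scale.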

Subjects don't think so fuzzily about stuff that will concretely impact them. Try asking for a raise while saying something like, "A raise of [your target] would increase my productivity by 95% of the difference between no raise and the maximum raise," and see how that works out for you.

Or consider the report of Denes-Raj and Epstein (1994):  Subjects offered an opportunity to win $1 each time they randomly drew a red jelly bean from a bowl, often preferred to draw from a bowl with more red beans and a smaller proportion of red beans.  E.g., 7 in 100 was preferred to 1 in 10.
About that.
Notice: larger number -> stop thinking. It's worth losing a few bucks now and then to stop thinking earlier.
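
To put a number on "a few bucks": a quick expected-value sketch, assuming the $1-per-red-draw payoff as stated.

    # Expected winnings per draw at $1 per red bean.
    p_big   = 7 / 100   # 7 red out of 100 beans
    p_small = 1 / 10    # 1 red out of 10 beans
    print(p_big)              # $0.07 expected per draw
    print(p_small)            # $0.10 expected per draw
    print(p_small - p_big)    # $0.03: the per-draw price of not thinking

Three cents a draw is a cheap fee for skipping the arithmetic.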

According to Denes-Raj and Epstein, these subjects reported afterward that even though they knew the probabilities were against them, they felt they had a better chance when there were more red beans.
There are a couple of things that could be going on here. Naturally, Denes-Raj and Epstein did not ask the necessary discriminating questions.

First, zombies. They know they're supposed to say "the probabilities are against me" but don't know what that means.

Second, they do know what it means, but it's too much trouble to remember for a few bucks.

Third, a few bucks is less rewarding than playing a more exciting game. Words != aliefs. Working out what they were thinking, based on what they said they were thinking, is highly nontrivial.

You should meditate upon this thought until you attain enlightenment as to how the rest of the planet thinks about probability.
Yudkowsky defines 'rationality' as 'winning.' Those who don't think about probability are having more surviving children than those who do.

Unfortunately a real study would be costly. You'd have to survey notable life setbacks, find ones that could plausibly be affected by probability, then run a longitudinal study, using IQ-matched controls, where one side is taught probability. (Not using Common Core.) See if the chosen setbacks occur less often. It's not rational to do work when you can get grants to not do any work.

Nonetheless, Finucane et al. found that for nuclear reactors, natural gas, and food preservatives, presenting information about high benefits made people perceive lower risks; presenting information about higher risks made people perceive lower benefits; and so on across the quadrants.
Subjects are rounding off 'high risk' to 'don't do it' and 'low risk' to 'do it.' Let's do a little search/replace.

"presenting information about do it made people perceive do it; presenting information about don't do it made people perceive don't do it."

A thrilling result.
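
The substitution really is mechanical, if you want to check my work. A toy Python sketch; the phrase-to-verdict mapping is my gloss, not anything Finucane tested:

    # Mechanically apply the gloss to the finding as quoted above.
    finding = ("presenting information about high benefits made people perceive "
               "lower risks; presenting information about higher risks made "
               "people perceive lower benefits")
    gloss = {"high benefits": "do it", "lower risks": "do it",
             "higher risks": "don't do it", "lower benefits": "don't do it"}
    for phrase, verdict in gloss.items():
        finding = finding.replace(phrase, verdict)
    print(finding)

It prints the sentence above, verbatim.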

Ganzach (2001) found the same effect in the realm of finance.  According to ordinary economic theory, return and risk should correlate positively—or to put it another way, people pay a premium price for safe investments, which lowers the return; stocks deliver higher returns than bonds, but have correspondingly greater risk.  When judging familiar stocks, analysts' judgments of risks and returns were positively correlated, as conventionally predicted.  But when judging unfamiliar stocks, analysts tended to judge the stocks as if they were generally good or generally bad—low risk and high returns, or high risk and low returns.
So we've found, at least in this instance, that subjects will use this silly low-cost algorithm for zero-stakes, zero-cost verbal answers, but not at, for example, their job.

I wonder why caps-r Rationality isn't more popular.



Did you find any evidence that Yudkowsky discriminates between tuned and untuned heuristics?

Some related subjects: thinking rationally vs. haha proles. Emotion vs. logic. Algorithm vs. consciousness.

Since today's economists (except of course the Austrian School) have abandoned the apparently unfashionable concept of causality in favor of the reassuringly autistic positivism of pure statistical correlation, it has escaped their attention that when you stop shooting heroin, you feel awful. 
Can't use autistic correlation to substitute for empathy, either, it turns out.
