Does it make sense to talk about “rational people”?

That is, is there a sub-population of individuals who consistently exhibit less cognitive bias and better judgment under uncertainty than average people? Do these people have the dispositions we’d intuitively associate with more thoughtful habits of mind? (Are they more flexible and deliberative, less dogmatic and impulsive?)

And, if so, what are the characteristics associated with rationality? Are rational people more intelligent? Do they have distinctive demographics, educational backgrounds, or neurological features?

This is my attempt to find out what the scientific literature has to say about the question. (Note: I’m going to borrow heavily from Keith Stanovich, as he’s the leading researcher in individual differences in rationality. My positions are very close, if not identical, to his, though I answer some questions that he doesn’t cover.)

A minority of people avoid cognitive biases

Most of the standard tests for cognitive bias find that most study participants “fail” (display bias) but a minority “pass” (give the rational or correct answer).

The Wason Selection Task is a standard measure of confirmation bias. Fewer than 10% of subjects got it right in Wason’s original experiment.[1]
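
For the unfamiliar, here is the logic of the task in the commonly taught version: four cards show E, K, 4, and 7, and you must decide which to turn over to test the rule “if a card has a vowel on one side, it has an even number on the other.” The sketch below uses those textbook card faces (not Wason’s original stimuli) and simply encodes which cards could falsify the rule:

```python
# Which cards are worth turning over? A card matters only if its
# hidden side could falsify the rule "vowel on one side -> even
# number on the other."

def worth_turning(face: str) -> bool:
    if face.isalpha():
        # A vowel might hide an odd number (falsifying the rule);
        # a consonant can't falsify it no matter what's on the back.
        return face.upper() in "AEIOU"
    # An odd number might hide a vowel (falsifying the rule);
    # an even number can't falsify it.
    return int(face) % 2 == 1

print([card for card in ["E", "K", "4", "7"] if worth_turning(card)])
# -> ['E', '7']. The common wrong answer is E and 4; turning over the 4
# can only confirm the rule, never refute it, hence "confirmation bias."
```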

The famous “feminist bank teller” (Linda) question from Tversky and Kahneman’s experiments is a measure of the conjunction fallacy. Only about 10% of subjects got it right; even among students in the decision science program of the Stanford Business School, who had taken advanced courses in statistics, probability, and decision theory, only 15% answered correctly.[2]
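
The correct answer follows from the conjunction rule of probability, which is worth stating explicitly: for any two events, P(A and B) = P(A) × P(B | A) ≤ P(A), because P(B | A) can be at most 1. However strongly the description suggests “feminist,” “feminist bank teller” can never be more probable than “bank teller.”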

Overconfidence bias shows up in 88% of study participants.[6]

The Cognitive Reflection Test (CRT) measures the ability to avoid choosing intuitive-but-wrong answers. Only 17% of subjects get all three questions right.[7]
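
The best-known CRT item is the bat-and-ball problem: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball; how much does the ball cost? The intuitive answer is 10 cents, but then the bat would cost $1.10 and the pair $1.20. Writing x for the ball’s price, x + (x + 1.00) = 1.10, so 2x = 0.10 and the ball costs 5 cents.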

The incidence of framing effects appears to be lower and more variable. Stanovich’s experiments find a framing effect in 30-50% of subjects.[3] The frequency of framing effects in Kahneman and Tversky’s experiments falls in roughly the same range.[4] The incidence of the sunk cost fallacy in experiments is only about 25%.[5]

On 15 cognitive bias questions developed by Keith Stanovich and colleagues, subjects’ accuracy rates ranged from 92.2% (for a gambler’s fallacy question) to 15.6% (for a sample size neglect question). The average score was 6.88 out of 15 (46%), with a standard deviation of 2.32.[8]

Many standard measures of cognitive bias, then, find that only a minority of subjects give the correct answer, and there is substantial individual variation in susceptibility to bias.

Correlation between cognitive bias tasks

Is “rationality” a cluster in thingspace? “Rational” only makes sense as a descriptor of people if the same people are systematically better at cognitive bias tasks across the board. This appears to be true.

Stanovich and colleagues found that Cognitive Reflection Test score correlated (r = 0.49) with score on the 15-question cognitive bias battery; performance on the battery also correlated (r = 0.41) with IQ, as measured by the Wechsler Abbreviated Scale of Intelligence.[8]
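
(For calibration: squaring a correlation gives the share of variance explained, so r = 0.49 means CRT score accounts for about 0.49² ≈ 24% of the variance in bias-test performance. These are substantial correlations by the standards of psychology, but far from a perfect overlap.)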

Stanovich also found that four rational thinking tasks (a syllogistic reasoning task, a Wason selection task, a statistical reasoning task, and an argument evaluation task) were pairwise correlated (r = 0.2-0.4, significant at the 0.001 level). Performance on these tasks also correlated with SAT score (r = 0.53) and, more weakly, with math background (r = 0.145).[9]

Stanovich found, however, that many cognitive bias tests fail to correlate with measures of intelligence such as SAT or IQ scores.[10]

The Cognitive Reflection Test was found to be significantly correlated with correct responses on base rate, conservatism, and overconfidence problems, but not with the endowment effect.[12]

Philip Tetlock’s “super-forecasters,” the top 2% of predictors on current-events questions in the IARPA-sponsored Good Judgment Project, outperformed the average forecaster by 65% and the best learning algorithms by 35-60%. The best forecasters scored significantly higher than average on IQ, the Cognitive Reflection Test, and political knowledge.[11]

Correct responses on probability questions correlate with lower rates of the conjunction fallacy.[16]

In short, there appears to be significant correlation among a variety of tests of cognitive bias. Higher IQ also correlates with avoiding cognitive biases, though many individual biases are uncorrelated with IQ, and the variation in cognitive bias is not fully explained by IQ differences. The Cognitive Reflection Test is correlated with less cognitive bias, with IQ, and with forecasting ability. There’s a compelling case that “rationality” is a distinct skill, related to intelligence and to math or statistics ability.

Cognitive bias performance and dispositions

“Dispositions” are personal qualities that reflect one’s priorities — “curiosity” would be an example of a disposition. Performance on cognitive bias tests is correlated with the types of dispositions we’d associate with being a thoughtful and reasonable person.

People scoring higher on the Cognitive Reflection Test are more patient, as measured by how willing they are to wait for a larger financial reward.[7]
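
To make “patience” concrete, with hypothetical numbers rather than the paper’s exact items: someone who takes $3,400 now instead of $3,800 in a month is revealing a monthly discount rate of at least 3800/3400 − 1 ≈ 12%, which compounds to more than 250% a year. Lower CRT scorers were more likely to make this sort of impatient choice.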

Higher scores on the Cognitive Reflection Test also correlate with utilitarian thinking (as measured by willingness to throw the switch in the trolley problem).[13]

Belief in the paranormal is correlated with higher rates of the conjunction fallacy.[17]

Score on rational thinking tasks (argument evaluation, syllogisms, and statistical reasoning) is correlated (r = 0.413) with score on a Thinking Dispositions questionnaire, which measures Actively Open-Minded Thinking, Dogmatism, Paranormal Beliefs, and other traits.[9]

Basically, it appears that lower rates of cognitive bias correlate with behavioral traits one could intuitively characterize as “reasonable.” Less biased people are less dogmatic and more open-minded. They’re less likely to believe in the supernatural. They behave more like ideal economic actors. Most of this seems to add up to being more WEIRD (Western, educated, industrialized, rich, and democratic), though that may be a function of the features researchers chose to investigate.

Factors correlating with cognitive bias

Men score higher on the Cognitive Reflection Test than women: the group that answers all three questions correctly is two-thirds men, while the group that answers all three wrong is two-thirds women.[7]

Scientists [14] and mathematicians [15] performed no better than undergraduates on the Wason Selection Task, though mathematics undergraduates did better than history undergraduates.

Autistic adolescents are less susceptible to the conjunction fallacy than typically developing adolescents.[18]

Correct responses on conjunction fallacy and base rate questions correspond to better performance on go/no-go tasks and to a larger N2 amplitude, an EEG component believed to reflect executive inhibition ability.[19] Response inhibition is thought to be based in the striatum and associated with striatal dopamine receptors.

Variants of the COMT gene predict susceptibility to confirmation bias.[20] COMT is involved in the degradation of dopamine in the prefrontal cortex. The Met allele of the Val158Met polymorphism makes the enzyme less efficient, which increases prefrontal dopamine levels and working memory for abstract rules. Met carriers exhibited more confirmation bias (p = 0.005).

There doesn’t seem to be that much data on the demographic characteristics of the most and least rational people.

There’s some suggestive neuroscience on the issue; the ability to avoid intuitive-but-wrong choices has to do with executive function and impulsivity, while the ability to switch tasks and avoid being anchored on earlier beliefs has to do with prefrontal cortex learning. As we’ll see later, Stanovich (independently of the neuroscience evidence) categorizes cognitive biases into two distinct types, more or less matching this distinction between “consciously avoiding the intuitive-but-wrong answer” skills and the “considering that you might be wrong” skills.

Is there a hyper-rational elite?

It seems clear that there’s such a thing as individual variation in rationality, that people who are more rational in one area tend to be more rational in others, and that rationality correlates with the kinds of things you’d expect: intelligence, mathematical ability, and a flexible cognitive disposition.

It’s not obvious that “cognitive biases” are a natural category — some are associated with IQ, while some aren’t, and it seems quite probable that different biases have different neural correlates. But tentatively, it seems to make sense to talk about “rationality” as a single phenomenon.

A related question is whether there exists a small population of extreme outliers with very low rates of cognitive bias, a rationality elite. Tetlock’s experiments seem to suggest this may be true — that there are an exceptional 2% who forecast significantly better than average people, experts, or algorithms.

In order for the “rationality elite” hypothesis to be generally valid, we’d have to see the same people score exceptionally high on a variety of cognitive bias tests. There doesn’t yet appear to be evidence to confirm this.

Stanovich’s tripartite model

Stanovich proposes dividing “System 2,” the reasoning mind, into two further parts: the “reflective mind” and the “algorithmic mind.” The reflective mind engages in self-skepticism; it interrupts processes and asks “is this right?” The algorithmic mind handles working memory and cognitive processing capacity; it is what IQ tests and the SAT measure.

This would explain why some cognitive biases, but not others, correlate with IQ. Intelligence does not protect against myside bias, the bias blind spot, sunk costs, or anchoring effects. Intelligence is correlated with various tests of probabilistic reasoning (base rate neglect, probability matching), tests of logical reasoning (belief bias, argument evaluation), expected value maximization in gambles, overconfidence bias, and the Wason Selection Task.

One might argue that the tasks that correlate with intelligence test symbolic manipulation skill, the ability to consciously follow rules of logic and math, while the tasks that don’t correlate with intelligence require cognitive flexibility, the ability to change one’s mind and avoid being tied to past choices.

Stanovich talks about “cognitive decoupling”, the ability to block out context and experiential knowledge and just follow formal rules, as a main component of both performance on intelligence tests and performance on the cognitive bias tests that correlate with intelligence. Cognitive decoupling is the opposite of holistic thinking. It’s the ability to separate, to view things in the abstract, to play devil’s advocate.

Cognitive flexibility, for which the “actively open-minded thinking” scale is a good proxy measure, is the ability to question your own beliefs. It predicts performance on a forecasting task, because open-minded people seek out more information.[21] Less open-minded individuals are more biased towards their own first opinions and do less searching for information.[22] Actively open-minded thinking increases with age (in middle schoolers) and correlates with cognitive ability.[23]

Under this model, people with high IQs, and especially people with training in probability, economics, and maybe explicit rationality, will be better at the cognitive bias skills that have to do with cognitive decoupling, but won’t be better at the others.

Speculatively, we might imagine that there is a “cognitive decoupling elite” of smart people who are good at probabilistic reasoning and score high on the cognitive reflection test and the IQ-correlated cognitive bias tests. These people would be more likely to be male, more likely to have at least undergrad-level math education, and more likely to have utilitarian views. Speculating a bit more, I’d expect this group to be likelier to think in rule-based, devil’s-advocate ways, influenced by economics and analytic philosophy. I’d expect them to be more likely to identify as rational.

I’d expect them not to be much better than average at avoiding the cognitive biases uncorrelated with intelligence. The cognitive decoupling elite would be just as prone to dogmatism and anchoring as anybody else. However, the subset who are also cognitively flexible would probably be noticeably better at predicting the future. Tetlock’s finding that the most accurate political pundits are “foxes” rather than “hedgehogs” seems related to this idea of the “reflective mind.” Most smart abstract thinkers are not especially open-minded, but those who are get things right a lot more often than everybody else.

It’s also important to note that experiments on cognitive bias pinpoint a minority, but not a tiny minority, of less biased individuals. Across the colleges in Frederick’s sample, 17% of students got all three Cognitive Reflection Test questions right; at MIT, the figure was 45%. MIT has about 120,000 living alumni, and 27% of Americans have a bachelor’s or professional degree. The number of Americans who would get the Cognitive Reflection Test right is probably on the order of a few million, that is, a few percent of the total population. Obviously, requiring good scores on additional cognitive bias tests would narrow the population of ultra-rational people further, but we’re not talking about a tiny elite cabal here. In the terminology of my previous post, the evidence points to the existence of unusually rational people, but only at the “One-Percenter” level. If there are Elites, Ultra-Elites, and beyond, we don’t yet have the tests to detect them.
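
A back-of-envelope version of that estimate, with the guessed inputs labeled as assumptions (the 27% and 17% figures are from the text; the adult-population figure and the discounted pass rate are mine):

```python
# Rough estimate of how many Americans would ace the Cognitive
# Reflection Test. The 27% degree share is from the text; the other
# two inputs are assumptions, flagged below.
us_adults = 250_000_000   # assumption: roughly 250 million US adults
degree_share = 0.27       # from the text: bachelor's or professional degree
pass_rate_guess = 0.05    # assumption: 5% of all graduates score 3/3
                          # (Frederick's 17% came disproportionately from
                          # selective schools, so the general rate is lower)

ultra_rational = us_adults * degree_share * pass_rate_guess
print(f"{ultra_rational / 1e6:.1f} million")  # ~3.4 million, i.e. "a few million"
```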

Conclusion

Yes, there are people who are consistently less cognitively biased than average. They are a minority, but not a tiny minority. They are smarter and more reasonable than average. When you break the measures of cognitive bias into two types, you find that intelligence correlates with measures of the ability to reason formally, but not with measures of the ability to question one’s own judgment; the latter correlate more with dispositions like active open-mindedness. There’s no evidence yet for a very small (e.g., less than 1% of the population) group of extremely rational people, probably because we don’t have enough experimental power to detect extremes of performance on cognitive bias tests.

References

[1] Wason, Peter C. “Reasoning about a rule.” The Quarterly Journal of Experimental Psychology 20.3 (1968): 273-281.

[2] Tversky, Amos, and Daniel Kahneman. “Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment.” Psychological Review 90.4 (1983): 293-315.

[3] Stanovich, Keith E., and Richard F. West. “Individual differences in framing and conjunction effects.” Thinking & Reasoning 4.4 (1998): 289-317.

[4] Tversky, Amos, and Daniel Kahneman. “Rational choice and the framing of decisions.” Journal of Business 59.4 (1986): S251-S278.

[5] Friedman, Daniel, et al. “Searching for the sunk cost fallacy.” Experimental Economics 10.1 (2007): 79-104.

[6] West, Richard F., and Keith E. Stanovich. “The domain specificity and generality of overconfidence: Individual differences in performance estimation bias.” Psychonomic Bulletin & Review 4 (1997): 387-392.

[7] Frederick, Shane. “Cognitive reflection and decision making.” Journal of Economic Perspectives 19.4 (2005): 25-42.

[8] Toplak, Maggie E., Richard F. West, and Keith E. Stanovich. “The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks.” Memory & Cognition 39.7 (2011): 1275-1289.

[9] Stanovich, Keith E., and Richard F. West. “Individual differences in rational thought.” Journal of Experimental Psychology: General 127 (1998): 161-188.

[10] Stanovich, Keith E., Richard F. West, and Maggie E. Toplak. “Intelligence and rationality.” In R. J. Sternberg & S. B. Kaufman (Eds.), Cambridge Handbook of Intelligence (pp. 784-826). New York: Cambridge University Press, 2011.

[11] Ungar, Lyle, et al. “The Good Judgment Project: A Large Scale Test of Different Methods of Combining Expert Predictions.” 2012 AAAI Fall Symposium Series. 2012.

[12] Hoppe, Eva I., and David J. Kusterer. “Behavioral biases and cognitive reflection.” Economics Letters 110.2 (2011): 97-100.

[13] Paxton, Joseph M., Leo Ungar, and Joshua D. Greene. “Reflection and reasoning in moral judgment.” Cognitive Science 36.1 (2012): 163-177.

[14] Griggs, Richard A., and Sarah E. Ransdell. “Scientists and the selection task.” Social Studies of Science 16.2 (1986): 319-330.

[15] Inglis, Matthew, and Adrian Simpson. “Mathematicians and the selection task.” Proceedings of the 28th International Conference on the Psychology of Mathematics Education. Vol. 3. 2004.

[16] Benassi, Victor A., and Russell L. Knoth. “The intractable conjunction fallacy: Statistical sophistication, instructional set, and training.” Journal of Social Behavior & Personality (1993).

[17] Rogers, Paul, Tiffany Davis, and John Fisk. “Paranormal belief and susceptibility to the conjunction fallacy.” Applied Cognitive Psychology 23.4 (2009): 524-542.

[18] Morsanyi, Kinga, Simon J. Handley, and Jonathan S. B. T. Evans. “Decontextualised minds: Adolescents with autism are less susceptible to the conjunction fallacy than typically developing adolescents.” Journal of Autism and Developmental Disorders 40.11 (2010): 1378-1388.

[19] De Neys, Wim, et al. “What makes a good reasoner?: Brain potentials and heuristic bias susceptibility.” Proceedings of the Annual Conference of the Cognitive Science Society. Vol. 32. 2010.

[20] Doll, Bradley B., Kent E. Hutchison, and Michael J. Frank. “Dopaminergic genes predict individual differences in susceptibility to confirmation bias.” The Journal of Neuroscience 31.16 (2011): 6188-6198.

[21] Haran, Uriel, Ilana Ritov, and Barbara A. Mellers. “The role of actively open-minded thinking in information acquisition, accuracy, and calibration.” Judgment & Decision Making 8.3 (2013).

[22] Baron, Jonathan. “Beliefs about thinking.” Informal Reasoning and Education (1991): 169-186.

[23] Kokis, Judite V., et al. “Heuristic and analytic processing: Age trends and associations with cognitive ability and cognitive styles.” Journal of Experimental Child Psychology 83.1 (2002): 26-52.