What’s the probability that in the next 500 years, the Earth’s polar ice caps will melt completely? How did you arrive at a specific number? Some theorists argue that your response was meaningless, because nothing similar to the event has ever occurred. But our studies show that individuals’ responses are systematic and predictable.
Those theorists who describe themselves as frequentists argue that whatever estimate you came up with for the melting of the Earth’s ice caps is meaningless: probabilities should be interpreted as the limits of repeated observations, and there is no repeatable event comparable to the ice caps melting.
Bayesian theorists, however, argue that the estimate you gave reflects your degree of belief in the event. With Phil Johnson-Laird and Max Lotstein, I developed a theory to explain how humans derive numerical estimates of subjective probabilities
(Khemlani, Lotstein, & Johnson-Laird, 2012). This theory distinguishes between an intuitive prenumerical system and a deliberative system capable of arithmetic. A computer implementation of the theory uses mental models of evidence to construct iconic representations of degrees of belief.
The system works by constructing simulations of evidence in the form of quantified expressions, e.g., Some Arctic glaciers have melted (Khemlani et al., under review). It then infers a bounded analog magnitude representation of belief by sampling from the simulations of evidence. The analog magnitude representation is similar to representations found in infants (Xu & Spelke, 2000), animals (Meck & Church, 1983), and adults in non-numerate cultures (Gordon, 2004). The procedures that operate over the analog belief representations are constrained: reasoners can combine and update beliefs, but they do so without relying on memory storage. The representations accordingly impose little cognitive strain, but their simplicity predicts systematic errors.
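To make the sampling idea concrete, here is a minimal Python sketch of one way such a process could work: a quantified assertion is simulated as a handful of tokens, and a noisy, bounded magnitude of belief is derived by sampling them rather than by exact counting. The function names, the token count, and the sample size are illustrative assumptions, not the published implementation.

```python
import random

def simulate_evidence(n_tokens=8, proportion=0.4):
    # A small mental-model-style simulation of "Some Arctic glaciers have
    # melted": a set of glacier tokens, some of which carry the property
    # "melted". (Token count and proportion are illustrative assumptions.)
    n_true = round(n_tokens * proportion)
    return [i < n_true for i in range(n_tokens)]

def analog_belief(model, n_samples=20):
    # Derive a bounded, noisy magnitude of belief by repeatedly sampling
    # tokens from the simulation and tracking the proportion that bear the
    # property; no exact count is stored.
    hits = sum(random.choice(model) for _ in range(n_samples))
    return hits / n_samples  # a value in [0, 1], imprecise by design

model = simulate_evidence()
print(analog_belief(model))  # roughly 0.4, varying from run to run
```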
[Figure: Panels A–C, described below. Image: http://mentalmodels.princeton.edu/skhemlani/files/2012/09/JPD12-Theoretical.png]
For instance, consider the following problem: suppose you’ve estimated the probability of two unique events, e.g., the probability that the ice caps will melt is 10% and the probability that sea lions will go extinct is 2%. What is the probability that both events occur, i.e., the ice caps will melt and sea lions will go extinct? If the two events are independent, then individuals should multiply the probabilities, e.g., P(ice caps melt & sea lions die) = P(ice caps melt) × P(sea lions die) = .10 × .02 = .002. That general strategy is reflected in Figure A above, where P(A) = P(ice caps melt) and P(B) = P(sea lions die). However, our system is limited, and it generally does not multiply probabilities in such a manner. It has to resort to a simpler computation to combine the two probabilities, e.g., one that splits the difference between them (reflected in Figure B). Experimental data (Figure C) corroborate the second strategy and the computational model we developed.
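The contrast between the two strategies can be seen in a short Python sketch. The averaging function below is an illustrative stand-in for “splitting the difference,” not the model’s exact primitive operations, but it shows why the simpler computation produces systematic errors: the combined estimate can exceed the probability of one of the conjuncts, a conjunction error.

```python
def product_rule(p_a, p_b):
    # Normative conjunction for independent events: P(A & B) = P(A) * P(B).
    return p_a * p_b

def split_the_difference(p_a, p_b):
    # A simpler combination in the spirit of the strategy described above:
    # take a value midway between the two estimates. (Plain averaging is an
    # illustrative assumption, not the model's exact procedure.)
    return (p_a + p_b) / 2

p_ice, p_sea_lions = 0.10, 0.02
print(product_rule(p_ice, p_sea_lions))          # ~0.002
print(split_the_difference(p_ice, p_sea_lions))  # ~0.06: higher than
                                                 # P(sea lions die) alone,
                                                 # i.e., a conjunction error
```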
- Barth, H., et al. (2006). Nonsymbolic arithmetic in adults and young children. Cognition, 98, 199–222.
- Gordon, P. (2004). Numerical cognition without words: Evidence from Amazonia. Science, 306, 496–499.
- Khemlani, S., Lotstein, M., & Johnson-Laird, P.N. (2012). The probabilities of unique events. PLoS ONE, 7, e45975.
- Meck, W.H., & Church, R.M. (1983). A mode control model of counting and timing processes. Journal of Experimental Psychology: Animal Behavior Processes, 9, 320–334.
- Xu, F., & Spelke, E.S. (2000). Large number discrimination in 6-month-old infants. Cognition, 74, B1–B11.