Gordon Briggs and I presented a new computational model and a novel dataset on how people make generative causal deductions at this year’s International Conference on Cognitive Modeling (ICCM). For instance, if you know that habituation causes seriation and that seriation prevents methylation, you don’t need to know what all those words mean in order to make inferences about the effect of habituation on methylation.
Our algorithm interprets such statements and stochastically builds discrete mental simulations of the events they describe. Here’s the abstract:
People without any advanced training can make deductions about abstract causal relations. For instance, suppose you learn that habituation causes seriation, and that seriation prevents methylation. The vast majority of reasoners infer that habituation prevents methylation. Cognitive scientists disagree on the mechanisms that underlie causal reasoning, but many argue that people can mentally simulate causal interactions. We describe a novel algorithm that makes domain-general causal inferences. The algorithm constructs small-scale iconic simulations of causal relations, and so it implements the “model” theory of causal reasoning (Goldvarg & Johnson-Laird, 2001; Johnson-Laird & Khemlani, 2017). It distinguishes between three different causal relations: causes, enabling conditions, and preventions. And it can draw inferences about both orthodox relations (habituation prevents methylation) and omissive causes (the failure to habituate prevents methylation). To test the algorithm, we subjected participants to a large battery of causal reasoning problems and compared their performance to what the algorithm predicted. We found a close match between human causal reasoning and the patterns predicted by the algorithm.
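To give a flavor of the model-theoretic idea, here is a minimal Python sketch of how composing two causal relations can yield a deduction. It is not our algorithm (which builds simulations stochastically and incrementally); it simply enumerates the fully explicit possibilities that the model theory assigns to *causes*, *enables*, and *prevents* (after Goldvarg & Johnson-Laird, 2001), conjoins them on the shared term, and checks which relation the result matches. The function name `compose` and the set-based encoding are illustrative choices, not part of the paper.

```python
# Fully explicit possibilities for each causal relation, encoded as
# sets of (antecedent, consequent) truth-value pairs.
RELATIONS = {
    "causes":   {(True, True), (False, True), (False, False)},
    "enables":  {(True, True), (True, False), (False, False)},
    "prevents": {(True, False), (False, True), (False, False)},
}

def compose(rel_ab, rel_bc):
    """Conjoin the possibilities of 'A rel_ab B' and 'B rel_bc C' on
    the shared term B, then project the result onto (A, C)."""
    joined = {
        (a, c)
        for (a, b1) in RELATIONS[rel_ab]
        for (b2, c) in RELATIONS[rel_bc]
        if b1 == b2  # the middle term must take the same value
    }
    # Report whichever relation's possibilities the projection matches.
    for name, possibilities in RELATIONS.items():
        if joined == possibilities:
            return name
    return None  # no single causal relation fits

# "Habituation causes seriation; seriation prevents methylation."
print(compose("causes", "prevents"))  # -> prevents
```

On this toy encoding, the composition of *causes* and *prevents* comes out as *prevents*, matching the inference that most participants draw in the example above.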