All posts by: skhemlani (47)


📃 Paper on norms and future causation out in Cognitive Science

In a project led by Paul Henne (Lake Forest College), we recently published a paper in Cognitive Science about how norms affect prospective causal judgments, i.e., judgments about whether a particular situation can cause a future event. Here’s the abstract:

People more frequently select norm-violating factors, relative to norm-conforming ones, as the cause of some outcome. Until recently, this abnormal-selection effect has been studied using retrospective vignette-based paradigms. We use a novel set of video stimuli to investigate this effect for prospective causal judgments—that is, judgments about the cause of some future outcome. Four experiments show that people more frequently select norm-violating factors, relative to norm-conforming ones, as the cause of some future outcome. We show that the abnormal-selection effects are not primarily explained by the perception of agency (Experiment 4). We discuss these results in relation to recent efforts to model causal judgment.

and here’s a link to the paper.


📃 How people assess whether an explanation is “complete”

All explanations are incomplete, but some explanations are more complete than others — this is the central result of our recent work and some other research into explanatory reasoning (e.g., Zemla et al., 2017). Joanna Korman and I describe a new theory of explanatory reasoning in a paper now out in Acta Psychologica. Here’s the abstract:

All explanations are incomplete, but reasoners think some explanations are more complete than others. To explain this behavior, we propose a novel theory of how people assess explanatory incompleteness. The account assumes that reasoners represent explanations as causal mental models – iconic representations of possible arrangements of causes and effects. A complete explanation refers to a single integrated model, whereas an incomplete explanation refers to multiple models. The theory predicts that if there exists an unspecified causal relation – a gap – anywhere within an explanation, reasoners must maintain multiple models to handle the gap. They should treat such explanations as less complete than those without a gap. Four experiments provided participants with causal descriptions, some of which yield one explanatory model, e.g., A causes B and B causes C, and some of which demand multiple models, e.g., A causes X and B causes C. Participants across the studies preferred one-model descriptions to multiple-model ones on tasks that implicitly and explicitly required them to assess explanatory completeness. The studies corroborate the theory. They are the first to reveal the mental processes that underlie the assessment of explanatory completeness. We conclude by reviewing the theory in light of extant accounts of causal reasoning.

and here’s the paper.
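As a rough illustration of the one-model vs. multiple-model distinction (my own sketch, not the model from the paper), a causal description can be treated as a graph whose edges are the stated cause-effect pairs: if all the links join into a single connected component, the description yields one integrated model, while a disconnected fragment corresponds to a gap.

```python
from itertools import chain

def num_models(causal_links):
    """Count connected components among the causal links, treating each
    'X causes Y' pair as an undirected edge. One component roughly
    corresponds to a single integrated model; more than one leaves a gap."""
    nodes = set(chain.from_iterable(causal_links))
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    for a, b in causal_links:
        parent[find(a)] = find(b)  # merge the two components

    return len({find(n) for n in nodes})

# "A causes B and B causes C": one integrated model
print(num_models([("A", "B"), ("B", "C")]))  # 1
# "A causes X and B causes C": two disconnected fragments, i.e., a gap
print(num_models([("A", "X"), ("B", "C")]))  # 2
```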


📃 QJEP paper on domino effects in causation

Our paper on domino effects in causation is now out in the Quarterly Journal of Experimental Psychology — the paper shows that when you contradict a node in a causal chain (e.g., A in A caused B and B caused C), the rest of the chain topples like falling dominoes. The results support the idea that people construct simulations when they reason about conflicts instead of revising their knowledge in a minimal way.

The paper is available for download here, and here’s the abstract:

Inconsistent beliefs call for revision—but which of them should individuals revise? A long-standing view is that they should make minimal changes that restore consistency. An alternative view is that their primary task is to explain how the inconsistency arose. Hence, they are likely to violate minimalism in two ways: they should infer more information than is strictly necessary to establish consistency and they should reject more information than is strictly necessary to establish consistency. Previous studies corroborated the first effect: reasoners use causal simulations to build explanations that resolve inconsistencies. Here, we show that the second effect is true too: they use causal simulations to reject more information than is strictly necessary to establish consistency. When they abandon a cause, the effects of the cause topple like dominos: Reasoners tend to deny the occurrence of each subsequent event in the chain. Four studies corroborated this prediction.
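The domino pattern can be sketched in a few lines (an illustrative toy, not the experimental materials or the authors’ simulation model): given a causal chain and a denied event, minimal revision would drop only the denied fact, whereas the domino pattern also denies every downstream effect.

```python
def domino_revision(links, denied):
    """Given a causal chain as ordered (cause, effect) pairs, return the
    events a reasoner denies after learning `denied` did not occur, on
    the domino pattern: every downstream effect of a denied cause
    topples too. Minimal revision would instead deny `denied` alone."""
    denied_set = {denied}
    for cause, effect in links:  # walk the chain from first cause onward
        if cause in denied_set:
            denied_set.add(effect)
    return denied_set

# "A caused B and B caused C"; contradict A and the whole chain topples:
links = [("A", "B"), ("B", "C")]
print(sorted(domino_revision(links, "A")))  # ['A', 'B', 'C']
# Contradict B and only the downstream portion falls; A survives:
print(sorted(domino_revision(links, "B")))  # ['B', 'C']
```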


📃 New JoCN paper on temporal reasoning and durations

A new paper by Laura Kelly, me, and Phil Johnson-Laird (link) describes systematic reasoning errors when people assess the consistency of durative temporal relations, as in the sentence “the sale happened during the convention.” The paper is out in the Journal of Cognitive Neuroscience in a special issue titled “Mental Models in Time,” edited by Virginie van Wassenhove. Here’s the abstract:

A set of assertions is consistent provided they can all be true at the same time. Naive individuals could prove consistency using the formal rules of a logical calculus, but it calls for them to fail to prove the negation of one assertion from the remainder in the set. An alternative procedure is for them to use an intuitive system (System 1) to construct a mental model of all the assertions. The task should be easy in this case. However, some sets of consistent assertions have no intuitive models and call for a deliberative system (System 2) to construct an alternative model. Formal rules and mental models therefore make different predictions. We report three experiments that tested their respective merits. The participants assessed the consistency of temporal descriptions based on statements using “during” and “before.” They were more accurate for consistent problems with intuitive models than for those that called for deliberative models. There was no robust difference in accuracy between consistent and inconsistent problems. The results therefore corroborated the model theory.
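To make the consistency task concrete, here is a brute-force sketch (my own illustration, not the paper’s model or procedure): a set of “before”/“during” assertions is consistent just in case at least one assignment of intervals to events, i.e., at least one model, satisfies every assertion.

```python
from itertools import product

def consistent(events, constraints, horizon=8):
    """Brute-force consistency check for temporal descriptions.
    Each event is an interval (start, end) with integer endpoints below
    `horizon`. Constraints: ('before', x, y) means x ends before y
    starts; ('during', x, y) means x lies strictly inside y. The set is
    consistent iff some assignment (some model) satisfies them all."""
    spans = [(s, e) for s in range(horizon) for e in range(s + 1, horizon)]
    for assignment in product(spans, repeat=len(events)):
        intervals = dict(zip(events, assignment))
        ok = True
        for rel, x, y in constraints:
            (xs, xe), (ys, ye) = intervals[x], intervals[y]
            if rel == "before" and not xe < ys:
                ok = False
                break
            if rel == "during" and not (ys < xs and xe < ye):
                ok = False
                break
        if ok:
            return True  # found a model of the whole description
    return False

# Consistent: the sale is during the convention, which precedes the gala
print(consistent(["sale", "convention", "gala"],
                 [("during", "sale", "convention"),
                  ("before", "convention", "gala")]))  # True
# Inconsistent: the sale cannot be both during and before the convention
print(consistent(["sale", "convention"],
                 [("during", "sale", "convention"),
                  ("before", "sale", "convention")]))  # False
```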


📃 ICYMI: The R Lab @ CogSci 2020

The Reasoning Lab presented work at the CogSci 2020 virtual conference this year, including research on temporal reasoning, reasoning about desire, genericity and semantic memory, teleology and agency, and quantification. Here’s an archive of the presentations:


Talk on interactivity at RSS 2020

I gave a talk titled “Leveraging conceptual constraints for interactive robotics” at an RSS 2020 workshop, AI & Its Alternatives in Assistive and Collaborative Robotics: Decoding Intent, organized by Deepak Gopinath, Ola Kalinowska, Mahdieh Nejati, Katarina Popovic, Brenna Argall, and Todd Murphey.

Here’s the video of the talk:


Lab alum Zach Horne starting at the University of Edinburgh in 2021

Zach Horne will join the faculty at the University of Edinburgh in the winter semester, 2021. Congratulations, Zach!

Lab alum Joanna Korman to start at Bentley University in Fall 2020

Congrats to Joanna Korman, who’ll be starting as an Assistant Professor at Bentley University this fall!

Chapter on syllogistic reasoning in the Handbook of Rationality

I wrote a new chapter on the psychology of syllogistic reasoning in the forthcoming Handbook of Rationality that summarizes recent advances in the field. Here’s a quick summary:

Psychologists have studied syllogistic inferences for more than a century, because they can serve as a microcosm of human rationality. “Syllogisms” is a term that refers to a set of 64 reasoning arguments, each of which comprises two premises, such as: “All of the designers are women. Some of the women are not employees. What, if anything, follows?” People make systematic mistakes on such problems, and they appear to reason using different strategies. A meta-analysis showed that many existing theories fail to explain such patterns. To address the limitations of previous accounts, two recent theories synthesized both heuristic and deliberative processing. This chapter reviews both accounts and addresses their strengths. It concludes by arguing that if syllogistic reasoning serves as a sensible microcosm of rationality, the synthesized theories may provide directions on how to resolve broader conflicts that vex psychologists of reasoning and human thinking.

You can read the full chapter here.
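In the spirit of model-based accounts of syllogisms (a brute-force sketch of my own, not a theory from the chapter), whether a conclusion follows from two premises can be checked by enumerating every small model of the premises and asking whether the conclusion holds in all of them.

```python
from itertools import product

def holds(statement, model):
    """Evaluate a quantified statement over a model, where a model is a
    sequence of individuals, each a dict mapping property -> bool."""
    kind, p, q = statement
    if kind == "all":       # All P are Q (vacuously true if no P)
        return all(ind[q] for ind in model if ind[p])
    if kind == "some":      # Some P are Q
        return any(ind[p] and ind[q] for ind in model)
    if kind == "some_not":  # Some P are not Q
        return any(ind[p] and not ind[q] for ind in model)

def follows(premises, conclusion, props=("D", "W", "E"), size=3):
    """True iff the conclusion holds in every model of at most `size`
    individuals in which all the premises hold."""
    kinds = [dict(zip(props, bits))
             for bits in product([False, True], repeat=len(props))]
    for n in range(1, size + 1):
        for model in product(kinds, repeat=n):
            if all(holds(p, model) for p in premises) and not holds(conclusion, model):
                return False  # counterexample model found
    return True

# "All designers (D) are women (W); some women are not employees (E)."
premises = [("all", "D", "W"), ("some_not", "W", "E")]
# "Some designers are not employees" does NOT follow: in a counterexample
# model, the non-employee women are simply not designers.
print(follows(premises, ("some_not", "D", "E")))  # False
```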


📃 New paper on teleological generics in press at Cognition

Joanna Korman and I have a new paper out in Cognition that examines statements such as “cars are for driving”. The statement is interesting because people tend to accept it, yet they reject statements such as “cars are for parking”, even though cars are parked just as often as they’re driven.

Such statements are called teleological generics — they’re generalizations that describe the function or purpose of some concept, such as cars. We developed a new theory of what makes them acceptable. The theory proposes that people mentally represent a privileged link called a “principled connection” between their concept of cars and their concept of driving.

Here’s the abstract of the paper:

Certain “generic” generalizations concern functions and purposes, e.g., cars are for driving. Some functional properties yield unacceptable teleological generics: for instance, cars are for parking seems false even though people park cars as often as they drive them. No theory of teleology in philosophy or psychology can explain what makes teleological generics acceptable. However, a recent theory (Prasada, 2017; Prasada & Dillingham, 2006; Prasada et al., 2013) argues that a certain type of mental representation – a “principled” connection between a kind and a property – licenses generic generalizations. The account predicts that people should accept teleological generics that describe kinds and properties linked by a principled connection. Under the analysis, car bears a principled connection to driving (a car’s primary purpose) and a non-principled connection to parking (an incidental consequence of driving). We report four experiments that tested and corroborated the theory’s predictions, and we describe a regression analysis that rules out alternative accounts. We conclude by showing how the theory we developed can serve as the foundation for a general theory of teleological thinking.

And here’s access to the OSF project (which includes a preprint):