All posts by: skhemlani (31)


JUL 08

Lab alum Zach Horne starting at the University of Edinburgh in 2021

Zach Horne will join the faculty at the University of Edinburgh in the winter semester of 2021. Congratulations, Zach!
JUL 08

Lab alum Joanna Korman to start at Bentley University in Fall 2020

Congrats to Joanna Korman (http://joko-cogsci.com/), who’ll be starting as an Assistant Professor at Bentley University this fall!
FEB 24

Chapter on syllogistic reasoning in the Handbook of Rationality

I wrote a new chapter on the psychology of syllogistic reasoning in the forthcoming Handbook of Rationality that summarizes recent advances in the field. Here’s a quick summary:

Psychologists have studied syllogistic inferences for more than a century, because they can serve as a microcosm of human rationality. The term “syllogisms” refers to a set of 64 reasoning arguments, each of which comprises two premises, such as: “All of the designers are women. Some of the women are not employees. What, if anything, follows?” People make systematic mistakes on such problems, and they appear to reason using different strategies. A meta-analysis showed that many existing theories fail to explain such patterns. To address the limitations of previous accounts, two recent theories synthesized both heuristic and deliberative processing. This chapter reviews both accounts and assesses their strengths. It concludes by arguing that if syllogistic reasoning serves as a sensible microcosm of rationality, the synthesized theories may provide directions on how to resolve broader conflicts that vex psychologists of reasoning and human thinking.
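The figure of 64 reflects four quantified moods for each of the two premises crossed with four arrangements of the terms (the figures). Here’s a rough Python sketch, not taken from the chapter, that enumerates the space of premise pairs; the particular figure convention below is just one common choice:

```python
from itertools import product

# The four quantified moods of classical syllogistic premises.
MOODS = ["All {} are {}", "Some {} are {}", "No {} are {}", "Some {} are not {}"]

# The four figures fix how the end terms (A, C) and the middle term (B)
# are arranged across the two premises; conventions differ across texts.
FIGURES = [
    (("A", "B"), ("B", "C")),
    (("B", "A"), ("C", "B")),
    (("A", "B"), ("C", "B")),
    (("B", "A"), ("B", "C")),
]

syllogisms = [
    (mood1.format(*terms1), mood2.format(*terms2))
    for mood1, mood2 in product(MOODS, MOODS)
    for terms1, terms2 in FIGURES
]

print(len(syllogisms))   # 64 premise pairs
print(syllogisms[0])     # ('All A are B', 'All B are C')
```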

You can read the full chapter here.

JAN 10

New paper on teleological generics in press at Cognition

Joanna Korman and I have a new paper out in Cognition that examines statements such as “cars are for driving”. The statement is interesting because people tend to accept it, yet they reject statements such as “cars are for parking”, even though you park cars just as often as you drive them.

Such statements are called teleological generics — they’re generalizations that describe the function or purpose of some concept, such as cars. We developed a new theory of what makes them acceptable. The theory proposes that people mentally represent a privileged link called a “principled connection” between their concept of cars and their concept of driving.
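As a toy illustration of that idea (my own sketch, not the model in the paper), the prediction can be phrased as a simple lookup: a teleological generic is acceptable only when the kind bears a principled connection to the property. The kinds, properties, and connection labels below are illustrative:

```python
# Toy lookup (mine, not the paper's model): "Ks are for P" is predicted
# to be acceptable only when the kind bears a principled connection to
# the property, i.e., the property reflects the kind's primary purpose
# rather than an incidental consequence.
CONNECTIONS = {
    ("car", "driving"): "principled",   # driving is what cars are for
    ("car", "parking"): "incidental",   # parking is a consequence of driving
    ("eye", "seeing"):  "principled",
}

def acceptable_teleological_generic(kind: str, prop: str) -> bool:
    """Predict acceptance of the generic 'KINDs are for PROP'."""
    return CONNECTIONS.get((kind, prop)) == "principled"

for (kind, prop), connection in CONNECTIONS.items():
    verdict = "acceptable" if acceptable_teleological_generic(kind, prop) else "unacceptable"
    print(f"{kind}s are for {prop}: {verdict} ({connection} connection)")
```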

Here’s the abstract of the paper:

Certain “generic” generalizations concern functions and purposes, e.g., cars are for driving. Some functional properties yield unacceptable teleological generics: for instance, cars are for parking seems false even though people park cars as often as they drive them. No theory of teleology in philosophy or psychology can explain what makes teleological generics acceptable. However, a recent theory (Prasada, 2017; Prasada & Dillingham, 2006; Prasada et al., 2013) argues that a certain type of mental representation – a “principled” connection between a kind and a property – licenses generic generalizations. The account predicts that people should accept teleological generics that describe kinds and properties linked by a principled connection. Under the analysis, car bears a principled connection to driving (a car’s primary purpose) and a non-principled connection to parking (an incidental consequence of driving). We report four experiments that tested and corroborated the theory’s predictions, and we describe a regression analysis that rules out alternative accounts. We conclude by showing how the theory we developed can serve as the foundation for a general theory of teleological thinking.

And here’s access to the OSF project (which includes a preprint):

SEP 07, 2019

Harner to show new work on omissive causes at AIC 2019 in Manchester

Hillary Harner will present a paper by Gordon Briggs, herself, Christina Wasylyshyn, Paul Bello, and myself titled “Neither the time nor the place: Omissive causes yield temporal inferences” at this year’s International Workshop on AI and Cognition (AIC 2019). The paper describes an oddity in reasoning about “omissive causes” — situations in which something happens as a result of something else not happening (an omission). The paper can be downloaded here, and here’s the abstract:

Is it reasonable for humans to draw temporal conclusions from omissive causal assertions? For example, if you learn that not charging your phone caused it to die, is it sensible to infer that your failure to charge your phone occurred before it died? The conclusion seems intuitive, but no theory of causal reasoning explains how people make the inference other than a recent proposal by Khemlani and colleagues [2018a]. Other theories either treat omissions as non-events, i.e., they have no location in space or time; or they account for omissions as entities that have no explicit temporal component. Theories of omissions as non-events predict that people might refrain from drawing conclusions when asked whether an omissive cause precedes its effect; theories without any temporal component make no prediction. We thus present Khemlani and colleagues’ [2018a] theory and describe two experiments that tested its predictions. The results of the experiments speak in favor of a view that omissive causation imposes temporal constraints on events and their effects; these findings speak against predictions of the non-event view. We conclude by considering whether drawing a temporal conclusion from an omissive cause constitutes a reasoning error and discuss implications for AI systems designed to compute causal inferences.
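To make the predicted inference concrete, here is a minimal sketch under my own simplifying assumptions (it is not the model from the paper): the omission is placed on a discrete mental timeline before its effect, so the temporal question can be answered by comparing positions:

```python
from dataclasses import dataclass

@dataclass
class Event:
    description: str
    time: int   # discrete position on a mental timeline

def simulate_omissive_cause(omission: str, effect: str):
    """Build the temporal constraint the theory predicts:
    the omission is located on the timeline before its effect."""
    return [Event(f"not({omission})", time=0), Event(effect, time=1)]

model = simulate_omissive_cause("charge the phone", "the phone dies")
cause, outcome = model
print(cause.time < outcome.time)   # True: the omission precedes its effect
```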

JUL 22, 2019

LRW talk on the dynamic processing of perceptual models

At the 2019 London Reasoning Workshop, I presented work by Neha Bhat on how to leverage object-detection algorithms to build spatial mental models. The abstract is here:

We describe a novel computational system that processes images in order to dynamically construct and update iconic spatial simulations of the world — the equivalent of perceptual mental models. The system couples two existing technologies: an advanced machine-learning technique for real-time object recognition (the YOLO algorithm) and a computational model for representing and reasoning about spatial mental models (built in the mReasoner system). The system is capable of extracting qualitative spatial relations from perceptual inputs in real time. We describe how it can be used to investigate dynamic spatial thinking and reasoning.
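As a hedged illustration of the general approach (not the actual YOLO/mReasoner pipeline), the sketch below shows how qualitative relations such as left-of and above could be read off the bounding-box centers that any object detector returns; the labels and coordinates are made up:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    x: float   # center of the bounding box, in image coordinates
    y: float   # assumed to increase downward, as in most image APIs

def qualitative_relations(detections):
    """Yield (subject, relation, object) triples from box centers."""
    for a in detections:
        for b in detections:
            if a is b:
                continue
            if a.x < b.x:
                yield (a.label, "left-of", b.label)
            if a.y < b.y:
                yield (a.label, "above", b.label)

detections = [Detection("cup", x=120, y=80), Detection("laptop", x=300, y=85)]
for triple in qualitative_relations(detections):
    print(triple)   # ('cup', 'left-of', 'laptop') and ('cup', 'above', 'laptop')
```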

JUL 21, 2019

A novel algorithm for causal deduction in ICCM Proceedings

Gordon Briggs and I presented a new computational model and a novel dataset on how people make generative causal deductions at this year’s International Conference on Cognitive Modeling (ICCM). For instance, if you know that habituation causes seriation and that seriation prevents methylation, you don’t need to know what all those words mean in order to make inferences about the effect of habituation on methylation.

Our algorithm interprets such statements and stochastically builds discrete mental simulations of the events they describe. Here’s the abstract:

People without any advanced training can make deductions about abstract causal relations. For instance, suppose you learn that habituation causes seriation, and that seriation prevents methylation. The vast majority of reasoners infer that habituation prevents methylation. Cognitive scientists disagree on the mechanisms that underlie causal reasoning, but many argue that people can mentally simulate causal interactions. We describe a novel algorithm that makes domain-general causal inferences. The algorithm constructs small-scale iconic simulations of causal relations, and so it implements the “model” theory of causal reasoning (Goldvarg & Johnson-Laird, 2001; Johnson-Laird & Khemlani, 2017). It distinguishes between three different causal relations: causes, enabling conditions, and preventions. And, it can draw inferences about both orthodox relations (habituation prevents methylation) and omissive causes (the failure to habituate prevents methylation). To test the algorithm, we subjected participants to a large battery of causal reasoning problems and compared their performance to what the algorithm predicted. We found a close match between human causal reasoning and the patterns predicted by the algorithm.
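Here is a simplified, deterministic sketch of the underlying idea (the actual algorithm is stochastic and builds discrete simulations, so this is not its code): each causal verb is treated as a fully explicit set of possibilities, the two premises are conjoined on their shared term, and the composed relation is read off the surviving possibilities:

```python
# Fully explicit possibilities over (antecedent, consequent), following
# the model theory's treatment of causes, enabling conditions, and preventions.
POSSIBILITIES = {
    "causes":   {(True, True), (False, True), (False, False)},
    "enables":  {(True, True), (True, False), (False, False)},
    "prevents": {(True, False), (False, True), (False, False)},
}

def compose(rel1: str, rel2: str) -> str:
    """Infer the relation between A and C from 'A rel1 B' and 'B rel2 C'."""
    joint = {(a, c)
             for (a, b1) in POSSIBILITIES[rel1]
             for (b2, c) in POSSIBILITIES[rel2]
             if b1 == b2}
    for name, possibilities in POSSIBILITIES.items():
        if joint == possibilities:
            return name
    return "no single causal relation"

print(compose("causes", "prevents"))   # 'prevents', i.e., habituation prevents methylation
```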

JUL 19, 2019

Harner presented work on teleological generics at SPP

Hillary Harner presented our latest work on teleological generics at SPP. The abstract of her work is available here:

People can describe generalizations about the functions of objects by producing teleological generic language, i.e., statements that express generalizations about the purposes of objects. People accept teleological generics such as eyes are for seeing and cars are for driving. However, no studies have examined whether such generalizations about volitional agents are acceptable. It may be counterintuitive to consider autonomous individuals as having any kind of function or purpose: what’s the purpose of a giraffe, or a whale, or an Etruscan? No matter how you complete the teleological generic, Etruscans are for _____, the sentence seems unacceptable, since Etruscans had autonomy and volition over their own actions. But perhaps people consider certain volitional agents as having some kind of associated function. The teleological generic, horses are for riding, may strike some people as acceptable, because people may associate the kind horses with its unique utility as a beast of burden.

We ran a study designed to evaluate whether people accept agent-based teleological generics. Participants read statements of the format Xs are for Y, e.g., horses are for riding. Half of the statements concerned activities from which humans derive no direct benefit: the animal naturally performs the activity on its own, and the activity is not a byproduct of domestication, e.g., “bees are for buzzing.” The other half concerned animals and associated activities that humans benefit from, either because of domestication or because humans derive some direct benefit from the animals’ behavior, e.g., “bees are for making honey.” Fifty participants received 24 generic statements in total, and they rated each one as true or false. The experiment tested the hypothesis that participants should accept teleological generics more often when they concerned activities from which humans draw a direct benefit. The results confirmed the hypothesis: people rated teleological generics true more often for statements concerning beneficial activities than for “control” items that concerned frequent activities that do not benefit humans (83% vs. 41%, Wilcoxon test, z = 5.54, p < .001, Cliff’s δ = .77).
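For readers who want to see the shape of such an analysis, here is a sketch with synthetic data (not the study’s data or analysis script); note that scipy’s signed-rank test reports a W statistic rather than the z given above, and Cliff’s delta is computed over all pairs of participants:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_participants, n_items = 50, 12

# Per-participant acceptance rates for the two item types (synthetic data).
beneficial = rng.binomial(n_items, 0.83, size=n_participants) / n_items
control    = rng.binomial(n_items, 0.41, size=n_participants) / n_items

stat, p = wilcoxon(beneficial, control)   # paired signed-rank test

def cliffs_delta(xs, ys):
    """Proportion of pairs with x > y minus proportion with x < y."""
    greater = sum(x > y for x in xs for y in ys)
    less    = sum(x < y for x in xs for y in ys)
    return (greater - less) / (len(xs) * len(ys))

print(f"Wilcoxon W = {stat:.1f}, p = {p:.3g}, "
      f"Cliff's delta = {cliffs_delta(beneficial, control):.2f}")
```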

JUL 13, 2019

Kelly’s work on durational reasoning at LRW and CogSci

Laura Kelly presented new research on reasoning about durations at the 2019 London Reasoning Workshop. The abstract of her talk is here:

Few experiments have examined how people reason about durative relations, e.g., “during”. Such relations pose challenges to present theories of reasoning, but many researchers argue that people simulate a mental timeline when they think about sequences of events. A recent theory posits that to mentally simulate durative relations, reasoners do not represent all of the time points across which an event might endure. Instead, they construct discrete tokens that stand in place of the beginnings and endings of those events. The theory predicts that when reasoners need to build multiple simulations to solve a reasoning problem, they should be more prone to error. To test the theory, a series of experiments provided participants with sets of premises describing durative relations; they assessed whether the sets were consistent or inconsistent. Their ability to do so varied by whether the descriptions concerned one mental simulation or multiple simulations. We conclude by situating the study in recent work on temporal thinking.
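A minimal sketch of the representational idea, under my own assumptions rather than Kelly’s implementation: each event contributes discrete tokens for its start and end, and a set of “during” assertions is consistent just in case some ordering of those tokens satisfies all of them:

```python
from itertools import permutations

def consistent(during_pairs, events):
    """Is there an ordering of start/end tokens that satisfies every
    'X happened during Y' assertion in during_pairs?"""
    tokens = [(e, boundary) for e in events for boundary in ("start", "end")]
    for order in permutations(tokens):
        position = {token: i for i, token in enumerate(order)}
        well_formed = all(position[(e, "start")] < position[(e, "end")] for e in events)
        satisfied = all(position[(y, "start")] < position[(x, "start")] and
                        position[(x, "end")] < position[(y, "end")]
                        for x, y in during_pairs)   # x occurred during y
        if well_formed and satisfied:
            return True
    return False

events = ["meeting", "lunch", "workday"]
print(consistent([("meeting", "workday"), ("lunch", "workday")], events))   # True
print(consistent([("meeting", "lunch"), ("lunch", "meeting")], events))     # False
```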

MAY 07, 2019

New paper on why machines can’t reason yet

A major failure of current AI systems is that they can’t mimic commonsense reasoning: most ML systems don’t reason, and all theorem provers draw trivial and silly deductions. We analyze why, and suggest a path forward, in a new paper now out in the German AI journal Künstliche Intelligenz:

AI has never come to grips with how human beings reason in daily life. Many automated theorem-proving technologies exist, but they cannot serve as a foundation for automated reasoning systems. In this paper, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic, and the formulation of nonmonotonic systems of reasoning. We then describe why human reasoning cannot be simulated by current machine reasoning or deep learning methodologies. People can generate inferences on their own instead of just evaluating them. They use strategies and fallible shortcuts when they reason. The discovery of an inconsistency does not result in an explosion of inferences—instead, it often prompts reasoners to abandon a premise. And the connectives they use in natural language have different meanings than those in classical logic. Only recently have cognitive scientists begun to implement automated reasoning systems that reflect these human patterns of reasoning. A key constraint of these recent implementations is that they compute, not proofs or truth values, but possibilities.
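To give a flavor of what “computing possibilities” means, here is a minimal sketch of the general idea (not any published system’s code): an inclusive disjunction is represented as a set of possibilities, and a further premise yields a conclusion by eliminating possibilities rather than by deriving a proof:

```python
def models_of_or(a: str, b: str):
    """Mental-model-style possibilities for an inclusive disjunction:
    each possibility lists only the propositions that hold in it."""
    return [{a}, {b}, {a, b}]

def eliminate(possibilities, negated: str):
    """Drop every possibility in which the negated proposition holds."""
    return [p for p in possibilities if negated not in p]

possibilities = models_of_or("raining", "snowing")
remaining = eliminate(possibilities, "raining")   # new premise: it is not raining
print(remaining)   # [{'snowing'}] -- the only possibility left, so it is snowing
```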