All posts by: skhemlani

SEP 07, 2019

Harner to show new work on omissive causes at AIC 2019 in Manchester

Hillary Harner will present a paper by Gordon Briggs, herself, Christina Wasylyshyn, Paul Bello, and me titled “Neither the time nor the place: Omissive causes yield temporal inferences” at this year’s International Workshop on AI and Cognition (AIC 2019). The paper describes an oddity in reasoning about “omissive causes” — situations in which something happens as a result of something else not happening (an omission). The paper can be downloaded here, and here’s the abstract:

Is it reasonable for humans to draw temporal conclusions from omissive causal assertions? For example, if you learn that not charging your phone caused it to die, is it sensible to infer that your failure to charge your phone occurred before it died? The conclusion seems intuitive, but no theory of causal reasoning explains how people make the inference other than a recent proposal by Khemlani and colleagues [2018a]. Other theories either treat omissions as non-events, i.e., they have no location in space or time; or they account for omissions as entities that have no explicit temporal component. Theories of omissions as non-events predict that people might refrain from drawing conclusions when asked whether an omissive cause precedes its effect; theories without any temporal component make no prediction. We thus present Khemlani and colleagues’ [2018a] theory and describe two experiments that tested its predictions. The results of the experiments speak in favor of a view that omissive causation imposes temporal constraints on events and their effects; these findings speak against predictions of the non-event view. We conclude by considering whether drawing a temporal conclusion from an omissive cause constitutes a reasoning error and discuss implications for AI systems designed to compute causal inferences.
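
To make the temporal claim concrete, here’s a tiny sketch of the idea (my own illustration in Python, not the model from the paper): if the omission and its effect are represented as events with explicit time indices, then a query such as “did the omission precede the effect?” has a determinate answer.

```python
# A minimal sketch (not the paper's model): an omissive causal assertion is
# represented as two events with explicit time indices, so a temporal query
# ("did the omission precede the effect?") can be answered from the model.
from dataclasses import dataclass

@dataclass
class Event:
    description: str
    time: int        # hypothetical discrete time index
    occurred: bool   # False marks an omission (a non-occurrence)

def omissive_cause(omission: str, effect: str):
    """Place the omission before its effect, as the
    temporal-constraint view predicts people do."""
    return (Event(omission, time=0, occurred=False),
            Event(effect, time=1, occurred=True))

not_charging, phone_dies = omissive_cause("charge the phone", "the phone dies")
print(not_charging.time < phone_dies.time)   # True: the omission precedes its effect
```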

JUL 22, 2019

LRW talk on the dynamic processing of perceptual models

I presented work by Neha Bhat on how to leverage object detection algorithms to build spatial mental models at the 2019 London Reasoning Workshop. The abstract is here:

We describe a novel computational system that processes images in order to dynamically construct and update iconic spatial simulations of the world — the equivalent of perceptual mental models. The system couples two existing technologies: an advanced machine-learning technique for real-time object recognition (the YOLO algorithm) with a computational model for representing and reasoning about spatial mental models (built in the mReasoner system). The system is capable of extracting qualitative spatial relations from perceptual inputs in real time. We describe how it can be used to investigate dynamic spatial thinking and reasoning.
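
To give a rough sense of the second half of that pipeline, here’s a small sketch of my own (the detector output format and the relation vocabulary below are illustrative assumptions, not the actual YOLO/mReasoner interface): bounding boxes from an object detector can be converted into qualitative spatial relations.

```python
# Illustrative sketch only: turn detector output (label -> bounding box) into
# coarse qualitative spatial relations. Assumes image coordinates in which
# y increases downward, so a smaller y-center means "above".
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)

def qualitative_relations(detections: Dict[str, Box]) -> List[Tuple[str, str, str]]:
    """Extract left-of/right-of and above/below relations from box centers."""
    def center(box: Box) -> Tuple[float, float]:
        x0, y0, x1, y1 = box
        return ((x0 + x1) / 2, (y0 + y1) / 2)

    relations = []
    labels = list(detections)
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            (ax, ay), (bx, by) = center(detections[a]), center(detections[b])
            relations.append((a, "left-of" if ax < bx else "right-of", b))
            relations.append((a, "above" if ay < by else "below", b))
    return relations

# Example with made-up detections:
print(qualitative_relations({"cup": (10, 40, 30, 60), "book": (50, 10, 90, 30)}))
```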

JUL 21, 2019

A novel algorithm for causal deduction in ICCM Proceedings

Gordon Briggs and I presented a new computational model and a novel dataset on how people make generative causal deductions at this year’s International Conference on Cognitive Modeling (ICCM). For instance, if you know that habituation causes seriation and that seriation prevents methylation, you don’t need to know what all those words mean in order to make inferences about the effect of habituation on methylation.

Our algorithm interprets such statements and stochastically builds discrete mental simulations of the events they describe. Here’s the abstract:

People without any advanced training can make deductions about abstract causal relations. For instance, suppose you learn that habituation causes seriation, and that seriation prevents methylation. The vast majority of reasoners infer that habituation prevents methylation. Cognitive scientists disagree on the mechanisms that underlie causal reasoning, but many argue that people can mentally simulate causal interactions. We describe a novel algorithm that makes domain-general causal inferences. The algorithm constructs small-scale iconic simulations of causal relations, and so it implements the “model” theory of causal reasoning (Goldvarg & Johnson-Laird, 2001; Johnson-Laird & Khemlani, 2017). It distinguishes between three different causal relations: causes, enabling conditions, and preventions. And, it can draw inferences about both orthodox relations (habituation prevents methylation) and omissive causes (the failure to habituate prevents methylation). To test the algorithm, we subjected participants to a large battery of causal reasoning problems and compared their performance to what the algorithm predicted. We found a close match between human causal reasoning and the patterns predicted by the algorithm.
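
For a rough flavor of the kind of inference involved, here’s a deterministic toy of my own (the algorithm in the paper builds stochastic, model-based simulations): abstract causal relations can be chained with a small composition table.

```python
# A simplified, deterministic sketch of chaining abstract causal relations.
# The entries below are illustrative rather than the model's actual
# predictions; the published algorithm simulates possibilities instead.
COMPOSE = {
    ("causes", "causes"): "causes",
    ("causes", "prevents"): "prevents",
    ("prevents", "causes"): "prevents",
}

def chain(first: str, second: str) -> str:
    """Infer the relation between A and C from 'A <first> B' and 'B <second> C'."""
    return COMPOSE[(first, second)]

# "Habituation causes seriation; seriation prevents methylation."
print("habituation", chain("causes", "prevents"), "methylation")
# -> habituation prevents methylation
```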

JUL 19, 2019

Harner presented work on teleological generics at SPP

Hillary Harner presented our latest work on teleological generics at SPP. The abstract of her work is available here:

People can describe generalizations about the functions of objects by producing teleological generic language, i.e. those statements that express generalities about the purposes of objects. People accept teleological generics such as eyes are for seeing and cars are for driving. However, no studies have examined whether generalizations about volitional agents are acceptable. It may be counterintuitive to consider autonomous individuals as having any kind of function or purpose: what’s the purpose of a giraffe, or a whale, or an Etruscan? No matter how you complete the teleological generic, Etruscans are for _____, the sentence seems unacceptable, since Etruscans had autonomy and volition over their own actions. But perhaps people consider certain volitional agents as having some kind of associated function. The teleological generic, horses are for riding may strike some people as acceptable, because people may associate the kind horses with its unique utility as a beast of burden.

We ran a study designed to evaluate whether people accept agent-based teleological generics. Participants read statements of the format Xs are for Y, e.g., horses are for riding. Half of the statements concerned activities from which humans derive no direct benefit: the animal naturally performs the activity on its own and the activity is not a byproduct of domestication, e.g. “bees are for buzzing.” The other half concerned animals and associated activities that humans benefit from, either because of domestication or because humans derive some direct benefit from the animals’ behavior, e.g. “bees are for making honey.” Fifty participants received 24 generic statements in total, and they rated each one as true or false. The experiment tested the hypothesis that participants should accept teleological generics more often when they concerned activities from which humans draw a direct benefit. The results confirmed the hypothesis: people rated teleological generics true more often for statements concerning beneficial activities than for “control” items that concerned frequent activities that do not benefit humans (83% vs. 41%, Wilcoxon test, z = 5.54, p < .001, Cliff’s δ = .77).
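
For anyone curious about the form of that analysis, here’s an illustrative sketch with made-up data (the only real numbers are the statistics quoted in the abstract): a Wilcoxon signed-rank test on paired acceptance rates, with Cliff’s delta as the effect size.

```python
# Illustrative analysis code only: the ratings below are fabricated, not the
# study's data. Shows the general form of the reported comparison.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical per-participant acceptance rates for the two item types:
beneficial = rng.uniform(0.6, 1.0, size=50)   # e.g., "bees are for making honey"
control = rng.uniform(0.2, 0.6, size=50)      # e.g., "bees are for buzzing"

stat, p = wilcoxon(beneficial, control)       # paired, nonparametric comparison

def cliffs_delta(xs, ys):
    """Proportion of pairs with x > y minus proportion with x < y."""
    greater = sum(x > y for x in xs for y in ys)
    less = sum(x < y for x in xs for y in ys)
    return (greater - less) / (len(xs) * len(ys))

print(f"W = {stat:.1f}, p = {p:.3g}, Cliff's delta = {cliffs_delta(beneficial, control):.2f}")
```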

JUL 13, 2019

Kelly’s work on durational reasoning at LRW and CogSci

Laura Kelly presented new research on reasoning about durations at the 2019 London Reasoning Workshop. The abstract of her talk is here:

Few experiments have examined how people reason about durative relations, e.g., “during”. Such relations pose challenges to present theories of reasoning, but many researchers argue that people simulate a mental timeline when they think about sequences of events. A recent theory posits that to mentally simulate durative relations, reasoners do not represent all of the time points across which an event might endure. Instead, they construct discrete tokens that stand in place of the beginnings and endings of those events. The theory predicts that when reasoners need to build multiple simulations to solve a reasoning problem, they should be more prone to error. To test the theory, a series of experiments provided participants with sets of premises describing durative relations; they assessed whether the sets were consistent or inconsistent. Their ability to do so varied by whether the descriptions concerned one mental simulation or multiple simulations. We conclude by situating the study in recent work on temporal thinking.
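
Here’s a minimal illustration of that representational claim (my own sketch, not the experiments’ materials or the implemented theory): each event reduces to a start token and an end token on one timeline, and a relation like “during” becomes a pair of ordering constraints.

```python
# A sketch of the representational claim: an event's duration reduces to two
# tokens, its start and its end, placed on one ordered timeline; "A during B"
# is then a pair of ordering constraints over those tokens.
def during(a: tuple, b: tuple) -> bool:
    """True if event a = (start, end) lies strictly within event b."""
    return b[0] < a[0] and a[1] < b[1]

# One mental simulation consistent with "the meeting is during the workshop":
timeline = {"workshop": (0, 10), "meeting": (3, 5)}
print(during(timeline["meeting"], timeline["workshop"]))   # True

# The reverse premise cannot be added to the same timeline, so a premise set
# containing both relations would be inconsistent:
print(during(timeline["workshop"], timeline["meeting"]))   # False
```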

MAY 07, 2019

New paper on why machines can’t reason yet

A major failure of current AI systems is that they can’t mimic commonsense reasoning: most ML systems don’t reason, and all theorem provers draw trivial and silly deductions. We analyze why — and suggest a path forward — in a new paper now out in the German AI journal Künstliche Intelligenz:

AI has never come to grips with how human beings reason in daily life. Many automated theorem-proving technologies exist, but they cannot serve as a foundation for automated reasoning systems. In this paper, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic, and the formulation of nonmonotonic systems of reasoning. We then describe why human reasoning cannot be simulated by current machine reasoning or deep learning methodologies. People can generate inferences on their own instead of just evaluating them. They use strategies and fallible shortcuts when they reason. The discovery of an inconsistency does not result in an explosion of inferences—instead, it often prompts reasoners to abandon a premise. And the connectives they use in natural language have different meanings than those in classical logic. Only recently have cognitive scientists begun to implement automated reasoning systems that reflect these human patterns of reasoning. A key constraint of these recent implementations is that they compute, not proofs or truth values, but possibilities.

NOV 20, 2018

Recent research featured by the Psychonomic Society

Thomas the loop engine: Learning to program computers with a toy train

Anja Jamrozik recently featured our latest paper in Memory & Cognition on the Psychonomic Society website; check it out!


OCT 16, 2018

Postdoc positions in cognitive science at NRL

[Edit 2018-10-16: My lab is looking for a new postdoc interested in studying epistemic reasoning!]

I’m currently seeking applicants for multiple postdoctoral positions to collaborate on ongoing initiatives, including (but not limited to):

  • Testing a unified computational framework of reasoning
  • [New!] Studying how people reason about epistemics, i.e., knowledge and belief
  • Studying how people engage in explanatory reasoning
  • Studying how people reason about time, space, and spatiotemporal relations
  • Studying how people can extract causal relations from visual input

The postdoc will develop his or her own research program in addition to working with me and Greg Trafton at the Navy Center for Applied Research in Artificial Intelligence at NRL’s Washington, DC headquarters. The position will involve building computational models, designing and running studies, and conducting data analysis.

The ideal candidate has (or will have) a Ph.D. in cognitive psychology, cognitive science, or computer science, with experience in higher-level cognition, experimental design and data analysis, or cognitive modeling. Postdocs will be hired through the NRC Research Associateship Program. Only US citizens and green card holders are eligible for the program.

The Intelligent Systems Section at the Navy Center for Applied Research in Artificial Intelligence is devoted to basic and applied research in human cognition. The lab is interdisciplinary and focuses on cognitive science, reasoning, cognitive robotics and human-robot interaction, procedural errors, spatial cognition, object recognition, memory, and categorization.

Interested applicants should contact me (sunny.khemlani@nrl.navy.mil) with inquiries or for more information.

JUL 23, 2018

Paper on omissive causes in Memory & Cognition

When something happens because something else didn’t occur, it’s called “omissive causation” — like when your phone dies because you didn’t charge it. Our new theory in Memory & Cognition predicts how people mentally simulate omissions. It predicts that people should prioritize possibilities corresponding to mental models of omissive causal relations, and that they should be able to distinguish between omissive causes, omissive enabling conditions, and omissive preventers. Here’s the paper itself, here’s a link to the OSF page, and here’s the abstract:

Some causal relations refer to causation by commission (e.g., A gunshot causes death), and others refer to causation by omission (e.g., Not breathing causes death). We describe a theory of the representation of omissive causation based on the assumption that people mentally simulate sets of possibilities—mental models—that represent causes, enabling conditions, and preventions (Goldvarg & Johnson-Laird, 2001). The theory holds that omissive causes, enabling conditions, and preventions each refer to distinct sets of possibilities. For any such causal relation, reasoners typically simulate one initial possibility, but they are able to consider alternative possibilities through deliberation. These alternative possibilities allow them to deliberate over finer-grained distinctions when reasoning about causes and effects. Hence, reasoners should be able to distinguish between omissive causes and omissive enabling conditions. Four experiments corroborated the predictions of the theory. We describe them and contrast the results with the predictions of alternative accounts of causal representation and inference.
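
To give a feel for the possibility-based representation, here’s a small sketch in Python; the sets below follow the general model-theoretic treatment described in the abstract, but the code is my own illustration rather than the published model.

```python
# A minimal sketch of a possibility-based ("mental models") representation.
# Each relation is a list of the conjunctions of events it allows; the first
# listed possibility plays the role of the initial model, and the rest become
# available only through deliberation. The particular sets are illustrative
# assumptions, not the exact sets published in the paper.
OMISSIVE_RELATIONS = {
    # "not-A causes B", e.g. "not charging the phone causes it to die"
    "cause":   [("not-A", "B"), ("A", "not-B"), ("A", "B")],
    # "not-A enables B"
    "enable":  [("not-A", "B"), ("not-A", "not-B"), ("A", "not-B")],
    # "not-A prevents B"
    "prevent": [("not-A", "not-B"), ("A", "B"), ("A", "not-B")],
}

def initial_model(relation: str):
    """Return the single possibility reasoners are assumed to simulate first."""
    return OMISSIVE_RELATIONS[relation][0]

# Omissive causes and omissive enabling conditions share an initial model...
print(initial_model("cause") == initial_model("enable"))                      # True
# ...so distinguishing them requires deliberating over the full sets, which differ:
print(set(OMISSIVE_RELATIONS["cause"]) == set(OMISSIVE_RELATIONS["enable"]))  # False
```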

JUL 18, 2018

Paper on reasoning about facts and possibilities out in Cognitive Science

I have a new paper out in Cognitive Science with Ruth Byrne and Phil Johnson-Laird. We developed a theory about “sentential reasoning”, which is the sort of reasoning that occurs when you think about sentences that are connected by words such as “and”, “or”, or “if.” Cognitive theories have yet to explain the process by which people reason about sentences that concern facts and possibilities, so we built a theory around the idea that humans reason about sentences by simulating the world around them as sets of possibilities. We designed a computational model and applied it to explain some recent data on sentential reasoning.

The paper is available for download here, and here’s the abstract:

This article presents a fundamental advance in the theory of mental models as an explanation of reasoning about facts, possibilities, and probabilities. It postulates that the meanings of compound assertions, such as conditionals (if) and disjunctions (or), unlike those in logic, refer to conjunctions of epistemic possibilities that hold in default of information to the contrary. Various factors such as general knowledge can modulate these interpretations. New information can always override sentential inferences; that is, reasoning in daily life is defeasible (or nonmonotonic). The theory is a dual process one: It distinguishes between intuitive inferences (based on system 1) and deliberative inferences (based on system 2). The article describes a computer implementation of the theory, including its two systems of reasoning, and it shows how the program simulates crucial predictions that evidence corroborates. It concludes with a discussion of how the theory contrasts with those based on logic or on probabilities.

If you want to check out any of the computational modeling code or the data for it, they’re available via OSF.
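
For a concrete sense of the core semantic idea, here’s a toy sketch of my own (not the implementation on OSF): a disjunction or a conditional is unpacked into the possibilities it refers to, which hold in default of information to the contrary.

```python
# A toy sketch of the core semantic idea (my own illustration, not the code
# released with the paper): compound assertions refer to conjunctions of
# possibilities that hold in default of information to the contrary.
def possibilities_or(a: str, b: str, inclusive: bool = True):
    """'A or B' refers to the possibilities A, B, and (if inclusive) A and B."""
    sets = [(a,), (b,)]
    if inclusive:
        sets.append((a, b))
    return sets

def possibilities_if(a: str, b: str):
    """'If A then B' refers to A-and-B plus the possibilities in which A
    does not hold (the standard mental-model treatment of conditionals)."""
    return [(a, b), (f"not {a}", b), (f"not {a}", f"not {b}")]

print(possibilities_or("it rains", "it snows"))
print(possibilities_if("it rains", "the match is cancelled"))
```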