All posts by: skhemlani (35)

JUL 21, 2019

A novel algorithm for causal deduction in ICCM Proceedings

Gordon Briggs and I presented a new computational model and a novel dataset on how people make generative causal deductions at this year’s International Conference on Cognitive Modeling (ICCM). For instance, if you know that habituation causes seriation and that seriation prevents methylation, you don’t need to know what all those words mean in order to make inferences about the effect of habituation on methylation.

Our algorithm interprets such statements and stochastically builds discrete mental simulations of the events they describe. Here’s the abstract:

People without any advanced training can make deductions about abstract causal relations. For instance, suppose you learn that habituation causes seriation, and that seriation prevents methylation. The vast majority of reasoners infer that habituation prevents methylation. Cognitive scientists disagree on the mechanisms that underlie causal reasoning, but many argue that people can mentally simulate causal interactions. We describe a novel algorithm that makes domain-general causal inferences. The algorithm constructs small-scale iconic simulations of causal relations, and so it implements the “model” theory of causal reasoning (Goldvarg & Johnson-Laird, 2001; Johnson-Laird & Khemlani, 2017). It distinguishes between three different causal relations: causes, enabling conditions, and preventions. And it can draw inferences about both orthodox relations (habituation prevents methylation) and omissive causes (the failure to habituate prevents methylation). To test the algorithm, we subjected participants to a large battery of causal reasoning problems and compared their performance to what the algorithm predicted. We found a close match between human causal reasoning and the patterns predicted by the algorithm.
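
For the curious, here's a minimal Python sketch of the core idea. It is not the algorithm from the paper (ours builds simulations stochastically and incrementally); it just shows how composing possibility sets in the style of Goldvarg & Johnson-Laird (2001) yields the "prevents" conclusion for the example above:

```python
from itertools import product

# Possibility sets for each relation, after Goldvarg & Johnson-Laird (2001).
# Each pair is (antecedent occurs, consequent occurs).
TEMPLATES = {
    "causes":   {(True, True), (False, True), (False, False)},   # A suffices for B
    "enables":  {(True, True), (True, False), (False, False)},   # A is necessary for B
    "prevents": {(True, False), (False, True), (False, False)},  # A suffices for not-B
}

def possibilities(premises, variables):
    """Enumerate every assignment of events consistent with all the premises."""
    models = []
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all((world[x], world[y]) in TEMPLATES[rel] for x, rel, y in premises):
            models.append(world)
    return models

def infer(premises, a, b):
    """Project the joint possibilities onto (a, b) and match them to a relation."""
    variables = sorted({v for x, _, y in premises for v in (x, y)})
    projection = {(w[a], w[b]) for w in possibilities(premises, variables)}
    return [rel for rel, template in TEMPLATES.items() if projection == template]

premises = [("habituation", "causes", "seriation"),
            ("seriation", "prevents", "methylation")]
print(infer(premises, "habituation", "methylation"))  # -> ['prevents']
```

Projecting the chain's joint possibilities onto the end terms recovers exactly the "prevents" pattern, which is the inference most reasoners draw.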

JUL 19, 2019

Harner presented work on teleological generics at SPP

Hillary Harner presented our latest work on teleological generics at SPP. The abstract of her work is available here:

People can describe generalizations about the functions of objects by producing teleological generic language, i.e., statements that express generalities about the purposes of objects. People accept teleological generics such as eyes are for seeing and cars are for driving. However, no studies have examined whether generalizations about volitional agents are acceptable. It may be counterintuitive to consider autonomous individuals as having any kind of function or purpose: what’s the purpose of a giraffe, or a whale, or an Etruscan? No matter how you complete the teleological generic, Etruscans are for _____, the sentence seems unacceptable, since Etruscans had autonomy and volition over their own actions. But perhaps people consider certain volitional agents as having some kind of associated function. The teleological generic, horses are for riding, may strike some people as acceptable, because people may associate the kind horses with its unique utility as a beast of burden.

We ran a study designed to evaluate whether people accept agent-based teleological generics. Participants read statements of the format Xs are for Y, e.g., horses are for riding. Half of the statements concerned activities that humans do not directly benefit from: the animal naturally performs the activity on its own and the activity is not a byproduct of domestication, e.g., “bees are for buzzing.” The other half concerned animals and associated activities that humans benefit from, either because of domestication or because humans derive some direct benefit from the animals’ behavior, e.g., “bees are for making honey.” Fifty participants received 24 generic statements in total, and they rated each one as true or false. The experiment tested the hypothesis that participants should accept teleological generics more often when they concern activities from which humans draw a direct benefit. The results confirmed the hypothesis: people rated teleological generics true more often for statements concerning beneficial activities than for “control” items that concerned frequent activities that do not benefit humans (83% vs. 41%, Wilcoxon test, z = 5.54, p < .001, Cliff’s δ = .77).
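
For readers curious about the effect size, here's a hedged sketch of that style of analysis on invented data; the acceptance rates below are hypothetical placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 50  # participants, matching the study's sample size

# Hypothetical per-participant acceptance rates (NOT the study's data).
beneficial = rng.beta(8, 2, size=n)  # centered near .80
control = rng.beta(4, 6, size=n)     # centered near .40

def cliffs_delta(x, y):
    """Cliff's delta: P(x > y) - P(x < y), computed over all cross-pairs."""
    greater = sum(xi > yi for xi in x for yi in y)
    less = sum(xi < yi for xi in x for yi in y)
    return (greater - less) / (len(x) * len(y))

stat, p = wilcoxon(beneficial, control)  # paired signed-rank test
print(f"W = {stat:.1f}, p = {p:.2g}, delta = {cliffs_delta(beneficial, control):.2f}")
```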

JUL 13, 2019

Kelly’s work on durational reasoning at LRW and CogSci

Laura Kelly presented new research on reasoning about durations at the 2019 London Reasoning Workshop. The abstract of her talk is here:

Few experiments have examined how people reason about durative relations, e.g., “during”. Such relations pose challenges to present theories of reasoning, but many researchers argue that people simulate a mental timeline when they think about sequences of events. A recent theory posits that to mentally simulate durative relations, reasoners do not represent all of the time points across which an event might endure. Instead, they construct discrete tokens that stand in place of the beginnings and endings of those events. The theory predicts that when reasoners need to build multiple simulations to solve a reasoning problem, they should be more prone to error. To test the theory, a series of experiments provided participants with sets of premises describing durative relations; they assessed whether the sets were consistent or inconsistent. Their ability to do so varied by whether the descriptions concerned one mental simulation or multiple simulations. We conclude by situating the study in recent work on temporal thinking.
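
To give a flavor of the begin/end-token idea, here's a minimal sketch (my own gloss, not Kelly's model): durative premises become ordering constraints between discrete tokens, and a set of premises is inconsistent when those constraints form a cycle. The relation names are assumptions for illustration:

```python
from collections import defaultdict

def constraints(premise):
    """Translate a durative premise into (earlier token, later token) pairs."""
    a, rel, b = premise
    if rel == "during":  # a begins after b begins and ends before b ends
        return [(f"{b}.begin", f"{a}.begin"), (f"{a}.end", f"{b}.end")]
    if rel == "before":  # a ends before b begins
        return [(f"{a}.end", f"{b}.begin")]
    raise ValueError(f"unknown relation: {rel}")

def consistent(premises):
    """Premises are consistent iff the token-ordering graph has no cycle."""
    edges = defaultdict(list)
    events = {x for x, _, _ in premises} | {y for _, _, y in premises}
    for ev in events:
        edges[f"{ev}.begin"].append(f"{ev}.end")  # every event endures
    for p in premises:
        for earlier, later in constraints(p):
            edges[earlier].append(later)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(u):
        color[u] = GRAY
        for v in edges[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True  # the orderings loop, so no timeline exists
        color[u] = BLACK
        return False
    return not any(dfs(u) for u in list(edges) if color[u] == WHITE)

print(consistent([("meeting", "during", "conference"),
                  ("lunch", "before", "meeting")]))              # True
print(consistent([("a", "during", "b"), ("b", "during", "a")]))  # False
```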

MAY 07, 2019

New paper on why machines can’t reason yet

A major failure of current AI systems is that they can't mimic commonsense reasoning: most ML systems don't reason at all, and theorem provers draw trivial and silly deductions. We analyze why, and suggest a path forward, in a new paper now out in the German AI journal Künstliche Intelligenz:

AI has never come to grips with how human beings reason in daily life. Many automated theorem-proving technologies exist, but they cannot serve as a foundation for automated reasoning systems. In this paper, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic, and the formulation of nonmonotonic systems of reasoning. We then describe why human reasoning cannot be simulated by current machine reasoning or deep learning methodologies. People can generate inferences on their own instead of just evaluating them. They use strategies and fallible shortcuts when they reason. The discovery of an inconsistency does not result in an explosion of inferences—instead, it often prompts reasoners to abandon a premise. And the connectives they use in natural language have different meanings than those in classical logic. Only recently have cognitive scientists begun to implement automated reasoning systems that reflect these human patterns of reasoning. A key constraint of these recent implementations is that they compute, not proofs or truth values, but possibilities.
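
That last point is easy to illustrate. Here's a toy sketch (not one of the systems discussed in the paper) of a reasoner that computes possibilities rather than proofs: an inconsistent premise set simply yields zero possibilities, which signals that a premise should be revised, rather than licensing an explosion of arbitrary conclusions:

```python
from itertools import product

def possibilities(variables, premises):
    """Return every world (assignment of truth values) that all premises allow."""
    worlds = (dict(zip(variables, vals))
              for vals in product([True, False], repeat=len(variables)))
    return [w for w in worlds if all(p(w) for p in premises)]

premises = [lambda w: w["rain"] or w["sprinkler"],  # it rained or the sprinkler ran
            lambda w: not w["rain"]]                # it did not rain

print(possibilities(["rain", "sprinkler"], premises))
# -> [{'rain': False, 'sprinkler': True}]

# Adding a contradictory premise yields no possibilities at all; the
# reasoner flags the conflict instead of deriving arbitrary conclusions.
models = possibilities(["rain", "sprinkler"], premises + [lambda w: w["rain"]])
print(models or "inconsistent: consider abandoning a premise")
```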

NOV 20, 2018

Recent research featured by the Psychonomic Society

Thomas the loop engine: Learning to program computers with a toy train

Anja Jamrozik recently featured our latest paper in Memory & Cognition on the Psychonomic Society's website; check it out!


OCT 16, 2018

Postdoc positions in cognitive science at NRL

[Edit 2018-10-16: My lab is looking for a new postdoc interested in studying epistemic reasoning!]

I’m currently seeking applicants for multiple postdoctoral positions to collaborate on ongoing initiatives, including (but not limited to):

  • Testing a unified computational framework of reasoning
  • [New!] Studying how people reason about epistemics, i.e., knowledge and belief
  • Studying how people engage in explanatory reasoning
  • Studying how people reason about time, space, and spatiotemporal relations
  • Studying how people can extract causal relations from visual input

The postdoc will develop his or her own research program in addition to working with me and Greg Trafton at the Navy Center for Applied Research in Artificial Intelligence at NRL’s Washington, DC headquarters. The position will involve building computational models, designing and running studies, and conducting data analysis.

The ideal candidate has (or will have) a Ph.D. in cognitive psychology, cognitive science, or computer science, with experience in higher level cognition, experimental design and data analysis, or cognitive modeling. Postdocs will be hired through the NRC Research Associateship Program. Only US citizens or green card holders are eligible for the program.

The Intelligent Systems Section at the Navy Center for Applied Research in Artificial Intelligence is devoted to basic and applied research in human cognition. The lab is interdisciplinary and focuses on cognitive science, reasoning, cognitive robotics and human-robot interaction, procedural errors, spatial cognition, object recognition, memory, and categorization.

Interested applicants should contact me (sunny.khemlani@nrl.navy.mil) for inquiries and more information.

JUL 23, 2018

Paper on omissive causes in Memory & Cognition

When something happens because something else didn't occur, it's called "omissive causation," as when your phone dies because you didn't charge it. Our new theory in Memory & Cognition predicts how people mentally simulate omissions. It predicts that people should prioritize possibilities corresponding to mental models of omissive causal relations, and that they should be able to distinguish between omissive causes, omissive enabling conditions, and omissive preventers. Here's the paper itself, here's a link to the OSF page, and here's the abstract:

Some causal relations refer to causation by commission (e.g., A gunshot causes death), and others refer to causation by omission (e.g., Not breathing causes death). We describe a theory of the representation of omissive causation based on the assumption that people mentally simulate sets of possibilities—mental models—that represent causes, enabling conditions, and preventions (Goldvarg & Johnson-Laird, 2001). The theory holds that omissive causes, enabling conditions, and preventions each refer to distinct sets of possibilities. For any such causal relation, reasoners typically simulate one initial possibility, but they are able to consider alternative possibilities through deliberation. These alternative possibilities allow them to deliberate over finer-grained distinctions when reasoning about causes and effects. Hence, reasoners should be able to distinguish between omissive causes and omissive enabling conditions. Four experiments corroborated the predictions of the theory. We describe them and contrast the results with the predictions of alternative accounts of causal representation and inference.
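
As a speculative gloss (my own, not code from the paper): one way to think about omissive possibility sets is to take the orthodox templates and flip the truth value of the antecedent, which keeps omissive causes, omissive enablers, and omissive preventers distinct:

```python
# Orthodox possibility sets, with each pair read as (A occurs, B occurs).
ORTHODOX = {
    "causes":   {(True, True), (False, True), (False, False)},
    "enables":  {(True, True), (True, False), (False, False)},
    "prevents": {(True, False), (False, True), (False, False)},
}

def omissive(relation):
    """Possibilities for 'not-A <relation> B', stated in terms of A itself."""
    return {(not a, b) for a, b in ORTHODOX[relation]}

# "Not charging your phone (A) causes it to die (B)":
print(sorted(omissive("causes")))
# -> [(False, True), (True, False), (True, True)]
# i.e., not-charged & dead; charged & alive; charged & dead.

sets = [frozenset(omissive(r)) for r in ORTHODOX]
assert len(set(sets)) == 3  # omissive cause, enabler, and preventer stay distinct
```
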
JUL 18, 2018

Paper on reasoning about facts and possibilities out in Cognitive Science

I have a new paper out in Cognitive Science with Ruth Byrne and Phil Johnson-Laird. We developed a theory about “sentential reasoning”, which is the sort of reasoning that occurs when you think about sentences that are connected by words such as “and”, “or”, or “if.” Cognitive theories have yet to explain the process by which people reason about sentences that concern facts and possibilities, and so we built a theory around the idea that humans reason about sentences by simulating the world around them as sets of possibilities. We designed a computational model and applied it to explain some recent data on sentential reasoning.

The paper is available for download here, and here’s the abstract:

This article presents a fundamental advance in the theory of mental models as an explanation of reasoning about facts, possibilities, and probabilities. It postulates that the meanings of compound assertions, such as conditionals (if) and disjunctions (or), unlike those in logic, refer to conjunctions of epistemic possibilities that hold in default of information to the contrary. Various factors such as general knowledge can modulate these interpretations. New information can always override sentential inferences; that is, reasoning in daily life is defeasible (or nonmonotonic). The theory is a dual process one: It distinguishes between intuitive inferences (based on system 1) and deliberative inferences (based on system 2). The article describes a computer implementation of the theory, including its two systems of reasoning, and it shows how the program simulates crucial predictions that evidence corroborates. It concludes with a discussion of how the theory contrasts with those based on logic or on probabilities.

If you want to check out any of the computational modeling code or the data for it, they’re available via OSF.
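
As a toy illustration of the dual-process distinction (my own reading, not the OSF implementation): system 1 keeps only the explicit models of a compound assertion, while system 2 fleshes them out into the full set of epistemic possibilities:

```python
def models(connective, a="A", b="B", deliberate=False):
    """Return the possibilities a reasoner entertains for a compound assertion."""
    if connective == "if":  # "if A then B"
        intuitive = [{a: True, b: True}]  # explicit model (plus an implicit one)
        fleshed = intuitive + [{a: False, b: True}, {a: False, b: False}]
    elif connective == "or":  # inclusive "A or B"
        intuitive = [{a: True}, {b: True}]  # partial models omit the other atom
        fleshed = [{a: True, b: False}, {a: False, b: True}, {a: True, b: True}]
    else:
        raise ValueError(f"unsupported connective: {connective}")
    return fleshed if deliberate else intuitive

print(models("if"))                   # system 1: just the salient A & B case
print(models("if", deliberate=True))  # system 2: A & B, not-A & B, not-A & not-B
```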

APR 19, 2018

New theory of teleological generalizations in CogSci 2018

Joanna Korman will be presenting our theory of how people understand “teleological generalizations” at CogSci 2018 in Madison, WI later this year. Teleological generalizations are statements that cite the purpose or function of something, e.g., “Forks are for eating.” We sought to tackle the mystery of why some teleological generalizations make sense while others don’t: for example, “forks are for washing” seems like a silly generalization to make, even though you wash forks just as often as you eat with them (hopefully).

To preview our solution, here’s the abstract of the paper:

Certain generalizations are teleological, e.g., forks are for eating. But not all properties relevant to a particular concept permit teleological generalization. For instance, forks get washed roughly as often as they’re used for eating, yet the generalization, forks are for washing, might strike reasoners as unacceptable. What explains the discrepancy? A recent taxonomic theory of conceptual generalization (Prasada, 2017; Prasada & Dillingham, 2006; Prasada et al., 2013) argues that certain kinds of conceptual connections – known as “principled” connections – license generalizations, whereas associative, “statistical” connections license only probabilistic expectations. We apply this taxonomy to explain teleological generalization: it predicts that acceptable teleological generalizations concern concept-property pairs in which the concept bears a principled connection to a property. Under this analysis, the concept fork bears a principled connection to eating and a statistical connection to washing. Two experiments and a regression analysis tested and corroborated the predictions of the theory.

You can read the full paper here, and here are our data and analyses.

APR 10, 2018

CFP: Stockholm Workshop on Human + Automated Reasoning

Interested in reasoning? The psychologists who study how humans do it don’t talk much with the scientists who get computers to do it. We’re trying to fix that: we put together a workshop that bridges the communities that study human and machine reasoning. Here are the details:

Full Paper submission deadline: 25th of April, 2018
Notification: 3rd of June, 2018
Final submission: 17th of June, 2018
Workshop: July 2018 at FAIM in Stockholm, Sweden
Read more about it: http://ratiolog.uni-koblenz.de/bridging2018
Submit your papers here: https://easychair.org/conferences/?conf=bridging2018