All posts by: skhemlani (29)


JUL 23, 2018

Paper on omissive causes in Memory & Cognition

When something happens because something else didn’t occur, it’s called “omissive causation” — like when your phone dies because you didn’t charge it. Our new theory in Memory & Cognition predicts how people mentally simulate omissions: people should prioritize possibilities corresponding to mental models of omissive causal relations, and they should be able to distinguish between omissive causes, omissive enabling conditions, and omissive preventers. Here’s the paper itself, here’s a link to the OSF page, and here’s the abstract:
Some causal relations refer to causation by commission (e.g., A gunshot causes death), and others refer to causation by omission (e.g., Not breathing causes death). We describe a theory of the representation of omissive causation based on the assumption that people mentally simulate sets of possibilities—mental models—that represent causes, enabling conditions, and preventions (Goldvarg & Johnson-Laird, 2001). The theory holds that omissive causes, enabling conditions, and preventions each refer to distinct sets of possibilities. For any such causal relation, reasoners typically simulate one initial possibility, but they are able to consider alternative possibilities through deliberation. These alternative possibilities allow them to deliberate over finer-grained distinctions when reasoning about causes and effects. Hence, reasoners should be able to distinguish between omissive causes and omissive enabling conditions. Four experiments corroborated the predictions of the theory. We describe them and contrast the results with the predictions of alternative accounts of causal representation and inference.
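
To make the models-as-possibilities idea concrete, here’s a minimal sketch of how an omissive relation might be represented as a set of possibilities, with the first possibility playing the role of the initial model and the full set only available after deliberation. This is not the paper’s implementation, and the particular possibility sets below are illustrative assumptions rather than the theory’s canonical ones.

```python
from typing import List, Tuple

# Each possibility pairs two events: (the phone gets charged, the phone dies).
# True means the event occurs, False means it doesn't.
Possibility = Tuple[bool, bool]

# Illustrative possibility sets for omissive relations (assumed for this sketch;
# see the paper for the theory's actual sets). The first entry in each list plays
# the role of the initial mental model; the rest require deliberation.
OMISSIVE_RELATIONS = {
    "not charging causes the phone to die":       [(False, True), (True, False)],
    "not charging enables the phone to die":      [(False, True), (False, False), (True, False)],
    "not charging prevents the phone from dying": [(False, False), (True, True)],
}

def initial_model(relation: str) -> Possibility:
    """Return the single possibility people are predicted to simulate first."""
    return OMISSIVE_RELATIONS[relation][0]

def deliberated_models(relation: str) -> List[Possibility]:
    """Return the full set of possibilities available after deliberation."""
    return OMISSIVE_RELATIONS[relation]

def distinguishable(rel_a: str, rel_b: str) -> bool:
    """Two relations are distinguishable when their possibility sets differ."""
    return set(deliberated_models(rel_a)) != set(deliberated_models(rel_b))

if __name__ == "__main__":
    cause = "not charging causes the phone to die"
    enable = "not charging enables the phone to die"
    print(initial_model(cause))            # (False, True): same initial possibility...
    print(initial_model(enable))           # (False, True)
    print(distinguishable(cause, enable))  # True: ...but distinct sets after deliberation
```

In this sketch, deliberating over the full possibility sets is what makes the omissive cause and the omissive enabling condition distinguishable, in line with the theory’s prediction.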

JUL 18, 2018

Paper on reasoning about facts and possibilities out in Cognitive Science

I have a new paper out in Cognitive Science with Ruth Byrne and Phil Johnson-Laird. We developed a theory of “sentential reasoning”, which is the sort of reasoning that occurs when you think about sentences connected by words such as “and”, “or”, or “if.” Cognitive theories have yet to explain the process by which people reason about sentences that concern facts and possibilities, so we built a theory around the idea that humans reason about sentences by simulating the world around them as sets of possibilities. We designed a computational model and applied it to explain some recent data on sentential reasoning.

The paper is available for download here, and here’s the abstract:

This article presents a fundamental advance in the theory of mental models as an explanation of reasoning about facts, possibilities, and probabilities. It postulates that the meanings of compound assertions, such as conditionals (if) and disjunctions (or), unlike those in logic, refer to conjunctions of epistemic possibilities that hold in default of information to the contrary. Various factors such as general knowledge can modulate these interpretations. New information can always override sentential inferences; that is, reasoning in daily life is defeasible (or nonmonotonic). The theory is a dual process one: It distinguishes between intuitive inferences (based on system 1) and deliberative inferences (based on system 2). The article describes a computer implementation of the theory, including its two systems of reasoning, and it shows how the program simulates crucial predictions that evidence corroborates. It concludes with a discussion of how the theory contrasts with those based on logic or on probabilities.

If you want to check out any of the computational modeling code or the data for it, they’re available via OSF.
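
As a rough illustration of the general approach (a minimal sketch, not the implementation on OSF), here is how a disjunction might be treated as a conjunction of epistemic possibilities, with an intuitive system that considers only an initial possibility and a deliberative system that eliminates possibilities in light of new information.

```python
from typing import FrozenSet, List, Set

# A "possibility" is the set of atomic propositions that hold in it.
Possibility = FrozenSet[str]

def or_possibilities(a: str, b: str) -> List[Possibility]:
    """Inclusive disjunction 'a or b' as a conjunction of epistemic possibilities."""
    return [frozenset({a}), frozenset({b}), frozenset({a, b})]

def intuitive_models(poss: List[Possibility]) -> List[Possibility]:
    """System 1 keeps only a minimal initial representation (here: the first possibility)."""
    return poss[:1]

def deliberative_conclusion(poss: List[Possibility], fact_negated: str) -> Set[str]:
    """System 2 eliminates possibilities inconsistent with new information.

    Given 'not fact_negated', drop every possibility in which it holds and
    return the propositions common to what remains.
    """
    remaining = [p for p in poss if fact_negated not in p]
    if not remaining:
        return set()
    return set.intersection(*[set(p) for p in remaining])

if __name__ == "__main__":
    poss = or_possibilities("raining", "snowing")
    print(intuitive_models(poss))                    # a single initial possibility
    print(deliberative_conclusion(poss, "raining"))  # {'snowing'}: 'A or B; not A; therefore B'
```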

APR 19, 2018

New theory of teleological generalizations in CogSci 2018

Joanna Korman will be presenting our theory of how people understand “teleological generalizations” at CogSci 2018 in Madison, WI later this year. Teleological generalizations are statements that cite the purpose or function of something, e.g., “Forks are for eating.” We sought to tackle the mystery of why some teleological generalizations make sense while others don’t: for example, “forks are for washing” seems like a silly generalization, even though you wash forks about as often as you eat with them (hopefully).
To preview our solution, here’s the abstract of the paper:

Certain generalizations are teleological, e.g., forks are for eating. But not all properties relevant to a particular concept permit teleological generalization. For instance, forks get washed roughly as often as they’re used for eating, yet the generalization, forks are for washing, might strike reasoners as unacceptable. What explains the discrepancy? A recent taxonomic theory of conceptual generalization (Prasada, 2017; Prasada & Dillingham, 2006; Prasada et al., 2013) argues that certain kinds of conceptual connections – known as “principled” connections – license generalizations, whereas associative, “statistical” connections license only probabilistic expectations. We apply this taxonomy to explain teleological generalization: it predicts that acceptable teleological generalizations concern concept-property pairs in which the concept bears a principled connection to a property. Under this analysis, the concept fork bears a principled connection to eating and a statistical connection to washing. Two experiments and a regression analysis tested and corroborated the predictions of the theory.

You can read the full paper here, and here are our data and analyses.
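
Read one way, the prediction amounts to a simple rule: a generalization “Ks are for P” is acceptable only when the concept K bears a principled, rather than merely statistical, connection to P. Here’s a toy sketch of that rule, with the connection labels filled in by hand purely for illustration.

```python
# Toy illustration of the prediction: teleological generalizations are acceptable
# only for concept-property pairs linked by a "principled" connection.
# The labels below are assumptions for illustration, not data from the paper.
CONNECTIONS = {
    ("fork", "eating"):  "principled",
    ("fork", "washing"): "statistical",
}

def acceptable_teleological(concept: str, activity: str) -> bool:
    """Predict whether '<concept>s are for <activity>' is an acceptable generalization."""
    return CONNECTIONS.get((concept, activity)) == "principled"

print(acceptable_teleological("fork", "eating"))   # True:  'forks are for eating'
print(acceptable_teleological("fork", "washing"))  # False: 'forks are for washing'
```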

APR 10, 2018

CFP: Stockholm Workshop on Human + Automated Reasoning

Interested in reasoning? The psychologists who study how humans do it don’t talk much with the scientists who get computers to do it. We’re trying to fix that: we put together a workshop that bridges the communities that study human and machine reasoning. Here are the details:

Full Paper submission deadline: 25th of April, 2018
Notification: 3rd of June, 2018
Final submission: 17th of June, 2018
Workshop: July 2018 at FAIM in Stockholm, Sweden
Read more about it: http://ratiolog.uni-koblenz.de/bridging2018
Submit your papers here: https://easychair.org/conferences/?conf=bridging2018

MAR 12, 2018

HRI 2018 presentation on explanatory biases + deep learning

My colleague, Esube Bekele, recently presented our research on integrating deep learning (specifically, a person re-identification network) with an explanatory bias known as the “inherence bias”. The work was featured in the “Explainable Robotics Systems” workshop at HRI 2018. Here’s the paper, and here’s the abstract:

Despite the remarkable progress in deep learning in recent years, a major challenge for present systems is to generate explanations compelling enough to serve as useful accounts of the system’s operations [1]. We argue that compelling explanations are those that exhibit human-like biases. For instance, humans prefer explanations that concern inherent properties instead of extrinsic influences. The bias is pervasive in that it affects the fitness of explanations across a broad swath of contexts [2], particularly those that concern conflicting or anomalous observations. We show how person re-identification (re-ID) networks can exhibit an inherence bias. Re-ID networks operate by computing similarity metrics between pairs of images to infer whether the images display the same individual. State-of-the-art re-ID networks tend to output a description of a particular individual, a similarity metric, or a discriminative model [3], but no existing re-ID network provides an explanation of its operations. To address the deficit, we developed a multi-attribute residual network that treats a subset of its features as either inherent or extrinsic, and we trained the network against the ViPER dataset [4]. Unlike previous systems, the network reports a judgment paired with an explanation of that judgment in the form of a description. The descriptions concern inherent properties when the network detects dissimilarity and extrinsic properties when it detects similarity. We argue that such a system provides a blueprint for how to make the operations of deep learning techniques comprehensible to human operators.
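
The paper describes the actual re-ID architecture and training. Purely to make the explanation-generation idea concrete, here’s a minimal sketch of how a same/different judgment might be paired with an inherence-biased description, using invented attribute names and a made-up threshold rather than anything from the network in the paper.

```python
from statistics import mean

# Invented attribute sets for illustration: "inherent" attributes describe the
# person; "extrinsic" attributes describe circumstances of the image.
INHERENT = {"hair_color", "build", "gender_presentation"}
EXTRINSIC = {"backpack", "jacket", "lighting"}

def explain_reid(similarities: dict, threshold: float = 0.5) -> str:
    """Pair a same/different judgment with an explanatory description.

    `similarities` maps attribute name -> similarity in [0, 1] between two images.
    Following the inherence-bias idea, dissimilarity is explained by inherent
    attributes and similarity by extrinsic ones.
    """
    same_person = mean(similarities.values()) >= threshold
    pool = EXTRINSIC if same_person else INHERENT
    candidates = [a for a in similarities if a in pool]
    # Cite the attributes that most support the judgment: most similar ones when
    # judging "same", least similar ones when judging "different".
    candidates.sort(key=lambda a: similarities[a], reverse=same_person)
    cited = candidates[:2]
    verdict = "same individual" if same_person else "different individuals"
    return f"Judged {verdict}, citing: {', '.join(cited)}."

if __name__ == "__main__":
    scores = {"hair_color": 0.2, "build": 0.3, "backpack": 0.9, "lighting": 0.4}
    print(explain_reid(scores))  # Judged different individuals, citing: hair_color, build.
```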

MAR 10, 2018

Talk on omissions at the Duke U. Workshop on Causal Reasoning

I gave a talk on omissive causation at the Workshop on Causal Reasoning, which was put together by Felipe de Brigard. The talk focused on recent collaborative work on how people represent and reason with omissive causes, e.g., “Not watering the plants caused them to die.” You can check out the slides here.

FEB 15, 2018

New chapter summarizing state-of-the-art research on reasoning

Do you know absolutely nothing about reasoning? Wanna fix that? I have a new chapter out in the Stevens’ Handbook that summarizes the latest and greatest research on reasoning and higher-level cognition. Here’s the link.

NOV 19, 2017

Chapter on mental models in the Routledge Handbook

I have a new chapter out that reviews how people use mental models to reason. You can read the chapter here, and here are the first couple of paragraphs:

The theory of mental models has a long history going back to the logic diagrams of C.S. Peirce in the nineteenth century. But it was the psychologist and physiologist Kenneth Craik who first introduced mental models into psychology. Individuals build a model of the world in their minds, so that they can simulate future events and thereby make prescient decisions (Craik, 1943). But reasoning, he thought, depends on verbal rules. He died tragically young, and had no chance to test these ideas. The current “model” theory began with the hypothesis that reasoning too depends on simulations using mental models (Johnson-Laird, 1980).
Reasoning is a systematic process that starts with semantic information in a set of premises, and transfers it to a conclusion. Semantic information increases with the number of possibilities that an assertion eliminates, and so it is inversely related to an assertion’s probability (Johnson-Laird, 1983, Ch. 2; Adams, 1998). And semantic information yields a taxonomy of reasoning. Deductions do not increase semantic information even if they concern probabilities, but inductions do increase it. Simple inductions, such as generalizations, rule out more possibilities than the premises do. Abductions, which are a special case of induction, introduce concepts that are not in the premises in order to create explanations (see Koslowski, this volume). The present chapter illustrates how the model theory elucidates these three major sorts of reasoning: deduction, abduction, and induction.
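
Since the excerpt defines semantic information in terms of eliminated possibilities, a small worked example may help. This is a toy sketch of that definition, not code from the chapter: with two atomic propositions there are four possibilities, an assertion’s information is the proportion it rules out, stronger assertions are accordingly less probable, deductions add no information, and inductions do.

```python
from itertools import product

# Enumerate all possibilities over two atomic propositions, A and B.
POSSIBILITIES = list(product([True, False], repeat=2))  # (A, B) pairs

def models(assertion):
    """Possibilities in which the assertion holds."""
    return [p for p in POSSIBILITIES if assertion(*p)]

def semantic_information(assertion):
    """Proportion of possibilities the assertion eliminates."""
    return 1 - len(models(assertion)) / len(POSSIBILITIES)

def a_or_b(a, b): return a or b
def a_and_b(a, b): return a and b
def just_a(a, b): return a

print(semantic_information(a_or_b))   # 0.25 -- eliminates 1 of 4 possibilities
print(semantic_information(a_and_b))  # 0.75 -- eliminates 3 of 4, so it is less probable
# A deduction's conclusion holds in every model of its premise (no information gained):
print(set(models(a_and_b)) <= set(models(just_a)))                    # True: 'A and B, so A'
# An induction's conclusion rules out possibilities the premise allows (information gained):
print(semantic_information(a_and_b) > semantic_information(a_or_b))   # True
```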

SEP 22, 2017

Blog post on the frontiers of explanatory reasoning

Recently, the journal Psychonomic Bulletin & Review put together a special issue on the Process of Explanation (guest edited by Andrei Cimpian and Frank Keil). I read almost all the papers in the special issue — they’re excellent and well worth your time. I participated in a Digital Event (organized by Stephan Lewandowsky) where I synthesized some of the papers I liked the most in a blog post. You can check it out here:

It’s Tricky to Build an Explanation Machine – Let’s Fix That

JAN 19, 2016

Paper on algorithmic thinking in children

Monica Bucciarelli, Robert Mackiewicz, Phil Johnson-Laird, and I recently published a new paper in the Journal of Cognitive Psychology describing a theory of how children use mental simulations and gestures to reason about simple algorithms, such as reversing the order of items in a list. Here’s a link to the paper, and here’s the abstract:

Experiments showed that children are able to create algorithms, that is, sequences of operations that solve problems, and that their gestures help them to do so. The theory of mental models, which is implemented in a computer program, postulates that the creation of algorithms depends on mental simulations that unfold in time. Gestures are outward signs of moves and they help the process. We tested 10-year-old children, because they can plan, and because they gesture more than adults. They were able to rearrange the order of 6 cars in a train (using a siding), and the difficulty of the task depended on the number of moves in minimal solutions (Experiment 1). They were also able to devise informal algorithms to rearrange the order of cars when they were not allowed to move the cars, and the difficulty of the task depended on the complexity of the algorithms (Experiment 2). When children were prevented from gesturing as they formulated algorithms, the accuracy of their algorithms declined by 13% (Experiment 3). We discuss the implications of these results.
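
Since the tasks involve shunting cars through a siding, here’s a toy rendition of why a siding lets you reverse a train. This is my own simplification for illustration, not the computer program from the paper: the siding acts as a last-in, first-out store, so pushing every car onto it and then pulling them back off reverses their order.

```python
from typing import List

def reverse_train(left_track: List[str]) -> List[str]:
    """Reverse the order of cars by shunting them through a siding.

    Toy rendition of the railway task: the siding holds cars last-in, first-out,
    so moving every car onto it and then pulling each one back off reverses the
    train. Each push or pull counts as one move.
    """
    siding: List[str] = []
    right_track: List[str] = []
    moves = 0
    while left_track:                       # shunt each car onto the siding
        siding.append(left_track.pop(0))
        moves += 1
    while siding:                           # pull each car off onto the other track
        right_track.append(siding.pop())
        moves += 1
    print(f"solved in {moves} moves")
    return right_track

print(reverse_train(["A", "B", "C", "D", "E", "F"]))  # ['F', 'E', 'D', 'C', 'B', 'A']
```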