All posts by: skhemlani (47)

APR 19, 2018

New theory of teleological generalizations in CogSci 2018

Joanna Korman will be presenting our theory of how people understand “teleological generalizations” at CogSci 2018 in Madison, WI later this year. Teleological generalizations are statements that cite the purpose or function of something, e.g., “Forks are for eating.” We sought to tackle the mystery of why some teleological generalizations make sense while others don’t: for example, “forks are for washing” seems like a silly generalization to make, even though you wash forks just as often as you eat with them (hopefully).
To preview our solution, here’s the abstract of the paper:

Certain generalizations are teleological, e.g., forks are for eating. But not all properties relevant to a particular concept permit teleological generalization. For instance, forks get washed roughly as often as they’re used for eating, yet the generalization, forks are for washing, might strike reasoners as unacceptable. What explains the discrepancy? A recent taxonomic theory of conceptual generalization (Prasada, 2017; Prasada & Dillingham, 2006; Prasada et al., 2013) argues that certain kinds of conceptual connections – known as “principled” connections – license generalizations, whereas associative, “statistical” connections license only probabilistic expectations. We apply this taxonomy to explain teleological generalization: it predicts that acceptable teleological generalizations concern concept-property pairs in which the concept bears a principled connection to a property. Under this analysis, the concept fork bears a principled connection to eating and a statistical connection to washing. Two experiments and a regression analysis tested and corroborated the predictions of the theory.

You can read the full paper here, and here are our data and analyses.
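If it helps to see the core prediction in code, here’s a toy sketch. To be clear, this is my own illustration, not the paper’s model: the little lexicon of connection labels below is an assumption invented for the example.

```python
# Toy sketch of the taxonomy's core prediction: "Ks are for P" is an
# acceptable teleological generalization only when the concept K bears a
# *principled* connection to the property P; merely statistical
# connections license probabilistic expectations, not generalizations.
# The lexicon below is an invented illustration, not data from the paper.

CONNECTIONS = {
    ("fork", "eating"): "principled",    # the function forks are made for
    ("fork", "washing"): "statistical",  # frequent, but merely associative
}

def acceptable_teleological(concept: str, prop: str) -> bool:
    """Predict the acceptability of 'CONCEPTs are for PROP'."""
    return CONNECTIONS.get((concept, prop)) == "principled"

for (concept, prop) in CONNECTIONS:
    verdict = "fine" if acceptable_teleological(concept, prop) else "odd"
    print(f"'{concept}s are for {prop}' sounds {verdict}")
```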

APR 10, 2018

CFP: Stockholm Workshop on Human + Automated Reasoning

Interested in reasoning? The psychologists who study how humans do it don’t talk much with the scientists who get computers to do it. We’re trying to fix that: we put together a workshop that bridges the communities that study human and machine reasoning. Here are the details:

Full Paper submission deadline: 25th of April, 2018
Notification: 3rd of June, 2018
Final submission: 17th of June, 2018
Workshop: July 2018 at FAIM in Stockholm, Sweden
Read more about it: http://ratiolog.uni-koblenz.de/bridging2018
Submit your papers here: https://easychair.org/conferences/?conf=bridging2018

MAR 12, 2018

HRI 2018 presentation on explanatory biases + deep learning

My colleague, Esube Bekele, recently presented our research on integrating deep learning (specifically, a person re-identification network) with an explanatory bias known as the “inherence bias”. The work was featured in the “Explainable Robotics Systems” workshop at HRI 2018. Here’s the paper, and here’s the abstract:

Despite the remarkable progress in deep learning in recent years, a major challenge for present systems is to generate explanations compelling enough to serve as useful accounts of the system’s operations [1]. We argue that compelling explanations are those that exhibit human-like biases. For instance, humans prefer explanations that concern inherent properties instead of extrinsic influences. The bias is pervasive in that it affects the fitness of explanations across a broad swath of contexts [2], particularly those that concern conflicting or anomalous observations. We show how person re-identification (re-ID) networks can exhibit an inherence bias. Re-ID networks operate by computing similarity metrics between pairs of images to infer whether the images display the same individual. State-of-the-art re-ID networks tend to output a description of a particular individual, a similarity metric, or a discriminative model [3], but no existing re-ID network provides an explanation of its operations. To address the deficit, we developed a multi-attribute residual network that treats a subset of its features as either inherent or extrinsic, and we trained the network against the ViPER dataset [4]. Unlike previous systems, the network reports a judgment paired with an explanation of that judgment in the form of a description. The descriptions concern inherent properties when the network detects dissimilarity and extrinsic properties when it detects similarity. We argue that such a system provides a blueprint for how to make the operations of deep learning techniques comprehensible to human operators.
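For a rough feel of how the explanation policy works, here’s a minimal sketch in plain Python, abstracted away from the residual network itself. The attribute lists, the similarity measure, and the threshold are all stand-ins I’ve assumed for illustration, not the system’s actual architecture:

```python
# Minimal sketch of the inherence bias described above. The attributes,
# similarity measure, and threshold are illustrative assumptions, not the
# actual re-ID network's code.

INHERENT = ["gender", "hair_color", "build"]   # properties of the person
EXTRINSIC = ["shirt_color", "carrying_bag"]    # situational properties

def similarity(a: dict, b: dict) -> float:
    """Fraction of attributes on which two detections agree."""
    keys = INHERENT + EXTRINSIC
    return sum(a[k] == b[k] for k in keys) / len(keys)

def judge_and_explain(a: dict, b: dict, threshold: float = 0.6):
    """Pair a same/different judgment with a biased explanation."""
    same = similarity(a, b) >= threshold
    # The bias: cite inherent properties when the images look dissimilar,
    # extrinsic properties when they look similar.
    basis = EXTRINSIC if same else INHERENT
    cited = [k for k in basis if (a[k] == b[k]) == same]
    verdict = "same person" if same else "different people"
    detail = "shared" if same else "differing"
    return verdict, f"because of {detail}: {', '.join(cited)}"

a = {"gender": "f", "hair_color": "brown", "build": "tall",
     "shirt_color": "red", "carrying_bag": True}
b = {"gender": "f", "hair_color": "brown", "build": "tall",
     "shirt_color": "red", "carrying_bag": False}
print(judge_and_explain(a, b))
# ('same person', 'because of shared: shirt_color')
```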

MAR 10, 2018

Talk on omissions at the Duke U. Workshop on Causal Reasoning

I gave a talk on omissive causation at the Workshop on Causal Reasoning, which was put together by Felipe de Brigard. The talk focused on recent collaborative work on how people represent and reason with omissive causes, e.g., “Not watering the plants caused them to die.” You can check out the slides here.

FEB 15, 2018

New chapter summarizing state-of-the-art research on reasoning

Do you know absolutely nothing about reasoning? Wanna fix that? I have a new chapter out in the Stevens’ Handbook that summarizes the latest and greatest research on reasoning and higher-level cognition. Here’s the link.

NOV 19, 2017

Chapter on mental models in the Routledge Handbook

I have a new chapter out that reviews how people use mental models to reason. You can read the chapter here, and the first couple paragraphs are available here:

The theory of mental models has a long history going back to the logic diagrams of C.S. Peirce in the nineteenth century. But it was the psychologist and physiologist Kenneth Craik who first introduced mental models into psychology. On his account, individuals build a model of the world in their minds so that they can simulate future events and thereby make prescient decisions (Craik, 1943). But reasoning, he thought, depends on verbal rules. He died tragically young, and had no chance to test these ideas. The current “model” theory began with the hypothesis that reasoning too depends on simulations using mental models (Johnson-Laird, 1980).
Reasoning is a systematic process that starts with semantic information in a set of premises, and transfers it to a conclusion. Semantic information increases with the number of possibilities that an assertion eliminates, and so it is inversely related to an assertion’s probability (Johnson-Laird, 1983, Ch. 2; Adams, 1998). And semantic information yields a taxonomy of reasoning. Deductions do not increase semantic information even if they concern probabilities, but inductions do increase it. Simple inductions, such as generalizations, rule out more possibilities than the premises do. Abductions, which are a special case of induction, introduce concepts that are not in the premises in order to create explanations (see Koslowski, this volume). The present chapter illustrates how the model theory elucidates these three major sorts of reasoning: deduction, abduction, and induction.
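To make the notion of semantic information concrete, here’s a toy formalization. The encoding is my own and assumes a finite set of equiprobable possibilities, under which an assertion’s information is simply the proportion of possibilities it eliminates, i.e., 1 minus its probability:

```python
from itertools import product

# Toy encoding of semantic information over two propositions p and q,
# assuming equiprobable possibilities: an assertion's information is the
# proportion of possibilities it eliminates (1 minus its probability).

def info(assertion, n_vars: int = 2) -> float:
    """Fraction of possibilities the assertion rules out."""
    worlds = list(product([True, False], repeat=n_vars))
    surviving = [w for w in worlds if assertion(*w)]
    return 1 - len(surviving) / len(worlds)

print(info(lambda p, q: p and q))  # 0.75 -- eliminates 3 of 4 possibilities
print(info(lambda p, q: p or q))   # 0.25 -- eliminates only 1 of 4
print(info(lambda p, q: p))        # 0.5

# A deduction never increases information: from 'p and q' (0.75) to 'p'
# (0.5). An induction does: from 'p or q' (0.25) to 'p and q' (0.75).
```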

SEP 22, 2017

Blog post on the frontiers of explanatory reasoning

Recently, the journal Psychonomic Bulletin & Review put together a special issue on the Process of Explanation (guest edited by Andrei Cimpian and Frank Keil). I read almost all the papers in the special issue — they’re excellent and well worth your time. I participated in a Digital Event (organized by Stephan Lewandowsky) where I synthesized some of the papers I liked the most in a blog post. You can check it out here:

It’s Tricky to Build an Explanation Machine – Let’s Fix That

JAN 19, 2016

Paper on algorithmic thinking in children

Monica Bucciarelli, Robert Mackiewicz, Phil Johnson-Laird and I recently published a new paper in the Journal of Cognitive Psychology describing a theory of how children use mental simulations and gestures to reason about simple algorithms, such as reversing the order of items in a list. Here’s a link to the paper, and here’s the abstract:

Experiments showed that children are able to create algorithms, that is, sequences of operations that solve problems, and that their gestures help them to do so. The theory of mental models, which is implemented in a computer program, postulates that the creation of algorithms depends on mental simulations that unfold in time. Gestures are outward signs of moves and they help the process. We tested 10-year-old children, because they can plan, and because they gesture more than adults. They were able to rearrange the order of 6 cars in a train (using a siding), and the difficulty of the task depended on the number of moves in minimal solutions (Experiment 1). They were also able to devise informal algorithms to rearrange the order of cars when they were not allowed to move the cars, and the difficulty of the task depended on the complexity of the algorithms (Experiment 2). When children were prevented from gesturing as they formulated algorithms, the accuracy of their algorithms declined by 13% (Experiment 3). We discuss the implications of these results.
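To get a feel for the task, here’s a toy simulation of one of the rearrangements. I’m assuming, as in classic shunting puzzles, that the siding behaves like a last-in-first-out stack, so reversing a six-car train takes one shunt onto the siding and one shunt off per car; the move counting is my own illustration, not the paper’s coding scheme:

```python
# Toy simulation of the train task, assuming the siding works like a
# last-in-first-out stack (an illustrative assumption; the experiments'
# environment also has tracks on either side of the siding).

def reverse_train(train: list) -> list:
    """Reverse a train by shunting every car onto the siding, then back."""
    siding, result, moves = [], [], 0
    for car in train:        # shunt each car onto the siding in turn
        siding.append(car)
        moves += 1
    while siding:            # the last car in is the first car out
        result.append(siding.pop())
        moves += 1
    print(f"{moves} moves")  # 12 moves for a six-car train
    return result

assert reverse_train(list("ABCDEF")) == list("FEDCBA")
```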

NOV 1, 2015

New paper on reasoning about events and time

I have a new paper out in Frontiers in Human Neuroscience on a theory, computer model, and robotic implementation of event segmentation and temporal reasoning. The paper is with Tony Harrison and Greg Trafton. Here’s the link and here’s the abstract:

We describe a novel computational theory of how individuals segment perceptual information into representations of events. The theory is inspired by recent findings in the cognitive science and cognitive neuroscience of event segmentation. In line with recent theories, it holds that online event segmentation is automatic, and that event segmentation yields mental simulations of events. But it posits two novel principles as well: first, discrete episodic markers track perceptual and conceptual changes, and can be retrieved to construct event models. Second, the process of retrieving and reconstructing those episodic markers is constrained and prioritized. We describe a computational implementation of the theory, as well as a robotic extension of the theory that demonstrates the processes of online event segmentation and event model construction. The theory is the first unified computational account of event segmentation and temporal inference. We conclude by demonstrating how neuroimaging data can constrain and inspire the construction of process-level theories of human reasoning.
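Here’s a bare-bones sketch of the two novel principles; the feature stream and the marker format are assumptions I’ve made for illustration, not the model’s actual implementation:

```python
# Bare-bones sketch of the theory's two novel principles: (1) discrete
# episodic markers are laid down whenever a tracked perceptual or
# conceptual feature changes, and (2) the markers can later be retrieved
# to reconstruct a sequence of event models. The feature stream and the
# marker format are illustrative assumptions, not the model's code.

def segment(stream):
    """Lay down a marker (time, feature, old, new) at each change."""
    markers = []
    for t in range(1, len(stream)):
        prev, curr = stream[t - 1], stream[t]
        for feature in curr:
            if curr[feature] != prev[feature]:
                markers.append((t, feature, prev[feature], curr[feature]))
    return markers

def events_from(markers, end):
    """Retrieve the markers and reconstruct event intervals."""
    boundaries = [0] + sorted({t for t, *_ in markers}) + [end]
    return list(zip(boundaries, boundaries[1:]))

stream = [{"agent": "idle"}, {"agent": "idle"}, {"agent": "walking"},
          {"agent": "walking"}, {"agent": "idle"}]
marks = segment(stream)
print(marks)                            # changes at t = 2 and t = 4
print(events_from(marks, len(stream)))  # [(0, 2), (2, 4), (4, 5)]
```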

MAY 7, 2015

Three new papers on causality in CogSci 2015

I’ve been doing a bit of work on causal reasoning lately with my colleagues, Paul Bello, Geoff Goodwin, and Phil Johnson-Laird. Here are links to three papers that I’ll be presenting at CogSci 2015 in Pasadena, CA later this summer: