All posts by: skhemlani (35)


MAR 12, 2018

HRI 2018 presentation on explanatory biases + deep learning

My colleague, Esube Bekele, recently presented our research on integrating deep learning (specifically, a person re-identification network) with an explanatory bias known as the “inherence bias”. The work was featured in the “Explainable Robotics Systems” workshop at HRI 2018. Here’s the paper, and here’s the abstract:

Despite the remarkable progress in deep learning in recent years, a major challenge for present systems is to generate explanations compelling enough to serve as useful accounts of the system’s operations [1]. We argue that compelling explanations are those that exhibit human-like biases. For instance, humans prefer explanations that concern inherent properties instead of extrinsic influences. The bias is pervasive in that it affects the fitness of explanations across a broad swath of contexts [2], particularly those that concern conflicting or anomalous observations. We show how person re-identification (re-ID) networks can exhibit an inherence bias. Re-ID networks operate by computing similarity metrics between pairs of images to infer whether the images display the same individual. State-of-the-art re-ID networks tend to output a description of a particular individual, a similarity metric, or a discriminative model [3], but no existing re-ID network provides an explanation of its operations. To address the deficit, we developed a multi-attribute residual network that treats a subset of its features as either inherent or extrinsic, and we trained the network against the ViPER dataset [4]. Unlike previous systems, the network reports a judgment paired with an explanation of that judgment in the form of a description. The descriptions concern inherent properties when the network detects dissimilarity and extrinsic properties when it detects similarity. We argue that such a system provides a blueprint for how to make the operations of deep learning techniques comprehensible to human operators.
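To make the idea concrete, here is a toy sketch — my illustration only, not the network described in the paper; the attribute names, threshold, and embeddings are all invented — of how a similarity judgment might be paired with an inherence-biased description:

```python
import math

# Hypothetical sketch (not the paper's network): pair a similarity judgment
# with a description whose vocabulary reflects the inherence bias — extrinsic
# attributes explain a "same person" match, inherent ones explain a mismatch.
INHERENT = ["hair_color", "build"]    # properties of the person
EXTRINSIC = ["backpack", "jacket"]    # properties of the situation

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

def reid_judgment(emb_a, emb_b, attrs, threshold=0.8):
    """Return a same/different judgment plus a descriptive explanation."""
    same = cosine(emb_a, emb_b) >= threshold
    vocabulary = EXTRINSIC if same else INHERENT
    return same, [a for a in vocabulary if attrs.get(a)]

# A match is described extrinsically; a mismatch, inherently.
print(reid_judgment([1.0, 0.0], [1.0, 0.0], {"backpack": True, "hair_color": True}))
print(reid_judgment([1.0, 0.0], [0.0, 1.0], {"backpack": True, "hair_color": True}))
```

The point of the sketch is only the last step: the same attribute dictionary yields different descriptions depending on the polarity of the judgment, which is the bias the abstract describes.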

MAR 10, 2018

Talk on omissions at the Duke U. Workshop on Causal Reasoning

I gave a talk on omissive causation at the Workshop on Causal Reasoning, which was put together by Felipe de Brigard. The talk focused on recent collaborative work on how people represent and reason with omissive causes, e.g., “Not watering the plants caused them to die.” You can check out the slides here.

FEB 15, 2018

New chapter summarizing state-of-the-art research on reasoning

Do you know absolutely nothing about reasoning? Wanna fix that? I have a new chapter out in the Stevens’ Handbook that summarizes the latest and greatest research on reasoning and higher-level cognition. Here’s the link.

NOV 19, 2017

Chapter on mental models in the Routledge Handbook

I have a new chapter out that reviews how people use mental models to reason. You can read the chapter here, and the first couple paragraphs are available here:

The theory of mental models has a long history going back to the logic diagrams of C.S. Peirce in the nineteenth century. But it was the psychologist and physiologist Kenneth Craik who first introduced mental models into psychology. On his account, individuals build a model of the world in their minds, so that they can simulate future events and thereby make prescient decisions (Craik, 1943). But reasoning, he thought, depends on verbal rules. He died tragically young, and had no chance to test these ideas. The current “model” theory began with the hypothesis that reasoning too depends on simulations using mental models (Johnson-Laird, 1980).
Reasoning is a systematic process that starts with semantic information in a set of premises, and transfers it to a conclusion. Semantic information increases with the number of possibilities that an assertion eliminates, and so it is inversely related to an assertion’s probability (Johnson-Laird, 1983, Ch. 2; Adams, 1998). And semantic information yields a taxonomy of reasoning. Deductions do not increase semantic information even if they concern probabilities, but inductions do increase it. Simple inductions, such as generalizations, rule out more possibilities than the premises do. Abductions, which are a special case of induction, introduce concepts that are not in the premises in order to create explanations (see Koslowski, this volume). The present chapter illustrates how the model theory elucidates these three major sorts of reasoning: deduction, abduction, and induction.
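The inverse relation between semantic information and probability is easy to see with a toy calculation — an illustrative sketch of my own, not from the chapter — over two binary propositions, where information is measured as the fraction of possibilities an assertion eliminates:

```python
from itertools import product

# Illustrative sketch: two binary propositions A and B give four possible
# states; an assertion's semantic information is the fraction it rules out.
STATES = list(product([True, False], repeat=2))

def info(assertion):
    """Fraction of the four possibilities the assertion eliminates."""
    allowed = [s for s in STATES if assertion(*s)]
    return 1 - len(allowed) / len(STATES)

conjunction = lambda a, b: a and b   # allows 1 state of 4: eliminates 3
disjunction = lambda a, b: a or b    # allows 3 states of 4: eliminates 1
print(info(conjunction), info(disjunction))  # 0.75 0.25
```

The conjunction is the less probable assertion and carries the more semantic information — exactly the inverse relation the passage describes.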

SEP 22, 2017

Blog post on the frontiers of explanatory reasoning

Recently, the journal Psychonomic Bulletin & Review put together a special issue on the Process of Explanation (guest edited by Andrei Cimpian and Frank Keil). I read almost all the papers in the special issue — they’re excellent and well worth your time. I participated in a Digital Event (organized by Stephan Lewandowsky) where I synthesized some of the papers I liked the most in a blog post. You can check it out here:

It’s Tricky to Build an Explanation Machine – Let’s Fix That

JAN 19, 2016

Paper on algorithmic thinking in children

Monica Bucciarelli, Robert Mackiewicz, Phil Johnson-Laird and I recently published a new paper in the Journal of Cognitive Psychology describing a theory of how children use mental simulations and gestures to reason about simple algorithms, such as reversing the order of items in a list. Here’s a link to the paper, and here’s the abstract:

Experiments showed that children are able to create algorithms, that is, sequences of operations that solve problems, and that their gestures help them to do so. The theory of mental models, which is implemented in a computer program, postulates that the creation of algorithms depends on mental simulations that unfold in time. Gestures are outward signs of moves and they help the process. We tested 10-year-old children, because they can plan, and because they gesture more than adults. They were able to rearrange the order of 6 cars in a train (using a siding), and the difficulty of the task depended on the number of moves in minimal solutions (Experiment 1). They were also able to devise informal algorithms to rearrange the order of cars when they were not allowed to move the cars, and the difficulty of the task depended on the complexity of the algorithms (Experiment 2). When children were prevented from gesturing as they formulated algorithms, the accuracy of their algorithms declined by 13% (Experiment 3). We discuss the implications of these results.
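For readers who want a feel for the task, here is a minimal sketch — my illustration, assuming the siding behaves as a stack that cars enter and leave one at a time — of the reversal algorithm in the simplest case:

```python
# Minimal sketch: a siding modeled as a stack. Shunting every car through it
# one at a time reverses the order of the train.
def reverse_via_siding(train):
    siding = []
    for car in train:          # drive each car from the main track onto the siding
        siding.append(car)
    reordered = []
    while siding:              # cars leave the siding in last-in, first-out order
        reordered.append(siding.pop())
    return reordered

print(reverse_via_siding([1, 2, 3, 4, 5, 6]))  # [6, 5, 4, 3, 2, 1]
```

Six cars, twelve moves — and, as Experiment 1 found, the number of moves in the minimal solution is what governs difficulty.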

NOV 01, 2015

New paper on reasoning about events and time

I have a new paper out in Frontiers in Human Neuroscience on a theory, computer model, and robotic implementation of event segmentation and temporal reasoning. The paper is with Tony Harrison and Greg Trafton. Here’s the link and here’s the abstract:

We describe a novel computational theory of how individuals segment perceptual information into representations of events. The theory is inspired by recent findings in the cognitive science and cognitive neuroscience of event segmentation. In line with recent theories, it holds that online event segmentation is automatic, and that event segmentation yields mental simulations of events. But it posits two novel principles as well: first, discrete episodic markers track perceptual and conceptual changes, and can be retrieved to construct event models; second, the process of retrieving and reconstructing those episodic markers is constrained and prioritized. We describe a computational implementation of the theory, as well as a robotic extension of the theory that demonstrates the processes of online event segmentation and event model construction. The theory is the first unified computational account of event segmentation and temporal inference. We conclude by demonstrating how neuroimaging data can constrain and inspire the construction of process-level theories of human reasoning.
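As a rough illustration of the first principle — my sketch, not the paper’s implementation — an episodic marker can be laid down wherever a tracked feature of the perceptual stream changes:

```python
# Rough illustration of the episodic-marker principle: lay down a marker at
# every index where a tracked feature differs from the previous frame.
def segment(stream):
    markers = [0]                          # an event boundary at the start
    for i in range(1, len(stream)):
        if stream[i] != stream[i - 1]:     # perceptual change -> new marker
            markers.append(i)
    return markers

print(segment(["walk", "walk", "run", "run", "run", "sit"]))  # [0, 2, 5]
```

The markers are discrete and retrievable after the fact, which is what lets an event model be reconstructed from them.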

MAY 07, 2015

Three new papers on causality in CogSci 2015

I’ve been doing a bit of work on causal reasoning lately with my colleagues, Paul Bello, Geoff Goodwin, and Phil Johnson-Laird. Here are links to three papers that I’ll be presenting at CogSci 2015 in Pasadena, CA later this summer:

MAR 16, 2015

Review on integrating probability and deduction in human reasoning out in TiCS

I wrote a paper with Phil Johnson-Laird and Geoff Goodwin that reviews recent developments in theories of human reasoning. It seeks to explain how logic and probability fit together with cognitive processes of inference. You can download it here, and here’s the abstract:

This review addresses the long-standing puzzle of how logic and probability fit together in human reasoning. Many cognitive scientists argue that conventional logic cannot underlie deductions, because it never requires valid conclusions to be withdrawn – not even if they are false; it treats conditional assertions implausibly; and it yields many vapid, although valid, conclusions. A new paradigm of probability logic allows conclusions to be withdrawn and treats conditionals more plausibly, although it does not address the problem of vapidity. The theory of mental models solves all of these problems. It explains how people reason about probabilities and postulates that the machinery for reasoning is itself probabilistic. Recent investigations accordingly suggest a way to integrate probability and deduction.

MAR 07, 2015

Comprehensive model of immediate inferences in QJEP

I published a computational model of immediate quantification inferences in QJEP with my co-authors, Max Lotstein, Greg Trafton, and Phil Johnson-Laird. You can download it here, and here’s the abstract:

We propose a theory of immediate inferences from assertions containing a single quantifier, such as: All of the artists are bakers; therefore, some of the bakers are artists. The theory is based on mental models and is implemented in a computer program, mReasoner. It predicts three main levels of increasing difficulty: (a) immediate inferences in which the premise and conclusion have identical meanings; (b) those in which the initial mental model of the premise yields the correct conclusion; and (c) those in which only an alternative to the initial model establishes the correct conclusion. These levels of difficulty were corroborated for inferences to necessary conclusions (in a reanalysis of data from Newstead, S. E., & Griggs, R. A. (1983). Drawing inferences from quantified statements: A study of the square of opposition. Journal of Verbal Learning and Verbal Behavior, 22, 535–546), for inferences to modal conclusions, such as, it is possible that all of the bakers are artists (Experiment 1), for inferences with unorthodox quantifiers, such as, most of the artists (Experiment 2), and for inferences about the consistency of pairs of quantified assertions (Experiment 3). The theory also includes three parameters in a stochastic system that predicted quantitative differences in accuracy within the three main sorts of inference.
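A toy sketch of the model-based idea — my illustration; mReasoner itself is far richer — represents the initial mental model of “All of the artists are bakers” as a small set of individuals and then checks the conclusion against it:

```python
# Toy sketch of a mental-model check (not mReasoner itself): a model is a
# list of individuals, each represented as a set of properties.
def build_model_all(a, b, n=3):
    """Initial mental model of 'All A are B': n individuals, each both A and B."""
    return [{a, b} for _ in range(n)]

def holds_some(model, a, b):
    """'Some A are B' holds in a model if some individual is both A and B."""
    return any(a in ind and b in ind for ind in model)

# 'All of the artists are bakers' -> 'Some of the bakers are artists'
model = build_model_all("artist", "baker")
print(holds_some(model, "baker", "artist"))  # True
```

This inference is at the theory’s easiest level beyond identity: the initial model of the premise already yields the correct conclusion, with no need to search for an alternative model.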