All posts by: skhemlani (21)


SEP 22, 2017

Blog post on the frontiers of explanatory reasoning

Recently, the journal Psychonomic Bulletin & Review put together a special issue on the Process of Explanation (guest edited by Andrei Cimpian and Frank Keil). I read almost all the papers in the special issue — they’re excellent and well worth your time. I participated in a Digital Event (organized by Stephan Lewandowsky) where I synthesized some of the papers I liked the most in a blog post. You can check it out here:

It’s Tricky to Build an Explanation Machine – Let’s Fix That

JAN 19, 2016

Paper on algorithmic thinking in children

Monica Bucciarelli, Robert Mackiewicz, Phil Johnson-Laird, and I recently published a new paper in the Journal of Cognitive Psychology describing a theory of how children use mental simulations and gestures to reason about simple algorithms, such as reversing the order of items in a list. Here’s a link to the paper, and here’s the abstract:

Experiments showed that children are able to create algorithms, that is, sequences of operations that solve problems, and that their gestures help them to do so. The theory of mental models, which is implemented in a computer program, postulates that the creation of algorithms depends on mental simulations that unfold in time. Gestures are outward signs of moves, and they help the process. We tested 10-year-old children, because they can plan, and because they gesture more than adults. They were able to rearrange the order of 6 cars in a train (using a siding), and the difficulty of the task depended on the number of moves in minimal solutions (Experiment 1). They were also able to devise informal algorithms to rearrange the order of cars when they were not allowed to move the cars, and the difficulty of the task depended on the complexity of the algorithms (Experiment 2). When children were prevented from gesturing as they formulated algorithms, the accuracy of their algorithms declined by 13% (Experiment 3). We discuss the implications of these results.
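
To make the railway task concrete, here’s a minimal Python sketch (my illustration, not the paper’s formalism) of the reversal algorithm the children had to discover: a siding behaves like a stack, so moving every car onto it and then pulling each one back off reverses the train.

```python
def reverse_train(train):
    """Reverse a train's cars using a siding (a LIFO stack)."""
    siding = []
    moves = 0
    while train:
        siding.append(train.pop(0))  # move the front car onto the siding
        moves += 1
    reversed_train = []
    while siding:
        reversed_train.append(siding.pop())  # pull cars back off the siding
        moves += 1
    return reversed_train, moves

cars = ["A", "B", "C", "D", "E", "F"]
print(reverse_train(list(cars)))  # (['F', 'E', 'D', 'C', 'B', 'A'], 12)
```

In this scheme, reversing a six-car train takes 12 moves: one to push each car onto the siding and one to pull it back off.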

NOV 01, 2015

New paper on reasoning about events and time

I have a new paper out in Frontiers in Human Neuroscience on a theory, computer model, and robotic implementation of event segmentation and temporal reasoning. The paper is with Tony Harrison and Greg Trafton. Here’s the link and here’s the abstract:

We describe a novel computational theory of how individuals segment perceptual information into representations of events. The theory is inspired by recent findings in the cognitive science and cognitive neuroscience of event segmentation. In line with recent theories, it holds that online event segmentation is automatic, and that event segmentation yields mental simulations of events. But it posits two novel principles as well: First, discrete episodic markers track perceptual and conceptual changes, and can be retrieved to construct event models. Second, the process of retrieving and reconstructing those episodic markers is constrained and prioritized. We describe a computational implementation of the theory, as well as a robotic extension of the theory that demonstrates the processes of online event segmentation and event model construction. The theory is the first unified computational account of event segmentation and temporal inference. We conclude by demonstrating how neuroimaging data can constrain and inspire the construction of process-level theories of human reasoning.
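
As a rough illustration of the episodic-marker principle, here’s a minimal Python sketch (mine, not the authors’ implementation): it records a marker whenever the perceptual stream changes by more than a threshold, and those markers can later be retrieved as event boundaries. The feature vectors and the threshold are hypothetical.

```python
def segment(stream, threshold=0.5):
    """Record an episodic marker whenever the input changes enough."""
    markers = []
    previous = None
    for t, features in enumerate(stream):
        if previous is not None:
            change = sum(abs(a - b) for a, b in zip(features, previous))
            if change > threshold:
                markers.append((t, features))  # marker: time plus a snapshot
        previous = features
    return markers

# Three-feature observations; the jump at t=2 yields one event boundary.
stream = [(0.1, 0.1, 0.1), (0.1, 0.2, 0.1), (0.9, 0.8, 0.9), (0.9, 0.8, 0.8)]
print(segment(stream))  # [(2, (0.9, 0.8, 0.9))]
```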

MAY 07, 2015

Three new papers on causality in CogSci 2015

I’ve been doing a bit of work on causal reasoning lately with my colleagues Paul Bello, Geoff Goodwin, and Phil Johnson-Laird. Here are links to three papers that I’ll be presenting at CogSci 2015 in Pasadena, CA later this summer:

MAR 16, 2015

Review on integrating probability and deduction in human reasoning out in TiCS

I wrote a paper with Phil Johnson-Laird and Geoff Goodwin that reviews recent developments in theories of human reasoning. It seeks to explain how logic and probability fit together with cognitive processes of inference. You can download it here, and here’s the abstract:

This review addresses the long-standing puzzle of how logic and probability fit together in human reasoning. Many cognitive scientists argue that conventional logic cannot underlie deductions, because it never requires valid conclusions to be withdrawn – not even if they are false; it treats conditional assertions implausibly; and it yields many vapid, although valid, conclusions. A new paradigm of probability logic allows conclusions to be withdrawn and treats conditionals more plausibly, although it does not address the problem of vapidity. The theory of mental models solves all of these problems. It explains how people reason about probabilities and postulates that the machinery for reasoning is itself probabilistic. Recent investigations accordingly suggest a way to integrate probability and deduction.
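
To see what “vapid, although valid” means, consider a toy example of my own (not from the paper): from the premise A, conventional logic licenses the conclusion A-or-B for any B whatsoever, and because the logic is monotonic, no further premise can ever force that conclusion to be withdrawn.

```python
from itertools import product

def valid(premise, conclusion, n_vars=2):
    """Brute-force validity check over all truth assignments."""
    return all(conclusion(*vals)
               for vals in product([True, False], repeat=n_vars)
               if premise(*vals))

# Premise: A. Conclusion: A or B. Valid, but vapid.
print(valid(lambda a, b: a, lambda a, b: a or b))            # True
# Monotonic: strengthening the premise can never invalidate it.
print(valid(lambda a, b: a and not b, lambda a, b: a or b))  # still True
```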

MAR 07, 2015

Comprehensive model of immediate inferences in QJEP

I published a computational model of immediate quantification inferences in QJEP with my co-authors, Max Lotstein, Greg Trafton, and Phil Johnson-Laird. You can download it here, and here’s the abstract:

We propose a theory of immediate inferences from assertions containing a single quantifier, such as: All of the artists are bakers; therefore, some of the bakers are artists. The theory is based on mental models and is implemented in a computer program, mReasoner. It predicts three main levels of increasing difficulty: (a) immediate inferences in which the premise and conclusion have identical meanings; (b) those in which the initial mental model of the premise yields the correct conclusion; and (c) those in which only an alternative to the initial model establishes the correct conclusion. These levels of difficulty were corroborated for inferences to necessary conclusions (in a reanalysis of data from Newstead, S. E., & Griggs, R. A. (1983). Drawing inferences from quantified statements: A study of the square of opposition. Journal of Verbal Learning and Verbal Behavior, 22, 535–546), for inferences to modal conclusions, such as, it is possible that all of the bakers are artists (Experiment 1), for inferences with unorthodox quantifiers, such as, most of the artists (Experiment 2), and for inferences about the consistency of pairs of quantified assertions (Experiment 3). The theory also includes three parameters in a stochastic system that predicted quantitative differences in accuracy within the three main sorts of inference.
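
As a rough sketch of the model-based idea (an illustration only, not mReasoner itself, which is a stochastic system with fitted parameters), a mental model can be treated as a small set of individuals, and an immediate inference can be checked directly against it:

```python
# A mental model as a small set of individuals, each a set of properties.
all_artists_are_bakers = [
    {"artist", "baker"},
    {"artist", "baker"},
    {"baker"},  # a baker who is not an artist is also consistent
]

def some_a_are_b(model, a, b):
    """'Some A are B' holds if at least one individual has both properties."""
    return any(a in individual and b in individual for individual in model)

print(some_a_are_b(all_artists_are_bakers, "baker", "artist"))  # True
```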

NOV 19, 2014

Theory on unique probabilities out in Cognitive Science

Max Lotstein, Phil Johnson-Laird, and I published a paper in Cognitive Science on how people estimate unique probabilities, like the probability that Jeb Bush will be elected US President in 2016. The theory hinges on how mental models of beliefs are used to update iconic representations of probability. Here’s a link and here’s the abstract:

We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory’s predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning.
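
The “primitive average” is easy to illustrate (a sketch of the idea, not the authors’ code): averaging the component probabilities of a conjunction can push the estimate of P(A and B) above one of its conjuncts, which no coherent joint probability distribution allows, and which is the kind of violation the experiments probed.

```python
def primitive_average(p_a, p_b):
    """System 1's conjunction estimate as a primitive average (illustrative)."""
    return (p_a + p_b) / 2

p_a, p_b = 0.9, 0.2
p_and = primitive_average(p_a, p_b)
print(p_and)                  # 0.55
print(p_and > min(p_a, p_b))  # True: P(A and B) > P(B), a violation
```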

JUL 12, 2014

LRW 8 presentation on conditional probabilities

I recently gave a talk on the conditional probabilities of unique events (Khemlani, Lotstein, & Johnson-Laird, 2014) at the 8th London Reasoning Workshop at Birkbeck College. You can download the presentation here.

APR 22, 2014

Monsters for science

Earlier this year, Abby Sussman, Danny Oppenheimer, and I published a paper on latent scope biases in higher cognition. One of the fun things about writing the paper was preparing the materials for the experiment: we worked with Mike Lariccia, a friend who’s also a fantastic illustrator of graphic novels.

DEC 20, 2013

Paper on kinematic mental simulations out in PNAS

I recently published a paper on kinematic mental simulations in PNAS. The paper is with Monica Bucciarelli, Robert Mackiewicz, and Phil Johnson-Laird, and it examines how reasoners without any background in computer science or logic can construct mental “algorithms” in a systematic way, akin to recipes or driving directions.