JUN 21

Recent work by the R Lab at ICT 2021

The Reasoning Lab presented work on how people think and reason about time, durations, causality, bouletics, kinematics, and quantifiers at this year’s International Conference on Thinking (ICT 2021). For those who couldn’t make the conference, I’ve included an archive of the presentations here:

JUN 18

mReasoner reasoning engine detailed in Psych Review

Phil Johnson-Laird and I recently published a deep dive into the mReasoner computational cognitive model and the new theory of reasoning about properties that it implements. We describe a new model-based theory of reasoning about quantifiers, such as “all”, “some”, and “most”, as well as a series of simulation studies that show how the system carries out a number of different reasoning tasks, such as assessing whether a set of statements is possible, consistent, or necessary. The system implements and tests a novel set of heuristics for syllogistic reasoning, and it shows how to stochastically vary the structure of mental models. You can read more in the abstract here:

We present a theory of how people reason about properties. Such inferences have been studied since Aristotle’s invention of Western logic. But, no previous psychological theory gives an adequate account of them, and most theories do not go beyond syllogistic inferences, such as: All the bankers are architects; Some of the chefs are bankers; What follows? The present theory postulates that such assertions establish relations between properties, which mental models represent in corresponding relations between sets of entities. The theory combines the construction of models with innovative heuristics that scan them to draw conclusions. It explains the processes that generate a conclusion from premises, decides if a given conclusion is necessary or possible, assesses its probability, and evaluates the consistency of a set of assertions. A computer program implementing the theory embodies an intuitive system 1 and a deliberative system 2, and it copes with quantifiers such as more than half the architects. It fit data from over 200 different sorts of inference, including those about the properties of individuals, the properties of a set of individuals, and the properties of several such sets in syllogisms. Another innovation is that the program accounts for differences in reasoning from one individual to another, and from one group of individuals to another: some tend to reason intuitively but some go beyond intuitions to search for alternative models. The theory extends to inferences about disjunctions of properties, about relations rather than properties, and about the properties of properties.

and from the paper itself, available for download here.
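For readers who want intuition for what model construction and scanning look like, here’s a toy sketch in Python. It is not mReasoner’s actual code: the representation (individuals as sets of property labels) and the function names are my own illustrative assumptions, and it builds only a single deterministic model, with none of the system’s stochastic variation or deliberative search for alternative models.

```python
def model_all(a, b, n=4):
    """Build a mental model of 'All a are b': n individuals, each
    of which has both properties (a toy, deterministic rendering)."""
    return [{a, b} for _ in range(n)]

def add_some(model, a, b):
    """Conjoin the premise 'Some a are b': ensure at least one
    individual in the model has both properties."""
    for individual in model:
        if a in individual:
            individual.add(b)
            return model
    model.append({a, b})
    return model

def holds_some(model, a, b):
    """Scanning heuristic: 'Some a are b' holds in the model if
    any individual has both properties."""
    return any(a in ind and b in ind for ind in model)

# Premises: All the bankers are architects; Some of the chefs are bankers.
model = model_all("banker", "architect")
model = add_some(model, "banker", "chef")

# Scan the model for a candidate conclusion:
# Some of the chefs are architects.
print(holds_some(model, "chef", "architect"))  # True
```

In the real system, an intuitive conclusion like this one can then be submitted to a deliberative search for counterexample models; this sketch stops at the first, intuitive model.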

FEB 10

Frontiers paper on theories of omission

I published a paper in Frontiers in Psychology with a team of researchers at NRL that includes Paul Bello, Gordon Briggs, Hillary Harner, and Christina Wasylyshyn on how people reason about omissive causation. People tend to reason with iconic possibilities that yield temporal inferences, and they tend to consider one possibility at a time, two patterns that are best explained by the model theory of causation. Here’s the abstract:

When the absence of an event causes some outcome, it is an instance of omissive causation. For instance, not eating lunch may cause you to be hungry. Recent psychological proposals concur that the mind represents causal relations, including omissive causal relations, through mental simulation, but they disagree on the form of that simulation. One theory states that people represent omissive causes as force vectors; another states that omissions are representations of contrasting counterfactual simulations; a third argues that people think about omissions by representing sets of iconic possibilities – mental models – in a piecemeal fashion. In this paper, we tease apart the empirical predictions of the three theories and describe experiments that run counter to two of them. Experiments 1 and 2 show that reasoners can infer temporal relations from omissive causes – a pattern that contravenes the force theory. Experiment 3 asked participants to list the possibilities consistent with an omissive cause – it found that they tended to list particular privileged possibilities first, most often, and faster than alternative possibilities. The pattern is consistent with the model theory, but inconsistent with the contrast hypothesis. We marshal the evidence and explain why it helps to solve a long-standing debate about how the mind represents omissions.

and the paper is available for download here.
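For intuition about what listing “iconic possibilities” amounts to, here’s a toy Python enumeration for an omissive cause of the form “not-A causes B” (say, not eating lunch causes hunger). It makes a simplifying assumption of my own, reading “causes” as sufficiency of the cause for the effect, which may not match the paper’s exact semantics, and it puts the privileged possibility (A absent, B present) first, in line with the model theory’s prediction that people list that possibility first and fastest.

```python
from itertools import product

def possibilities():
    """Enumerate possibilities consistent with 'not-A causes B',
    reading 'causes' as: if the cause (absence of A) holds, the
    effect B holds. Each possibility is a pair (A occurs?, B occurs?)."""
    consistent = []
    for a, b in product([False, True], repeat=2):
        cause_present = not a          # the omissive cause: A is absent
        if cause_present and not b:
            continue                   # cause present, effect absent: ruled out
        consistent.append((a, b))
    # The privileged possibility (A absent, B present) goes first.
    consistent.sort(key=lambda m: m != (False, True))
    return consistent

print(possibilities())  # [(False, True), (True, False), (True, True)]
```

On this rendering, the one possibility excluded is the case where lunch is skipped and hunger fails to occur, and the privileged model is the one in which lunch is skipped and hunger occurs.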

JAN 18

Paper on norms and future causation out in Cognitive Science

In a project led by Paul Henne (Lake Forest College), we recently published a paper in Cognitive Science about how norms affect prospective causal judgments, i.e., judgments about whether a particular situation can cause a future event. Here’s the abstract:

People more frequently select norm-violating factors, relative to norm-conforming ones, as the cause of some outcome. Until recently, this abnormal-selection effect has been studied using retrospective vignette-based paradigms. We use a novel set of video stimuli to investigate this effect for prospective causal judgments—that is, judgments about the cause of some future outcome. Four experiments show that people more frequently select norm-violating factors, relative to norm-conforming ones, as the cause of some future outcome. We show that the abnormal-selection effects are not primarily explained by the perception of agency (Experiment 4). We discuss these results in relation to recent efforts to model causal judgment.

and here’s a link to the paper.

DEC 17, 2020

How people assess whether an explanation is “complete”

All explanations are incomplete, but some explanations are more complete than others: this is the central result of our recent work and of other research into explanatory reasoning (e.g., Zemla et al., 2017). Joanna Korman and I describe a new theory of explanatory reasoning, now out in Acta Psychologica. Here’s the abstract:

All explanations are incomplete, but reasoners think some explanations are more complete than others. To explain this behavior, we propose a novel theory of how people assess explanatory incompleteness. The account assumes that reasoners represent explanations as causal mental models – iconic representations of possible arrangements of causes and effects. A complete explanation refers to a single integrated model, whereas an incomplete explanation refers to multiple models. The theory predicts that if there exists an unspecified causal relation – a gap – anywhere within an explanation, reasoners must maintain multiple models to handle the gap. They should treat such explanations as less complete than those without a gap. Four experiments provided participants with causal descriptions, some of which yield one explanatory model, e.g., A causes B and B causes C, and some of which demand multiple models, e.g., A causes X and B causes C. Participants across the studies preferred one-model descriptions to multiple-model ones on tasks that implicitly and explicitly required them to assess explanatory completeness. The studies corroborate the theory. They are the first to reveal the mental processes that underlie the assessment of explanatory completeness. We conclude by reviewing the theory in light of extant accounts of causal reasoning.

and here’s the paper.
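To illustrate the one-model versus multiple-model distinction, here’s a toy sketch in Python. It’s my own simplification, not the paper’s procedure: it treats a causal description as a graph over the terms mentioned and counts connected components, so “A causes B and B causes C” yields one integrated model, while “A causes X and B causes C” contains a gap and yields two.

```python
from collections import defaultdict

def count_models(assertions):
    """Count connected components of the causal graph induced by
    (cause, effect) pairs. One component roughly corresponds to one
    integrated explanatory model; more components indicate a gap."""
    adjacency = defaultdict(set)
    nodes = set()
    for cause, effect in assertions:
        adjacency[cause].add(effect)
        adjacency[effect].add(cause)
        nodes.update((cause, effect))
    seen, components = set(), 0
    for node in nodes:
        if node in seen:
            continue
        components += 1
        stack = [node]                 # breadth/depth of search is immaterial
        while stack:
            current = stack.pop()
            if current in seen:
                continue
            seen.add(current)
            stack.extend(adjacency[current] - seen)
    return components

print(count_models([("A", "B"), ("B", "C")]))  # 1: one integrated model
print(count_models([("A", "X"), ("B", "C")]))  # 2: a gap, multiple models
```

The theory’s prediction is then that descriptions scoring 1 here should be judged more complete than those scoring higher, which is the pattern the four experiments found.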

NOV 23, 2020

QJEP paper on domino effects in causation

Our paper on domino effects in causation is now out in the Quarterly Journal of Experimental Psychology — the paper shows that when you contradict a node in a causal chain (e.g., A in A caused B and B caused C), the rest of the chain topples like falling dominoes. The results support the idea that people construct simulations when they reason about conflicts instead of revising their knowledge in a minimal way.

The paper is available for download here, and here’s the abstract:

Inconsistent beliefs call for revision—but which of them should individuals revise? A long-standing view is that they should make minimal changes that restore consistency. An alternative view is that their primary task is to explain how the inconsistency arose. Hence, they are likely to violate minimalism in two ways: they should infer more information than is strictly necessary to establish consistency and they should reject more information than is strictly necessary to establish consistency. Previous studies corroborated the first effect: reasoners use causal simulations to build explanations that resolve inconsistencies. Here, we show that the second effect is true too: they use causal simulations to reject more information than is strictly necessary to establish consistency. When they abandon a cause, the effects of the cause topple like dominos: Reasoners tend to deny the occurrence of each subsequent event in the chain. Four studies corroborated this prediction.
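The domino prediction itself is simple enough to state in a few lines of code. This toy Python sketch (my own illustration, not the paper’s materials) returns the events a reasoner should tend to deny once one event in a causal chain is contradicted:

```python
def topple(chain, contradicted):
    """Given a causal chain of events, e.g., ['A', 'B', 'C'] for
    'A caused B and B caused C', and a contradicted event, return
    the events reasoners tend to deny: the contradicted event and
    everything downstream of it in the chain."""
    start = chain.index(contradicted)
    return chain[start:]

print(topple(["A", "B", "C"], "A"))  # ['A', 'B', 'C']: the whole chain falls
print(topple(["A", "B", "C"], "B"))  # ['B', 'C']: only downstream events fall
```

A minimal revision would deny only the contradicted event; the studies show that reasoners instead behave like this function, rejecting the downstream effects too.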

SEP 1, 2020

New JoCN paper on temporal reasoning and durations

A new paper by Laura Kelly, Phil Johnson-Laird, and me (link) describes systematic reasoning errors people make when they assess the consistency of durative temporal relations, as in the sentence “the sale happened during the convention.” The paper is out in the Journal of Cognitive Neuroscience’s special issue “Mental Models in Time,” edited by Virginie van Wassenhove. Here’s the abstract:

A set of assertions is consistent provided they can all be true at the same time. Naive individuals could prove consistency using the formal rules of a logical calculus, but it calls for them to fail to prove the negation of one assertion from the remainder in the set. An alternative procedure is for them to use an intuitive system (System 1) to construct a mental model of all the assertions. The task should be easy in this case. However, some sets of consistent assertions have no intuitive models and call for a deliberative system (System 2) to construct an alternative model. Formal rules and mental models therefore make different predictions. We report three experiments that tested their respective merits. The participants assessed the consistency of temporal descriptions based on statements using “during” and “before.” They were more accurate for consistent problems with intuitive models than for those that called for deliberative models. There was no robust difference in accuracy between consistent and inconsistent problems. The results therefore corroborated the model theory.
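As an aside, the consistency of “during”/“before” descriptions can be checked mechanically by translating each relation into strict orderings over event start and end points and testing the resulting precedence graph for cycles. The sketch below is my own Python illustration, not the model-based algorithm the participants are hypothesized to use, and it assumes “X during Y” means Y strictly brackets X:

```python
from collections import defaultdict

def consistent(assertions):
    """Decide whether assertions like (x, 'before', y) or
    (x, 'during', y) can all be true at once. Each relation becomes
    strict orderings over (start, event) / (end, event) points; the
    set is consistent iff the precedence graph is acyclic."""
    edges = defaultdict(set)           # u -> v means point u precedes point v
    events = set()
    for x, relation, y in assertions:
        events.update((x, y))
        if relation == "before":       # x ends before y starts
            edges[("end", x)].add(("start", y))
        elif relation == "during":     # y strictly brackets x
            edges[("start", y)].add(("start", x))
            edges[("end", x)].add(("end", y))
    for event in events:               # every event starts before it ends
        edges[("start", event)].add(("end", event))
    # Kahn-style cycle test: repeatedly remove points with no predecessors.
    nodes = set(edges) | {v for targets in edges.values() for v in targets}
    indegree = {n: 0 for n in nodes}
    for targets in edges.values():
        for v in targets:
            indegree[v] += 1
    queue = [n for n in nodes if indegree[n] == 0]
    removed = 0
    while queue:
        node = queue.pop()
        removed += 1
        for v in edges[node]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return removed == len(nodes)       # all points removed: no cycle

print(consistent([("sale", "during", "convention")]))          # True
print(consistent([("a", "before", "b"), ("b", "before", "a")]))  # False
```

This check says only whether a consistent model exists; the paper’s point is about the process, since problems whose first, intuitive model already satisfies all the assertions are easier for people than consistent problems that force a search for an alternative model.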

AUG 20, 2020

ICYMI: The R Lab @ CogSci 2020

The Reasoning Lab presented work at the CogSci 2020 virtual conference this year, including research on temporal reasoning, reasoning about desire, genericity and semantic memory, teleology and agency, and quantification. Here’s an archive of the presentations:

JUL 25, 2020

Talk on interactivity at RSS 2020

I gave a talk titled “Leveraging conceptual constraints for interactive robotics” at an RSS 2020 workshop, AI & Its Alternatives in Assistive and Collaborative Robotics: Decoding Intent, organized by Deepak Gopinath, Ola Kalinowska, Mahdieh Nejati, Katarina Popovic, Brenna Argall, and Todd Murphey.

Here’s the video of the talk:

JUL 8, 2020

Lab alum Zach Horne starting at the University of Edinburgh in 2021

Zach Horne will join the faculty at the University of Edinburgh in the winter semester, 2021. Congratulations, Zach!