New paper on why machines can’t reason yet

A major failure of current AI systems is that they can’t mimic common-sense reasoning: most ML systems don’t reason, and theorem provers draw trivial and silly deductions. We analyze why — and suggest a path forward — in a new paper now out in the German AI journal Künstliche Intelligenz:

AI has never come to grips with how human beings reason in daily life. Many automated theorem-proving technologies exist, but they cannot serve as a foundation for automated reasoning systems. In this paper, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic, and the formulation of nonmonotonic systems of reasoning. We then describe why human reasoning cannot be simulated by current machine reasoning or deep learning methodologies. People can generate inferences on their own instead of just evaluating them. They use strategies and fallible shortcuts when they reason. The discovery of an inconsistency does not result in an explosion of inferences—instead, it often prompts reasoners to abandon a premise. And the connectives they use in natural language have different meanings than those in classical logic. Only recently have cognitive scientists begun to implement automated reasoning systems that reflect these human patterns of reasoning. A key constraint of these recent implementations is that they compute, not proofs or truth values, but possibilities.
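The abstract’s closing claim — that these systems compute possibilities rather than proofs or truth values — can be illustrated with a toy sketch. This is my own illustration, not code from the paper: a possibility is just an assignment of truth values, a connective like “or” denotes a set of such possibilities, and an inference is drawn by discarding possibilities that conflict with new information.

```python
# Toy illustration of reasoning over possibilities ("mental models")
# rather than over proofs or truth values. The representation here
# (dicts mapping propositions to truth values) is a hypothetical gloss.

def models_or(a, b):
    """Possibilities for an inclusive disjunction 'a or b'."""
    return [{a: True, b: False}, {a: False, b: True}, {a: True, b: True}]

def update(possibilities, fact, value):
    """Discard possibilities inconsistent with new information."""
    return [p for p in possibilities if p.get(fact) == value]

def follows(possibilities, prop):
    """A proposition follows if it holds in every remaining possibility."""
    return bool(possibilities) and all(p[prop] for p in possibilities)

ps = models_or("raining", "snowing")
ps = update(ps, "raining", False)   # learn: it is not raining
print(follows(ps, "snowing"))       # True: 'snowing' holds in every remaining possibility
```

Note that nothing here resembles a proof: the conclusion falls out of which possibilities survive, which is the shape of the recent implementations the abstract describes.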


Recent research featured by the Psychonomic Society

Thomas the loop engine: Learning to program computers with a toy train

Anja Jamrozik recently featured our latest paper in Memory & Cognition on the Psychonomic Society website; check it out!



Postdoc positions in cognitive science at NRL

[Edit 2018-10-16: My lab is looking for a new postdoc interested in studying epistemic reasoning!]

I’m currently seeking applicants for multiple postdoctoral positions to collaborate on ongoing initiatives, including (but not limited to):

  • Testing a unified computational framework of reasoning
  • [New!] Studying how people reason about epistemics, i.e., knowledge and belief
  • Studying how people engage in explanatory reasoning
  • Studying how people reason about time, space, and spatiotemporal relations
  • Studying how people can extract causal relations from visual input

The postdoc will develop his or her own research program in addition to working with me and Greg Trafton at the Navy Center for Applied Research in Artificial Intelligence at NRL’s Washington, DC headquarters. The position will involve building computational models, designing and running studies, and conducting data analysis.

The ideal candidate has (or will have) a Ph.D. in cognitive psychology, cognitive science, or computer science, with experience in higher level cognition, experimental design and data analysis, or cognitive modeling. Postdocs will be hired through the NRC Research Associateship Program. Only US citizens and green card holders are eligible for the program.

The Intelligent Systems Section at the Navy Center for Applied Research in Artificial Intelligence is devoted to basic and applied research in human cognition. The lab is interdisciplinary and focuses on cognitive science, reasoning, cognitive robotics and human-robot interaction, procedural errors, spatial cognition, object recognition, memory, and categorization.

Interested applicants should contact me for inquiries and more information.


Paper on omissive causes in Memory & Cognition

When something happens because something else didn’t occur, it’s called “omissive causation” — like when your phone dies because you didn’t charge it. Our new theory in Memory & Cognition predicts how people mentally simulate omissions. It predicts that people should prioritize possibilities corresponding to mental models of omissive causal relations, and that they should be able to distinguish between omissive causes, omissive enabling conditions, and omissive preventers. Here’s the paper itself, here’s a link to the OSF page, and here’s the abstract:
Some causal relations refer to causation by commission (e.g., A gunshot causes death), and others refer to causation by omission (e.g., Not breathing causes death). We describe a theory of the representation of omissive causation based on the assumption that people mentally simulate sets of possibilities—mental models—that represent causes, enabling conditions, and preventions (Goldvarg & Johnson-Laird, 2001). The theory holds that omissive causes, enabling conditions, and preventions each refer to distinct sets of possibilities. For any such causal relation, reasoners typically simulate one initial possibility, but they are able to consider alternative possibilities through deliberation. These alternative possibilities allow them to deliberate over finer-grained distinctions when reasoning about causes and effects. Hence, reasoners should be able to distinguish between omissive causes and omissive enabling conditions. Four experiments corroborated the predictions of the theory. We describe them and contrast the results with the predictions of alternative accounts of causal representation and inference.
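The possibility-based distinctions in the abstract can be sketched concretely. The particular model sets below are my reconstruction of the Goldvarg & Johnson-Laird (2001) analysis, not code or notation from the paper: a causal relation is a set of possibilities over antecedent and effect, and an omissive relation negates the antecedent throughout.

```python
# Hypothetical gloss: each possibility is a pair (antecedent, effect)
# of truth values, and a causal relation is the set of possibilities
# it permits.
CAUSES  = {(True, True), (False, True), (False, False)}   # "A causes B": rules out A without B
ENABLES = {(True, True), (True, False), (False, False)}   # "A enables B": rules out B without A

def omissive(relation):
    """'not-A causes/enables B': negate the antecedent in each possibility."""
    return {(not a, b) for (a, b) in relation}

# "Not charging causes the phone to die" rules out the one possibility
# in which you don't charge the phone and it doesn't die:
assert (False, False) not in omissive(CAUSES)
```

Because the omissive cause and the omissive enabler denote different sets, the sketch captures why reasoners who deliberate over alternative possibilities can tell them apart.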

Paper on reasoning about facts and possibilities out in Cognitive Science

I have a new paper out in Cognitive Science with Ruth Byrne and Phil Johnson-Laird. We developed a theory of “sentential reasoning”, which is the sort of reasoning that occurs when you think about sentences connected by words such as “and”, “or”, or “if.” Cognitive theories have yet to explain the process by which people reason about sentences that concern facts and possibilities, and so we built a theory around the idea that humans reason about sentences by simulating the world around them as sets of possibilities. We designed a computational model and applied it to explain some recent data on sentential reasoning.

The paper is available for download here, and here’s the abstract:

This article presents a fundamental advance in the theory of mental models as an explanation of reasoning about facts, possibilities, and probabilities. It postulates that the meanings of compound assertions, such as conditionals (if) and disjunctions (or), unlike those in logic, refer to conjunctions of epistemic possibilities that hold in default of information to the contrary. Various factors such as general knowledge can modulate these interpretations. New information can always override sentential inferences; that is, reasoning in daily life is defeasible (or nonmonotonic). The theory is a dual-process one: It distinguishes between intuitive inferences (based on system 1) and deliberative inferences (based on system 2). The article describes a computer implementation of the theory, including its two systems of reasoning, and it shows how the program simulates crucial predictions that evidence corroborates. It concludes with a discussion of how the theory contrasts with those based on logic or on probabilities.
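The dual-process distinction can be sketched in miniature. This is my own gloss, not the paper’s program: system 1 entertains only an initial model of a conditional, while system 2 fleshes out the full set of epistemic possibilities, which is why an inference like modus tollens (“if a then b; not b; so not a”) tends to require deliberation.

```python
# Sketch (my gloss, not the paper's implementation) of "if a then b"
# under the two systems of reasoning.

def initial_models():
    """System 1: only the salient possibility is represented explicitly."""
    return [{"a": True, "b": True}]

def fleshed_models():
    """System 2: the full set of default epistemic possibilities."""
    return [{"a": True, "b": True}, {"a": False, "b": True}, {"a": False, "b": False}]

def infer(models, fact, value, query):
    """Apply a categorical premise; return the query's value if it is definite."""
    remaining = [m for m in models if m.get(fact) == value]
    vals = {m.get(query) for m in remaining}
    return vals.pop() if len(vals) == 1 else None  # None: no definite conclusion

# Modus tollens ("not b, therefore not a") needs the fleshed-out models:
print(infer(initial_models(), "b", False, "a"))   # None: system 1 draws a blank
print(infer(fleshed_models(), "b", False, "a"))   # False: system 2 concludes not-a
```

The same machinery makes modus ponens immediate for both systems, since “a” picks out the lone initial model directly.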

If you want to check out any of the computational modeling code or the data for it, they’re available via OSF.


New theory of teleological generalizations in CogSci 2018

Joanna Korman will be presenting our theory of how people understand “teleological generalizations” at CogSci 2018 in Madison, WI later this year. Teleological generalizations are statements that cite the purpose or function of something, e.g., “Forks are for eating.” We sought to tackle the mystery of why some teleological generalizations make sense while others don’t: for example, “forks are for washing” seems like a silly generalization to make, even though you wash forks just as often as you eat with them (hopefully).
To preview our solution, here’s the abstract of the paper:

Certain generalizations are teleological, e.g., forks are for eating. But not all properties relevant to a particular concept permit teleological generalization. For instance, forks get washed roughly as often as they’re used for eating, yet the generalization, forks are for washing, might strike reasoners as unacceptable. What explains the discrepancy? A recent taxonomic theory of conceptual generalization (Prasada, 2017; Prasada & Dillingham, 2006; Prasada et al., 2013) argues that certain kinds of conceptual connections – known as “principled” connections – license generalizations, whereas associative, “statistical” connections license only probabilistic expectations. We apply this taxonomy to explain teleological generalization: it predicts that acceptable teleological generalizations concern concept-property pairs in which the concept bears a principled connection to a property. Under this analysis, the concept fork bears a principled connection to eating and a statistical connection to washing. Two experiments and a regression analysis tested and corroborated the predictions of the theory.

You can read the full paper here, and here are our data and analyses.


CFP: Stockholm Workshop on Human + Automated Reasoning

Interested in reasoning? The psychologists who study how humans do it don’t talk much with the scientists who get computers to do it. We’re trying to fix that: we put together a workshop that bridges the communities that study human and machine reasoning. Here are the details:

Full Paper submission deadline: 25th of April, 2018
Notification: 3rd of June, 2018
Final submission: 17th of June, 2018
Workshop: July 2018 at FAIM in Stockholm, Sweden
Read more about it:
Submit your papers here:


HRI 2018 presentation on explanatory biases + deep learning

My colleague, Esube Bekele, recently presented our research on integrating deep learning (specifically, a person re-identification network) with an explanatory bias known as the “inherence bias”. The work was featured in the “Explainable Robotics Systems” workshop at HRI 2018. Here’s the paper, and here’s the abstract:

Despite the remarkable progress in deep learning in recent years, a major challenge for present systems is to generate explanations compelling enough to serve as useful accounts of the system’s operations [1]. We argue that compelling explanations are those that exhibit human-like biases. For instance, humans prefer explanations that concern inherent properties instead of extrinsic influences. The bias is pervasive in that it affects the fitness of explanations across a broad swath of contexts [2], particularly those that concern conflicting or anomalous observations. We show how person re-identification (re-ID) networks can exhibit an inherence bias. Re-ID networks operate by computing similarity metrics between pairs of images to infer whether the images display the same individual. State-of-the-art re-ID networks tend to output a description of a particular individual, a similarity metric, or a discriminative model [3], but no existing re-ID network provides an explanation of its operations. To address the deficit, we developed a multi-attribute residual network that treats a subset of its features as either inherent or extrinsic, and we trained the network against the ViPER dataset [4]. Unlike previous systems, the network reports a judgment paired with an explanation of that judgment in the form of a description. The descriptions concern inherent properties when the network detects dissimilarity and extrinsic properties when it detects similarity. We argue that such a system provides a blueprint for how to make the operations of deep learning techniques comprehensible to human operators.
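The explanatory logic the abstract describes can be sketched apart from the network itself. This is a minimal illustration, not the actual re-ID system: the attribute names, the inherent/extrinsic split, and the similarity threshold below are all hypothetical stand-ins for what the trained network computes.

```python
# Minimal sketch of the inherence-biased explanation scheme described
# above. The attributes and threshold are hypothetical placeholders;
# in the real system a deep network supplies them.
INHERENT  = {"gender", "hair_color", "build"}      # properties of the person
EXTRINSIC = {"backpack", "jacket_color", "pose"}   # properties of the situation

def explain(shared, differing, similarity, threshold=0.5):
    """Return a judgment paired with a description-style explanation."""
    if similarity >= threshold:
        # Similar pair: cite extrinsic properties the two images share
        cited = sorted(shared & EXTRINSIC)
        return "same person; both show: " + ", ".join(cited)
    # Dissimilar pair: cite inherent properties that differ
    cited = sorted(differing & INHERENT)
    return "different people; they differ in: " + ", ".join(cited)

print(explain({"backpack"}, {"hair_color", "pose"}, similarity=0.2))
# different people; they differ in: hair_color
```

The point of the sketch is the asymmetry: dissimilarity judgments are explained by inherent properties and similarity judgments by extrinsic ones, mirroring the bias the paper builds into the network’s descriptions.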


Talk on omissions at the Duke U. Workshop on Causal Reasoning

I gave a talk on omissive causation at the Workshop on Causal Reasoning, which was put together by Felipe de Brigard. The talk focused on recent collaborative work on how people represent and reason with omissive causes, e.g., “Not watering the plants caused them to die.” You can check out the slides here.


New chapter summarizing state-of-the-art research on reasoning

Do you know absolutely nothing about reasoning? Wanna fix that? I have a new chapter out in the Stevens’ Handbook that summarizes the latest and greatest research on reasoning and higher-level cognition. Here’s the link.