Reason Misidentified: How Formalisms Have Distorted the Study of Thought
I am returning to a theme that is becoming clearer and clearer to me: what reasoning is not, rather than what it is.
Introduction
A long tradition in philosophy, logic, statistics, and cognitive science has made repeated attempts to define and formalize reasoning. Yet much of this tradition has misrepresented the very nature of reasoning by substituting rigid formal tools for the far more fluid, ambiguous, and complex operations that actually characterize human thought. Deductive logic, symbolic systems, statistical modeling, and certain experimental paradigms have been mistaken for reasoning itself—when in reality they are narrow technical instruments or stylized reflections of specific mental activities. The effect of this misidentification has been to obscure the real contours of thought, flatten its richness, and entrench a host of conceptual confusions.
This essay argues that reasoning is not well captured by deduction or probability taken as primitives. Rather, reasoning emerges from recursive processes grounded in cognitive mechanisms such as pattern recognition, generalization, abstraction, contextual interpretation, and countless other operations. These are not reducible to formal rules or statistical models, nor do they lend themselves to modular definitions. Any attempt to describe reasoning must acknowledge the entangled, recursive nature of mental operations and resist the temptation to confuse the tools of analysis with the phenomenon being analyzed.
I. The Misplaced Centrality of Deductive Logic
The most persistent and misleading conflation is that of reasoning with deductive logic. This view—entrenched across millennia and still visible in philosophy, law, and artificial intelligence—treats syllogisms, modus ponens, modus tollens, and symbolic calculus as models of reasoning itself. While deductive logic can appear within acts of reasoning, it is not the act itself.
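For concreteness, the two named schemata can be written in standard inference-rule notation (a generic formulation, not tied to any particular author):

```latex
% Modus ponens and modus tollens in standard inference-rule notation.
\[
\text{Modus ponens: } \frac{P \rightarrow Q \qquad P}{Q}
\qquad\qquad
\text{Modus tollens: } \frac{P \rightarrow Q \qquad \neg Q}{\neg P}
\]
```

Everything here is fixed in advance: the premises, the connective, and the licensed conclusion.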
Deductive logic functions only under strict assumptions: fixed premises, well-defined terms, and a closed system. Real-world reasoning almost never meets these conditions. Instead, it copes with vagueness, shifting contexts, incomplete knowledge, and conflicting goals. Formal logic is useful in mathematics and certain theoretical disciplines, but it bears little resemblance to how humans interpret a situation, deliberate about actions, or understand new information.
Even where logic seems present on the surface, as in legal discourse, the appearance misleads. Legal reasoning is not deductive: it depends on judgment, precedent, competing interpretations, and normative balancing. Attempts to render it deductive gloss over the essential ambiguity and evaluative character of legal decision-making. The supposed use of deductive logic in law is better understood as a rhetorical frame than a cognitive process.
II. Calculated Numbers Are Not Belief
Another significant category error lies in the treatment of Bayesian statistics as a model of subjective belief. Bayesian reasoning involves mathematical procedures for updating probabilities given new data. It is rule-governed, consistent, and repeatable. Belief, by contrast, is biologically grounded, context-dependent, emotionally colored, and behaviorally unstable.
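A minimal sketch of what that procedure amounts to (the numbers are illustrative, not drawn from the essay) makes the contrast vivid: given the same prior and likelihoods, the posterior is identical for every reasoner, every time.

```python
# A minimal sketch of Bayesian updating (illustrative numbers only).
# The point: the calculation is mechanical and repeatable -- the same
# inputs always yield the same posterior, which belief does not.

def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior P(H|E) for a hypothesis H via Bayes' theorem."""
    evidence = likelihood * prior + likelihood_alt * (1.0 - prior)
    return likelihood * prior / evidence

# P(H) = 0.30, P(E|H) = 0.80, P(E|~H) = 0.10 -- arbitrary illustrative values.
posterior = bayes_update(prior=0.30, likelihood=0.80, likelihood_alt=0.10)
print(round(posterior, 3))  # 0.774, for every reasoner, every time
```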
To equate a posterior probability with a “degree of belief” is to misunderstand both belief and number. People hold inconsistent beliefs, resist contrary evidence, and fail to update their views rationally. They may express a probability judgment while simultaneously acting as if it were false. Belief is a creature of culture, psychology, and biology—not a number.
Bayesian outputs may inform belief. But they are not belief. The idea that probability is the right model for belief is not a discovery but a philosophical preference, rooted in formalism and unsupported by the behavior of actual minds.
III. Syntax and Semantics Are Inseparable
Formal logic and computational linguistics often treat syntax as distinct from semantics. Syntax, in this view, is the structure of expressions, while semantics is their meaning. This separation is technically convenient but conceptually flawed. Syntax does not exist independently. It is always framed, interpreted, and constructed in light of semantic content.
Patterns of logic—like those used in propositional calculus—are not themselves syntactic in the way this tradition assumes. They are imbued with meaning at every level: in the choice of symbols, in the interpretation of connectives, and in the purpose of the system itself. One cannot even invent or apply such a system without semantic assumptions.
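The point can be made concrete with a toy propositional evaluator (a sketch of my own, representing formulas as nested tuples, not anything from the essay): the "purely syntactic" rule of modus ponens can be pattern-matched blindly, but showing that it is truth-preserving requires semantic clauses.

```python
# A toy propositional evaluator. Modus ponens can be matched as bare syntax,
# but its truth-preservation only exists relative to the semantic clauses
# in evaluate() -- the interpretation of the connectives.

from itertools import product

def evaluate(formula, assignment):
    """Compute a formula's truth value under an assignment: the semantic step."""
    op, *args = formula
    if op == "var":
        return assignment[args[0]]
    if op == "not":
        return not evaluate(args[0], assignment)
    if op == "implies":
        return (not evaluate(args[0], assignment)) or evaluate(args[1], assignment)
    raise ValueError(f"unknown connective: {op}")

# Modus ponens as bare syntax: from P and (P -> Q), infer Q.
p, q = ("var", "p"), ("var", "q")
conditional = ("implies", p, q)

# Checking truth-preservation over all assignments is impossible without semantics.
for vp, vq in product([True, False], repeat=2):
    env = {"p": vp, "q": vq}
    if evaluate(p, env) and evaluate(conditional, env):
        assert evaluate(q, env)  # Q holds whenever both premises do
print("modus ponens is truth-preserving under these semantic clauses")
```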
In practice, any meaningful use of logical patterns requires semantics. The idea that we can manipulate symbols without reference to meaning is a fiction. Language is not separable from thought, and structure is not more foundational than content. Syntax may be the scaffolding, but meaning is the building.
IV. Soundness Pulls Logic Back into Cognition
Deductive logic, when considered as a self-contained system, separates validity (structural consistency) from soundness (truth and meaning of premises). But soundness inevitably returns us to the domain of cognition, judgment, and context. Determining whether a premise is true requires observation, interpretation, and background knowledge. This means that even the most rigid logical system cannot stand apart from the rest of mental life.
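A textbook-style illustration of the gap (my example, not the essay's):

```latex
% Formally valid, yet unsound: the conclusion follows, but a premise is false.
\[
\frac{\text{All fish can fly.} \qquad \text{Salmon are fish.}}
     {\therefore\ \text{Salmon can fly.}}
\]
```

The form is impeccable; the failure lies entirely in the first premise, and detecting that failure is an empirical judgment, not a deductive one.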
In evaluating soundness, one engages not in deduction but in the full suite of human reasoning: pattern recognition, memory, expectation, analogy, evidence weighing, and more. This further undermines the idea that logic can be a self-sufficient model of reasoning. It must borrow its premises from the very cognitive operations it cannot formalize.
V. Induction Is Not Probability: It Is Generalization
David Hume, and many who followed him, contributed to the confusion by discussing induction, which on closer inspection seems to be generalization, and then redefining it in probabilistic terms. But the act of generalizing is not a matter of assigning probabilities. It is a pattern-recognition mechanism: an informal extrapolation based on similarity and repetition.
A child who learns that “dogs bark” does not arrive at this belief through calculation. The belief emerges from repeated observation and categorical inference. Probabilistic reasoning may describe some aspects of uncertainty, but it is not what underlies human generalization.
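Set side by side in standard notation (a schematic contrast, not a claim about mechanism), the difference is visible at a glance:

```latex
% Enumerative generalization (left) versus its probabilistic reconstruction (right).
\[
\frac{Fa_1,\ Fa_2,\ \dots,\ Fa_n}{\text{All (typical) } a \text{ are } F}
\qquad\text{vs.}\qquad
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
\]
```

The step on the left assigns no numbers at all; the reconstruction on the right demands a prior and a likelihood that the child never computes.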
The deeper problem is that generalization, classification, analogy, and abstraction all seem to draw on shared capacities: detecting patterns, comparing features, and abstracting regularities. These are not probabilistic calculations but intuitive, non-numeric acts.
VI. Abduction: Naming the Mystery Without Explaining It
Charles Peirce’s notion of abduction—reasoning to the best explanation—was an attempt to describe how hypotheses arise. But as a theoretical model, abduction is vague and unsatisfying. At most, it acknowledges that people form hypotheses when confronted with puzzling facts; how, why, and under what constraints this occurs remains unclear.
Saying “we form a hypothesis” after encountering a surprising event is descriptive, not explanatory. It is an observation that something happens, not a theory of what the mechanism is. Like analogy, abduction is better treated as an umbrella term for a variety of mental operations rather than a distinct mode of reasoning.
VII. No Cognitive Primitives: Recursion, Not Reduction
Efforts to isolate analogy, generalization, classification, or abstraction as fundamental reasoning types overlook their mutual entanglement. Each seems to presuppose the others. Classification requires generalization. Generalization presupposes similarity assessment. Analogy requires classification and abstraction. There are no clear boundaries.
These operations are not modular. They are recursive, layered, and context-sensitive. This reflects not a failure of terminology but a structural feature of cognition. The brain does not appear to compute by discrete symbolic steps, but by overlapping neural activations, feedback loops, and embodied constraints. The search for clean primitives in cognition may be a dead end.
VIII. Experimental Bias and the Limits of Behavioral Economics
The work of Kahneman and Tversky on heuristics and biases is often cited as showing the irrationality of human reasoning. But their experiments—though clever—rest on peculiar premises and narrow setups. Participants are asked to respond to stylized, artificial questions under conditions that often misrepresent real-world reasoning.
These experiments rely on formal norms—such as Bayesian rationality or expected utility theory—as standards of correct reasoning. When participants depart from these norms, the departure is labeled a bias. But it is not clear that the norms themselves are universally appropriate. Reasoning in real-world settings is shaped by goals, time constraints, partial knowledge, and practical heuristics—not formal optimization.
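Two of the norms most often invoked can be stated compactly (standard formulations; the notation is mine): the conjunction rule of probability and the expected-utility criterion.

```latex
% Conjunction rule: a conjunction is never more probable than either conjunct.
% Expected utility: choose the act maximizing probability-weighted utility.
\[
P(A \wedge B) \le P(A)
\qquad\qquad
EU(a) = \sum_{s} P(s)\, u(a, s)
\]
```

Labeling an answer "biased" means measuring it against formulas like these; the essay's complaint is that the measuring stick, not the reasoner, may be what is out of place.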
Bias exists. But the experimental literature tends to overstate both its frequency and significance, while underplaying the contextual intelligence of everyday reasoning. The Nobel Prize recognition of this work reflects an institutional preference for formalism, not a breakthrough in understanding the nature of thought.
Conclusion
The central error made by generations of scholars is to mistake artifacts for processes, tools for cognition itself. Deductive logic is not reasoning; Bayesian numbers are not belief; syntax cannot be divorced from semantics; generalization is not statistical inference; and analogy is not a cognitive primitive. Thought is recursive, contextual, and embodied. Its mechanisms are not reducible to tidy formalisms or idealized models.
What is needed is not another schema or taxonomy, but a rejection of the idea that reasoning can be captured by formal devices. To understand thought requires accepting its messiness, its interdependence, and its resistance to clean definition. The search for elegance has often led scholars away from insight. It is time to reverse course.

