Reason: Probability, Determinism, and Judgment in Measurement
I wanna git me one of them weights!
Author’s Preface
The topic of probability is often treated as if it were a universal tool, applicable without qualification to any domain. Yet probability only makes sense when the conditions of its use are made explicit. The following essay reflects on the difference between deterministic systems, where outcomes can be regarded as certain, and real-world applications, where probability requires decisions about what is being measured and how. The example of cracking a walnut with a hammer illustrates how an outcome can shift from certainty to uncertainty depending on context and judgment. The essay continues the Reason series by focusing on the limits of probabilistic reasoning and the inescapable role of human judgment in defining what is to be measured.
Introduction
Probability theory has become one of the dominant ways of reasoning about the world. It is employed in science, engineering, economics, and psychology. Yet its foundations rest on assumptions that are often overlooked. In deterministic systems—highly stable systems governed by fixed rules—probability is unnecessary, because the outcome is certain. In applying probability theory to real-world situations, choices must be made about what aspect of the phenomenon is being treated probabilistically. This decision is never automatic; it requires human judgment.
This essay will explore three themes: (1) the irrelevance of probability in deterministic systems, (2) the introduction of probabilistic treatment in borderline or complex cases, and (3) the central role of judgment in measurement and counting.
Discussion
Deterministic Systems and Certainty
In some systems, the outcome is guaranteed. A deterministic system is one in which the same initial conditions always produce the same result. When a hammer is applied with sufficient force to a walnut, breakage is not a matter of chance but of certainty. Here, the probability of breakage, p, is equal to 1. Probability adds nothing to the description, because the system is stable and the outcome fixed.
Determinism underlies much of classical physics, where calculations can predict motion, impact, and outcomes with precision. In such cases, invoking probability is redundant. Probability is only meaningful when uncertainty or variation is present.
Introducing Probability in Borderline Cases
Consider again the walnut. When the hammer strike is reduced to just enough force to sometimes break the shell, the outcome is no longer certain. In this case, the probability of breakage might be estimated at p = 0.5. Now, chance is introduced not because the universe has changed its nature, but because the situation has become sensitive to small variations in force, angle, and position.
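This sensitivity can be made concrete with a small simulation. The sketch below is a toy model built entirely on assumptions: the shell is imagined to break whenever the delivered force exceeds a fixed breaking strength, and small random variations in the swing stand in for differences in force, angle, and position. The numbers are illustrative, not measurements of real walnuts.

    import random

    # Toy threshold model: every number here is an assumption for illustration.
    BREAKING_STRENGTH = 10.0   # force needed to crack the shell
    NOMINAL_FORCE = 10.0       # the "just enough" hammer strike
    VARIATION = 1.0            # spread from small differences in swing, angle, position

    def strike_breaks_shell(rng: random.Random) -> bool:
        """One strike: the delivered force is the nominal force plus random variation."""
        delivered = NOMINAL_FORCE + rng.gauss(0.0, VARIATION)
        return delivered > BREAKING_STRENGTH

    def estimate_breakage_probability(trials: int = 10_000, seed: int = 0) -> float:
        """Estimate p as the fraction of simulated strikes that break the shell."""
        rng = random.Random(seed)
        breaks = sum(strike_breaks_shell(rng) for _ in range(trials))
        return breaks / trials

    if __name__ == "__main__":
        # With the nominal force sitting exactly at the threshold, roughly half
        # of the strikes break the shell, so the estimate lands near p = 0.5.
        print(estimate_breakage_probability())

Raising NOMINAL_FORCE well above BREAKING_STRENGTH drives the estimate back toward 1, the deterministic case of the previous subsection; the shift from certainty to chance reflects where the strike sits relative to the threshold, not a change in probability theory itself.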
In this domain, probability theory begins to appear useful. Yet the question arises: what aspect is being treated probabilistically? One could consider the likelihood of breakage itself. Alternatively, one could look at the distribution of fragments after breakage. The latter introduces multiple variables: the distance each fragment travels, its angle of projection, or even a calculated measure of dispersion. Each choice frames the phenomenon differently, and each requires judgment about what is significant enough to measure.
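How much hangs on that framing can be seen in a second toy sketch of a single strike. Everything in it is assumed for illustration: the number of fragments, the distances they travel, and the particular dispersion measure (a plain standard deviation) are choices made by the modeller, not facts supplied by the walnut.

    import math
    import random

    def simulated_fragments(rng: random.Random) -> list[float]:
        """Hypothetical fragment distances (in cm) from the point of impact.
        The count and the spread are invented for illustration."""
        count = rng.randint(2, 8)
        return [abs(rng.gauss(15.0, 10.0)) for _ in range(count)]

    def dispersion(distances: list[float]) -> float:
        """One possible dispersion measure: the standard deviation of the distances."""
        mean = sum(distances) / len(distances)
        return math.sqrt(sum((d - mean) ** 2 for d in distances) / len(distances))

    if __name__ == "__main__":
        rng = random.Random(1)
        distances = simulated_fragments(rng)
        # The same strike can be summarized in several ways; each line below is a
        # different choice of what to treat probabilistically.
        print("fragment count:    ", len(distances))
        print("farthest fragment: ", round(max(distances), 1), "cm")
        print("dispersion (std):  ", round(dispersion(distances), 1), "cm")

Whether one reports the count, the farthest fragment, or the dispersion is not dictated by the data; it is exactly the kind of judgment discussed next.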
The Judgment Inherent in Measurement
All measurement involves judgment. Even in apparently simple cases, one must decide what counts as a “break” or what qualifies as a “fragment.” In statistical practice, decisions are required about what to include, what to ignore, and how to code results. These are not purely technical matters but involve human choice.
Counting itself is not exempt. To count a collection of items is to decide what is being treated as one unit. In the walnut example, is one fragment a countable item, or does it only count if it reaches a certain size? These decisions influence the data, and therefore the probabilities, that emerge.
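The dependence of the count on the counting rule can be written down directly. In the sketch below, the list of debris sizes and the candidate thresholds are invented for the example; the point is only that the resulting count, and therefore any probability built on it, changes with the rule.

    def count_fragments(sizes_mm: list[float], minimum_size_mm: float) -> int:
        """Count only the pieces that the chosen rule treats as fragments.
        The minimum size is a judgment, not something the debris dictates."""
        return sum(1 for size in sizes_mm if size >= minimum_size_mm)

    if __name__ == "__main__":
        # Hypothetical debris from one strike, measured in millimetres.
        pieces = [22.0, 14.5, 9.0, 6.5, 3.0, 1.2, 0.8]

        # The same debris yields different data under different counting rules.
        for threshold in (0.0, 2.0, 5.0, 10.0):
            print(f"minimum size {threshold:>4} mm -> "
                  f"{count_fragments(pieces, threshold)} fragments")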
Probability does not float free as a neutral description of reality. It depends on the judgments that define what is being counted, measured, or observed. This judgment is not an afterthought but a constitutive part of probabilistic reasoning.
Broader Implications
The walnut example is trivial, but the principle extends to all applications of probability. In medicine, economics, or psychology, the question of what is being measured is even more contested. Probability may be used to estimate treatment effects, market risks, or behavioral outcomes, but all these rely on prior judgments about what counts as an outcome, what counts as a success, and what variables are ignored.
Thus, probabilistic reasoning always contains an element of construction. It does not simply reveal probabilities “out there” in the world but constructs them based on human decisions about what is to be measured and how.
Summary
Deterministic systems do not require probability, since outcomes are certain. In real-world cases where uncertainty arises, probability is invoked, but always with reference to decisions about what aspect of the situation is being measured. The walnut example illustrates how certainty gives way to probability as systems become more variable. At the core of all probabilistic reasoning is judgment: the choice of what to count, what to measure, and how to interpret outcomes. Far from being a neutral tool, probability rests on human decisions that shape its application and meaning.
Readings
Box, G. E. P., & Draper, N. R. (1987). Empirical model-building and response surfaces. Wiley.
— A classic, plain-spoken guide to building and checking models against data, emphasizing that models must be judged in practice, not assumed to fit universally.
Cartwright, N. (1983). How the laws of physics lie. Oxford University Press.
— Explores how scientific laws hold only under carefully controlled conditions and fail in open, complex systems, directly relevant to questions of probability and determinism.
Hacking, I. (1975). The emergence of probability. Cambridge University Press.
— A historical treatment of how probability theory developed and how it shifted from games of chance to applications in science and philosophy.
Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge University Press.
— A comprehensive account of probability theory as an extension of logic, with a focus on the choices and assumptions underlying its use.

