Reason: (3rd) On Variability, Probability, and the Illusion of Prediction
Why Mathematical Descriptions of Unstable Systems Mislead More Than They Reveal
PART VI – ON CRACKING WALNUTS
Walnuts: Probabilities in a Physical Example
Consider this situation: there is a walnut, a hammer, and a concrete floor. A person swings the hammer. The walnut breaks—probability = 1.
We cannot reasonably imagine a walnut not breaking under these conditions. But immediately this raises a question: what counts as a walnut? What counts as a hammer? What counts as a swing, or a concrete floor? Definitions creep in everywhere.
In practice, we do not define everything. We rely on shared understanding, and our language works well enough without requiring a complete inventory of the universe. For present purposes, we assume a regular setup: an ordinary walnut, an ordinary hammer, an ordinary concrete floor.
Now, suppose the question is not whether the walnut breaks—we know it will—but how it breaks. Where do the fragments go? How large are they? How much do they weigh? At what angles do they scatter? Do they leap into the air, and how far do they travel?
Here probabilistic language becomes useful. We cannot specify the exact location of every fragment in advance, but we can talk about distributions of distances, sizes, or angles. Yet all of this depends on what we choose to measure.
And notice how quickly the certainty collapses if we alter the setup. Suppose the “walnut” is really a steel ball made to look like a walnut. Suppose the hammer is made of foam rubber. Suddenly, the outcome changes. The simple certainty of “probability = 1” evaporates.
The lesson is not that walnuts, hammers, and concrete are special, but that situations must be defined—and those definitions rest on human decisions. What we choose to count, what outcomes we decide to measure, and how we describe them all shape the kind of probabilistic language we can apply. In the case of walnut fragments, one might decide to measure distance from the center. Someone else might decide to measure fragment size or velocity. Each choice defines a different probabilistic frame.
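To make this concrete, here is a minimal sketch in Python. The "walnut" is a made-up random model, not physics; the distributions and parameters are invented purely for illustration. The only point is that the same imaginary crack yields different probabilistic descriptions depending on which quantity we decide to measure.

```python
# Purely illustrative: an invented random model of a walnut crack, not a
# physical simulation. It only shows that each choice of what to measure
# (distance, mass, angle) defines a different probabilistic frame.
import random
import statistics

random.seed(0)

def crack_walnut(n_fragments=12):
    """One imaginary crack: a list of (distance_cm, mass_g, angle_deg) fragments."""
    fragments = []
    for _ in range(n_fragments):
        distance = random.expovariate(1 / 20)    # most pieces land near the centre
        mass = random.lognormvariate(0, 0.8)     # many small bits, a few large chunks
        angle = random.uniform(0, 360)           # scatter direction
        fragments.append((distance, mass, angle))
    return fragments

# Crack 1,000 imaginary walnuts and summarise two different "frames".
cracks = [crack_walnut() for _ in range(1000)]
distances = [d for crack in cracks for d, _, _ in crack]
masses = [m for crack in cracks for _, m, _ in crack]

print("median distance from centre (cm):", round(statistics.median(distances), 1))
print("median fragment mass (g):", round(statistics.median(masses), 2))
# Same imaginary event, but each summary answers a different, human-chosen question.
```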
There could be any number of such decisions. And that is the point: probability never floats free. It attaches to situations we define, with outcomes we decide to count. Even in something as simple as cracking a walnut.
PART VII – ASSUMPTIONS ABOUT ASSUMPTIONS
On the Notion of Meta-Assumptions
I keep wondering whether some other scholars—perhaps better scholars than I am—have already stumbled upon the distinction between assumptions and meta-assumptions. To me, it seems obvious once it is spelled out. Self-evident, even.
And yet, I have not seen it discussed anywhere in what I have read. I had to come to it on my own. Makes my brain ‘urt.
Meta-Assumptions Versus Assumptions
It is important not to confuse meta-assumptions with assumptions.
Assumptions are the conditions specified inside a model—independence, closure, stability, identical distributions, and so on. Meta-assumptions lie one level higher.
The first meta-assumption is that the model actually applies to the world.
The second meta-assumption is that the correct assumptions have been chosen, and that if those assumptions are satisfied (something that is usually only guessed at), then the model will hold.
The third meta-assumption is that if the assumptions are not satisfied, then the model will not hold.
So, three meta-assumptions: one general, two specific.
The Question of Applicability and Meta-Assumptions
The central question is this: can the mathematical language of probability and statistics describe, with any fidelity, the outcomes of unstable, ill-defined, non-repeatable, and confounded situations? I think the jury is still out.
Practitioners have offered their answers. Statisticians have made their claims. They even point to theorems such as the central limit theorem—there are several versions—as proof.
But here is the catch. These results rest on a meta-assumption: that toy models, however rigorously proven they may be within mathematics, actually map onto the real world. That claim is not a proof, but a leap. It is another level of assumption altogether.
That is why I call them meta-assumptions.
In short: mathematics may (or may not) demonstrate the internal coherence of its models. But coherence is not applicability. Applicability rests on meta-assumptions, and those are always in question.
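For what it is worth, here is a small sketch of that internal coherence at work. It shows what a result like the central limit theorem guarantees inside its toy model, assuming independent, identically distributed draws; the distributions and sample sizes are arbitrary. Nothing in the code can say whether any real situation satisfies those assumptions. That is exactly the meta-assumption.

```python
# Inside the toy model, means of independent, identically distributed draws
# settle toward a normal shape. Whether any real-world situation satisfies
# those assumptions is a separate, higher-level claim the code cannot touch.
import math
import random
import statistics

random.seed(1)

def sample_means(draw, n=30, trials=5000):
    """Average n independent draws from `draw`, repeated `trials` times."""
    return [statistics.fmean(draw() for _ in range(n)) for _ in range(trials)]

# A skewed distribution: the i.i.d. assumptions hold here by construction.
means = sample_means(lambda: random.expovariate(1.0))   # exponential: sd of one draw = 1
print("sd of 30-draw means:", round(statistics.stdev(means), 2))   # about 1/sqrt(30), i.e. ~0.18

# Break one assumption (heavy tails, no finite mean) and the guarantee evaporates.
cauchy_means = sample_means(lambda: math.tan(math.pi * (random.random() - 0.5)))
print("sd of 30-draw Cauchy means:", round(statistics.stdev(cauchy_means), 1))  # large and erratic
```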
PART VIII – WE ARE NOT ALONE
Possible Parallels in Scholarship
It is always useful to see where similar ideas appear in the work of other scholars. They may not have used the same words, but many have wrestled with the same problem: how to distinguish between the internal assumptions of a model and the higher-level claim that the model applies to the world.
I am not certain what a nomological machine is in the strictest sense. I believe it is one of Nancy Cartwright’s inventive terms, though she may have adapted it from earlier traditions. What matters is that it points in the same direction: models do not work everywhere. They work in carefully set-up situations, and those situations create the appearance of law-like behavior.
Different thinkers have recognized this problem, but each has given it a different wrapping. Cartwright calls them machines. Box calls them models. Lakatos frames them as programmes. Michell asks whether numbers in psychology actually measure anything. Hacking points to reasoning styles. All of these are gestures toward the same gap I call meta-assumptions.
Nancy Cartwright – Nomological Machines
Cartwright’s claim is that laws work because of structured setups. A laboratory experiment, a coin-tossing device, or a shielded chamber all enforce conditions that make laws appear reliable. Outside those setups, laws may fail.
The parallel is clear:
· Assumptions = the explicit conditions inside the machine (fair coins, independent tosses, sealed environments).
· Meta-assumption = the claim that if the machine works, then the law applies beyond it—in the open world.
Her term captures something real, but it buries the higher-level leap in jargon. The clarity lies in making that leap explicit as a meta-assumption.
George Box – All Models Are Wrong, but Some Are Useful
Box’s dictum—“all models are wrong, but some are useful”—is perhaps the most widely quoted line in applied statistics. It acknowledges that models are simplifications and never literally true. The test is whether they are useful for prediction or control.
But the hidden step is this:
· Assumptions = the simplifications that make the model run (linearity, independence, normality).
· Meta-assumption = the belief that usefulness implies a genuine tether to reality, so that the model's simplified structure can be projected outward onto the world.
Box pointed to the problem, but he left the higher-level claim unexamined. The danger is that “useful” slides into “true,” when in fact usefulness is situational and context-bound.
Imre Lakatos – Research Programmes
Lakatos described science as made of “hard cores” protected by “belts” of auxiliary assumptions. When the belt takes the blows, the core survives. Scientists defend their commitments by adjusting the belt.
The overlap here is straightforward:
· Assumptions = the belt statements that keep the model intact.
· Meta-assumption = that if the belt holds, then the core is a true map of reality.
Lakatos focused on the sociology and history of science—how programmes resist falsification—but he did not ask the more basic question: does protecting the belt really ensure the core corresponds to the world? That question is meta-assumptive.
Measurement Theory – Joel Michell and Others
Measurement theory in psychology shows the same pattern. Psychologists often assume attributes like intelligence, anxiety, or self-esteem are measurable on numerical scales. But whether these constructs satisfy the requirements of measurement is almost never tested.
· Assumptions = “IQ is measurable,” “scores can be meaningfully compared.”
· Meta-assumption = that if these assumptions hold, then IQ behaves like weight or length.
Michell argues this assumption is rarely justified. It is not just that some scales are imperfect; it is that the very idea of measurement in psychology may be conceptually incoherent. This is a clear case where meta-assumptions matter: what is taken for granted is not simply the model, but the very claim that the model belongs in the same category as physical measurement.
Ian Hacking – Styles of Reasoning
Hacking’s notion of “styles of reasoning” highlights that different sciences validate their claims in different ways. Statistical significance might count as evidence in psychology, while reproducibility serves that function in physics and coherence in mathematics.
The underlying structure fits:
· Assumptions = the rules internal to each style (p-values, error bars, reproducibility thresholds).
· Meta-assumption = that these rules do not merely guide practice, but actually connect scientific reasoning to the way the world is.
Hacking draws attention to the diversity of reasoning, but once again, the leap from practice to reality is left implicit.
Is There More Clarity in the Meta-Assumption Formulation?
All of these scholars recognize the gap between models and the world. But each buries the point in their own terminology: nomological machine, hard core and belt, styles of reasoning.
The meta-assumption framing makes the leap plain.
· Assumptions = conditions internal to the model.
· Meta-assumptions = claims about whether those conditions, if satisfied in the world, guarantee that the model applies to the world.
The distinction is sharper this way. It avoids hiding the leap under new labels. It shows directly that what is at issue is not just assumptions inside the model, but assumptions about the model itself.
Situations and Scholars
There is also the question of situations. Probability is always situational. What counts as an event, what is treated as a situation, and what is defined as an outcome are all human decisions.
Cartwright’s nomological machine is, at root, a way of talking about situations. Dice, coins, and urns of balls all count as situations. Medicine, nutrition, or psychology are also situations, though far less stable. Scholars hint at this, but rarely say it directly. Applied probability is always situational. That should be obvious, but it is worth stating.
The Bottom Line – Prediction and Variability
At the end of all this, the bottom line is clear. Even if probabilistic distributions—those neat mathematical expressions of chance—can be made to apply to open systems, they tell us very little. They have almost no predictive power when it comes to the next event.
Everything depends on whether those distributions truly apply. But even if they do, the dream of predicting specific outcomes remains an illusion. The world is variable.
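A small sketch with invented numbers makes the point. Grant, for the sake of argument, that the distribution of outcomes is known exactly; the best point prediction for the next single outcome still misses by roughly the distribution's own spread.

```python
# Invented numbers, for illustration only: even a perfectly known distribution
# gives weak predictions about the *next* single event.
import random
import statistics

random.seed(2)

KNOWN_MEAN, KNOWN_SD = 100.0, 15.0          # pretend the "true" distribution is fully known
next_events = [random.gauss(KNOWN_MEAN, KNOWN_SD) for _ in range(10_000)]

# The best point prediction for each next event is the known mean -- yet it
# still misses by roughly the distribution's own spread.
errors = [abs(x - KNOWN_MEAN) for x in next_events]
print("average error of the best possible prediction:", round(statistics.fmean(errors), 1))  # about 12
```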
As for determinism versus true randomness—uncaused events—I leave that question open. Yet I find the notion of uncaused events deeply implausible, even contradictory. To claim that something has no cause, yet still produces regularity, makes no sense.
Perhaps the more unsettling possibility is that the world itself resists coherence—that it may not make sense at all.
The Complete Series:
1st: https://ephektikoi.substack.com/p/reason-1st-on-variability-probability
2nd: https://ephektikoi.substack.com/p/reason-2nd-on-variability-probability
3rd: https://ephektikoi.substack.com/p/reason-3rd-on-variability-probability
4th: https://ephektikoi.substack.com/p/reason-4th-on-variability-probability
5th: https://ephektikoi.substack.com/p/reason-5th-on-variability-probability

