Reason: Dice Rolls, Variable Outcomes, and Randomness vs. Determinism
Almost Everyone Has Played Games with Dice and Knows That, in the Long Run, Results Can Be Bounded and Predictable, but in the Short Run, You Can't Predict Any Individual Outcome
Introduction
Rolling dice is something nearly everyone has done. It’s basic. You throw a die, it bounces around, and lands on a number. One face out of six. There’s no guarantee which face comes up, but over time—hundreds or thousands of rolls—you notice a pattern. Every face shows up more or less equally. So we call it random, but it’s a kind of random that behaves predictably in the long run. This is how people first encounter probability. But once you start thinking about it, it opens up a lot of questions. What is randomness, really? Is it just a label for things we don’t understand, or is there such a thing as a truly uncaused event?
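If you want to see that long-run evening-out for yourself, here is a minimal simulation sketch in Python (purely illustrative; the roll counts are arbitrary). It rolls a virtual fair die and prints how the observed frequency of each face drifts toward 1 in 6 as the number of rolls grows.

```python
import random
from collections import Counter

def roll_frequencies(num_rolls: int, seed: int = 0) -> dict[int, float]:
    """Roll a virtual fair die num_rolls times and return each face's observed frequency."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(num_rolls))
    return {face: counts[face] / num_rolls for face in range(1, 7)}

# Short run: frequencies are lumpy and any single roll is anyone's guess.
print(roll_frequencies(10))
# Long run: every face settles near 1/6, about 0.167.
print(roll_frequencies(100_000))
```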
From dice to physics, and from common sense to philosophy, this essay looks at how we think about randomness and causality, how language and models play tricks on us, and how some theories—maybe even some famous ones—have mistaken metaphysics for mechanics.
The Fair Die and Its Contrasts
Let’s begin with the basic idea of a fair die: a cube, more or less symmetrical. The idea is that when you roll it, it will fall onto one of its six faces, and every face has an equal chance—1 in 6. It won’t stop on a corner or edge, because that’s physically unstable. It will settle onto a face, and the assumption is that there’s no bias. Now that’s an important point. A fair die is not defined by what it is, but by what it lacks. It lacks asymmetry, it lacks weighting, it lacks bias. It’s defined negatively.
In contrast, a loaded die is engineered to favor one outcome. You could add weight to one side, subtly reshape a face, shift the center of mass—there are many ways to do it. A loaded die doesn’t act “randomly” in the same way. The trickster with the loaded die can win bets over time, because the outcome is not evenly distributed. With enough data, a pattern shows up.
This contrast reveals something central: in a fair setup, short-term outcomes are unpredictable, but long-term results are statistically even. With a biased system, short-term unpredictability might still appear, but the long-term pattern deviates. And if you know how it's biased, you can use that to your advantage.
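To make the "with enough data, a pattern shows up" point concrete, here is a small Python sketch (the loading weights are invented for the example, and the cutoff quoted in the comment is the standard 5 percent chi-square value for five degrees of freedom). It simulates a fair die and a die loaded toward six, then measures how far each set of counts strays from the fair expectation.

```python
import random
from collections import Counter

def chi_square_uniform(rolls: list[int]) -> float:
    """Chi-square statistic comparing observed face counts with a fair (uniform) die."""
    expected = len(rolls) / 6
    counts = Counter(rolls)
    return sum((counts[face] - expected) ** 2 / expected for face in range(1, 7))

rng = random.Random(1)
n = 10_000

fair_rolls = [rng.randint(1, 6) for _ in range(n)]
# A loaded die: face 6 comes up roughly twice as often as each of the others.
loaded_rolls = rng.choices(range(1, 7), weights=[1, 1, 1, 1, 1, 2], k=n)

# With 5 degrees of freedom, a statistic above about 11.07 is unlikely (p < 0.05) for a fair die.
print("fair:  ", round(chi_square_uniform(fair_rolls), 1))
print("loaded:", round(chi_square_uniform(loaded_rolls), 1))
```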
Random, but Caused
So what about unpredictability? When people say dice rolls are random, they usually mean that they can’t predict the result. But that doesn’t mean the roll is uncaused. The die gets tossed, hits a surface, bounces, spins, and lands. These are physical processes. They involve mass, momentum, friction, air resistance—all basic physics. So the roll is caused, even though we can’t predict the outcome. The unpredictability comes from how complicated it is to track all the influences. Tiny differences in how the die is thrown or how it bounces can change the result. We don’t call it uncaused. We call it too complex to compute.
This is what people mean when they talk about epistemic randomness. It’s randomness from our point of view, because we don’t know or can’t measure all the factors. But under the hood, everything is following causal rules.
Scholars and the Notion of Ontological Randomness
Eventually, scholars entered the discussion—some with their feet on the ground, and others, maybe not. The grounded ones said what most people would: dice rolls seem random, but that’s because we can’t know all the factors. The process is still deterministic. If we had complete knowledge and the tools to measure it, we could predict the result. In principle.
Others, though, took a different tack. They introduced the idea of ontological randomness—that some events are not just unpredictable, but actually uncaused. These weren’t usually theorists dealing with dice, but physicists and philosophers looking at radioactive decay, quantum field fluctuations, or electron spin measurements. They claimed that in some cases, no prior event determines what happens. In other words, some events are uncaused. That’s a strong claim.
Some physicists in particular suggest that certain quantum events are fundamentally probabilistic. They don't necessarily say "uncaused" outright. Instead, they say things like "not determined by any prior state," or "indeterminate at the point of measurement." But that’s splitting hairs. If something is not determined by anything before it, then what is it, if not uncaused?
Testing Determinism with a Dice Experiment
Let’s imagine an experiment that could test this. We build a system with extreme precision. A die is manufactured to be as geometrically fair as possible, down to microns. The surface it lands on is flat and stable. A machine throws the die the same way every time—same angle, same force, same speed, same spin. We minimize temperature shifts, air currents, vibrations, telekinesis (joking), what have you—anything that could introduce noise.
Now, roll after roll, what do we get? If we’ve done our job well, we should see the same result over and over. Or at least, results that shift predictably when we vary the inputs. In other words, the outcome becomes deterministic under controlled conditions. The randomness disappears.
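Before turning to the real experiments, here is a deliberately crude sketch of the same idea in Python. The "physics" is invented for illustration and is nothing like a real model of a tumbling die; the point is only that a deterministic function of the launch parameters returns the same face for the same inputs, while a tiny perturbation can change the result.

```python
import math

def dice_machine(launch_speed: float, spin_rate: float, friction: float = 0.0007) -> int:
    """A toy, fully deterministic 'dice machine': the landing face is a fixed
    function of the launch parameters, with no randomness anywhere. Not real
    rigid-body physics; it only illustrates how determinism plus extreme
    sensitivity to initial conditions looks like chance from the outside."""
    # How many quarter-turns the toy die completes before friction stops it.
    quarter_turns = math.floor(launch_speed * spin_rate / friction)
    return quarter_turns % 6 + 1

# Identical launch conditions give the identical face, every single time.
print(dice_machine(3.2, 11.0), dice_machine(3.2, 11.0), dice_machine(3.2, 11.0))

# Nudge the launch speed in the fourth decimal place and the face can change,
# which is why an ordinary, uncontrolled throw looks random even though
# nothing uncaused is going on.
print(dice_machine(3.2001, 11.0))
```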
This experiment has been done in practice. Researchers have demonstrated that with enough control, the outcome of a dice roll can be predicted. Not perfectly—there are still limits—but well enough to show that randomness here is not deep. It’s practical.
So What About Quantum Physics?
Quantum mechanics is the area where many physicists claim true randomness exists. They point to radioactive decay, where an unstable atom decays at a time that no one can predict. The half-life tells us how long a batch of atoms will take to decay on average, but not which atom will go when. Same with measurements of spin or polarization. You can only get probabilities, not certainties.
So some say: that’s it. Proof of ontological randomness. But is it? The results are unpredictable individually, but the statistical patterns are rock solid. The same distribution curves show up every time; that regularity is what makes applications from radiometric dating to atomic clocks possible. So again, we have something that looks random in the short term, but highly predictable in the long run.
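That short-run versus long-run contrast is easy to reproduce in a toy simulation (a sketch only: the half-life is a made-up number, and the code simply samples from the standard exponential decay statistics). Individual decay times scatter wildly, yet the time by which half the sample has decayed barely moves between runs.

```python
import math
import random
import statistics

HALF_LIFE = 10.0  # arbitrary units, for a made-up isotope

def decay_times(num_atoms: int, rng: random.Random) -> list[float]:
    """Sample a decay time for each atom from the exponential distribution
    implied by the half-life. Which atom goes when is anyone's guess."""
    mean_lifetime = HALF_LIFE / math.log(2)  # half-life = ln(2) * mean lifetime
    return [rng.expovariate(1.0 / mean_lifetime) for _ in range(num_atoms)]

for seed in (1, 2, 3):
    times = decay_times(100_000, random.Random(seed))
    # Individual atoms are all over the place...
    print("first five decay times:", [round(t, 1) for t in times[:5]])
    # ...but the point where half the sample has decayed is rock solid.
    print("observed half-life:    ", round(statistics.median(times), 2))
```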
And here's where the contradiction shows up. If something is uncaused, how does it follow rules? How does it show regularity? How can it be bounded if it’s not constrained? Saying it’s uncaused but still follows a pattern sounds like doublethink.
Some people respond to this by positing hidden variables—factors we can’t measure yet, but that might account for the behavior. Physicists might prefer the term “hidden variables,” but frankly, “unknown factors” is more honest. Calling something a variable implies it’s part of a defined system. “Unknown factors” admits we’re guessing. Maybe there are causal processes we don’t understand. Maybe not. But saying “uncaused” should require more than just unpredictability.
Three Options for Causality
It helps to lay things out. There are three basic options when talking about causes:
1. Events are caused.
2. Events are uncaused.
3. Events are neither caused nor uncaused.
That third one is a nonstarter. It doesn’t even make sense. So we’re left with the first two. If someone says events are uncaused, that’s a serious claim. It needs strong evidence. But the idea that something can be uncaused and still conform to statistical laws? That’s shaky. It undermines itself. The pattern suggests structure. Structure implies constraint. Constraint implies cause.
Models and Mathematics Are Not Reality
Now we come to mathematics, which is often used by physicists to describe physical systems. And a lot of physicists are incredibly good at applying math. But that doesn’t mean they all think deeply about what mathematics actually is. Math is a modeling system. It doesn’t describe reality directly—it describes relationships between abstract entities. That can be useful, even powerful. But it can also mislead.
I have the impression, maybe wrong, that some physicists seem to forget this. Perhaps they treat mathematical models as if they are the world itself, not a description of it. Perhaps some find an elegant equation and say, “That’s how nature works.” But the world doesn’t have equations written on it. The math is a tool, not a mirror. The danger comes when people take mathematical truth to imply physical truth. Just because an equation works on paper doesn’t mean the universe is built that way.
Wigner talked about the “unreasonable effectiveness of mathematics.” Fair enough. But there’s also a lot of unreasonable ineffectiveness that gets overlooked. We can generate mathematical systems that model nothing. The map isn’t the territory.
Language and the House of Cards
The last piece of the puzzle is language. Language lets us describe the world, but it also lets us confuse ourselves. We can use it to express truths, lies, and complete nonsense. And not just isolated statements—we can build whole systems of reasoning that are internally coherent but completely disconnected from reality.
Part of the problem is that language doesn’t work in isolation. Every word is connected to a whole entangled network of other words, ideas, and beliefs. Even a dictionary is just a rough sketch of that entanglement. So we can build elaborate conceptual structures that sound impressive but rest on fuzzy definitions, unexamined assumptions, and category errors.
This is how you end up with scholars—and sometimes whole academic traditions—saying things that are elegant, but absurd. They get stuck inside a paradigm and can’t see its flaws. Abstractions get piled on abstractions, and eventually the tower has no foundation. Some even go so far as to deny that there’s an objective world. But if there’s no real world, then argument itself collapses. There’s no referent. No consequence. It becomes a game of words with no stakes. That’s not philosophy. That’s intellectual self-hypnosis.
Summary
Start with dice. They seem random in the short term but behave predictably in the long run. That’s not magic. That’s structure. When precision increases, predictability follows. So randomness there is not deep—it’s epistemic.
Quantum theory complicates things. It introduces apparent randomness that resists explanation. Some theorists say that this randomness is real and uncaused. But that claim is heavy, and the evidence is ambiguous. Regularity suggests constraint. And constraint suggests cause.
Causality is not a philosophical invention. It’s something built into our brains through evolution. Animals, including humans, respond to causes because that’s how they survive. Philosophers who say otherwise are playing word games.
Mathematics and language are essential tools—but they’re also dangerous when misused. They let us build models and descriptions, but also illusions and empty theories. The trick is knowing the difference.
Reading List (APA Format)
Ball, P. (2018). Beyond weird: Why everything you thought you knew about quantum physics is different. University of Chicago Press.
Takes a skeptical look at standard quantum narratives and explores the limits of predictability and interpretation.
Butterfield, J. (2005). Determinism and indeterminism. In J. Butterfield & J. Earman (Eds.), Philosophy of Physics (pp. 1–105). Elsevier.
Covers classical and quantum conceptions of determinism, including the assumptions behind probabilistic language.
Eagle, A. (Ed.). (2010). Philosophy of probability: Contemporary readings. Routledge.
Includes a range of perspectives on whether probabilities reflect ignorance, chance, or something else.
Ismael, J. (2009). Probability in deterministic physics. The Journal of Philosophy, 106(2), 89–108.
Argues that probabilities can arise from complexity in otherwise deterministic systems.
Maudlin, T. (2011). Quantum non-locality and relativity: Metaphysical intimations of modern physics (3rd ed.). Wiley-Blackwell.
Examines quantum theory with a focus on the metaphysical assumptions behind randomness and non-locality.
Strevens, M. (2011). Depth: An account of scientific explanation. Harvard University Press.
Explores how scientific explanations work, with a focus on when probabilistic reasoning clarifies and when it obscures.
Van Fraassen, B. C. (1989). Laws and symmetry. Oxford University Press.
Critical of overly metaphysical interpretations of physical laws, emphasizing modeling over metaphysical necessity.