Understanding the World: Chances are ...
I try to come to grips with something I have wrestled with for decades – mathematical odds or, if you will, probabilities.
Me Brain ‘urts (Homage to an Old Monty Python Sketch)
I find thinking about probabilities hard. Why is that? I think it has to do with the nature of abstraction in thinking, and not being able to tie something to concrete visualizations or concrete understandings of other concepts. Probability is very abstract, although we can represent some of it diagrammatically to a very limited extent. It seems more abstract than algebra or calculus, and definitely more than Euclidean geometry or many other branches of mathematics.
Chance is a Feature of Life – Risk, and Reward
So you may wonder why anybody would care about probabilities. Well, they are just a feature of life, and we use them every day. We look out the window and we think, well, it's likely to rain. Or we go for a road trip and we think, well, it's unlikely that I'll have an accident, but I may have a flat tire. We can assign numerical values, or we can use terms like more likely, less likely, almost certain, almost certainly not when we don't put numbers to them. It's only when we get into the mathematics and numbers that we get a little more precise in one sense – not necessarily more precise in any objective sense, but the numbers look precise.
A good part of the science important to our lives depends upon probability and its close cousin, statistics. Most research is couched in statistical language: significance, confidence intervals. There are huge problems with that, of course, in the domains in which it's applied, but it all depends on probabilities and the ramifications of that.
So it's not that it's unimportant. It's very important. If you're a gambler, you use it intuitively, but if you're a card counter, you can use it explicitly. Apparently, you're not allowed to actually win consistently at a casino; they kick you out. I'm only being a little facetious here. There are rules against people who seem to have figured out the system by using good probabilistic reasoning, and they're shown the door, apparently. I only know what I have read; I'm not a gambler in that sense, but we all gamble throughout our lives without recognizing it.
Terms That We Use – But What Is the Real Meaning?
We can use a lot of terms: odds, chance, possibilities, plausibilities, probabilities for events and outcomes, observed and observable, or theorized, or felt subjectively. So some philosophers get themselves tied in a knot about the real meaning of probability: is it something to do with platonic forms, something to do with the counting of events, or something to do with our subjective beliefs? I would say it's all three. They're all aspects of it, just different perspectives on the same thing.
Personal Difficulties, but Not Uncommon
So I said earlier that I found statistics and probability hard. I took statistics three times: twice as an undergraduate – I failed the first time and squeaked through the second – and then in graduate school, where I took graduate statistics for arts dummies (experimental psychology grad students). The course was in multiple regression analysis. I was informed it was not remotely as hard as what the engineers and hard scientists took in their undergraduate studies. Still, I got the top mark in the class (polishes his fingernails). So I give myself a little credit for having worked very, very hard. Now I've had some cognitive decline, so thinking about probabilities is not any easier. My fluid intelligence is probably worse now than it was then. I do have some objective evidence of decline and some hypothesized reasons for it, at least one of which is true. There are confounding factors, so it's hard to disentangle them.
I find only the simplest of combinatorial situations – rolling dice, flipping coins – intuitively clear. As soon as we get into more complex areas of thinking, I'm at a loss. I can't get my head around the Monty Hall problem, for instance. I believe that may be more of a Bayesian problem, but it still involves discrete events. So I just have a lot of trouble understanding the logic behind it. I have a lot of trouble understanding the whole idea of a null hypothesis and significance testing. To me, it seems irrational and illogical. That might just be me. Apparently, there are some statisticians who find it obvious, and I know there are a lot of people who do not.
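Since I can't see the Monty Hall logic intuitively, the one thing that does persuade me is brute force. Here is a minimal simulation sketch in Python (my own toy version, not anything from the literature) that plays the game many times and tallies the wins; the roughly 2/3 win rate for switching is the standard result:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round: 3 doors, 1 car. The host opens a goat door,
    and the player either switches or stays. Returns True on a win."""
    car = random.randrange(3)
    pick = random.randrange(3)
    # The host opens a door that is neither the player's pick nor the car.
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

def win_rate(switch: bool, trials: int = 100_000) -> float:
    """Fraction of wins over many independent rounds."""
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

# Switching wins about 2/3 of the time; staying wins about 1/3.
```

Whatever one makes of the verbal arguments, the tally comes out the same every time you run it.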
My Pet Orangutan
So I look at my pet orangutan (you do believe I have a pet orangutan, don't you?). Anyway, my pet orangutan can do a little bit of comparison of sizes – counting without the words – but I know my orangutan cannot understand arithmetic. Well, by analogy, I can understand some simple probabilities, but I can't understand the complex ones, not at any intuitive level. I really wrestle with Bayes' theorem. I'm trying to get my head around just how and why it works. I understand it in very simple terms, but when it comes down to the concrete – where do the numbers come from, and how do you do the computations beyond the simple algebra – I find it conceptually very hard to get my head around. Even with a Venn diagram and such, I still find it difficult. (Oh, by the way, I really don't have a pet orangutan. He's just my invisible companion.)
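For what it's worth, the one form in which Bayes' theorem makes any sense to me is a worked example with concrete numbers. Here is a sketch using the standard diagnostic-test setup; the base rate, sensitivity, and false-positive rate below are hypothetical numbers chosen purely for illustration:

```python
def bayes_posterior(prior: float, sensitivity: float, false_positive: float) -> float:
    """P(disease | positive test) via Bayes' theorem:
    P(D|+) = P(+|D)P(D) / [P(+|D)P(D) + P(+|~D)P(~D)]
    """
    numerator = sensitivity * prior               # P(+|D) * P(D)
    evidence = numerator + false_positive * (1 - prior)  # P(+), both ways of testing positive
    return numerator / evidence

# Hypothetical numbers: 1% base rate, 90% sensitivity, 5% false-positive rate.
p = bayes_posterior(prior=0.01, sensitivity=0.9, false_positive=0.05)
# p ≈ 0.154: even after a positive test, the probability is well under one half,
# because the false positives from the healthy 99% swamp the true positives.
```

The "where do the numbers come from" question is answered here by stipulation: in real applications those three inputs have to be estimated, which is exactly where the hard part lives.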
Shit Happens – But With Patterns
Variability happens, and it happens under constraints, and we can describe those constraints probabilistically. Variability means we always have differences in the outcomes for any sort of event. Probability is a way of getting a handle on that. In some cases, we can actually come up with models that are testable and countable. In other cases, we can only hypothesize, because tests would be impossible given the complexity involved. Not that the simple cases are all that simple either: we use counting, permutations perhaps, and definitely combinations for categories.
So, when we're dealing with categories such as dice, roulette wheels, cards, or coin flips, we can figure out the probabilities based on the platonic forms using various deterministic counting and combinatorial methods; with others, such as medical or psychological experiments, we have a lot more trouble due to the complexities of the situation. I'm not sure that even a deity could, even in principle.
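That deterministic counting for the platonic cases can be made completely mechanical. A sketch in Python, exhaustively enumerating two fair dice to get the probability of rolling a seven as an exact ratio:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely outcomes for two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))

# Probability of a given event = (favourable outcomes) / (total outcomes),
# straight from counting. Here: the dice sum to seven.
p_seven = Fraction(sum(1 for a, b in outcomes if a + b == 7), len(outcomes))
# p_seven == Fraction(1, 6): six favourable pairs out of 36.
```

Nothing random happens in this calculation at all; the probability falls out of pure counting, which is why the dice-and-cards cases feel tractable in a way the experimental sciences don't.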
So we can have discrete counts, or we can have continuous measurements. Sometimes we take the continuous measurements and break them into categories. There are also methods that can work with continuous measurements directly, but I've forgotten much of my training on that.
Kolmogorov's Three Axioms
Andrey Kolmogorov, a Russian mathematician, formalized the axiomatic foundation of probability theory in 1933. His three axioms form the basis of modern probability theory and are widely accepted in mathematics and related disciplines.
Probabilities are always non-negative: The probability of any event is never less than zero.
The total probability of all possible outcomes is one: If you consider everything that can possibly happen, the total probability adds up to exactly one.
Probabilities of mutually exclusive events add up: If two events cannot happen at the same time, the probability of either one happening is the sum of their individual probabilities.
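For a finite case, the three axioms can be checked mechanically. A sketch in Python using a fair die as the probability model, with exact fractions so the checks are exact rather than approximate:

```python
from fractions import Fraction

# A fair six-sided die as a finite probability model:
# each face gets probability 1/6.
die = {face: Fraction(1, 6) for face in range(1, 7)}

# Axiom 1: non-negativity. No event has probability below zero.
assert all(p >= 0 for p in die.values())

# Axiom 2: the total probability over all possible outcomes is exactly one.
assert sum(die.values()) == 1

# Axiom 3: additivity for mutually exclusive events.
# "Roll a 1 or a 2" cannot happen two ways at once, so its
# probability is the sum of the individual probabilities.
p_one_or_two = die[1] + die[2]
assert p_one_or_two == Fraction(1, 3)
```

Of course the axioms are definitions, not empirical claims; what the check shows is that the familiar fair-die model is a legitimate instance of them.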
Probabilities as Ratios (i.e. Fractions or Percentages)
It occurs to me that the simplest way to think of probabilities is as ratios, ranging from 0 over N to N over N – in other words, from 0 to 1. And that's axiomatic: probabilities lie between 0 and 1, where 0 means never happens and 1 means always happens. And what is it that happens? Well, there are events and there are outcomes. We can also phrase the same ratios as percentages, from 0% (never) to 100% (always). I find that the clearest way of thinking about it. I don't know if I was ever exposed to that framing in my earlier training, but it seems to me pretty obvious that that's what we're dealing with.
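The ratio picture – 0/N for "never" up to N/N for "always" – can be written out directly. A tiny sketch in Python; the denominator N = 8 is an arbitrary choice for illustration:

```python
from fractions import Fraction

N = 8  # arbitrary denominator, purely for illustration
for k in range(N + 1):
    p = Fraction(k, N)          # the probability as an exact ratio k/N
    assert 0 <= p <= 1          # the axiomatic bounds hold for every k
    print(f"{p} = {float(p):.0%}")  # the same ratio written as a percentage
```

Every probability in the sweep reduces to a fraction between 0 and 1, and the percentage column is just the same number in different clothes.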