Reason: On Competence, Intelligence, and the Incoherence of Fixed Taxonomies
I return to an old topic, with a new slant.
Author’s Preface
This essay examines human competence—its uneven distribution across individuals, the ways it has been defined and measured, and the limits of those measurements. The discussion challenges the common belief in a single, measurable property called “intelligence” and instead treats competence as a complex, multi-dimensional phenomenon. It critiques the history and methods of intelligence testing, especially IQ, from both theoretical and computational perspectives, and considers the appeal and shortcomings of alternative taxonomies such as Howard Gardner’s “multiple intelligences.” The analysis accepts the reality of large, observable differences in competence but rejects the simplifications that arise when those differences are compressed into a single number or forced into fixed, final categories.
Introduction
The cultural habit is to speak of “intelligence” as if it were a single, measurable property—a fixed trait that can be expressed as a number. Schools, militaries, and employers have acted as though such a number is meaningful across life domains. Observation suggests otherwise: competence is plural, uneven, and shaped by the interplay of learning, inherited capacities, and context. This essay treats “intelligence” as a linguistic compression of diverse abilities, not a discovered substance, and examines why common measurement practices fail to capture the complexity. It also considers the appeal of taxonomies—Gardner’s and others—and argues that the very idea of a fixed set of intelligences is incoherent given the fluid nature of human ability.
Discussion
I. Competence Is a Mosaic, Not a Monolith
Human abilities appear as patterns of competence across specific domains: verbal expression, spatial reasoning, mechanical skill, musical performance, and social negotiation, among others. These patterns are often sharply uneven. One person may have poor speech, intense hyperactivity, and exceptional perceptual–motor coordination; another may excel across many domains; another may struggle even with basic self-maintenance.
Competence in one area does not imply competence in another. The range of variation within and across populations is immense, and any single global score conceals the detailed structure of abilities. Collapsing these abilities into a single “intelligence” number substitutes a simplistic label for a rich, uneven reality.
II. Savants, Continua, and the Illusion of Categories
The old term idiot savant highlighted stark contrasts: severe limitations in most areas paired with extraordinary ability in a narrow domain. “Savant” softened the label but not the phenomenon. Savants can calculate calendar dates instantly, perform rapid factorisation, recall vast domain-specific information, or reproduce complex music or artwork after minimal exposure. These feats often bypass the conscious, stepwise strategies typical of others, and their mechanisms remain largely unknown.
Such profiles do not prove a categorical gap between “savant” and “normal” reasoning; they fit more plausibly along a continuum. The same applies at the low end of competence: there are degrees of limitation, not sharp boundaries. Dividing these continua into discrete types is a matter of language, not of natural taxonomy.
III. Language, Classification, and Selection
Language allows classification into areas such as verbal, spatial, social, or mechanical ability, but such categories are fuzzy and overlapping. Every act of classification is also an act of selection: deciding what to include and what to leave out. Test batteries, theoretical models, and ability taxonomies are selections shaped by the assumptions and priorities of their designers as much as by the structure of the world.
IV. A Short History of IQ and Its Expansion
IQ testing began as a narrow instrument, first for identifying schoolchildren in need of extra help and soon afterward for sorting military recruits, using structured, puzzle-like tasks under time pressure. Over time it became a tool for schools, employers, and researchers, with the score treated as shorthand for overall competence.
Despite this generalisation, IQ measures only a narrow slice of human ability. Its scoring often violates measurement principles—treating ordinal data as if it were interval data—and it compresses varied ability patterns into a single total. The testing industry persisted and expanded, spawning variants such as separate visual–spatial or motor scores, as well as competing theoretical camps. Gardner’s multiple intelligences popularised a plural framing; others, such as Thurstone’s seven abilities, Guilford’s elaborate structure, or CHC theory, offered different cuts of the ability space.
All such frameworks are conjectural. They carve the ability landscape along lines that are intuitively appealing or administratively convenient but not shown to be natural boundaries of the mind.
V. Learning and Inheritance: Interdependent, Not Separable
Competence always involves learning; no one is born knowing how to read or perform complex tasks. But learning rests on inherited capacities—physical, sensory, and cognitive—that constrain and enable what can be acquired. In turn, these capacities are shaped, refined, or suppressed by learning.
Extreme claims of “all genes” or “all environment” are untenable. Inherited and learned elements are interdependent at every stage, making any attempt to isolate their separate contributions in testing conceptually incoherent.
VI. The Confounding that Cannot Be Untangled
All competence testing—IQ included—confounds innate capacity with learned skill. This is not simply a technical inconvenience; it is an inherent feature of what is being measured. Because these components are mutually dependent, no test can cleanly separate them.
Statistics claiming to apportion “percent of variance explained” to heredity or environment at the group level are logically and practically suspect, and even if they had some group-level meaning, they do not apply to individuals.
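The group-level nature of such statistics can be made concrete with a toy calculation. In quantitative genetics, heritability is defined as a ratio of population variances, h² = Var(G) / Var(P). In the sketch below, all numbers are invented purely for illustration, and the environmental deviations are chosen to be uncorrelated with the genotypic values so that the variances add. The same five individuals, with the same genotypic values, yield a high "heritability" in a uniform environment and a low one in a variable environment: the statistic describes a population's variance structure, not any person in it.

```python
# Toy illustration (hypothetical numbers): heritability h^2 = Var(G) / Var(P)
# is a ratio of population variances, not a property of any individual.
import statistics

genetic = [95, 100, 105, 110, 90]     # hypothetical genotypic values (same people throughout)
env_a   = [2, -2, 2, -1, -1]          # small environmental deviations (uncorrelated with genes)
env_b   = [20, -20, 20, -10, -10]     # large environmental deviations (same pattern, scaled)

def h2(genes, envs):
    """Heritability as the share of phenotypic variance attributable to Var(G)."""
    pheno = [g + e for g, e in zip(genes, envs)]
    return statistics.pvariance(genes) / statistics.pvariance(pheno)

print(round(h2(genetic, env_a), 2))   # high in the low-variance environment (~0.95)
print(round(h2(genetic, env_b), 2))   # low in the high-variance environment (~0.15)
```

Identical people, identical genes; the "heritability of the trait" changes simply because the environments differ, which is why the number cannot be read back onto individuals.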
VII. The Computational and Measurement Faults in IQ Scoring
IQ testing’s scoring process suffers from multiple weaknesses:
1. Binary reduction of varied tasks. Diverse items are reduced to right/wrong scoring, assuming closed-system correctness that applies only to narrow problem types.
2. Illicit addition of ordinal data. Scores from unlike tasks are treated as commensurable units, ignoring qualitative differences in what they measure.
3. Loss of pattern information. Different answer patterns can yield identical totals, discarding potentially important structure.
4. Interval-scale pretence. Ordinal totals are treated as interval measures suitable for arithmetic operations.
5. Imposed normality. Raw scores are forced into a Gaussian distribution, producing neat standardisation without proving an underlying normally distributed trait.
6. Time-limit bias. A single time limit for all items weights speed heavily, potentially underestimating other cognitive strengths.
These steps may be convenient for ranking large groups but offer little theoretical justification.
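Two of these faults, the loss of pattern information (3) and imposed normality (5), can be demonstrated in a few lines of Python. The item scores and raw totals below are invented solely for illustration:

```python
# Hypothetical sketch of scoring faults 3 and 5, with invented data.
from statistics import NormalDist

# Fault 3: opposite ability profiles collapse to the same total score.
person_a = {"verbal": [1, 1, 1, 1], "spatial": [0, 0, 0, 0]}
person_b = {"verbal": [0, 0, 0, 0], "spatial": [1, 1, 1, 1]}

def total(profile):
    return sum(map(sum, profile.values()))

print(total(person_a), total(person_b))  # 4 4: identical totals, opposite profiles

# Fault 5: any rank ordering can be forced onto a "normal" IQ scale
# (mean 100, SD 15) by mapping rank-based percentiles to Gaussian quantiles.
raw = [3, 7, 8, 12, 30, 31, 55]          # arbitrary, heavily skewed raw totals
nd = NormalDist(mu=100, sigma=15)
iq = [round(nd.inv_cdf((rank + 0.5) / len(raw))) for rank in range(len(raw))]
print(iq)                                # bell-shaped by construction, not discovery
```

The second transformation produces a tidy bell curve from any monotone ordering whatsoever, which is why a normal distribution of IQ scores is evidence of the scoring procedure, not of an underlying normally distributed trait.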
VIII. “Proven to Work”: A Narrow and Circular Claim
When proponents say IQ “works,” they usually mean it correlates with school grades, job performance in certain structured settings, or other cognitive tests, or that it remains stable over time. These are narrow definitions of success and often circular: the test predicts other tests, or measures designed to load on the same factor. None of this proves that IQ captures a general underlying property of mind.
IX. The Appeal and Limits of Taxonomies
Gardner’s eight intelligences—linguistic, logical–mathematical, musical, bodily–kinesthetic, spatial, interpersonal, intrapersonal, and naturalistic, with “existential” sometimes proposed—are intuitively appealing but incomplete. Thurstone’s seven, Guilford’s hundreds, CHC’s ten-plus factors, emotional intelligence, and practical intelligence all offer alternative divisions.
These frameworks share the same weaknesses: fuzzy boundaries, overlap, cultural bias, measurement fragility, and probable incompleteness. More fundamentally, for something as fluid and context-dependent as mental competence, a fixed, final taxonomy is incoherent. Abilities develop, decline, and shift; contexts redefine what counts as competence; and skills interact in ways that defy clean separation.
X. Tacit Recognition vs. Numerical Overreach
The reality of large, observable differences in competence is undeniable. People see them without tests: in problem-solving, practical work, learning speed, adaptability, and more. The failure is not in recognising these differences but in flattening them into a one-dimensional score, discarding the domain-specific information that matters in actual life. IQ testing oversimplifies the complexity of competence by a wide margin.
Summary
Human competence is a mosaic of uneven abilities, not a single measurable property. Attempts to represent it as a single score—whether through IQ or other means—strip away essential structure. Savant phenomena and low-function extremes illustrate continua, not discrete types. Learning and inheritance are inseparable in practice and concept. IQ scoring relies on binary reductions, ordinal-data misuse, loss of pattern information, unwarranted interval assumptions, and imposed distributional forms. Claims that IQ “works” rest on narrow, often circular evidence. Taxonomies of intelligences are conjectural maps over a shifting landscape; fixed, final lists are conceptually incoherent. The challenge is to respect the complexity of competence without reducing it to sterile simplicity.
Readings
Gould, S. J. (1996). The mismeasure of man. New York, NY: W. W. Norton.
— Historical critique of intelligence measurement, showing how assumptions and social aims can be embedded in supposedly objective scores.
Mackintosh, N. J. (2011). IQ and human intelligence (2nd ed.). Oxford, UK: Oxford University Press.
— Surveys psychometric traditions, including factor-analytic construction of “g” and the role of standardisation in shaping score meaning.
Richardson, K. (2017). Genes, brains, and human potential: The science and ideology of intelligence. New York, NY: Columbia University Press.
— Challenges hereditarian claims and explains why group-level variance estimates cannot be applied to individuals.
Sternberg, R. J., & Wagner, R. K. (1993). Practical intelligence: Nature and origins of competence in the everyday world. New York, NY: Cambridge University Press.
— Attempts to capture tacit, real-world competence; highlights the gap between test performance and practical functioning.
Treffert, D. A. (2009). Islands of genius: The bountiful mind of the autistic, acquired, and sudden savant. London, UK: Jessica Kingsley Publishers.
— Documents savant phenomena and their implications for understanding human cognitive diversity.
Thurstone, L. L. (1938). Primary mental abilities. Chicago, IL: University of Chicago Press.
— Early alternative to unitary intelligence; illustrates both the promise and the limits of taxonomic approaches.


Stephen J. Gould's "The Mismeasure of Man" is worth reading for a deeper analysis.
I suggest in some of my essays that the correlations presented among testing subfactors appear to be relatively weak, and theory says that they are computationally inappropriate in any case. Maybe the theory on computation is itself inappropriate or just plain wrong, but that is an argument for the mathematicians. It becomes harder and harder to fathom the more you look at it.
I have stated a few times that empiricism trumps theory, but I am not sure that the psychometricians have made a case for IQ testing that I find plausible.
"Statistics claiming to apportion “percent of variance explained” to heredity or environment at the group level are logically and practically suspect, and even if they had some group-level meaning, they do not apply to individuals."
Why? How does the heritability of IQ differ from the heritability of other traits?
Also, g is not total competence; it is general intelligence.