Understanding: Questioning Expertise - A Critical Examination of Expert Opinion
Why We Shouldn’t Trust Experts Without Scrutiny
Note: This is a ChatGPT ghostwritten paraphrase of an original article written by me.
That an LLM can summarize, paraphrase, condense, and expand an article so well is a great mystery. Of course, it is equally a great mystery that people can do this. There are a lot of mysteries in the world.
From The Princess Bride:
Inigo Montoya: "Who are you? I must know!"
Dread Pirate Roberts: "Get used to disappointment."
Author’s Preface
This is a paraphrase of my original article, “Trusting the Experts,” published at Op Ed News: https://www.opednews.com/populum/page.php?f=Trusting-the-experts-by-Mike-Zimmer-Epistemology_Expertise_Ideas--Philosophy_Philosophy-200720-688.html.
I discuss the limitations of expert knowledge and identify five key factors that shape expert opinion, highlighting why it is essential to approach it with a critical eye.
Introduction
In this essay, I examine the complexities and limitations of expert opinion, a topic that has become increasingly relevant as we rely more and more on experts for advice in critical areas. Experts play a central role in society, but the blind trust we often place in them is problematic. Expertise, while valuable, is far from infallible. Experts frequently make mistakes, and their conclusions can be influenced by various factors that undermine the reliability of their judgments. It is important to understand these limitations to ensure that we do not misplace our trust and that we develop a healthy skepticism toward expert opinion.
The Role and Importance of Experts
Experts are indispensable to the smooth functioning of society. We rely on their specialized knowledge in fields such as medicine, science, engineering, finance, the arts, and various trades. Their expertise allows them to solve problems, offer advice, and make informed decisions. However, it is crucial to recognize that experts are not universally reliable. While some experts excel in their fields, others do not perform significantly better than chance (Tetlock, 2005). This discrepancy raises an important question: Why do we often place unquestioning trust in experts when their reliability varies so widely?
One reason for our reliance on experts is that the alternative—trusting non-experts—often seems worse. However, even within the realm of experts, there are significant variations in reliability. Some experts produce excellent results, but others make decisions that lead to negative or even catastrophic outcomes. In some cases, the failure of an expert may result only in financial loss, but in others, it could have life-or-death implications. It is also important to understand that expertise is often domain-specific. An expert in one field does not necessarily have the knowledge or skills to make informed judgments in another (Tetlock, 2005).
Despite these limitations, experts are often invited to give opinions on topics outside their areas of specialization in public discourse, which contributes to confusion between informed judgment and opinion. Recognizing that expertise in one area does not automatically translate into competence in another is essential for preventing the misapplication of expert advice.
The Challenges of Trusting Experts
One of the key issues I raise is the overreliance on experts without sufficiently critical evaluation. Experts themselves operate within flawed systems of knowledge and are subject to the same biases, limitations, and errors in judgment as anyone else (Kahneman, 2011). Nevertheless, society often encourages blind deference to their authority, ignoring the fact that experts can—and often do—make mistakes.
Experts regularly disagree with one another, particularly in areas where the body of knowledge is still evolving. For instance, in science, competing experts may propose conflicting theories, making it clear that they cannot all be correct. Even in fields where consensus exists, the possibility remains that the consensus is either incomplete or entirely wrong (Ioannidis, 2005). Just because the majority of experts agree does not mean that their conclusions are guaranteed to be accurate (Open Science Collaboration, 2015).
A significant part of the problem lies in how experts approach knowledge. Many begin with the assumption that the current understanding in their field is correct. When confronted with evidence that challenges this understanding, they may dismiss it without due consideration. This rigid adherence to established paradigms can lead even highly skilled experts to be wrong on fundamental issues (Tversky & Kahneman, 1974). Expertise alone does not protect individuals from the biases and blind spots that shape human reasoning.
The Science Example: A Field with Its Own Problems
Science is often held up as the gold standard for objectivity and self-correction, but in reality, it is not immune to the limitations of expertise. The history of science is filled with discarded theories and mistaken conclusions. While science has made significant contributions to human progress, it is far from infallible.
One of the most significant issues in contemporary science is the "replication crisis," particularly in psychology and biomedical research, where many studies fail to replicate (Open Science Collaboration, 2015). This crisis underscores the fragile reliability of much scientific knowledge and challenges the credibility of expert findings. Ioannidis (2005) notably highlights how many published research findings may, in fact, be false, raising further concerns about the reliability of experts within the scientific community.
Another problematic tendency in science is the dismissal of anecdotal evidence. While anecdotes are not as rigorously controlled as formal studies, they can still reveal patterns or phenomena that are worth investigating. The wholesale rejection of anecdotal evidence can prevent the discovery of insights that challenge the prevailing understanding (Gigerenzer, 2014). Relying exclusively on established methods while ignoring the value of anecdotal observations can inhibit scientific progress.
I also question the belief that science is inherently self-correcting. While science can and does evolve, the process of correction is often slow and met with resistance. Established experts and institutions are often reluctant to abandon long-held views, even in the face of contradictory evidence (Horton, 2015). The peer review process, intended to ensure the quality of scientific research, frequently reinforces groupthink and stifles innovation. This resistance slows progress and delays the emergence of new theories or alternative explanations.
The Fallibility of Experts: Bias, Judgment, and Cultural Influence
Experts are, first and foremost, human, and as such, they are susceptible to the same cognitive biases and errors in judgment as anyone else. Cognitive biases, such as confirmation bias, lead individuals to seek out information that supports their pre-existing beliefs and to dismiss evidence that contradicts them (Tversky & Kahneman, 1974). This bias is no less pervasive among experts than it is among non-experts.
Incentives also play a significant role in shaping expert opinion. Experts may be motivated by financial gain, career advancement, or professional recognition. As Upton Sinclair famously said, "It is difficult to get a man to understand something when his salary depends upon his not understanding it" (Sinclair, 1935). This adage applies directly to the biases that can shape expert judgment.
The cultural and social environment in which experts operate further shapes their opinions. Groupthink, peer pressure, and institutional norms can all lead to the reinforcement of flawed or outdated ideas (Janis, 1972). Experts often work within institutions, such as governments, universities, or private corporations, that have their own sets of interests and pressures. These institutions shape the questions that experts investigate and the conclusions they are willing to reach. The result is a complex web of influences that can distort expert judgment.
A Call for Skepticism and Critical Thinking
Given the many limitations of expertise, it is crucial to adopt a more skeptical and critical approach when engaging with expert opinions. While experts can offer valuable insights, their judgments should never be accepted without scrutiny. Blind trust in experts can lead to poor decisions, particularly in high-stakes fields such as healthcare, public policy, and finance.
The key is to approach expert opinion with a healthy degree of skepticism. This does not mean rejecting all expert advice but rather assessing it critically. It is essential to recognize the influence of cognitive biases, institutional pressures, and flawed disciplinary knowledge on expert judgment. By doing so, we can make more informed decisions and avoid the dangers of excessive deference to authority figures (Tetlock, 2005).
Summary: Expertise, While Necessary, Must Be Questioned
In conclusion, I urge readers to maintain a healthy skepticism toward experts and their opinions. While experts play a crucial role in guiding decisions and advancing knowledge, they are still human and subject to the same biases, errors, and limitations as everyone else. Their conclusions are shaped by personal, institutional, and disciplinary factors, all of which can distort their judgment.
Rather than deferring to experts uncritically, I encourage a more balanced approach: listen to their advice, but evaluate their claims carefully. Consider the broader context in which they operate, including the incentives and biases that may influence their conclusions. Even the most knowledgeable experts can be wrong, and it is essential to remain vigilant and critical in our engagement with expert opinion.
References
Gigerenzer, G. (2014). Risk savvy: How to make good decisions. Viking.
Note: Gerd Gigerenzer is a psychologist and director of the Harding Center for Risk Literacy. His work focuses on decision-making and intuitive risk-based thinking.
Reading: This book offers tools to improve decision-making, especially under uncertainty.
Horton, R. (2015). Offline: What is medicine’s 5 sigma? The Lancet, 385(9976), 1380. https://doi.org/10.1016/S0140-6736(15)60696-1
Note: Richard Horton is the editor-in-chief of The Lancet and criticizes systemic failures in medical research.
Reading: This article questions the reliability of medical research and calls attention to systemic issues in scientific methods.
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Note: John Ioannidis is a professor of medicine and statistics at Stanford University. His work addresses the reliability and reproducibility of scientific research.
Reading: This paper critiques the reliability of scientific studies, particularly in biomedical research.
Janis, I. L. (1972). Victims of groupthink: A psychological study of foreign-policy decisions and fiascoes. Houghton Mifflin.
Note: Irving Janis was a social psychologist known for his theory of groupthink, which examines how group dynamics can lead to poor decision-making.
Reading: This book explores the psychological mechanisms behind groupthink, using historical examples to illustrate its effects.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Note: Daniel Kahneman is a psychologist and Nobel laureate whose work on judgment and decision-making reshaped understanding of human cognition.
Reading: This book contrasts intuitive, fast thinking with deliberate, slow reasoning and explores how biases influence decisions.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Note: The Open Science Collaboration is a group of researchers addressing the replication crisis in psychology.
Reading: This study highlights the challenges of replicating findings in psychology, revealing significant issues with reproducibility.
Sinclair, U. (1935). I, candidate for governor: And how I got licked. University of California Press.
Note: Upton Sinclair was an American writer and activist best known for his critiques of capitalism and industry.
Reading: This memoir recounts Sinclair’s run for governor of California and examines how media manipulation influenced the election.
Tetlock, P. E. (2005). Expert political judgment: How good is it? How can we know? Princeton University Press.
Note: Philip Tetlock is a psychologist and political scientist who has studied the accuracy of expert predictions, particularly in politics.
Reading: This book analyzes expert predictions and shows that experts are often no better than chance at making accurate forecasts.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
Note: Amos Tversky and Daniel Kahneman revolutionized the understanding of human decision-making with their work on heuristics and biases.
Reading: This paper introduces the heuristics and biases framework, explaining how mental shortcuts can lead to judgment errors.