Understanding the World: Epistemological Challenges in the Age of AI - A Skeptical Inquiry into Truth and Falsehood
Exploring the Limits of Knowledge in Human Cognition and Artificial Intelligence
Note: This essay was crafted with the assistance of Large Language Model AI (LLMAI), which served as both research assistant and ghostwriter. No LLMAI was harmed in the making of this essay—though its assertions have been met with the necessary skepticism.
Author's Preface
Over the past several months, I've been using a large language model (LLM) artificial intelligence, and during this time, I've encountered dubious information and bias. These experiences made me ponder whether human-generated information is inherently better or worse than AI-generated content. As I reflected on this, I realized the question is far more complex, with no straightforward answers.
Human information is often flawed, containing both truth and falsehood, and AI, trained on human information, inevitably inherits these same errors and biases. While the scope of information any single human can access is limited, AI is trained on a much broader dataset. Yet, this breadth does not guarantee accuracy; AI inherits all the flaws within its training data, just as humans are influenced by the imperfections in the information they consume.
The challenge of determining the accuracy of AI compared to human understanding is significant. AI, despite its processing power, makes mistakes. However, humans also make mistakes, lots of them, often without knowing it. One difference is that AI lacks conscious processing, understanding, and meaning, while humans possess these qualities, but that difference has no direct bearing on the error rate. Ultimately, the question of whether we can trust AI is tied to whether we can trust human sources of information. Both AI and humans can provide accurate information as well as false information, yet distinguishing between the two is often beyond our capabilities.
Introduction
The rise of AI introduces complex challenges to our understanding of truth and knowledge. Both humans and AI rely on data to form conclusions, but the quality of this data is influenced by biases, errors, and the inherent limitations of human understanding. This essay explores whether AI can genuinely approach truth or whether it merely magnifies the uncertainties that have historically complicated human understanding.
Epistemological Concerns and Curated Data
Knowledge, whether human or AI-generated, is deeply entwined with the data we access. This data is rarely neutral; it is shaped by those who present it, influencing the conclusions we draw. The challenge lies in discerning truth from falsehood when the very foundation of our knowledge is subject to manipulation.
Human knowledge is shaped by information from various sources, which is not gathered randomly but selected and presented with inherent biases. Similarly, AI systems rely on curated datasets that are deliberately chosen, often reflecting underlying biases and dubious interpretations and containing outright errors, which raises concerns about the reliability of the knowledge they produce.
Accuracy and Challenges in Determining Truth
One of the core concerns in epistemology is the accuracy of the data on which knowledge is based. Simple tasks, like cracking a walnut with a hammer, are easy to verify because the outcome is immediate and observable. However, in more abstract domains like nutrition, where effects may take years to manifest, the relationships are far more difficult to establish. This issue carries over to AI systems, where the data used for training may be wrong, even dead wrong, leading to outputs that are plausible but incorrect.
The Problem of Correctness in AI and Human Outputs
Both AI and human-generated outputs are prone to error, particularly when the underlying data is flawed. AI can produce information that seems plausible but is factually incorrect, often referred to as "hallucination" or "confabulation." Humans, too, are susceptible to similar errors, especially when dealing with complex information. The challenge lies in identifying and correcting these errors, which requires ongoing vigilance and critical analysis.
Assessing Knowledge in Complex Cases
The issue of determining truth becomes even more contentious in complex cases where evidence is subject to interpretation, biases, and the limitations of human understanding. While simple cases allow for straightforward verification, more complex scenarios, such as the long-term effects of nutritional choices, are far less clear-cut. Fact-checking in these instances is fraught with challenges, as biases can infiltrate the process, and the inherent limitations of our knowledge base complicate the task.
Given that AI is trained on human knowledge, and this knowledge is curated according to human biases and limitations, it becomes difficult to compare the correctness of AI and human-generated outputs. AI has the advantage of processing and integrating a far wider array of information than any individual or group of experts could manage. However, this capability is separate from the question of correctness.
Summary
The epistemological challenges posed by AI highlight the limitations of both human and machine cognition in the pursuit of truth. A careful and non-dogmatic approach to evaluating knowledge claims is necessary to navigate the uncertainties inherent in both human and AI-generated information. Recognizing that all knowledge is provisional and subject to revision is crucial in maintaining intellectual humility and rigour in our quest to understand the world. The possibility of assessing knowledge for truth in complex cases remains open, as both human and AI understanding are limited by biases, partial information, and the inherent complexity of the issues at hand.