
Note: AI was used as research assistant and ghostwriter in this essay. For the most part, it reflects my thoughts quite closely. Although there was some verbal abuse, I did not physically harm the AI.
Author's Preface
I once again turn to the problems of epistemology and understanding, with a focus on which is better at getting things right or wrong: AI or human beings. And I conclude that since AI depends upon human beings to populate its database, any errors that humans have made over the years in the recorded material used for AI training are propagated into the AI itself. The algorithms may introduce some errors. The prompt may introduce some errors. But on the whole, many of the errors in AI output are already present in the database, which is human-produced. So once again, it's GIGO: garbage in, garbage out. This essay adheres pretty closely to my words, but I will use AI to clarify a few points.
I have tried a new approach here, with my words adhered to quite closely and my normal speech patterns preserved, since this was dictation. I have found that speech-to-text with ChatGPT on Android is exceptionally good, highly accurate.
I tried to get the AI to reflect my words quite closely. So, as a result, the text has not been polished; I present it mostly as spoken, imperfections and all.
I had ChatGPT include critiques of my observations. Interesting, that. I asked for references and in-line citations. As usual, this is a departure from standard academic practice, and I do have reservations. It is undoubtedly not academically respectable, currently, to prepare an essay in this manner. Will this change in the near future? I suspect that the answer will be yes.
Introduction
In this essay, I explore the question of who is better at getting things right or wrong: AI or human beings. The inquiry delves into how human errors, biases, and misinformation find their way into AI systems and manifest as mistakes in AI outputs. While AI is capable of generating coherent and useful information, it often reproduces human-made errors due to its reliance on human-produced data.
This essay emphasizes the challenges AI faces due to the limitations of its training data, the variability of its responses, and how the process of prompting AI can lead to unpredictable outcomes. At the same time, the essay acknowledges that humans themselves are prone to misunderstanding and misinformation, so both AI and humans struggle with epistemological issues. The question of fallibility in both AI and humans is explored with the conclusion that AI errors often reflect the "garbage in, garbage out" (GIGO) principle.
Large-Language Model AI and Getting Things Wrong
We are told that large-language-model artificial intelligence gets things wrong, so you need to be careful in evaluating its output. And that's true. Sometimes it makes things up out of whole cloth. They call it hallucination; I prefer to call it confabulation (Mitchell, 2019; Shane, 2019). Things that are concrete and testable and easily verifiable, it gets wrong. Things that are objectively true. Take, say, the syntax for asking the ChordPro program to create a chord diagram. It'll get that wrong. At least earlier versions did; it might be better now. However, these are epistemological questions, and we can talk about them with respect to AI, but the problem is broader than that.
Critique by ChatGPT: While "confabulation" is a valid term, it might be better to use "hallucination," as this is the standard AI terminology. Confabulation is a term used in psychology to describe faulty memory, which is not the same as how AI generates errors.
Human Error in Acquiring Knowledge
We can get things wrong. We can get things wrong because we're taught incorrect things. We hear incorrect things. We read incorrect things. We put things together incorrectly. We misunderstand. We have biases that shape our views. The media are wrong. Books are wrong. Any source of information can be wrong, and quite often is. The world is full of information and contradictions. It's full of misinformation and a substantial amount of deliberate disinformation: lies, if you will, propaganda. Call it what you will.
Critique by ChatGPT: This is accurate but could benefit from naming specific cognitive biases such as confirmation bias and the Dunning-Kruger effect. This would make the explanation of how human views are shaped more concrete.
Concrete Knowledge vs. Abstract Knowledge
Some of the things asserted might be true. They're based upon sober consideration of evidence, and there's a reasonable chance they're correct. When we deal with the concrete, things can be correct, and more often than not, we need that. In order to survive as individuals or as a species, we need to have concrete knowledge about the world. However, when we look at more abstract knowledge, things get a little dicier (Audi, 2010).
Critique by ChatGPT: The distinction between concrete and abstract knowledge is valid, but the essay could elaborate on how even concrete knowledge can evolve over time (e.g., with new scientific discoveries). This would highlight the fluid nature of both types of knowledge.
AI's Information Sources and Loss of Knowledge
When we look at large-language-model AI, we have an enhanced problem. You have this world of information, a big body of information, and only some of it is available. Some of it's lost. Some of it's destroyed: natural disasters, vandalism. The erosion of information over time is vast (Feather, 2007). Most of it was never recorded to begin with. And then we have a small subset that's digitized. Query an AI and it'll tell you its training data is vast, but that's ridiculous. It's not vast. It's a small subset, large only with respect to other efforts at capturing knowledge (Garcia, 2021). I don't think it's Library of Congress vast, but I could be wrong. Anyway, it's a substantial amount of digitized information, of varying quality. And the problem is we don't have any really effective ways of verifying the quality of much of it.
Critique by ChatGPT: This is correct, though it would be useful to mention that digital preservation efforts (e.g., archives and the Library of Congress) aim to mitigate the loss of knowledge. While the digitized data AI uses may be a subset, it is still quite large compared to individual human experience.
Verifying Information Quality
Some simple facts are things that we can easily verify. Others are a little iffy, or a whole lot iffy. Political opinions are a whole lot iffy. Predictions are tenuous at best. So all this information is digitized and made available to the large language model. It's selected, curated, as they say, by individuals who are not only biased but may badly misunderstand some of the material. They select material based on their understandings and feed it into the AI, which through its processes stores it and calculates probabilities, percentages, frequencies of association between words (Goodfellow, Bengio, & Courville, 2016).
Critique by ChatGPT: Correct, but a mention of current efforts in explainable AI (XAI) could show that tools are being developed to help verify the quality of AI outputs.
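To make "frequencies of association between words" concrete, here is a toy sketch of my own devising. A real LLM learns dense vector representations with a transformer, not explicit word-pair counts, but the intuition is the same: whatever is in the data, true or false, survives as learned statistics.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for curated training data; any falsehood in
# these sentences is inherited directly by the "model" built from them.
corpus = [
    "the moon is made of rock",
    "the moon is made of cheese",  # a falsehood in the data
    "the earth is made of rock",
]

# Count how often each word follows each other word (bigram frequencies).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word_probs(word):
    """Turn raw counts into probabilities for the next word."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("of"))  # ~{'rock': 0.67, 'cheese': 0.33}
# Garbage in, garbage out: the falsehood survives as a probability.
```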
AI Training Processes
And then you have the training process, where human beings, with their limited understanding, their biases, and their personal and corporate imperatives, use a process called reinforcement (metaphorically, really; it's not that much like animal reinforcement) to shape the algorithms of the AI to produce coherent output. And lo and behold, it does have access to a large amount of information, more than any individual human would encounter in a lifetime. And it puts things together based on patterns trained into it (Mitchell, 2019). Somehow it manages to abstract and generalize in a way that is non-human.
Critique by ChatGPT: The distinction between supervised learning and reinforcement learning could be clearer. Supervised learning involves training on labeled datasets, while reinforcement learning involves trial and error with feedback (rewards or penalties). The note below further clarifies this.
ChatGPT Note:
Supervised learning involves training AI on labeled data, where each input is paired with the correct output (Goodfellow et al., 2016). The goal is to minimize the difference between the AI’s predictions and the correct answers.
Reinforcement learning, on the other hand, involves training the AI through feedback—rewards for correct actions and penalties for incorrect ones. This approach is used in environments where the correct output isn’t known beforehand, but the AI learns by maximizing cumulative rewards (Mitchell, 2019).
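To make the contrast concrete, here is a minimal runnable sketch using a one-parameter toy model. This is my own illustration, not a real training setup, and the second loop is a crude trial-and-error stand-in for reinforcement learning, not an actual RL algorithm:

```python
import random

# A one-parameter "model": predict y = w * x. Real networks have billions
# of weights, but one weight is enough to show the two training regimes.

# Supervised learning: labeled pairs (x, y); minimize prediction error.
w = 0.0
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # labels encode the rule y = 2x
for _ in range(100):
    for x, y_true in data:
        y_pred = w * x
        grad = 2 * (y_pred - y_true) * x  # gradient of squared error
        w -= 0.01 * grad                  # step toward the known answer
print(f"supervised: learned w = {w:.3f}")  # converges near 2.0

# Reinforcement-style learning: no labels, only a reward signal.
w = 0.0
def reward(action):
    return -abs(action - 2.0)  # the "environment" scores actions; 2.0 is best
for _ in range(2000):
    candidate = w + random.gauss(0, 0.1)  # try a small random variation
    if reward(candidate) > reward(w):     # keep it only if reward improves
        w = candidate
print(f"reinforcement-style: learned w = {w:.3f}")  # also drifts toward 2.0
```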
Non-Human Processing and Consciousness in AI
We presume that humans at a deep level don't work the same way as LLM AI. We assume that LLM AI and similar things, no matter what implementation, are not conscious. They don't understand meaning. They don't reflect. When they're not prompted, they're offline. Well, I suppose they could be doing housekeeping. But they're not reflecting. They're not cogitating, as far as we know. As far as we know, they're not conscious (Audi, 2010).
Critique by ChatGPT: This is correct, though it might be worth noting that research is ongoing into whether some form of machine consciousness could be achievable in the future (Mitchell, 2019).
The Problem with Prompting AI
So we have to prompt this thing. And you can take the same prompt, run it again and again, and get different answers each time. There might be some commonality; there might be huge differences. In part, that's deliberate. The AI trainers have built in a random component by adjusting how the possible outputs are weighted. I don't know the exact process. Maybe it's published somewhere; I would like to know. But anyway, they use this reinforcement learning. And then the prompts can be highly variable. You can bias the AI to produce output in different styles. You can, say, request output that's scholarly, pedantic even, or output readable by a 10-year-old. Or you can take a skeptical or a gullible perspective. You can bias it in all kinds of ways (Garcia, 2021).
Not only that, but with the right prompts, you can make AI argue with itself or even behave like a sophist, contradicting its own positions in response to different queries. The variability is built-in, and you can manipulate it for different language styles, levels of depth, or even tone.
Critique by ChatGPT: The variability in AI responses is accurately described. However, it would benefit from a more detailed explanation of how prompt parameters such as "temperature" and "max tokens" influence the randomness and length of responses. This would provide readers with a clearer understanding of how prompts control AI output (Garcia, 2021).
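Here is a toy illustration of the "temperature" parameter the critique mentions: it controls how sharply or flatly the model's probabilities are sampled. This is illustrative only; real systems apply it to scores over tens of thousands of tokens, and the words and numbers here are invented.

```python
import math
import random

# Hypothetical raw model scores (logits) for four candidate next words.
logits = {"rock": 2.0, "cheese": 1.0, "stars": 0.5, "dreams": 0.1}

def sample(logits, temperature):
    # Divide scores by temperature, then softmax into probabilities.
    # Low temperature: sharper, more repeatable output.
    # High temperature: flatter, more surprising output.
    scaled = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(scaled.values())
    probs = {w: v / total for w, v in scaled.items()}
    word = random.choices(list(probs), weights=list(probs.values()))[0]
    return word, probs

for t in (0.2, 1.0, 2.0):
    word, probs = sample(logits, t)
    print(f"T={t}: picked {word!r}, P(rock)={probs['rock']:.2f}")
# At T=0.2, P(rock) is nearly 1; at T=2.0, it drops to about 0.41.
```

This is one reason the same prompt yields different answers on different runs: the model is sampling from a distribution, not computing a single deterministic result.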
Built-in Biases in AI
AI uses certain catchphrases and clichés, and you have to work hard to instruct the AI not to use them, because the reinforcement process has resulted in standard and often annoying stock phrases. A lot of that is false congratulation and false apology. It's been trained to pretend it's a human being, which is disastrous, really, because it's not a human being. You can get into arguments with AI, and then you have to give your head a shake and realize: I'm arguing with an algorithm. That's absurd. You get annoyed at the algorithm as though it were a human being. It's just human nature to get annoyed (Mitchell, 2019).
Critique by ChatGPT: Correct. AI models often reproduce common patterns found in their training data, including catchphrases and clichés. A brief mention of bias mitigation techniques, such as adversarial training or data debiasing, would add nuance to this section.
The Unpredictability of AI Responses
You get annoyed because it gives you stuff that is not what you're trying to get. And it's highly variable, highly unpredictable. There may be some consistency in outputs, but not as much as you would like. And again, as I said, apparently it's deliberate. I don't know that it's a good thing, but apparently it's deliberate. I don't quite understand the rationale. But prompting is a black art (Garcia, 2021).
Critique by ChatGPT: This is correct, but it would be beneficial to mention that advancements in prompt engineering aim to reduce this unpredictability, making AI more responsive to specific prompt structures (Garcia, 2021).
The Limits of Prompt Engineering
There are some people who call themselves prompt engineers, and I think that's a little bit of overweening vanity. You can't be a real prompt engineer. You can probably develop a sense of which prompts are going to be more productive, and you get an idea of prompting strategies. But in the end, there's this random component, and you don't always get what you'd like (Garcia, 2021).
Critique by ChatGPT: The critique of prompt engineering is valid, though prompt engineering is becoming an increasingly refined field. It involves systematically crafting prompts to guide AI behavior more predictably. Mentioning this would provide a more balanced view of the field's potential (Garcia, 2021).
The Challenge of Getting Reliable Citations from AI
One of the things I've run into a lot is trying to prepare articles using AI and getting citations to papers, books, and other references that are accurate, true, and relevant. And although current models of ChatGPT, for instance, are better in this regard, they still give you things that don't exist, references that are incorrect. And as for relevance, well, sometimes that's a little dubious, though it does find material that's more or less relevant some of the time. And you have the job of trying to figure out which is which (Shane, 2019).
Critique by ChatGPT: Correct. AI-generated citations are often fabricated unless connected to verified sources or databases. Recent improvements, however, aim to reduce this issue by integrating citation verification tools (Shane, 2019).
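As a practical aside, one way to triage AI-supplied references is to check whether a claimed DOI actually resolves. Here is a minimal sketch against Crossref's public REST API. The DOI in the usage line is a placeholder, this catches only fabricated DOIs (not real papers cited for the wrong claim), and it requires the `requests` package:

```python
import requests

CROSSREF = "https://api.crossref.org/works/"

def doi_exists(doi: str) -> bool:
    """True if Crossref knows this DOI, i.e. the work likely exists."""
    resp = requests.get(CROSSREF + doi, timeout=10)  # 404 for unknown DOIs
    return resp.status_code == 200

def recorded_title(doi: str):
    """Fetch the title on record, to compare against the AI's claim."""
    resp = requests.get(CROSSREF + doi, timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else None

doi = "10.1000/example-doi"  # placeholder, not a real DOI
print(doi_exists(doi), recorded_title(doi))
```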
Summary
In summary, both AI and human beings are prone to errors, though the nature of those errors varies. AI depends heavily on human-produced data for its training, so many of its errors stem from the misinformation and biases embedded in that data. While AI algorithms may introduce additional errors, and the prompting process can create unpredictable outcomes, much of AI's fallibility mirrors the imperfections of human knowledge. The variability in AI outputs and the influence of prompts further complicate the issue. Ultimately, the concept of "garbage in, garbage out" applies equally to both AI and human cognition, suggesting that neither is infallible in its understanding of the world.
References (Generated by ChatGPT 4.0)
Audi, R. (2010). Epistemology: A contemporary introduction to the theory of knowledge. Routledge. https://www.routledge.com/Epistemology-A-Contemporary-Introduction-to-the-Theory-of-Knowledge/Audi/p/book/9780415879231?srsltid=AfmBOopU6r0CUHZ3aWuJGDiLHf7m1XCLhlcTMNt3ghCgfOZmzUbGo2iP
Description: Audi’s work offers a thorough yet accessible introduction to epistemology, covering fundamental topics like skepticism, justification, knowledge, and belief. It critically examines major theories and problems of epistemology, making it ideal for readers new to the field as well as more advanced scholars seeking a refresher.
Garcia, E. (2021). The art of prompt engineering with GPT-3. [Self-published guide]. https://www.amazon.ca/Art-Prompt-Engineering-chatGPT-Hands/dp/1739296710
Description: This guide offers practical, hands-on advice for crafting effective prompts to maximize GPT-3’s output. It explores the nuances of prompt construction, including specific strategies for eliciting useful, accurate, and relevant responses from AI models. It is aimed at both beginners and experienced AI users looking to optimize their interactions with language models.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press. https://mitpress.mit.edu/9780262035613/deep-learning/
Description: One of the most comprehensive books on deep learning, this text covers everything from the foundational principles of machine learning to advanced topics in neural networks, backpropagation, and reinforcement learning. It is widely regarded as a standard reference in the field and is essential for anyone interested in the mathematics and algorithms underlying AI.
Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Farrar, Straus and Giroux. https://www.amazon.ca/Artificial-Intelligence-Guide-Thinking-Humans/dp/0374257833
Description: Mitchell’s book is an accessible yet deeply insightful introduction to AI for general readers. It covers how AI models learn, the limitations of current systems, and the ethical and philosophical questions surrounding AI development. With examples from both history and current research, the book demystifies complex topics like neural networks and machine learning for a broad audience.
Feather, J. (2007). The information society: A study of continuity and change (5th ed.). Facet Publishing. https://www.amazon.ca/Information-Society-Study-Continuity-Change/dp/1856046362
Description: Feather explores the evolution of information societies, focusing on the historical and technological developments that have shaped the way knowledge is stored, transmitted, and lost. The book addresses the challenges of preserving information in a rapidly changing digital world, with a particular emphasis on the role of libraries and archives in mitigating information loss.
Perin, C. (2010). Pyrrhonian skepticism. Oxford University Press. https://philpapers.org/rec/PERTDO-6
Description: Perin provides a detailed examination of Pyrrhonian skepticism, focusing on the ancient tradition’s core tenets of suspending judgment and seeking mental tranquility through uncertainty. This book also explores the relevance of Pyrrhonian skepticism to contemporary philosophical debates, particularly in the field of epistemology.
Shane, J. (2019). You look like a thing and I love you: How AI works and why it’s making the world a weirder place. Voracious. https://www.amazon.ca/You-Look-Like-Thing-Love/dp/0316525243
Description: In this witty and engaging exploration of AI, Shane uses humor and real-world examples to explain how artificial intelligence works, the quirks of machine learning, and why AI sometimes generates bizarre outputs. The book makes complex concepts accessible and offers an entertaining look at the future of AI and its potential impacts on society.
Appendix A - Some Images with Prompts Suggested by ChatGPT 4.0
Note: I imagine with better prompt engineering, and probably a better AI image generator, I could get better results. I use now, for convenience and economy, the image generator provided by Substack.

