Understanding the World: Using LLM AI as a Research Assistant and Ghostwriter
A lot of my current ideas have been massaged by ChatGPT, and a few sections were just spewed from the LLM AI database and algorithm. This seems somewhat apropos to the topic.
Note: Research and ghostwriting by LLM AI, shaped by my fevered brain. No LLM AI were harmed in the production of this essay, although I was tempted. – Ephektikoi
Overview
This text explores the use of Large Language Model (LLM) AI as a research assistant and ghostwriter, drawing parallels with historical practices involving masters, journeymen and apprentices. It looks into the nature of LLM AI, its limitations, and the mysterious workings behind its functionality. The text also examines the role of bias in AI training, the complexities of prompting AI, and the challenges associated with using AI for long-form output. Additionally, it highlights the capabilities and limitations of LLM AI in processing information, grouping, summarizing, and citation generation, as well as the repetitive and formulaic nature of its responses.
Historical Context: Research Assistants and Ghostwriters
Popular authors and book factories with research assistants, ghostwriters, and co-authors are a feature of modern times. However, the pattern goes back at least to the Middle Ages, when masters, journeymen, and apprentices worked together in many European arts and crafts. The masters are not believed to have done all of the work, but they were often given the credit. They supervised and, in all likelihood, claimed credit for successes while denying responsibility for failures, at least some of them. There may be some parallels with research assistants and ghostwriters.
The Nature of LLM AI Consciousness
LLM AI is not conscious, has no qualia (as the philosophers say), no sensation, emotion, or thoughts, does not understand meanings, and does not reason, as far as we know. Yet we assume the opposite of our fellow humans, and we cannot prove it there either. We do not understand the hard problem of consciousness for people, for animals, or, by extension, for LLM AI. It may seem unlikely that these systems experience qualia, but the question cannot be settled either way. Neurologists can talk about brain processing, neurons, biochemistry, bio-electricity, brain regions, and brain activity, but they have no direct access to the consciousness of others.
The Mysteries of LLM AI Functionality
LLM AI's real workings are pretty much a mystery to all, even the creators. They can describe the hardware, the algorithms, the input data, the training or reinforcement process (as it is called), the mathematics, and the nature of the database with statistical weightings for word associations. They can describe how these weights are given a pseudo-random character. Yet all of this produces coherent output that is typically excellent grammatically and often enough correct (with respect to the input database and the prompt). AI experts seem to be at a loss to give a simple explanation of how it works, or why it should work at all. That it does work is quite counterintuitive.
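The "statistical weightings for word associations" mentioned above can be caricatured, at a vastly reduced scale, with a toy bigram model. This sketch is purely illustrative: the corpus, the counting scheme, and the `next_word` function are invented for the example, and real LLMs learn billions of parameters through training rather than tallying raw co-occurrence counts.

```python
import random
from collections import defaultdict

# Toy caricature of "statistical weightings for word associations":
# count which word follows which, then sample proportionally to the counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word, rng=random.Random(0)):
    """Sample a follower of `word`, weighted by how often it appeared."""
    followers = counts[word]
    words = list(followers)
    weights = [followers[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# In the corpus, "the" is followed by cat (2), mat (1), fish (1)
print(sorted(counts["the"].items()))  # → [('cat', 2), ('fish', 1), ('mat', 1)]
```

The pseudo-random character the creators describe shows up here too: the sampling step means the same prompt word can yield different continuations from run to run.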
Imperfection in AI Training Data and Bias
The input data, selected according to availability, the understanding of the curators (distinct from their biases), the biases of the curators, and corporate imperatives, contain a mix of truth and falsehood, as might be seen from the perspective of an omniscient god. Some of the material is lies, propaganda, and disinformation (and liars usually know when they are lying). Some of it is simply wrong, and much is contradictory, yet maintained by its originators to be correct. This is inevitable and cannot be remedied in any real sense unless we too become omniscient gods. Curators try their best, but at the heart lies an insoluble problem: curators are people, people are often wrong, and no procedure can obviate that. They may believe they are right, but it does not follow that they are.
The Process and Mysteries of AI Training and Reinforcement
Training or reinforcement is done by humans and by machines, as I understand it. The training, called reinforcement by analogy with biological reinforcement learning, causes weights to be adjusted. Eventually, though the process remains a deep mystery, the LLM AI can be shaped to give coherent output, with information drawn from the database it was populated with. The exact hardware or software is irrelevant to this conceptual explanation. Sometimes the information is correct with respect to the data in the database; sometimes it is confabulated. Just why such hallucination happens is presumably as deep a mystery as why seemingly correct and coherent output arises. This output is always in response to prompts. The LLM AI does not cogitate or reflect when there is no prompt, as far as we know, although algorithms could presumably be designed to do housekeeping tasks between prompts.
The Complexity of Prompting LLM AI
Prompting is an arcane art. Some have the hubris to call crafting prompts "prompt engineering." Such vanity. OK, you can probably craft a prompt that is more likely to give you what you are seeking, but there is a strong random component and great sensitivity to how the prompt is phrased. Despite that, you can prompt so that the style of language is different from run to run, the reading level is different, and the content is different. You can bias the LLM AI to give an uncritical take on a topic or a skeptical take on the same topic. You can ask LLM AI to critique its own recent production.
You will find certain topics are marginalized in favour of conventional views, and some are completely off-limits. The latter is a feature of the training done in the name of safety, but it is just as likely to reflect the biased perspectives of the AI trainers and their masters. The results tend to be ethnocentric and to adhere to the political opinions of the nation that produced the LLM AI; a Chinese LLM AI is going to respond differently than an American one. With clever, though not always successful, prompting, you can sometimes get around the dominant tendency of the LLM AI to spew a conventional narrative.
Limitations in Long-Form AI Input and Output
The AI has trouble remembering earlier parts of a chat, and there are limits on the amount of input text. When a session has multiple outputs, they can be inconsistent and poorly unified from section to section, with considerable redundancy from one output to the next. It is hard to get a consistent presentation style across output sections. This makes it very difficult to produce a long document that avoids needless repetition while maintaining uniform typographical style, uniform tone, and overall unity and coherence. The LLM AI may ignore key points in your prompt, even when explicitly asked to keep them, and with a long prompt containing many items to be discussed, it is hard to trace which prompt items made it into the output. As LLM AI is increasingly used to craft long documents, this becomes a real issue.
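The forgetting described above stems from the model's fixed context window: only the most recent text fits, and older turns simply fall off. A toy truncation routine can sketch the effect. This is an illustration only: the one-word-per-token accounting, the `truncate_history` function, and the tiny budget are invented for the example; real systems use proper tokenizers and far larger windows.

```python
# Crude stand-in for context-window truncation: keep the newest
# messages whose combined word count fits a fixed budget, and
# silently drop everything older.
def truncate_history(messages, budget=20):
    kept, used = [], 0
    for msg in reversed(messages):          # newest first
        cost = len(msg.split())             # pretend 1 word = 1 token
        if used + cost > budget:
            break                           # everything older is forgotten
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

chat = [
    "please summarize the history of printing",
    "printing began with woodblock techniques in East Asia",
    "now compare that with the telegraph",
    "the telegraph enabled near-instant long-distance messages",
    "finally relate both to LLM AI",
]
# With a budget of 15 words, only the last two messages survive;
# the model would answer as if the printing discussion never happened.
print(truncate_history(chat, budget=15))
```

This is why a long chat drifts and repeats itself: each response is conditioned only on whatever slice of the conversation still fits.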
Capabilities and Limitations of LLM AI in Information Processing
LLM AI can do many amazing things—language translation, database retrieval, the list goes on and on, and ChatGPT can give a list. That is another thing: when prompted to list items of relevance to some topic, the AI will give a few random points, then stop. It may give more, with some redundancy, when prompted to give more. If you ask for a certain number of points, it may well do that.
20 Wonderful Things LLM AI Can Do
1. Language Translation: LLM AI can translate text between multiple languages with a high degree of accuracy, making cross-cultural communication more accessible.
2. Text Summarization: AI can condense large volumes of text into concise summaries, allowing users to quickly grasp the main points of lengthy documents.
3. Sentiment Analysis: LLM AI can analyze text to determine the underlying sentiment, whether positive, negative, or neutral, which is useful for understanding public opinion or customer feedback.
4. Creative Writing Assistance: AI can assist in generating creative content, including stories, poems, and scripts, offering inspiration or even co-authorship for writers.
5. Code Generation: LLM AI can write and debug code in various programming languages, assisting developers in creating software more efficiently.
6. Personalized Recommendations: AI can provide tailored suggestions for books, movies, music, and other content based on user preferences and past behavior.
7. Automated Customer Support: LLM AI can handle customer inquiries and provide instant responses, improving customer service efficiency and satisfaction.
8. Data Analysis and Visualization: AI can process large datasets and generate visualizations, making complex data easier to understand and analyze.
9. Content Curation: AI can curate relevant content from the web, providing users with a filtered selection of articles, videos, and other media on specific topics.
10. Virtual Tutoring: LLM AI can offer personalized educational support, tutoring students in various subjects and adapting to their learning pace.
11. Speech Recognition and Synthesis: AI can convert spoken language into text and vice versa, enabling voice-activated assistants and accessibility tools for those with disabilities.
12. Idea Generation: AI can brainstorm and generate new ideas for projects, businesses, or creative endeavors, acting as a valuable tool in the ideation process.
13. Medical Information Retrieval: AI can retrieve and summarize medical information, assisting healthcare professionals in keeping up with the latest research and best practices.
14. Automated Scheduling: LLM AI can manage calendars, set reminders, and coordinate meetings, streamlining personal and professional time management.
15. Legal Document Drafting: AI can draft legal documents such as contracts, wills, and agreements, ensuring that they are thorough and compliant with legal standards.
16. Resume and Cover Letter Writing: AI can help create professional resumes and cover letters tailored to specific job applications, improving job seekers' chances of success.
17. Enhanced Search Engine Queries: AI can refine and improve search queries, providing more accurate and relevant results for complex or ambiguous questions.
18. Personal Finance Management: LLM AI can assist in budgeting, tracking expenses, and providing financial advice based on user goals and spending patterns.
19. Interactive Storytelling: AI can create interactive stories where users make choices that influence the narrative, offering a new form of entertainment and engagement.
20. Accessibility Features: LLM AI can develop tools and services that enhance accessibility for people with disabilities, such as text-to-speech and voice-controlled interfaces.
Propriety, Ethics, and Legality of Using LLM AI as a Research Assistant and Ghostwriter
When employing LLM AI as a research assistant or ghostwriter, several considerations regarding propriety, ethics, legality, quality, and accuracy come into play. These concerns are crucial for understanding the implications of integrating AI into creative and research processes.
Propriety
The use of LLM AI in research and writing raises questions about propriety, particularly concerning the extent to which AI can or should be involved in producing academic or creative content. While AI can significantly assist in drafting and editing, the line between using AI as a tool and misrepresenting its role as authorship must be clearly defined. It is important to acknowledge the AI's contribution transparently, especially in academic or professional contexts, to avoid misleading readers or consumers about the source of the work.
Ethics
Ethical concerns revolve around the authenticity and originality of AI-generated content. Using AI to produce work that is presented as entirely human-authored can be misleading. This is particularly significant in academic settings where originality is a core value. The ethical use of AI involves clearly disclosing its role in the creation process and ensuring that its use does not undermine academic integrity or the creative authenticity of the work. Additionally, the potential for AI to reinforce biases present in its training data raises ethical concerns about perpetuating misinformation or stereotypes.
Legality
Legally, the use of LLM AI must navigate issues related to intellectual property and copyright. AI-generated content can blur the lines of authorship, raising questions about who owns the rights to the work produced. When using AI as a ghostwriter or research assistant, it is essential to address copyright and intellectual property issues by establishing clear agreements on the ownership and use of AI-generated material. Furthermore, ensuring that AI-generated content does not infringe on existing copyrights is crucial to avoid legal disputes.
Quality
The quality of AI-generated content can vary significantly. While LLM AI can produce high-quality, coherent text, it is not infallible. The content might lack depth, nuance, or context that a human writer or researcher could provide. Regular oversight and review of AI-generated material are necessary to ensure it meets the required standards of quality. For research purposes, ensuring the accuracy and reliability of information presented by AI is critical to maintain the credibility of the research output.
Accuracy
Accuracy is a major concern when using AI in research and writing. While LLM AI can generate text that appears accurate, it relies on patterns in the training data rather than understanding the content deeply. This can lead to factual inaccuracies, outdated information, or the propagation of errors present in the training data. It is essential for users to critically evaluate and verify the accuracy of AI-generated content, particularly when used for academic, professional, or published works.
Use of Research Assistants and Ghostwriters by Prominent Authors
In the world of publishing, prominent authors often employ research assistants and ghostwriters to manage the substantial demands of producing high-quality, prolific content. Research assistants help gather information, organize data, and provide background research, enabling authors to focus on writing. Ghostwriters, on the other hand, are hired to write content on behalf of the author, who is credited as the sole creator.
Big-name authors, who often work as part of a "book factory" model, leverage these professionals to maintain their output levels while reaping the rewards of high book sales and royalties. The use of ghostwriters allows these authors to publish multiple works simultaneously or at a rapid pace, enhancing their market presence and profitability. Research assistants and ghostwriters are typically paid for their services, while the credited author benefits from the financial success and recognition of the work.
This model highlights a similar dynamic with LLM AI, where the AI acts as a tool or collaborator in the creation process, but the ultimate credit and financial benefits accrue to the primary author or user. Just as with human research assistants and ghostwriters, the ethical and legal implications of using AI in this capacity must be carefully managed to ensure transparency and fairness.
Takeaway
LLM AI presents a powerful and versatile tool for research and writing, offering capabilities that range from creative assistance to detailed data analysis. However, its use raises important issues of propriety, ethics, legality, and quality. Just as prominent authors utilize research assistants and ghostwriters to manage their output and enhance their success, LLM AI can serve as a valuable asset in these roles. Nevertheless, users must navigate the challenges of accuracy, bias, and transparency to ensure that AI's contributions are appropriately managed and that the final output meets the highest standards of integrity and reliability. As the technology evolves, ongoing critical evaluation and adaptation will be crucial to harness its full potential while addressing its limitations.
I know less now after reading the headline. Tech makes my skull ache.