Understanding: The Ethics of LLM-AI in Content Creation
Navigating the Boundaries of Authorship, Knowledge, and Technological Control
Note: ChatGPT has been used as a research assistant and ghostwriter in producing this essay. There may be a certain ironic self-referential aspect to that.
Author's Preface:
Over the past decade, the world has witnessed a dramatic shift in how content is created. With the advent of LLM-AI technologies, such as OpenAI’s GPT, people now produce text, images, computer code, music, and even video with remarkable ease. This shift has sparked widespread debate, particularly within academic and creative communities, about the legitimacy of AI-generated work. At the core of this debate is a fundamental distinction between the process of creating a work and the quality of the final product, a distinction that is often overlooked.
As the world changes, so too must our frameworks for evaluating knowledge creation. AI is here to stay, and the academic community, in particular, must reevaluate its ethical standards to keep pace with technological advancements. Whether students learn more or less with the aid of AI is a matter for empirical study, not moral panic. In this essay, I explore the ethics of LLM-AI use in content creation across various fields, assessing both the opportunities and challenges it presents.
Given how fast this field is changing, references can become outdated within weeks of publication. Not all of the references gathered here are of high quality, but they are numerous, as a simple internet search will reveal.
I spend a lot of time with LLM-AI tools for research, writing, image generation, and music. Even after a fair bit of time with music AI, I have only scratched the surface; there are so many tools now, some of which work well and some not so well. I suspect there are better AI tools for writing and research than ChatGPT, but I have not yet explored their capabilities.
I don’t code anymore, so I have spent little time investigating that side of things, but I advise people to think carefully before choosing computer programming as a career. I would not even recommend associated specialties such as requirements engineering, business analysis, or data administration as potential occupations. I am not sure how it will all shake out, but I see major, major disruptions coming in the field of information systems (my bread and butter for decades).
Predictions, being a dime a dozen, can be made for other career paths as well. At the moment they amount to speculation, but scary speculation.
Introduction:
The rapid development and deployment of LLM-AI (Large Language Model Artificial Intelligence) have revolutionized how humans engage with content creation. From writing essays to generating digital art and music, LLM-AI's capacity to automate creative processes has sparked a wide-ranging debate on its ethical implications. Particularly within academic, artistic, and professional communities, concerns have been raised regarding authorship, intellectual integrity, and the potential misuse of AI-generated works.
In this essay, I examine the ethical and practical considerations of using LLM-AI for producing documents, focusing on two core questions: What is the relationship between the process of creation and the quality of the final product? And how does AI reshape our understanding of authorship and originality? Drawing on extensive academic literature, I argue that while LLM-AI presents new challenges, it also offers opportunities for enhancing knowledge and creativity—provided we approach it with open-mindedness and critical thought.
Discussion:
1. The Distinction Between Process and Product
One of the key ethical debates around AI-generated content is the distinction between the process of creating a work and the evaluation of the final product. Traditionally, authorship has been defined by the labor and thought process involved in producing a text. With LLM-AI, this process changes dramatically—yet the product, the finished piece, is still open to evaluation based on its quality, factual accuracy, and creativity.
This outcome-oriented view has its critics: Bender et al. (2021) caution that the process behind a model, including its training data, scale, and embedded biases, carries ethical weight of its own. Even so, much AI discourse increasingly emphasizes the outcome over the process. In creative industries such as marketing and design, AI-generated content is evaluated on its ability to engage audiences, not on how it was created (Ziakis & Vlachopoulou, 2023). This shift toward product evaluation suggests that our understanding of creativity and authorship is evolving.
2. Accepted Facts vs. Actual Truths
A central challenge posed by LLM-AI is its reliance on vast training datasets, which inevitably contain errors, contradictions, and biases. While AI can generate seemingly accurate citations and factual statements, these are often drawn from sources that reflect the biases and inaccuracies embedded in the data, and LLMs are known to fabricate plausible-looking citations outright. I maintain that accepted facts are not always the same as actual truths, and AI’s output must be critically assessed for both accuracy and relevance.
This issue is especially pronounced in academic work, where accuracy and the integrity of sources are paramount. Noble (2018) emphasizes that AI-generated content can perpetuate harmful biases, and the onus is on the human author to verify the factuality of AI-produced content.
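Some of this verification can even be partly mechanized. Below is a minimal sketch, assuming Python 3, the third-party requests package, and the public CrossRef REST API (api.crossref.org), of how an author might spot-check a DOI that an LLM has supplied; the helper name check_doi is my own illustrative choice, not an established tool.

    # Minimal sketch: spot-check a DOI an LLM has cited, via the public
    # CrossRef REST API. Assumes Python 3 and the 'requests' package.
    import requests

    def check_doi(doi: str) -> None:
        """Print the registered title for a DOI, or flag it as unresolved."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code == 200:
            titles = resp.json()["message"].get("title") or ["<no title on record>"]
            print(f"{doi} resolves to: {titles[0]}")
        else:
            # A DOI that does not resolve is a strong hint the citation is bogus.
            print(f"{doi} did not resolve (HTTP {resp.status_code})")

    check_doi("10.1145/3442188.3445922")  # Bender et al. (2021), cited in this essay

A check like this confirms only that a cited work exists; whether it actually supports the claim attributed to it still requires human reading.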
3. The Changing Role of Authorship
LLM-AI challenges traditional notions of authorship by automating the creation process, raising questions about intellectual ownership; in academic contexts, it also raises concerns about plagiarism and academic integrity. Kumar and Kamel Boulos (2023) explore the impact of large language models (LLMs), such as ChatGPT, on the educational landscape as collaboration between humans and machines becomes more common.
I maintain that academics must rethink their standards of authorship, attribution, and assessment to accommodate the growing role of AI. This involves not only acknowledging AI’s contributions but also developing frameworks for responsible and transparent use.
4. AI as a Tool of Control
Beyond the academic and creative implications, AI presents risks as a mechanism of control. AI’s potential for censorship, speech suppression, and slanting of views is a growing concern. Noble (2018) explores how biases in AI systems can disproportionately harm marginalized communities, raising ethical concerns about how AI technologies are developed and deployed. O’Neil (2016) also discusses how algorithms can reinforce inequalities, illustrating the risks of unchecked AI use.
Bostrom and Yudkowsky (2014) highlight the potential for AI to be co-opted by powerful institutions for control and manipulation. I maintain that while AI’s role in scholarship may be a pseudo-debate, its societal implications are real and must be addressed.
5. Empirical Studies on AI and Learning
A final area of concern is the impact of LLM-AI on education and learning outcomes. Rudolph (2023) discusses the role of chatbots in teaching, learning, and assessment in higher education. The field is evolving so rapidly, however, that a 2023 study is already old news.
6. Current and Future Uses of AI Across Various Fields and Media
The advancements in AI, particularly LLM-AI, have extended far beyond text generation, impacting fields such as graphics, video, music, and coding. These systems continue to improve, enabling creative professionals and technical experts to work more efficiently and produce higher-quality results. In the realm of graphics, AI tools like DALL-E and MidJourney are revolutionizing the way visual content is created. These tools can generate highly detailed and creative images from simple text prompts, with applications in digital marketing, entertainment, and the fine arts (Ramesh et al., 2021).
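To make the workflow concrete: the sketch below shows how such a tool can be driven programmatically. It assumes OpenAI’s Python SDK (the openai package, v1 or later) with an API key in the OPENAI_API_KEY environment variable; the model name and parameters are illustrative choices, not a recommendation.

    # Minimal sketch: text prompt in, image out, via OpenAI's Python SDK.
    # Assumes the 'openai' package (v1+) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically
    result = client.images.generate(
        model="dall-e-3",  # illustrative model choice
        prompt="A watercolor of a lighthouse at dusk, muted palette",
        size="1024x1024",
        n=1,
    )
    print(result.data[0].url)  # temporary URL of the generated image

The point of the sketch is the shape of the interaction: a short natural-language prompt, a handful of parameters, and a finished image in return.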
Video production is another area experiencing rapid development. AI systems are increasingly being used to generate short video clips, visual effects, and even deepfakes. Although current systems can typically only produce short segments, filmmakers have begun stitching these segments together to create longer, coherent narratives (Lemonlight, 2021). In fact, the technical limitation of creating short takes is not a significant issue given that traditional film production also relies on stitching together multiple short scenes. As AI continues to improve, issues like continuity between segments will likely be resolved, enabling the creation of seamless, long-form content (Nuttall, 2023).
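The stitching itself is mundane from a tooling standpoint. Here is a sketch, assuming Python and the open-source moviepy library (1.x import style), with hypothetical clip file names:

    # Minimal sketch: concatenate short AI-generated clips into one longer
    # video. Uses the open-source 'moviepy' library (1.x); the file names
    # are hypothetical placeholders.
    from moviepy.editor import VideoFileClip, concatenate_videoclips

    segment_files = ["scene_01.mp4", "scene_02.mp4", "scene_03.mp4"]
    clips = [VideoFileClip(name) for name in segment_files]

    # method="compose" pads clips of differing dimensions rather than failing.
    final = concatenate_videoclips(clips, method="compose")
    final.write_videofile("full_narrative.mp4", codec="libx264")

The hard problem, as noted above, is not concatenation but continuity: keeping characters, lighting, and style consistent across segments.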
In music, AI tools like AIVA and OpenAI’s MuseNet can now compose original pieces in a variety of styles, allowing musicians to experiment with new forms of expression and enhancing productivity in the music industry (Briot et al., 2020). These tools offer creative freedom by automating the composition process while still allowing human oversight and refinement.
Similarly, in coding, tools such as GitHub Copilot are transforming software development by suggesting code snippets, helping with debugging, and even helping to build entire applications (Ziegler et al., 2024). AI’s contributions to coding are democratizing the field, enabling non-experts to participate in software creation and making the process faster and more efficient.
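Copilot’s internals are proprietary, but the underlying pattern is easy to illustrate: send the surrounding code as context and ask a model to continue it. The sketch below stands in for that pattern using OpenAI’s chat API, with an illustrative model name; it is not how Copilot itself is implemented.

    # Minimal sketch of the completion pattern behind assistants like
    # Copilot: supply partial code as context, ask the model to continue.
    # Assumes the 'openai' package (v1+) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()
    partial_code = "def median(values: list[float]) -> float:\n    "
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Complete the Python code. Reply with code only."},
            {"role": "user", "content": partial_code},
        ],
    )
    print(response.choices[0].message.content)

Real assistants feed in far more context (open files, project structure, recent edits), but the request-and-response shape is the same.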
Despite these impressive advancements, there are naysayers who argue that AI’s progress will soon stall or that its future is unsustainable. These criticisms are often rooted in a corporate perspective focused on immediate profitability rather than long-term technological potential, and they ignore the broader trend of continual AI improvement (Russell & Norvig, 2021). I maintain that predictions of AI’s demise are premature, even foolish. AI’s progress is driven by increasing computational power, improved algorithms, and the expanding availability of data, ensuring that near-term and mid-term advances will continue to push the boundaries of what AI can achieve (LeCun, 2022; Heikkilä & Heaven, 2022).
7. Societal Risks: Control, Suppression, Propaganda, and Digital Fakery
As AI continues to evolve, the risks associated with its potential use as a tool for control, suppression, propaganda, and deception grow. We are approaching an era where it may become increasingly difficult, if not impossible, to distinguish between real-world events and digitally fabricated content. Already, AI-generated images and videos are reaching unprecedented levels of realism. Tools like DALL-E and MidJourney have produced images of such lifelike quality that discerning them from reality is becoming a significant challenge, particularly when depicting hyper-realistic individuals and environments (Ramesh et al., 2021).
One of the more concerning aspects of AI-generated content is the ability to fabricate realistic, believable people and events. These generated images often depict people with idealized beauty that is beyond the real-world spectrum. This perfection can be exploited to manipulate public perception or promote idealized, unattainable standards, contributing to societal issues such as body image problems and unrealistic expectations (Ryan-Mosley, 2021). Moreover, even the previously challenging task of generating accurate human hands—a notable limitation of earlier AI models—has been largely resolved, further enhancing the realism of AI-generated imagery (Lemonlight, 2021).
The same advancements are occurring in AI-generated video. While current AI systems are generally limited to creating short video snippets, this limitation is mitigated by stitching takes together, much as traditional films are assembled. Short takes, already a staple of film production, are being combined to produce longer narratives, and while challenges remain around continuity between segments, the rapid pace of AI improvement suggests this will soon be a non-issue (Lemonlight, 2021). Once these technical challenges are fully resolved, the ability to fabricate entire, seamless, and believable video narratives will have profound implications for the manipulation of reality (Chesney & Citron, 2019).
These technological advancements present significant societal risks, particularly in the areas of propaganda, control, and disinformation. AI-generated media can be easily used to manipulate public opinion by creating and disseminating fake events or discrediting individuals through deepfakes (Chesney & Citron, 2019). In societies with tightly controlled information flows, AI could be used to flood the media landscape with fabricated narratives, drowning out legitimate dissent and ensuring that only state-sanctioned or corporate-approved stories are seen (O’Neil, 2016).
As I have maintained, the ability to create hyper-realistic digital content that is indistinguishable from reality poses a serious challenge to truth and trust in media. Whether in journalism, social media, or political campaigns, the potential for AI to be weaponized as a tool for control, censorship, and disinformation cannot be overlooked.
References:
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://dl.acm.org/doi/10.1145/3442188.3445922
Lead author: Emily M. Bender is a professor of linguistics at the University of Washington, specializing in computational linguistics and ethical AI.
About the reading: This paper examines the ethical concerns surrounding large language models, particularly issues of scale, bias, and misinformation.
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316–334). Cambridge University Press.
Lead author: Nick Bostrom is a philosopher and founding director of the University of Oxford’s Future of Humanity Institute, known for his work on existential risk and superintelligence.
About the reading: This chapter surveys the ethical issues raised by increasingly capable AI systems, including the difficulty of ensuring that powerful systems behave as intended and the risks of their misuse by powerful actors.
Briot, J. P., Hadjeres, G., & Pachet, F. D. (2020). Deep learning techniques for music generation. Springer. https://link.springer.com/book/10.1007/978-3-319-70163-9
Lead author: Jean-Pierre Briot is a researcher in computer science with a focus on music generation using AI.
About the reading: This book provides an in-depth exploration of the use of deep learning techniques in music composition and production.
Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war: The coming age of post-truth geopolitics. Foreign Affairs, 98(1), 147–155. https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war
Lead author: Robert Chesney is a professor of law at the University of Texas, specializing in national security law and disinformation.
About the reading: This article examines the potential for deepfakes to disrupt political stability and the growing threats posed by AI-driven disinformation.
Fergusson, G., Fitzgerald, C., Frascella, C., Iorio, M., McBrien, T., Schroeder, C., Winters, B., & Zhou, E. (2023). Generative AI white paper (Version 1). EPIC. https://epic.org/wp-content/uploads/2023/05/EPIC-Generative-AI-White-Paper-May2023.pdf
Lead authors: Grant Fergusson, Caitriona Fitzgerald, Chris Frascella, Megan Iorio, Tom McBrien, Calli Schroeder, Ben Winters, and Enid Zhou are contributors to this paper from the Electronic Privacy Information Center (EPIC), focusing on privacy, digital rights, and the regulation of generative AI.
About the reading: This white paper provides an analysis of the harms associated with generative AI, drawing on taxonomies from leading scholars like Danielle Citron, Daniel Solove, and Joy Buolamwini. It covers potential risks, documented harms, and regulatory interventions related to the privacy, discrimination, and societal impacts of generative AI as of May 2023.
Heikkilä, M., & Heaven, W. D. (2022, June 24). Yann LeCun has a bold new vision for the future of AI. MIT Technology Review. https://www.technologyreview.com/2022/06/24/1054817/yann-lecun-bold-new-vision-future-ai-deep-learning-meta/
Lead authors: Melissa Heikkilä is a senior reporter at MIT Technology Review, focusing on AI and data privacy. Will Douglas Heaven is the senior editor for AI at MIT Technology Review.
About the reading: This article explores Yann LeCun’s new vision for AI, where he outlines a fresh approach to developing artificial intelligence, combining old ideas and deep learning techniques. The piece highlights LeCun’s perspective while also addressing the questions and challenges his ideas present for the future of AI.
Kumar, S., & Kamel Boulos, M. N. (2023). AI in education: Transforming teaching and learning with large language models. Sustainability, 15(17), 12983. https://doi.org/10.3390/su151712983
Lead authors: Sanjay Kumar is a professor specializing in educational technology with a focus on AI's role in learning and teaching environments. Maged N. Kamel Boulos is a professor with expertise in health informatics and the use of digital technologies in various sectors, including education.
About the reading: This article explores the impact of large language models (LLMs), like ChatGPT, on the educational landscape. It examines how LLMs are transforming both teaching and learning by offering personalized learning pathways, automating certain educational tasks, and providing new opportunities for student engagement. The article also addresses ethical considerations, such as data privacy and the role of educators in this AI-driven transformation.
LeCun, Y. (2022). The future of AI: Three major challenges of artificial intelligence. The Decoder. https://the-decoder.com/metas-ai-chief-three-major-challenges-of-artificial-intelligence/
Lead author: Yann LeCun is the Chief AI Scientist at Meta and a pioneering researcher in machine learning, known for his work on neural networks and deep learning.
About the reading: This article outlines LeCun's view on the three biggest challenges that artificial intelligence faces in the near future, which include addressing data limitations, improving generalization, and reducing energy consumption in AI systems.
Lemonlight. (2021). How AI is changing the video production industry. Lemonlight. https://www.lemonlight.com/blog/how-ai-is-changing-the-video-production-industry/
Lead author: Lemonlight is a video production company that specializes in on-demand video content for businesses, using the latest technologies, including AI.
About the reading: This article explores the impact of AI on the video production industry, highlighting how AI is revolutionizing tasks such as editing, scriptwriting, and post-production, making video production faster, more efficient, and accessible.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://psycnet.apa.org/record/2018-08016-000
Lead author: Safiya Umoja Noble is an associate professor of information studies at UCLA, focusing on algorithmic discrimination and bias in technology.
About the reading: Noble’s book explores how search engines and other AI systems perpetuate systemic biases, particularly against marginalized communities.
Nuttall, G. (2023, October 30). How to use LLMs to generate coherent long-form content using hierarchical expansion. OpenCredo. https://www.opencredo.com/blogs/how-to-use-llms-to-generate-coherent-long-form-content-using-hierarchical-expansion
Lead author: Greg Nuttall is a Senior Data Engineer at OpenCredo, specializing in AI, large language models, and data science applications in content generation.
About the reading: This article explains the use of large language models (LLMs) for generating coherent long-form content. It introduces hierarchical expansion as a method to break down and structure complex topics, allowing LLMs to produce more coherent and structured long-form text. The post explores its applications in content marketing, technical writing, and automation of complex reports.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group. https://www.amazon.ca/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815
Lead author: Cathy O'Neil is a data scientist and author who specializes in the ethical implications of big data and algorithms.
About the reading: O'Neil's book critiques the role of algorithms in modern society, arguing that they often exacerbate social inequalities and pose threats to democratic processes.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., & Sutskever, I. (2021). Zero-shot text-to-image generation. Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 8821–8831. https://proceedings.mlr.press/v139/ramesh21a.html
Lead author: Aditya Ramesh is a research scientist at OpenAI, focusing on the development of AI models for text-to-image generation.
About the reading: This paper discusses the technical aspects of DALL-E, an AI model that generates images from textual descriptions, and its broader implications for creativity.
Rudolph, J. (2023). War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. Journal of Applied Learning & Teaching, 6(1). http://journals.sfu.ca/jalt/index.php/jalt/index
Lead author: Jürgen Rudolph is the Director of Research at Kaplan Singapore, specializing in educational research with a focus on digital transformation in higher education.
About the reading: This article explores the rapid development of various AI chatbots, including ChatGPT, Bing Chat, Bard, and Ernie, and their implications for higher education. Rudolph compares these chatbots, assesses their performance in educational contexts, and discusses their impact on teaching, learning, and assessment within universities. The article concludes with recommendations for educators, students, and institutions on how to adapt to the growing use of AI in education.
Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson. https://people.engr.tamu.edu/guni/csce421/files/AI_Russell_Norvig.pdf
Lead author: Stuart Russell is a professor of computer science at the University of California, Berkeley, known for his work in AI and machine learning.
About the reading: This textbook provides a comprehensive overview of AI, covering key concepts, algorithms, and the ethical implications of the technology.
Ryan-Mosley, T. (2021). Artificial intelligence: Beauty filters are changing the way young girls see themselves. MIT Technology Review. https://www.technologyreview.com/2021/04/02/1021635/beauty-filters-young-girls-augmented-reality-social-media/
Lead author: Tate Ryan-Mosley is a technology policy journalist at MIT Technology Review, focusing on the intersection of AI, ethics, and society.
About the reading: This article discusses the widespread use of AI-driven beauty filters on social media platforms and explores the psychological and societal impacts these filters have on young girls and women, particularly regarding self-esteem and body image.
Ziakis, C., & Vlachopoulou, M. (2023). Artificial intelligence in digital marketing: Insights from a comprehensive review. Information, 14(12), 664. https://doi.org/10.3390/info14120664
Lead authors: Christos Ziakis is a researcher in economics at the International Hellenic University, and Maro Vlachopoulou is a professor at the University of Macedonia specializing in information systems and e-business.
About the reading: This article provides a comprehensive review of AI's applications in digital marketing, using a systematic literature review and bibliometric analysis. It identifies key areas where AI is transforming digital marketing strategies, including machine learning algorithms, social media, consumer behavior, and e-commerce. The article also offers future research directions for both academics and practitioners.
Ziegler, A., Kalliamvakou, E., Li, X. A., Rice, A., Rifkin, D., Simister, S., Sittampalam, G., & Aftandilian, E. (2024). Measuring GitHub Copilot's impact on productivity. Communications of the ACM, 67(3). https://cacm.acm.org/research/measuring-github-copilots-impact-on-productivity/
Lead author: Albert Ziegler is a machine learning researcher at GitHub who works on AI-assisted software development.
About the reading: This article reports on measurements of GitHub Copilot's effect on developer productivity, drawing on usage data and developer surveys, and discusses the implications for AI-assisted programming.